
Python Example: Scraping 今日头条 (Toutiao) Street-Photography Galleries by Analyzing Ajax Requests

Date: 2021-03-19 00:14     Source/Author: wanlifeipeng

This article walks through how to scrape 今日头条 (Toutiao) street-photography galleries with Python by analyzing the Ajax requests the site makes. It is shared here for reference; the details are as follows:


Code:

import os
import re
import json
import time
from hashlib import md5
from multiprocessing import Pool
import requests
from requests.exceptions import RequestException
from pymongo import MongoClient
# configuration
OFFSET_START = 0  # offset of the first results page to crawl
OFFSET_END = 20  # offset of the last results page to crawl
KEYWORD = '街拍'  # search keyword ("street photography")
# MongoDB settings
MONGO_URL = 'localhost'
MONGO_DB = 'toutiao'  # database name
MONGO_TABLE = 'jiepai' # collection name
# folder that downloaded images are saved into
IMAGE_PATH = 'images'
headers = {
  "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
}
client = MongoClient(host=MONGO_URL)
db = client[MONGO_DB]
jiepai_table = db[MONGO_TABLE]
if not os.path.exists(IMAGE_PATH):
  os.mkdir(IMAGE_PATH)
def get_html(url, params=None):
  try:
    response = requests.get(url, params=params, headers=headers)
    if response.status_code == 200:
      return response.text
    return None
  except RequestException as e:
    print("请求%s失败: " % url, e)
    return None
# fetch the index (search results) page
def get_index_page(offset, keyword):
  basic_url = 'http://www.toutiao.com/search_content/'
  params = {
    'offset': offset,
    'format': 'json',
    'keyword': keyword,
    'autoload': 'true',
    'count': 20,
    'cur_tab': 3
  }
  return get_html(basic_url, params)
def parse_index_page(html):
  '''
  Parse the index page.
  Yields: every detail-page url contained in the index page.
  '''
  if not html:
    return
  data = json.loads(html)
  if 'data' in data:
    for item in data['data']:
      article_url = item['article_url']
      if 'toutiao.com/group' in article_url:
        yield article_url
# fetch a detail page
def get_detail_page(url):
  return get_html(url)
# parse a detail page
def parse_detail_page(url, html):
  '''
    Parse a detail page.
    Returns the corresponding title, url and the urls of the images it contains.
  '''
  title_reg = re.compile('<title>(.*?)</title>')
  title_match = title_reg.search(html)
  title = title_match.group(1) if title_match else ''
  gallery_reg = re.compile('var gallery = (.*?);')
  gallery = gallery_reg.search(html)
  if gallery and 'sub_images' in gallery.group(1):
    images = json.loads(gallery.group(1))['sub_images']
    image_list = [image['url'] for image in images]
    return {
      'title': title,
      'url': url,
      'images': image_list
    }
  return None
def save_to_mongodb(content):
  # insert_one is the non-deprecated pymongo API
  jiepai_table.insert_one(content)
  print("Saved to MongoDB:", content)
def download_images(image_list):
  for image_url in image_list:
    try:
      response = requests.get(image_url)
      if response.status_code == 200:
        save_image(response.content)
    except RequestException as e:
      print("下载图片失败: ", e)
def save_image(content):
  '''
    Hash the binary content of the image and use the digest as the file name,
    so that identical images are saved only once.
  '''
  file_path = '{0}/{1}/{2}.{3}'.format(os.getcwd(),
                     IMAGE_PATH, md5(content).hexdigest(), 'jpg')
  # skip images that have already been saved
  if not os.path.exists(file_path):
    with open(file_path, 'wb') as f:
      f.write(content)
def jiepai(offset):
  html = get_index_page(offset, KEYWORD)
  if html is None:
    return
  page_urls = list(parse_index_page(html))
  # print("详情页url列表:" )
  # for page_url in page_urls:
  #   print(page_url)
  for page in page_urls:
    print('get detail page:', page)
    html = get_detail_page(page)
    if html is None:
      continue
    content = parse_detail_page(page, html)
    if content:
      save_to_mongodb(content)
      download_images(content['images'])
      time.sleep(1)
  print('-------------------------------------')
if __name__ == '__main__':
  offset_list = range(OFFSET_START, OFFSET_END)
  pool = Pool()
  pool.map(jiepai, offset_list)
  pool.close()
  pool.join()
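
Once the script has run, you can check what ended up in MongoDB. The following is a minimal sketch, reusing the connection settings above and the title/url/images fields built in parse_detail_page; the query itself is illustrative and not part of the original script:

from pymongo import MongoClient

client = MongoClient(host='localhost')
jiepai_table = client['toutiao']['jiepai']

# print each stored gallery's title, how many image urls it holds, and its source page
for doc in jiepai_table.find():
  print(doc['title'], len(doc['images']), doc['url'])
print('galleries stored:', jiepai_table.count_documents({}))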

Note:

In fact, the JSON data returned by the search URL already contains the image list:

import requests
# the url-encoded keyword %E8%A1%97%E6%8B%8D is '街拍'
basic_url = 'http://www.toutiao.com/search_content/?offset={}&format=json&keyword=%E8%A1%97%E6%8B%8D&autoload=true&count=20&cur_tab=3'
url = basic_url.format(0)
data = requests.get(url).json()
items = data['data']
for item in items:
  title = item['media_name']
  image_list = [image_detail['url'] for image_detail in item['image_detail']]
  print(title, image_list)
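
If you also want to save these images to disk, the short sketch below combines that JSON response with the same md5-based deduplication idea as save_image in the main script. The images folder, the .jpg extension and the download helper are assumptions carried over for illustration, not part of the original article:

import os
import requests
from hashlib import md5

IMAGE_PATH = 'images'
if not os.path.exists(IMAGE_PATH):
  os.mkdir(IMAGE_PATH)

# first results page for the keyword 街拍, same endpoint as above
url = 'http://www.toutiao.com/search_content/?offset=0&format=json&keyword=%E8%A1%97%E6%8B%8D&autoload=true&count=20&cur_tab=3'

def download(image_url):
  # name the file after the md5 of its binary content so the same picture
  # is written to disk only once
  response = requests.get(image_url)
  if response.status_code == 200:
    file_path = '{0}/{1}.jpg'.format(IMAGE_PATH, md5(response.content).hexdigest())
    if not os.path.exists(file_path):
      with open(file_path, 'wb') as f:
        f.write(response.content)

for item in requests.get(url).json()['data']:
  for image_detail in item.get('image_detail', []):
    download(image_detail['url'])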

Hopefully this article is of some help to readers doing Python programming.

Original article: http://www.cnblogs.com/hupeng1234/p/7112440.html
