When scraping data from big sites, you will often run into pages whose content is loaded dynamically. If you only call content = urllib2.urlopen(URL).read(), you are unlikely to get the full page, so you need to simulate the browser's page-loading process. Selenium provides convenient methods for this. I am still a beginner and have tried many approaches; below is the one I found most reliable (it has proved to work fine for scraping Sina Weibo topics and the tweets under a topic).
As for what the browser variable below is, see the earlier posts in this series.
First, request the corresponding URL:
```python
right_URL = URL.split("from")[0] + "current_page=" + str(current_page) + \
            "&since_id=" + str(since_id) + "&page=" + str(page_index) + \
            "#Pl_Third_App__" + str(Pl_Third_App)
print right_URL
try:
    browser.get(right_URL)
    print "loading more, sleep 3 seconds ... 0"
    time.sleep(3)  # no strict need for this sleep, but we add it anyway
    browser = selenuim_loading_more(browser, method_index=0)
except:
    print "one exception happen ==> get_tweeter_under_topic 2 ..."
    pass
```
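The URL assembly above is plain string concatenation, so it can be checked on its own. Here is a minimal, self-contained sketch (Python 3 syntax; the parameter values are made-up placeholders, not real Weibo IDs):

```python
# Hypothetical stand-ins for the real topic-page parameters.
URL = "http://weibo.com/p/1008081234567890/super_index?from=page_100808"
current_page = 2
since_id = "4041234567890123"
page_index = 2
Pl_Third_App = 11

# Same construction as in the snippet above: keep everything before "from",
# then append the paging parameters and the fragment anchor.
right_URL = (URL.split("from")[0]
             + "current_page=" + str(current_page)
             + "&since_id=" + str(since_id)
             + "&page=" + str(page_index)
             + "#Pl_Third_App__" + str(Pl_Third_App))
print(right_URL)
```

Note that splitting on the literal substring "from" only works if "from" does not occur earlier in the URL; for anything less ad hoc, urlparse would be safer.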
Then simulate the browser to load more content (method_index=0 is recommended; it has proved much more reliable than the others):
```python
import time
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def selenuim_loading_more(browser, method_index=0):
    if method_index == 0:
        browser.implicitly_wait(3)  # shorten the implicit wait so the scroll loop stays fast
        for i in range(1, 4):  # at most 3 times
            print "loading more, window.scrollTo bottom for the", i, "time ..."
            browser.execute_script("window.scrollTo(0,document.body.scrollHeight);")
            try:
                # locate the paging tab at the bottom of the page
                browser.find_element_by_css_selector("div[class='W_pages']")
                break  # no exception means the bottom marker was found; stop scrolling
            except NoSuchElementException:
                pass  # bottom marker not found yet; keep scrolling down
        browser.implicitly_wait(4)  # restore a longer implicit wait
    elif method_index == 1:
        browser.find_element_by_css_selector("div[class='empty_con clearfix']").click()  # loading more
        print "loading more, sleep 4 seconds ... 1"
        time.sleep(4)
        browser.find_element_by_css_selector("div[class='empty_con clearfix']").click()  # loading more
        print "loading more, sleep 2 seconds ... 2"
        time.sleep(2)
    elif method_index == 2:
        load_more_1 = browser.find_element_by_css_selector("div[class='empty_con clearfix']")  # loading more
        ActionChains(browser).click(load_more_1).perform()
        print "loading more, sleep 4 seconds ... 1"
        time.sleep(4)
        load_more_2 = browser.find_element_by_css_selector("div[class='empty_con clearfix']")  # loading more
        ActionChains(browser).click(load_more_2).perform()
        print "loading more, sleep 2 seconds ... 2"
        time.sleep(2)
    elif method_index == 3:
        print "loading more, wait up to 4 seconds ... 1"
        element = WebDriverWait(browser, 4).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, "div[class='empty_con clearfix']"))
        )
        element.click()
        print "loading more, wait up to 2 seconds ... 2"
        WebDriverWait(browser, 2).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, "div[class='empty_con clearfix']"))
        ).click()
    return browser
```
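The scroll-until-bottom loop of method 0 can be exercised without a real browser. In this sketch (Python 3 syntax), FakeBrowser and scroll_until_bottom are hypothetical stand-ins I made up for illustration: the fake driver raises NoSuchElementException until the bottom paging tab "appears" after two scrolls, mimicking how the real loop decides when to stop:

```python
class NoSuchElementException(Exception):
    """Stand-in for selenium.common.exceptions.NoSuchElementException."""

class FakeBrowser(object):
    """Hypothetical fake webdriver: the bottom paging tab (div.W_pages)
    only becomes findable after two scrolls have loaded more content."""
    def __init__(self):
        self.scroll_count = 0

    def execute_script(self, script):
        self.scroll_count += 1  # each scrollTo call triggers more loading

    def find_element_by_css_selector(self, selector):
        if self.scroll_count < 2:
            raise NoSuchElementException(selector)
        return "W_pages element"

def scroll_until_bottom(browser, max_tries=3):
    """Same loop shape as method_index == 0 above: scroll, probe for the
    bottom marker, stop on success; give up after max_tries scrolls."""
    for i in range(1, max_tries + 1):
        browser.execute_script("window.scrollTo(0,document.body.scrollHeight);")
        try:
            browser.find_element_by_css_selector("div[class='W_pages']")
            return i  # bottom marker found on the i-th scroll
        except NoSuchElementException:
            pass  # marker not there yet; keep scrolling
    return None  # marker never appeared

print(scroll_until_bottom(FakeBrowser()))  # → 2
```

The try/except-and-break pattern is what makes the loop robust: a missing element is an expected state, not an error, so it is swallowed and the scroll is retried.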
That is all for this article. I hope it helps with your studies, and thank you for supporting 服务器之家.
Original article: https://blog.csdn.net/mmc2015/article/details/53366452