This article walks through a Python data-analysis example: collecting the historical winning numbers of the Shuangseqiu (双色球, "double color ball") lottery. It is shared for reference; the details follow.
Everyone harbors hopes of winning the Shuangseqiu jackpot. For the technically minded, the idea here is to use Python to collect the historical winning numbers so they can later be fed into a predictive analysis (whether analysis can actually improve one's odds in a random draw is, of course, another question).
Note: the analysis is based on Shuangseqiu data retrieved on May 15, 2016, covering 1940 draws in total.
This is beginner-level code and parts of it are clumsy; readers with a cleaner version are welcome to share it.
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# author: levycui
# date:   20160513
# Description: collect Shuangseqiu (double color ball) winning numbers.
# Note: this is Python 2 code (urllib2, print statements), as in the original article.

import re
import urllib2
from bs4 import BeautifulSoup  # parse the pages with BeautifulSoup

# Pretend to be a browser and fetch the page source.
def getPage(href):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
    }
    req = urllib2.Request(url=href, headers=headers)
    try:
        post = urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        print e.code
        print e.reason
        return None
    return post.read()

# Starting URL: first page of the Shuangseqiu result list.
url = 'http://kaijiang.zhcw.com/zhcw/html/ssq/list_1.html'

# ===============================================================================
# Get the total number of result pages.
def getPageNum(url):
    num = 0
    page = getPage(url)
    soup = BeautifulSoup(page)
    strong = soup.find('td', colspan='7')
    if strong:
        result = strong.get_text().split(' ')
        # Rebuild the page count integer from its individual digits.
        list_num = re.findall('[0-9]', result[1])
        for i in range(len(list_num)):
            num = num * 10 + int(list_num[i])
        return num
    else:
        return 0

# ===============================================================================
# Collect the winning numbers and draw dates from every page.
def getText(url):
    for list_num in range(1, getPageNum(url)):  # from page 1 to the last page
        print list_num  # show progress
        href = 'http://kaijiang.zhcw.com/zhcw/html/ssq/list_' + str(list_num) + '.html'
        page = BeautifulSoup(getPage(href))
        em_list = page.find_all('em')                        # <em> tags hold the ball numbers
        div_list = page.find_all('td', {'align': 'center'})  # <td align=center> cells hold the draw dates

        # Write this page's ball numbers to num.txt,
        # seven comma-separated numbers (6 red + 1 blue) per line.
        n = 0
        fp = open('num.txt', 'w')
        for div in em_list:
            text = div.get_text().encode('utf-8')
            n = n + 1
            if n == 7:
                text = text + '\n'
                n = 0
            else:
                text = text + ','
            fp.write(text)
        fp.close()

        # Write this page's draw dates to date.txt, one per line.
        fp = open('date.txt', 'w')
        for div in div_list:
            text = div.get_text().strip()
            match = re.findall(r'\d{4}-\d{2}-\d{2}', text)
            if match:
                fp.write(match[0] + '\n')
        fp.close()

        # Merge num.txt and date.txt into hun.txt, one tuple per line, e.g.:
        # ('2016-05-03', '09,12,24,28,29,30,02')
        # ('2016-05-01', '06,08,13,14,22,27,10')
        # ('2016-04-28', '03,08,13,14,15,30,04')
        fp01 = open('date.txt', 'r')
        a = [line01.strip('\n') for line01 in fp01]
        fp01.close()
        fp02 = open('num.txt', 'r')
        b = [line02.strip('\n') for line02 in fp02]
        fp02.close()
        fp = open('hun.txt', 'a')  # append, so results accumulate across pages
        for cc in zip(a, b):      # pair each date with its numbers
            print cc
            fp.write(str(cc) + '\n')
        fp.close()

# ===============================================================================
if __name__ == '__main__':
    print getPageNum(url)
    getText(url)
```
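The scraper's parsing boils down to two regex tricks: pulling `YYYY-MM-DD` dates out of table cells, and rebuilding the total page count digit by digit. Both can be exercised without any network access. A minimal sketch (written in Python 3, unlike the Python 2 scraper above; the HTML fragment is made up for illustration):

```python
import re

# A made-up fragment mimicking the structure the scraper relies on:
# a date inside a centered table cell, and a page count in a summary string.
sample = '<td align="center">2016-05-03</td> 1940 records'

# Extract draw dates with the same pattern getText() uses.
dates = re.findall(r'\d{4}-\d{2}-\d{2}', sample)

# Rebuild an integer from its digits, exactly as getPageNum() does.
num = 0
for digit in re.findall('[0-9]', '130'):
    num = num * 10 + int(digit)

print(dates)  # the extracted date strings
print(num)    # the reconstructed page count
```

The digit-accumulation loop is equivalent to `int('130')`; the original keeps it explicit because the digits arrive as a list of single-character matches.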
Sample output (hun.txt):
```
('2015-03-03', '09,11,16,18,23,24,10')
('2015-03-01', '08,09,10,13,29,30,01')
('2015-02-26', '04,07,10,16,23,25,10')
```
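With hun.txt in this tuple-per-line format, reading it back for the kind of frequency analysis the article has in mind is straightforward: `ast.literal_eval` safely parses each line into a `(date, numbers)` tuple, and `collections.Counter` tallies the balls. A minimal sketch (Python 3; the sample lines are the three rows shown above, standing in for the real file):

```python
import ast
from collections import Counter

# Sample lines in the hun.txt format; in practice these would come from
# open('hun.txt') instead.
lines = [
    "('2015-03-03', '09,11,16,18,23,24,10')",
    "('2015-03-01', '08,09,10,13,29,30,01')",
    "('2015-02-26', '04,07,10,16,23,25,10')",
]

red_counter = Counter()
blue_counter = Counter()
for line in lines:
    date, nums = ast.literal_eval(line)  # safely parse the tuple string
    balls = nums.split(',')
    red_counter.update(balls[:6])        # first six numbers are the red balls
    blue_counter.update(balls[6:])       # the seventh is the blue ball

print(red_counter)   # how often each red ball appeared
print(blue_counter)  # how often each blue ball appeared
```

`ast.literal_eval` is preferable to `eval` here since hun.txt lines are plain literals and nothing else should ever be executed.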
I hope this article is of some help to readers working with Python.
Original article: http://blog.csdn.net/levy_cui/article/details/51394450