Preface
As the number of sites grows, so does the complexity of managing them. As the saying goes, a big crowd is hard to lead; I've found a big pile of sites is just as hard to manage. Some of them matter and some don't: the core sites naturally get most of the attention, while the ones that never seem to fail slowly fade from memory, until one day a problem pops up and you have to scramble to handle it in a hurry. So managing these sites in a standardized way is well worth doing, and today we take the first step: big site or small, get unified monitoring in place first. Never mind business metrics for now; at the very least, when a site becomes unreachable it should be reported immediately, rather than waiting for the business side to tell you, which makes us look unprofessional. So let's see how to implement availability monitoring for multiple websites in Python. The script is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Note: this script targets Python 2 (httplib, print statement).
import pickle, os, sys, logging
from httplib import HTTPConnection, socket
from smtplib import SMTP


def email_alert(message, status):
    # Send the alert mail; fill in real accounts and passwords here.
    fromaddr = 'xxx@163.com'
    toaddrs = 'xxxx@qq.com'
    server = SMTP('smtp.163.com:25')
    server.starttls()
    server.login('xxxxx', 'xxxx')
    server.sendmail(fromaddr, toaddrs, 'Subject: %s\r\n%s' % (status, message))
    server.quit()


def get_site_status(url):
    # A site counts as 'up' only when it answers a HEAD request with 200.
    response = get_response(url)
    try:
        if getattr(response, 'status') == 200:
            return 'up'
    except AttributeError:
        # response is None (connection failed), so there is no .status
        pass
    return 'down'


def get_response(url):
    # Issue a HEAD request to the site's root: cheap, no body transferred.
    try:
        conn = HTTPConnection(url)
        conn.request('HEAD', '/')
        return conn.getresponse()
    except socket.error:
        return None
    except:
        # Anything else (e.g. a malformed host name) aborts the run.
        logging.error('Bad URL: %s', url)
        exit(1)


def get_headers(url):
    # The response headers are mailed out as diagnostic detail.
    response = get_response(url)
    try:
        return getattr(response, 'getheaders')()
    except AttributeError:
        return 'Headers unavailable'


def compare_site_status(prev_results):
    # Returns a closure that remembers prev_results between calls.
    def is_status_changed(url):
        status = get_site_status(url)
        friendly_status = '%s is %s' % (url, status)
        print friendly_status
        if url in prev_results and prev_results[url] != status:
            # Only a change of status triggers a log entry and a mail.
            logging.warning(status)
            email_alert(str(get_headers(url)), friendly_status)
        prev_results[url] = status
    return is_status_changed


def is_internet_reachable():
    # If both reference sites look down, assume we have no connectivity.
    if get_site_status('www.baidu.com') == 'down' and get_site_status('www.sohu.com') == 'down':
        return False
    return True


def load_old_results(file_path):
    # Load the status dict saved by the previous run, if any.
    pickledata = {}
    if os.path.isfile(file_path):
        picklefile = open(file_path, 'rb')
        pickledata = pickle.load(picklefile)
        picklefile.close()
    return pickledata


def store_results(file_path, data):
    # Persist the current status dict for the next run.
    output = open(file_path, 'wb')
    pickle.dump(data, output)
    output.close()


def main(urls):
    logging.basicConfig(level=logging.WARNING, filename='checksites.log',
                        format='%(asctime)s %(levelname)s: %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')
    pickle_file = 'data.pkl'
    pickledata = load_old_results(pickle_file)
    print pickledata
    if is_internet_reachable():
        status_checker = compare_site_status(pickledata)
        # In Python 2 map() runs eagerly, so every URL gets checked here.
        map(status_checker, urls)
    else:
        logging.error('Either the world ended or we are not connected to the net.')
    store_results(pickle_file, pickledata)


if __name__ == '__main__':
    main(sys.argv[1:])
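Since the entry point is main(sys.argv[1:]), the sites to check are passed as command-line arguments; the hostnames below are placeholders:

python checksites.py www.example.com www.example.org

Run it from a scheduler such as cron every few minutes and you get continuous monitoring: each run saves the latest status to data.pkl, so a mail goes out only when a site's status actually changes.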
Key points of the script explained:
1. getattr()
This is a Python built-in that takes an object and an attribute name and returns the value of that attribute; if the attribute does not exist it raises AttributeError. That is what lets get_site_status() treat a failed connection (a None response) as 'down'. A minimal illustration follows.
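A minimal sketch of that behavior (the Response class here is invented purely for the example):

class Response(object):
    status = 200

r = Response()
print(getattr(r, 'status'))         # returns the attribute value: 200
print(getattr(r, 'reason', 'n/a'))  # missing attribute, default supplied: 'n/a'
# getattr(None, 'status') raises AttributeError, which is exactly how
# get_site_status() falls through to returning 'down'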
2. compare_site_status()
This function returns a function defined inside it. The inner is_status_changed() is a closure: it captures the prev_results dictionary from the enclosing scope, so every call can compare the current check against the previous run's result and update the record in place. A miniature version of the pattern follows.
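The same closure pattern in miniature (make_counter() and its names are invented for the illustration):

def make_counter():
    counts = {}
    def bump(key):
        counts[key] = counts.get(key, 0) + 1
        return counts[key]
    return bump

bump = make_counter()
print(bump('www.example.com'))  # 1
print(bump('www.example.com'))  # 2, the captured dict persists between calls

compare_site_status() works the same way, except the captured dictionary is prev_results and the work done per call is a status check.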
3. map()
This takes two arguments, a function and a sequence, and applies the function to every element of the sequence. Note that in Python 2 map() runs eagerly and returns a list, while in Python 3 it returns a lazy iterator, so the checks would only run once the result is consumed; the sketch after this point shows the difference.
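A short sketch of the difference (check() and the hostnames are placeholders):

urls = ['www.example.com', 'www.example.org']

def check(url):
    print('checking %s' % url)

map(check, urls)           # Python 2: applies check() immediately, returns a list
# list(map(check, urls))   # Python 3: map() is lazy, wrap it in list() to force it
# for url in urls:         # clearest in either version: an explicit loop
#     check(url)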
Summary
That's all for this article; readers who need this kind of monitoring can use it as a reference.