Preface
We use nginx as the load balancer at the very front (or middle tier) of the architecture. As traffic grows, the load balancer itself needs a high-availability setup: keepalived removes the single point of failure, so that if the active nginx goes down, traffic fails over quickly to the backup server.
Fixes for problems you may run into with the VMware network configuration:

- Start the VMware DHCP Service and VMware NAT Service.
- On the host network adapter, enable network sharing, tick "allow other networks", save, and restart the virtual machines.
Installation
Node deployment
Node | Address | Services |
---|---|---|
centos7_1 | 192.168.211.130 | keepalived + nginx |
centos7_2 | 192.168.211.131 | keepalived + nginx |
centos7_3 | 192.168.211.132 | redis server |
web1 (physical machine) | 192.168.211.128 | fastapi + celery |
web2 (physical machine) | 192.168.211.129 | fastapi + celery |
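Before continuing, it is worth checking that the host can reach the virtual machines. The commands below are only a quick sanity check run from the physical host; adjust the addresses if your environment differs from the table above.

```bash
# Each VM should answer if the VMware DHCP/NAT services are working
ping -c 2 192.168.211.130
ping -c 2 192.168.211.131
ping -c 2 192.168.211.132
```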
Web server configuration
Start a Python HTTP server on web1
```bash
vim index.html

<html>
<body>
<h1>web svr 1</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
```
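If the web hosts only have Python 3 (an assumption; the commands above target Python 2), the built-in http.server module gives the same behaviour:

```bash
# Python 3 equivalent of SimpleHTTPServer (assumes python3 is installed)
nohup python3 -m http.server 8080 > running.log 2>&1 &
```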
Start a Python HTTP server on web2
```bash
vim index.html

<html>
<body>
<h1>web svr 2</h1>
</body>
</html>

nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
```
Disable the firewall
```bash
firewall-cmd --state
systemctl stop firewalld.service
systemctl disable firewalld.service
```
The pages can now be reached from a browser, showing web svr 1 and web svr 2.
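A quick way to confirm both backends respond is shown below; it is only a sketch, run from any machine that can reach the two physical hosts and assuming curl is available.

```bash
# Each request should return the corresponding page
curl http://192.168.211.128:8080/   # expect: web svr 1
curl http://192.168.211.129:8080/   # expect: web svr 2
```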
Install nginx on centos_1 and centos_2
First, configure the Aliyun yum mirror
```bash
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
```
Install the dependency packages
```bash
yum -y install gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel
```
Download and extract nginx
```bash
wget http://nginx.org/download/nginx-1.8.0.tar.gz
tar -zxvf nginx-1.8.0.tar.gz
```
Build and install nginx
```bash
cd nginx-1.8.0
./configure --user=nobody --group=nobody --prefix=/usr/local/nginx \
  --with-http_stub_status_module --with-http_gzip_static_module \
  --with-http_realip_module --with-http_sub_module --with-http_ssl_module
make
make install
cd /usr/local/nginx/sbin/
# Check the configuration file
./nginx -t
# Start nginx
./nginx
```
Open the nginx port in the firewall
```bash
firewall-cmd --zone=public --add-port=80/tcp --permanent
systemctl restart firewalld.service
```
At this point, visiting both 130 and 131 shows the nginx welcome page.
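If the welcome page does not load, a couple of quick checks on each node can help; this is just a sketch assuming the default install paths used above.

```bash
# Verify nginx is listening on port 80 and answering locally
ss -tlnp | grep :80
curl -I http://127.0.0.1/
```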
Create an nginx init script
Create an nginx startup script in the init.d directory, so that the init process starts nginx automatically every time the server reboots.
```bash
cd /etc/init.d/
vim nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# pidfile:     /var/run/nginx.pid
# user:        nginx

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

lockfile=/var/run/nginx.lock

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    retval=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
```
Then register the script with chkconfig by entering the following commands in order:
```bash
chkconfig --add nginx
chkconfig --level 345 nginx on
```
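To confirm the registration took effect, a quick check (output may vary slightly between CentOS releases):

```bash
# nginx should be "on" for runlevels 3, 4 and 5
chkconfig --list nginx
```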
Give the script execute permission
```bash
chmod +x nginx
ls
# functions  netconsole  network  nginx  README
```
Start the nginx service
```bash
service nginx start
service nginx status
service nginx reload
```
nginx reverse proxy and load balancing (centos_1)
Modify the nginx.conf configuration file, stripping out the commented lines:
```bash
cd /usr/local/nginx/conf/
mv nginx.conf nginx.conf.bak
egrep -v '^#' nginx.conf.bak
egrep -v '^#|^[ ]*#' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak >> nginx.conf
cat nginx.conf
```
The output looks like this:
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
```
Reload the nginx configuration
```bash
# Test whether the configuration file is valid
../sbin/nginx -t
# Reload the nginx configuration
../sbin/nginx -s reload
```
Configure nginx as a reverse proxy with load balancing
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    # websvr server cluster (also called the load-balancing pool)
    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }

    server {
        listen       80;
        # IP address or domain name; separate multiple entries with spaces
        server_name  192.168.211.130;
        location / {
            # Hand all requests over to the websvr cluster
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
```
Now reload nginx:
```bash
sbin/nginx -s reload
```
The name websvr can be anything you like; pick one that describes what these servers do. In other words, simply adding an `upstream websvr` block and a `proxy_pass` directive is enough to achieve load balancing.
Now when you access 130, the page alternates between web svr 1 and web svr 2. The backend is chosen by weight: the larger the weight value, the more requests the server receives, so repeatedly refreshing the page shows web svr 2 about twice for every appearance of web svr 1.
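A quick way to see the weighted distribution is the loop below. It is only a sketch and assumes curl is installed on the machine you run it from.

```bash
# Send 9 requests to the load balancer and count which backend answered
for i in $(seq 1 9); do
  curl -s http://192.168.211.130/ | grep -o 'web svr [12]'
done | sort | uniq -c
# With weight 1 vs weight 2, expect roughly 3x "web svr 1" and 6x "web svr 2"
```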
So far this is still not highly available. The web tier can be handled this way, and its single point of failure is removed, but if the nginx service itself fails, the whole system becomes essentially unreachable. That is why we need more than one nginx.
Multiple nginx instances working together: nginx high availability (dual-node master/backup mode)
Add another nginx instance on the 131 server (centos_2). The setup is the same as before; only nginx.conf needs to be modified:
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }

    server {
        listen       80;
        server_name  192.168.211.131;
        location / {
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# Reload nginx
sbin/nginx -s reload
```
Now visiting http://192.168.211.131/ gives the same kind of result as http://192.168.211.130/.
These two nginx servers have different IP addresses, so how do we make them work together as a single entry point? This is where keepalived comes in.
Install the software on both CentOS machines
```bash
yum install keepalived pcre-devel -y
```
Configure keepalived
Back up the original configuration on both machines:
```bash
cp /etc/keepalived/keepalived.conf keepalived.conf.bak
```
Configure keepalived on centos_1 (the master)
```bash
[root@localhost keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_nginx {
    # Monitoring script that checks whether the nginx service is running
    script "/etc/keepalived/chk_nginx.sh"
    # Run the check every 10 seconds
    interval 10
    # Priority change applied when the check fails (script returns non-zero): lower priority by 5
    # weight -5
    # Require 2 consecutive failures before treating it as a real failure;
    # weight is then used to reduce the priority (priority range 1-255)
    # fall 2
    # A single success marks the check healthy again, without changing the priority
    # rise 1
}

vrrp_instance VI_1 {
    # Role of this keepalived node: MASTER on the primary, BACKUP on the standby
    state MASTER
    # Network interface used for the HA heartbeat; on CentOS 7 find it with `ip addr`
    interface ens33
    # virtual_router_id must be identical on master and backup (must be between 1 & 255)
    virtual_router_id 51
    # Priority: within the same vrrp_instance the master must be higher than the backup;
    # when the master recovers, the backup hands the VIP back automatically
    priority 100
    # VRRP advertisement interval in seconds; if no advertisement is received,
    # the peer is considered down and a master/backup switch happens
    advert_int 1
    # Authentication type and password; must match on master and backup
    authentication {
        # VRRP authentication type: PASS or AH
        auth_type PASS
        # Password; must be identical on both servers for them to communicate
        auth_pass 1111
    }
    track_script {
        # Health check to run: references the vrrp_script defined above;
        # it runs periodically and can adjust the priority
        chk_nginx
    }
    virtual_ipaddress {
        # VRRP virtual IP address; add more lines here for additional VIPs
        192.168.211.140
    }
}
```
Send the configuration file to the 131 node:
```bash
scp /etc/keepalived/keepalived.conf 192.168.211.131:/etc/keepalived/keepalived.conf
```
On the 131 node, only the following two lines need to be changed:
```
state BACKUP
priority 90
```
Configure the keepalived monitoring script chk_nginx.sh
Create a script that keepalived will execute periodically:
```bash
vi /etc/keepalived/chk_nginx.sh

#!/bin/bash
# Count the nginx processes and store the result in a variable
counter=`ps -C nginx --no-header | wc -l`
# If the count is 0, there is no nginx process
if [ $counter -eq 0 ]; then
    # Try to start nginx
    echo "keepalived info: try to start nginx" >> /var/log/messages
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        # Write a log entry to the system messages
        echo "keepalived info: unable to start nginx" >> /var/log/messages
        # If nginx still cannot be started, stop the keepalived process
        # killall keepalived
        # or stop it via the service manager
        systemctl stop keepalived
        exit 1
    else
        echo "keepalived info: nginx service has been restored" >> /var/log/messages
        exit 0
    fi
else
    # Everything is normal
    echo "keepalived info: nginx detection is normal" >> /var/log/messages
    exit 0
fi
```
Next, grant execute permission and test it:
```bash
chmod +x chk_nginx.sh
./chk_nginx.sh
```
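To convince yourself that the script really restarts nginx, a manual test along these lines can help; it is only a sketch and the paths assume the layout used above.

```bash
# Stop nginx, run the check script, then verify nginx is back
/usr/local/nginx/sbin/nginx -s stop
/etc/keepalived/chk_nginx.sh
ps -C nginx --no-header        # should list nginx processes again
tail -n 3 /var/log/messages    # should contain the "try to start nginx" lines
```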
Restart keepalived on both machines:
```bash
systemctl restart keepalived
systemctl status keepalived
```
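Before testing from a browser you can check which node currently holds the virtual IP; this quick check assumes the interface name ens33 from the keepalived configuration above.

```bash
# On the master, the VIP 192.168.211.140 should appear on ens33
ip addr show ens33 | grep 192.168.211.140
# Optionally make keepalived start at boot
systemctl enable keepalived
```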
Accessing .140 now also displays the page normally, which means the virtual IP binding succeeded. While testing, you can watch the log output written to messages in real time with the command below:
```bash
tail -f /var/log/messages

# If nginx has been stopped
keepalived info: try to start nginx
keepalived info: nginx service has been restored
# nginx running normally
keepalived info: nginx detection is normal
```
When the nginx check passes, the script returns 0; when nginx is gone, it returns 1. However, keepalived does not seem to fail over based on this return value alone: only when the keepalived process itself stops (which the script triggers when nginx cannot be restarted) is the local VIP released, and the virtual IP finally moves to the other server.
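A simple way to verify the failover behaviour end to end is sketched below; it simulates an unrecoverable failure on the master and assumes the topology described above.

```bash
# On the master (192.168.211.130): simulate a failure nginx cannot recover from
/usr/local/nginx/sbin/nginx -s stop
systemctl stop keepalived

# On the backup (192.168.211.131): the VIP should now be bound here
ip addr show ens33 | grep 192.168.211.140

# From any client: the site should still answer through the VIP
curl http://192.168.211.140/
```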
References
https://www.jianshu.com/p/7e8e61d34960
https://www.cnblogs.com/zhangxingeng/p/10721083.html
Original article: https://blog.csdn.net/blackxu007/article/details/119860165