Compiling and installing keepalived with SaltStack:
Create the required directories and place the SLS state files under them:
[root@node1 ~]# mkdir /srv/salt/prod/keepalived
[root@node1 ~]# mkdir /srv/salt/prod/keepalived/files
1. Compiling and installing keepalived with SaltStack
1.1 Place the downloaded keepalived source package in the files directory under the keepalived directory (the files directory holds the source packages, scripts, and other files the states need):
[root@node1 etc]# pwd
/usr/local/src/keepalived-1.3.6/keepalived/etc
[root@node1 etc]# cp keepalived/keepalived.conf /srv/salt/prod/keepalived/files/
[root@node1 etc]# cp init.d/keepalived /srv/salt/prod/keepalived/files/keepalived.init
[root@node1 sysconfig]# pwd
/usr/local/src/keepalived-1.3.6/keepalived/etc/sysconfig
[root@node1 sysconfig]# cp keepalived /srv/salt/prod/keepalived/files/keepalived.sysconfig
Check the contents of the files directory:
[root@node1 keepalived]# ll files/
total 696
-rw-r--r-- 1 root root 702570 Oct 10 22:21 keepalived-1.3.6.tar.gz
-rwxr-xr-x 1 root root   1335 Oct 10 22:17 keepalived.init
-rw-r--r-- 1 root root    667 Oct 10 22:28 keepalived.sysconfig
1.2 With the keepalived source package and init script in place, start installing keepalived:
[root@node1 keepalived]# pwd
/srv/salt/prod/keepalived
[root@node1 keepalived]# cat install.sls
include:
  - pkg.pkg-init

keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.3.6.tar.gz
    - source: salt://keepalived/files/keepalived-1.3.6.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src/ && tar xf keepalived-1.3.6.tar.gz && cd keepalived-1.3.6 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - pkg: pkg-init
      - file: keepalived-install

keepalived-init:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived.init
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: /etc/init.d/keepalived

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.sysconfig
    - user: root
    - group: root
    - mode: 644

/etc/keepalived:
  file.directory:
    - user: root
    - group: root
    - mode: 755
To summarize, the state file above: 1. includes the build-environment states; 2. compiles and installs keepalived; 3. installs the keepalived init script and registers it as a system service; 4. copies the keepalived.sysconfig file; 5. creates the keepalived configuration directory.
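The idempotence of the compile step comes from the `unless` check: Salt runs the long configure-and-make command only when `test -d /usr/local/keepalived` fails. A minimal plain-shell stand-in of that guard (the prefix path below is a throwaway placeholder, not the real install prefix):

```shell
# Mimic Salt's `unless` guard: the build runs only while the check fails.
PREFIX=$(mktemp -d)/keepalived          # stand-in for /usr/local/keepalived
builds=0
run_build() { builds=$((builds + 1)); mkdir -p "$PREFIX"; }

test -d "$PREFIX" || run_build          # 1st apply: prefix absent, build runs
test -d "$PREFIX" || run_build          # 2nd apply: prefix present, build skipped
echo "build ran $builds time(s)"        # -> build ran 1 time(s)
```

Re-applying the state therefore skips the rebuild instead of compiling keepalived again.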
Run install.sls to install keepalived:
[root@node1 keepalived]# salt 'node1' state.sls keepalived.install saltenv=prod
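Before applying for real, the same command can be previewed with Salt's standard `test=True` dry-run switch, which reports what would change without touching the minion:

```
[root@node1 keepalived]# salt 'node1' state.sls keepalived.install saltenv=prod test=True
```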
2. With keepalived installed and its init script in place, the next step is to provide a configuration file and then start the service. Since different business requirements may call for different keepalived configurations, the configuration is managed in a state separate from the installation; later, each minion's keepalived can be deployed with whichever configuration it needs.
[root@node1 cluster]# pwd
/srv/salt/prod/cluster
[root@node1 cluster]# cat haproxy-outside-keepalived.sls
# high availability combining haproxy with keepalived
include:
  - keepalived.install

keepalived-service:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja    # render as a jinja template so variables can be passed in
    {% if grains['fqdn'] == 'node1' %}    # assign values based on each node's fqdn grain
    - ROUTEID: haproxy_node1
    - STATEID: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'node2' %}
    - ROUTEID: haproxy_node2
    - STATEID: BACKUP
    - PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - reload: True
    - watch:
      - file: keepalived-service
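What the Jinja branch computes is simply a mapping from the node's fqdn grain to its keepalived role; a plain-shell stand-in (not Salt itself) of the same selection logic:

```shell
# fqdn grain -> ROUTEID / STATEID / PRIORITYID, as in the jinja block above
pick_role() {
  case "$1" in
    node1) echo "haproxy_node1 MASTER 150" ;;
    node2) echo "haproxy_node2 BACKUP 100" ;;
  esac
}
pick_role node1   # -> haproxy_node1 MASTER 150
pick_role node2   # -> haproxy_node2 BACKUP 100
```

Putting the branch in the state rather than in the config file keeps a single keepalived.conf template shared by both nodes.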
To summarize the state file above: 1. it includes the keepalived installation states; 2. it gives each node a different configuration, using a jinja template driven by grains; 3. it starts the keepalived service and enables it at boot.
Finally, add the keepalived deployment to top.sls:
[root@node1 base]# cat top.sls
base:
  '*':
    - init.env_init
prod:
  'node1':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
Directory layout of the whole keepalived deployment:
[root@node1 keepalived]# tree
.
├── files
│   ├── keepalived-1.3.6.tar.gz
│   ├── keepalived.init
│   └── keepalived.sysconfig
└── install.sls

1 directory, 4 files
[root@node1 keepalived]# cd ../cluster/
[root@node1 cluster]# tree
.
├── files
│   ├── haproxy-outside.cfg
│   └── haproxy-outside-keepalived.conf
├── haproxy-outside-keepalived.sls
└── haproxy-outside.sls
Since the node1 installation works, change the target in top.sls so that node2 is covered as well:
[root@node1 base]# cat top.sls
base:
  '*':
    - init.env_init
prod:
  '*':    # there are only two nodes, so '*' matches them both
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
Apply the states:
[root@node1 base]# salt '*' state.highstate
Check the state of node2:
[root@node2 ~]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address        Foreign Address      State    PID/Program name
tcp        0      0 192.168.44.10:80     0.0.0.0:*            LISTEN   16791/haproxy
tcp        0      0 0.0.0.0:22           0.0.0.0:*            LISTEN   1279/sshd
tcp        0      0 0.0.0.0:8090         0.0.0.0:*            LISTEN   16791/haproxy
tcp        0      0 :::8080              :::*                 LISTEN   14351/httpd
tcp        0      0 :::22                :::*                 LISTEN   1279/sshd
udp        0      0 0.0.0.0:68           0.0.0.0:*                     1106/dhclient
haproxy is now listening, and it is bound to 192.168.44.10, an address that is not node2's own IP (it is the VIP, currently held by node1).
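For haproxy on the BACKUP node to bind a VIP it does not currently hold, the kernel normally has to allow binding non-local addresses. This environment presumably has that enabled; the usual sysctl for this kind of setup (an assumption on my part, it is not shown in the original steps) is:

```
# /etc/sysctl.conf -- allow binding to an IP not configured locally
net.ipv4.ip_nonlocal_bind = 1
```

Apply it with `sysctl -p` (as root) on both nodes.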
Check the VIP on node1:
[root@node1 files]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:86:2C:63
          inet addr:192.168.44.134  Bcast:192.168.44.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe86:2c63/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:230013 errors:0 dropped:0 overruns:0 frame:0
          TX packets:172530 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:130350592 (124.3 MiB)  TX bytes:19244347 (18.3 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:86:2C:63
          inet addr:192.168.44.10  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:145196 errors:0 dropped:0 overruns:0 frame:0
          TX packets:145196 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12285984 (11.7 MiB)  TX bytes:12285984 (11.7 MiB)
eth0:0 carries the VIP. Now stop keepalived by hand and see whether the VIP fails over to node2:
[root@node1 files]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
Check node2 again:
[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:34:32:CB
          inet addr:192.168.44.135  Bcast:192.168.44.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe34:32cb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:494815 errors:0 dropped:0 overruns:0 frame:0
          TX packets:357301 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:250265303 (238.6 MiB)  TX bytes:98088504 (93.5 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:34:32:CB
          inet addr:192.168.44.10  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2953 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2953 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1272983 (1.2 MiB)  TX bytes:1272983 (1.2 MiB)
The VIP has moved to node2, so the haproxy + keepalived high-availability setup deployed with SaltStack works. Below are the simple haproxy and keepalived configuration files used:
The haproxy configuration file:
[root@node1 files]# pwd
/srv/salt/prod/cluster/files
[root@node1 files]# cat haproxy-outside.cfg
#
# This is a sample configuration. It illustrates how to separate static objects
# traffic from dynamic traffic, and how to dynamically regulate the server load.
#
# It listens on 192.168.1.10:80, and directs all requests for Host 'img' or
# URIs starting with /img or /css to a dedicated group of servers. URIs
# starting with /admin/stats deliver the stats page.
#
global
    maxconn     10000
    stats socket /var/run/haproxy.stat mode 600 level admin
    log         127.0.0.1 local0
    uid         200
    gid         200
    chroot      /var/empty
    daemon

defaults
    mode http
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

# The public 'www' address in the DMZ
frontend webserver
    bind 192.168.44.10:80
    default_backend web
    #bind 192.168.1.10:443 ssl crt /etc/haproxy/haproxy.pem
    mode http

listen base_stats
    bind *:8090
    stats enable
    stats hide-version
    stats uri /haproxy?stats
    stats realm "haproxy statistics"
    stats auth wadeson:redhat

# The static backend backend for 'Host: img', /img and /css.
backend web
    balance roundrobin
    retries 2
    server  web1 192.168.44.134:8080 check inter 1000
    server  web2 192.168.44.135:8080 check inter 1000
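The `balance roundrobin` directive above alternates requests between web1 and web2; a tiny shell illustration of the rotation (not haproxy's actual scheduler, which also accounts for server weights and health):

```shell
# Round-robin over the two backends configured above
for i in 0 1 2 3; do
  if [ $((i % 2)) -eq 0 ]; then s=web1; else s=web2; fi
  echo "request $i -> $s"
done
```

The `check inter 1000` on each server line makes haproxy health-check the backends every second and drop dead ones from the rotation.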
The keepalived configuration file (a jinja template; the three placeholders are filled per node by the state above):
[root@node1 files]# cat haproxy-outside-keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     json_hc@163.com
   }
   notification_email_from json_hc@163.com
   smtp_server smtp.163.com
   smtp_connect_timeout 30
   router_id {{ ROUTEID }}
}

vrrp_instance VI_1 {
    state {{ STATEID }}
    interface eth0
    virtual_router_id 51
    priority {{ PRIORITYID }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        192.168.44.10/24 dev eth0 label eth0:0
    }
}
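This config sets up the VRRP election that drives the failover seen earlier: nodes sharing virtual_router_id 51 advertise their priority, and the highest priority wins MASTER (150 on node1 beats 100 on node2; when node1's keepalived stops advertising, node2 takes over). A shell stand-in for that comparison:

```shell
# VRRP election in miniature: highest advertised priority becomes MASTER.
# Arguments are "name:priority" pairs; prints the winner's name.
elect() { printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1; }

elect node1:150 node2:100   # -> node1 (normal operation)
elect node2:100             # -> node2 (node1 stopped advertising)
```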
Check the load-balancing behaviour through the highly available VIP: