LVS+Heartbeat High-Availability Cluster Setup Notes

Posted by 散盡浮華 on 2018-12-24

 

LVS basics and Heartbeat basics were introduced in earlier posts. This post walks through setting up a high-availability web cluster with LVS+Heartbeat.

The Heartbeat project is part of the Linux-HA effort and implements a high-availability cluster system. Heartbeating and cluster messaging are the two key components of an HA cluster, and in the Heartbeat project both are implemented by the heartbeat module.

Heartbeat's HA cluster communicates over UDP and serial links, and its plugin architecture supports serial, unicast, broadcast, and multicast messaging between cluster nodes. It implements the core HA function, the heartbeat itself: the Heartbeat software is installed on both servers to monitor system state, coordinate the master/standby roles, and keep the service available. It can detect application-level software and hardware failures on a server, then isolate the fault and recover in time; through system monitoring, service monitoring, and automatic IP failover it removes single points of failure from the whole application, keeping critical services highly available simply and cheaply.

Heartbeat uses virtual IP (VIP) address mapping so that failover between master and standby is transparent to clients. Heartbeat alone, however, cannot provide a robust service, so here it is combined with LVS for load balancing.

LVS stands for Linux Virtual Server, a virtual server cluster system. Any discussion of LVS leads to IPVS (managed with the ipvsadm command): IPVS is the core of an LVS cluster. It runs on the Load Balancer and forwards requests addressed to the Virtual IP on to the Real Servers.

ldirectord works alongside LVS as a health-check mechanism; without it, the load balancer has no way of detecting that a node has died and keeps forwarding traffic to it.
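As a rough sketch of what that means in practice, the IPVS table for this article's addresses could be built by hand with ipvsadm (requires root and the ip_vs kernel module; ldirectord conceptually issues equivalent add/remove operations when a health check changes state):

```shell
ipvsadm -C                                             # flush any existing rules
ipvsadm -A -t 172.16.60.111:80 -s wlc                  # virtual service on the VIP, wlc scheduler
ipvsadm -a -t 172.16.60.111:80 -r 172.16.60.204:80 -g  # real server 1, DR mode (-g)
ipvsadm -a -t 172.16.60.111:80 -r 172.16.60.205:80 -g  # real server 2, DR mode (-g)
```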

A rough sketch of the architecture for this example:

1) Base environment (CentOS 6.9 on all nodes)

172.16.60.206(eth0)    HA master node (ha-master)    heartbeat, ipvsadm, ldirectord
172.16.60.207(eth0)    HA standby node (ha-slave)    heartbeat, ipvsadm, ldirectord
172.16.60.111          VIP address
172.16.60.204(eth0)    backend node 1 (rs-204)       nginx, realserver
172.16.60.205(eth0)    backend node 2 (rs-205)       nginx, realserver

1) Disable the firewall and SELinux (on all four nodes)
[root@ha-master ~]# /etc/init.d/iptables stop
[root@ha-master ~]# setenforce 0
[root@ha-master ~]# vim /etc/sysconfig/selinux 
SELINUX=disabled

2) Set the hostname and hosts entries (on both HA nodes)
On the master node:
[root@ha-master ~]# hostname ha-master
[root@ha-master ~]# vim /etc/sysconfig/network
HOSTNAME=ha-master
[root@ha-master ~]# vim /etc/hosts
172.16.60.206 ha-master
172.16.60.207 ha-slave

On the standby node:
[root@ha-slave ~]# hostname ha-slave
[root@ha-slave ~]# vim /etc/sysconfig/network
HOSTNAME=ha-slave
[root@ha-slave ~]# vim /etc/hosts
172.16.60.206 ha-master
172.16.60.207 ha-slave

3) Enable IP forwarding (on all four nodes)
[root@ha-master ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@ha-master ~]# vim /etc/sysctl.conf 
net.ipv4.ip_forward = 1
[root@ha-master ~]# sysctl -p

2) Install and configure Heartbeat (on both HA nodes)

1) First install heartbeat (same steps on both HA master and standby)
Download epel-release-latest-6.noarch.rpm and ldirectord-3.9.5-3.1.x86_64.rpm.
Download link: https://pan.baidu.com/s/1IvCDEFLCBYddalV89YvonQ
Extraction code: gz53
  
[root@ha-master ~]# ll epel-release-latest-6.noarch.rpm
-rw-rw-r-- 1 root root 14540 Nov  5  2012 epel-release-latest-6.noarch.rpm
[root@ha-master ~]# ll ldirectord-3.9.5-3.1.x86_64.rpm
-rw-rw-r-- 1 root root 90140 Dec 24 15:54 ldirectord-3.9.5-3.1.x86_64.rpm
  
[root@ha-master ~]# yum install -y epel-release
[root@ha-master ~]# rpm -ivh epel-release-latest-6.noarch.rpm --force
[root@ha-master ~]# yum install -y heartbeat* libnet
[root@ha-master ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm      # installed via yum because it pulls in many dependencies
  
2) Configure heartbeat (on both HA master and standby nodes)
Installing heartbeat creates the directory /etc/ha.d/, which holds heartbeat's configuration files.
The stock configuration files are mostly comments, so we write them by hand here. Heartbeat commonly uses four configuration files:
ha.cf: heartbeat's main configuration file
ldirectord.cf: resource monitoring (ldirectord) configuration
haresources: local resource file
authkeys: authentication file
  
[root@ha-master ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@ha-master heartbeat-3.0.4]# cp authkeys ha.cf haresources /etc/ha.d/
  
[root@ha-master heartbeat-3.0.4]# cd /usr/share/doc/ldirectord-3.9.5
[root@ha-master ldirectord-3.9.5]# cp ldirectord.cf /etc/ha.d/
[root@ha-master ldirectord-3.9.5]# cd /etc/ha.d/
[root@ha-master ha.d]# ll
total 56
-rw-r--r-- 1 root root   645 Dec 24 21:37 authkeys
-rw-r--r-- 1 root root 10502 Dec 24 21:37 ha.cf
-rwxr-xr-x 1 root root   745 Dec  3  2013 harc
-rw-r--r-- 1 root root  5905 Dec 24 21:37 haresources
-rw-r--r-- 1 root root  8301 Dec 24 21:38 ldirectord.cf
drwxr-xr-x 2 root root  4096 Dec 24 21:28 rc.d
-rw-r--r-- 1 root root   692 Dec  3  2013 README.config
drwxr-xr-x 2 root root  4096 Dec 24 21:28 resource.d
-rw-r--r-- 1 root root  2082 Mar 24  2017 shellfuncs
  
3) Configure heartbeat's main configuration file ha.cf (identical on HA master and standby, apart from the ucast line noted below)
[root@ha-master ha.d]# pwd
/etc/ha.d
[root@ha-master ha.d]# cp ha.cf ha.cf.bak
[root@ha-master ha.d]# > ha.cf
[root@ha-master ha.d]# vim ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log         #log file location
#crm yes                            #whether to enable the cluster resource manager
logfacility        local0         #syslog facility for log records
keepalive 2                         #heartbeat interval; the default unit is seconds
deadtime 5                         #if no heartbeat arrives from the peer within this interval, the peer is declared dead
warntime 3                         #if no heartbeat arrives within this interval, a warning is logged, but no failover happens yet
initdead 10          #on some systems the network needs a while after (re)boot before it works; this setting covers that window. It must be at least twice deadtime.
udpport  694        #port used for heartbeat traffic; 694 is the default
bcast        eth0               # broadcast heartbeats over Ethernet on eth0. In the real file, delete everything from "#" onward on this line, or heartbeat errors out.
ucast eth0 172.16.60.207       #unicast heartbeats over eth0 via UDP; the IP that follows must be the address of the PEER node!!!!!
auto_failback on            #with "on", the master automatically takes its resources back from the standby as soon as it recovers; with "off", the recovered master becomes the standby and the standby stays active!!!!!
#stonith_host *     baytech 10.0.0.3 mylogin mysecretpassword
#stonith_host ken3  rps10 /dev/ttyS1 kathy 0
#stonith_host kathy rps10 /dev/ttyS1 ken3 0
#watchdog /dev/watchdog         
node   ha-master           #master node name, as shown by "uname -n"; the first node listed is the primary!!!!!
node   ha-slave              #standby node name; note that the order matters!!!!
#ping 172.16.60.207         # ping node: a host with a fixed route, used only to test network connectivity. Usually either this line or the ping_group line below is enough; here this line is commented out and ping_group is used instead.
ping_group group1 172.16.60.204 172.16.60.205     #these are NOT the two cluster node addresses; they exist only to test connectivity. When neither IP responds to ping, the peer starts taking over the resources.
respawn root /usr/lib64/heartbeat/ipfail                    #optional. "root" is the user that runs the ipfail program. Make sure /usr/lib64/heartbeat/ipfail is the correct path (it can be located with find), otherwise heartbeat fails to start
apiauth ipfail gid=root uid=root

============================ Note ================================
On the HA standby node, ha.cf only needs the ucast line changed to "ucast eth0 172.16.60.206"; everything else is identical to the master's ha.cf above!

4) Configure heartbeat's authentication file authkeys (must be identical on HA master and standby)
[root@ha-master ~]# cd /etc/ha.d/
[root@ha-master ha.d]# cp authkeys authkeys.bak
[root@ha-master ha.d]# >authkeys
[root@ha-master ha.d]# vim authkeys
auth 3                                                      #the number after "auth" must appear again as the key index on a following line! Three methods are available ("1", "2", "3"); "3" (md5) is chosen here, but "1" or "2" work too. Master and standby must match!
#1 crc
#2 sha1 HI!
3 md5 Hello!
  
The file must be mode 600:
[root@ha-master ha.d]# chmod 600 authkeys
[root@ha-master ha.d]# ll authkeys
-rw------- 1 root root 20 Dec 25 00:16 authkeys
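The shared key in authkeys is what lets each node verify its peer's packets. As a rough illustration of the idea only (heartbeat's actual wire format differs in detail), a digest over a message plus the shared secret can only be reproduced by a holder of the same secret:

```shell
# Illustration only: a keyed MD5 digest. This just shows why both nodes
# need an identical authkeys file to authenticate each other's packets.
msg="status ha-master active"
key="Hello!"                       # the secret from "3 md5 Hello!" above
digest=$(printf '%s%s' "$msg" "$key" | md5sum | awk '{print $1}')
echo "$digest"                     # 32 hex characters
```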

5) Edit heartbeat's resource file haresources (must be identical on HA master and standby)
[root@ha-slave ha.d]# cp haresources haresources.bak
[root@ha-slave ha.d]# >haresources
[root@ha-slave ha.d]# vim haresources          # append the single line below at the end. Since the file is all comments by default, it can be emptied first and just this one line added
ha-master IPaddr::172.16.60.111 ipvsadm ldirectord      

Configuration notes:
The line above makes ha-master the primary node, sets the cluster VIP to 172.16.60.111, and names ipvsadm and ldirectord as the services to manage.
With this in place, starting the heartbeat service automatically starts ipvsadm and ldirectord as well.
The ipvsadm service's configuration file is /etc/sysconfig/ipvsadm; it is set up later.
The ldirectord service's configuration file is /etc/ha.d/ldirectord.cf; it is also set up later.

6) Configure heartbeat's monitoring file ldirectord.cf (must be identical on HA master and standby)
ldirectord monitors the real services in the LVS cluster. It integrates with heartbeat and can run as one of heartbeat's managed services.
Its job is to watch the Real Servers: when a Real Server fails it is removed from the Load Balancer's table, and when it recovers it is added back.
Copy ldirectord's configuration file into /etc/ha.d (it is not placed there by default), and set "quiescent=no" in ldirectord.cf.
 
[root@ha-master ha.d]# cp ldirectord.cf ldirectord.cf.bak
[root@ha-master ha.d]# vim ldirectord.cf
checktimeout=3      #seconds before a real server is judged failed
checkinterval=1      #interval between two consecutive health checks
autoreload=yes       #reload the configuration file automatically when it changes
logfile="/var/log/ldirectord.log"     #path of ldirectord's log file
#logfile="local0"
#emailalert="root@30920.cn" 
#emailalertfreq=3600
#emailalertstatus=all
quiescent=no        #with "no", a real server that does not respond within checktimeout is removed from the table, cutting existing client connections. With "yes", the failed real server is not removed; existing connections stay, but new ones cannot reach it.

virtual=172.16.60.111:80     #the virtual IP. Every line after this "virtual" line must be indented with one tab!! Otherwise ldirectord will very likely fail to start because of a format error
        real=172.16.60.204:80 gate   #gate = LVS DR mode, ipip = TUN mode, masq = NAT mode
        real=172.16.60.205:80 gate
        fallback=127.0.0.1:80 gate     #where requests are redirected when ALL real servers are down, i.e. the VIP falls back to port 80 on this machine
        service=http         #service type; HTTP is load-balanced here
        scheduler=wlc      #scheduling algorithm; it must match the algorithm in the LVS script (/etc/sysconfig/ipvsadm)
        persistent=600     #persistence: within 600s the same client IP keeps hitting the same real server, unless that real server fails and the request is forwarded to another one
        #netmask=255.255.255.255
        protocol=tcp          # protocol
        checktype=negotiate   #check type "negotiate": service health is decided by an actual request/response exchange
        checkport=80        # port to check
        request="lvs_testpage.html"   #page requested by the health check; this file must sit in the document root of the monitored port on the backend real servers, i.e. in both real servers' nginx roots  
        receive="Test HA Page"      #expected response string, i.e. the content of lvs_testpage.html above
        #virtualhost=www.x.y.z       #virtual host name; may be anything

============================ Notes ======================================
In the configuration above, "virtual" defines the VIP, followed by the real server entries; "fallback" means that when all real servers are down, requests land on port 80 of this machine, which typically serves a "server under maintenance" page.
"service" is the service being scheduled, "scheduler" the scheduling algorithm, "protocol" the protocol, "checktype=negotiate" the check type, and "checkport" the port to check, i.e. the health check itself.

The /etc/ha.d/ldirectord.cf above defines forwarding for port 80 only. For other ports, e.g. 3306,
just add another block below it along the lines of "virtual=172.16.60.111:3306 ....", analogous to the configuration above! A sample configuration is in the backed-up ldirectord.cf.bak file.

When configuring ldirectord.cf, it is best to edit the samples already in the file rather than emptying it and writing from scratch; formatting mistakes easily stop the ldirectord service from starting!
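As an illustration only (this block is a sketch, not the contents of the backed-up ldirectord.cf.bak; the backend port and the checktype=connect choice are assumptions), a port-3306 block would follow the same pattern:

```
virtual=172.16.60.111:3306
        real=172.16.60.204:3306 gate
        real=172.16.60.205:3306 gate
        service=mysql
        scheduler=wlc
        protocol=tcp
        checktype=connect      # plain TCP connect check; a negotiate check for MySQL would also need login credentials
        checkport=3306
```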

Check ldirectord with "status"; as long as no errors are reported, ldirectord.cf is configured correctly:
[root@ha-master ha.d]# /etc/init.d/ldirectord status

3) Install and configure LVS (same steps on both HA nodes)

1) Install LVS dependencies
[root@ha-master ~]# yum install -y libnl* popt*
  
Check whether the LVS (IPVS) kernel modules are available:
[root@ha-master ~]# modprobe -l |grep ipvs
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko
kernel/net/netfilter/ipvs/ip_vs_pe_sip.ko
  
2) Download and install LVS (ipvsadm)
[root@ha-master ~]# cd /usr/local/src/
[root@ha-master src]# unlink /usr/src/linux
[root@ha-master src]# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
[root@ha-master src]# wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
[root@ha-master src]# tar -zvxf ipvsadm-1.26.tar.gz
[root@ha-master src]# cd ipvsadm-1.26
[root@ha-master ipvsadm-1.26]# make && make install
  
LVS is installed; view the current (still empty) LVS table:
[root@ha-master ipvsadm-1.26]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

3) Add the LVS management script (ipvsadm)
The ipvsadm service's configuration file is /etc/sysconfig/ipvsadm:
[root@ha-master ha.d]# vim /etc/sysconfig/ipvsadm
#!/bin/bash 
# description: start LVS of DirectorServer 
#Written by :NetSeek http://www.linuxtone.org 
GW=172.16.60.1                                   #gateway address on the VIP's subnet
  
# website director vip. 
WEB_VIP=172.16.60.111   
WEB_RIP1=172.16.60.204 
WEB_RIP2=172.16.60.205 
 
  
. /etc/rc.d/init.d/functions 
  
logger $0 called with $1 
  
case "$1" in 
  
start) 
        # Clear all iptables rules. 
         /sbin/iptables -F 
        # Reset iptables counters. 
         /sbin/iptables -Z 
         # Clear all ipvsadm rules/services. 
         /sbin/ipvsadm -C 
  
 #set lvs vip for dr 
        /sbin/ipvsadm --set 30 5 60 
        /sbin/ifconfig eth0:0 $WEB_VIP broadcast $WEB_VIP netmask 255.255.255.255 up 
        /sbin/route add -host $WEB_VIP dev eth0:0 
                /sbin/ipvsadm -A -t $WEB_VIP:80 -s wlc -p 600 
                /sbin/ipvsadm -a -t $WEB_VIP:80 -r $WEB_RIP1:80 -g
                /sbin/ipvsadm -a -t $WEB_VIP:80 -r $WEB_RIP2:80 -g 

        touch /var/lock/subsys/ipvsadm >/dev/null 2>&1 
         
        # set Arp 
                /sbin/arping -I eth0 -c 5 -s $WEB_VIP $GW >/dev/null 2>&1   
       ;; 
stop) 
        /sbin/ipvsadm -C 
        /sbin/ipvsadm -Z 
        ifconfig eth0:0 down 
        route del $WEB_VIP  >/dev/null 2>&1 
        rm -rf /var/lock/subsys/ipvsadm >/dev/null 2>&1 
                /sbin/arping -I eth0 -c 5 -s $WEB_VIP $GW 
        echo "ipvsadm stopped" 
       ;; 
  
status) 
  
        if [ ! -e /var/lock/subsys/ipvsadm ];then 
                echo "ipvsadm is stopped" 
                exit 1 
        else 
                ipvsadm -ln 
                echo "..........ipvsadm is OK." 
        fi 
      ;; 
  
*) 
        echo "Usage: $0 {start|stop|status}" 
        exit 1 
esac 
  
exit 0


=============== Note =================
The "-p 600" above sets the session persistence time to 600 seconds, which should match the setting in ldirectord.cf (the LVS scheduling algorithm must match as well, wlc in this case).

Make the script executable:
[root@ha-master ha.d]# chmod 755 /etc/sysconfig/ipvsadm

4) Real server configuration

1) Write the LVS start script on the real servers (identical on both real server nodes)
[root@rs-204 ~]# vim /etc/init.d/realserver
#!/bin/sh
VIP=172.16.60.111     
. /etc/rc.d/init.d/functions
      
case "$1" in
# start: suppress local ARP replies for the VIP and bind it to the loopback interface
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up   
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server starts successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
        echo "LVS-DR real server is stopped."
    else
        echo "LVS-DR real server is running."
    fi
    exit 3
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
  
  
Start the realserver script on both real server nodes:
[root@rs-204 ~]# chmod 755 /etc/init.d/realserver
[root@rs-204 ~]# ll /etc/init.d/realserver
-rwxr-xr-x 1 root root 1278 Dec 24 13:40 /etc/init.d/realserver
  
[root@rs-204 ~]# /etc/init.d/realserver start
LVS-DR real server starts successfully.
  
Enable it at boot:
[root@rs-204 ~]# echo "/etc/init.d/realserver" >> /etc/rc.local
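Alternatively (a common variant, not part of the original script), the ARP kernel settings can also be made persistent in /etc/sysctl.conf so they survive a reboot independently of the script:

```
# /etc/sysctl.conf fragment (apply with "sysctl -p")
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```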
  
Check: the VIP is now configured on lo:0 on both real server nodes:
[root@rs-204 ~]# ifconfig
...........
lo:0      Link encap:Local Loopback
          inet addr:172.16.60.111  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
  
  
2) Next, deploy the web test environment on the two real servers (same steps on both real server nodes)
Install nginx via yum (first install the nginx yum repository):
[root@rs-204 ~]# rpm -ivh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
[root@rs-204 ~]# yum install nginx
  
realserver01's nginx configuration:
[root@rs-204 ~]# cd /etc/nginx/conf.d/
[root@rs-204 conf.d]# cat default.conf
[root@rs-204 conf.d]# >/usr/share/nginx/html/index.html
[root@rs-204 conf.d]# vim /usr/share/nginx/html/index.html
this is test page of realserver01:172.16.60.204
  
[root@rs-204 conf.d]# vim /usr/share/nginx/html/lvs_testpage.html
Test HA Page
  
[root@rs-204 conf.d]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@rs-204 conf.d]# lsof -i:80
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   31944  root    6u  IPv4  91208      0t0  TCP *:http (LISTEN)
nginx   31945 nginx    6u  IPv4  91208      0t0  TCP *:http (LISTEN)
  
realserver02's nginx configuration:
[root@rs-205 src]# cd /etc/nginx/conf.d/
[root@rs-205 conf.d]# cat default.conf
[root@rs-205 conf.d]# >/usr/share/nginx/html/index.html
[root@rs-205 conf.d]# vim /usr/share/nginx/html/index.html
this is test page of realserver02:172.16.60.205

[root@rs-205 conf.d]# vim /usr/share/nginx/html/lvs_testpage.html
Test HA Page
  
[root@rs-205 conf.d]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@rs-205 conf.d]# lsof -i:80
COMMAND   PID  USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
nginx   20839  root    6u  IPv4 289527645      0t0  TCP *:http (LISTEN)
nginx   20840 nginx    6u  IPv4 289527645      0t0  TCP *:http (LISTEN)
  
Finally, browse to each real server's nginx directly:
Visiting http://172.16.60.204/ returns "this is test page of realserver01:172.16.60.204"
Visiting http://172.16.60.204/lvs_testpage.html returns "Test HA Page"

Visiting http://172.16.60.205/ returns "this is test page of realserver02:172.16.60.205"
Visiting http://172.16.60.205/lvs_testpage.html returns "Test HA Page"

5) Configure the fallback page served from port 80 on the HA nodes themselves (same steps on both HA nodes)

Because ldirectord.cf contains "fallback=127.0.0.1:80 gate", client requests are forwarded to port 80 on the LVS HA node itself whenever all backend real servers have failed.

[root@ha-master ~]# rpm -ivh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
[root@ha-master ~]# yum install nginx
  
The HA node's nginx configuration:
[root@ha-master ~]# cd /etc/nginx/conf.d/
[root@ha-master conf.d]# cat default.conf
[root@ha-master conf.d]# >/usr/share/nginx/html/index.html
[root@ha-master conf.d]# vim /usr/share/nginx/html/index.html
Sorry, the access is in maintenance for the time being. Please wait a moment.

[root@ha-master conf.d]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@ha-master conf.d]# lsof -i:80
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   31944  root    6u  IPv4  91208      0t0  TCP *:http (LISTEN)
nginx   31945 nginx    6u  IPv4  91208      0t0  TCP *:http (LISTEN)

Visiting http://172.16.60.206/ or http://172.16.60.207/
returns "Sorry, the access is in maintenance for the time being. Please wait a moment."

6) Start the heartbeat service (on both HA nodes)

Starting the heartbeat service also starts ipvsadm and ldirectord, because they are configured in /etc/ha.d/haresources!
Note: only the node currently providing LVS forwarding (i.e. the one holding the VIP resource) starts ipvsadm and ldirectord together with heartbeat!

1) Start heartbeat on the HA master first
[root@ha-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.

[root@ha-master ~]# ps -ef|grep heartbeat
root     20886     1  0 15:41 ?        00:00:00 heartbeat: master control process
root     20891 20886  0 15:41 ?        00:00:00 heartbeat: FIFO reader        
root     20892 20886  0 15:41 ?        00:00:00 heartbeat: write: bcast eth0  
root     20893 20886  0 15:41 ?        00:00:00 heartbeat: read: bcast eth0   
root     20894 20886  0 15:41 ?        00:00:00 heartbeat: write: ucast eth0  
root     20895 20886  0 15:41 ?        00:00:00 heartbeat: read: ucast eth0   
root     20896 20886  0 15:41 ?        00:00:00 heartbeat: write: ping_group group1
root     20897 20886  0 15:41 ?        00:00:00 heartbeat: read: ping_group group1
root     20917 20886  0 15:41 ?        00:00:00 /usr/lib64/heartbeat/ipfail
root     20938 17616  0 15:41 pts/0    00:00:00 grep heartbeat

heartbeat's service port defaults to 694:
[root@ha-master ~]# lsof -i:694
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
heartbeat 20892 root    7u  IPv4  42238      0t0  UDP *:ha-cluster 
heartbeat 20893 root    7u  IPv4  42238      0t0  UDP *:ha-cluster 
heartbeat 20894 root    7u  IPv4  42244      0t0  UDP *:ha-cluster 
heartbeat 20895 root    7u  IPv4  42244      0t0  UDP *:ha-cluster 

ldirectord was started automatically along with heartbeat, which shows the master node is currently the one providing LVS forwarding:
[root@ha-master ~]# ps -ef|grep ldirectord
root     21336     1  0 15:41 ?        00:00:00 /usr/bin/perl -w /usr/sbin/ldirectord start
root     21365 17616  0 15:42 pts/0    00:00:00 grep ldirectord

[root@ha-master ~]# /etc/init.d/ldirectord status
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 21336

Check the master node: it now holds the VIP resource. (After the very first heartbeat start it takes a short while for the VIP to appear; on subsequent restarts or failovers it shows up quickly.)
[root@ha-master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.111/24 brd 172.16.60.255 scope global secondary eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link 
       valid_lft forever preferred_lft forever

The master node is now handling LVS forwarding, which can be seen in its table:
[root@ha-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.204:80             Route   1      0          0         
  -> 172.16.60.205:80             Route   1      0          0  

Check the master node's heartbeat log:
[root@ha-master ~]# tail -f /var/log/ha-log 
ip-request-resp(default)[21041]:        2018/12/25_15:41:48 received ip-request-resp IPaddr::172.16.60.111 OK yes
ResourceManager(default)[21064]:        2018/12/25_15:41:48 info: Acquiring resource group: ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21092]: 2018/12/25_15:41:48 INFO:  Resource is stopped
ResourceManager(default)[21064]:        2018/12/25_15:41:48 info: Running /etc/ha.d/resource.d/IPaddr 172.16.60.111 start
IPaddr(IPaddr_172.16.60.111)[21188]:    2018/12/25_15:41:48 INFO: Adding inet address 172.16.60.111/24 with broadcast address 172.16.60.255 to device eth0
IPaddr(IPaddr_172.16.60.111)[21188]:    2018/12/25_15:41:48 INFO: Bringing device eth0 up
IPaddr(IPaddr_172.16.60.111)[21188]:    2018/12/25_15:41:48 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-172.16.60.111 eth0 172.16.60.111 auto not_used not_used
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21174]: 2018/12/25_15:41:48 INFO:  Success
ResourceManager(default)[21064]:        2018/12/25_15:41:48 info: Running /etc/init.d/ipvsadm  start
ResourceManager(default)[21064]:        2018/12/25_15:41:48 info: Running /etc/init.d/ldirectord  start

2) Then start heartbeat on the HA standby node
[root@ha-slave ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.

[root@ha-slave ha.d]# ps -ef|grep heartbeat
root     21703     1  0 15:41 ?        00:00:00 heartbeat: master control process
root     21708 21703  0 15:41 ?        00:00:00 heartbeat: FIFO reader        
root     21709 21703  0 15:41 ?        00:00:00 heartbeat: write: bcast eth0  
root     21710 21703  0 15:41 ?        00:00:00 heartbeat: read: bcast eth0   
root     21711 21703  0 15:41 ?        00:00:00 heartbeat: write: ucast eth0  
root     21712 21703  0 15:41 ?        00:00:00 heartbeat: read: ucast eth0   
root     21713 21703  0 15:41 ?        00:00:00 heartbeat: write: ping_group group1
root     21714 21703  0 15:41 ?        00:00:00 heartbeat: read: ping_group group1
root     21734 21703  0 15:41 ?        00:00:00 /usr/lib64/heartbeat/ipfail
root     21769 19163  0 15:42 pts/0    00:00:00 grep heartbeat

[root@ha-slave ha.d]# lsof -i:694
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
heartbeat 21709 root    7u  IPv4 105186      0t0  UDP *:ha-cluster 
heartbeat 21710 root    7u  IPv4 105186      0t0  UDP *:ha-cluster 
heartbeat 21711 root    7u  IPv4 105192      0t0  UDP *:ha-cluster 
heartbeat 21712 root    7u  IPv4 105192      0t0  UDP *:ha-cluster 

ldirectord was NOT started by heartbeat here (the standby node currently provides no LVS forwarding, i.e. it has not taken over the VIP resource):
[root@ha-slave ha.d]# /etc/init.d/ldirectord status
ldirectord is stopped for /etc/ha.d/ldirectord.cf

[root@ha-slave ha.d]# ps -ef|grep ldirectord       
root     21822 19163  0 15:55 pts/0    00:00:00 grep ldirectord

ipvsadm was not started by heartbeat either; no VIP is present and the table is empty (again because the standby is not providing LVS forwarding):
[root@ha-slave ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:05:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.207/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:feac:5b5/64 scope link 
       valid_lft forever preferred_lft forever
[root@ha-slave ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

Check the HA standby node's heartbeat log:
[root@ha-slave ha.d]# tail -f /var/log/ha-log   
Dec 25 15:41:37 ha-slave heartbeat: [21734]: info: Starting "/usr/lib64/heartbeat/ipfail" as uid 0  gid 0 (pid 21734)
Dec 25 15:41:38 ha-slave heartbeat: [21703]: info: Status update for node ha-master: status active
harc(default)[21737]:   2018/12/25_15:41:38 info: Running /etc/ha.d//rc.d/status status
Dec 25 15:41:42 ha-slave ipfail: [21734]: info: Status update: Node ha-master now has status active
Dec 25 15:41:44 ha-slave ipfail: [21734]: info: Asking other side for ping node count.
Dec 25 15:41:47 ha-slave ipfail: [21734]: info: No giveup timer to abort.
Dec 25 15:41:48 ha-slave heartbeat: [21703]: info: remote resource transition completed.
Dec 25 15:41:48 ha-slave heartbeat: [21703]: info: remote resource transition completed.
Dec 25 15:41:48 ha-slave heartbeat: [21703]: info: Initial resource acquisition complete (T_RESOURCES(us))
Dec 25 15:41:48 ha-slave heartbeat: [21754]: info: No local resources [/usr/share/heartbeat/Resourc

Now access the service via the VIP:
Visiting http://172.16.60.111/ returns "this is test page of realserver01:172.16.60.204" or "this is test page of realserver02:172.16.60.205"
Visiting http://172.16.60.111/lvs_testpage.html returns "Test HA Page"
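A simple way to exercise this from a client machine (a sketch; assumes curl is installed). Because of persistent=600, repeated requests from the same client IP should keep landing on the same real server until the persistence window expires:

```shell
# Fire several requests at the VIP; with persistence active, all five
# responses should come from the same real server's index.html.
for i in 1 2 3 4 5; do
    curl -s http://172.16.60.111/
done
```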

Tip:
Below are two commonly used ipvsadm commands for viewing LVS state.
======================================
View LVS connection statistics:
[root@ha-master ~]# ipvsadm  -l  --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  172.16.60.111:http                0        0        0        0        0
  -> 172.16.60.204:http                0        0        0        0        0
  -> 172.16.60.205:http                0        0        0        0        0

Legend:
Conns    (connections scheduled)  total connections forwarded
InPkts   (incoming packets)       packets in
OutPkts  (outgoing packets)       packets out
InBytes  (incoming bytes)         traffic in (bytes)
OutBytes (outgoing bytes)         traffic out (bytes)

======================================
View LVS rates:
[root@ha-master ~]# ipvsadm   -l  --rate
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port                 CPS    InPPS   OutPPS    InBPS   OutBPS
  -> RemoteAddress:Port
TCP  172.16.60.111:http                0        0        0        0        0
  -> 172.16.60.204:http                0        0        0        0        0
  -> 172.16.60.205:http                0        0        0        0        0

Legend:
CPS      (current connection rate)   connections per second
InPPS    (current in packet rate)    packets in per second
OutPPS   (current out packet rate)   packets out per second
InBPS    (current in byte rate)      traffic in per second (bytes)
OutBPS   (current out byte rate)     traffic out per second (bytes)

======================================
The two HA nodes above each have only one NIC (eth0). With a second NIC, say eth1, that interface can carry the heartbeat over a direct crossover link,
i.e. the HA master and standby are connected back-to-back on eth1 with a crossover cable.
For example:
HA master node    172.16.60.206 (eth0), 10.0.11.21 (eth1, direct heartbeat crossover link)
HA standby node   172.16.60.207 (eth0), 10.0.11.22 (eth1, direct heartbeat crossover link)

Compared with the eth0-only setup, only the following extra line is needed in ha.cf (all other configuration stays the same!):
ping_group group1 10.0.11.21 10.0.11.22       // the added line
ping_group group1 172.16.60.204 172.16.60.205

7) Failover tests

1) First stop heartbeat on the HA master
[root@ha-master ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.

[root@ha-master ~]# ps -ef|grep heartbeat
root     21625 17616  0 16:03 pts/0    00:00:00 grep heartbeat

After heartbeat is stopped, the master's ipvsadm and ldirectord are stopped with it and the VIP resource is moved away, i.e. the master node no longer provides LVS forwarding:
[root@ha-master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link 
       valid_lft forever preferred_lft forever

[root@ha-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

[root@ha-master ~]# ps -ef|grep ldirectord
root     21630 17616  0 16:03 pts/0    00:00:00 grep ldirectord

The HA master's heartbeat log at this point:
[root@ha-master ~]# tail -1000 /var/log/ha-log
........
Dec 25 16:02:38 ha-master heartbeat: [20886]: info: Heartbeat shutdown in progress. (20886)
Dec 25 16:02:38 ha-master heartbeat: [21454]: info: Giving up all HA resources.
ResourceManager(default)[21467]:        2018/12/25_16:02:38 info: Releasing resource group: ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
ResourceManager(default)[21467]:        2018/12/25_16:02:38 info: Running /etc/init.d/ldirectord  stop
ResourceManager(default)[21467]:        2018/12/25_16:02:38 info: Running /etc/init.d/ipvsadm  stop
ResourceManager(default)[21467]:        2018/12/25_16:02:38 info: Running /etc/ha.d/resource.d/IPaddr 172.16.60.111 stop
IPaddr(IPaddr_172.16.60.111)[21563]:    2018/12/25_16:02:38 INFO: IP status = ok, IP_CIP=
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21549]: 2018/12/25_16:02:38 INFO:  Success

Now check the HA standby node: the VIP has switched over to it, so the standby now provides the LVS forwarding, and its ipvsadm and ldirectord were started automatically:
[root@ha-slave ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:05:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.207/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.111/24 brd 172.16.60.255 scope global secondary eth0
    inet6 fe80::250:56ff:feac:5b5/64 scope link 
       valid_lft forever preferred_lft forever

[root@ha-slave ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.204:80             Route   1      0          0         
  -> 172.16.60.205:80             Route   1      0          0  

[root@ha-slave ha.d]# ps -ef|grep ldirectord
root     22203     1  0 16:02 ?        00:00:01 /usr/bin/perl -w /usr/sbin/ldirectord start
root     22261 19163  0 16:07 pts/0    00:00:00 grep ldirectord

The HA standby's heartbeat log at this point:
[root@ha-slave ha.d]# tail -1000 /var/log/ha-log 
...........
harc(default)[21887]:   2018/12/25_16:02:39 info: Running /etc/ha.d//rc.d/status status
mach_down(default)[21904]:      2018/12/25_16:02:39 info: Taking over resource group IPaddr::172.16.60.111
ResourceManager(default)[21931]:        2018/12/25_16:02:39 info: Acquiring resource group: ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21959]: 2018/12/25_16:02:39 INFO:  Resource is stopped
ResourceManager(default)[21931]:        2018/12/25_16:02:39 info: Running /etc/ha.d/resource.d/IPaddr 172.16.60.111 start
IPaddr(IPaddr_172.16.60.111)[22055]:    2018/12/25_16:02:39 INFO: Adding inet address 172.16.60.111/24 with broadcast address 172.16.60.255 to device eth0
IPaddr(IPaddr_172.16.60.111)[22055]:    2018/12/25_16:02:39 INFO: Bringing device eth0 up
IPaddr(IPaddr_172.16.60.111)[22055]:    2018/12/25_16:02:39 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-172.16.60.111 eth0 172.16.60.111 auto not_used not_used
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[22041]: 2018/12/25_16:02:39 INFO:  Success
ResourceManager(default)[21931]:        2018/12/25_16:02:39 info: Running /etc/init.d/ipvsadm  start
ResourceManager(default)[21931]:        2018/12/25_16:02:39 info: Running /etc/init.d/ldirectord  start
mach_down(default)[21904]:      2018/12/25_16:02:39 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
mach_down(default)[21904]:      2018/12/25_16:02:39 info: mach_down takeover complete for node ha-master.

2) Then restart the heartbeat service on the HA master node
Because ha.cf is configured with "auto_failback on", the master node automatically reclaims the VIP resource once it recovers, replacing the backup node and taking over LVS forwarding again.
After heartbeat on the master recovers, its ipvsadm and ldirectord services are started again as well.
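For reference, the failback behavior and the resource group seen in the logs above correspond to configuration along these lines (an excerpt reconstructed from the log output; adapt it to your own environment before use):

```
# /etc/ha.d/ha.cf (excerpt)
auto_failback on    # a recovered master reclaims its resource group from the backup

# /etc/ha.d/haresources (matches the group "ha-master IPaddr::172.16.60.111 ipvsadm ldirectord"
# that ResourceManager acquires/releases in the logs)
ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
```

With "auto_failback off" instead, the backup node would keep the VIP after the master recovers, avoiding a second failover at the cost of leaving resources on the standby.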

[root@ha-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.

[root@ha-master ~]# ps -ef|grep heartbeat
root     21778     1  0 16:12 ?        00:00:00 heartbeat: master control process
root     21783 21778  0 16:12 ?        00:00:00 heartbeat: FIFO reader        
root     21784 21778  0 16:12 ?        00:00:00 heartbeat: write: bcast eth0  
root     21785 21778  0 16:12 ?        00:00:00 heartbeat: read: bcast eth0   
root     21786 21778  0 16:12 ?        00:00:00 heartbeat: write: ucast eth0  
root     21787 21778  0 16:12 ?        00:00:00 heartbeat: read: ucast eth0   
root     21788 21778  0 16:12 ?        00:00:00 heartbeat: write: ping_group group1
root     21789 21778  0 16:12 ?        00:00:00 heartbeat: read: ping_group group1
root     21809 21778  0 16:12 ?        00:00:00 /usr/lib64/heartbeat/ipfail
root     21812 21778  0 16:12 ?        00:00:00 heartbeat: master control process
root     21825 21812  0 16:12 ?        00:00:00 /bin/sh /usr/share/heartbeat/ResourceManager takegroup IPaddr::172.16.60.111 ipvsadm ldirectord
root     21949 21935  0 16:12 ?        00:00:00 /bin/sh /usr/lib/ocf/resource.d//heartbeat/IPaddr start
root     21956 17616  0 16:12 pts/0    00:00:00 grep heartbeat

[root@ha-master ~]# lsof -i:694
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
heartbeat 21784 root    7u  IPv4  46306      0t0  UDP *:ha-cluster 
heartbeat 21785 root    7u  IPv4  46306      0t0  UDP *:ha-cluster 
heartbeat 21786 root    7u  IPv4  46312      0t0  UDP *:ha-cluster 
heartbeat 21787 root    7u  IPv4  46312      0t0  UDP *:ha-cluster 

[root@ha-master ~]# ps -ef|grep ldirectord     
root     22099     1  1 16:12 ?        00:00:00 /usr/bin/perl -w /usr/sbin/ldirectord start
root     22130 17616  0 16:12 pts/0    00:00:00 grep ldirectord

[root@ha-master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.111/24 brd 172.16.60.255 scope global secondary eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link 
       valid_lft forever preferred_lft forever

[root@ha-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.204:80             Route   1      0          0         
  -> 172.16.60.205:80             Route   1      1          0 

Check the heartbeat log on the HA master node at this point
[root@ha-master ~]# tail -1000 /var/log/ha-log 
........
ResourceManager(default)[21825]:        2018/12/25_16:12:12 info: Acquiring resource group: ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21853]: 2018/12/25_16:12:13 INFO:  Resource is stopped
ResourceManager(default)[21825]:        2018/12/25_16:12:13 info: Running /etc/ha.d/resource.d/IPaddr 172.16.60.111 start
IPaddr(IPaddr_172.16.60.111)[21949]:    2018/12/25_16:12:13 INFO: Adding inet address 172.16.60.111/24 with broadcast address 172.16.60.255 to device eth0
IPaddr(IPaddr_172.16.60.111)[21949]:    2018/12/25_16:12:13 INFO: Bringing device eth0 up
IPaddr(IPaddr_172.16.60.111)[21949]:    2018/12/25_16:12:13 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-172.16.60.111 eth0 172.16.60.111 auto not_used not_used
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[21935]: 2018/12/25_16:12:13 INFO:  Success
ResourceManager(default)[21825]:        2018/12/25_16:12:13 info: Running /etc/init.d/ipvsadm  start
ResourceManager(default)[21825]:        2018/12/25_16:12:13 info: Running /etc/init.d/ldirectord  start

Now look at the HA backup node again: the VIP resource was reclaimed by the master as soon as its heartbeat recovered. The backup node no longer holds the VIP and no longer provides LVS forwarding, and its ipvsadm and ldirectord services have been stopped.
[root@ha-slave ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:05:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.207/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:feac:5b5/64 scope link 
       valid_lft forever preferred_lft forever

[root@ha-slave ha.d]# ps -ef|grep ldirectord     
root     22516 19163  0 16:14 pts/0    00:00:00 grep ldirectord

[root@ha-slave ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

Check the heartbeat log on the HA backup node at this point
[root@ha-slave ha.d]# tail -1000 /var/log/ha-log 
.......
ResourceManager(default)[22342]:        2018/12/25_16:12:12 info: Releasing resource group: ha-master IPaddr::172.16.60.111 ipvsadm ldirectord
ResourceManager(default)[22342]:        2018/12/25_16:12:12 info: Running /etc/init.d/ldirectord  stop
ResourceManager(default)[22342]:        2018/12/25_16:12:12 info: Running /etc/init.d/ipvsadm  stop
ResourceManager(default)[22342]:        2018/12/25_16:12:12 info: Running /etc/ha.d/resource.d/IPaddr 172.16.60.111 stop
IPaddr(IPaddr_172.16.60.111)[22438]:    2018/12/25_16:12:12 INFO: IP status = ok, IP_CIP=
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.60.111)[22424]: 2018/12/25_16:12:12 INFO:  Success
Dec 25 16:12:12 ha-slave heartbeat: [22329]: info: foreign HA resource release completed (standby).

Throughout the master/backup failover above, client access to http://172.16.60.111/ is unaffected. The failover is transparent to clients, which means the LVS proxy layer is highly available.

3) Stop nginx on the two realserver nodes in turn, then observe how LVS forwarding changes
[root@ha-master ~]# ipvsadm -Ln                
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.204:80             Route   1      0          0         
  -> 172.16.60.205:80             Route   1      0          2   

First stop the nginx service on rs-204
[root@rs-204 ~]# /etc/init.d/nginx stop 
Stopping nginx:                                            [  OK  ]
[root@rs-204 ~]# lsof -i:80
[root@rs-204 ~]#

Leave nginx running on rs-205
[root@rs-205 ~]# ps -ef|grep nginx
root      5211     1  0 15:45 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx     5212  5211  0 15:45 ?        00:00:00 nginx: worker process                   
root      5313  4852  0 16:19 pts/0    00:00:00 grep nginx

Check the LVS forwarding table
[root@ha-master ~]# ipvsadm -Ln                
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.205:80             Route   1      0          2         

Accessing http://172.16.60.111 now returns "this is test page of realserver02:172.16.60.205"

Next, start nginx on rs-204 and stop it on rs-205
[root@rs-204 ~]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@rs-204 ~]# lsof -i:80             
COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   4883  root    6u  IPv4 143621      0t0  TCP *:http (LISTEN)
nginx   4884 nginx    6u  IPv4 143621      0t0  TCP *:http (LISTEN)

Stop nginx on rs-205
[root@rs-205 ~]# /etc/init.d/nginx stop
Stopping nginx:                                            [  OK  ]
[root@rs-205 ~]# lsof -i:80
[root@rs-205 ~]# 

Check the LVS forwarding table
[root@ha-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 172.16.60.204:80             Route   1      0          0  

Accessing http://172.16.60.111 now returns "this is test page of realserver01:172.16.60.204"

Then stop nginx on both rs-204 and rs-205
[root@rs-204 ~]# /etc/init.d/nginx stop 
Stopping nginx:                                            [  OK  ]
[root@rs-205 ~]# /etc/init.d/nginx stop 
Stopping nginx:                                            [  OK  ]

Check the LVS forwarding table
[root@ha-master ~]# ipvsadm -Ln                
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.60.111:80 wlc persistent 600
  -> 127.0.0.1:80                 Local   1      0          0   

Accessing http://172.16.60.111 now returns the fallback page: "Sorry, the access is in maintenance for the time being. Please wait a moment."

As shown above, when a realserver fails it is removed from the LVS cluster, and once it recovers it is automatically added back. This is the effect of the "quiescent=no" setting in ldirectord.cf: a failed realserver is deleted from the forwarding table outright, instead of being kept with its weight set to 0. Together with the fallback server that answers when all realservers are down, this makes the realserver layer highly available as well.
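The ldirectord.cf used here would look something like the excerpt below. The virtual/real addresses, forwarding method, scheduler, and persistence values match the ipvsadm output shown above; the health-check parameters are illustrative assumptions and should be adapted to your setup:

```
# /etc/ha.d/ldirectord.cf (excerpt; check parameters are illustrative)
checktimeout=3          # seconds before a health check is considered failed
checkinterval=5         # seconds between health checks
autoreload=yes          # re-read this file automatically when it changes
quiescent=no            # remove dead realservers instead of setting weight=0

virtual=172.16.60.111:80
        real=172.16.60.204:80 gate     # "gate" = LVS-DR (matches "Route" in ipvsadm -Ln)
        real=172.16.60.205:80 gate
        fallback=127.0.0.1:80 gate     # local maintenance page when all realservers are down
        service=http
        scheduler=wlc                  # matches "wlc persistent 600" in ipvsadm -Ln
        persistent=600
        protocol=tcp
        checktype=negotiate
```

The fallback entry explains the "127.0.0.1:80 Local" line that appeared in ipvsadm -Ln after both realservers were stopped.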
