MFS + Keepalived Dual-Node High-Availability Hot-Standby Setup: Operation Notes

Posted by 散盡浮華 on 2016-06-08

 

Given MFS's single point of failure and its reliance on manual backups, we combine it with Keepalived to improve availability. Building on the environment from the earlier post "MooseFS (MFS) distributed storage deployment notes on CentOS", only the following changes are needed:

1) Use master-server as Keepalived_MASTER (running mfsmaster and mfscgiserv)
2) Use the metalogger host as Keepalived_BACKUP (also running mfsmaster and mfscgiserv)
3) Change the MASTER_HOST parameter configured on the ChunkServer machines to the VIP address
4) Change the master IP that clients mount to the VIP address
After making these adjustments, the hosts bindings on Keepalived_MASTER and Keepalived_BACKUP also need to be updated accordingly.
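The hosts bindings mentioned above can be sketched as follows. The hostnames here (mfsmaster, Keepalived_MASTER, Keepalived_BACKUP) are illustrative assumptions; the key point is that the name the MFS configs use for the master now resolves to the VIP:

```shell
# Sketch only: written to a scratch file here; on the real nodes, append to /etc/hosts.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
182.48.115.239   mfsmaster          # VIP, floats between the two nodes
182.48.115.233   Keepalived_MASTER  # physical address of the master node
182.48.115.235   Keepalived_BACKUP  # physical address of the backup node
EOF
grep mfsmaster "$HOSTS"              # shows the VIP binding
```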

Implementation principles and approach

1) Since version 1.6.5, an mfsmaster failure can be recovered from the changelog_ml.*.mfs log files produced by mfsmetalogger plus the metadata.mfs.back file, using the mfsmetarestore command
2) Periodically fetch the metadata.mfs.back file from mfsmaster for use in master recovery
3) When Keepalived on the MASTER detects that the mfsmaster process has died, its monitoring script first tries to restart mfsmaster automatically; if the restart fails, it force-kills the keepalived and mfscgiserv processes, which moves the VIP over to the BACKUP
4) When the Keepalived MASTER recovers, it preempts the VIP resource back from the BACKUP and resumes serving
5) The entire failover completes within roughly 2-5 seconds, depending on the check interval.
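For reference, the manual recovery in point 1) looks roughly like this. This is a sketch, assuming the metalogger's changelog and backup files have already been copied into the master's data directory; paths follow the /usr/local/mfs layout used later in this post:

```shell
# On the recovering master: rebuild metadata.mfs from the last backup plus changelogs
cd /usr/local/mfs/var/mfs
/usr/local/mfs/sbin/mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog_ml.*.mfs
# or let mfsmetarestore locate the files in the data path automatically:
# /usr/local/mfs/sbin/mfsmetarestore -a
```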

The architecture topology is shown below:

1) Operations on the Keepalived_MASTER (mfs master) machine

The installation and configuration of the MFS master server was already recorded in detail in the other post and will not be repeated here. Below is the Keepalived installation and configuration:
-----------------------------------------------------------------------------------------------------------------------
Install Keepalived
[root@Keepalived_MASTER ~]# yum install -y openssl-devel popt-devel
[root@Keepalived_MASTER ~]# cd /usr/local/src/
[root@Keepalived_MASTER src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@Keepalived_MASTER src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@Keepalived_MASTER src]# cd keepalived-1.3.5
[root@Keepalived_MASTER keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@Keepalived_MASTER keepalived-1.3.5]# make && make install
     
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@Keepalived_MASTER keepalived-1.3.5]# mkdir /etc/keepalived/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@Keepalived_MASTER keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
      
[root@Keepalived_MASTER keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      # add execute permission
[root@Keepalived_MASTER keepalived-1.3.5]# chkconfig keepalived on                   # enable at boot
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived start                  # start
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived stop                   # stop
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived restart                # restart
    
Configure Keepalived
[root@Keepalived_MASTER ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Keepalived_MASTER ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
  notification_email {
    root@localhost
    }
  
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MFS_HA_MASTER
}
  
vrrp_script chk_mfs {                           
  script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
  interval 2
  weight 2
}
  
vrrp_instance VI_1 {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
}
  track_script {
    chk_mfs
}
virtual_ipaddress {
    182.48.115.239
}
notify_master "/etc/keepalived/clean_arp.sh 182.48.115.239"
}
    
    
Next, write the monitoring script
[root@Keepalived_MASTER ~]# vim /usr/local/mfs/keepalived_check_mfsmaster.sh
#!/bin/bash
A=$(ps -C mfsmaster --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    # mfsmaster is down; try to restart it first
    /etc/init.d/mfsmaster start
    sleep 3
    if [ "$(ps -C mfsmaster --no-header | wc -l)" -eq 0 ]; then
        # restart failed: kill mfscgiserv and keepalived so the VIP fails over
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi

[root@Keepalived_MASTER ~]# chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh

Script to refresh the virtual server (VIP) address's ARP record at the gateway
[root@Keepalived_MASTER ~]# vim /etc/keepalived/clean_arp.sh 
#!/bin/sh
VIP=$1
GATEWAY=182.48.115.254                       # gateway address
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY >/dev/null 2>&1

[root@Keepalived_MASTER ~]# chmod 755 /etc/keepalived/clean_arp.sh
    
Start keepalived (make sure both the mfs master service and the Keepalived service are running on the Keepalived_MASTER machine)
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ps -ef|grep keepalived
root     28718     1  0 13:09 ?        00:00:00 keepalived -D
root     28720 28718  0 13:09 ?        00:00:00 keepalived -D
root     28721 28718  0 13:09 ?        00:00:00 keepalived -D
root     28763 27466  0 13:09 pts/0    00:00:00 grep keepalived
    
Check the VIP
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link
       valid_lft forever preferred_lft forever

2) Operations on the Keepalived_BACKUP (mfs master) machine

In the other post this machine served as the metalogger (metadata log) server; in this HA setup, it is repurposed as Keepalived_BACKUP.
That is, drop the metalogger deployment and deploy mfs master directly (see the other post for the procedure). The Keepalived configuration on Keepalived_BACKUP follows:
 
Install Keepalived
[root@Keepalived_BACKUP ~]# yum install -y openssl-devel popt-devel
[root@Keepalived_BACKUP ~]# cd /usr/local/src/
[root@Keepalived_BACKUP src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@Keepalived_BACKUP src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@Keepalived_BACKUP src]# cd keepalived-1.3.5
[root@Keepalived_BACKUP keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@Keepalived_BACKUP keepalived-1.3.5]# make && make install
      
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@Keepalived_BACKUP keepalived-1.3.5]# mkdir /etc/keepalived/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@Keepalived_BACKUP keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
       
[root@Keepalived_BACKUP keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      # add execute permission
[root@Keepalived_BACKUP keepalived-1.3.5]# chkconfig keepalived on                   # enable at boot
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived start                  # start
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived stop                   # stop
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived restart                # restart
     
Configure Keepalived
[root@Keepalived_BACKUP ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Keepalived_BACKUP ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
  notification_email {
    root@localhost
    }
  
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MFS_HA_BACKUP
}
  
vrrp_script chk_mfs {                           
  script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
  interval 2
  weight 2
}
  
vrrp_instance VI_1 {
  state BACKUP
  interface eth0
  virtual_router_id 51
  priority 99
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
}
  track_script {
    chk_mfs
}
virtual_ipaddress {
    182.48.115.239
}
notify_master "/etc/keepalived/clean_arp.sh 182.48.115.239"
}
 
 
Next, write the monitoring script (note: the filename must match the script path referenced in keepalived.conf above)
[root@Keepalived_BACKUP ~]# vim /usr/local/mfs/keepalived_check_mfsmaster.sh
#!/bin/bash
A=$(ps -C mfsmaster --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    # mfsmaster is down; try to restart it first
    /etc/init.d/mfsmaster start
    sleep 3
    if [ "$(ps -C mfsmaster --no-header | wc -l)" -eq 0 ]; then
        # restart failed: kill mfscgiserv and keepalived so the VIP fails over
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi
 
[root@Keepalived_BACKUP ~]# chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh

Script to refresh the virtual server (VIP) address's ARP record at the gateway (this must be done on both the Keepalived_MASTER and Keepalived_BACKUP machines)
[root@Keepalived_BACKUP ~]# vim /etc/keepalived/clean_arp.sh 
#!/bin/sh
VIP=$1
GATEWAY=182.48.115.254                       # gateway address
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY >/dev/null 2>&1

[root@Keepalived_BACKUP ~]# chmod 755 /etc/keepalived/clean_arp.sh
 
Start keepalived
[root@Keepalived_BACKUP ~]# /etc/init.d/keepalived start
Starting keepalived:
[root@Keepalived_BACKUP ~]# ps -ef|grep keepalived
root     17565     1  0 11:06 ?        00:00:00 keepalived -D
root     17567 17565  0 11:06 ?        00:00:00 keepalived -D
root     17568 17565  0 11:06 ?        00:00:00 keepalived -D
root     17778 17718  0 13:47 pts/1    00:00:00 grep keepalived
 
Make sure both the mfs master service and the keepalived service are running on the Keepalived_BACKUP machine!

3) ChunkServer configuration

Simply set the MASTER_HOST parameter in the mfschunkserver.cfg file to 182.48.115.239, i.e. the VIP address.
No other settings need to change. Then restart the mfschunkserver service.
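As a sketch, the MASTER_HOST change can be scripted with sed. It is demonstrated here on a throwaway copy of the config; on a real chunkserver, point the path at your actual mfschunkserver.cfg (e.g. /usr/local/mfs/etc/mfschunkserver.cfg) and restart mfschunkserver afterwards:

```shell
# Demo on a scratch file standing in for mfschunkserver.cfg
CFG=$(mktemp)
echo "MASTER_HOST = 182.48.115.233" > "$CFG"            # old value: physical master IP
sed -i 's/^MASTER_HOST *=.*/MASTER_HOST = 182.48.115.239/' "$CFG"
grep '^MASTER_HOST' "$CFG"                              # prints: MASTER_HOST = 182.48.115.239
# On a real node: /etc/init.d/mfschunkserver restart
```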

4) Client configuration

Simply change the metadata server IP in the mount commands to the VIP, 182.48.115.239.

[root@clinet-server ~]# mkdir /mnt/mfs                
[root@clinet-server ~]# mkdir /mnt/mfsmeta 

[root@clinet-server ~]# /usr/local/mfs/bin/mfsmount /mnt/mfs -H 182.48.115.239
mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root

[root@clinet-server ~]# /usr/local/mfs/bin/mfsmount -m /mnt/mfsmeta -H 182.48.115.239
mfsmaster accepted connection with parameters: read-write,restricted_ip

[root@clinet-server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      8.3G  3.8G  4.1G  49% /
tmpfs                 499M  228K  498M   1% /dev/shm
/dev/vda1             477M   35M  418M   8% /boot
/dev/sr0              3.7G  3.7G     0 100% /media/CentOS_6.8_Final
182.48.115.239:9421   107G   42G   66G  39% /mnt/mfs

[root@clinet-server ~]# mount 
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/sr0 on /media/CentOS_6.8_Final type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500)
182.48.115.239:9421 on /mnt/mfs type fuse.mfs (rw,nosuid,nodev,allow_other)
182.48.115.239:9421 on /mnt/mfsmeta type fuse.mfsmeta (rw,nosuid,nodev,allow_other)

Verify that reads and writes work after the client mounts the MFS filesystem
[root@clinet-server ~]# cd /mnt/mfs
[root@clinet-server mfs]# echo "12312313" > test.txt
[root@clinet-server mfs]# cat test.txt
12312313

[root@clinet-server mfs]# rm -f test.txt 
[root@clinet-server mfs]# cd ../mfsmeta/trash/
[root@clinet-server trash]# find . -name "*test*"
./003/00000003|test.txt
[root@clinet-server trash]# cd ./003/
[root@clinet-server 003]# ls
00000003|test.txt  undel
[root@clinet-server 003]# mv 00000003\|test.txt undel/
[root@clinet-server 003]# ls /mnt/mfs
test.txt
[root@clinet-server 003]# cat /mnt/mfs/test.txt 
12312313

The above confirms that the mounted MFS share works correctly.

5) iptables firewall settings on Keepalived_MASTER and Keepalived_BACKUP

In this experiment, the iptables firewalls on Keepalived_MASTER and Keepalived_BACKUP were turned off.
If the iptables firewall is enabled, the following needs to be configured on both machines:

Use the "ss -l" command to see which ports the machine is listening on
[root@Keepalived_MASTER ~]# ss -l
State       Recv-Q Send-Q                                     Local Address:Port                                         Peer Address:Port   
LISTEN      0      100                                                    *:9419                                                    *:*       
LISTEN      0      100                                                    *:9420                                                    *:*       
LISTEN      0      100                                                    *:9421                                                    *:*       
LISTEN      0      50                                                     *:9425                                                    *:*       
LISTEN      0      128                                                   :::ssh                                                    :::*       
LISTEN      0      128                                                    *:ssh                                                     *:*       
LISTEN      0      100                                                  ::1:smtp                                                   :::*       
LISTEN      0      100                                            127.0.0.1:smtp                                                    *:*    

[root@Keepalived_MASTER ~]# vim /etc/sysconfig/iptables
........
-A INPUT -s 182.48.115.0/24 -d 224.0.0.18 -j ACCEPT       # allow multicast traffic; without these two rules, the VIP will not move back from Keepalived_BACKUP when Keepalived_MASTER recovers
-A INPUT -s 182.48.115.0/24 -p vrrp -j ACCEPT             # allow VRRP (Virtual Router Redundancy Protocol) traffic
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9419 -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9420 -j ACCEPT  
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9421 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9425 -j ACCEPT 

[root@Keepalived_MASTER ~]# /etc/init.d/iptables start

6) Data synchronization script after failover

The configuration above ensures that when the Keepalived_MASTER machine fails (i.e. its keepalived service stops), the VIP resource moves to Keepalived_BACKUP;
when Keepalived_MASTER recovers (i.e. keepalived starts again), it preempts the VIP resource back!

But that only moves the VIP. How is the MFS filesystem's metadata kept in sync?
The data sync script below is placed on both machines (Keepalived_MASTER and Keepalived_BACKUP must first set up mutual passwordless SSH trust):
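Setting up that SSH trust can be sketched as follows. The key is generated into a scratch directory purely for illustration; on the real machines, use the default ~/.ssh/id_rsa, and the peer address shown (182.48.115.235) is the other node:

```shell
# Generate a passphrase-less key pair (scratch directory for demo purposes)
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"                                   # id_rsa and id_rsa.pub
# Push the public key to the peer so rsync-over-ssh needs no password:
# ssh-copy-id -i "$KEYDIR/id_rsa.pub" root@182.48.115.235
```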

On the Keepalived_MASTER machine
[root@Keepalived_MASTER ~]# vim /usr/local/mfs/MFS_DATA_Sync.sh 
#!/bin/bash
# If the VIP is bound here, this node has just become the MFS master:
# pull the latest metadata from the peer and rebuild it before serving.
A=$(ip addr | grep 182.48.115.239 | awk '{print $2}' | cut -d"/" -f1)
if [ "$A" = "182.48.115.239" ]; then
   /etc/init.d/mfsmaster stop
   /bin/rm -f /usr/local/mfs/var/mfs/*
   /usr/bin/rsync -e "ssh -p22" -avpgolr 182.48.115.235:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
   /usr/local/mfs/sbin/mfsmetarestore -a        # rebuild metadata.mfs from the synced files
   /etc/init.d/mfsmaster start
   sleep 3
   echo "this server has become the master of MFS"
else
   echo "this server is still MFS's slave"
fi

On the Keepalived_BACKUP machine
[root@Keepalived_BACKUP ~]# vim /usr/local/mfs/MFS_DATA_Sync.sh 
#!/bin/bash
# If the VIP is bound here, this node has just become the MFS master:
# pull the latest metadata from the peer and rebuild it before serving.
A=$(ip addr | grep 182.48.115.239 | awk '{print $2}' | cut -d"/" -f1)
if [ "$A" = "182.48.115.239" ]; then
   /etc/init.d/mfsmaster stop
   /bin/rm -f /usr/local/mfs/var/mfs/*
   /usr/bin/rsync -e "ssh -p22" -avpgolr 182.48.115.233:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
   /usr/local/mfs/sbin/mfsmetarestore -a        # rebuild metadata.mfs from the synced files
   /etc/init.d/mfsmaster start
   sleep 3
   echo "this server has become the master of MFS"
else
   echo "this server is still MFS's slave"
fi

That is, when the VIP resource moves to this node, running this sync script pulls the data over from the peer.
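The VIP check at the top of MFS_DATA_Sync.sh can be tried in isolation by feeding it a sample line of `ip addr` output (hardcoded here so it works even on a machine where the VIP is not actually bound):

```shell
# Sample line as `ip addr` prints it when the VIP is bound to eth0
SAMPLE="    inet 182.48.115.239/32 scope global eth0"
# Extract the bare address: field 2, then strip the /prefix
A=$(echo "$SAMPLE" | grep 182.48.115.239 | awk '{print $2}' | cut -d"/" -f1)
echo "$A"    # prints: 182.48.115.239
if [ "$A" = "182.48.115.239" ]; then
    echo "VIP is bound here; safe to pull metadata from the peer"
fi
```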

7) Failover testing

1) Stop the mfsmaster service on Keepalived_MASTER
Because of the monitoring script defined in keepalived.conf, whenever the mfsmaster process is found to be missing it is started again automatically. Only if that restart fails are the keepalived and mfscgiserv processes force-killed.

[root@Keepalived_MASTER ~]# /etc/init.d/mfsmaster stop
sending SIGTERM to lock owner (pid:29266)
waiting for termination terminated

After mfsmaster is stopped, it restarts automatically
[root@Keepalived_MASTER ~]# ps -ef|grep mfs
root     26579     1  0 16:00 ?        00:00:00 /usr/bin/python /usr/local/mfs/sbin/mfscgiserv start
root     30389 30388  0 17:18 ?        00:00:00 /bin/bash /usr/local/mfs/keepalived_check_mfsmaster.sh
mfs      30395     1 71 17:18 ?        00:00:00 /etc/init.d/mfsmaster start

By default, the VIP resource sits on Keepalived_MASTER
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link 
       valid_lft forever preferred_lft forever

After mounting on the client (via the VIP address), check the data
[root@clinet-server ~]# cd /mnt/mfs
[root@clinet-server mfs]# ll
total 4
-rw-r--r-- 1 root root 9 May 24 17:11 grace
-rw-r--r-- 1 root root 9 May 24 17:11 grace1
-rw-r--r-- 1 root root 9 May 24 17:11 grace2
-rw-r--r-- 1 root root 9 May 24 17:11 grace3
-rw-r--r-- 1 root root 9 May 24 17:10 kevin
-rw-r--r-- 1 root root 9 May 24 17:10 kevin1
-rw-r--r-- 1 root root 9 May 24 17:10 kevin2
-rw-r--r-- 1 root root 9 May 24 17:10 kevin3

Now stop keepalived (at this point, a stopped mfsmaster will no longer restart automatically, because with keepalived down the monitoring script no longer runs)
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link 
       valid_lft forever preferred_lft forever

With keepalived stopped on Keepalived_MASTER, the VIP resource is no longer bound there.

The system log shows that the VIP has moved
[root@Keepalived_MASTER ~]# tail -1000 /var/log/messages
.......
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239


Then on Keepalived_BACKUP, the VIP has indeed arrived
[root@Keepalived_BACKUP ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:82:69:69 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.235/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe82:6969/64 scope link 
       valid_lft forever preferred_lft forever

The log there likewise shows the VIP being taken over
[root@Keepalived_BACKUP ~]# tail -1000 /var/log/messages
.......
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 99
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) removing protocol VIPs.

Mount again on the client and check the data
[root@clinet-server mfs]# ll
[root@clinet-server mfs]#

There is no data, so the sync script above needs to be run
[root@Keepalived_BACKUP ~]# sh -x /usr/local/mfs/MFS_DATA_Sync.sh 

Mount again on the client and check the data
[root@clinet-server mfs]# ll
total 4
-rw-r--r--. 1 root root 9 May 24 17:11 grace
-rw-r--r--. 1 root root 9 May 24 17:11 grace1
-rw-r--r--. 1 root root 9 May 24 17:11 grace2
-rw-r--r--. 1 root root 9 May 24 17:11 grace3
-rw-r--r--. 1 root root 9 May 24 17:10 kevin
-rw-r--r--. 1 root root 9 May 24 17:10 kevin1
-rw-r--r--. 1 root root 9 May 24 17:10 kevin2
-rw-r--r--. 1 root root 9 May 24 17:10 kevin3

The data has been synchronized over. Now update the data
[root@clinet-server mfs]# rm -f ./*
[root@clinet-server mfs]# echo "123123" > wangshibo
[root@clinet-server mfs]# echo "123123" > wangshibo1
[root@clinet-server mfs]# echo "123123" > wangshibo2
[root@clinet-server mfs]# echo "123123" > wangshibo3
[root@clinet-server mfs]# echo "123123" > wangshibo4
[root@clinet-server mfs]# ll
total 3
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo1
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo2
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo3
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo4


2) Restore the keepalived process on Keepalived_MASTER
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link 
       valid_lft forever preferred_lft forever

Once the keepalived process starts on Keepalived_MASTER, it preempts the VIP resource back. The transfer can be seen in /var/log/messages.

Checking Keepalived_BACKUP again, the VIP is gone
[root@Keepalived_BACKUP ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:82:69:69 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.235/27 brd 182.48.115.255 scope global eth0
    inet6 fe80::5054:ff:fe82:6969/64 scope link 
       valid_lft forever preferred_lft forever

After mounting on the client (via the VIP address), check the data
[root@clinet-server mfs]# ll
total 4
-rw-r--r--. 1 root root 9 May 24 17:11 grace
-rw-r--r--. 1 root root 9 May 24 17:11 grace1
-rw-r--r--. 1 root root 9 May 24 17:11 grace2
-rw-r--r--. 1 root root 9 May 24 17:11 grace3
-rw-r--r--. 1 root root 9 May 24 17:10 kevin
-rw-r--r--. 1 root root 9 May 24 17:10 kevin1
-rw-r--r--. 1 root root 9 May 24 17:10 kevin2
-rw-r--r--. 1 root root 9 May 24 17:10 kevin3

The data is still stale, so run the sync script on Keepalived_MASTER
[root@Keepalived_MASTER ~]# sh -x /usr/local/mfs/MFS_DATA_Sync.sh 

Mount again on the client and check the data; it is now synchronized
[root@xqsj_web3 mfs]# ll
total 3
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo1
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo2
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo3
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo4
