Requirements:
1. The two nodes must be able to communicate directly on the same network segment.
2. Each node's name must match the output of `uname -n`, and node names must resolve to the nodes' IP addresses; configure this in the local /etc/hosts.
3. Passwordless SSH trust between the nodes.
4. The nodes' clocks must be kept in sync.
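These prerequisites are easy to get wrong, so it can help to script a sanity check before going further. The sketch below is a hedged illustration: `check_hosts_entry` is a helper introduced here (not part of the article's setup), and the demo runs against a throwaway file; on a real node you would point it at /etc/hosts and the names from this article.

```shell
#!/bin/sh
# Sketch: verify that a name -> IP mapping exists in an /etc/hosts-style file.
# check_hosts_entry FILE NAME IP  -> exit 0 if FILE maps NAME to IP
check_hosts_entry() {
    file=$1; name=$2; ip=$3
    # match lines like "192.168.10.55 master1.local [aliases...]"
    awk -v ip="$ip" -v name="$name" '
        $1 == ip { for (i = 2; i <= NF; i++) if ($i == name) found = 1 }
        END { exit found ? 0 : 1 }' "$file"
}

# Demo against a throwaway file (on a real node, pass /etc/hosts instead):
cat > /tmp/hosts.demo <<'EOF'
192.168.10.55 master1.local
192.168.10.56 master2.local
EOF
check_hosts_entry /tmp/hosts.demo master1.local 192.168.10.55 && echo "hosts: ok"
```

The same shape extends naturally to the other checks (compare `uname -n` against the hosts file, try `ssh -o BatchMode=yes` to the peer, compare `date +%s` across nodes).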
Environment preparation:
Configuration of test1, 192.168.10.55
1. Configure the IP address
[root@test1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test1 ~]# uname -n
[root@test1 ~]# hostname master1.local    # takes effect immediately, not persistent
[root@test1 ~]# vim /etc/sysconfig/network    # persistent across reboots
3. Configure hostname resolution
[root@test1 ~]# vim /etc/hosts
Add:
192.168.10.55 master1.local
192.168.10.56 master2.local
3.2. Test hostname connectivity
[root@test1 ~]# ping master1.local
[root@test1 ~]# ping master2.local
4. Configure mutual SSH trust
[root@test1 ~]# ssh-keygen -t rsa -P ''
[root@test1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.56
5. Synchronize time with ntp
Add a crontab entry that runs ntpdate every 5 minutes to keep the servers' clocks in sync:
[root@test1 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Configuration of test2, 192.168.10.56
1. Configure the IP address
[root@test2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test2 ~]# uname -n
[root@test2 ~]# hostname test2.local    # takes effect immediately, not persistent
[root@test2 ~]# vim /etc/sysconfig/network    # persistent across reboots
3. Configure hostname resolution
[root@test2 ~]# vim /etc/hosts
Add:
192.168.10.55 test1.local test1
192.168.10.56 test2.local test2
3.2. Test hostname connectivity
[root@test2 ~]# ping test1.local
[root@test2 ~]# ping test1
4. Configure mutual SSH trust
[root@test2 ~]# ssh-keygen -t rsa -P ''
[root@test2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.55
5. Synchronize time with ntp
Add a crontab entry that runs ntpdate every 5 minutes to keep the servers' clocks in sync:
[root@test2 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Installing and configuring heartbeat
On CentOS a plain yum install fails, complaining that no package is available.
Workaround: install the EPEL repository first:
[root@test1 src]# wget http://mirrors.sohu.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
[root@test1 src]# rpm -ivh epel-release-6-8.noarch.rpm
6.1. Install heartbeat:
[root@test1 src]# yum install heartbeat
6.2. Copy the sample configuration files:
[root@test1 src]# cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
6.3. Configure the authentication file:
[root@test1 src]# dd if=/dev/random count=1 bs=512 | md5sum    # generate a random string
[root@test1 src]# vim /etc/ha.d/authkeys
auth 1
1 md5 d0f70c79eeca5293902aiamheartbeat
[root@test1 src]# chmod 600 /etc/ha.d/authkeys
Installing heartbeat on the test2 node is identical to test1 and is omitted here.
6.4. Parameters of the heartbeat main configuration file:
[root@test2 ~]# vim /etc/ha.d/ha.cf
#debugfile /var/log/ha-debug    # debug log
logfile    # log location
keepalive 2    # heartbeat interval; default 2 seconds, ms units also possible
deadtime 30    # how long without a heartbeat before the peer is declared dead
warntime 10    # how long without a heartbeat before a warning is issued
initdead 120    # how long the first node waits for the other nodes after booting
baud 19200    # baud rate of the serial line
auto_failback on    # whether resources fail back after the failed node recovers
ping 10.10.10.254    # ping node: a host to ping to tell node failure from network failure
ping_group group1 10.10.10.254 10.10.10.253    # ping node group: counts as reachable if any one host in the group answers
respawn hacluster /usr/lib/heartbeat/ipfail    # run this program as the given user and restart it if it exits
deadping 30    # how long a ping node may be unreachable before it is considered down
# serial serialportname ...    # serial devices
serial /dev/ttyS0    # Linux
serial /dev/cuaa0    # FreeBSD
serial /dev/cuad0    # FreeBSD 6.x
serial /dev/cua/a    # Solaris
# What interfaces to broadcast heartbeats over?
# When using Ethernet, choose between unicast, multicast, and broadcast heartbeats
bcast eth0    # broadcast
mcast eth0 225.0.0.1 694 1 0    # multicast
ucast eth0 192.168.1.2    # unicast; only used with exactly two nodes
# stonith hosts
stonith_host * baytech 10.0.0.3 mylogin mysecretpassword
stonith_host ken3 rps10 /dev/ttyS1 kathy 0
stonith_host kathy rps10 /dev/ttyS1 ken3 0
# Tell what machines are in the cluster
# one node line per cluster member; the hostname must match uname -n
node ken3
node kathy
Usually it is enough to define how heartbeats are sent and which nodes are in the cluster:
bcast eth0
node test1.local
node test2.local
6.5. Define resources in the haresources file:
[root@test2 ~]# vim /etc/ha.d/haresources
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
# Format: the first field is the hostname of the default primary node (must match uname -n),
# then the VIP, then the resources: which device to mount, where, and with which filesystem
# type. Arguments of a resource are separated by double colons.
#just.linux-ha.org 135.9.216.110 http
# As above; resource scripts are searched first in /etc/ha.d/resource.d/, then in /etc/rc.d/init.d/
# The IPaddr script configures the VIP:
master1.local IPaddr::192.168.10.2/24/eth0 mysqld
# Later, once drbd is in place, the line becomes:
master1.local IPaddr::192.168.10.2/24/eth0 drbddisk::data Filesystem::/dev/drbd1::/data::ext3 mysqld
6.6. Copy master1.local's configuration files to master2.local
[root@test1 ~]# scp -p /etc/ha.d/{ha.cf,haresources,authkeys} master2.local:/etc/ha.d/
7. Start heartbeat
[root@test1 ~]# service heartbeat start
[root@test1 ~]# ssh master2.local 'service heartbeat start'    # always start the test2 node's heartbeat from test1 via ssh
7.1. Check the heartbeat startup log
[root@test1 ~]# tail -f /var/log/messages
Feb 16 15:12:45 test-1 heartbeat: [16056]: info: Configuration validated. Starting heartbeat 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: heartbeat: version 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Heartbeat generation: 1455603909
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: ping heartbeat started.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Local status now set to: 'up'
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link 192.168.10.1:192.168.10.1 up.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Status update for node 192.168.10.1: status ping
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link test1.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Link test2.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Status update for node test2.local: status up
Feb 16 15:12:51 test-1 harc(default)[16068]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:12:52 test-1 heartbeat: [16057]: WARN: 1 lost packet(s) for [test2.local] [3:5]
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: No pkts missing from test2.local!
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Comm_now_up(): updating status to active
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Local status now set to: 'active'
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Status update for node test2.local: status active
Feb 16 15:12:52 test-1 harc(default)[16086]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: local resource transition completed.
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: Initial resource acquisition complete (T_RESOURCES(us))
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: remote resource transition completed.
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16138]: INFO: Resource is stopped
Feb 16 15:13:02 test-1 heartbeat: [16102]: info: Local Resource acquisition completed.
Feb 16 15:13:02 test-1 harc(default)[16219]: info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
Feb 16 15:13:02 test-1 ip-request-resp(default)[16219]: received ip-request-resp IPaddr::192.168.10.2/24/eth0 OK yes
Feb 16 15:13:02 test-1 ResourceManager(default)[16238]: info: Acquiring resource group: test1.local IPaddr::192.168.10.2/24/eth0 mysqld
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16264]: INFO: Resource is stopped
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.10.2/24/eth0 start
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Adding inet address 192.168.10.2/24 with broadcast address 192.168.10.255 to device eth0
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Bringing device eth0 up
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.10.2 eth0 192.168.10.2 auto not_used not_used
Feb 16 15:13:03 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16360]: INFO: Success
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/init.d/mysqld start
Feb 16 15:13:04 test-1 ntpd[1605]: Listen normally on 15 eth0 192.168.10.2 UDP 123
Notes:
1. "Link test1.local:eth0 up", "Link test2.local:eth0 up": the links to both nodes came up.
2. "Link 192.168.10.1:192.168.10.1 up": the ping node is reachable as well.
3. "info: Running /etc/init.d/mysqld start": mysql was started successfully.
4. "Listen normally on 15 eth0 192.168.10.2 UDP 123": the VIP is up.
7.2. Check heartbeat's VIP
[root@test1 ha.d]# ip add | grep "10.2"
inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
[root@test-2 ha.d]# ip add | grep "10.2"
inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
Note: the VIP currently lives on master1.local; master2.local has no VIP.
8. Testing
8.1. Connect to mysql under normal conditions
[root@test3 ha.d]# mysql -uroot -h192.168.10.2 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0.00 sec)
mysql>
8.2. Stop heartbeat on master1.local
[root@test1 ha.d]# service heartbeat stop
Stopping High-Availability services: Done.
[root@test1 ha.d]# ip add | grep "192.168.10.2"
inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
[root@test2 ha.d]# ip add | grep "192.168.10.2"
inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
Note: the VIP has now floated to master2.local. Connect to mysql again and check server_id:
[root@test2 ha.d]# mysql -uroot -h192.168.10.2 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
1 row in set (0.00 sec)
mysql>
Note: server_id changed from 1 to 2, proving that we are now talking to the mysql service on master2.local.
Testing is complete. Next we configure drbd so that the two mysql servers share one filesystem, making mysql writes highly available.
9. Configure DRBD
DRBD (Distributed Replicated Block Device) is a module in the Linux kernel. As a disk mirror, DRBD is strictly primary/secondary: it never allows both nodes to read and write at the same time; only one node may read and write, while the secondary node can neither write to nor mount the device.
DRBD does, however, have a dual-primary mode, and the primary and secondary roles can be swapped. DRBD turns a disk or partition on each of two hosts into a mirrored device: when a client program issues a write to the primary node, the data is also replicated, bit for bit, over TCP/IP to the backup node at the block layer,
so whatever is stored on the primary node is guaranteed to exist, bit for bit, on the backup node as well. Because this happens across two hosts, DRBD works inside the kernel, unlike RAID1, whose mirror lives within a single host.
Implementing the dual-primary model: when a node accesses data, it loads the data and metadata into memory and takes kernel-level locks on files, which the other node's kernel cannot see; dual-primary operation only works if each node can propagate the locks it holds to the other node's kernel.
That requires a messaging layer (heartbeat or corosync both work), pacemaker (with DRBD defined as a resource), and formatting the mirrored devices on the two hosts with a cluster filesystem (GFS2/OCFS2).
This combination of a Distributed Lock Manager (DLM) and a cluster filesystem is what makes the dual-primary model possible. A DRBD cluster allows exactly two nodes, either dual-primary or primary/secondary.
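For reference, dual-primary mode is switched on in the resource's startup/net sections. This fragment is a hedged sketch using DRBD 8.4 option names; it is not part of the setup built in this article, which stays primary/secondary throughout:

```
resource mydrbd {
    startup {
        become-primary-on both;             # both nodes may become primary
    }
    net {
        protocol C;                         # dual-primary requires synchronous replication
        allow-two-primaries yes;
        after-sb-0pri discard-zero-changes; # split-brain recovery policies become essential
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    # device/disk/address sections as in the resource file shown in section 10.2
}
```

Even with these options, the device is only safe to mount on both nodes when formatted with a cluster filesystem as described above.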
9.1. DRBD's three replication protocols
A. Asynchronous: success is returned to the application as soon as the write completes on the local DRBD. Fastest and best performing, but data may be lost.
B. Semi-synchronous: success is returned once the local write completes and all the data has been sent to the peer DRBD over TCP/IP. Rarely used.
C. Synchronous: success is returned only after the local write completes, the data has been sent to the peer over TCP/IP, and the peer DRBD has finished writing it. Lowest throughput and performance, but the data is safe and reliable; this is the most widely used protocol.
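The protocol actually in use shows up in /proc/drbd: it is the single letter before the flags in the status line (the "C" in the output shown in section 11.1). A hedged sketch of extracting it from such a line; `drbd_protocol` is a helper introduced here for illustration, run against a sample line rather than a live node:

```shell
#!/bin/sh
# Extract the replication protocol letter (A/B/C) from a /proc/drbd status line.
# The letter follows the ds: field, e.g.:
#  0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
drbd_protocol() {
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^ds:/) { print $(i+1); exit } }'
}

line=" 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----"
echo "$line" | drbd_protocol    # on a real node: drbd_protocol < /proc/drbd
```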
9.2. A DRBD resource consists of
1. A resource name: any ASCII string without spaces.
2. A DRBD device: the device file on both nodes, normally /dev/drbdN; the major number is always 147, and the minor number distinguishes the devices.
3. A disk: the backing storage each node provides; it can be a partition, any other type of block device, or an LVM volume.
4. Network settings: the properties of the network link the two nodes use to synchronize data.
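Those four pieces map one-to-one onto a resource file. A minimal annotated skeleton, with placeholders in angle brackets (the concrete file used in this setup appears in section 10.2):

```
resource <name> {               # 1. resource name
    on <hostname> {             #    one "on" block per node; hostname must match uname -n
        device    /dev/drbd0;   # 2. the DRBD device
        disk      /dev/sdb1;    # 3. the backing disk/partition
        address   <ip>:7789;    # 4. network: this node's address and port
        meta-disk internal;
    }
}
```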
9.3. Install DRBD
DRBD has been part of the mainline kernel since 2.6.33; on older kernels (such as CentOS 6's 2.6.32) the module must be built separately.
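Whether the out-of-tree module build is needed can be decided from the running kernel version. A hedged sketch: `needs_km` is a helper introduced here for illustration, and it relies on GNU `sort -V` for version ordering:

```shell
#!/bin/sh
# DRBD is in-tree from kernel 2.6.33 on; older kernels need the --with-km build.
# needs_km VERSION -> prints "yes" if VERSION < 2.6.33, else "no"
needs_km() {
    # sort -V orders version strings; if the smaller of (VERSION, 2.6.33)
    # is VERSION and they differ, VERSION is the older kernel.
    lowest=$(printf '%s\n2.6.33\n' "$1" | sort -V | head -n1)
    if [ "$1" = "2.6.33" ] || [ "$lowest" != "$1" ]; then echo no; else echo yes; fi
}

needs_km "$(uname -r | cut -d- -f1)"    # "yes" on CentOS 6's 2.6.32 kernel
```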
9.3.1. Download drbd
[root@test1 ~]# wget -P /usr/local/src http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
9.3.2. Build and install the drbd userland
[root@test1 ~]# cd /usr/local/src
[root@test1 src]# tar -zxvf drbd-8.4.3.tar.gz
[root@test1 src]# cd /usr/local/src/drbd-8.4.3
[root@test1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km
[root@test1 drbd-8.4.3]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd-8.4.3]# make install
[root@test1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
[root@test1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
9.3.3. Build and load the drbd kernel module
[root@test1 drbd-8.4.3]# cd drbd/
[root@test1 drbd]# make clean
[root@test1 drbd]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd]# cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
[root@test1 drbd]# modprobe drbd
[root@test1 drbd]# lsmod | grep drbd
9.3.4. Create a new partition for drbd
[root@test1 drbd]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +9G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot.
Syncing disks.
[root@test1 drbd]# partprobe /dev/sdb
The drbd installation and partitioning steps on test2 are the same as on test1 and are omitted. Make sure test2's drbd configuration files are identical to test1's; just copy them over with scp.
10. Configure drbd
10.1. Configure drbd's common configuration file
[root@test1 drbd.d]# cd /usr/local/drbd/etc/drbd.d
[root@test1 drbd.d]# vim global_common.conf
global {    # global settings
    usage-count no;    # whether to report usage statistics to the DRBD project
    # minor-count dialog-refresh disable-ip-verification
}
common {    # defaults inherited by every resource
    protocol C;    # protocol C, the synchronous model
    handlers {    # what drbd should do on failure events
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";    # action after split brain
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";    # action after a local I/O error
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        # wait/timeout settings for node synchronization at startup
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;    # on an I/O error, detach the disk and stop replicating
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {    # network buffer/cache sizes, timeouts, and so on
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
        cram-hmac-alg "sha1";    # algorithm used to authenticate the peer
        shared-secret "mydrbd1fa2jg8";    # shared secret
    }
    syncer {
        rate 200M;    # resynchronization rate
    }
}
10.2. Configure the resource file; the file name should match the resource name defined inside it
[root@test1 drbd.d]# vim mydrbd.res
resource mydrbd {    # resource name: any ASCII string without spaces
    on test1.local {    # node 1; each node must be able to resolve the other by name
        device /dev/drbd0;    # name of the drbd device
        disk /dev/sdb1;    # backing partition
        address 192.168.10.55:7789;    # node IP and listening port
        meta-disk internal;    # where the drbd metadata lives; internal means inside the device itself
    }
    on test2.local {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.10.56:7789;
        meta-disk internal;
    }
}
10.3. Both nodes use identical configuration files, so copy them to the other node
[root@test1 drbd.d]# scp -r /usr/local/drbd/etc/drbd.* test2.local:/usr/local/drbd/etc/
10.4. Initialize the defined resource on each node
[root@test1 drbd.d]# drbdadm create-md mydrbd
 --== Thank you for participating in the global usage survey ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@test1 drbd.d]#
[root@test2 drbd.d]# drbdadm create-md mydrbd
 --== Thank you for participating in the global usage survey ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@test2 drbd.d]#
10.5. Start the drbd service on both nodes
[root@test1 drbd.d]# service drbd start
[root@test2 drbd.d]# service drbd start
11. Test drbd synchronization
11.1. Check drbd's startup state
[root@test1 drbd.d]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
Note: both nodes are currently secondary; one can be promoted to primary later. "Inconsistent" means the devices have not yet synchronized.
11.2. Promote one node to primary, overwriting the peer's drbd partition data. Run this on the node to be promoted:
[root@test1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mydrbd
11.3. Watch the primary's synchronization progress
[root@test1 drbd.d]# watch -n 1 cat /proc/drbd
Every 1.0s: cat /proc/drbd    Tue Feb 23 17:10:55 2016
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:619656 nr:0 dw:0 dr:627840 al:0 bm:37 lo:1 pe:8 ua:64 ap:0 ep:1 wo:b oos:369144
    [=============>.......] sync'ed: 10.3% (369144/987896)K
    finish: 0:00:12 speed: 25,632 (25,464) K/sec
11.4. Check the secondary's state
[root@test2 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
11.5. On the primary, format the partition, mount it, and write some test data
[root@test1 drbd]# mke2fs -j /dev/drbd0
[root@test1 drbd]# mkdir /mydrbd
[root@test1 drbd]# mount /dev/drbd0 /mydrbd
[root@test1 drbd]# cd /mydrbd
[root@test1 mydrbd]# touch drbd_test_file
[root@test1 mydrbd]# ls /mydrbd/
drbd_test_file lost+found
11.6. Demote the primary to secondary and promote the secondary to primary, then check that the data has synchronized
11.6.1. On the primary
11.6.1.1. Unmount the partition. Note: leave the mount directory first, otherwise the device is busy and cannot be unmounted.
[root@test1 mydrbd]# cd ~
[root@test1 ~]# umount /mydrbd
[root@test1 ~]# drbdadm secondary mydrbd
11.6.1.2. Check the drbd state now
[root@test1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: both drbd nodes are now secondary. Next, promote the other node to primary.
11.6.2. On the other node
11.6.2.1. Promote it
[root@test2 ~]# drbdadm primary mydrbd
11.6.2.2. Mount the drbd partition
[root@test2 ~]# mkdir /mydrbd
[root@test2 ~]# mount /dev/drbd0 /mydrbd/
11.6.2.3. Check that the data is there
[root@test2 ~]# ls /mydrbd/
drbd_test_file lost+found
Note: after the former secondary was switched to primary, the data was already synchronized. The drbd setup is complete. Next we combine it with corosync and mysql for database high availability.
12. Combine corosync + drbd + mysql for a highly available database
We configure drbd as a resource of a two-node corosync cluster, which makes the primary/secondary role switch automatic. Note: any service that is managed as a cluster resource must never be started automatically at boot.
12.1. Disable drbd autostart on both nodes
12.1.1. On the primary
[root@test1 drbd.d]# chkconfig drbd off
[root@test1 drbd.d]# chkconfig --list | grep drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
12.1.2. On the secondary
[root@test2 drbd.d]# chkconfig drbd off
[root@test2 drbd.d]# chkconfig --list | grep drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
12.2. Unmount the drbd filesystem and demote the current primary to secondary
12.2.1. On test2 (note: test2 was promoted to primary earlier; demote it now)
[root@test2 drbd]# umount /mydrbd/
[root@test2 drbd]# drbdadm secondary mydrbd
[root@test2 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8 nr:9728024 dw:9728032 dr:1073 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: make sure both nodes are secondary.
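Rather than eyeballing /proc/drbd, this check can be scripted. A hedged sketch: `both_secondary` is a helper introduced here that inspects the ro: field of a status line (on a live node, `drbdadm role mydrbd` could be parsed the same way); the demo runs against a sample line, not a real device:

```shell
#!/bin/sh
# both_secondary STATUSLINE -> exit 0 if the ro: field is Secondary/Secondary
both_secondary() {
    case "$1" in
        *"ro:Secondary/Secondary"*) return 0 ;;
        *) return 1 ;;
    esac
}

line=" 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----"
both_secondary "$line" && echo "both nodes are secondary"
```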
12.3. Stop the drbd service on both nodes
12.3.1. On test2
[root@test2 drbd]# service drbd stop
Stopping all DRBD resources: .
[root@test2 drbd]#
12.3.2. On test1
[root@test1 drbd.d]# service drbd stop
Stopping all DRBD resources: .
[root@test1 drbd.d]#
12.4. Install corosync and create its log directory
12.4.1. On test1
[root@test1 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test1 drbd.d]# yum install corosync pacemaker crmsh
[root@test1 drbd.d]# mkdir /var/log/cluster
12.4.2. On test2
[root@test2 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test2 drbd.d]# mkdir /var/log/cluster
[root@test2 drbd.d]# yum install corosync pacemaker crmsh
12.5. The corosync configuration file
12.5.1. On test1
[root@test1 drbd.d]# cd /etc/corosync/
[root@test1 corosync]# cp corosync.conf.example corosync.conf
12.6. Edit the configuration file on test1, generate the corosync auth key, and copy both to test2
12.6.1. On test1
[root@test1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    # secauth: Enable mutual node authentication. If you choose to
    # enable this ("on"), then do remember to create a shared
    # secret with "corosync-keygen".
    secauth: on
    threads: 2
    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
        # Rings must be consecutively numbered, starting at 0.
        ringnumber: 0
        # This is normally the *network* address of the
        # interface to bind to. This ensures that you can use
        # identical instances of this configuration file
        # across all your cluster nodes, without having to
        # modify this option.
        bindnetaddr: 192.168.10.0
        # However, if you have multiple physical network
        # interfaces configured for the same subnet, then the
        # network address alone is not sufficient to identify
        # the interface Corosync should bind to. In that case,
        # configure the *host* address of the interface
        # instead.
        # When selecting a multicast address, consider RFC
        # 2365 (which, among other things, specifies that
        # 239.255.x.x addresses are left to the discretion of
        # the network administrator). Do not reuse multicast
        # addresses across multiple Corosync clusters sharing
        # the same network.
        mcastaddr: 239.212.16.19
        # Corosync uses the port you specify here for UDP
        # messaging, and also the immediately preceding
        # port. Thus if you set this to 5405, Corosync sends
        # messages over UDP ports 5405 and 5404.
        mcastport: 5405
        # Time-to-live for cluster communication packets. The
        # number of hops (routers) that this ring will allow
        # itself to pass. Note that multicast routing must be
        # specifically enabled on most network routers.
        ttl: 1    # cluster packets may not cross any router
    }
}
logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off
    # Log to standard error. When in doubt, set to no. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: no
    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: no
    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off
    # Log messages with time stamps. When in doubt, set to on
    # (unless you are only logging to syslog, where double
    # timestamps can be annoying).
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
[root@test1 corosync]# corosync-keygen
[root@test1 corosync]# scp -p authkey corosync.conf test2.local:/etc/corosync/
12.7. Start corosync
12.7.1. On test1 (note: start corosync on both nodes from test1)
[root@test1 corosync]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@test1 corosync]# ssh test2.local 'service corosync start'
root@test2.local's password:
Starting Corosync Cluster Engine (corosync):               [  OK  ]
12.8. Check the cluster state
12.8.1. Problem: the crm command is missing after installation, so crm's interactive mode cannot be used
[root@test1 corosync]# crm status
-bash: crm: command not found
Fix: install the ha-clustering yum repository and then install the crmsh package:
[root@test1 corosync]# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
Reference: step 2 ("install crmsh") of http://www.dwhd.org/20150530_014731.html
Once this is installed, the crm command line is available.
12.8.2. Check the cluster node state
[root@test1 corosync]# crm status
Last updated: Wed Feb 24 13:47:17 2016
Last change: Wed Feb 24 11:26:06 2016
Stack: classic openais (with plugin)
Current DC: test2.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
Note: both nodes are configured and online.
12.8.3. Check whether pacemaker has started
[root@test1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Feb 24 11:05:15 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Feb 24 11:05:15 corosync [pcmk  ] Logging: Initialized pcmk_startup
12.8.4. Check whether the cluster engine has started
[root@test1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Feb 24 11:04:16 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Feb 24 11:04:16 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Note: the checks above confirm that corosync started without problems.
12.9. Configure corosync properties
12.9.1. Disable STONITH so that verify does not complain (no STONITH device is present)
[root@test1 corosync]# crm configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
12.9.2. Keep services running even when the cluster has no quorum
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
12.9.3. Set the default resource stickiness
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
12.9.4. View the current configuration
crm(live)configure# show
node test1.local
node test2.local
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
12.10. Configure cluster resources
12.10.1. Define the drbd resource
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20
Note: ocf:linbit:drbd is the resource agent, here linbit's agent for drbd. drbd_resource=mydrbd names the drbd resource. start timeout / stop timeout are the start and stop timeouts. monitor role=Master defines monitoring of the master (interval: how often to check, timeout: how long to wait); monitor role=Slave does the same for the slave.
crm(live)configure# show    # view the configuration
node test1.local
node test2.local
primitive mydrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 interval=0 op stop timeout=100 interval=0 op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
crm(live)configure# verify    # validate the configuration
12.10.2. Define the drbd master/slave resource
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Note: "ms ms_mysqldrbd mysqldrbd" turns the mysqldrbd primitive into a master/slave resource named ms_mysqldrbd. meta defines metadata attributes: master-max=1 allows at most one master instance, master-node-max=1 at most one master per node, clone-max=2 at most two clones in total, clone-node-max=1 at most one clone per node.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node test1.local
node test2.local
primitive mydrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 interval=0 op stop timeout=100 interval=0 op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s
ms ms_mydrbd mydrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
Note: the configuration now contains one primitive resource and one ms (master/slave) resource.
12.10.3. Check the cluster node state
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 14:45:52 2016
Last change: Thu Feb 25 14:44:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
Note: this is the healthy state; the master/slave roles are visible and test1 is the master.
Broken state:
 Master/Slave Set: ms_mydrbd [mydrbd]
     mydrbd (ocf::linbit:drbd): FAILED test2.local (unmanaged)
     mydrbd (ocf::linbit:drbd): FAILED test1.local (unmanaged)
Failed actions:
    mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
Fix: when defining the resource, be careful: drbd_resource=mydrbd must be the name of the drbd resource, the ms resource's name must not be the same as the primitive's name, and none of the timeout values may carry an "s" suffix. It took a long time of testing to track this down; if you see it differently, corrections are welcome.
12.10.4. Verify master/slave failover
[root@test1 ~]# crm node standby test1.local    # take the master offline
[root@test1 ~]# crm status
Note: the status shows test1 on standby and test2 promoted to master.
Last updated: Thu Feb 25 14:51:58 2016
Last change: Thu Feb 25 14:51:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Node test1.local: standby
Online: [ test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
[root@test1 ~]# crm node online test1.local    # bring test1 back online
[root@test1 ~]# crm status
Note: test2 remains the master and test1 has come back as a slave.
Last updated: Thu Feb 25 14:52:55 2016
Last change: Thu Feb 25 14:52:39 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
12.10.5. Define the filesystem resource
[root@test1 ~]# crm
crm(live)# configure
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydrbd fstype=ext3 op start timeout=60 op stop timeout=60
Note: this defines a resource named mystore using heartbeat's Filesystem agent; the params give the drbd device, the mount point, the filesystem type, and the start/stop timeouts.
crm(live)configure# verify
12.10.6. Define a colocation constraint so the Filesystem always runs on the master node
crm(live)configure# colocation mystore_withms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# verify
12.10.7. Define an order constraint so the master/slave resource is promoted before the filesystem is mounted
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
Note: this means mystore starts after ms_mysqldrbd, and mandatory makes the constraint strict: ms_mysqldrbd must first be promoted, and only then is mystore started.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Note: the status shows the filesystem mounted automatically on the master node, test1.local.
Last updated: Thu Feb 25 15:29:39 2016
Last change: Thu Feb 25 15:29:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
[root@test1 ~]# ls /mydrbd/    # the files created earlier are still there
inittab lost+found
12.10.8、Switch the master and slave nodes and verify that the filesystem is mounted automatically
[root@test1 ~]# crm node standby test1.local
[root@test1 ~]# crm node online test1.local
[root@test1 ~]# crm status
Last updated: Thu Feb 25 15:32:39 2016
Last change: Thu Feb 25 15:32:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
[root@test2 ~]# ls /mydrbd/
inittab lost+found
12.10.9、Configure MySQL
12.10.9.1、Disable mysqld from starting at boot
[root@test1 ~]# chkconfig mysqld off
[root@test2 ~]# chkconfig mysqld off
#Remember: any service that is managed as a cluster resource must never be set to start at boot.
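To double-check that a cluster-managed service really is disabled in every runlevel, the `chkconfig --list` line can be inspected mechanically. A small sketch; the here-doc is a stand-in for running `chkconfig --list mysqld` on a live system:

```shell
# Succeed only if no runlevel in a chkconfig --list line is set to "on".
all_off() {
  ! grep -q ':on'
}

if all_off <<'EOF'
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off
EOF
then
  echo "mysqld autostart disabled"
else
  echo "WARNING: mysqld still enabled in some runlevel"
fi
```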
12.10.9.2、Configure the MySQL service on master node 1 (the MySQL installation itself is omitted; we go straight to the configuration)
[root@test1 mysql]# mkdir /mydrbd/data
[root@test1 mysql]# chown -R mysql.mysql /mydrbd/data/
[root@test1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydrbd/data/ --basedir=/usr/local/mysql/
Installing MySQL system tables...
160225 16:07:12 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18694 ...
OK
Filling help tables...
160225 16:07:18 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18701 ...
OK
To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/local/mysql//bin/mysqladmin -u root password 'new-password'
/usr/local/mysql//bin/mysqladmin -u root -h test1.local password 'new-password'
Alternatively you can run:
/usr/local/mysql//bin/mysql_secure_installation
which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr/local/mysql/ ; /usr/local/mysql//bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /usr/local/mysql//mysql-test ; perl mysql-test-run.pl
Please report any problems at
[root@test1 mysql]# service mysqld start
Starting MySQL..... [ OK ]
[root@test1 mysql]# mysql -uroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.06 sec)
mysql> create database drbd_mysql;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
mysql>
12.10.9.3、Configure MySQL on the slave node
Note: since the MySQL data directory was just initialized on the shared storage from test1.local, there is no need to initialize it again on test2.local.
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:14 2016
Last change: Thu Feb 25 15:35:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
1、First make test2.local the master before continuing
[root@test1 mysql]# crm node standby test1.local
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:46 2016
Last change: Thu Feb 25 16:14:30 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured
Node test1.local: standby
Online: [ test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
#make sure test2.local has become the master node
2、MySQL configuration on test2.local
[root@test2 ~]# vim /etc/my.cnf
Add:
datadir = /mydrbd/data
[root@test2 ~]# service mysqld start
Starting MySQL. [ OK ]
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.10 sec)
mysql>
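The only MySQL change the second node needs is pointing the data directory at the DRBD-backed mount. A minimal fragment, assuming the directive goes in the existing [mysqld] section of /etc/my.cnf:

```
[mysqld]
datadir = /mydrbd/data
```

Both nodes should carry this same setting, so whichever node holds the DRBD master role serves the same data.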
12.10.10、Define the MySQL resource
1、Stop MySQL
[root@test2 ~]# service mysqld stop
Shutting down MySQL. [ OK ]
2、Define the MySQL resource
[root@test1 mysql]# crm configure
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
3、Define the colocation constraint between mysqld and the master node
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
#Note: since mystore is always with the master node, it is enough to tie mysqld to mystore.
crm(live)configure# verify
4、Define the start-order constraint between mysqld and mystore
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
#Be clear about the start order: mysqld must start after mystore.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 16:44:29 2016
Last change: Thu Feb 25 16:42:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local
Note: the master node is now test2.local.
5、Verify that MySQL login works on test2.local and that everything still works after a role switch
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;    #mysql is up on test2.local and the drbd resource was mounted automatically
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.07 sec)
mysql> create database mydb;    #create a database, then switch to test1.local and check that everything still works
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.00 sec)
mysql> exit
[root@test1 mysql]# crm node standby test2.local    #put test2.local in standby so that test1.local automatically becomes the master
[root@test1 mysql]# crm node online test2.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:53:24 2016
Last change: Thu Feb 25 16:53:19 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local    #test1.local is now the master
 mysqld (lsb:mysqld): Started test1.local
[root@test1 mysql]# mysql -uroot    #log in to mysql on test1.local
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |    #the mydb database created on test2.local is here; so far everything works
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.15 sec)
mysql>
12.10.11、Define the VIP resource and its constraints
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=192.168.10.3 nic=eth0 cidr_netmask=24
#Note: a mistake here wasted half a day: the netmask had been written as 255.255.255.0, but cidr_netmask expects 24. When you forget a command, use tab completion and help.
crm(live)configure# verify
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip    #define the colocation constraint between the VIP and ms_mysqldrbd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status    #check the status
Last updated: Fri Feb 26 10:05:16 2016
Last change: Fri Feb 26 10:05:12 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
 mysqld (lsb:mysqld): Started test1.local
 myip (ocf::heartbeat:IPaddr): Started test1.local    #the VIP is now running on test1.local
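The mistake noted above (writing 255.255.255.0 where cidr_netmask expects the prefix length 24) is easy to make. A small sketch that converts a dotted netmask to its CIDR prefix length, handy for sanity-checking before typing the crm command:

```shell
# Convert a dotted-quad netmask (e.g. 255.255.255.0) to a CIDR prefix
# length by summing the set bits contributed by each octet.
mask2cidr() {
  local a b c d octet bits=0
  IFS=. read -r a b c d <<<"$1"
  for octet in "$a" "$b" "$c" "$d"; do
    case $octet in
      255) bits=$((bits + 8)) ;;
      254) bits=$((bits + 7)) ;;
      252) bits=$((bits + 6)) ;;
      248) bits=$((bits + 5)) ;;
      240) bits=$((bits + 4)) ;;
      224) bits=$((bits + 3)) ;;
      192) bits=$((bits + 2)) ;;
      128) bits=$((bits + 1)) ;;
      0)   ;;
      *) echo "invalid netmask octet: $octet" >&2; return 1 ;;
    esac
  done
  echo "$bits"
}

mask2cidr 255.255.255.0   # prints 24
```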
12.10.12、Test connecting to the VIP
[root@test1 ~]# ip addr    #check whether the VIP is bound on test1.local
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:34:7d:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    #the VIP is bound to the eth0 interface on test1.local
    inet6 fe80::20c:29ff:fe34:7d9f/64 scope link
       valid_lft forever preferred_lft forever
[root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    #connect to mysql through the VIP; the connecting client must be granted access first, otherwise the login fails
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;    #both databases created earlier are listed below
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.08 sec)
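As noted in the comment above, a remote client can only log in through the VIP after it has been granted access; by default MySQL's root account is restricted to localhost. A hedged example of such a grant, where the password and the client address range are placeholders rather than values taken from this setup:

```
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.%' IDENTIFIED BY 'your-password';
mysql> FLUSH PRIVILEGES;
```

In production one would grant a dedicated application account only the privileges it needs, not root on *.*.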
Simulate a failure of test1.local:
[root@test1 mysql]# crm node standby test1.local    #demote test1.local and check whether the VIP switches automatically
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Fri Feb 26 10:20:38 2016
Last change: Fri Feb 26 10:20:35 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local
 myip (ocf::heartbeat:IPaddr): Started test2.local    #the VIP is now on test2.local
Check the IP information on test2:
[root@test2 ~]# ip addr    #check the VIP binding on test2.local
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fd:7f:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    #the VIP is now on the eth0 interface of test2.local
    inet6 fe80::20c:29ff:fefd:7fe5/64 scope link
       valid_lft forever preferred_lft forever
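Checking for the VIP by eye works, but it can also be scripted, for example as a monitoring probe. A sketch that greps for the VIP in `ip addr` output; the here-doc is a hypothetical stand-in for piping in the live command:

```shell
# Succeed if the VIP appears as an inet address in `ip addr` output.
VIP=192.168.10.3
has_vip() {
  grep -q "inet $VIP/"
}

if has_vip <<'EOF'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0
EOF
then
  echo "VIP $VIP is bound on this node"
else
  echo "VIP $VIP is NOT here"
fi
```

On a live node this would be `ip addr | has_vip`.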
Test connecting to MySQL:
[root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    #connect to mysql
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;    #everything works
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.06 sec)
This completes the corosync+drbd+mysql configuration. The document is not exhaustive: it does not explain how corosync, heartbeat, DRBD, and MySQL fit together, nor how to handle or prevent split-brain. I will add those when time permits.
This document drew on http://litaotao.blog.51cto.com/6224470/1303307,
and the DRBD installation followed http://88fly.blog.163.com/blog/static/12268039020131113452222/