MySQL + Corosync + Pacemaker + DRBD: Building a Highly Available MySQL Cluster
This walkthrough covers building a highly available MySQL cluster; without further preamble, let's get straight into installation and configuration.
I. Environment Overview and Preparation
1. The setup has two nodes: nod1.allen.com (172.16.14.1) and nod2.allen.com (172.16.14.2)
###### Run the following on both NOD1 and NOD2
cat >> /etc/hosts << EOF
172.16.14.1 nod1.allen.com nod1
172.16.14.2 nod2.allen.com nod2
EOF
Note: this lets every node resolve each hostname to its IP address (appending with >> keeps the existing localhost entries intact)
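A quick check that resolution works (run from either node; both names come from the hosts file just written):
###### Verify name resolution
ping -c 1 nod1
ping -c 1 nod2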
2. Each node's hostname must match the output of the "uname -n" command
###### Run on NOD1
sed -i 's@\(HOSTNAME=\).*@\1nod1.allen.com@g' /etc/sysconfig/network
hostname nod1.allen.com
###### Run on NOD2
sed -i 's@\(HOSTNAME=\).*@\1nod2.allen.com@g' /etc/sysconfig/network
hostname nod2.allen.com
Note: the file edit alone only takes effect after a reboot; by editing the file and then running the hostname command, the new name applies immediately and no reboot is needed
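To confirm the change took effect, the command output should now match the configured name on each node:
[root@nod1 ~]# uname -n
nod1.allen.com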
3. nod1 and nod2 each provide a partition of identical size to serve as the DRBD device; here we create /dev/sda3 on both nodes with a capacity of 2G
###### Create the partition on both NOD1 and NOD2; the sizes must be identical
fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +2G
Command (m for help): w
partx -a /dev/sda #ask the kernel to re-read the partition table
###### Check whether the kernel recognized the new partition; if not, a reboot is needed. Here it was not recognized, so we reboot
cat /proc/partitions
major minor #blocks name
8 0 125829120 sda
8 1 204800 sda1
8 2 62914560 sda2
253 0 20971520 dm-0
253 1 2097152 dm-1
253 2 10485760 dm-2
253 3 20971520 dm-3
reboot
4. Disable SELinux, iptables, and NetworkManager on both servers
setenforce 0 #switch SELinux to permissive mode
service iptables stop #stop iptables
chkconfig iptables off #keep iptables from starting at boot
service NetworkManager stop
chkconfig NetworkManager off
chkconfig --list NetworkManager
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
chkconfig network on
chkconfig --list network
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note: during this procedure NetworkManager must be stopped and prevented from starting at boot, and the network service must be enabled at boot; otherwise it will cause needless trouble during the experiment and the cluster may not run properly
5. Configure the YUM repositories and synchronize the time; the clocks of the two nodes must stay in sync (EPEL repo download)
###### Configure the EPEL repo
###### Install on both NOD1 and NOD2
rpm -ivh epel-release-6-8.noarch.rpm
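The original steps don't show the time sync itself; a minimal sketch using ntpdate (pool.ntp.org is only a placeholder, substitute your own NTP server) would be:
###### Run on both NOD1 and NOD2
yum -y install ntpdate
ntpdate pool.ntp.org #one-shot sync against a placeholder server
echo '*/5 * * * * /usr/sbin/ntpdate pool.ntp.org &> /dev/null' >> /var/spool/cron/root #keep the clocks from drifting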
6. Set up SSH key trust between the two nodes
[root@nod1 ~]# ssh-keygen -t rsa
[root@nod1 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod2
==================================================
[root@nod2 ~]# ssh-keygen -t rsa
[root@nod2 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod1
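A quick check that the trust works in both directions (no password prompt) and that the two clocks agree:
[root@nod1 ~]# ssh nod2 'date'; date
[root@nod2 ~]# ssh nod1 'date'; date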
7. OS version: CentOS 6.4_x86_64
8. Software used (pacemaker and corosync are available on the install media):
pssh-2.3.1-2.el6.x86_64 (download: see attachment)
crmsh-1.2.6-4.el6.x86_64 (download: see attachment)
drbd-8.4.3-33.el6.x86_64 DRBD download URL:
drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64
mysql-5.5.33-linux2.6-x86_64 (download link)
pacemaker-1.1.8-7.el6.x86_64
corosync-1.4.1-15.el6.x86_64
II. Installing and Configuring DRBD
1. Install the DRBD packages on NOD1 and NOD2
######NOD1
[root@nod1 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod1 ~]# yum -y install drbd-*.rpm
######NOD2
[root@nod2 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod2 ~]# yum -y install drbd-*.rpm
2. Look at the DRBD configuration files
ll /etc/drbd.conf;ll /etc/drbd.d/
-rw-r--r-- 1 root root 133 May 14 21:12 /etc/drbd.conf #main configuration file
total 4
-rw-r--r-- 1 root root 1836 May 14 21:12 global_common.conf #global configuration file
###### View the main configuration file
cat /etc/drbd.conf
###### The main config file includes the global config file plus every file ending in .res under the drbd.d/ directory
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
3. Edit the configuration file as follows:
[root@nod1 ~]# vim /etc/drbd.d/global_common.conf
global {
usage-count no; #whether to take part in DRBD usage statistics; the default is yes
# minor-count dialog-refresh disable-ip-verification
}
common {
protocol C; #DRBD replication protocol (C = synchronous)
handlers {
# These are EXAMPLE handlers only.
# They may have severe implications,
# like hard resetting the node under certain circumstances.
# Be careful when chosing your poison.
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
}
options {
# cpu-mask on-no-data-accessible
}
disk {
on-io-error detach; #on I/O error, detach the backing device
# size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
# disk-drain md-flushes resync-rate resync-after al-extents
# c-plan-ahead c-delay-target c-fill-target c-max-rate
# c-min-rate disk-timeout
}
net {
cram-hmac-alg "sha1"; #peer authentication algorithm
shared-secret "allendrbd"; #shared secret used for authentication
# protocol timeout max-epoch-size max-buffers unplug-watermark
# connect-int ping-int sndbuf-size rcvbuf-size ko-count
# allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
# after-sb-1pri after-sb-2pri always-asbp rr-conflict
# ping-timeout data-integrity-alg tcp-cork on-congestion
# congestion-fill congestion-extents csums-alg verify-alg
# use-rle
}
syncer {
rate 1024M; #network rate limit for primary/secondary synchronization
}
}
4. Add the resource file:
[root@nod1 ~]# vim /etc/drbd.d/drbd.res
resource drbd {
on nod1.allen.com { #each host stanza starts with on, followed by the hostname
device /dev/drbd0; #DRBD device name
disk /dev/sda3; #the disk partition backing drbd0 is sda3
address 172.16.14.1:7789; #DRBD listen address and port
meta-disk internal;
}
on nod2.allen.com {
device /dev/drbd0;
disk /dev/sda3;
address 172.16.14.2:7789;
meta-disk internal;
}
}
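Before handing NOD2 a copy, it's worth letting drbdadm parse the configuration; drbdadm dump prints the resource back if the syntax is valid and complains if it isn't:
[root@nod1 ~]# drbdadm dump drbd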
5. Give NOD2 a copy of the configuration files
[root@nod1 ~]# scp /etc/drbd.d/{global_common.conf,drbd.res} nod2:/etc/drbd.d/
The authenticity of host 'nod2 (172.16.14.2)' can't be established.
RSA key fingerprint is 29:d3:28:85:20:a1:1f:2a:11:e5:88:cd:25:d0:95:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nod2' (RSA) to the list of known hosts.
root@nod2's password:
global_common.conf 100% 1943 1.9KB/s 00:00
drbd.res 100% 318 0.3KB/s 00:00
6. Initialize the resource and start the service
###### Initialize the resource and start the service on both NOD1 and NOD2
[root@nod1 ~]# drbdadm create-md drbd
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created. #metadata was created successfully
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
###### Start the service
[root@nod1 ~]# service drbd start
Starting DRBD resources: [
create res: drbd
prepare disk: drbd
adjust disk: drbd
adjust net: drbd
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'drbd'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 12]: yes
7. Kick off the initial device synchronization
[root@nod1 ~]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@nod1 ~]# cat /proc/drbd #watch the sync progress
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:1897624 nr:0 dw:0 dr:1901216 al:0 bm:115 lo:0 pe:3 ua:3 ap:0 ep:1 wo:f oos:207988
[=================>..] sync'ed: 90.3% (207988/2103412)K
finish: 0:00:07 speed: 26,792 (27,076) K/sec
###### When synchronization finishes, the status looks like this
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2103412 nr:0 dw:0 dr:2104084 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: drbd is the resource name
###### Sync progress can also be checked with the following command
drbd-overview
8. Create a filesystem
###### Format the filesystem
[root@nod1 ~]# mkfs.ext4 /dev/drbd0
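As a quick sanity check (assuming NOD1 still holds the Primary role), the new filesystem can be mounted, written to, and unmounted again before the cluster takes over:
[root@nod1 ~]# mount /dev/drbd0 /mnt
[root@nod1 ~]# touch /mnt/testfile && ls /mnt
[root@nod1 ~]# umount /mnt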
9. Prevent the DRBD service from starting at boot on NOD1 and NOD2 (the cluster will manage it)
[root@nod1 ~]# chkconfig drbd off
[root@nod1 ~]# chkconfig --list drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
=====================================================================
[root@nod2 ~]# chkconfig drbd off
[root@nod2 ~]# chkconfig --list drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
III. Installing MySQL
1. Install and configure MySQL
###### Install MySQL on NOD1
[root@nod1 ~]# mkdir /mydata
[root@nod1 ~]# mount /dev/drbd0 /mydata/
[root@nod1 ~]# mkdir /mydata/data
[root@nod1 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod1 ~]# cd /usr/local/
[root@nod1 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod1 local]# cd mysql
[root@nod1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@nod1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod1 mysql]# chmod +x /etc/init.d/mysqld
[root@nod1 mysql]# chkconfig --add mysqld
[root@nod1 mysql]# chkconfig mysqld off
[root@nod1 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod1 mysql]# echo "PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile
[root@nod1 mysql]# . /etc/profile
[root@nod1 mysql]# useradd -r -u 306 mysql
[root@nod1 mysql]# chown mysql.mysql -R /mydata
[root@nod1 mysql]# chown root.mysql *
[root@nod1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
[root@nod1 mysql]# service mysqld start
Starting MySQL..... [ OK ]
[root@nod1 mysql]# chkconfig --list mysqld
mysqld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@nod1 mysql]# service mysqld stop
Shutting down MySQL. [ OK ]
###### Install MySQL on NOD2
[root@nod2 ~]# scp nod1:/root/mysql-5.5.33-linux2.6-x86_64.tar.gz ./
[root@nod2 ~]# mkdir /mydata
[root@nod2 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod2 ~]# cd /usr/local/
[root@nod2 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod2 local]# cd mysql
[root@nod2 mysql]# cp support-files/my-large.cnf /etc/my.cnf
###### Edit the config file and add the following settings
[root@nod2 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod2 mysql]# chkconfig --add mysqld
[root@nod2 mysql]# chkconfig mysqld off
[root@nod2 mysql]# useradd -r -u 306 mysql
[root@nod2 mysql]# chown -R root.mysql *
2. Unmount the DRBD device on NOD1 and demote it to secondary
[root@nod1 ~]# drbd-overview
0:drbd/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod1 ~]# umount /mydata/
[root@nod1 ~]# drbdadm secondary drbd
[root@nod1 ~]# drbd-overview
0:drbd/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
3. Promote DRBD to primary on NOD2 and mount the DRBD device
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# drbdadm primary drbd
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# mount /dev/drbd0 /mydata/
4. Start the MySQL service on NOD2 and test it
[root@nod2 ~]# chown -R mysql.mysql /mydata
[root@nod2 ~]# service mysqld start
Starting MySQL.. [ OK ]
[root@nod2 ~]# service mysqld stop
Shutting down MySQL. [ OK ]
[root@nod2 ~]# chkconfig --list mysqld
mysqld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
5. Set DRBD back to the secondary role on all nodes, e.g.:
[root@nod2 ~]# drbdadm secondary drbd
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
6. Unmount the DRBD device and stop the DRBD service on NOD1 and NOD2
[root@nod2 ~]# umount /mydata/
[root@nod2 ~]# service drbd stop
Stopping all DRBD resources: .
[root@nod1 ~]# service drbd stop
Stopping all DRBD resources: .
IV. Installing Corosync and Pacemaker
1. Install on both NOD1 and NOD2
[root@nod1 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync
[root@nod2 ~]# scp nod1:/root/{pssh*.rpm,crmsh*.rpm} ./
[root@nod2 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync
2. Configure Corosync on NOD1
[root@nod1 ~]# cd /etc/corosync/
[root@nod1 corosync]# ls
corosync.conf.example corosync.conf.example.udpu service.d uidgid.d
[root@nod1 corosync]# cp corosync.conf.example corosync.conf
[root@nod1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2 #config format version
secauth: on #enable secure authentication
threads: 0 #number of threads used for authentication; 0 means unlimited
interface {
ringnumber: 0
bindnetaddr: 172.16.0.0 #network used for cluster communication
mcastaddr: 226.94.14.12 #multicast address
mcastport: 5405 #multicast port
ttl: 1
}
}
logging {
fileline: off
to_stderr: no #send log output to standard error?
to_logfile: yes #log to a file?
to_syslog: no #log to syslog? (best to keep only one of the two enabled)
logfile: /var/log/cluster/corosync.log #log file path; the directory must be created by hand
debug: off
timestamp: on #record timestamps in the log
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service { #run Pacemaker as a Corosync plugin
ver: 0
name: pacemaker
}
aisexec { #openais execution identity; occasionally needed
user: root
group: root
}
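As noted above, the directory for the logfile path has to be created by hand; do that on both nodes before starting the service:
[root@nod1 corosync]# mkdir -p /var/log/cluster
[root@nod1 corosync]# ssh nod2 'mkdir -p /var/log/cluster'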
3. Generate the authentication key used for inter-node communication
[root@nod1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 152).
Press keys on your keyboard to generate entropy (bits = 216).
Note: if key generation stalls as shown above, the system does not have enough entropy; installing some software solves it (see the sketch below)
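One common fix, sketched here with haveged (an assumption; any entropy daemon available in your repositories will do), is to install a userspace entropy daemon and rerun corosync-keygen:
[root@nod1 corosync]# yum -y install haveged #assumes the package is available, e.g. via EPEL
[root@nod1 corosync]# service haveged start
[root@nod1 corosync]# corosync-keygen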
4. Copy the configuration file and auth key over to NOD2
[root@nod1 corosync]# scp authkey corosync.conf nod2:/etc/corosync/
authkey 100% 128 0.1KB/s 00:00
corosync.conf 100% 522 0.5KB/s 00:00
5. Start the Corosync service
[root@nod1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
###### Check that the corosync engine started properly
[root@nod1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Sep 19 18:44:36 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
###### Check for errors during startup; the messages below can safely be ignored
[root@nod1 ~]# grep ERROR: /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Sep 19 18:44:36 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' () for details on using Pacemaker with CMAN
###### Check that the initial membership notifications went out properly
[root@nod1 ~]# grep TOTEM /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Sep 19 18:44:36 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 19 18:44:36 corosync [TOTEM ] The network interface [172.16.14.1] is now up.
Sep 19 18:44:36 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
###### Check that pacemaker started properly
[root@nod1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Sep 19 18:44:36 corosync [pcmk ] Logging: Initialized pcmk_startup
Sep 19 18:44:36 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Sep 19 18:44:36 corosync [pcmk ] info: pcmk_startup: Service: 9
Sep 19 18:44:36 corosync [pcmk ] info: pcmk_startup: Local hostname: nod1.allen.com
6. Start the Corosync service on NOD2
[root@nod1 ~]# ssh nod2 'service corosync start'
Starting Corosync Cluster Engine (corosync): [ OK ]
###### Check the startup state of the cluster nodes
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 19:01:33 2013
Last change: Thu Sep 19 18:49:09 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
0 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ] #both nodes are up and running
7. Look at the processes Corosync started
[root@nod1 ~]# ps auxf
root 10336 0.3 1.2 556824 4940 ? Ssl 18:44 0:04 corosync
305 10342 0.0 1.7 87440 7076 ? S 18:44 0:01 _ /usr/libexec/pacemaker/cib
root 10343 0.0 0.8 81460 3220 ? S 18:44 0:00 _ /usr/libexec/pacemaker/stonit
root 10344 0.0 0.7 73088 2940 ? S 18:44 0:00 _ /usr/libexec/pacemaker/lrmd
305 10345 0.0 0.7 85736 3060 ? S 18:44 0:00 _ /usr/libexec/pacemaker/attrd
305 10346 0.0 4.7 116932 18812 ? S 18:44 0:00 _ /usr/libexec/pacemaker/pengin
305 10347 0.0 1.0 143736 4316 ? S 18:44 0:00 _ /usr/libexec/pacemaker/crmd
V. Configuring Resources
1. Corosync enables STONITH by default, but this cluster has no STONITH device, so the errors below appear; STONITH needs to be disabled
[root@nod1 ~]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
-V may provide more details
###### Disable STONITH and verify
[root@nod1 ~]# crm configure property stonith-enabled=false
[root@nod1 ~]# crm configure show
node nod1.allen.com
node nod2.allen.com
property $id="cib-bootstrap-options"
dc-version="1.1.8-7.el6-394e906"
cluster-infrastructure="classic openais (with plugin)"
expected-quorum-votes="2"
stonith-enabled="false"
2. View the resource agent classes the cluster supports
[root@nod1 ~]# crm ra classes
lsb
ocf / heartbeat linbit pacemaker redhat
service
stonith
Note: the linbit class only appears once the DRBD service is installed
3. How to list all the available resource agents of a given class (see the info example right after this list):
crm ra list lsb
crm ra list ocf heartbeat
crm ra list ocf pacemaker
crm ra list stonith
crm ra list ocf linbit
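To see the parameters a particular agent accepts before configuring it, crm ra info works on any entry in these lists, for example:
crm ra info ocf:heartbeat:IPaddr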
4. Configure the VIP and mysqld resources
[root@nod1 ~]# crm #enter the crm interactive shell
crm(live)# configure
crm(live)configure# property no-quorum-policy="ignore" #in a two-node cluster, keep resources running when quorum is lost
crm(live)configure# primitive MyVip ocf:heartbeat:IPaddr params ip="172.16.14.10" #define the virtual IP resource
crm(live)configure# primitive Mysqld lsb:mysqld #define the MySQL service resource
crm(live)configure# verify #check for syntax errors
crm(live)configure# commit #commit the configuration
crm(live)configure# show #view the configuration
node nod1.allen.com
node nod2.allen.com
primitive MyVip ocf:heartbeat:IPaddr
params ip="172.16.14.10"
primitive Mysqld lsb:mysqld
property $id="cib-bootstrap-options"
dc-version="1.1.8-7.el6-394e906"
cluster-infrastructure="classic openais (with plugin)"
expected-quorum-votes="2"
stonith-enabled="false"
no-quorum-policy="ignore"
5. Configure the DRBD master/slave resource
crm(live)configure# primitive Drbd ocf:linbit:drbd params drbd_resource="drbd" op monitor interval=10s role="Master" op monitor interval=20s role="Slave" op start timeout=240s op stop timeout=100
crm(live)configure# master My_Drbd Drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show Drbd
primitive Drbd ocf:linbit:drbd
params drbd_resource="drbd"
op monitor interval="10s" role="Master"
op monitor interval="20s" role="Slave"
op start timeout="240s" interval="0"
op stop timeout="100s" interval="0"
crm(live)configure# show My_Drbd
ms My_Drbd Drbd
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
6. Define a filesystem resource
crm(live)configure# primitive FileSys ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op start timeout="60s" op stop timeout="60s"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show FileSys
primitive FileSys ocf:heartbeat:Filesystem
params device="/dev/drbd0" directory="/mydata" fstype="ext4"
op start timeout="60s" interval="0"
op stop timeout="60s" interval="0"
7. Define colocation and ordering constraints between the resources
crm(live)configure# colocation FileSys_on_My_Drbd inf: FileSys My_Drbd:Master #keep the filesystem with the DRBD master node
crm(live)configure# order FileSys_after_My_Drbd inf: My_Drbd:promote FileSys:start #promote DRBD before the filesystem starts
crm(live)configure# verify
crm(live)configure# colocation Mysqld_on_FileSys inf: Mysqld FileSys #keep the MySQL service with the filesystem
crm(live)configure# order Mysqld_after_FileSys inf: FileSys Mysqld:start #start the filesystem before MySQL
crm(live)configure# verify
crm(live)configure# colocation MyVip_on_Mysqld inf: MyVip Mysqld #keep the virtual IP with the MySQL service
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# bye #leave the crm interactive shell
8. Check the service status:
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:18:20 2013
Last change: Thu Sep 19 21:18:06 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Slaves: [ nod1.allen.com ]
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
Failed actions:
Mysqld_start_0 (node=nod1.allen.com, call=60, rc=1, status=Timed Out): unknown error
MyVip_start_0 (node=nod2.allen.com, call=47, rc=1, status=complete): unknown error
Mysqld_start_0 (node=nod2.allen.com, call=13, rc=1, status=complete): unknown error
FileSys_start_0 (node=nod2.allen.com, call=39, rc=1, status=complete): unknown error
Note: these errors appear because, while the resources were being defined and committed, the cluster probed whether the services were already running and may have tried to start them before the definitions were complete; run the following commands to clear the errors
[root@nod1 ~]# crm resource cleanup Mysqld
[root@nod1 ~]# crm resource cleanup MyVip
[root@nod1 ~]# crm resource cleanup FileSys
9. After clearing the errors, check the status again:
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:26:49 2013
Last change: Thu Sep 19 21:19:35 2013 via crmd on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod1.allen.com ]
Slaves: [ nod2.allen.com ]
MyVip (ocf::heartbeat:IPaddr): Started nod1.allen.com
Mysqld (lsb:mysqld): Started nod1.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod1.allen.com
======================================================================
Note: as shown above, the DRBD master, MyVip, Mysqld, and FileSys are all running on NOD1 and operating normally
VI. Verifying That the Services Run Properly
1. On NOD1, check that the mysqld service is running and that the virtual IP address and filesystem are in place
[root@nod1 ~]# netstat -anpt|grep mysql
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 22564/mysqld
[root@nod1 ~]# mount | grep drbd0
/dev/drbd0 on /mydata type ext4 (rw)
[root@nod1 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:3D:3F:44
inet addr:172.16.14.10 Bcast:172.16.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
2. Log in to the database and create a database for verification
[root@nod1 ~]# mysql
mysql> create database allen;
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| allen |
| mysql |
| performance_schema |
| test |
+--------------------+
3. Simulate a failure of the active node by putting it into "Standby", and check whether the services move to the standby node; the current active node is nod1.allen.com and the standby node is nod2.allen.com
[root@nod1 ~]# crm node standby nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:23:50 2013
Last change: Thu Sep 19 22:23:42 2013 via crm_attribute on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Node nod1.allen.com: standby
Online: [ nod2.allen.com ]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Stopped: [ Drbd:1 ]
MyVip (ocf::heartbeat:IPaddr): Started nod2.allen.com
Mysqld (lsb:mysqld): Started nod2.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
----------------------------------------------------------------------
###### As shown above, all services have switched over to NOD2
4. Log in to MySQL on NOD2 and verify that the "allen" database is present
[root@nod2 ~]# mysql
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| allen |
| mysql |
| performance_schema |
| test |
+--------------------+
5. Suppose NOD1 has been repaired and comes back online; the services on NOD2 will not switch back to NOD1 on their own. Failback can be arranged by setting resource stickiness, but it is usually better not to switch back, to avoid the unnecessary cost of moving services around
[root@nod1 ~]# crm node online nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:34:55 2013
Last change: Thu Sep 19 22:34:51 2013 via crm_attribute on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Slaves: [ nod1.allen.com ]
MyVip (ocf::heartbeat:IPaddr): Started nod2.allen.com
Mysqld (lsb:mysqld): Started nod2.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
6. The command to set resource stickiness; I won't test it here, but interested readers are welcome to try it
crm configure rsc_defaults resource-stickiness=100
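For completeness: if you did want resources to drift back to a preferred node, a location constraint (the name prefer_nod1 and the score 50 below are purely illustrative) combined with a lower stickiness expresses that preference:
crm configure location prefer_nod1 MyVip 50: nod1.allen.com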
As shown above, all services work as expected; the MySQL high-availability setup is now complete, and both MySQL service operation and the data have been verified. Thanks to all readers for your attention and support; I'll keep up the effort!!!