Failover process details of a Linux-HA system.
Our demo cluster based on corosync + pacemaker is complete. Let's run some tests against it.
For an introduction to the cluster components, see the two documents posted earlier.
The cluster configuration:
crm(live)configure# show
node node95 ## cluster nodes
node node96
primitive ClusterIp ocf:heartbeat:IPaddr2 \ ## define a resource: the cluster VIP
params ip="192.168.11.101" cidr_netmask="32" \
op monitor interval="30s" \
meta target-role="Started" is-managed="true"
primitive fence_vm95 stonith:fence_vmware \ ## fence device definition for node95
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.95" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive fence_vm96 stonith:fence_vmware \ ## fence device definition for node96
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.96" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive ping ocf:pacemaker:ping \ ## define a resource that pings the cluster nodes to check network connectivity
params host_list="192.168.11.95 192.168.11.96 192.168.11.101 " multiplier="10" \
op monitor interval="10s" timeout="60s" \
op start interval="0" timeout="60s" \
op stop interval="0" timeout="100s"
clone clone-ping ping \
meta target-role="Started"
primitive postgres_res ocf:heartbeat:pgsql \ ## define the database resource; we use PostgreSQL
params pgctl="/usr/local/pgsql/bin/pg_ctl" psql="/usr/local/pgsql/bin/psql" start_opt="" pgdata="/usr/local/pgsql/data" config="/usr/local/pgsql/data/postgresql.conf" pgdba="postgres" pgdb="postgres" \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="30s" timeout="30s" depth="0" master-max="2" \
meta target-role="Started" is-managed="true"
location ClusterIp-prefer-to-master ClusterIp 50: node96 ## the entries below are placement policies
location Pg-prefer-to-master postgres_res 50: node96 ## the VIP and the database should start on node96, i.e. node96 should be the primary
location cli-prefer-ClusterIp ClusterIp \
rule $id="cli-prefer-rule-ClusterIp" 50: #uname eq node96
location cli-prefer-ping ping \
rule $id="cli-prefer-rule-ping" 50: #uname eq node96
location cli-prefer-postgres_res postgres_res \
rule $id="cli-prefer-rule-postgres_res" 50: #uname eq node96
location loc_fence_vm95 fence_vm95 -inf: node95
location loc_fence_vm96 fence_vm96 -inf: node96
colocation Pg-with-ClusterIp inf: ClusterIp postgres_res ## policy: the VIP and the database must run on the same node
order Pg-after-ClusterIp inf: ClusterIp postgres_res ## policy: start the VIP first, then the database
property $id="cib-bootstrap-options" \ ## cluster-wide defaults
dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="true" \
last-lrm-refresh="1347612913" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
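Before running failover tests, this configuration can be sanity-checked with the standard pacemaker command-line tools (a sketch; it assumes crmsh and the pacemaker utilities are installed on a cluster node):

```shell
# Validate the live CIB; prints any configuration errors.
crm_verify --live-check

# One-shot, non-interactive snapshot of node and resource state.
crm_mon -1

# Quick count of the primitives defined above (expected: 5).
crm configure show | grep -c '^primitive'
```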
First, the system status:
============
Last updated: Mon Sep 17 15:28:28 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
postgres_res (ocf::heartbeat:pgsql): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
The main applications are all running on node96, which matches the policies we configured.
On another machine I set up a loop that pings the VIP continuously to check availability:
--- 192.168.11.101 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.311/0.385/0.529/0.079 ms
Mon Sep 17 15:31:28 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.364 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.352 ms
64 bytes from 192.168.11.101: icmp_seq=3 ttl=64 time=0.332 ms
64 bytes from 192.168.11.101: icmp_seq=4 ttl=64 time=0.396 ms
64 bytes from 192.168.11.101: icmp_seq=5 ttl=64 time=0.479 ms
Test 1:
We reboot the node96 host, pinging the VIP the whole time:
[root@dba-test-11-98 ~]# while true ; do date ; ping -c 2 192.168.11.101; done
Mon Sep 17 15:58:26 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.374 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.434 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.374/0.404/0.434/0.030 ms
Mon Sep 17 15:58:27 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.321 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.445 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.321/0.383/0.445/0.062 ms
Mon Sep 17 15:58:28 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.367 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.442 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.367/0.404/0.442/0.042 ms
Mon Sep 17 15:58:29 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.421 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.335/0.378/0.421/0.043 ms
Mon Sep 17 15:58:30 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.330 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.346 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.330/0.338/0.346/0.008 ms
Mon Sep 17 15:58:31 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.280 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.439 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.280/0.359/0.439/0.081 ms
Mon Sep 17 15:58:32 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.308 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.873 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.308/0.590/0.873/0.283 ms
Mon Sep 17 15:58:33 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.631 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.394 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.394/0.512/0.631/0.120 ms
Mon Sep 17 15:58:35 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.02 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.302/0.662/1.023/0.361 ms
Mon Sep 17 15:58:36 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.390 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.404 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.390/0.397/0.404/0.007 ms
Mon Sep 17 15:58:37 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.98 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.320/1.150/1.980/0.830 ms
Mon Sep 17 15:58:38 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.504 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.335/0.419/0.504/0.086 ms
Mon Sep 17 15:58:39 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.514 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.405 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.405/0.459/0.514/0.058 ms
Mon Sep 17 15:58:40 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.771 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.417 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.417/0.594/0.771/0.177 ms
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.609 ms
--- 192.168.11.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=1.08 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.30 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 1.086/1.193/1.300/0.107 ms
Mon Sep 17 15:58:42 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.475 ms
No pings were lost.
Now look at the system status:
============
Last updated: Mon Sep 17 15:56:15 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
postgres_res (ocf::heartbeat:pgsql): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
At 15:56:15 the system is healthy: the PG database and the VIP are running on node96, and both node95 and node96 are online.
At 15:56:18 the PG database has already been stopped on node96; the VIP is still running on node96, and both nodes are still online.
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
Within the same second, 15:56:18, the VIP has been stopped.
============
Last updated: Mon Sep 17 15:56:18 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
Still within 15:56:18, the VIP has already moved to node95, while the ping resource is still running on node96, which means node96's reboot has not reached the power-off stage yet.
============
Last updated: Mon Sep 17 15:56:19 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
============
Last updated: Mon Sep 17 15:56:20 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:19 the ping resource stops on node96; node96 still has not finished shutting down, and fence_vm95 is still running there.
At 15:56:20 the PG database starts successfully on node95; the ping resource has not started on node95 yet, and node96 still has not finished shutting down.
============
Last updated: Mon Sep 17 15:56:20 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:20 the ping resource has not moved to node95 yet; fence_vm95 has by now stopped on node96. node96 still has not finished shutting down.
============
Last updated: Mon Sep 17 15:56:24 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:24 the ping resource still has not started on node95.
At 15:56:25 the ping resource starts successfully on node95. node96's reboot still has not completed.
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 ]
OFFLINE: [ node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
At 15:56:25 node96's network goes down and its state changes to OFFLINE.
At this point the whole failover is complete.
Pings to the VIP were never interrupted during the process.
The entire switchover finished in about 10 seconds; our database carried no load, so this is on the fast side.
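From the crm_mon timestamps above (last fully-on-node96 snapshot at 15:56:15, everything running on node95 at 15:56:25), the observed window can be computed directly; a minimal sketch using GNU date:

```shell
# Timestamps taken from the crm_mon snapshots above.
start=$(date -d "2012-09-17 15:56:15" +%s)   # last snapshot fully on node96
end=$(date -d "2012-09-17 15:56:25" +%s)     # first snapshot fully on node95
echo "failover window: $((end - start))s"    # prints "failover window: 10s"
```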
Now let's look at the logs.
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:9: start
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip -f inet addr add 192.168.11.101/32 brd 192.168.11.101 dev eth0
Sep 17 15:56:18 node95 avahi-daemon[1507]: Registering new address record for 192.168.11.101 on eth0.IPv4.
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip link set eth0 up
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: /usr/lib64/heartbeat/send_arp -i 200 -r 5 -p /var/run/heartbeat/rsctmp/send_arp-192.168.11.101 eth0 192.168.11.101 auto not_used not_used
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_start_0 (call=9, rc=0, cib-update=14, confirmed=true) ok
The VIP has been started, and a gratuitous ARP broadcast was sent at the same time.
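After such a takeover, the result can be double-checked on the new node (a sketch; `eth0` and the VIP address are taken from the resource definition above, and `arping` is the iputils tool):

```shell
# Confirm the VIP is plumbed on the interface IPaddr2 used.
ip -4 addr show dev eth0 | grep '192.168.11.101'

# From another host on the subnet: confirm the VIP now answers from the
# new node's MAC (the gratuitous ARP should have updated neighbor caches).
arping -c 3 -I eth0 192.168.11.101
```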
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:postgres_res:10: start
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:11: monitor
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_monitor_30000 (call=11, rc=0, cib-update=15, confirmed=false) ok
Sep 17 15:56:19 node95 lrmd: [1837]: info: rsc:ping:12: start
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: server starting
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: WARNING: PostgreSQL start SLAVE db role in READ ONLY ,need to contact DBA
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is down
Sep 17 15:56:20 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is started.
Sep 17 15:56:20 node95 crmd[1840]: info: process_lrm_event: LRM operation postgres_res_start_0 (call=10, rc=0, cib-update=16, confirmed=true) ok
The PG start was issued at :18 and confirmed on node95 at :20. The WARNING is from a setting we added for PG standbys: on a cold start no switchover occurs, and the standby serves read-only access.
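Whether an instance really came up as a read-only standby, as the WARNING suggests, can be checked directly (a sketch; the psql path and user are taken from the `postgres_res` definition, and `pg_is_in_recovery()` is a standard PostgreSQL function):

```shell
# Prints 't' on a read-only standby, 'f' on a primary.
/usr/local/pgsql/bin/psql -U postgres -d postgres -Atc 'SELECT pg_is_in_recovery();'
```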
關於叢集元件的介紹請檢視:
先前發過的兩個文件。
叢集的配置資訊:
crm(live)configure# show
node node95 ##/* 叢集的節點node
node node96
primitive ClusterIp ocf:heartbeat:IPaddr2 \ ##/* 定義一個資源,這裡是定義了叢集的VIP
params ip="192.168.11.101" cidr_netmask="32" \
op monitor interval="30s" \
meta target-role="Started" is-managed="true"
primitive fence_vm95 stonith:fence_vmware \ ##/* 定義fence 裝置這個是針對node95的定義
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.95" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive fence_vm96 stonith:fence_vmware \ ##/*定義fence裝置,這個是針對node96的定義
params ipaddr="192.168.10.197" login="user" passwd="passwd" vmware_datacenter="GZ-Offices" vmware_type="esx" action="reboot" port="dba-test-Cos6.2.64-11.96" pcmk_reboot_action="reboot" pcmk_host_list="node95 node96" \
op monitor interval="20" timeout="60s" \
op start interval="0" timeout="60s" \
meta target-role="Started"
primitive ping ocf:pacemaker:ping \ ##/* 定義一個資源,對叢集的各個節點進行ping 檢查網路連通性
params host_list="192.168.11.95 192.168.11.96 192.168.11.101 " multiplier="10" \
op monitor interval="10s" timeout="60s" \
op start interval="0" timeout="60s" \
op stop interval="0" timeout="100s" clone clone-ping ping \
meta target-role="Started"
primitive postgres_res ocf:heartbeat:pgsql \ ##/* 定義資料庫資源,我們選用的是PG資料庫。
params pgctl="/usr/local/pgsql/bin/pg_ctl" psql="/usr/local/pgsql/bin/psql" start_opt="" pgdata="/usr/local/pgsql/data" config="/usr/local/pgsql/data/postgresql.conf" pgdba="postgres" pgdb="postgres" \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="30s" timeout="30s" depth="0" master-max="2" \
meta target-role="Started" is-managed="true"
location ClusterIp-prefer-to-master ClusterIp 50: node96 ##/* 下面的這些是策略定義
location Pg-prefer-to-master postgres_res 50: node96 ## 要求vip 資料庫啟動的位置 在node96上,就是說,node96 應該是主庫
location cli-prefer-ClusterIp ClusterIp \
rule $id="cli-prefer-rule-ClusterIp" 50: #uname eq node96
location cli-prefer-ping ping \
rule $id="cli-prefer-rule-ping" 50: #uname eq node96
location cli-prefer-postgres_res postgres_res \
rule $id="cli-prefer-rule-postgres_res" 50: #uname eq node96
location loc_fence_vm95 fence_vm95 -inf: node95
location loc_fence_vm96 fence_vm96 -inf: node96
colocation Pg-with-ClusterIp inf: ClusterIp postgres_res ##/* 策略定義 要求vip + 資料庫執行在同一個節點上。
order Pg-after-ClusterIp inf: ClusterIp postgres_res ##/* 策略定義,要求vip先啟動,然後再啟動資料庫
property $id="cib-bootstrap-options" \ ##/* 叢集的預設屬性
dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="true" \
last-lrm-refresh="1347612913" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
(END)
先看系統狀態:
============
Last updated: Mon Sep 17 15:28:28 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm95 (stonith:fence_vmware): Started node96
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node96
postgres_res (ocf::heartbeat:pgsql): Started node96
ClusterIp (ocf::heartbeat:IPaddr2): Started node96
主要的應用都執行在node96上,這跟我們配置的策略相符合。
我在另外一臺機器上配置了一個ping 程式碼,不停的ping vip ,來檢查系統的可用性
--- 192.168.11.101 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.311/0.385/0.529/0.079 ms
Mon Sep 17 15:31:28 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.364 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.352 ms
64 bytes from 192.168.11.101: icmp_seq=3 ttl=64 time=0.332 ms
64 bytes from 192.168.11.101: icmp_seq=4 ttl=64 time=0.396 ms
64 bytes from 192.168.11.101: icmp_seq=5 ttl=64 time=0.479 ms
測試1:
我們重啟node96 主機:
期間一直在ping vip [root@dba-test-11-98 ~]# while true ; do date ; ping -c 2 192.168.11.101; done
Mon Sep 17 15:58:26 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.374 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.434 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.374/0.404/0.434/0.030 ms
Mon Sep 17 15:58:27 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.321 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.445 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.321/0.383/0.445/0.062 ms
Mon Sep 17 15:58:28 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.367 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.442 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.367/0.404/0.442/0.042 ms
Mon Sep 17 15:58:29 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.421 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.335/0.378/0.421/0.043 ms
Mon Sep 17 15:58:30 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.330 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.346 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.330/0.338/0.346/0.008 ms
Mon Sep 17 15:58:31 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.280 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.439 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.280/0.359/0.439/0.081 ms
Mon Sep 17 15:58:32 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.308 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.873 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.308/0.590/0.873/0.283 ms
Mon Sep 17 15:58:33 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.631 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.394 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.394/0.512/0.631/0.120 ms
Mon Sep 17 15:58:35 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.02 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.302/0.662/1.023/0.361 ms
Mon Sep 17 15:58:36 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.390 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.404 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.390/0.397/0.404/0.007 ms
Mon Sep 17 15:58:37 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.98 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.320/1.150/1.980/0.830 ms
Mon Sep 17 15:58:38 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.504 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.335/0.419/0.504/0.086 ms
Mon Sep 17 15:58:39 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.514 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.405 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.405/0.459/0.514/0.058 ms
Mon Sep 17 15:58:40 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.771 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=0.417 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.417/0.594/0.771/0.177 ms
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.609 ms
--- 192.168.11.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
Mon Sep 17 15:58:41 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=1.08 ms
64 bytes from 192.168.11.101: icmp_seq=2 ttl=64 time=1.30 ms
--- 192.168.11.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 1.086/1.193/1.300/0.107 ms
Mon Sep 17 15:58:42 CST 2012
PING 192.168.11.101 (192.168.11.101) 56(84) bytes of data.
64 bytes from 192.168.11.101: icmp_seq=1 ttl=64 time=0.475 ms
沒有發生ping 不通的情況。
我們看看系統狀態:
.^[[H^[[J============^M
Last updated: Mon Sep 17 15:56:15 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
ping (ocf::pacemaker:ping): Started node96^M
postgres_res (ocf::heartbeat:pgsql): Started node96^M
ClusterIp (ocf::heartbeat:IPaddr2): Started node96^M
.^[[H^[[J============^M
Last updated: Mon Sep 17 15:56:18 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
ping (ocf::pacemaker:ping): Started node96^M
ClusterIp (ocf::heartbeat:IPaddr2): Started node96^M
.^[[H^[[J============^M
15:56:15秒的時候系統正常,pg資料庫,vip 在node96 上啟動 node95 node96 都線上
15:56:18秒的時候pg資料庫已經在node96 上停止服務了, vip 還執行在node96 上, node95 node96都線上
.^[[H^[[J============^M
Last updated: Mon Sep 17 15:56:18 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
ping (ocf::pacemaker:ping): Started node96^M
15:56:18秒同一秒內 vip 停止服務
.^[[H^[[J============^M
Last updated: Mon Sep 17 15:56:18 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
ping (ocf::pacemaker:ping): Started node96^M
ClusterIp (ocf::heartbeat:IPaddr2): Started node95^M
15:56:18秒 同一秒內 vip 已經切換到node95 上面, ping 服務還是跑在node96 上,說明node96 的重啟還沒有關機。
Last updated: Mon Sep 17 15:56:19 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
ClusterIp (ocf::heartbeat:IPaddr2): Started node95^M
.^[[H^[[J============^M
Last updated: Mon Sep 17 15:56:20 2012^M
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96^M
Stack: openais^M
Current DC: node96 - partition with quorum^M
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14^M
2 Nodes configured, 2 expected votes^M
5 Resources configured.^M
============^M
Online: [ node95 node96 ]^M
fence_vm95 (stonith:fence_vmware): Started node96^M
fence_vm96 (stonith:fence_vmware): Started node95^M
postgres_res (ocf::heartbeat:pgsql): Started node95^M
ClusterIp (ocf::heartbeat:IPaddr2): Started node95^M
15:56:19秒 ping 服務從node96停止, 此時node96 的重啟還沒有關機關機,fence-vm95還在執行。
15:56:20秒 pg 資料庫從node95上啟動成功, ping 服務還沒有在95上啟動,此時node96 還沒有完成關機。
============
Last updated: Mon Sep 17 15:56:20 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
15:56:20 -- the ping service has not yet switched over to node95, and fence_vm95 has now stopped on node96. node96 has still not finished shutting down.
============
Last updated: Mon Sep 17 15:56:24 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
15:56:24 -- the ping service has still not started on node95.
15:56:25 -- the ping service starts successfully on node95. node96's reboot is still not complete.
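The ping resource's effect on placement comes from simple arithmetic: the agent publishes a pingd node attribute equal to the number of reachable hosts in host_list times multiplier. With the three-address host_list and multiplier="10" configured above, full connectivity yields 30. A minimal sketch of that scoring (the reachable count here is an assumption standing in for real ping probes):

```shell
# Sketch of how ocf:pacemaker:ping derives the pingd attribute:
#   pingd = reachable_hosts * multiplier
# The reachable count is assumed, not actually probed.
multiplier=10
reachable=3    # 192.168.11.95, .96 and .101 all answering
echo "pingd=$((reachable * multiplier))"
```

A location rule can then prefer the node with the higher pingd value, which is why losing connectivity drives resources away from a node.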
============
Last updated: Mon Sep 17 15:56:25 2012
Last change: Fri Sep 14 16:55:13 2012 via crmd on node96
Stack: openais
Current DC: node96 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============
Online: [ node95 ]
OFFLINE: [ node96 ]
fence_vm96 (stonith:fence_vmware): Started node95
ping (ocf::pacemaker:ping): Started node95
postgres_res (ocf::heartbeat:pgsql): Started node95
ClusterIp (ocf::heartbeat:IPaddr2): Started node95
15:56:25 -- at this point node96's network has gone down and its status changes to OFFLINE.
With that, the whole system has finished switching over.
Pings to the VIP never dropped during the entire process.
The entire switchover completed in roughly 10 seconds; because our database carries no load, the switchover was on the fast side.
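The figure above can be read straight off the crm_mon timestamps: the VIP stopped at 15:56:18 and node96 was marked OFFLINE at 15:56:25. A trivial sketch of that arithmetic (assumes GNU date):

```shell
# Observed failover window from the crm_mon timestamps above (GNU date).
t_start=$(date -d "15:56:18" +%s)
t_end=$(date -d "15:56:25" +%s)
echo "observed window: $((t_end - t_start)) seconds"
```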
Let's take a look at the backend logs.
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:9: start
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip -f inet addr add 192.168.11.101/32 brd 192.168.11.101 dev eth0
Sep 17 15:56:18 node95 avahi-daemon[1507]: Registering new address record for 192.168.11.101 on eth0.IPv4.
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: ip link set eth0 up
Sep 17 15:56:18 node95 IPaddr2(ClusterIp)[2037]: INFO: /usr/lib64/heartbeat/send_arp -i 200 -r 5 -p /var/run/heartbeat/rsctmp/send_arp-192.168.11.101 eth0 192.168.11.101 auto not_used not_used
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_start_0 (call=9, rc=0, cib-update=14, confirmed=true) ok
The VIP has started, and a gratuitous ARP broadcast was sent at the same time.
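When watching a switchover like this it is handy to script the check instead of eyeballing crm_mon. A minimal sketch that extracts the node hosting ClusterIp from crm_mon-style output (the sample line stands in for live `crm_mon -1` output):

```shell
# Find which node hosts ClusterIp by parsing crm_mon-style output.
# The sample line stands in for real `crm_mon -1` output.
status='ClusterIp (ocf::heartbeat:IPaddr2): Started node95'
node=$(printf '%s\n' "$status" | awk '/ClusterIp/ {print $NF}')
echo "VIP currently on: $node"
```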
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:postgres_res:10: start
Sep 17 15:56:18 node95 lrmd: [1837]: info: rsc:ClusterIp:11: monitor
Sep 17 15:56:18 node95 crmd[1840]: info: process_lrm_event: LRM operation ClusterIp_monitor_30000 (call=11, rc=0, cib-update=15, confirmed=false) ok
Sep 17 15:56:19 node95 lrmd: [1837]: info: rsc:ping:12: start
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: server starting
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: WARNING: PostgreSQL start SLAVE db role in READ ONLY ,need to contact DBA
Sep 17 15:56:19 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is down
Sep 17 15:56:20 node95 pgsql(postgres_res)[2095]: INFO: PostgreSQL is started.
Sep 17 15:56:20 node95 crmd[1840]: info: process_lrm_event: LRM operation postgres_res_start_0 (call=10, rc=0, cib-update=16, confirmed=true) ok
By second 18 PG was already starting on node95 (start confirmed at 15:56:20). The WARNING in the log comes from a check we added for the PG standby: if this is a cold start, we do not perform a switchover, and at that point the standby is open for read-only access only.
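The guard behind that WARNING can be sketched as follows. On a real node the role would be detected with `psql -Atc 'SELECT pg_is_in_recovery();'`, which prints `t` on a read-only standby; here that result is hard-coded so the sketch stays self-contained:

```shell
# Standby guard sketch: refuse to treat a read-only standby as a master.
# 't' is hard-coded in place of the real psql query result.
in_recovery=t
if [ "$in_recovery" = t ]; then
    echo "WARNING: standby started read-only; contact the DBA before promotion"
fi
```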
From the "ITPUB blog". Link: http://blog.itpub.net/133735/viewspace-744208/. Please credit the source when reprinting; otherwise legal liability will be pursued.