How To Deal With Split Brain In Oracle 19c RAC
Overview: Oracle 19c RAC handles split-brain eviction the same way Oracle 12c RAC does. The priority for node survival is Cohort Size > Weight > lowest numbered node.
1. How Oracle 19c RAC resolves split-brain eviction in different scenarios:
Scenario 1) When the cohort sizes differ, the larger cohort survives (weight is ignored).
Scenario 2) When the cohort sizes are equal and the weights are equal, the lowest numbered node survives (note: a node whose cohort is NULL is also evicted).
Scenario 3) When the cohort sizes are equal but the weights differ, the cohort with the higher weight survives (the lowest numbered node rule is ignored).
Summary: the priority for node survival is Cohort Size > Weight > lowest numbered node, as illustrated by the small sketch below. (See the appendix at the end of this article for a brief explanation of cohort size and node number.)
# All tests in this article are based on Oracle 19.9 RU.
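To make this priority concrete, here is a minimal illustrative sketch (plain bash, not Oracle code) of how two cohorts would be compared; the inputs stand for the cohort sizes, cumulative goldstars weights and lowest node numbers that show up later in the clssnmrCheckNodeWeight and clssnmCheckDskInfo trace lines:

# Illustrative sketch only: survivor selection between two cohorts A and B.
# Arguments: sizeA weightA lowestNodeA sizeB weightB lowestNodeB
pick_surviving_cohort() {
  local size_a=$1 weight_a=$2 low_a=$3 size_b=$4 weight_b=$5 low_b=$6
  if   (( size_a   != size_b   )); then (( size_a   > size_b   )) && echo "cohort A" || echo "cohort B"   # 1) the larger cohort wins
  elif (( weight_a != weight_b )); then (( weight_a > weight_b )) && echo "cohort A" || echo "cohort B"   # 2) the higher weight wins
  else                                  (( low_a    < low_b    )) && echo "cohort A" || echo "cohort B"   # 3) the lowest numbered node wins
  fi
}
pick_surviving_cohort 1 0 1  1 1 2   # equal sizes, cohort B carries one goldstar => prints "cohort B" (scenario 3)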
2. Why write this article?
1) When split brain occurs, Oracle Clusterware resolves it automatically, so why bother understanding how this automatic resolution works?
1.1 When a split-brain failure occurs, you can find the root cause faster and more accurately, because you know which node should have been evicted during the resolution.
1.2 You can improve the stability and robustness of your most important workloads: run the critical workload on a specific node, and that node will be preserved when a split brain occurs.
2) Why did Oracle 12c introduce the weight concept?
2.1 More control. By configuring a weight, customers can decide which specific node survives when a RAC split brain is resolved.
A weight can be attached to a specific server (the node with css_critical set to yes survives), to a specific database or service (the node hosting a database or service with css_critical set to yes survives), or to other specific resources.
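For example, CSS_CRITICAL can be set at the server level with crsctl (exactly as done in the experiments below) and, on recent releases, at the database level with srvctl. A minimal sketch; the database name orclcdb is only a placeholder, and the availability of the -css_critical option should be verified on your release with srvctl modify database -help:

# Server level: mark node rac2 as critical, then restart Oracle High Availability Services for it to take effect
[root@rac2 bin]# ./crsctl set server css_critical yes
[root@rac2 bin]# ./crsctl stop has
[root@rac2 bin]# ./crsctl start has
[root@rac2 bin]# ./crsctl get server css_critical
# Database level (placeholder database name orclcdb): mark a database as critical instead of a whole server
[oracle@rac2 ~]$ srvctl modify database -db orclcdb -css_critical YES
[oracle@rac2 ~]$ srvctl config database -db orclcdb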
3. Proving the conclusions by experiment.
3.1 Experimental proof of Scenario 1): when the cohort sizes differ, the larger cohort survives (weight is ignored).
Turning the conclusion into a test scenario:
A 3-node RAC (so the cohort sizes will differ), with the server weight configured on node 2. Simulated failure: node 2's private network goes down.
If the cohort of node 1 and node 3 (the larger cohort) survives and node 2 (the higher weight) is evicted, the claim holds: when cohort sizes differ, the larger cohort survives (weight is ignored).
If the cohort of node 1 and node 3 (the larger cohort) is evicted and node 2 (the higher weight) survives, the claim does not hold.
A 3-node RAC, with the server weight configured on node 2:

[root@rac2 bin]# ./oifcfg getif
enp0s3  10.0.0.0  global  public
enp0s8  192.168.56.0  global  cluster_interconnect,asm
[root@rac2 bin]# ./olsnodes -s -n
rac1    1       Active
rac2    2       Active
rac3    3       Active
[root@rac2 bin]# ./crsctl get server css_critical
CRS-5092: Current value of the server attribute CSS_CRITICAL is yes.
[root@rac2 bin]# date
Tue Mar  1 05:09:33 EST 2022
[root@rac2 bin]# ifdown enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/13)
[grid@rac3:/u01/app/grid/diag/crs/rac3/crs/trace]$olsnodes -s -n
rac1    1       Active
rac2    2       Inactive
rac3    3       Active

# From the experiment, the larger cohort (node 1 and node 3) survives, and node 2, which carries the higher weight, is evicted.
## Analysis of ocssd.trc to confirm the conclusion: when the cohort sizes differ, the larger cohort survives (weight is ignored).
# The evicted node's trace is usually the most valuable for analysis.
Node 1 and node 3 have the same weight, which differs from node 2 (the node with the server weight): this can be read from the clssnmrCheckNodeWeight lines.
Node 1 and node 3 form the larger cohort (cohort size = 2): Surviving cohort: 1,3
Node 2 forms the smaller cohort (cohort size = 1): My cohort: 2
# The information above is obtained by the ocssd.bin process from the voting disk.
The larger cohort of node 1 and node 3 (cohort size = 2) survives and the smaller cohort of node 2 (cohort size = 1) is evicted: see the clssnmCheckDskInfo lines.

# ocssd.trc (node 2): node 2 was evicted, so its ocssd.trc is analyzed
2022-03-01 05:10:05.688 : CSSD:1632941824: clssnmrCheckNodeWeight: node(1) has weight stamp(541570462) pebbles (0) goldstars (0) flags (3) SpoolVersion (0)
# Node 2 has the server weight configured, so its goldstars value changed from 0 to 1.
2022-03-01 05:10:05.688 : CSSD:1632941824: clssnmrCheckNodeWeight: node(2) has weight stamp(541570462) pebbles (0) goldstars (1) flags (b) SpoolVersion (0)
2022-03-01 05:10:05.688 : CSSD:1632941824: clssnmrCheckNodeWeight: node(3) has weight stamp(541570462) pebbles (0) goldstars (0) flags (3) SpoolVersion (0)
2022-03-01 05:10:05.688 : CSSD:1632941824: clssnmrCheckNodeWeight: Server pool version not consistent
2022-03-01 05:10:05.688 : CSSD:1632941824: clssnmrCheckNodeWeight: stamp(541570462), completed(3/3)
2022-03-01 05:10:05.688 : CSSD:1632941824: [ INFO] clssnmCheckDskInfo: My cohort: 2            # cohort size = 1
2022-03-01 05:10:05.688 : CSSD:1632941824: [ INFO] clssnmCheckDskInfo: Surviving cohort: 1,3   # cohort size = 2
2022-03-01 05:10:05.688 : CSSD:1632941824: [ INFO] clssnmChangeState: oldstate 3 newstate 0 clssnmr.c 3075
2022-03-01 05:10:05.688 : CSSD:1632941824: (:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with leader 2, rac2, loses to cohort of 2 nodes led by node 1, rac1, based on map type 2 since the cohort is larger
# The larger cohort (cohort of 2 nodes led by node 1) is the one that stays active.
## Conclusion: when the cohort sizes differ, the larger cohort survives (weight is ignored).
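When repeating this kind of analysis, the relevant lines can be pulled straight out of ocssd.trc. A minimal sketch, assuming the default Grid Infrastructure trace directory layout shown above (adjust the host name in the path for your node):

# Extract the weight check, cohort calculation, and final eviction decision from the evicted node's trace
[grid@rac2 ~]$ cd /u01/app/grid/diag/crs/rac2/crs/trace
[grid@rac2 trace]$ grep -E "clssnmrCheckNodeWeight|clssnmCheckDskInfo|CSSNM00008" ocssd.trc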
3.2 Experimental proof of Scenario 2): when the cohort sizes are equal and the weights are equal, the lowest numbered node survives (note: a node with a NULL cohort is also evicted).
Turning the conclusion into a test scenario:
A 2-node RAC (equal cohort sizes), with no weight configured on either node. Simulated failure: node 2's private network goes down.
If node 1 survives, the claim holds: when cohort sizes and weights are equal, the lowest numbered node survives.
If node 1 is evicted, the claim does not hold.
A 2-node RAC, with no server weight configured:

[root@rac2 bin]# ./oifcfg getif
enp0s3  10.0.0.0  global  public
enp0s8  192.168.56.0  global  cluster_interconnect,asm
[root@rac2 bin]# ./olsnodes -s -n
rac1    1       Active
rac2    2       Active
[root@rac2 bin]# ./crsctl get server css_critical
CRS-5092: Current value of the server attribute CSS_CRITICAL is no.
[root@rac2 bin]# ifdown enp0s8
[root@rac1 bin]# ./olsnodes -s -n
rac1    1       Active
rac2    2       Inactive

# From the experiment, the lowest numbered node (node 1) survives.
## Analysis of ocssd.trc to confirm the conclusion: when the cohort sizes and the weights are equal, the lowest numbered node survives (note: a node with a NULL cohort is also evicted).
# The evicted node's trace is usually the most valuable for analysis.
Because cohort = NULL appeared in this test, there was concern that the result might be misleading, so the test was run twice.
First run: 2022-02-28 21:07:43. Node 2 had already been removed from the cluster before the cohort check, hence clssnmCheckDskInfo: My cohort: NULL (the test was repeated to rule out a wrong conclusion). This also shows that the cohort can be NULL: a NULL cohort means the node was already evicted before the cohort check.
Second run: 2022-02-28 22:59:09. The second run cleanly demonstrates that when the cohort sizes and the weights are equal, the lowest numbered node survives.

# ocssd.trc (node 2), first run (2022-02-28 21:07:43): node 2 was evicted, so its ocssd.trc is analyzed
2022-02-28 21:07:43.770 : CSSD:1114588928: [ INFO] clssnmvDHBValidateNCopy: node 1, rac1, has a disk HB, but no network HB, DHB has rcfg 541548100, wrtcnt, 599339, LATS 524224, lastSeqNo 599336, uniqueness 1646100093, timestamp 1646100463/526874
2022-02-28 21:07:43.771 : CSSD:1103550208: clssnmrCheckNodeWeight: node(1) has weight stamp(541548099) pebbles (0) goldstars (0) flags (3) SpoolVersion (0)
2022-02-28 21:07:43.771 : CSSD:1103550208: clssnmrCheckNodeWeight: node(2) has weight stamp(541548099) pebbles (0) goldstars (0) flags (b) SpoolVersion (0)
2022-02-28 21:07:43.771 : CSSD:1103550208: clssnmrCheckNodeWeight: Server pool version not consistent
2022-02-28 21:07:43.771 : CSSD:1103550208: clssnmrCheckNodeWeight: stamp(541548099), completed(2/2)
# Why does clssnmCheckDskInfo report My cohort: NULL here, i.e. why is the node number NULL?
# Answer: a NULL node number means the node has already been evicted from the cluster, so its ocssd process can no longer obtain this information from the voting disk. The trailing message (local node is already evicted) confirms that node 2 has indeed been evicted.
Background: each node's ocssd process records the node name and node number in the voting disk (as a queue) so that the nodes can communicate with one another.
2022-02-28 21:07:43.771 : CSSD:1103550208: [ INFO] clssnmCheckDskInfo: My cohort: NULL
2022-02-28 21:07:43.771 : CSSD:1103550208: [ INFO] clssnmCheckDskInfo: Surviving cohort: 1
2022-02-28 21:07:43.771 : CSSD:1103550208: [ INFO] clssnmChangeState: oldstate 3 newstate 0 clssnmr.c 3075
2022-02-28 21:07:43.771 : CSSD:1103550208: [ ERROR] clssscWriteCAlogEvent: CALOG init not done
2022-02-28 21:07:43.771 : CSSD:1103550208: (:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 0 nodes with leader 65535, , loses to cohort of 1 nodes led by node 1, rac1, based on map type 2 since the local node is already evicted

# ocssd.trc (node 2), second run (2022-02-28 22:59:09): node 2 was evicted, so its ocssd.trc is analyzed
2022-02-28 22:59:09.244 : CSSD:3202848512: [ INFO] clssnmvDHBValidateNCopy: node 1, rac1, has a disk HB, but no network HB, DHB has rcfg 541554728, wrtcnt, 618613, LATS 7209584, lastSeqNo 618610, uniqueness 1646106722, timestamp 1646107149/7212544
The weight check shows that node 1 and node 2 have the same weight, so the lowest numbered node survives [node 1 survives: My cohort: 1 (node 1) is lower than My cohort: 2 (node 2)].
2022-02-28 22:59:09.244 : CSSD:3196540672: clssnmrCheckNodeWeight: node(1) has weight stamp(541554727) pebbles (0) goldstars (0) flags (3) SpoolVersion (0)
2022-02-28 22:59:09.244 : CSSD:3196540672: clssnmrCheckNodeWeight: node(2) has weight stamp(541554727) pebbles (0) goldstars (0) flags (b) SpoolVersion (0)
2022-02-28 22:59:09.244 : CSSD:3196540672: clssnmrCheckNodeWeight: Server pool version not consistent
2022-02-28 22:59:09.244 : CSSD:3196540672: clssnmrCheckNodeWeight: stamp(541554727), completed(2/2)
2022-02-28 22:59:09.244 : CSSD:3196540672: [ INFO] clssnmCheckDskInfo: My cohort: 2
2022-02-28 22:59:09.244 : CSSD:3196540672: [ INFO] clssnmCheckDskInfo: Surviving cohort: 1
2022-02-28 22:59:09.244 : CSSD:3196540672: [ INFO] clssnmChangeState: oldstate 3 newstate 0 clssnmr.c 3075
2022-02-28 22:59:09.245 : CSSD:3196540672: (:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with leader 2, rac2, loses to cohort of 1 nodes led by node 1, rac1, based on map type 2 since the cohort is the only one with public network access

## Scenario 2) conclusion: when the cohort sizes and the weights are equal, the lowest numbered node survives (note: a node with a NULL cohort is also evicted).
3.3 Experimental proof of Scenario 3): when the cohort sizes are equal but the weights differ, the higher weight survives (the lowest numbered node rule is ignored).
Turning the conclusion into a test scenario:
A 2-node RAC (equal cohort sizes), with the server weight configured on node 2. Simulated failure: node 2's private network goes down.
If node 2 survives, the claim holds: when cohort sizes are equal but weights differ, the higher weight survives.
If node 2 is evicted, the claim does not hold.
A 2-node RAC, with the server weight configured on node 2:

[root@rac2 bin]# ./oifcfg getif
enp0s3  10.0.0.0  global  public
enp0s8  192.168.56.0  global  cluster_interconnect,asm
[root@rac2 bin]# ./olsnodes -s -n
rac1    1       Active
rac2    2       Active
[root@rac2 bin]# ./crsctl get server css_critical
CRS-5092: Current value of the server attribute CSS_CRITICAL is no.
[root@rac2 bin]# ./crsctl set server css_critical yes
CRS-4416: Server attribute 'CSS_CRITICAL' successfully changed. Restart Oracle High Availability Services for new value to take effect.
[root@rac2 bin]# ./crsctl stop has
[root@rac2 bin]# ./crsctl start has
[root@rac2 bin]# ./crsctl get server css_critical
CRS-5092: Current value of the server attribute CSS_CRITICAL is yes.
[root@rac2 bin]# date
Tue Mar  1 02:16:41 EST 2022
[root@rac2 bin]# ifdown enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9)
[root@rac2 bin]# ./olsnodes -s -n
rac1    1       Inactive
rac2    2       Active

# From the experiment, when the cohort sizes are equal, the cohort with the higher weight survives.
## Analysis of ocssd.trc to confirm the conclusion: when the cohort sizes are equal but the weights differ, the higher weight survives (the lowest numbered node rule is ignored).
# The evicted node's trace is usually the most valuable for analysis.
Because cohort = NULL appeared in this test as well, the test was run twice to rule out a misleading result.
First run: 2022-03-01 02:17:14. Node 1 had already been removed from the cluster before the weight check, hence clssnmCheckDskInfo: My cohort: NULL (the test was repeated to rule out a wrong conclusion). It again shows that the cohort can be NULL: a NULL cohort means the node was already evicted before the weight check.
Second run: 2022-03-01 03:18:52. On the 2-node RAC (equal cohort sizes), node 2 has the server weight configured and therefore the higher weight (goldstars); node 2 survives, which proves that when the cohort sizes are equal but the weights differ, the higher weight survives.
# Whether the evicted node's cohort is NULL or 1, RAC keeps the node with the server weight configured alive.

# First run (2022-03-01 02:17:14): ocssd.trc of node 1, because node 1 was evicted
2022-03-01 02:17:14.359 : CSSD:345351936: [ INFO] clssnmvDHBValidateNCopy: node 2, rac2, has a disk HB, but no network HB, DHB has rcfg 541554732, wrtcnt, 633947, LATS 19097864, lastSeqNo 633944, uniqueness 1646118528, timestamp 1646119033/19094454
2022-03-01 02:17:14.359 : CSSD:132638464: clssnmrCheckNodeWeight: node(1) has weight stamp(541554731) pebbles (0) goldstars (0) flags (3) SpoolVersion (0)
2022-03-01 02:17:14.359 : CSSD:132638464: clssnmrCheckNodeWeight: node(2) has weight stamp(541554731) pebbles (0) goldstars (1) flags (b) SpoolVersion (0)
2022-03-01 02:17:14.359 : CSSD:132638464: clssnmrCheckNodeWeight: Server pool version not consistent
2022-03-01 02:17:14.359 : CSSD:132638464: clssnmrCheckNodeWeight: stamp(541554731), completed(2/2)
2022-03-01 02:17:14.360 : CSSD:132638464: [ INFO] clssnmCheckDskInfo: My cohort: NULL
2022-03-01 02:17:14.360 : CSSD:132638464: [ INFO] clssnmCheckDskInfo: Surviving cohort: 2
2022-03-01 02:17:14.360 : CSSD:132638464: [ INFO] clssnmChangeState: oldstate 3 newstate 0 clssnmr.c 3075
2022-03-01 02:17:14.360 : CSSD:132638464: [ ERROR] clssscWriteCAlogEvent: CALOG init not done
The reason the cohort is NULL is given by the line below: by the time the cohorts were checked, node 1 (the local node) had already been evicted.
2022-03-01 02:17:14.360 : CSSD:132638464: (:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 0 nodes with leader 65535, , loses to cohort of 1 nodes led by node 2, rac2, based on map type 2 since the local node is already evicted

# Second run (2022-03-01 03:18:52): ocssd.trc of node 1, because node 1 was evicted
2022-03-01 03:18:52.125 : CSSD:1275066112: [ INFO] clssnmHBInfo: This node has lost connectivity with all the other nodes in the cluster, therefore setting Network Timeout = 0
2022-03-01 03:18:52.127 : CSSD:1487120128: [ INFO] clssnmvDHBValidateNCopy: node 2, rac2, has a disk HB, but no network HB, DHB has rcfg 541570458, wrtcnt, 644669, LATS 22795554, lastSeqNo 644666, uniqueness 1646122452, timestamp 1646122731/22792384
2022-03-01 03:18:52.127 : CSSD:1273489152: clssnmrCheckNodeWeight: node(1) has weight stamp(541570457) pebbles (1) goldstars (0) flags (3) SpoolVersion (0)
2022-03-01 03:18:52.127 : CSSD:1273489152: clssnmrCheckNodeWeight: node(2) has weight stamp(541570457) pebbles (0) goldstars (1) flags (b) SpoolVersion (0)
2022-03-01 03:18:52.127 : CSSD:1273489152: clssnmrCheckNodeWeight: Server pool version not consistent
2022-03-01 03:18:52.127 : CSSD:1273489152: clssnmrCheckNodeWeight: stamp(541570457), completed(2/2)
2022-03-01 03:18:52.127 : CSSD:1273489152: [ INFO] clssnmCheckDskInfo: My cohort: 1
2022-03-01 03:18:52.127 : CSSD:1273489152: [ INFO] clssnmCheckDskInfo: Surviving cohort: 2
2022-03-01 03:18:52.127 : CSSD:1273489152: [ INFO] clssnmChangeState: oldstate 3 newstate 0 clssnmr.c 3075
The cohort sizes are equal and node 2 has the higher weight; even though node 1's node number (1) is lower than node 2's (2), node 2 survives. This shows that when a weight is configured, the weight decides which cohort is kept and the node number is ignored, which is exactly the Scenario 3 conclusion.
2022-03-01 03:18:52.127 : CSSD:1273489152: (:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with leader 1, rac1, loses to cohort of 1 nodes led by node 2, rac2, based on map type 2 since the cohort has higher cumulative gold star weight

## Conclusion for Scenario 3): when the cohort sizes are equal but the weights differ, the higher weight survives (the lowest numbered node rule is ignored).
4. Appendix
Key terms explained:

# lowest numbered node
The node number is recorded, together with the node name, in the voting disk by each node's ocssd process (as a queue), so that the nodes can communicate with one another.
[root@rac1 bin]# ./olsnodes -n
rac1    1    <= node number
rac2    2    <= node number
# The lines below are excerpts from ocssd.trc
clssnmCheckDskInfo: My cohort: 1    <= this '1' is the node number; it usually corresponds to node 1, but not always. For example, in the tests above node 2 reports My cohort: NULL, because by the time the weights were checked node 2 had already been removed from the cluster and could no longer read the node names and node numbers from the voting disk.

# cohort size
When the cohort sizes are equal (excerpt from ocssd.trc on a 2-node cluster):
My cohort: 1         <= node number = 1; cohort size = 1
Surviving cohort: 2  <= node number = 2; cohort size = 1
When the cohort sizes differ (excerpt from ocssd.trc on a 4-node cluster):
My cohort: 1             <= node number = 1; cohort size = 1
Surviving cohort: 2,3,4  <= node numbers = 2, 3, 4; cohort size = 3 (the count of nodes 2, 3 and 4)
5. References
12c: Which Node Will Survive when Split Brain Takes Place (Doc ID 1951726.1)
Split Brain: What's new in Oracle Database 12.1.0.2c?
########################################################################################
All rights reserved. This article may be reposted, but the source must be cited as a link; otherwise legal action will be taken. [QQ group: 53993419]
QQ:14040928 E-mail:dbadoudou@163.com
Original link: http://blog.itpub.net/26442936/viewspace-2868705/
########################################################################################