Oracle 11gR2 RAC Node Crash Analysis
Environment: AIX 7100 (AIX 7.1)
Oracle 11gR2 RAC
Exact version: 11.2.0.4
Symptom:
CRS on node 2 hung and crsctl commands were completely unresponsive. The CRS processes were killed outright and the host was rebooted, but the VIP did not fail over to node 1.
Approach:
1. Check the DB alert log and related trace files.
2. Check the "errpt -a" output on all nodes.
3. Check the GI logs on all nodes for the time of the problem:
<GRID_HOME>/log/<host>/alert*.log
<GRID_HOME>/log/<host>/crsd/crsd.log
<GRID_HOME>/log/<host>/cssd/ocssd.log
<GRID_HOME>/log/<host>/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log
<GRID_HOME>/log/<host>/agent/ohasd/oracssdagent_root/oracssdagent_root.log
/etc/oracle/lastgasp/*, or /var/opt/oracle/lastgasp/* (if present)
Note: if CRS initiated the host reboot, a record is appended to a file under /etc/oracle/lastgasp/.
4. Check the LMON, LMS* and LMD0 trace files on all nodes for the time of the problem.
5. Check all OSWatcher (OSW) output on all nodes for the time of the problem.
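The checklist above is easy to script for repeat incidents. A minimal collection sketch; the function name, argument layout, and output location are our own choices, not part of any Oracle tool:

```shell
#!/bin/sh
# Hypothetical collector for the GI logs listed above; run as root on each node.
# collect_gi_logs <grid_home> <hostname> <output_dir>
collect_gi_logs() {
    grid_home=$1; host=$2; out=$3
    mkdir -p "$out" || return 1
    for f in "$grid_home/log/$host"/alert*.log \
             "$grid_home/log/$host/crsd/crsd.log" \
             "$grid_home/log/$host/cssd/ocssd.log" \
             "$grid_home/log/$host/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log" \
             "$grid_home/log/$host/agent/ohasd/oracssdagent_root/oracssdagent_root.log"; do
        [ -f "$f" ] && cp "$f" "$out/"   # skip silently if a log is absent
    done
    # errpt is AIX-only; capture the OS error report when available
    command -v errpt >/dev/null 2>&1 && errpt -a > "$out/errpt_a.txt"
    return 0
}
```

Run it once per node, e.g. `collect_gi_logs /u01/app/11.2.0/grid dbrac1 /tmp/gi_diag` (the grid home path here is an assumption).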
The detailed analysis is as follows:
Node 1 DB alert log:
Tue Mar 25 12:59:07 2014
Thread 1 advanced to log sequence 245 (LGWR switch)
Current log# 2 seq# 245 mem# 0: +SYSDG/dbracdb/onlinelog/group_2.264.840562709
Current log# 2 seq# 245 mem# 1: +SYSDG/dbracdb/onlinelog/group_2.265.840562727
Tue Mar 25 12:59:20 2014
Archived Log entry 315 added for thread 1 sequence 244 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:14:54 2014
IPC Send timeout detected. Sender: ospid 6160700 [oracle@dbrac1 (LMS0)]
Receiver: inst 2 binc 291585594 ospid 11010320
IPC Send timeout to 2.1 inc 50 for msg type 65518 from opid 12
Tue Mar 25 13:14:59 2014
Communications reconfiguration: instance_number 2
Tue Mar 25 13:15:01 2014
IPC Send timeout detected. Sender: ospid 12452050 [oracle@dbrac1 (LMS1)]
Receiver: inst 2 binc 291585600 ospid 11534636
IPC Send timeout to 2.2 inc 50 for msg type 65518 from opid 13
Tue Mar 25 13:15:22 2014
IPC Send timeout detected. Sender: ospid 10682630 [oracle@dbrac1 (TNS V1-V3)]
Receiver: inst 2 binc 50 ospid 6095056
Tue Mar 25 13:15:25 2014
Detected an inconsistent instance membership by instance 1
Evicting instance 2 from cluster
Waiting for instances to leave: 2
Tue Mar 25 13:15:26 2014
Dumping diagnostic data in directory=[cdmp_20140325131526], requested by (instance=2, osid=8192018 (LMD0)), summary=[abnormal instance termination].
Tue Mar 25 13:15:42 2014
Reconfiguration started (old inc 50, new inc 54)
List of instances:
1 (myinst: 1)
...
Tue Mar 25 13:15:52 2014
Archived Log entry 316 added for thread 2 sequence 114 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:15:53 2014
ARC3: Archiving disabled thread 2 sequence 115
Archived Log entry 317 added for thread 2 sequence 115 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:16:37 2014
Thread 1 advanced to log sequence 246 (LGWR switch)
Current log# 3 seq# 246 mem# 0: +SYSDG/dbracdb/onlinelog/group_3.266.840562735
Current log# 3 seq# 246 mem# 1: +SYSDG/dbracdb/onlinelog/group_3.267.840562747
Tue Mar 25 13:16:46 2014
Decreasing number of real time LMS from 2 to 0
Tue Mar 25 13:16:51 2014
Archived Log entry 318 added for thread 1 sequence 245 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:20:50 2014
IPC Send timeout detected. Sender: ospid 9306248 [oracle@dbrac1 (PING)]
Receiver: inst 2 binc 291585377 ospid 2687058
Tue Mar 25 13:30:08 2014
Thread 1 advanced to log sequence 247 (LGWR switch)
Current log# 1 seq# 247 mem# 0: +SYSDG/dbracdb/onlinelog/group_1.262.840562653
Current log# 1 seq# 247 mem# 1: +SYSDG/dbracdb/onlinelog/group_1.263.840562689
Tue Mar 25 13:30:20 2014
Archived Log entry 319 added for thread 1 sequence 246 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:45:23 2014
Thread 1 advanced to log sequence 248 (LGWR switch)
Current log# 2 seq# 248 mem# 0: +SYSDG/dbracdb/onlinelog/group_2.264.840562709
Current log# 2 seq# 248 mem# 1: +SYSDG/dbracdb/onlinelog/group_2.265.840562727
Node 2 DB alert log:
Tue Mar 25 12:07:15 2014
Archived Log entry 309 added for thread 2 sequence 112 ID 0xffffffff82080958 dest 1:
Tue Mar 25 12:22:22 2014
Dumping diagnostic data in directory=[cdmp_20140325122222], requested by (instance=1, osid=7012828), summary=[incident=384673].
Tue Mar 25 12:45:21 2014
Thread 2 advanced to log sequence 114 (LGWR switch)
Current log# 6 seq# 114 mem# 0: +SYSDG/dbracdb/onlinelog/group_6.274.840563009
Current log# 6 seq# 114 mem# 1: +SYSDG/dbracdb/onlinelog/group_6.275.840563017
Tue Mar 25 12:45:22 2014
Archived Log entry 313 added for thread 2 sequence 113 ID 0xffffffff82080958 dest 1:
Tue Mar 25 13:14:57 2014
IPC Send timeout detected. Receiver ospid 11010320
Tue Mar 25 13:14:57 2014
Errors in file /oraclelog/diag/rdbms/dbracdb/dbracdb2/trace/dbracdb2_lms0_11010320.trc:
IPC Send timeout detected. Receiver ospid 11534636 [
Tue Mar 25 13:15:01 2014
Errors in file /oraclelog/diag/rdbms/dbracdb/dbracdb2/trace/dbracdb2_lms1_11534636.trc:
Tue Mar 25 13:15:25 2014
LMS0 (ospid: 11010320) has detected no messaging activity from instance 1
LMS0 (ospid: 11010320) issues an IMR to resolve the situation
Please check LMS0 trace file for more detail.
Tue Mar 25 13:15:25 2014
Suppressed nested communications reconfiguration: instance_number 1
Detected an inconsistent instance membership by instance 1
Tue Mar 25 13:15:25 2014
Received an instance abort message from instance 1
Please check instance 1 alert and LMON trace files for detail.
LMD0 (ospid: 8192018): terminating the instance due to error 481
Tue Mar 25 13:15:26 2014
ORA-1092 : opitsk aborting process
Tue Mar 25 13:15:29 2014
System state dump requested by (instance=2, osid=8192018 (LMD0)), summary=[abnormal instance termination].
System State dumped to trace file /oraclelog/diag/rdbms/dbracdb/dbracdb2/trace/dbracdb2_diag_9699724_20140325131529.trc
Instance terminated by LMD0, pid = 8192018
Node 1 OSW PRVTNET log:
zzz ***Tue Mar 25 13:12:19 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.1
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac2-priv (192.168.100.2) 1 ms 0 ms *
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:12:31 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.1
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 * * *
2 * * *
3 * dbrac2-priv (192.168.100.2) 0 ms *
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:13:17 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.1
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 * * *
2 * * *
3 dbrac2-priv (192.168.100.2) 0 ms * *
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:14:04 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.1
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 * * * <===============================Note: * means a traceroute probe got no response; the three *'s are the three probes sent for that hop
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
8 dbrac2-priv (192.168.100.2) 0 ms 0 ms *
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:16:01 BEIST 2014 <====================================This snapshot was taken 2 minutes after the previous one: a gap in the OSW data.
trying to get source for 192.168.100.1
source should be 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.1
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 * dbrac2-priv (192.168.100.2) 0 ms 0 ms
Node 2 OSW PRVTNET log:
zzz ***Tue Mar 25 13:14:39 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.2
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 * dbrac1-priv (192.168.100.1) 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.2
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 dbrac2-priv (192.168.100.2) 1 ms 0 ms 0 ms
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:14:51 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.2
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 * dbrac1-priv (192.168.100.1) 0 ms 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.2
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 dbrac2-priv (192.168.100.2) 1 ms 0 ms 0 ms
zzz ***Tue Mar 25 13:14:56 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.2
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 * * dbrac1-priv (192.168.100.1) 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.2
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 dbrac2-priv (192.168.100.2) 1 ms 0 ms 0 ms
zzz ***Warning. Traceroute response is spanning snapshot intervals.
zzz ***Tue Mar 25 13:15:08 BEIST 2014
trying to get source for 192.168.100.1
source should be 192.168.100.2
traceroute to 192.168.100.1 (192.168.100.1) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 dbrac1-priv (192.168.100.1) 1 ms * 0 ms
trying to get source for 192.168.100.2
source should be 192.168.100.2
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.2 (192.168.100.2), 30 hops max
outgoing MTU = 1500
1 dbrac2-priv (192.168.100.2) 1 ms 0 ms 0 ms
zzz ***Warning. Traceroute response is spanning snapshot intervals.
Analysis:
An IPC send timeout occurred between instance 1 and instance 2, after which instance 2 was evicted from the cluster.
The traceroute output shows that private-interconnect communication between the nodes was failing at the time:
traceroute to 192.168.100.2 (192.168.100.2) from 192.168.100.1 (192.168.100.1), 30 hops max
outgoing MTU = 1500
1 * * * <===============================Note: * means a traceroute probe got no response; the three *'s are the three probes sent for that hop
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
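Timed-out hops like these can be counted mechanically across an OSW capture instead of being eyeballed. A sketch, with a function name of our own, that counts hop lines where all three probes timed out:

```shell
# Count hop lines in an OSW PRVTNET/traceroute capture where all three
# probes timed out ("* * *"), i.e. no response at all for that hop.
count_dead_hops() {
    awk '/^ *[0-9]+ +\* +\* +\*/ { dead++ } END { print dead + 0 }' "$1"
}
```

A nonzero and growing count over successive snapshots, as in this case, points at the interconnect rather than at a one-off dropped probe.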
The OSW netstat logs:
OSW
---------------------
dbrac1_netstat_14.03.25.1300.dat
zzz Tue Mar 25 13:14:33 BEIST 2014
udp:
439252958 datagrams received
0 incomplete headers
0 bad data length fields
0 bad checksums
328052 dropped due to no socket
4 broadcast/multicast datagrams dropped due to no socket
7289 socket buffer overflows
438917613 delivered
434514260 datagrams output
...
zzz Tue Mar 25 13:15:08 BEIST 2014
udp:
439253377 datagrams received
0 incomplete headers
0 bad data length fields
0 bad checksums
328066 dropped due to no socket
4 broadcast/multicast datagrams dropped due to no socket
7289 socket buffer overflows
438918018 delivered
434516319 datagrams output
OSW
---------------------
dbrac2_netstat_14.03.25.1300.dat
zzz Tue Mar 25 13:14:33 BEIST 2014
udp:
182516839 datagrams received
0 incomplete headers
0 bad data length fields
0 bad checksums
303810 dropped due to no socket
0 broadcast/multicast datagrams dropped due to no socket
0 socket buffer overflows
182213029 delivered
188182343 datagrams output
...
zzz Tue Mar 25 13:15:02 BEIST 2014
udp:
182517239 datagrams received
0 incomplete headers
0 bad data length fields
0 bad checksums
303819 dropped due to no socket
0 broadcast/multicast datagrams dropped due to no socket
0 socket buffer overflows
182213420 delivered
188182641 datagrams output
Analysis: across the interval the UDP counters show no new socket buffer overflows, and no bad checksums, incomplete headers, etc.
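Whether a counter is actually growing is easier to see as per-interval deltas than as raw cumulative values. A sketch (the function name is ours) that prints the change in one UDP counter between consecutive OSW netstat snapshots:

```shell
# Print per-snapshot deltas for one UDP counter in an OSW netstat file.
# $1 = file, $2 = counter label (use a label unique to one line,
#      e.g. "socket buffer overflows")
udp_counter_deltas() {
    awk -v label="$2" '
        index($0, label) && $1 ~ /^[0-9]+$/ {
            if (seen) print $1 - prev    # delta vs. previous snapshot
            prev = $1; seen = 1
        }' "$1"
}
```

Here the overflow counter stays at 7289 across the incident window (delta 0), which is what supports the "no host-side UDP loss" conclusion.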
The OSW vmstat logs:
OSW
-------------------------
dbrac1_vmstat_14.03.25.1300.dat
zzz ***Tue Mar 25 13:14:39 BEIST 2014
System configuration: lcpu=24 mem=49152MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
3 0 6927214 7214 0 0 0 0 0 0 60 3262132 9492 11 9 78 1
0 0 6926387 9432 0 0 0 1414 1413 0 72 3213613 9907 10 7 82 1
2 0 6926147 9652 0 0 0 0 0 0 56 3236071 7970 6 5 89 1
zzz ***Tue Mar 25 13:14:45 BEIST 2014
System configuration: lcpu=24 mem=49152MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
3 0 6922909 12885 0 0 0 0 0 0 94 3252893 10659 11 9 79 0
2 0 6922917 12856 0 0 0 0 0 0 50 3211865 8039 7 5 88 0
1 0 6922685 13069 0 0 0 0 0 0 49 3247836 7710 6 5 88 0
zzz ***Tue Mar 25 13:14:51 BEIST 2014
System configuration: lcpu=24 mem=49152MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 0 6922855 12894 0 0 0 0 0 0 126 3243290 9261 11 9 79 1
1 0 6922863 12865 0 0 0 0 0 0 34 3206410 7722 7 5 88 1
2 0 6924085 11623 0 0 0 0 0 0 45 3229507 7859 7 5 88 1
zzz ***Tue Mar 25 13:14:56 BEIST 2014
System configuration: lcpu=24 mem=49152MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 0 6922895 12779 0 0 0 0 0 0 79 3252256 8939 11 9 80 1
1 0 6922902 12750 0 0 0 0 0 0 44 3195964 7207 7 5 87 1
2 0 6922671 12929 0 0 0 0 0 0 80 3208503 7401 6 5 89 1
OSW
-------------------------
dbrac2_vmstat_14.03.25.1300.dat
zzz ***Tue Mar 25 13:14:39 BEIST 2014
System configuration: lcpu=24 mem=49151MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 0 4369629 7628806 0 0 0 0 0 0 57 3252089 7493 10 8 81 0
1 0 4369637 7628777 0 0 0 0 0 0 22 3201077 6327 6 4 89 1
1 0 4369406 7628989 0 0 0 0 0 0 32 3210307 7840 6 5 89 0
zzz ***Tue Mar 25 13:14:45 BEIST 2014
System configuration: lcpu=24 mem=49151MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 0 4369486 7628912 0 0 0 0 0 0 118 3252929 7799 11 9 79 1
1 1 4369410 7628967 0 0 0 0 0 0 304 3210820 9094 6 5 88 1
1 0 4369179 7629179 0 0 0 0 0 0 38 3215090 5315 6 4 89 1
zzz ***Tue Mar 25 13:14:51 BEIST 2014
System configuration: lcpu=24 mem=49151MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
3 0 4369636 7628720 0 0 0 0 0 0 55 3255957 9446 11 9 79 1
2 0 4369644 7628691 0 0 0 0 0 0 22 3219176 6673 6 4 89 1
1 0 4369413 7628903 0 0 0 0 0 0 27 3205112 5832 6 4 89 1
Analysis: the run queue (r) is low, the blocked queue (b) is low, CPU utilization is low, and there is no paging (pi/po).
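That "no resource pressure" reading can also be checked mechanically. A sketch (function name and run-queue threshold are our own choices) that flags AIX vmstat data rows with a deep run queue or any paging, using the column order shown in the headers above (r b avm fre re pi po fr sr cy in sy cs us sy id wa):

```shell
# Flag AIX vmstat data rows showing pressure: run queue (col 1) above a
# threshold (default 8), or any paging in/out (cols 6 and 7: pi, po).
# $1 = OSW vmstat file, $2 = optional run-queue threshold
vmstat_pressure() {
    awk -v rq="${2:-8}" \
        '$1 ~ /^[0-9]+$/ && NF >= 17 && ($1 > rq || $6 > 0 || $7 > 0)' "$1"
}
```

Empty output across the incident window means the eviction cannot be blamed on CPU starvation or paging on either node.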
dbracdb1_lmon_6423022.trc:
2014-03-25 13:14:59.599871 : kjxgrcomerr: Communications reconfig: instance 2 (50,50)
*** 2014-03-25 13:14:59.687
2014-03-25 13:14:59.687302 : kjxgrrcfg: done - ret = 3 hist 0x1679a (initial rsn: 3)
kjxgrrcfgchk: Initiating reconfig, reason=3 <======================================reason 3: communication failure
kjxgrrcfgchk: COMM rcfg - Disk Vote Required
kjfmReceiverHealthCB_CheckAll: Recievers are healthy.<============================Recievers are healthy
2014-03-25 13:14:59.687425 : kjxgrnetchk: start 0x19b28646, end 0x19b33c90
2014-03-25 13:14:59.687444 : kjxgrnetchk: Network Validation wait: 46 sec
kjxgrnetchk: ce-event: from inst 1 to inst 2 ver 0x7e8aac60
kjxgrrcfgchk: prev pstate 6 mapsz 512
kjxgrrcfgchk: new bmp: 1 2
kjxgrrcfgchk: work bmp: 1 2
kjxgrrcfgchk: rr bmp: 1 2
*** 2014-03-25 13:14:59.687
kjxgmrcfg: Reconfiguration started, type 3
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmcs: Setting state to 50 0.
kjxgrs0h: disable CGS timeout
...
2014-03-25 13:15:42.511713 : kjxgrDiskVote: bitmap: 1
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmps: proposing substate 2
kjxgmcs: Setting state to 54 2.
kjfmSendAbortInstMsg: send an abort message to instance 2
dbracdb2_lmon_7995464.trc:
*** 2014-03-25 13:14:59.868
kjxgrrcfgchk: Initiating reconfig, reason=3
kjxgrrcfgchk: COMM rcfg - Disk Vote Required
kjfmReceiverHealthCB_CheckAll: Recievers are healthy.
2014-03-25 13:14:59.867489 : kjxgrnetchk: start 0xafd32909, end 0xafd3df53
2014-03-25 13:14:59.867509 : kjxgrnetchk: Network Validation wait: 46 sec
2014-03-25 13:14:59.867528 : kjxgrnetchk: Sending comm check req to inst 1
kjxgrrcfgchk: prev pstate 6 mapsz 512
kjxgrrcfgchk: new bmp: 1 2
kjxgrrcfgchk: work bmp: 1 2
kjxgrrcfgchk: rr bmp: 1 2
*** 2014-03-25 13:14:59.868
kjxgmrcfg: Reconfiguration started, type 3
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmcs: Setting state to 50 0.
kjxgrs0h: disable CGS timeout
...
2014-03-25 13:15:25.722114 : kjxgrdtrt: Evicted by inst 1, seq (52, 52)
IMR state information
Inst 2, thread 2, state 0x4:124c, flags 0x12ca9:0x0001
Analysis:
According to the LMON traces, instance 2 was evicted by instance 1 with reason 3: communication failure.
Reason 3 is described in:
Troubleshooting ORA-29740 in a RAC Environment (Doc ID 219361.1)
Reason 3: Communications Failure. This can happen if:
a) The LMON processes lose communication between one another.
b) One instance loses communications with the LMS, LMD, process of
another instance.
c) The LCK processes lose communication between one another.
d) A process like LMON, LMD, LMS, or LCK is blocked, spinning, or stuck
and is not responding to remote requests.
Reason 0 = No reconfiguration
Reason 1 = The Node Monitor generated the reconfiguration.
Reason 2 = An instance death was detected.
Reason 3 = Communications Failure
Reason 4 = Reconfiguration after suspend
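When triaging an eviction, the reason code can be pulled straight out of the LMON traces rather than read line by line. A sketch (the function name is ours):

```shell
# Tally "Initiating reconfig" reason codes across LMON trace files.
# Usage: reconfig_reasons dbracdb1_lmon_*.trc dbracdb2_lmon_*.trc
reconfig_reasons() {
    sed -n 's/.*Initiating reconfig, reason=\([0-9][0-9]*\).*/reason=\1/p' "$@" |
        sort | uniq -c
}
```

In this case both nodes' traces yield only reason=3, matching the communication-failure diagnosis.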
Continuing with the trace files:
dbracdb1_lms0_6160700.trc:
2014-03-25 13:13:03.149035 : GSIPC:PING: rcv'd PINGREQ[4] from 2.1 (seq 0.10358654) stm 0x45af2ac2
2014-03-25 13:13:03.149109 : GSIPC:PING: send PINGACK[4] to 2.1 (seq 0.10360096)
*** 2014-03-25 13:14:54.428
IPC Send timeout detected. Sender: ospid 6160700 [oracle@dbrac1 (LMS0)] <============LMS0 of instance1 is the sender of the IPC
Receiver: inst 2 binc 291585594 ospid 11010320 <========= LMS0 of instance2 is the receiver of the IPC
dumping request handle 11123d958
krqh = 11123d958
...
Trace Bucket Dump End: KSI
Reporting Communication error with instance 2
dbracdb2_lms0_11010320.trc:
*** 2014-03-25 13:15:25.242
===== Idle Connection Overview =====
Idle connections [0x198ea]: 1
IdleConn List: 1[r:0.10358654,t:0x198ea]
GSIPC:IKILL: ping to inst 1 start 104682 now 104824 icktm 140 psm 0
Reporting Communication error with instance 1
Analysis: both the sender and the receiver of the IPC message reported communication errors with each other.
Summary:
We can now confirm that instance 2 was evicted from the cluster by instance 1 because of an inter-instance communication (IPC send) timeout.
The sender of the timed-out IPC message was LMS0 of instance 1 and the receiver was LMS0 of instance 2. Both processes were alive at the time, and LMON's receiver health check reported them healthy.
LMON and the LMS0 processes of both instances all reported a communication failure/error, so the eviction was caused by a communication failure rather than a hung process.
OSW shows no resource pressure on either node at the time, but the "* * *" entries in the traceroute output show that the private interconnect was failing.
Resolution:
Replacing the private-interconnect fiber resolved the problem; follow-up verification confirmed that the database returned to normal.
1 1 4369410 7628967 0 0 0 0 0 0 304 3210820 9094 6 5 88 1
1 0 4369179 7629179 0 0 0 0 0 0 38 3215090 5315 6 4 89 1
zzz ***Tue Mar 25 13:14:51 BEIST 2014
System configuration: lcpu=24 mem=49151MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
3 0 4369636 7628720 0 0 0 0 0 0 55 3255957 9446 11 9 79 1
2 0 4369644 7628691 0 0 0 0 0 0 22 3219176 6673 6 4 89 1
1 0 4369413 7628903 0 0 0 0 0 0 27 3205112 5832 6 4 89 1
Analysis: the run queue (r) is low, blocked threads (b) are low, CPU utilization is low, and there is no paging (pi/po), so neither node was under resource pressure.
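The same check can be scripted against the OSW vmstat files. A Python sketch (hypothetical helper; the thresholds are illustrative, and column positions follow the AIX vmstat header shown above) that flags data rows showing resource pressure:

```python
def pressure_rows(vmstat_lines, r_max=8, b_max=2):
    """Flag vmstat rows with a long run queue, blocked threads, or paging."""
    flagged = []
    for line in vmstat_lines:
        cols = line.split()
        # Skip headers, timestamps, and separators; data rows start with r
        if len(cols) < 9 or not cols[0].isdigit():
            continue
        # Columns: r b avm fre re pi po fr sr cy in sy cs us sy id wa
        r, b, pi, po = int(cols[0]), int(cols[1]), int(cols[5]), int(cols[6])
        if r > r_max or b > b_max or pi > 0 or po > 0:
            flagged.append(line)
    return flagged

# First data row from the dbrac1 13:14:39 snapshot above
rows = ["3 0 6927214 7214 0 0 0 0 0 0 60 3262132 9492 11 9 78 1",
        "kthr memory page faults cpu"]
print(pressure_rows(rows))  # -> []
```

An empty result across the incident window supports the conclusion that the nodes themselves were healthy.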
dbracdb1_lmon_6423022.trc:
2014-03-25 13:14:59.599871 : kjxgrcomerr: Communications reconfig: instance 2 (50,50)
*** 2014-03-25 13:14:59.687
2014-03-25 13:14:59.687302 : kjxgrrcfg: done - ret = 3 hist 0x1679a (initial rsn: 3)
kjxgrrcfgchk: Initiating reconfig, reason=3 <======================================reason 3: communication failure
kjxgrrcfgchk: COMM rcfg - Disk Vote Required
kjfmReceiverHealthCB_CheckAll: Recievers are healthy.<============================Recievers are healthy
2014-03-25 13:14:59.687425 : kjxgrnetchk: start 0x19b28646, end 0x19b33c90
2014-03-25 13:14:59.687444 : kjxgrnetchk: Network Validation wait: 46 sec
kjxgrnetchk: ce-event: from inst 1 to inst 2 ver 0x7e8aac60
kjxgrrcfgchk: prev pstate 6 mapsz 512
kjxgrrcfgchk: new bmp: 1 2
kjxgrrcfgchk: work bmp: 1 2
kjxgrrcfgchk: rr bmp: 1 2
*** 2014-03-25 13:14:59.687
kjxgmrcfg: Reconfiguration started, type 3
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmcs: Setting state to 50 0.
kjxgrs0h: disable CGS timeout
...
2014-03-25 13:15:42.511713 : kjxgrDiskVote: bitmap: 1
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmps: proposing substate 2
kjxgmcs: Setting state to 54 2.
kjfmSendAbortInstMsg: send an abort message to instance 2
dbracdb2_lmon_7995464.trc:
*** 2014-03-25 13:14:59.868
kjxgrrcfgchk: Initiating reconfig, reason=3
kjxgrrcfgchk: COMM rcfg - Disk Vote Required
kjfmReceiverHealthCB_CheckAll: Recievers are healthy.
2014-03-25 13:14:59.867489 : kjxgrnetchk: start 0xafd32909, end 0xafd3df53
2014-03-25 13:14:59.867509 : kjxgrnetchk: Network Validation wait: 46 sec
2014-03-25 13:14:59.867528 : kjxgrnetchk: Sending comm check req to inst 1
kjxgrrcfgchk: prev pstate 6 mapsz 512
kjxgrrcfgchk: new bmp: 1 2
kjxgrrcfgchk: work bmp: 1 2
kjxgrrcfgchk: rr bmp: 1 2
*** 2014-03-25 13:14:59.868
kjxgmrcfg: Reconfiguration started, type 3
CGS/IMR TIMEOUTS:
CSS recovery timeout = 31 sec (Total CSS waittime = 65)
IMR Reconfig timeout = 75 sec
CGS rcfg timeout = 85 sec
kjxgmcs: Setting state to 50 0.
kjxgrs0h: disable CGS timeout
...
2014-03-25 13:15:25.722114 : kjxgrdtrt: Evicted by inst 1, seq (52, 52)
IMR state information
Inst 2, thread 2, state 0x4:124c, flags 0x12ca9:0x0001
Analysis: according to the LMON traces, instance 2 was evicted by instance 1 with reason 3: Communications Failure.
Reason 3 is described in the following MOS note:
Troubleshooting ORA-29740 in a RAC Environment (Doc ID 219361.1)
Reason 3: Communications Failure. This can happen if:
a) The LMON processes lose communication between one another.
b) One instance loses communications with the LMS, LMD, process of
another instance.
c) The LCK processes lose communication between one another.
d) A process like LMON, LMD, LMS, or LCK is blocked, spinning, or stuck
and is not responding to remote requests.
Reason 0 = No reconfiguration
Reason 1 = The Node Monitor generated the reconfiguration.
Reason 2 = An instance death was detected.
Reason 3 = Communications Failure
Reason 4 = Reconfiguration after suspend
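For triage scripts, the reason codes quoted from MOS Doc ID 219361.1 can be kept as a simple lookup table. A minimal Python sketch:

```python
# ORA-29740 reconfiguration reason codes, as quoted above
# from MOS Doc ID 219361.1
ORA29740_REASONS = {
    0: "No reconfiguration",
    1: "The Node Monitor generated the reconfiguration",
    2: "An instance death was detected",
    3: "Communications Failure",
    4: "Reconfiguration after suspend",
}

# The LMON traces in this case report reason=3
print(ORA29740_REASONS[3])  # -> Communications Failure
```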
Continuing with the trace logs:
dbracdb1_lms0_6160700.trc:
2014-03-25 13:13:03.149035 : GSIPC:PING: rcv'd PINGREQ[4] from 2.1 (seq 0.10358654) stm 0x45af2ac2
2014-03-25 13:13:03.149109 : GSIPC:PING: send PINGACK[4] to 2.1 (seq 0.10360096)
*** 2014-03-25 13:14:54.428
IPC Send timeout detected. Sender: ospid 6160700 [oracle@dbrac1 (LMS0)] <============LMS0 of instance1 is the sender of the IPC
Receiver: inst 2 binc 291585594 ospid 11010320 <========= LMS0 of instance2 is the receiver of the IPC
dumping request handle 11123d958
krqh = 11123d958
...
Trace Bucket Dump End: KSI
Reporting Communication error with instance 2
dbracdb2_lms0_11010320.trc:
*** 2014-03-25 13:15:25.242
===== Idle Connection Overview =====
Idle connections [0x198ea]: 1
IdleConn List: 1[r:0.10358654,t:0x198ea]
GSIPC:IKILL: ping to inst 1 start 104682 now 104824 icktm 140 psm 0
Reporting Communication error with instance 1
Analysis: both the sender and the receiver of the IPC message report communication errors with each other.
Summary of the analysis:
We can now confirm that instance 2 was evicted from the cluster by instance 1 because inter-instance communication timed out.
The sender of the timed-out IPC message was LMS0 of instance 1 and the receiver was LMS0 of instance 2. Both processes were alive at the time, and LMON's check also reported "Recievers are healthy".
LMON and the LMS0 processes of both instances all reported Communication Failure/Error, so the eviction was caused by a communication failure, not a hung process.
The OSW data shows no resource shortage on either node at the time, but traceroute reported "* * *" on every hop, confirming that the private interconnect was failing.
Solution:
The problem was resolved by replacing the fiber cable of the private interconnect; follow-up verification confirmed that the database returned to normal operation.
From the ITPUB blog; link: http://blog.itpub.net/30633755/viewspace-2127672/. Please credit the source when reprinting.