CRS-4535: Cannot communicate with Cluster Ready Services
Environment: 11.2.0.3 RAC primary + RAC standby
Problem: CRS shut down by itself on node1 of the production RAC standby.
-- EM alert
Message=Clusterware has problems on the master agent host
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Logged in to node1 of that database and found the CRS daemon was down:
[grid@node1 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
$ crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
[grid@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services <----------
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
$ ps -ef | grep crs    -- likewise, no crsd process
Tried to start CRS, but it would not come up:
[root@node1 ~]# /app/grid/bin/crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@node1 ~]# /app/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services <<---- CRS still not up
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Checked the crsd log as the grid user:
$ cd $ORACLE_HOME/log/node1/crsd
$ vim crsd.log
-------------------
2013-12-10 15:47:19.902: [ OCRASM][33715952]ASM Error Stack :
2013-12-10 15:47:19.934: [ OCRASM][33715952]proprasmo: kgfoCheckMount returned [6]
2013-12-10 15:47:19.934: [ OCRASM][33715952]proprasmo: The ASM disk group crs is not found or not mounted <-------
2013-12-10 15:47:19.934: [ OCRRAW][33715952]proprioo: Failed to open [+crs]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
2013-12-10 15:47:19.934: [ OCRRAW][33715952]proprioo: No OCR/OLR devices are usable
2013-12-10 15:47:19.934: [ OCRASM][33715952]proprasmcl: asmhandle is NULL
2013-12-10 15:47:19.935: [ GIPC][33715952] gipcCheckInitialization: possible incompatible non-threaded init from [prom.c : 690], original from [clsss.c : 5343]
2013-12-10 15:47:19.935: [ default][33715952]clsvactversion:4: Retrieving Active Version from local storage.
2013-12-10 15:47:19.937: [ OCRRAW][33715952]proprrepauto: The local OCR configuration matches with the configuration published by OCR Cache Writer. No repair required.
2013-12-10 15:47:19.938: [ OCRRAW][33715952]proprinit: Could not open raw device
2013-12-10 15:47:19.938: [ OCRASM][33715952]proprasmcl: asmhandle is NULL
2013-12-10 15:47:19.939: [ OCRAPI][33715952]a_init:16!: Backend init unsuccessful : [26]
2013-12-10 15:47:19.939: [ CRSOCR][33715952] OCR context init failure. Error: PROC-26: Error while accessing the physical storage <-------
2013-12-10 15:47:19.939: [ CRSD][33715952] Created alert : (:CRSD00111:) : Could not init OCR, error: PROC-26: Error while accessing the physical storage
2013-12-10 15:47:19.939: [ CRSD][33715952][PANIC] CRSD exiting: Could not init OCR, code: 26
2013-12-10 15:47:19.939: [ CRSD][33715952] Done.
---------------------
The log makes it clear the CRS diskgroup is the problem: PROC-26 means the OCR's physical storage, here the +crs diskgroup, cannot be accessed.
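In this state ocrcheck fails the same way (a sketch of the expected output, not captured from this system):
$ ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage
And indeed, ASM shows the diskgroup is not mounted: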
su - grid
$ sqlplus / as sysasm
SQL> set linesize 200
SQL> select GROUP_NUMBER,NAME,TYPE,ALLOCATION_UNIT_SIZE,STATE from v$asm_diskgroup;
GROUP_NUMBER NAME                           TYPE   ALLOCATION_UNIT_SIZE STATE
------------ ------------------------------ ------ -------------------- -----------
           0 CRS                                                      0 DISMOUNTED <--------
           2 DATA1                          EXTERN              4194304 MOUNTED
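To dig into why, the member disks can also be inspected; a sketch against the standard v$asm_disk view (run in the same session, columns trimmed):
SQL> select group_number, disk_number, name, path, mount_status, header_status
  2  from v$asm_disk
  3  where name like 'CRS%';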
Checking the ASM instance alert log reveals the CRS diskgroup was forcibly dismounted.
SQL> show parameter dump
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
background_core_dump string partial
background_dump_dest string /app/gridbase/diag/asm/+asm/+A
SM1/trace
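(The same alert log can also be tailed through adrci instead of opening the file; a sketch, assuming the ADR home shown above:)
$ adrci exec="set home diag/asm/+asm/+ASM1; show alert -tail 100"
Here, opening it directly: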
cd /app/gridbase/diag/asm/+asm/+ASM1/trace
$ vim alert_+ASM1.log
-------------------------------------------
Tue Dec 10 11:13:57 2013
WARNING: Waited 15 secs for write IO to PST disk 0 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 1 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 0 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 1 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 1.
Tue Dec 10 11:13:57 2013
NOTE: process _b000_+asm1 (15822) initiating offline of disk 0.3916226472 (CRS_0000) with mask 0x7e in group 1
NOTE: process _b000_+asm1 (15822) initiating offline of disk 1.3916226471 (CRS_0001) with mask 0x7e in group 1
NOTE: process _b000_+asm1 (15822) initiating offline of disk 2.3916226470 (CRS_0002) with mask 0x7e in group 1
NOTE: checking PST: grp = 1
GMON checking disk modes for group 1 at 12 for pid 37, osid 15822
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: checking PST for grp 1 done.
NOTE: initiating PST update: grp = 1, dsk = 0/0xe96cdfa8, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 1/0xe96cdfa7, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 1, dsk = 2/0xe96cdfa6, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 13 for pid 37, osid 15822
ERROR: no read quorum in group: required 2, found 0 disks
Tue Dec 10 11:13:57 2013
NOTE: cache dismounting (not clean) group 1/0x165C2F6D (CRS)
WARNING: Offline for disk CRS_0000 in mode 0x7f failed.
WARNING: Offline for disk CRS_0001 in mode 0x7f failed.
NOTE: messaging CKPT to quiesce pins Unix process pid: 15824, image: oracle@node1 (B001)
WARNING: Offline for disk CRS_0002 in mode 0x7f failed.
Tue Dec 10 11:13:57 2013
NOTE: halting all I/Os to diskgroup 1 (CRS)
Tue Dec 10 11:13:57 2013
NOTE: LGWR doing non-clean dismount of group 1 (CRS)
NOTE: LGWR sync ABA=3.42 last written ABA 3.42
Tue Dec 10 11:13:57 2013
kjbdomdet send to inst 2
detach from dom 1, sending detach message to inst 2
Tue Dec 10 11:13:57 2013
List of instances:
1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 4)
Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
Tue Dec 10 11:13:57 2013
NOTE: No asm libraries found in the system
520 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Tue Dec 10 11:13:57 2013
WARNING: dirty detached from domain 1
NOTE: cache dismounted group 1/0x165C2F6D (CRS)
SQL> alter diskgroup CRS dismount force /* ASM SERVER:375140205 */ <------------ CRS force-dismounted ---
Tue Dec 10 11:13:57 2013
NOTE: cache deleting context for group CRS 1/0x165c2f6d
GMON dismounting group 1 at 14 for pid 41, osid 15824
NOTE: Disk CRS_0000 in mode 0x7f marked for de-assignment
NOTE: Disk CRS_0001 in mode 0x7f marked for de-assignment
NOTE: Disk CRS_0002 in mode 0x7f marked for de-assignment
NOTE:Waiting for all pending writes to complete before de-registering: grpnum 1
Tue Dec 10 11:14:27 2013
NOTE:Waiting for all pending writes to complete before de-registering: grpnum 1
Tue Dec 10 11:14:29 2013
ASM Health Checker found 1 new failures
Tue Dec 10 11:14:57 2013
SUCCESS: diskgroup CRS was dismounted
SUCCESS: alter diskgroup CRS dismount force /* ASM SERVER:375140205 */
SUCCESS: ASM-initiated MANDATORY DISMOUNT of group CRS
--------------------------------------
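The trigger is visible at the top of the excerpt: repeated "Waited 15 secs for write IO to PST disk" warnings, after which ASM offlines all three CRS disks, loses PST quorum ("required 2, found 0"), and issues the force dismount itself ("ASM-initiated MANDATORY DISMOUNT"). That signature typically points at the storage path (for example a multipath failover) stalling writes beyond ASM's 15-second PST heartbeat default. A commonly cited workaround (only a sketch here; it sets an underscore parameter, so confirm with Oracle Support before using it) is to raise that timeout above the storage failover time:
SQL> -- on the ASM instance, as SYSASM; needs an spfile and a restart to take effect
SQL> -- 120 is an illustrative value, not a recommendation for this system
SQL> alter system set "_asm_hbeatiowait"=120 scope=spfile sid='*';
The immediate fix, in any case, is to remount the diskgroup.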
Mount the CRS diskgroup:
su - grid
$ sqlplus / as sysasm    -- !!! must connect as SYSASM
SQL> alter diskgroup crs mount;
SQL> select GROUP_NUMBER,NAME,TYPE,ALLOCATION_UNIT_SIZE,STATE from v$asm_diskgroup;
GROUP_NUMBER NAME                           TYPE   ALLOCATION_UNIT_SIZE STATE
------------ ------------------------------ ------ -------------------- -----------
           1 CRS                            NORMAL              4194304 MOUNTED
           2 DATA1                          EXTERN              4194304 MOUNTED
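With the diskgroup mounted, the OCR should be readable again; ocrcheck can confirm before touching CRS (a sketch; the version, sizes and elided fields are illustrative):
$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         ...
         Device/File Name         :       +crs
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded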
Start CRS
The usual crsctl start crs command still fails:
# /app/grid/bin/crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
Since CRS-4640 says the OHASD stack is already active, start just the CRSD resource from the -init resource set instead; this succeeds:
[root@node1 ~]# /app/grid/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
# /app/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online <------------------
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
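The lower-stack resources can be listed as well, to confirm ora.crsd is ONLINE alongside ora.asm and ora.cssd (output omitted here):
# /app/grid/bin/crsctl stat res -init -t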
$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Troubleshooting roadmap:
crsd log --> ASM instance alert log --> mount CRS diskgroup --> start CRS
CRS on node1 is back to normal, but the root cause of the forced dismount of the CRS diskgroup (whatever stalled the PST writes) has not been confirmed yet; it will be posted here once found.