11gR2 RAC: root.sh Fails on Node 2 with ORA-15072 / ORA-15018

Posted by tolywang on 2011-02-28

After the Oracle 11gR2 RAC Clusterware installation finished, running the root.sh script on the second node failed with the following error:

DiskGroup DATA1 creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0

 

A search of Oracle's support site turned up the following explanation:

 

Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.1 and later   [Release: 11.2 and later ]
Information in this document applies to any platform.

Symptoms


While installing Oracle Grid Infrastructure with ASM, root.sh ran successfully in first node, but fails on the second node.

Error example

1. root.sh failed on second node with following errors
-------------------------------------------------------
DiskGroup DATA1 creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0


Configuration of ASM failed, see logs for details

2. rootcrs_nodename.log
-----------------------
2010-02-03 13:40:43: Configuring ASM via ASMCA
2010-02-03 13:40:43: Executing as oracle: /u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM
2010-02-03 13:40:43: Running as user oracle: /u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM
2010-02-03 13:40:43: Invoking "/u01/app/1120/grid/bin/asmca -silent -diskGroupName DATA1 -diskList ORCL:DATA1 -redundancy EXTERNAL -configureLocalASM" as user "oracle"
2010-02-03 13:40:51: Configuration of ASM failed, see logs for details

3. On the 2nd node
the /etc/oratab file shows +ASM1 rather than +ASM2

4. The following commands on the 2nd node show the ASM disk information correctly
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm scandisks
ls -ltr /dev/oracleasm/disks
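The disk checks above can be bundled into one pass. This is a sketch only: the oracleasm init script exists only on a node with ASMLib installed, so on any other machine it falls back to printing what it would run, and the results are saved to a local file (asm_disk_check.txt, a name chosen here) for review.

```shell
# Sketch: confirm ASMLib can see the shared disks on node 2 before retrying
# root.sh. Falls back to a dry-run message when ASMLib is not installed.
OUT=./asm_disk_check.txt
{
  for sub in scandisks listdisks; do
    if [ -x /etc/init.d/oracleasm ]; then
      /etc/init.d/oracleasm "$sub"
    else
      echo "would run: /etc/init.d/oracleasm $sub"
    fi
  done
  # The symlinks here show which block devices ASMLib has bound the disks to
  ls -l /dev/oracleasm/disks 2>/dev/null || echo "no /dev/oracleasm/disks visible"
} | tee "$OUT"
```

As the note says, these commands succeeding is not enough: ASMLib may have bound the disks to the raw sd* paths instead of the multipath device, which is exactly what the scan parameters below correct.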

 

Cause


After configuring multipath disks on Linux x86-64, proper parameters have not been configured in /etc/sysconfig/oracleasm

 

 

Solution

 

On all nodes,
1. Modify the /etc/sysconfig/oracleasm with:
       ORACLEASM_SCANORDER="dm"
       ORACLEASM_SCANEXCLUDE="sd"
2. Restart ASMLib:
       # /etc/init.d/oracleasm restart
3. Run root.sh on the 2nd node
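Step 1 above can be scripted. The sketch below edits a local copy of the file so it can be tried safely; on a real node the target is /etc/sysconfig/oracleasm (edited as root), followed by the restart in step 2.

```shell
# Sketch: apply the two ASMLib scan parameters from the note to a local copy
# of the config. On a real node, edit /etc/sysconfig/oracleasm instead.
CONF=./oracleasm.conf
printf 'ORACLEASM_SCANORDER=""\nORACLEASM_SCANEXCLUDE=""\n' > "$CONF"
# Scan device-mapper (multipath, dm-*) devices first and exclude the
# underlying sd* paths, so every node binds the ASM disks to the same
# multipath device rather than to one of its individual paths.
sed -i 's/^ORACLEASM_SCANORDER=.*/ORACLEASM_SCANORDER="dm"/' "$CONF"
sed -i 's/^ORACLEASM_SCANEXCLUDE=.*/ORACLEASM_SCANEXCLUDE="sd"/' "$CONF"
cat "$CONF"
```

The key point is that both parameters must match on all nodes; a node scanning sd* while another scans dm-* is what leaves ASM unable to discover a failure group on the second node.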

 

 

Note: if you simply re-run the root.sh script at this point, it fails with the following error:

 

Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.

This is because root.sh has already been executed once. The previously registered information must be deleted before root.sh is run again. Run the following command to remove the node's registration:

       # /oracle/grid/crs/install/roothas.pl -delete -force -verbose

Once the deletion finishes, run root.sh again and it completes without error.
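The deconfigure-then-rerun sequence can be sketched as a dry-run script. GRID_HOME defaults to this post's path and both commands are only printed unless RUN=1 is set, since roothas.pl and root.sh must be executed as root on an actual Grid Infrastructure node.

```shell
# Sketch (dry run by default): deconfigure the half-configured stack on
# node 2, then re-run root.sh. Set RUN=1 to actually execute as root.
GRID_HOME=${GRID_HOME:-/u01/product/grid/11.2.0}
RUN=${RUN:-0}
: > ./planned_cmds.log
run() {
  # Record each command; execute it only when RUN=1
  echo "+ $*" | tee -a ./planned_cmds.log
  if [ "$RUN" = 1 ]; then "$@"; fi
}
# 1. Remove the stale local-stack registration left by the failed first run
run "$GRID_HOME/crs/install/roothas.pl" -delete -force -verbose
# 2. Re-run the cluster configuration script
run "$GRID_HOME/root.sh"
```

Fix the ASMLib scan parameters on all nodes first; otherwise the re-run of root.sh fails with the same ORA-15072.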

Reference: http://www.cnblogs.com/abenz/archive/2010/06/08/1754328.html

 

 

 

 

Error output from running root.sh on node 2:

-------------------------------------------------------------------------

[root@wmrac02 product]# /u01/product/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/product/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:    
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-28 16:05:19: Parsing the host name
2011-02-28 16:05:19: Checking for super user privileges
2011-02-28 16:05:19: User has super user privileges
Using configuration parameter file: /u01/product/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'wmrac02'
CRS-2672: Attempting to start 'ora.mdnsd' on 'wmrac02'
CRS-2676: Start of 'ora.gipcd' on 'wmrac02' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wmrac02'
CRS-2676: Start of 'ora.gpnpd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wmrac02'
CRS-2676: Start of 'ora.cssdmonitor' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wmrac02'
CRS-2672: Attempting to start 'ora.diskmon' on 'wmrac02'
CRS-2676: Start of 'ora.diskmon' on 'wmrac02' succeeded
CRS-2676: Start of 'ora.cssd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wmrac02'
CRS-2676: Start of 'ora.ctssd' on 'wmrac02' succeeded

DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0


Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/product/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2673: Attempting to stop 'ora.asm' on 'wmrac02'
CRS-2677: Stop of 'ora.asm' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'wmrac02'
CRS-2677: Stop of 'ora.ctssd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'wmrac02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'wmrac02'
CRS-2677: Stop of 'ora.cssd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'wmrac02'
CRS-2677: Stop of 'ora.gpnpd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'wmrac02'
CRS-2677: Stop of 'ora.gipcd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'wmrac02'
CRS-2677: Stop of 'ora.mdnsd' on 'wmrac02' succeeded
Initial cluster configuration failed.  See /u01/product/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_wmrac02.log for details

Resolution steps:

 


On all nodes,
1. Modify the /etc/sysconfig/oracleasm with:
       ORACLEASM_SCANORDER="dm"
       ORACLEASM_SCANEXCLUDE="sd"

2. Restart ASMLib:
       # /etc/init.d/oracleasm restart

3. On the 2nd node, remove the failed configuration:
[root@wmrac02 sysconfig]# /u01/product/grid/11.2.0/crs/install/roothas.pl -delete -force -verbose
2011-02-28 17:10:27: Checking for super user privileges
2011-02-28 17:10:27: User has super user privileges
2011-02-28 17:10:27: Parsing the host name
Using configuration parameter file: /u01/product/grid/11.2.0/crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'wmrac02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'wmrac02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'wmrac02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'wmrac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
ACFS-9200: Supported
Successfully deconfigured Oracle Restart stack

4. Run root.sh on the 2nd node

 

From the ITPUB blog, link: http://blog.itpub.net/35489/viewspace-688118/. Please credit the source when reposting; unauthorized reproduction may incur legal liability.
