root.sh Fails on the First Node for 11gR2 GI Installation

Posted by zhouwf0726 on 2019-04-08

Don't move too fast: after root.sh completes on the first node, wait a while before running it on the second node, especially on virtual machines.

 

root.sh Fails on the First Node for 11gR2 GI Installation [ID 1191783.1]


  Modified: 08-MAY-2011     Type: PROBLEM     Status: PUBLISHED

In this Document
  Symptoms
  Changes
  Cause
  Solution


Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.1 and later [Release: 11.2 and later]
Information in this document applies to any platform.

Symptoms

On a 2-node cluster, installing 11gR2 Grid Infrastructure for the first time, root.sh fails on the first node.

$GI_HOME/cfgtoollogs/crsconfig/rootcrs_.log shows:
2010-07-24 23:29:36: Configuring ASM via ASMCA
2010-07-24 23:29:36: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:53:Configuration failed, see logfile for details

$ORACLE_BASE/cfgtoollogs/asmca/asmca-.log shows that asmca failed with:

ORA-15018 diskgroup cannot be created
ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space

This is a new installation; the disks used by ASM are not shared with any other cluster system.
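
A quick way to verify whether some other cluster has already stamped these disks is to read the ASM disk headers directly with kfed. The sketch below assumes ASMLib-managed disks that surface under /dev/oracleasm/disks/ (inferred from the ORCL: names in the log above); the device paths are assumptions, so adjust them to your environment:

# Run as the grid software owner on node 1; device path is an assumption.
$GI_HOME/bin/kfed read /dev/oracleasm/disks/VOTING_DISK1 | grep -E 'kfdhdb.grpname|kfdhdb.hdrsts'
# kfdhdb.grpname shows which diskgroup claims the disk;
# kfdhdb.hdrsts = KFDHDR_MEMBER means the disk already belongs to a diskgroup.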

Changes

New installation.

Cause

The problem is caused by root.sh being run on both nodes at nearly the same time (3 seconds apart), rather than being allowed to finish on node 1 before being started on node 2.

On node 2, the file $GI_HOME/cfgtoollogs/crsconfig/rootcrs_.log exists. Its content shows:

2010-07-24 23:29:39: Configuring ASM via ASMCA
2010-07-24 23:29:39: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:55:Configuration failed, see logfile for details

The content is nearly identical; the only difference is that it started 3 seconds later than on the first node. This confirms root.sh was running on both nodes at almost the same time, rather than finishing on the first node before being started on the second.

root.sh on the second node also created a +ASM1 instance (since, from its point of view, it also appeared to be the first node to run root.sh) and mounted the same diskgroup, causing the +ASM1 instance on node 1 to report:
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
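
To see the race directly, compare the timestamps of the same ASMCA step in the rootcrs logs on both nodes. A minimal sketch (the wildcard is an assumption, since the exact log file name is truncated above):

# Run on each node and compare the timestamps:
grep "Configuring ASM via ASMCA" $GI_HOME/cfgtoollogs/crsconfig/rootcrs_*.log
# node 1: 2010-07-24 23:29:36: Configuring ASM via ASMCA
# node 2: 2010-07-24 23:29:39: Configuring ASM via ASMCA
# A gap of only a few seconds means both root.sh runs raced to configure ASM.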

Solution

1. Deconfigure the Grid Infrastructure without removing the binaries; refer to Note 942166.1 for details (see the sketch after this list).
 
2. Rerun root.sh on each node sequentially. Start root.sh on the 2nd node only after it has completed on the 1st node.
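
As a rough sketch of the recovery sequence, run as root (the rootcrs.pl flags below follow the 11.2 deconfig procedure in Note 942166.1; verify them against the note for your exact version):

# Step 1: deconfig without removing binaries. On all nodes except the last:
perl $GI_HOME/crs/install/rootcrs.pl -deconfig -force
# On the last node, -lastnode also cleans up the OCR/voting disk storage:
perl $GI_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode

# Step 2: rerun root.sh strictly one node at a time.
# On node 1, and wait for it to report success:
$GI_HOME/root.sh
# Only then, on node 2:
$GI_HOME/root.sh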

After this, root.sh completes successfully and the CRS daemons are up.
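
To confirm the stack is healthy, the standard crsctl checks can be run on each node (exact output varies by version and configuration):

$GI_HOME/bin/crsctl check crs
# Expected, roughly:
# CRS-4638: Oracle High Availability Services is online
# CRS-4537: Cluster Ready Services is online
# CRS-4529: Cluster Synchronization Services is online
# CRS-4533: Event Manager is online

# Resource view, including the ASM diskgroup, across the cluster:
$GI_HOME/bin/crsctl stat res -t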

From the ITPUB blog, link: http://blog.itpub.net/756652/viewspace-695699/. Please credit the source when reposting; unauthorized reproduction will be pursued legally.
