Metalink: How to clean up a failed CRS/Clusterware

Posted by yyp2009 on 2012-07-11

1.1 Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.3 and later [Release: 11.2 and later ]
IBM: Linux on System z

1.2 Goal

This article describes how to clone an Oracle Grid Infrastructure home and use the cloned home to create a cluster. You perform the cloning procedures by running scripts in silent mode. The cloning procedures are applicable to 11.2.0.3 (and later) Clusterware installations on IBM: Linux on System z, on both SUSE and Red Hat.

This document does not cover using Cloning to Add Nodes to a Cluster.

This article assumes that you are cloning an Oracle Clusterware 11g release 2 (11.2) installation configured as follows:

No Grid Naming Service (GNS)

No Intelligent Platform Management Interface specification (IPMI)

Voting disk and Oracle Cluster Registry (OCR) are stored in Oracle Automatic Storage Management (ASM)

Single Client Access Name (SCAN) resolves through DNS

You should also refer to the following documentation:

Oracle® Clusterware Administration and Deployment Guide
11g Release 2 (11.2)
Part Number E16794-16
http://docs.oracle.com/cd/E11882_01/rac.112/e16794/clonecluster.htm




1.3 Solution

Cloning is the process of copying an existing Oracle Clusterware installation to a different location and then updating the copied installation to work in the new environment. Changes made by one-off patches applied on the source Oracle Grid Infrastructure home are also present after cloning. During cloning, you run a script that replays the actions that installed the Oracle Grid Infrastructure home.

Cloning requires that you start with a successfully installed Oracle Grid Infrastructure home. You use this home as the basis for implementing a script that extends the Oracle Grid Infrastructure home to create a cluster based on the original Grid home.

Advantages

Install once and deploy to many without the need for a GUI interface.

Cloning enables you to create an installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, cloning performs all of these individual steps as a single procedure.

Installing Oracle Clusterware by cloning is a quick process: a few minutes to install the software, plus the time taken by the Configuration Wizard.

Cloning provides a guaranteed method of accurately repeating the same Oracle Clusterware installation on multiple clusters.

1.3.1 Install Grid Infrastructure Clusterware + any required patches

Before copying the source Oracle Grid Infrastructure home, shut down all of the services, databases, listeners, applications, Oracle Clusterware, and Oracle ASM instances that run on the node. Oracle recommends that you use the Server Control (SRVCTL) utility to first shut down the databases, and then the Oracle Clusterware Control (CRSCTL) utility to shut down the rest of the components.
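For example, the shutdown on the source node might look like the following (a minimal sketch; the database name orcl is an assumption, and the Grid home path matches the example used throughout this article):

As the oracle user:
srvctl stop database -d orcl

As root user:
/u01/11.2.0/grid/bin/crsctl stop crs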

It is recommended to create a copy of the source Grid Infrastructure home. This may appear to be an unnecessary step, but by doing so we can delete unwanted (node-specific) files and logs, leaving the original Grid Infrastructure home intact while ensuring that the cloned software is clean. If this is going to be the master copy of the Grid Infrastructure software to be rolled out to many clusters, it is worth taking a little time to do this.

As root user:
The next command assumes that our Grid Infrastructure source is /u01/11.2.0/grid and that we are going to use a copy path of /mnt/sware

cp -prf /u01/11.2.0/grid /mnt/sware


Remove unnecessary files:-

cd /mnt/sware/grid
Note: where you see host_name, replace it with the hostname of your server

rm -rf host_name
rm -rf log/host_name
rm -rf gpnp/host_name

find gpnp -type f -exec rm -f {} \;
find cfgtoollogs -type f -exec rm -f {} \;
rm -rf crs/init/*
rm -rf cdata/*
rm -rf crf/*
rm -rf network/admin/*.ora
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
rm -rf root.sh*

Compress the files

cd /mnt/sware/grid
tar -zcvpf /mnt/sware/gridHome.tgz .

1.3.2 Prepare the new cluster nodes

This article does not go into specific details as to what is required. It is assumed that all nodes of the new cluster have been set up with the correct kernel parameters, meet all networking requirements, have all ASM devices configured, shared, and available, and that CVU has been run successfully to verify the OS and hardware setup.
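If you want to re-check the new nodes before restoring the software, the Cluster Verification Utility can be run from a node that already has the Grid software, for example (a sketch; node1 and node2 are placeholder host names):

As the oracle user:
/u01/11.2.0/grid/bin/cluvfy stage -pre crsinst -n node1,node2 -verbose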

Create the same directory structure on each of the new nodes of the new cluster into which you will restore the copy of the Grid Infrastructure Home. You should ensure that the permissions are correct for both the new Grid Home and the oraInventory directory.
In the example below it is assumed that the Grid Infrastructure installation owner is oracle and the Oracle Inventory group is oinstall, hence owner:group is oracle:oinstall

As root user

mkdir -p /u01/11.2.0/grid
cd /u01/11.2.0/grid
tar -zxvf /mnt/sware/gridHome.tgz

Create the oraInventory directory:

mkdir -p /u01/oraInventory
chown oracle:oinstall /u01/oraInventory
chown -R oracle:oinstall /u01/11.2.0/grid

It is necessary to set the setuid and setgid bits on the binaries, so you should run:

chmod u+s Grid_home/bin/oracle
chmod g+s Grid_home/bin/oracle
chmod u+s Grid_home/bin/extjob
chmod u+s Grid_home/bin/jssu
chmod u+s Grid_home/bin/oradism
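A quick way to confirm the bits are set (assuming Grid_home is /u01/11.2.0/grid as in this article) is to list the binaries and look for an 's' in the execute positions of the permissions, e.g. -rwsr-s--x for the oracle binary:

ls -l /u01/11.2.0/grid/bin/oracle /u01/11.2.0/grid/bin/extjob /u01/11.2.0/grid/bin/jssu /u01/11.2.0/grid/bin/oradism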

1.3.3 Run clone.pl on the Destination Node

Just to clarify: at this point we are working on our new node, we have extracted the copied software, and we have ensured that all permissions are correct and that unwanted files have been removed.
We now need to run clone.pl with the relevant parameters, e.g.:-

The parameters and their descriptions are as follows:-

ORACLE_BASE=ORACLE_BASE

The complete path to the Oracle base to be cloned. If you specify an invalid path, then the script exits. This parameter is required.

ORACLE_HOME=GRID_HOME

The complete path to the Grid Infrastructure home for cloning. If you specify an invalid path, then the script exits. This parameter is required.

ORACLE_HOME_NAME=Oracle_home_name (or) -defaultHomeName

The Oracle home name of the home to be cloned. Optionally, you can specify the -defaultHomeName flag. This parameter is not required.

INVENTORY_LOCATION=location_of_inventory

The location for the Oracle Inventory.

-O'"CLUSTER_NODES={node_name,node_name,...}"'

A comma-delimited list of short node names for the nodes that are included in this new cluster.

-O'"LOCAL_NODE=node_name"'

The short node name for the node on which clone.pl is running.

CRS=TRUE

This parameter is necessary to set this property on the Oracle Universal Installer inventory.

OSDBA_GROUP=OSDBA_privileged_group

Specify the operating system group you want to use as the OSDBA privileged group. This parameter is optional if you do not want the default value.

OSASM_GROUP=OSASM_privileged_group

Specify the operating system group you want to use as the OSASM privileged group. This parameter is optional if you do not want the default value.

OSOPER_GROUP=OSOPER_privileged_group

Specify the operating system group you want to use as the OSOPER privileged group. This parameter is optional if you do not want the default value.

-debug

Specify this option to run the clone.pl script in debug mode.

-help

Specify this option to obtain help for the clone.pl script.




 

As the grid owner (oracle):-

$ cd /u01/11.2.0/grid/clone/bin
$ perl clone.pl -silent ORACLE_BASE=/u01/base ORACLE_HOME=/u01/11.2.0/grid \
  ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/u01/oraInventory \
  -O'"CLUSTER_NODES={node1,node2}"' -O'"LOCAL_NODE=node1"' CRS=TRUE

Just to clarify the quotes in the command above for LOCAL_NODE and CLUSTER_NODES, e.g. -O'"CLUSTER_NODES={node1,node2}"': this is a single quote followed by a double quote after the -O, and a double quote followed by a single quote at the end.



The clone command needs to be run on each node of the new cluster. This command prepares the new Grid Infrastructure Home for entry into the central inventory (/u01/oraInventory) and relinks the binaries.

Here is sample output:-
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3043 MB Passed
Preparing to launch Oracle Universal Installer from /oracle2/tmp/OraInstall2012-03-06_12-49-27PM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
You can find the log of this install session at:
/oracle/oraInventory/logs/cloneActions2012-03-06_12-49-27PM.log.
Performing tests to see whether nodes strkf42 are available
............................................................... 100% Done.
Installation in progress (Tuesday, March 6, 2012 12:49:44 PM GMT)
....................................................................... 71% Done.
Install successful
Linking in progress (Tuesday, March 6, 2012 12:49:48 PM GMT)
Link successful
Setup in progress (Tuesday, March 6, 2012 12:50:50 PM GMT)
................. 100% Done.
Setup successful
End of install phases.(Tuesday, March 6, 2012 12:51:12 PM GMT)
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/oracle/oraInventory/orainstRoot.sh' with root privileges on nodes 'strkf43'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/oraInventory/orainstRoot.sh #On nodes strkf43
/u01/11.2.0/grid/root.sh #On nodes strkf43
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

==================================================================
As root user:-
/u01/oraInventory/orainstRoot.sh
/u01/11.2.0/grid/root.sh
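Once the root scripts above have been run, you can confirm that the new Grid home has been registered in the central inventory (a quick check, using the inventory location from this example):

grep -i 'HOME NAME' /u01/oraInventory/ContentsXML/inventory.xml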

In practice, if there were a requirement to roll out the Clusterware software to a large number of nodes, this would be further automated by generating shell scripts that call clone.pl with the relevant parameters.

Here is an example:-
Filename start.sh
==================================================================
#!/bin/sh
export PATH=/u01/11.2.0/grid/bin:$PATH
export THIS_NODE=`/bin/hostname -s`
echo $THIS_NODE
ORACLE_BASE=/u01/base
GRID_HOME=/u01/11.2.0/grid
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=/u01/oraInventory
C00=-O'"-debug"'
C01=-O"\"CLUSTER_NODES={strkf42,strkf43}\""
C02="-O\"LOCAL_NODE=$THIS_NODE\""
perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C00 $C01 $C02
==================================================================
Run ./start.sh on each node of the new cluster; after successful completion, you will be prompted to run orainstRoot.sh and root.sh on each node of your new cluster.
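If you prefer to drive the whole roll-out from a single machine, a small wrapper could copy start.sh to each new node and run it over ssh, e.g. (a sketch only; it assumes passwordless ssh for the oracle user and the node names used above):

==================================================================
#!/bin/sh
# copy start.sh to each node of the new cluster and run it there, one node at a time
for node in strkf42 strkf43
do
  scp /mnt/sware/start.sh ${node}:/tmp/start.sh
  ssh ${node} "chmod +x /tmp/start.sh && /tmp/start.sh"
done
==================================================================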

It is now time to configure the new cluster; this can be done via the Configuration Wizard (a GUI) or silently via a response file.

1.3.4 Launch the Configuration Wizard

The Configuration Wizard helps you to prepare the new crsconfig_params file, which is copied across all nodes of the cluster, prompts you to run the root.sh script (which calls the rootconfig script), and runs the cluster post-install verifications. You will need to have the list of public, private, and virtual IP addresses, ASM devices, SCAN names, etc. This article assumes that you are familiar with these requirements and does not go into further detail.


/u01/11.2.0/grid/crs/config/config.sh

The Configuration Wizard allows you to record a responseFile. The following is an example responseFile generated by the 11.2.0.3 Configuration Wizard.

Filename config.rsp
==================================================================
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
INVENTORY_LOCATION=/u01/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/base
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=oinstall
oracle.install.crs.config.gpnp.scanName=strkf-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=strkf
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=strkf42.us.oracle.com:strkf42-vp.us.oracle.com,strkf43.us.oracle.com:strkf43-vp.us.oracle.com
#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetMask:InterfaceType
# where InterfaceType can be either "1", "2", or "3"
# (1 indicates public, 2 indicates private, and 3 indicates the interface is not used)
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=eth0:130.xx.xx.0:1,eth1:10.xx.xx.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.asm.SYSASMPassword=Oracle_11
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=8
oracle.install.asm.diskGroup.disks=/dev/mapper/lun01,/dev/mapper/lun02
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/lun0*
oracle.install.asm.monitorPassword=Oracle_11
oracle.install.asm.upgradeASM=false
[ConfigWizard]
oracle.install.asm.useExistingDiskGroup=false
==================================================================

To run config.sh silently:-
As oracle user:
cd /u01/11.2.0/grid/crs/config
./config.sh -silent -responseFile /oracle2/copy_grid/config.rsp -ignoreSysPrereqs -ignorePrereq

Note that -ignoreSysPrereqs and -ignorePrereq are required, or config.sh will fail due to an incorrectly flagged missing rpm. In addition, the SYSASM passwords used in this example do not conform to the recommended standards.

==================================================================
Example output:-

[WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.

As a root user, execute the following script(s):
1. /u01/11.2.0/grid/root.sh

Execute /u01/11.2.0/grid/root.sh on the following nodes:
[strkf42, strkf43]

Successfully Setup Software.
==================================================================
Note: config.sh only needs to be run on one node of the cluster; all required files are propagated to the other nodes within the cluster.

At this point you can see that it is necessary to run root.sh for a second time. The first time it was run was during the clone.pl process; at that point, the $GRID_HOME/crs/crsconfig/rootconfig.sh file was empty. Now that config.sh has been run, the root.sh and rootconfig.sh files are populated.

root.sh takes a little time to run; you should ensure that it has completed successfully on the first node before running root.sh on any other nodes.

You can tail the log to verify when it has completed.
Example output from $ORACLE_HOME/install/root__.log:

1. ONLINE 6d5789a956204fc2bfab68cab2f2ba06 (/dev/mapper/lun01) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'strkf43'
CRS-2676: Start of 'ora.asm' on 'strkf43' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'strkf43'
CRS-2676: Start of 'ora.DATA.dg' on 'strkf43' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
=================================================================

root.sh can now be run on all other nodes in the cluster.
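Once root.sh has completed on every node, the state of the new cluster can be checked with CRSCTL, for example:

/u01/11.2.0/grid/bin/crsctl check cluster -all
/u01/11.2.0/grid/bin/crsctl stat res -t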

 

 
