Oracle RAC Node Addition and Deletion for Clusterware and Oracle Software
Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode
1.
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To use these procedures as shown, your $CRS_HOME environment variable must identify your successfully installed Oracle Clusterware home.
2.
Go to CRS_home/oui/bin and run the addNode.sh script.
3.
The Oracle Universal Installer (OUI) displays the Node Selection Page on which you should select the node or nodes that you want to add and click Next.
4.
Verify the entries that OUI displays on the Summary Page and click Next.
5.
Run the rootaddNode.sh script from the CRS_home/install/ directory on the node from which you are running OUI.
6.
Run the orainstRoot.sh script on the new node if OUI prompts you to do so.
Note:
The orainstRoot.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
7.
Run the root.sh script on the new node from CRS_home to start Oracle Clusterware on the new node.
8.
Obtain the remote port identifier, which you need to know for the next step, by running the following command on the existing node from the CRS_home/opmn/conf directory:
cat ons.config
9.
From the CRS_home/bin directory on an existing node, run the Oracle Notification Service (RACGONS) utility as in the following example where remote_port is the port number from the previous step and node2 is the name of the node that you are adding:
./racgons add_config node2:remote_port
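For example, if the ons.config file on the existing node contains entries similar to the following (the port values shown here are illustrative only):
localport=6113
remoteport=6200
loglevel=3
useocr=on
then the ONS remote port is 6200 and the command for this step would be:
./racgons add_config node2:6200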
Adding an Oracle Clusterware Home to a New Node Using OUI in Silent Mode
1.
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To use these procedures as shown, your $CRS_HOME environment variable must identify your successfully installed Oracle Clusterware home.
2.
Go to CRS_home/oui/bin and run the addNode.sh script using the following syntax, where node2 is the name of the new node that you are adding, node2-priv is the private node name for the new node, and node2-vip is the VIP name for the new node:
./addNode.sh -silent "CLUSTER_NEW_NODES={node2}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"
Note:
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
3.
Perform OUI-related steps 5 through 9 from the previous section about using OUI interactively under the heading "Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode".
Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Interactive Mode
1.
Ensure that you have successfully installed Oracle with the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, your $ORACLE_HOME environment variable must identify your successfully installed Oracle home.
2.
Go to Oracle_home/oui/bin and run the addNode.sh script.
3.
When OUI displays the Node Selection Page, select the node or nodes to be added and click Next.
4.
Verify the entries that OUI displays on the Summary Page and click Next.
5.
Run the root.sh script on the new node from Oracle_home when OUI prompts you to do so.
Note:
The root.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command on exactly one node in the cluster to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
6.
On the new node, run the Oracle Net Configuration Assistant (NETCA) to add a Listener.
7.
Use Enterprise Manager or DBCA to add an instance as described under the heading "Step 5: Adding Database Instances to New Nodes".
Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Silent Mode
1.
Ensure that you have successfully installed Oracle with the Oracle RAC software on at least one node in your cluster environment. To use these procedures, your $ORACLE_HOME environment variable must identify your successfully installed Oracle home; these procedures assume that the node to be added is named node2.
2.
Go to Oracle_home/oui/bin and run the addNode.sh script using the following syntax:
./addNode.sh -silent "CLUSTER_NEW_NODES={node2}"
Note:
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command on exactly one node in the cluster to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
3.
Perform steps 5 through 7 from the previous section about using OUI interactively under the heading "Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Interactive Mode".
Deleting an Oracle Clusterware Home from an Existing Node
The procedures for deleting an Oracle Clusterware home assume that you have successfully installed the clusterware on the node from which you want to delete the Oracle Clusterware home. You can use either of the following procedures to delete an Oracle Clusterware home from a node:
*
Deleting an Oracle Clusterware Home Using OUI in Interactive Mode
*
Deleting an Oracle Clusterware Home Using OUI in Silent Mode
Deleting an Oracle Clusterware Home Using OUI in Interactive Mode
1.
Perform the delete node operation for database homes as described in the section titled "Deleting an Oracle Home with Oracle RAC Using OUI in Interactive Mode" or in "Deleting an Oracle Home with Oracle RAC Using OUI in Silent Mode", and ensure that the $CRS_HOME environment variable is defined to identify the appropriate Oracle Clusterware home on each node.
2.
If you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global flag during the installation, then skip this step. Otherwise, from a node that is going to remain in your cluster, from the CRS_home/bin directory, run the following command where node2 is the name of the node that you are deleting:
./oifcfg delif -node node2
3.
Obtain the remote port number, which you will use in the next step, using the following command from the CRS_home/opmn/conf directory:
cat ons.config
4.
From CRS_home/bin on a node that is going to remain in the cluster, run the Oracle Notification Service Utility (RACGONS) as in the following example where remote_port is the ONS remote port number that you obtained in the previous step and node2 is the name of the node that you are deleting:
./racgons remove_config node2:remote_port
5.
On the node to be deleted, run rootdelete.sh as the root user from the CRS_home/install directory. If you are deleting more than one node, then perform this step on all of the other nodes that you are deleting.
6.
From any node that you are not deleting, run the following command from the CRS_home/install directory as the root user where node2,node2-number represents the node and the node number that you want to delete:
./rootdeletenode.sh node2,node2-number
If necessary, identify the node number using the following command on the node that you are deleting:
CRS_home/bin/olsnodes -n
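For example, if CRS_home/bin/olsnodes -n returns output similar to the following (node names and numbers are illustrative), showing that node2 is node number 2:
node1   1
node2   2
then the command to delete node2 would be:
./rootdeletenode.sh node2,2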
7.
Perform this step only if you are using a non-shared Oracle home. On the node or nodes to be deleted, run the following command from the CRS_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:
./runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={node_to_be_deleted}"
CRS=TRUE -local
8.
Perform this step only if you are using a non-shared Oracle home. On the node that you are deleting, run OUI using the runInstaller command from the CRS_home/oui/bin directory to deinstall Oracle Clusterware.
9.
If you are using a non-shared Oracle home, on any node other than the node that you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your Oracle RAC database:
./runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE
If you are using a shared Oracle home, on any node other than the node that you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your Oracle RAC database:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=CRS_home
"CLUSTER_NODES={remaining_nodes_list}"
Deleting an Oracle Clusterware Home Using OUI in Silent Mode
1.
Perform steps 1 through 7 from the previous section about using OUI interactively under the heading "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode". Also ensure that the $CRS_HOME environment variable is defined to identify the appropriate Oracle Clusterware home on each node.
2.
Deinstall the Oracle Clusterware home from the node that you are deleting by running the following command from the CRS_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:
./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
3.
Perform step 9 from the previous section about using OUI interactively under the heading "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode".
10 Adding and Deleting Nodes and Instances on UNIX-Based Systems
Step 1: Connecting New Nodes to the Cluster
Complete the following procedures to connect the new nodes to the cluster and to prepare them to support your Oracle RAC database:
*
Making Physical Connections
*
Installing the Operating System
*
Creating Oracle Users
*
Verifying the Installation with the Cluster Verification Utility
*
Checking the Installation
Making Physical Connections
Connect the new nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. Refer to your hardware vendor documentation for details about this step.
Installing the Operating System
Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. Refer to your hardware vendor documentation for details about this process.
Creating Oracle Users
As root user, create the Oracle users and groups using the same user ID and group ID as on the existing nodes.
Verifying the Installation with the Cluster Verification Utility
Verify your installation using the Cluster Verification Utility (CVU) as in the following steps:
1.
From the directory CRS_home/bin on the existing nodes, run the CVU command to verify your installation at the post hardware installation stage as shown in the following example, where node_list is a comma-delimited list of nodes you want in your cluster:
cluvfy stage -post hwos -n node_list|all [-verbose]
This command causes CVU to verify your hardware and operating system environment at the post-hardware setup stage. After you have configured the hardware and operating systems on the new nodes, you can use this command to verify node reachability, for example, to all of the nodes from the local node. You can also use this command to verify user equivalence to all given nodes from the local node, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.
Note:
You can only use the all option with the -n argument if you have set the CV_NODELIST variable to represent the list of nodes on which you want to perform the CVU operation.
See Also:
"Using the Cluster Verification Utility" for more information about enabling and using the CVU
2.
From the CRS_home/bin directory on an existing node, run the CVU command to obtain a detailed comparison of the properties of the reference node with those of all of the other nodes that are part of your current cluster environment. In the following syntax, ref_node is a node in your existing cluster against which you want CVU to compare, for example, the newly added nodes that you specify with the comma-delimited list in node_list for the -n option, orainventory_group is the name of the Oracle Inventory group, and osdba_group is the name of the OSDBA group:
cluvfy comp peer [ -refnode ref_node ] -n node_list
[ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
Note:
For the reference node, select a node from your existing cluster nodes against which you want CVU to compare, for example, the newly added nodes that you specify with the -n option.
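For example, to compare a newly added node named node3 against an existing node named node1, assuming the common default group names oinstall and dba (substitute your own node and group names), you might run:
cluvfy comp peer -refnode node1 -n node3 -orainv oinstall -osdba dba -verbose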
Checking the Installation
To verify that your installation is configured correctly, perform the following steps:
1.
Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures in "Step 2: Extending Clusterware and Oracle Software to New Nodes".
2.
If you are not using a cluster file system, then determine the location in which the cluster software was installed on the existing nodes. Make sure that you have at least 250 MB of free space in the same location on each of the new nodes to install Oracle Clusterware. In addition, ensure that you have enough free space on each new node to install the Oracle binaries.
3.
Ensure that the Oracle Cluster Registry (OCR) and the voting disk are accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes.
4.
Verify user equivalence between an existing node and the new nodes, in both directions, using rsh or ssh.
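For example, one simple way to confirm user equivalence with ssh (assuming ssh is your configured remote shell and node3 is an illustrative new node name) is to check that a remote command runs without a password prompt:
ssh node3 date
Run the command above as the oracle user from an existing node, and then run the equivalent command from the new node back to an existing node.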
After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to the clusterware. Configure the new nodes as members of the cluster by extending the cluster software to the new nodes as described in "Step 2: Extending Clusterware and Oracle Software to New Nodes".
Note:
Do not change a hostname after the Oracle Clusterware installation. This includes adding or deleting a domain qualification.
Step 2: Extending Clusterware and Oracle Software to New Nodes
The following topics describe how to add new nodes to the clusterware and to the Oracle database software layers using OUI:
*
Adding Nodes at the Vendor Clusterware Layer
*
Adding Nodes at the Oracle Clusterware Layer
Adding Nodes at the Vendor Clusterware Layer
Add the new nodes at the clusterware layer according to the vendor clusterware documentation. For systems using shared storage for the Oracle Clusterware home, ensure that the existing clusterware is accessible by the new nodes. Also ensure that the new nodes can be brought online as part of the existing cluster. Proceed to the next section to add the nodes at the clusterware layer.
Adding Nodes at the Oracle Clusterware Layer
Before beginning this procedure, ensure that your existing nodes have the $CRS_HOME environment variable set correctly. The OUI requires access to the private interconnect that you verified as part of the installation validation in Step 1. If OUI cannot make the required connections, then you will not be able to complete the following steps to add nodes.
Note:
Instead of performing the first six steps of this procedure, you can alternatively run the addNode.sh script in silent mode as described at the end of this section.
1.
On one of the existing nodes, go to the CRS_home/oui/bin directory and run the addNode.sh script to start OUI.
Note:
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
2.
The OUI runs in add node mode and the OUI Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page appears.
3.
The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.
4.
If you are using vendor clusterware, then the public node names automatically appear in the lower table. Click Next and OUI verifies connectivity on the existing nodes and on the new nodes. The verifications that OUI performs include determining whether:
*
The nodes are up
*
The nodes and private node names are accessible by way of the network
Note:
If any of the existing nodes are down, you can proceed with the procedure. However, once the nodes are up, you must run the following command on each of those nodes only if you are using a local (non-shared) Oracle home:
runInstaller -updateNodeList -local
"CLUSTER_NODES={available_node_list}"
ORACLE_HOME=CRS_home
This command should be run from the CRS_home/oui/bin directory, where available_node_list is a comma-delimited list of all nodes currently in the cluster and CRS_home is the Oracle Clusterware home directory.
*
The virtual hostnames are not already in use on the network.
5.
If any verifications fail, then OUI re-displays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all the checks succeed, then OUI displays the Node Addition Summary page.
Note:
Oracle strongly recommends that you install Oracle Clusterware on every node in the cluster on which you have installed vendor clusterware.
6.
The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:
*
The source for the add node process, which in this case is the Oracle Clusterware home
*
The private node names that you entered for the new nodes
*
The new nodes that you entered
*
The required and available space on the new nodes
*
The installed products listing the products that are already installed on the existing Oracle Clusterware home
Click Next and OUI displays the Cluster Node Addition Progress page.
7.
The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the four phases of the node addition process and the phases' statuses as follows:
*
Instantiate Root Scripts—Instantiates rootaddNode.sh with the public nodes, private node names, and virtual hostnames that you entered on the Cluster Node Addition page.
*
Copy the Oracle Clusterware home to the New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on a cluster file system.
*
Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.
*
Run rootaddNode.sh and root.sh—Displays a dialog prompting you to run the rootaddNode.sh script. from the local node (the node on which you are running OUI) and to run the root.sh script. on the new nodes. If OUI detects that the new nodes do not have an inventory location, then OUI instructs you to run orainstRoot.sh on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains information about which scripts you need to run and on which nodes you need to run them.
The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. When all phases have completed, OUI displays the End of Node Addition page; click Exit to end the OUI session.
For shared Oracle home users, run the following command only once on any of the nodes from the $ORACLE_HOME/oui/bin directory, where nodes_list is a comma-delimited list of the nodes that are part of your cluster:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=Oracle_home
CLUSTER_NODES=nodes_list
8.
Obtain the remote port number, which you will use in the next step, using the following command from the CRS_home/opmn/conf directory:
cat ons.config
If you are using a shared Oracle home, verify that the usesharedinstall=true entry is included in the $ORACLE_HOME/opmn/conf/ons.config file. If it is not, do the following:
1.
Navigate to the $CRS_HOME/opmn/conf directory.
2.
Enter the following:
$ cat >> ons.config
usesharedinstall=true
3.
Press Ctrl-D.
4.
Restart the Oracle Notification Server.
9.
Run the racgons utility from the bin subdirectory of the Oracle Clusterware home to configure the Oracle Notification Services (ONS) port number. Use the following command, supplying the name of the node that you are adding for new_node_name and the remote port number obtained in the previous step:
racgons add_config new_node_name:remote_port
10.
Check that your cluster is integrated and that the cluster is not divided into partitions by completing the following operations:
*
Run the following CVU command to obtain detailed output for verifying cluster manager integrity on all of the nodes that are part of your Oracle RAC environment:
cluvfy comp clumgr -n all [-verbose]
*
Use the following CVU command to obtain detailed output for verifying cluster integrity on all of the nodes that are part of your Oracle RAC environment:
cluvfy comp clu [-verbose]
*
Use the following command to perform an integrated validation of the Oracle Clusterware setup on all of the configured nodes, both the pre-existing nodes and the nodes that you have added:
cluvfy stage -post crsinst -n all [-verbose]
See Also:
"Using the Cluster Verification Utility" for more information about enabling and using the CVU
You can optionally run addNode.sh in silent mode, replacing steps 1 through 7, as follows where nodeI, nodeI+1, and so on are the new nodes that you are adding:
addNode.sh -silent "CLUSTER_NEW_NODES={nodeI, nodeI+1, … nodeI+n}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={pnI, pnI+1, … pnI+n}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vipI, vipI+1,…,vipI+n}"
You can alternatively specify the variable=value entries in a response file and run the addNode.sh script as follows:
addNode.sh -silent -responseFile filename
or
addNode.bat -silent -responseFile filename
See Also:
Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files
Notes:
*
Command-line values always override response file values.
*
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, then Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNodeList2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList -noClusterEnabled
ORACLE_HOME=/install/sana/rachome1 CLUSTER_NODES=cpqshow1,cpqshow2 CRS=false
"INVENTORY_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_command_is_to_be_run
Run rootaddNode.sh on the local node, or the node on which you are performing this procedure, and run root.sh on the new nodes. If OUI detects that the new nodes do not have an inventory location, then OUI instructs you to run orainstRoot.sh on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains the information about which scripts you need to run and on which nodes you need to run them.
After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended the Oracle Clusterware home from your existing Oracle Clusterware home to the new nodes. Proceed to "Step 3: Preparing Storage on New Nodes" to prepare storage for Oracle RAC on the new nodes.
Step 3: Preparing Storage on New Nodes
To extend an existing Oracle RAC database to your new nodes, configure the shared storage for the new instances to be added on new nodes so that the storage type is the same as the storage that is already used by the existing nodes' instances. Prepare the same type of storage on the new nodes as you are using on the other nodes in the Oracle RAC environment that you want to extend as follows:
*
Automatic Storage Management (ASM)
If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes.
*
Oracle Cluster File System (OCFS)
If you are using Oracle Cluster File Systems, then make sure that the new nodes can access the cluster file systems in the same way that the other nodes access them.
*
Vendor Cluster File Systems
If your cluster database uses vendor cluster file systems, then configure the new nodes to use the vendor cluster file systems. Refer to the vendor clusterware documentation for the pre-installation steps for your UNIX platform.
*
Raw Device Storage
If your cluster database uses raw devices, then prepare the new raw devices by following the procedures described in the next section.
See Also:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows for more information about the Oracle Cluster File System
Run the following command to verify your cluster file system and obtain detailed output, where nodelist includes both the pre-existing nodes and the newly added nodes, and file_system is the name of the file system that you used for the Oracle Cluster File System:
cluvfy comp cfs -n nodelist -f file_system [-verbose]
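For example, assuming an illustrative OCFS mount point of /ocfs/oradata on a three-node cluster (substitute your own node names and file system), the command might be:
cluvfy comp cfs -n node1,node2,node3 -f /ocfs/oradata -verbose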
Using DBCA in Interactive Mode to Add Database Instances to New Nodes
To add a database instance to a new node with DBCA in interactive mode, perform the following procedure:
1.
Start the Database Configuration Assistant (DBCA) by entering dbca at the system prompt from the bin directory in the Oracle_home directory.
The DBCA displays the Welcome page for Oracle RAC. Click Help on any DBCA page for additional information.
2.
Select Oracle Real Application Clusters database, click Next, and DBCA displays the Operations page.
3.
Select Instance Management, click Next, and DBCA displays the Instance Management page.
4.
Select Add Instance and click Next. The DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.
5.
From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Enter a user name and password for a database user that has SYSDBA privileges. Click Next and DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the Oracle RAC database that you selected.
6.
Click Next to add a new instance and DBCA displays the Adding an Instance page.
7.
On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that DBCA provides does not match your existing instance naming scheme. Then select the new node name from the list, click Next, and DBCA displays the Services Page.
8.
Enter the services information for the new node's instance, click Next, and DBCA displays the Instance Storage page.
9.
If you are using raw devices or raw partitions, then on the Instance Storage page select the Tablespaces folder and expand it. Select the undo tablespace storage object and a dialog appears on the right-hand side. Change the default datafile name to the raw device name for the tablespace.
10.
If you are using raw devices or raw partitions or if you want to change the default redo log group file name, then on the Instance Storage page select and expand the Redo Log Groups folder. For each redo log group number that you select, DBCA displays another dialog box. Enter the raw device name that you created in the section "Raw Device Storage Preparation for New Nodes" in the File Name field.
11.
If you are using a cluster file system, then click Finish on the Instance Storage page. If you are using raw devices, then repeat step 10 for all of the other redo log groups, click Finish, and DBCA displays a Summary dialog.
12.
Review the information on the Summary dialog and click OK, or click Cancel to end the instance addition operation. The DBCA displays a progress dialog showing DBCA performing the instance addition operation. When DBCA completes the instance addition operation, DBCA displays a dialog asking whether you want to perform another operation.
13.
After you terminate your DBCA session, run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges where nodelist consists of the newly added nodes:
cluvfy comp admprv -o db_config -d oracle_home -n nodelist [-verbose]
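For example, assuming an illustrative Oracle home path and a newly added node named node3 (substitute your own values), the command might be:
cluvfy comp admprv -o db_config -d /u01/app/oracle/product/10.2.0/db_1 -n node3 -verbose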
After adding the instances to the new nodes using the steps described in this section, perform any needed service configuration procedures as described in Chapter 6, "Introduction to Workload Management".
Using DBCA in Silent Mode to Add Database Instances to New Nodes
You can use the Database Configuration Assistant (DBCA) in silent mode to add instances to nodes onto which you have extended an Oracle Clusterware home and an Oracle home. Use the following syntax to perform this operation, where node is the node onto which you want to add the instance, gdbname is the global database name, instname is the name of the new instance, sysdba is the name of an Oracle user with SYSDBA privileges, and password is the password for that user:
dbca -silent -addInstance -nodeList node -gdbName gdbname [-instanceName instname] -sysDBAUserName sysdba -sysDBAPassword password
Note that you only need to provide an instance name if you want to override the Oracle naming convention for Oracle RAC instance names.
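For example, assuming an illustrative global database name of orcl, a new node named node3, and an instance name of orcl3 (substitute your own values and SYSDBA credentials), the command might resemble:
dbca -silent -addInstance -nodeList node3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword password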
After you have completed either of the DBCA procedures in this section, DBCA has successfully added the new instance to the new node and completed the following steps:
*
Created and started an ASM instance on each new node if the existing instances were using ASM
*
Created a new database instance on each new node
*
Created and configured high availability components
*
Created the Oracle Net configuration
*
Started the new instance
*
Created and started services if you entered services information on the Services Configuration page
Adding Nodes that Already Have Clusterware and Oracle Software to a Cluster
Before beginning this procedure, ensure that your existing nodes have the $CRS_HOME and $ORACLE_HOME environment variables set correctly. To add nodes to a cluster that already have clusterware and Oracle software installed on them, you must configure the new nodes with the Oracle software that is on the existing nodes of the cluster. To do this, you must run two versions of an OUI process: one for the clusterware and one for the database layer as described in the following procedures:
1.
Add new nodes at the Oracle Clusterware layer by running OUI from the Oracle Clusterware home on an existing node, using the following command:
CRS_home/oui/bin/addNode.sh -noCopy
2.
Add new nodes at the Oracle software layer by running OUI from the Oracle home as follows:
Oracle_home/oui/bin/addNode.sh -noCopy
In the -noCopy mode, OUI performs all add node operations except for the copying of software to the new nodes.
Note:
Oracle recommends that you back up your voting disk and OCR files after you complete the node addition process.
Using DBCA in Interactive Mode to Delete Database Instances from Existing Nodes
To delete an instance using DBCA in interactive mode, perform the following steps:
1.
Start DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page select Oracle Real Application Clusters Database, click Next, and DBCA displays the Operations page.
2.
On the DBCA Operations page, select Instance Management, click Next, and DBCA displays the Instance Management page.
3.
On the Instance Management page, Select Delete Instance, click Next, and DBCA displays the List of Cluster Databases page.
4.
Select an Oracle RAC database from which to delete an instance. Enter a user name and password for the database user that has SYSDBA privileges. Click Next and DBCA displays the List of Cluster Database Instances page. The List of Cluster Database Instances page shows the instances that are associated with the Oracle RAC database that you selected and the status of each instance.
5.
Select an instance to delete and click Finish.
6.
If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database.
7.
Review the information about the instance deletion operation on the Summary page and click OK. Otherwise, click Cancel to cancel the instance deletion operation. If you click OK, then DBCA displays a Confirmation dialog.
8.
Click OK on the Confirmation dialog to proceed with the instance deletion operation and DBCA displays a progress dialog showing that DBCA is performing the instance deletion operation. During this operation, DBCA removes the instance and the instance's Oracle Net configuration. When DBCA completes this operation, DBCA displays a dialog asking whether you want to perform another operation.
9.
Click No and exit DBCA, or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.
Using DBCA in Silent Mode to Delete Instances from Existing Nodes
Use DBCA to delete a database instance from a node as follows, where the variables are the same as those in the preceding add instance command:
dbca -silent -deleteInstance [-nodeList node] -gdbName gdbname -instanceName instname -sysDBAUserName sysdba -sysDBAPassword password
You only need to provide a node name if you are deleting an instance from a node other than the one on which you are running DBCA.
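For example, assuming the same illustrative values used earlier (database orcl, instance orcl3 on node node3; substitute your own values and SYSDBA credentials), the command might resemble:
dbca -silent -deleteInstance -nodeList node3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword password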
At this point, you have accomplished the following:
*
De-registered the selected instance from its associated Oracle Net Services listeners
*
Deleted the selected database instance from the instance's configured node
*
Removed the Oracle Net configuration
*
Deleted the Optimal Flexible Architecture (OFA) directory structure from the instance's configured node
Step 2: Deleting Nodes from Oracle Real Application Clusters Databases
Before beginning these procedures, ensure that your existing nodes have the $CRS_HOME and $ORACLE_HOME environment variables set correctly. Use the following procedures to delete nodes from Oracle clusters on UNIX-based systems:
Note:
You can perform some of the steps in this procedure in silent mode as described at the end of this section.
1.
If there are instances on the node that you want to delete, then perform the procedures in the section titled "Step 1: Deleting Instances from Oracle Real Application Clusters Databases" before executing these procedures. If you are deleting more than one node, then delete the instances from all the nodes that you are going to delete.
2.
If you use ASM, then perform the procedures in the following section, "Step 3: ASM Instance Clean-Up Procedures for Node Deletion".
3.
If this is the Oracle home from which the node-specific listener named LISTENER_nodename runs, then use NETCA to remove this listener. If necessary, re-create this listener in another home.
See Also:
Oracle Database Net Services Administrator's Guide for more information about NETCA
4.
For a non-shared home, on each node that you are deleting, perform the following two steps:
*
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home
CLUSTER_NODES="" -local
The runInstaller command is located in the directory Oracle_home/oui/bin. Using this command does not launch an installer GUI.
*
Run OUI from the home and deinstall this home. Make sure that you choose the home to be removed and not just the products under that home.
5.
If you are using a non-shared Oracle home, from an existing node, run the following command where node_list is a comma-delimited list of nodes that remain in the cluster:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home "CLUSTER_NODES={node_list}"
If you are using a shared Oracle home, from an existing node, run the following command where node_list is a comma-delimited list of nodes that remain in the cluster:
runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=Oracle_home
"CLUSTER_NODES={node_list}"
6.
Run the following commands to remove node-specific interface configurations, where nodename is the name of the node that you want to delete and remote_port is the ONS remote port number on the node being deleted:
racgons remove_config nodename:remote_port
oifcfg delif -node nodename
7.
On the node that you are deleting, run the command CRS_home/install/rootdelete.sh to disable the Oracle Clusterware applications that are on the node. Only run this command once and use the nosharedhome argument if you are using a local file system. The default for this command is sharedhome which prevents you from updating the permissions of local files such that they can be removed by the oracle user.
If the ocr.loc file is on a shared file system, then run the command CRS_home/install/rootdelete.sh remote sharedvar. If the ocr.loc file is not on a shared file system, then run the CRS_home/install/rootdelete.sh remote nosharedvar command.
If you are deleting more than one node from your cluster, then repeat this step on each node that you are deleting.
8.
Run CRS_home/install/rootdeletenode.sh on any remaining node in the cluster to delete the nodes from the Oracle cluster and to update the Oracle Cluster Registry (OCR). If you are deleting multiple nodes, then run the command CRS_home/install/rootdeletenode.sh node1,node1-number,node2,node2-number,... nodeN,nodeN-number where node1 through nodeN is a list of the nodes that you want to delete, and node1-number through nodeN-number represents the node number. To determine the node number of any node, run the command CRS_home/bin/olsnodes -n. To delete only one node, enter the node name and number of the node that you want to delete with the command CRS_home/install/rootdeletenode.sh node1,node1-number.
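For example, to delete two nodes whose illustrative names and node numbers are node3 (number 3) and node4 (number 4), the command would be:
CRS_home/install/rootdeletenode.sh node3,3,node4,4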
9.
For a non-shared Oracle home, on each node that you are deleting, perform the following two steps:
*
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
CLUSTER_NODES="" -local CRS=true
The runInstaller command is located in the CRS_home/oui/bin directory. Executing this command does not launch an installer GUI.
*
Run OUI from the home and deinstall this home. Make sure that you choose the home to be removed and not just the products under that home.
10.
If using a non-shared Oracle home, from an existing node, run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={nodelist}"
where nodelist is a comma-delimited list of nodes that remain in the cluster.
For shared Oracle home users, run the following command on an existing node from the $ORACLE_HOME/oui/bin directory, where nodes_list is a comma-delimited list of the nodes that remain in your cluster:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=CRS_home
CLUSTER_NODES=nodes_list
11.
Run the following command to verify that the node is no longer a member of the cluster and to verify that the Oracle Clusterware components have been removed from this node:
cluvfy comp crs -n all [-verbose]
The response from this command should not contain any information about the node that you deleted; the deleted node should no longer have the Oracle Clusterware components on it. This verifies that you have deleted the node from the cluster.
As mentioned earlier in this procedure, you can optionally delete nodes from Oracle Real Application Clusters databases in silent mode by completing the following steps:
1.
Complete steps 1 through 3 of the procedure described at the start of this section under the heading "Step 2: Deleting Nodes from Oracle Real Application Clusters Databases".
2.
Depending on whether you have a shared or non-shared Oracle home, complete one of the following two procedures:
*
For a shared home, run the following command on each of the nodes that are to be deleted:
./runInstaller -detachHome -local ORACLE_HOME=Oracle_home
*
For a non-shared home, on each node that you are deleting, perform. the following two steps:
o
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home
CLUSTER_NODES="" -local
The runInstaller command is located in the directory Oracle_home/oui/bin. Using this command does not launch an installer GUI.
o
Deinstall the Oracle home from the node that you are deleting by running the following command from the Oracle_home/oui/bin directory:
./runInstaller -deinstall -silent "REMOVE_HOMES={Oracle_home}"
3.
Complete steps 5 through 10 from the procedure described in "Step 2: Deleting Nodes from Oracle Real Application Clusters Databases".
4.
Depending on whether you have a shared or non-shared Oracle Clusterware home, complete one of the following two procedures:
*
For shared homes, do not perform a deinstall operation. Instead, perform a detach home operation on the node that you are deleting. To do this, run the following command from CRS_home/oui/bin:
./runInstaller -detachHome ORACLE_HOME=CRS_home
*
For a non-shared home, on each node that you are deleting, perform the following two steps:
o
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
CLUSTER_NODES="" -local CRS=true
The runInstaller command is located in the directory CRS_home/oui/bin. Executing this command does not launch an installer GUI.
o
On each node that you are deleting, perform the following step from the CRS_home/oui/bin directory:
./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
where CRS_home is the name given to the Oracle Clusterware home you are deleting.
5.
Complete steps 10 and 11 of the procedure described at the start of this section.
Step 3: ASM Instance Clean-Up Procedures for Node Deletion
If you are using ASM, then perform the following procedure to remove the ASM instances:
1.
Stop all of the databases that use the ASM instance that is running from the Oracle home that is on the node that you are deleting.
2.
On the node that you are deleting, if this is the Oracle home from which the ASM instance runs, then remove the ASM configuration by completing the following steps. Run the command srvctl stop asm -n node_name for all of the nodes on which this Oracle home exists, and then run the command srvctl remove asm -n node_name for all nodes on which this Oracle home exists. If there are databases on this node that use ASM, then use DBCA Disk Group Management to create an ASM instance in one of the existing Oracle homes on the node, and restart the databases if you stopped them.
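For example, assuming the node being deleted is named node3 (an illustrative name), the ASM stop and remove commands from this step would be:
srvctl stop asm -n node3
srvctl remove asm -n node3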
3.
If you are using a cluster file system for your ASM Oracle home, then ensure that your local node has the $ORACLE_BASE and $ORACLE_HOME environment variables set correctly. Run the following commands from a node other than the node that you are deleting, where node_number is the node number of the node that you are deleting:
rm -r $ORACLE_BASE/admin/+ASMnode_number
rm -f $ORACLE_HOME/dbs/*ASMnode_number
If you are not using a cluster file system for your ASM Oracle home, then run the rm or delete commands mentioned in the previous step on each node on which the Oracle home exists.
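For example, if the node that you are deleting is node number 3, the clean-up commands from the previous step would be:
rm -r $ORACLE_BASE/admin/+ASM3
rm -f $ORACLE_HOME/dbs/*ASM3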
12 Design and Deployment Techniques
1.
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To use these procedures as shown, your $CRS_HOME environment variable must identify your successfully installed Oracle Clusterware home.
2.
Go to CRS_home/oui/bin and run the addNode.sh script.
3.
The Oracle Universal Installer (OUI) displays the Node Selection Page on which you should select the node or nodes that you want to add and click Next.
4.
Verify the entries that OUI displays on the Summary Page and click Next.
5.
Run the rootaddNode.sh script. from the CRS_home/install/ directory on the node from which you are running OUI.
6.
Run the orainstRoot.sh script. on the new node if OUI prompts you to do so.
Note:
The orainstRoot.sh script. updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode List2006-08-11
_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
7.
Run the root.sh script. on the new node from CRS_home to start Oracle Clusterware on the new node.
8.
Obtain the remote port identifier, which you need to know for the next step, by running the following command on the existing node from the CRS_home/opmn/conf directory:
cat ons.config
9.
From the CRS_home/bin directory on an existing node, run the Oracle Notification Service (RACGONS) utility as in the following example where remote_port is the port number from the previous step and node2 is the name of the node that you are adding:
./racgons add_config node2:remote_port
Adding an Oracle Clusterware Home to a New Node Using OUI in Silent Mode
1.
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To use these procedures as shown, your $CRS_HOME environment variable must identify your successfully installed Oracle Clusterware home.
2.
Go to CRS_home/oui/bin and run the addNode.sh script. using the following syntax where node2 is the name of the new node that you are adding, node2-priv is the private node name for the new node, and node2-vip is the VIP name for the new node:
./addNode.sh –silent "CLUSTER_NEW_NODES={node2}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"
Note:
The addnode.sh script. updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode List2006-08-11
_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
3.
Perform. OUI-related steps 5 through 9 from the previous section about using OUI interactively under the heading "Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode".
Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Interactive Mode
1.
Ensure that you have successfully installed Oracle with the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, your $ORACLE_HOME environment variable must identify your successfully installed Oracle home.
2.
Go to Oracle_home/oui/bin and run the addNode.sh script.
3.
When OUI displays the Node Selection Page, select the node or nodes to be added and click Next.
4.
Verify the entries that OUI displays on the Summary Page and click Next.
5.
Run the root.sh script. on the new node from Oracle_home when OUI prompts you to do so.
Note:
The root.sh script. updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode List2006-08-11
_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command on any one but only one of the nodes in the cluster to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
6.
On the new node, run the Oracle Net Configuration Assistant (NETCA) to add a Listener.
7.
Use the Enterprise Manager or DBCA to add an instance as described under the heading "Step 5: Adding Database Instances to New Nodes".
Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Silent Mode
1.
Ensure that you have successfully installed Oracle with the Oracle RAC software on at least one node in your cluster environment. To use these procedures, your $ORACLE_HOME environment variable must identify your successfully installed Oracle home and the node to be added is named node2.
2.
Go to Oracle_home/oui/bin and run the addNode.sh script. using the following syntax:
./addNode.sh -silent "CLUSTER_NEW_NODES={node2}"
Note:
The addnode.sh script. updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode List2006-08-11
_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command on any one but only one of the nodes in the cluster to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
3.
Perform. steps 5 through 7 from the previous section about using OUI interactively under the heading "Adding an Oracle Home with Oracle RAC to a New Node Using OUI in Interactive Mode".
Deleting an Oracle Clusterware Home from an Existing Node
The procedures for deleting an Oracle Clusterware home assume that you have successfully installed the clusterware on the node from which you want to delete the Oracle Clusterware home. You can use either of the following procedures to delete an Oracle Clusterware home from a node:
*
Deleting an Oracle Clusterware Home Using OUI in Interactive Mode
*
Deleting an Oracle Clusterware Home Using OUI in Silent Mode
Deleting an Oracle Clusterware Home Using OUI in Interactive Mode
1.
Perform the delete node operation for database homes as described in the section titled "Deleting an Oracle Home with Oracle RAC Using OUI in Interactive Mode" or use the procedure, "Deleting an Oracle Home with Oracle RAC Using OUI in Silent Mode", and ensure that the $CRS_HOME environment variable is defined to identify the appropriate Oracle Clusterware home on each node.
2.
If you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global flag during the installation, then skip this step. Otherwise, from a node that is going to remain in your cluster, from the CRS_home/bin directory, run the following command where node2 is the name of the node that you are deleting:
./oifcfg delif -node node2
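If you want to confirm what node-specific interface definitions exist before removing them, you can list them first. The following is a minimal sketch; it assumes oifcfg getif accepts the same -node argument, and both the node name node2 and the output shown are illustrative:
./oifcfg getif -node node2
# illustrative output, assuming one node-specific interconnect definition:
# eth1  10.10.10.0  node2  cluster_interconnect
./oifcfg delif -node node2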
3.
Obtain the remote port number, which you will use in the next step, using the following command from the CRS_home/opmn/conf directory:
cat ons.config
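The remote port appears on the remoteport line of ons.config. As a rough sketch, the output resembles the following; the port values shown are illustrative and will differ in your environment:
cat ons.config
# illustrative contents:
# localport=6113
# remoteport=6200
# loglevel=3
# useocr=on
In this sketch, 6200 is the value you would supply as remote_port in the next step.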
4.
From CRS_home/bin on a node that is going to remain in the cluster, run the Oracle Notification Service Utility (RACGONS) as in the following example where remote_port is the ONS remote port number that you obtained in the previous step and node2 is the name of the node that you are deleting:
./racgons remove_config node2:remote_port
5.
On the node to be deleted, run rootdelete.sh as the root user from the CRS_home/install directory. If you are deleting more than one node, then perform this step on all of the other nodes that you are deleting.
6.
From any node that you are not deleting, run the following command from the CRS_home/install directory as the root user where node2,node2-number represents the node and the node number that you want to delete:
./rootdeletenode.sh node2,node2-number
If necessary, identify the node number using the following command on the node that you are deleting:
CRS_home/bin/olsnodes -n
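For example, a minimal sketch assuming a two-node cluster in which node2 (node number 2) is being deleted; the olsnodes output format shown is illustrative:
CRS_home/bin/olsnodes -n
# illustrative output:
# node1   1
# node2   2
./rootdeletenode.sh node2,2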
7.
Perform this step only if you are using a non-shared Oracle home. On the node or nodes to be deleted, run the following command from the CRS_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:
./runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={node_to_be_deleted}"
CRS=TRUE -local
8.
Perform this step only if you are using a non-shared Oracle home. On the node that you are deleting, run OUI using the runInstaller command from the CRS_home/oui/bin directory to deinstall Oracle Clusterware.
9.
If you are using a non-shared Oracle home, on any node other than the node that you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your Oracle RAC database:
./runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE
If you are using a shared Oracle home, on any node other than the node that you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your Oracle RAC database:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=Oracle_home
"CLUSTER_NODES={remaining_nodes_list}"
Deleting an Oracle Clusterware Home Using OUI in Silent Mode
1.
Perform steps 1 through 7 from the previous section about using OUI interactively under the heading "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode". Also ensure that the $CRS_HOME environment variable is defined to identify the appropriate Oracle Clusterware home on each node.
2.
Deinstall the Oracle Clusterware home from the node that you are deleting by running the following command from the Oracle_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:
./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
3.
Perform step 9 from the previous section about using OUI interactively under the heading "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode".
10 Adding and Deleting Nodes and Instances on UNIX-Based Systems
Step 1: Connecting New Nodes to the Cluster
Complete the following procedures to connect the new nodes to the cluster and to prepare them to support your Oracle RAC database:
*
Making Physical Connections
*
Installing the Operating System
*
Creating Oracle Users
*
Verifying the Installation with the Cluster Verification Utility
*
Checking the Installation
Making Physical Connections
Connect the new nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. Refer to your hardware vendor documentation for details about this step.
Installing the Operating System
Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. Refer to your hardware vendor documentation for details about this process.
Creating Oracle Users
As root user, create the Oracle users and groups using the same user ID and group ID as on the existing nodes.
Verifying the Installation with the Cluster Verification Utility
Verify your installation using the Cluster Verification Utility (CVU) as in the following steps:
1.
From the directory CRS_home/bin on the existing nodes, run the CVU command to verify your installation at the post hardware installation stage as shown in the following example, where node_list is a comma-delimited list of nodes you want in your cluster:
cluvfy stage -post hwos -n node_list|all [-verbose]
This command causes CVU to verify your hardware and operating system environment at the post-hardware setup stage. After you have configured the hardware and operating systems on the new nodes, you can use this command to verify node reachability, for example, to all of the nodes from the local node. You can also use this command to verify user equivalence to all given nodes from the local node, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.
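For example, a minimal sketch assuming two existing nodes named node1 and node2 and one new node named node3 (all hypothetical host names):
cluvfy stage -post hwos -n node1,node2,node3 -verbose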
Note:
You can only use the all option with the -n argument if you have set the CV_NODELIST variable to represent the list of nodes on which you want to perform the CVU operation.
See Also:
"Using the Cluster Verification Utility" for more information about enabling and using the CVU
2.
From the directory CRS_home/bin on the existing nodes, run the CVU command to obtain a detailed comparison of the properties of the reference node with all of the other nodes that are part of your current cluster environment where ref_node is a node in your existing cluster against which you want CVU to compare, for example, the newly added nodes that you specify with the comma-delimited list in node_list for the -n option, orainventory_group is the name of the Oracle inventory group, and osdba_group is the name of the OSDBA group:
cluvfy comp peer [ -refnode ref_node ] -n node_list
[ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
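For example, a minimal sketch that compares a hypothetical new node node3 against the existing node node1, assuming the commonly used group names oinstall and dba; substitute the groups configured at your site:
cluvfy comp peer -refnode node1 -n node3 -orainv oinstall -osdba dba -verbose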
Note:
For the reference node, select a node from your existing cluster nodes against which you want CVU to compare, for example, the newly added nodes that you specify with the -n option.
Checking the Installation
To verify that your installation is configured correctly, perform the following steps:
1.
Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures in "Step 2: Extending Clusterware and Oracle Software to New Nodes".
2.
If you are not using a cluster file system, then determine the location on which your cluster software was installed on the existing nodes. Make sure that you have at least 250MB of free space on the same location on each of the new nodes to install Oracle Clusterware. In addition, ensure you have enough free space on each new node to install the Oracle binaries.
3.
Ensure that the Oracle Cluster Registry (OCR) and the voting disk are accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes.
4.
Verify user equivalence to and from an existing node to the new nodes using rsh or ssh.
After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to the clusterware. Configure the new nodes as members of the cluster by extending the cluster software to the new nodes as described in "Step 2: Extending Clusterware and Oracle Software to New Nodes".
Note:
Do not change a hostname after the Oracle Clusterware installation. This includes adding or deleting a domain qualification.
Step 2: Extending Clusterware and Oracle Software to New Nodes
The following topics describe how to add new nodes to the clusterware and to the Oracle database software layers using OUI:
*
Adding Nodes at the Vendor Clusterware Layer
*
Adding Nodes at the Oracle Clusterware Layer
Adding Nodes at the Vendor Clusterware Layer
Add the new nodes at the clusterware layer according to the vendor clusterware documentation. For systems using shared storage for the Oracle Clusterware home, ensure that the existing clusterware is accessible by the new nodes. Also ensure that the new nodes can be brought online as part of the existing cluster. Proceed to the next section to add the nodes at the clusterware layer.
Adding Nodes at the Oracle Clusterware Layer
Before beginning this procedure, ensure that your existing nodes have the $CRS_HOME environment variable set correctly. The OUI requires access to the private interconnect that you verified as part of the installation validation in Step 1. If OUI cannot make the required connections, then you will not be able to complete the following steps to add nodes.
Note:
Instead of performing the first six steps of this procedure, you can run the addNode.sh script in silent mode as described at the end of this section.
1.
On one of the existing nodes, go to the CRS_home/oui/bin directory and run the addNode.sh script to start OUI.
Note:
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode List2006-08-11
_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
2.
The OUI runs in add node mode and the OUI Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page appears.
3.
The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.
4.
If you are using vendor clusterware, then the public node names automatically appear in the lower table. Click Next and OUI verifies connectivity on the existing nodes and on the new nodes. The verifications that OUI performs include determining whether:
*
The nodes are up
*
The nodes and private node names are accessible by way of the network
Note:
If any of the existing nodes are down, you can proceed with the procedure. However, once the nodes are up, you must run the following command on each of those nodes only if you are using a local (non-shared) Oracle home:
runInstaller -updateNodeList -local
"CLUSTER_NODES={available_node_list}"
ORACLE_HOME=CRS_home
Run this command from the CRS_home/oui/bin directory, where available_node_list is a comma-delimited list of all nodes currently in the cluster and CRS_home is the Oracle Clusterware home directory.
*
The virtual hostnames are not already in use on the network.
5.
If any verifications fail, then OUI re-displays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all the checks succeed, then OUI displays the Node Addition Summary page.
Note:
Oracle strongly recommends that you install Oracle Clusterware on every node in the cluster on which you have installed vendor clusterware.
6.
The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:
*
The source for the add node process, which in this case is the Oracle Clusterware home
*
The private node names that you entered for the new nodes
*
The new nodes that you entered
*
The required and available space on the new nodes
*
The installed products listing the products that are already installed on the existing Oracle Clusterware home
Click Next and OUI displays the Cluster Node Addition Progress page.
7.
The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the four phases of the node addition process and the phases' statuses as follows:
*
Instantiate Root Scripts—Instantiates rootaddNode.sh with the public nodes, private node names, and virtual hostnames that you entered on the Cluster Node Addition page.
*
Copy the Oracle Clusterware home to the New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on a cluster file system.
*
Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.
*
Run rootaddNode.sh and root.sh—Displays a dialog prompting you to run the rootaddNode.sh script from the local node (the node on which you are running OUI) and to run the root.sh script on the new nodes. If OUI detects that the new nodes do not have an inventory location, then OUI instructs you to run orainstRoot.sh on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains information about which scripts you need to run and on which nodes you need to run them.
The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. After OUI displays the End of Node Addition page, click Exit to end the OUI session.
For shared Oracle home users, run the following command only once on any of the nodes from the $ORACLE_HOME/oui/bin directory, where nodes_list is a comma-delimited list of the nodes that are part of your cluster:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=Oracle_home
CLUSTER_NODES=nodes_list
8.
Obtain the remote port number, which you will use in the next step, using the following command from the CRS_home/opmn/conf directory:
cat ons.config
If you are using a shared Oracle home, verify that the usesharedinstall=true entry is included in the $ORACLE_HOME/opmn/conf/ons.config file. If it is not, complete the following steps (a one-command alternative is sketched after them):
1.
Navigate to the $CRS_HOME/opmn/conf directory.
2.
Enter the following:
$ cat >> ons.config
usesharedinstall=true
3.
Press Ctrl-D.
4.
Restart the Oracle Notification Server.
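Equivalently, the append and restart can be scripted in one pass. This is a minimal sketch; it assumes the onsctl utility in CRS_home/bin is the mechanism used at your site to restart the Oracle Notification Server, so adapt the restart step as needed:
cd $CRS_HOME/opmn/conf
echo "usesharedinstall=true" >> ons.config   # same effect as the cat/Ctrl-D steps above
$CRS_HOME/bin/onsctl stop                    # restart ONS; the onsctl location is an assumption
$CRS_HOME/bin/onsctl start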
9.
Run the racgons utility from the bin subdirectory of the Oracle Clusterware home to configure the Oracle Notification Services (ONS) port number. Use the following command, supplying the name of the node that you are adding for new_node_name and the remote port number obtained in the previous step:
racgons add_config new_node_name:remote_port
10.
Check that your cluster is integrated and that the cluster is not divided into partitions by completing the following operations:
*
Run the following CVU command to obtain detailed output for verifying cluster manager integrity on all of the nodes that are part of your Oracle RAC environment:
cluvfy comp clumgr -n all [-verbose]
*
Use the following CVU command to obtain detailed output for verifying cluster integrity on all of the nodes that are part of your Oracle RAC environment:
cluvfy comp clu [-verbose]
*
Use the following command to perform an integrated validation of the Oracle Clusterware setup on all of the configured nodes, both the pre-existing nodes and the nodes that you have added:
cluvfy stage -post crsinst -n all [-verbose]
See Also:
"Using the Cluster Verification Utility" for more information about enabling and using the CVU
You can optionally run addNode.sh in silent mode, replacing steps 1 through 7, as follows where nodeI, nodeI+1, and so on are the new nodes that you are adding:
addNode.sh -silent "CLUSTER_NEW_NODES={nodeI, nodeI+1, … nodeI+n}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={pnI, pnI+1, … pnI+n}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vipI, vipI+1,…,vipI+n}"
You can alternatively specify the variable=value entries in a response file and run the addNode.sh script as follows:
addNode.sh -silent -responseFile filename OR addNode.bat -silent -responseFile filename
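As a rough sketch, a response file for the same operation simply carries the variable=value entries shown above; the file name addnode.rsp and its location are hypothetical, and the exact response-file format is described in the guide cited below:
# addnode.rsp (hypothetical file)
CLUSTER_NEW_NODES={node3}
CLUSTER_NEW_PRIVATE_NODE_NAMES={node3-priv}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}
addNode.sh -silent -responseFile /install/sana/addnode.rsp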
See Also:
Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files
Notes:
*
Command-line values always override response file values.
*
The addnode.sh script updates the contents of the oraInventory file on all of the nodes in a cluster. If your cluster's oraInventory file is stored in a shared directory, such as in a directory on a cluster file system, Oracle Database displays the following error when you run addnode.sh:
SEVERE: Remote 'UpdateNodeList' failed on nodes: 'cpqshow2'.
Refer to '/install/sana/orainv/logs/UpdateNode
List2006-08-11_08-58-33AM.log' for details.
If Oracle Database displays this error, run the following command to complete the update of the oraInventory file:
/install/sana/rachome1/oui/bin/runInstaller -updateNodeList
-noClusterEnabled ORACLE_HOME=/install/sana/rachome1 CLUSTER
_NODES=cpqshow1,cpqshow2 CRS=false "INVENTORY
_LOCATION=/install/sana/orainv" LOCAL_NODE=node_on_which_
command_is_to_be_run
Run rootaddNode.sh on the local node, or the node on which you are performing this procedure, and run root.sh on the new nodes. If OUI detects that the new nodes do not have an inventory location, then OUI instructs you to run orainstRoot.sh on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains the information about which scripts you need to run and on which nodes you need to run them.
After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended your existing Oracle Clusterware home to the new nodes. Proceed to "Step 3: Preparing Storage on New Nodes" to prepare storage for Oracle RAC on the new nodes.
Step 3: Preparing Storage on New Nodes
To extend an existing Oracle RAC database to your new nodes, configure the shared storage for the new instances to be added on new nodes so that the storage type is the same as the storage that is already used by the existing nodes' instances. Prepare the same type of storage on the new nodes as you are using on the other nodes in the Oracle RAC environment that you want to extend as follows:
*
Automatic Storage Management (ASM)
If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes.
*
Oracle Cluster File System (OCFS)
If you are using Oracle Cluster File Systems, then make sure that the new nodes can access the cluster file systems in the same way that the other nodes access them.
*
Vendor Cluster File Systems
If your cluster database uses vendor cluster file systems, then configure the new nodes to use the vendor cluster file systems. Refer to the vendor clusterware documentation for the pre-installation steps for your UNIX platform.
*
Raw Device Storage
If your cluster database uses raw devices, then prepare the new raw devices by following the procedures described in the next section.
See Also:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows for more information about the Oracle Cluster File System
Run the following command to verify your cluster file system and obtain detailed output, where nodelist includes both the pre-existing nodes and the newly added nodes and file_system is the name of the file system that you used for the Oracle Cluster File System:
cluvfy comp cfs -n nodelist -f file_system [-verbose]
Using DBCA in Interactive Mode to Add Database Instances to New Nodes
To add a database instance to a new node with DBCA in interactive mode, perform the following procedure:
1.
Start the Database Configuration Assistant (DBCA) by entering dbca at the system prompt from the bin directory in the Oracle_home directory.
The DBCA displays the Welcome page for Oracle RAC. Click Help on any DBCA page for additional information.
2.
Select Oracle Real Application Clusters database, click Next, and DBCA displays the Operations page.
3.
Select Instance Management, click Next, and DBCA displays the Instance Management page.
4.
Select Add Instance and click Next. The DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.
5.
From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Enter a user name and password for the database user that has SYSDBA privileges. Click Next and DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the Oracle RAC database that you selected.
6.
Click Next to add a new instance and DBCA displays the Adding an Instance page.
7.
On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that DBCA provides does not match your existing instance naming scheme. Then select the new node name from the list, click Next, and DBCA displays the Services Page.
8.
Enter the services information for the new node's instance, click Next, and DBCA displays the Instance Storage page.
9.
If you are using raw devices or raw partitions, then on the Instance Storage page select the Tablespaces folder and expand it. Select the undo tablespace storage object and a dialog appears on the right-hand side. Change the default datafile name to the raw device name for the tablespace.
10.
If you are using raw devices or raw partitions or if you want to change the default redo log group file name, then on the Instance Storage page select and expand the Redo Log Groups folder. For each redo log group number that you select, DBCA displays another dialog box. Enter the raw device name that you created in the section "Raw Device Storage Preparation for New Nodes" in the File Name field.
11.
If you are using a cluster file system, then click Finish on the Instance Storage page. If you are using raw devices, then repeat step 10 for all of the other redo log groups, click Finish, and DBCA displays a Summary dialog.
12.
Review the information on the Summary dialog and click OK, or click Cancel to end the instance addition operation. The DBCA displays a progress dialog showing DBCA performing the instance addition operation. When DBCA completes the instance addition operation, DBCA displays a dialog asking whether you want to perform another operation.
13.
After you terminate your DBCA session, run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges where nodelist consists of the newly added nodes:
cluvfy comp admprv -o db_config -d oracle_home -n nodelist [-verbose]
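For example, a minimal sketch that reuses the Oracle home path from the earlier runInstaller examples and a hypothetical new node named node3:
cluvfy comp admprv -o db_config -d /install/sana/rachome1 -n node3 -verbose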
After adding the instances to the new nodes using the steps described in this section, perform any needed service configuration procedures as described in Chapter 6, "Introduction to Workload Management".
Using DBCA in Silent Mode to Add Database Instances to New Nodes
You can use the Database Configuration Assistant (DBCA) in silent mode to add instances to nodes onto which you have extended an Oracle Clusterware home and an Oracle home. Use the following syntax to perform this operation, where node is the node onto which you want to add the instance, gdbname is the global database name, instname is the name of the new instance, sysdba is the name of an Oracle user with SYSDBA privileges, and password is the password for that user:
dbca -silent -addInstance -nodeList node -gdbName gdbname [-instanceName instname] -sysDBAUserName sysdba -sysDBAPassword password
Note that you only need to provide an instance name if you want to override the Oracle naming convention for Oracle RAC instance names.
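For example, a minimal sketch with entirely hypothetical values: a new node node3, a global database name sales.example.com, an instance name sales3, and the SYS user:
dbca -silent -addInstance -nodeList node3 -gdbName sales.example.com -instanceName sales3 -sysDBAUserName sys -sysDBAPassword sys_password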
After you have completed either of the DBCA procedures in this section, DBCA has successfully added the new instance to the new node and completed the following steps:
*
Created and started an ASM instance on each new node if the existing instances were using ASM
*
Created a new database instance on each new node
*
Created and configured high availability components
*
Created the Oracle Net configuration
*
Started the new instance
*
Created and started services if you entered services information on the Services Configuration page
Adding Nodes that Already Have Clusterware and Oracle Software to a Cluster
Before beginning this procedure, ensure that your existing nodes have the $CRS_HOME and $ORACLE_HOME environment variables set correctly. To add nodes to a cluster that already have clusterware and Oracle software installed on them, you must configure the new nodes with the Oracle software that is on the existing nodes of the cluster. To do this, you must run the OUI add node process twice: once for the clusterware layer and once for the database layer, as described in the following procedures:
1.
Add new nodes at the Oracle Clusterware layer by running OUI from the Oracle Clusterware home on an existing node, using the following command:
CRS_home/oui/bin/addNode.sh -noCopy
2.
Add new nodes at the Oracle software layer by running OUI from the Oracle home as follows:
Oracle_home/oui/bin/addNode.sh -noCopy
In the -noCopy mode, OUI performs all add node operations except for the copying of software to the new nodes.
Note:
Oracle recommends that you back up your voting disk and OCR files after you complete the node addition process.
Using DBCA in Interactive Mode to Delete Database Instances from Existing Nodes
To delete an instance using DBCA in interactive mode, perform the following steps:
1.
Start DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page select Oracle Real Application Clusters Database, click Next, and DBCA displays the Operations page.
2.
On the DBCA Operations page, select Instance Management, click Next, and DBCA displays the Instance Management page.
3.
On the Instance Management page, select Delete Instance, click Next, and DBCA displays the List of Cluster Databases page.
4.
Select an Oracle RAC database from which to delete an instance. Enter a user name and password for the database user that has SYSDBA privileges. Click Next and DBCA displays the List of Cluster Database Instances page. The List of Cluster Database Instances page shows the instances that are associated with the Oracle RAC database that you selected and the status of each instance.
5.
Select an instance to delete and click Finish.
6.
If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database.
7.
Review the information about the instance deletion operation on the Summary page and click OK. Otherwise, click Cancel to cancel the instance deletion operation. If you click OK, then DBCA displays a Confirmation dialog.
8.
Click OK on the Confirmation dialog to proceed with the instance deletion operation and DBCA displays a progress dialog showing that DBCA is performing the instance deletion operation. During this operation, DBCA removes the instance and the instance's Oracle Net configuration. When DBCA completes this operation, DBCA displays a dialog asking whether you want to perform another operation.
9.
Click No and exit DBCA or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.
Using DBCA in Silent Mode to Delete Database Instances from Existing Nodes
Use DBCA to delete a database instance from a node as follows, where the variables are the same as those in the preceding add instance command:
dbca -silent -deleteInstance [-nodeList node] -gdbName gdbname -instanceName instname -sysDBAUserName sysdba -sysDBAPassword password
You only need to provide a node name if you are deleting an instance from a node other than the one on which you are running DBCA.
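For example, a minimal sketch mirroring the hypothetical values used in the add-instance example: deleting instance sales3 from node node3 of database sales.example.com:
dbca -silent -deleteInstance -nodeList node3 -gdbName sales.example.com -instanceName sales3 -sysDBAUserName sys -sysDBAPassword sys_password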
At this point, you have accomplished the following:
*
De-registered the selected instance from its associated Oracle Net Services listeners
*
Deleted the selected database instance from the instance's configured node
*
Removed the Oracle Net configuration
*
Deleted the Optimal Flexible Architecture (OFA) directory structure from the instance's configured node
Step 2: Deleting Nodes from Oracle Real Application Clusters Databases
Before beginning these procedures, ensure that your existing nodes have the $CRS_HOME and $ORACLE_HOME environment variables set correctly. Use the following procedures to delete nodes from Oracle clusters on UNIX-based systems:
Note:
You can perform some of the steps in this procedure in silent mode as described at the end of this section.
1.
If there are instances on the node that you want to delete, then perform the procedures in the section titled "Step 1: Deleting Instances from Oracle Real Application Clusters Databases" before executing these procedures. If you are deleting more than one node, then delete the instances from all the nodes that you are going to delete.
2.
If you use ASM, then perform the procedures in the following section, "Step 3: ASM Instance Clean-Up Procedures for Node Deletion".
3.
If this is the Oracle home from which the node-specific listener named LISTENER_nodename runs, then use NETCA to remove this listener. If necessary, re-create this listener in another home.
See Also:
Oracle Database Net Services Administrator's Guide for more information about NETCA
4.
For a non-shared home, on each node that you are deleting, perform the following two steps:
*
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home
CLUSTER_NODES="" -local
The runInstaller command is located in the directory Oracle_home/oui/bin. Using this command does not launch an installer GUI.
*
Run OUI from the home and deinstall this home. Make sure that you choose the home to be removed and not just the products under that home.
5.
If you are using a non-shared Oracle home, from an existing node, run the following command where node_list is a comma-delimited list of nodes that remain in the cluster:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home "CLUSTER_NODES={node_list}"
If you are using a shared Oracle home, from an existing node, run the following command where node_list is a comma-delimited list of nodes that remain in the cluster:
runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=Oracle_home
"CLUSTER_NODES={node_list}"
6.
Run the following commands to remove node-specific interface configurations, where nodename is the name of the node that you want to delete and remote_port is the ONS remote port number on that node:
racgons remove_config nodename:remote_port
oifcfg delif -node nodename
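For example, a minimal sketch assuming the node being deleted is node2 and its ONS remote port, taken from the ons.config file on that node, is 6200; both values are illustrative:
racgons remove_config node2:6200
oifcfg delif -node node2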
7.
On the node that you are deleting, run the command CRS_home/install/rootdelete.sh to disable the Oracle Clusterware applications that are on the node. Run this command only once, and use the nosharedhome argument if you are using a local file system. The default for this command is sharedhome, which prevents you from updating the permissions of local files so that they can be removed by the oracle user.
If the ocr.loc file is on a shared file system, then run the command CRS_home/install/rootdelete.sh remote sharedvar. If the ocr.loc file is not on a shared file system, then run the CRS_home/install/rootdelete.sh remote nosharedvar command.
If you are deleting more than one node from your cluster, then repeat this step on each node that you are deleting.
8.
Run CRS_home/install/rootdeletenode.sh on any remaining node in the cluster to delete the nodes from the Oracle cluster and to update the Oracle Cluster Registry (OCR). If you are deleting multiple nodes, then run the command CRS_home/install/rootdeletenode.sh node1,node1-number,node2,node2-number,... nodeN,nodeN-number where node1 through nodeN is a list of the nodes that you want to delete, and node1-number through nodeN-number represents the node number. To determine the node number of any node, run the command CRS_home/bin/olsnodes -n. To delete only one node, enter the node name and number of the node that you want to delete with the command CRS_home/install/rootdeletenode.sh node1,node1-number.
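For example, a minimal sketch that deletes two hypothetical nodes, node2 and node3, whose node numbers were obtained with olsnodes -n:
CRS_home/bin/olsnodes -n
# illustrative output:
# node1   1
# node2   2
# node3   3
CRS_home/install/rootdeletenode.sh node2,2,node3,3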
9.
For a non-shared Oracle home, on each node that you are deleting, perform the following two steps:
*
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
CLUSTER_NODES="" -local CRS=true
The runInstaller command is located in the CRS_home/oui/bin directory. Executing this command does not launch an installer GUI.
*
Run OUI from the home and deinstall this home. Make sure that you choose the home to be removed and not just the products under that home.
10.
If you are using a non-shared Oracle home, from an existing node, run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
"CLUSTER_NODES={nodelist}"
where nodelist is a comma-delimited list of nodes that remain in the cluster.
For shared Oracle home users, run the following command on an existing node from the $ORACLE_HOME/oui/bin directory, where nodes_list is a comma-delimited list of the nodes that remain in your cluster:
./runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=CRS_home
CLUSTER_NODES=nodes_list
11.
Run the following command to verify that the node is no longer a member of the cluster and to verify that the Oracle Clusterware components have been removed from this node:
cluvfy comp crs -n all [-verbose]
The response from this command should not contain any information about the node that you deleted; the deleted node should no longer have the Oracle Clusterware components on it. This verifies that you have deleted the node from the cluster.
As mentioned earlier in this procedure, you can optionally delete nodes from Oracle Real Application Clusters databases in silent mode by completing the following steps:
1.
Complete steps 1 through 3 of the procedure described at the start of this section under the heading "Step 2: Deleting Nodes from Oracle Real Application Clusters Databases".
2.
Depending on whether you have a shared or non-shared Oracle home, complete one of the following two procedures:
*
For a shared home, run the following command on each of the nodes that are to be deleted:
./runInstaller -detachHome -local ORACLE_HOME=Oracle_home
*
For a non-shared home, on each node that you are deleting, perform the following two steps:
o
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=Oracle_home
CLUSTER_NODES="" -local
The runInstaller command is located in the directory Oracle_home/oui/bin. Using this command does not launch an installer GUI.
o
Deinstall the Oracle home from the node that you are deleting by running the following command from the Oracle_home/oui/bin directory:
./runInstaller -deinstall -silent "REMOVE_HOMES={Oracle_home}"
3.
Complete steps 5 through 10 from the procedure described in "Step 2: Deleting Nodes from Oracle Real Application Clusters Databases".
4.
Depending on whether you have a shared or non-shared Oracle Clusterware home, complete one of the following two procedures:
*
For shared homes, do not perform a deinstall operation. Instead, perform a detach home operation on the node that you are deleting. To do this, run the following command from CRS_home/oui/bin:
./runInstaller -detachHome ORACLE_HOME=CRS_home
*
For a non-shared home, on each node that you are deleting, perform the following two steps:
o
Run the following command:
runInstaller -updateNodeList ORACLE_HOME=CRS_home
CLUSTER_NODES="" -local CRS=true
The runInstaller command is located in the directory CRS_home/oui/bin. Executing this command does not launch an installer GUI.
o
On each node that you are deleting, perform the following step from the CRS_home/oui/bin directory:
./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
where CRS_home is the name given to the Oracle Clusterware home you are deleting.
5.
Complete steps 10 and 11 of the procedure described at the start of this section.
Step 3: ASM Instance Clean-Up Procedures for Node Deletion
If you are using ASM, then perform the following procedure to remove the ASM instances:
1.
Stop all of the databases that use the ASM instance that is running from the Oracle home that is on the node that you are deleting.
2.
On the node that you are deleting, if this is the Oracle home from which the ASM instance runs, then remove the ASM configuration by completing the following steps. Run the command srvctl stop asm -n node_name for all of the nodes on which this Oracle home exists. Run the command srvctl remove asm -n node for all nodes on which this Oracle home exists. If there are databases on this node that use ASM, then use DBCA Disk Group Management to create an ASM instance on one of the existing Oracle homes on the node, and then restart the databases if you stopped them.
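For example, a minimal sketch assuming the node being deleted is node2 and that node2 is the only node on which this Oracle home exists:
srvctl stop asm -n node2
srvctl remove asm -n node2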
3.
If you are using a cluster file system for your ASM Oracle home, then ensure that your local node has the $ORACLE_BASE and $ORACLE_HOME environment variables set correctly. Run the following commands from a node other than the node that you are deleting, where node_number is the node number of the node that you are deleting:
rm -r $ORACLE_BASE/admin/+ASMnode_number
rm -f $ORACLE_HOME/dbs/*ASMnode_number
If you are not using a cluster file system for your ASM Oracle home, then run the rm or delete commands mentioned in the previous step on each node on which the Oracle home exists.