Removing a Node from a 10gR1 RAC Cluster (Doc ID 269320.1)



Note:  This article is only relevant for 10gR1 RAC environments.  


PURPOSE
-------

The purpose of this note is to provide the user with a document that 
can be used as a guide to remove a cluster node from an Oracle 10gR1 Real
Application Clusters environment.
 
SCOPE & APPLICATION
-------------------

This document can be used by DBAs and support analysts who need to 
either remove a cluster node or assist another in removing a cluster
node in a 10gR1 Unix Real Application Clusters environment.

REMOVING A NODE FROM A 10gR1 RAC CLUSTER
----------------------------------------

If you have to remove a node from a RAC 10gR1 database, a certain 
amount of cleanup needs to be done even if the node will no longer be 
available to the environment: the remaining nodes need to be informed 
of the change of status of the departing node.  If any steps must be 
run on the node being removed and that node is no longer available, 
those commands can be skipped.

The three most important steps that need to be followed are:

A.	Remove the instance using DBCA.
B.	Remove the node from the cluster.
C.	Reconfigure the OS and remaining hardware.

Here is a breakdown of the above steps.

A.	Remove the instance using DBCA.
--------------------------------------

1.      Verify that you have a good backup of the OCR (Oracle Configuration
        Repository) using ocrconfig -showbackup.
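
        For example, a check along these lines (run as root from the 
        CRS home; the export file name is only an illustration) lists 
        the automatic OCR backups and takes an additional manual export:

        # cd <CRS_HOME>/bin
        # ./ocrconfig -showbackup
        # ./ocrconfig -export /tmp/ocr_before_node_removal.dmp   (file name is an illustration only)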

2.	Run DBCA from one of the nodes you are going to keep.  Leave the    
        database up and also leave the departing instance up and running.

3.	Choose "Instance Management"

4.	Choose "Delete an instance"

5.      On the next screen, select the cluster database from which you
	will delete an instance.  Supply the system privilege username
        and password.

6.	On the next screen, a list of cluster database instances will 
        appear.  Highlight the instance you would like to delete then 
        click next.

7.	If you have services configured, reassign the services.  Modify 
        each service so that it can run on one of the remaining 
	instances, and set the instance that is to be deleted to "Not 
        Used" for each service.  Click Finish.
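
        After DBCA completes, a check such as the following (a sketch 
        using the 10g srvctl utility; <db_name> is a placeholder) can 
        confirm that no service still references the deleted instance:

        srvctl config service -d <db_name>
        srvctl status service -d <db_name>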

8.	If your database is in archive log mode you may encounter the 
        following errors:
        ORA-350  
        ORA-312  
        This may occur because DBCA cannot drop the current log, as 
        it needs archiving.  This issue is fixed in the 10.1.0.3 
        patchset.  Prior to that patchset you should click the 
        Ignore button and, when DBCA completes, manually archive 
        the logs for the deleted instance and drop the log group.

        SQL>  alter system archive log all;
        SQL>  alter database drop logfile group 2;  

9.	Verify that the dropped instance's redo thread has been removed by
	querying v$log.  If for any reason the redo thread is not disabled 
        then disable the thread.  

        SQL> alter database disable thread 2;
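
        To verify, queries along these lines against the standard 
        v$log and v$thread views show whether any redo log groups or 
        an enabled thread remain for the dropped instance:

        SQL> select thread#, group#, status from v$log order by thread#;
        SQL> select thread#, status, enabled from v$thread;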

10.     Verify that the instance was removed from the OCR (Oracle 
        Configuration Repository) with the following commands:

   	srvctl config database -d <database_name>
	cd <CRS_HOME>/bin
	./crs_stat
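
        As an illustration, 10g CRS instance resources are typically 
        named ora.<db_name>.<instance_name>.inst, so a filtered listing 
        such as the following (a sketch; <instance_name> is a 
        placeholder) should return nothing for the deleted instance:

        cd <CRS_HOME>/bin
        ./crs_stat | grep -i <instance_name>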

11.	If this node had an ASM instance and the node will no longer be a 
	part of the cluster you will now need to remove the ASM instance with:

        srvctl stop asm -n <node_name>
        srvctl remove asm -n <node_name>

	Verify that asm is removed with:

	srvctl config asm -n <node_name>


B.	Remove the Node from the Cluster
----------------------------------------

Once the instance has been deleted, removing the node from the cluster 
is a manual process.  This is accomplished by running scripts on the 
node being deleted to remove the CRS install, as well as scripts on 
the remaining nodes to update the node list.  The following steps 
assume that the node to be removed is still functioning.

1.	To delete node number 2, first stop the nodeapps on the node 
	you are removing.  Assuming that you have already removed the 
	ASM instance, run the following as the root user on a 
	remaining node:

        # srvctl stop nodeapps -n <node_name>

2.      Run netca.  Choose "Cluster Configuration". 

3.	Only select the node you are removing and click next.

4.	Choose "Listener Configuration" and click next.

5. 	Choose "Delete" and delete any listeners configured on the node 
	you are removing.
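
        Afterwards, a check along these lines (a sketch; 10g listener 
        resources typically follow the ora.<node_name>.LISTENER_<NODE_NAME>.lsnr 
        naming) can confirm that only listeners on the remaining nodes 
        are still registered:

        <CRS_HOME>/bin/crs_stat | grep -i lsnr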

6.	Run <CRS_HOME>/bin/crs_stat.  Make sure that all database 
	resources are running on nodes that are going to be kept.  For
	example:

	NAME=ora.<db_name>.db
	TYPE=application
	TARGET=ONLINE
	STATE=ONLINE on <node_name>

	Ensure that this resource is not running on a node that will be 
	removed.  Use <CRS_HOME>/bin/crs_relocate to perform this.
	Example:

	crs_relocate ora.<db_name>.db

7. 	As the root user, remove the nodeapps on the node you are removing.

        # srvctl remove nodeapps -n <node_name>

8.	Next, as the Oracle user, run the installer with the 
        updateNodeList option on any remaining node in the cluster.

        a.  DISPLAY=ipaddress:0.0; export DISPLAY
        This should be set even though the GUI does not run.
        
        b.  $ORACLE_HOME/oui/bin/runInstaller -updateNodeList 
        ORACLE_HOME=<ORACLE_HOME> CLUSTER_NODES=<node1>,<node2>,... 
        (list all of the remaining nodes)

	With this command we are updating the Oracle inventory to list 
	the nodes that are still part of the cluster for the RDBMS 
	$ORACLE_HOME.  If there is no RDBMS $ORACLE_HOME, this step can 
	be skipped.
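
        For illustration only, with hypothetical remaining nodes 
        racnode1 and racnode2 and a hypothetical RDBMS home of 
        /u01/app/oracle/product/10.1.0/db_1, the command would look like:

        $ORACLE_HOME/oui/bin/runInstaller -updateNodeList \
        ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1 \
        CLUSTER_NODES=racnode1,racnode2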

9.	Change to the root user to finish the removal on the node that
	is being removed.  This command will stop the CRS stack 
        and delete the ocr.loc file on the node to be removed.  The 
        nosharedvar option assumes the ocr.loc file is not on a shared 
        file system.  If it does exist on a shared file system then 
        specify sharedvar instead.  The nosharedhome option specifies 
	that the CRS_HOME is on a local filesystem.  If the CRS_HOME is
	on a shared file system, specify sharedhome instead.
	Run the rootdelete.sh script from <CRS_HOME>/install.  Example:

        # cd <CRS_HOME>/install
        # ./rootdelete.sh local nosharedvar nosharedhome

10.	On a node that will be kept, the root user should run the 
        rootdeletenode.sh script from the <CRS_HOME>/install directory.  
        When running this script from the CRS home specify both the node
        name and the node number.  The node name and the node number are
	visible in olsnodes -n.  Also do NOT put a space after the 
        comma between the two. 
 
	# olsnodes -n
	<node1>        1
	<node2>        2

        # cd <CRS_HOME>/install
	# ./rootdeletenode.sh <node2>,2

11.	Confirm success by running OLSNODES.

        <CRS_HOME>/bin>: ./olsnodes -n
        <node1>        1

12.	Now switch back to the oracle user account and run the same 
        runInstaller command as before.  Run it this time from the 
        CRS_HOME instead of the ORACLE_HOME.  Specify all of the 
        remaining nodes.

        a.  DISPLAY=ipaddress:0.0; export DISPLAY

        b.  <CRS_HOME>/oui/bin/runInstaller -updateNodeList 
        ORACLE_HOME=<CRS_HOME> CLUSTER_NODES=<node1>,<node2>,... CRS=TRUE

	With this command we are updating the Oracle inventory to list 
	the nodes that are still part of the cluster for the CRS_HOME.  
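
        Again for illustration only, with the same hypothetical nodes 
        and a hypothetical CRS home of /u01/app/oracle/product/10.1.0/crs, 
        this would look like:

        /u01/app/oracle/product/10.1.0/crs/oui/bin/runInstaller -updateNodeList \
        ORACLE_HOME=/u01/app/oracle/product/10.1.0/crs \
        CLUSTER_NODES=racnode1,racnode2 CRS=TRUE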

13.	Once the node updates are done you will need to manually delete
        the $ORACLE_HOME and $CRS_HOME from the node to be expunged, 
        unless, of course, either of these is on a shared file system 
        that is still being used.

        a.  $ORACLE_HOME>: rm -rf *
        b.  $CRS_HOME> : rm -rf *   (as root)

14.	Next, as root, from the deleted node, verify that all init scripts
	and soft links are removed:
        
Sun:

	rm /etc/init.d/init.cssd 
	rm /etc/init.d/init.crs 
	rm /etc/init.d/init.crsd 
	rm /etc/init.d/init.evmd 
	rm /etc/rc3.d/K96init.crs
	rm /etc/rc3.d/S96init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

Linux:

	rm -f /etc/init.d/init.cssd 
	rm -f /etc/init.d/init.crs 
	rm -f /etc/init.d/init.crsd 
	rm -f /etc/init.d/init.evmd 
	rm -f /etc/rc2.d/K96init.crs
	rm -f /etc/rc2.d/S96init.crs
	rm -f /etc/rc3.d/K96init.crs
	rm -f /etc/rc3.d/S96init.crs
	rm -f /etc/rc5.d/K96init.crs
	rm -f /etc/rc5.d/S96init.crs
        rm -Rf /etc/oracle/scls_scr

HP-UX:

	rm /sbin/init.d/init.cssd 
	rm /sbin/init.d/init.crs 
	rm /sbin/init.d/init.crsd 
	rm /sbin/init.d/init.evmd 
	rm /sbin/rc3.d/K960init.crs
	rm /sbin/rc3.d/S960init.crs
	rm /sbin/rc2.d/K960init.crs
	rm /sbin/rc2.d/K001init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

HP Tru64:

	rm /sbin/init.d/init.cssd 
	rm /sbin/init.d/init.crs 
	rm /sbin/init.d/init.crsd 
	rm /sbin/init.d/init.evmd 
	rm /sbin/rc3.d/K96init.crs
	rm /sbin/rc3.d/S96init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

IBM AIX:

	rm /etc/init.cssd 
	rm /etc/init.crs 
	rm /etc/init.crsd 
	rm /etc/init.evmd 
	rm /etc/rc.d/rc2.d/K96init.crs
	rm /etc/rc.d/rc2.d/S96init.crs
        rm -Rf /etc/oracle/scls_scr
        rm -Rf /etc/oracle/oprocd

15.	You can also remove the /etc/oracle directory, the 
        /etc/oratab file, and the Oracle inventory (if desired).
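
        On the deleted node, as root, this amounts to something like 
        the following sketch; the inventory location is an assumption, 
        so check oraInst.loc (/etc/oraInst.loc or 
        /var/opt/oracle/oraInst.loc, depending on platform) for the 
        actual inventory_loc before removing it:

        # rm -rf /etc/oracle
        # rm -f /etc/oratab
        # rm -rf /u01/app/oracle/oraInventory   (assumed inventory location)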

16.     To remove an ADDITIONAL ORACLE_HOME, ASM_HOME, or EM_HOME from the 
        inventory on all remaining nodes, run the installer to update the 
        node list.  Example (if removing node 2):

        runInstaller -updateNodeList -local \
        ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1,node3,node4
        (If you are using private home installations, please ignore the "-local" flag.)



RELATED DOCUMENTS
-----------------

Oracle® Real Application Clusters Administrator's Guide 10g Release 1 (10.1),
Part Number B10765-02, Chapter 5
Oracle Series / Oracle Database 10g High Availability, Chapter 5, pp. 28-34
Note 239998.1
Oracle Clusterware and RAC Administration and Deployment Guide, Chapters 10 and 11
 

Document Details

 
Type:               BULLETIN
Status:             PUBLISHED
Last Major Update:  12/6/2012
Last Update:        12/6/2012

Related Products

Oracle Database - Enterprise Edition

