Removing a node from an Oracle 10g RAC cluster and adding it back (IBM AIX)

watershed, published 2010-03-04

1. REMOVING A NODE FROM A 10g RAC CLUSTER
Assume you want to remove node 2 from the cluster
1> srvctl stop nodeapps -n node2_hostname (run on the node being deleted)
2> Run NETCA to delete the listener on the node being deleted
3> srvctl remove nodeapps -n node2_hostname (run on the node being deleted)
4> On a remaining node: run the installer with the
-updateNodeList option; add the LOCAL_NODE=host_name option only if the command fails without it
export DISPLAY=XXX.XXX.XXX.XXX:X.0
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1_hostname,node3_hostname (all remaining node names, comma-separated)
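For example, if the remaining nodes were node1 and node3 (hypothetical host names), the command would look like this:
export DISPLAY=XXX.XXX.XXX.XXX:X.0
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1,node3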
5> On the node being deleted: cd $CRS_HOME/install
#./rootdelete.sh local nosharedvar nosharedhome
6> On a remaining node:
verify: #olsnodes -n -p
#cd $CRS_HOME/install
#./rootdeletenode.sh node2_hostname,2 (the node name and its node number as shown by olsnodes)
verify again: #olsnodes -n -p
7> On a remaining node:
export DISPLAY=XXX.XXX.XXX.XXX:X.0
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME CLUSTER_NODES=node1_hostname,node3_hostname CRS=TRUE
8> On the node being deleted, remove the Oracle software:
as root: cd $ORACLE_BASE
# rm -rf *
9> On the node being deleted, remove all init scripts and symbolic links:
as root:
Sun:
rm /etc/init.d/init.cssd
rm /etc/init.d/init.crs
rm /etc/init.d/init.crsd
rm /etc/init.d/init.evmd
rm /etc/rc3.d/K96init.crs
rm /etc/rc3.d/S96init.crs
rm -Rf /var/opt/oracle/scls_scr
rm -Rf /var/opt/oracle/oprocd
Linux:
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
HP-UX:
rm /sbin/init.d/init.cssd
rm /sbin/init.d/init.crs
rm /sbin/init.d/init.crsd
rm /sbin/init.d/init.evmd
rm /sbin/rc3.d/K960init.crs
rm /sbin/rc3.d/S960init.crs
rm /sbin/rc2.d/K960init.crs
rm /sbin/rc2.d/K001init.crs
rm -Rf /var/opt/oracle/scls_scr
rm -Rf /var/opt/oracle/oprocd
HP Tru64:
rm /sbin/init.d/init.cssd
rm /sbin/init.d/init.crs
rm /sbin/init.d/init.crsd
rm /sbin/init.d/init.evmd
rm /sbin/rc3.d/K96init.crs
rm /sbin/rc3.d/S96init.crs
rm -Rf /var/opt/oracle/scls_scr
rm -Rf /var/opt/oracle/oprocd
IBM AIX:
rm /etc/init.cssd
rm /etc/init.crs
rm /etc/init.crsd
rm /etc/init.evmd
rm /etc/rc.d/rc2.d/K96init.crs
rm /etc/rc.d/rc2.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -Rf /etc/oracle/oprocd
10> On the node being deleted: remove the /etc/oracle directory and the /etc/oratab file


2. ADDING A NODE BACK TO A 10g RAC CLUSTER
Assume you want to add node 2 back to the cluster
1> Configure the OS and hardware for the new node
Verify that the oracle user ID and group IDs match those on the existing nodes.
Set the OS limits: vi /etc/security/limits
root:
data = -1
memory = -1
stack = -1
fsize = -1
fsize_hard = -1
core_hard = -1
cpu_hard = -1
data_hard = -1
stack_hard = -1
rss=-1
rss_hard=-1
nofiles=20000
nofiles_hard=20000
oracle:
data = -1
memory = -1
stack = -1
fsize = -1
fsize_hard = -1
core_hard = -1
cpu_hard = -1
data_hard = -1
stack_hard = -1
rss=-1
rss_hard=-1
nofiles=20000
nofiles_hard=20000
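After editing the file, the new limits can be spot-checked (lsuser reads the same attributes the stanzas above set; log in again as oracle before checking ulimit):
as root: lsuser -a fsize data stack rss nofiles oracle
as oracle: ulimit -a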

smit chgsys
Verify that the value shown for Maximum number of PROCESSES allowed per user is greater than or equal to 4096.
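The same setting can be inspected and changed from the command line instead of smit; maxuproc is the attribute behind that smit field:
lsattr -E -l sys0 -a maxuproc
chdev -l sys0 -a maxuproc=4096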

Modify the UDP/TCP network parameters with the no command (-p makes a change persistent across reboots; ipqmaxlen only takes effect at the next reboot, hence -r):
no -p -o rfc1323=1
no -r -o ipqmaxlen=512
no -p -o sb_max=4048000
no -p -o udp_sendspace=2048000
no -p -o udp_recvspace=2048000
no -p -o tcp_sendspace=65536
no -p -o tcp_recvspace=65536
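To confirm the values now in effect:
no -a | egrep 'rfc1323|ipqmaxlen|sb_max|udp_sendspace|udp_recvspace|tcp_sendspace|tcp_recvspace'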

verify: /etc/hosts has correct entries for all nodes
verify: the root and oracle users can rsh and rlogin to every other node without a password prompt
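A quick loop (hypothetical host names node1 and node2) to confirm passwordless rsh as oracle; each line should print the remote date with no password prompt:
for n in node1 node2; do rsh $n date; done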

Change the attributes of the ASM, voting, and OCR disks:
As root:
cd /dev
chdev -l hdiskn -a reserve_policy=no_reserve (repeat for each shared disk; this releases the SCSI reservation so the disk can be shared by the cluster)
chown oracle:dba /dev/rhdisk*
chmod 660 /dev/rhdisk*
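A sketch for applying and verifying this on several shared disks at once (adjust the hdisk names to match your environment):
for d in hdisk2 hdisk3 hdisk4; do
  chdev -l $d -a reserve_policy=no_reserve
  lsattr -E -l $d -a reserve_policy
done
ls -l /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4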

as root:
run rootpre.sh (the script is on Disk1 of the Oracle installation media)

2> Add the node to the cluster
On an existing node, as the oracle user:
export DISPLAY=XXX.XXX.XXX.XXX:X.0
cd $CRS_HOME/oui/bin
./addNode.sh
You will be prompted to run two scripts, rootaddnode.sh and root.sh, as root on the nodes the installer names.
3> On the new node:
$CRS_HOME/bin/racgons add_config new_node_name:4948
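To check the ONS ports currently configured, look at ons.config under the CRS home (location assumed from a standard 10g install):
cat $CRS_HOME/opmn/conf/ons.config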
4> Add the Oracle Database software (with the RAC option) to the new node
On an existing node:
export DISPLAY=XXX.XXX.XXX.XXX:X.0
cd $ORACLE_HOME/oui/bin
./addNode.sh
You will then be prompted to run root.sh as the root user:
su - root
cd $ORACLE_HOME
./root.sh

5> cd to the $ORACLE_HOME/bin directory and run the vipca tool with the
new node list. Example:
su - root
DISPLAY=ipaddress:0.0; export DISPLAY
cd $ORACLE_HOME/bin
./vipca -nodelist node1_hostname,node2_hostname
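After vipca completes, the VIP configuration for the new node can be verified with srvctl:
srvctl config nodeapps -n node2_hostname -a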

6> Reconfigure listeners for the new node
Run NETCA on the NEW node:
DISPLAY=ipaddress:0.0; export DISPLAY
netca
Select: Cluster Configuration => Select all nodes => Listener configuration => Reconfigure
You may get an error message saying, "The information provided for this
listener is currently in use by another listener...". Click Yes to
continue anyway.
7> On the new node: srvctl start nodeapps -n new_node_name
8> Add the instance via DBCA, or add it manually (step 9):
DISPLAY=ipaddress:0.0; export DISPLAY
$dbca
Select: Oracle Real Application Clusters => Instance Management => Add an Instance => ...
To verify success, log into one of the instances and query
gv$instance; you should now see all nodes
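For example, from sqlplus (instance and host names will vary with your environment):
SQL> select inst_id, instance_name, host_name, status from gv$instance;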
9> Adding the instance manually:
Register the ASM instance (if you use ASM) with the cluster:
srvctl add asm -n new_node_name -i +ASM2 -o $ORACLE_HOME

Register the database instance with the cluster (skip srvctl add database if the database is still registered in the OCR from before the node was removed):
srvctl add database -d db_name -o $ORACLE_HOME
srvctl add instance -d db_name -i instance_2 -n new_node_name

Start nodeapps, ASM, and the instance, then check cluster status:
#srvctl start nodeapps -n new_node_name
#srvctl start asm -n new_node_name
#srvctl start instance -d db_name -i instance_2
#crs_stat -t
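As a concrete illustration, with a hypothetical database orcl, new instance orcl2, and new node node2, the whole manual sequence would be:
srvctl add asm -n node2 -i +ASM2 -o $ORACLE_HOME
srvctl add database -d orcl -o $ORACLE_HOME (only if orcl is not already registered)
srvctl add instance -d orcl -i orcl2 -n node2
srvctl start nodeapps -n node2
srvctl start asm -n node2
srvctl start instance -d orcl -i orcl2
crs_stat -t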


Source: ITPUB blog, http://blog.itpub.net/75730/viewspace-1031586/
