Steps for Adding a Node to 11gR2 RAC
1 Add node3. The hosts file on all three nodes is configured as follows:
127.0.0.1 localhost
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#node1
192.168.8.221 rac1 rac1.oracle.com
192.168.8.222 rac1-vip
172.168.1.18 rac1-priv
#node2
192.168.8.223 rac2 rac2.oracle.com
192.168.8.224 rac2-vip
172.168.1.19 rac2-priv
#node3
192.168.8.227 rac3 rac3.oracle.com
192.168.8.228 rac3-vip
172.168.1.20 rac3-priv
#scan-ip
192.168.8.225 rac-cluster rac-cluster-scan
2 Disable the firewall
service iptables stop
chkconfig iptables off
3 Disable SELinux
vim /etc/selinux/config
SELINUX=disabled
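Note that SELINUX=disabled only takes effect after a reboot. To relax SELinux immediately without rebooting, you can also switch the current session to permissive mode:
setenforce 0
getenforce   # should now report Permissive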
4 Create users and groups
--Create the groups:
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper
--Create the users:
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
--Set the passwords:
passwd grid
passwd oracle
5 Configure the users' environment variables
--grid user:
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
umask 022
--oracle user:
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac3
export ORACLE_SID=orcl3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_UNQNAME=orcl
export TNS_ADMIN=$ORACLE_HOME/network/admin
#export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export LANG=en_US
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
umask 022
6 Create the required directories
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
--Note: chown /u01 to grid first, then hand /u01/app/oracle to oracle, so the blanket chown does not overwrite it
chown -R grid:oinstall /u01
chmod -R 775 /u01
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0/grid
chown -R oracle:oinstall /u01/app/oracle
7 Add the following entries to limits.conf
vim /etc/security/limits.conf
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
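The Oracle installation guides also expect these limits to be enforced at login via pam_limits; a typical addition (OS-release dependent, so verify against your platform's documentation) to /etc/pam.d/login is:
session required pam_limits.so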
8 Modify the kernel parameters
--Note: comment out the existing shmall and shmmax entries first
vim /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
--Apply the sysctl changes
sysctl -p
9 Stop the NTP service
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak
10 Install the required dependency packages
yum install gcc compat-libstdc++-33 elfutils-libelf-devel glibc-devel glibc-headers gcc-c++ libaio-devel libstdc++-devel pdksh compat-libcap1-*
11 Configure shared storage
for i in b c d e f g h;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
--Activate the new rules
/sbin/start_udev
[root@rac3 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Jun 14 05:42 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Jun 14 05:42 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Jun 14 05:42 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Jun 14 05:42 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Jun 14 05:42 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Jun 14 05:42 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Jun 14 05:42 /dev/asm-diskh
All of the steps above must be configured exactly as they are on the two existing nodes.
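One quick way to confirm the new node really matches the existing ones is cluvfy's peer comparison, using rac1 as the reference node. This is a sketch; adjust the inventory and OSDBA group names to your environment:
[grid@rac1 ~]$ cluvfy comp peer -refnode rac1 -n rac3 -orainv oinstall -osdba dba -verbose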
12 Configure SSH user equivalence for the oracle and grid users
Run the following on node 1:
[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runSSHSetup.sh -user oracle -hosts 'rac1 rac2 rac3' -advanced -exverify
[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/runSSHSetup.sh -user grid -hosts 'rac1 rac2 rac3' -advanced -exverify
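Once the script finishes, a simple sanity check is to make sure each user can reach every node without a password prompt:
[oracle@rac1 ~]$ for h in rac1 rac2 rac3; do ssh $h date; done
[grid@rac1 ~]$ for h in rac1 rac2 rac3; do ssh $h date; done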
13 Verify user equivalence
[grid@rac1 ~]$ cluvfy comp nodecon -n rac1,rac2,rac3
Verifying node connectivity
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.8.0" with node(s) rac2,rac1,rac3
TCP connectivity check passed for subnet "192.168.8.0"
Node connectivity passed for subnet "172.168.0.0" with node(s) rac2,rac1,rac3
TCP connectivity check passed for subnet "172.168.0.0"
Node connectivity passed for subnet "169.254.0.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "169.254.0.0"
Interfaces found on subnet "192.168.8.0" that are likely candidates for VIP are:
rac2 eth0:192.168.8.223 eth0:192.168.8.224
rac1 eth0:192.168.8.221 eth0:192.168.8.222 eth0:192.168.8.225
rac3 eth0:192.168.8.227
Interfaces found on subnet "172.168.0.0" that are likely candidates for VIP are:
rac2 eth1:172.168.1.19
rac1 eth1:172.168.1.18
rac3 eth1:172.168.1.20
WARNING:
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.8.0".
Subnet mask consistency check passed for subnet "172.168.0.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.
Node connectivity check passed
Verification of node connectivity was successful.
14 Back up the OCR
[root@rac1 tmp]# ocrconfig -manualbackup
rac1 2016/06/14 05:47:56 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160614_054756.ocr
[root@rac1 tmp]# ocrconfig -showbackup manual
rac1 2016/06/14 05:47:56 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160614_054756.ocr
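If the node addition later fails badly, this manual backup can be used to roll back: with the clusterware stopped on all nodes, restore it as root, for example:
[root@rac1 tmp]# ocrconfig -restore /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160614_054756.ocr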
15 Install Clusterware on the new node
[grid@rac1 ~]$ cluvfy stage -post hwos -n rac3
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
ERROR: /* caused by a known bug; the network and user equivalence were both checked and are fine, so this error is ignored */
PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed
TCP connectivity check failed for subnet "172.168.0.0"
Node connectivity check failed
Checking multicast communication...
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
Disk Sharing Nodes (1 in count)
------------------------------------ ------------------------
/dev/sda rac3
Disk Sharing Nodes (1 in count)
------------------------------------ ------------------------
/dev/sdb rac3
/dev/sdc rac3
/dev/sdd rac3
/dev/sde rac3
/dev/sdf rac3
/dev/sdg rac3
/dev/sdh rac3
Shared storage check was successful on nodes "rac3"
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@rac1 ~]$ cluvfy stage -pre crsinst -n rac3
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.8.221" and "rac3 : 172.168.1.20" failed
TCP connectivity check failed for subnet "172.168.0.0"
Node connectivity check failed
Checking multicast communication...
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac3:/u01/app/11.2.0/grid,rac3:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh" /* pdksh is not installed on node 3; this package is optional */
Check failed on nodes:
rac3
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -fixup -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
------------------------------------ ------------------------
rac3 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac3 passed
Result: User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"
The Oracle Clusterware is healthy on node "rac2"
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Result: Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rac1 passed
rac2 passed
rac3 passed
Verification of the hosts config file successful
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.8.221 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:A7:60:61 1500
eth0 192.168.8.222 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:A7:60:61 1500
eth0 192.168.8.225 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:A7:60:61 1500
eth1 172.168.1.18 172.168.0.0 0.0.0.0 192.168.8.1 08:00:27:4A:6A:15 1500
eth1 169.254.93.171 169.254.0.0 0.0.0.0 192.168.8.1 08:00:27:4A:6A:15 1500
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.8.223 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:41:AC:86 1500
eth0 192.168.8.224 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:41:AC:86 1500
eth1 172.168.1.19 172.168.0.0 0.0.0.0 192.168.8.1 08:00:27:E0:B4:FA 1500
eth1 169.254.205.237 169.254.0.0 0.0.0.0 192.168.8.1 08:00:27:E0:B4:FA 1500
Interface information for node "rac3"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.8.227 192.168.8.0 0.0.0.0 192.168.8.1 08:00:27:39:2C:07 1500
eth1 172.168.1.20 172.168.0.0 0.0.0.0 192.168.8.1 08:00:27:31:D4:B0 1500
Check: Node connectivity for interface "eth0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[192.168.8.221] rac1[192.168.8.222] yes
rac1[192.168.8.221] rac1[192.168.8.225] yes
rac1[192.168.8.221] rac2[192.168.8.223] yes
rac1[192.168.8.221] rac2[192.168.8.224] yes
rac1[192.168.8.221] rac3[192.168.8.227] yes
rac1[192.168.8.222] rac1[192.168.8.225] yes
rac1[192.168.8.222] rac2[192.168.8.223] yes
rac1[192.168.8.222] rac2[192.168.8.224] yes
rac1[192.168.8.222] rac3[192.168.8.227] yes
rac1[192.168.8.225] rac2[192.168.8.223] yes
rac1[192.168.8.225] rac2[192.168.8.224] yes
rac1[192.168.8.225] rac3[192.168.8.227] yes
rac2[192.168.8.223] rac2[192.168.8.224] yes
rac2[192.168.8.223] rac3[192.168.8.227] yes
rac2[192.168.8.224] rac3[192.168.8.227] yes
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "192.168.8.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:192.168.8.221 rac1:192.168.8.222 passed
rac1:192.168.8.221 rac1:192.168.8.225 passed
rac1:192.168.8.221 rac2:192.168.8.223 passed
rac1:192.168.8.221 rac2:192.168.8.224 passed
rac1:192.168.8.221 rac3:192.168.8.227 passed
Result: TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity for interface "eth1"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1[172.168.1.18] rac2[172.168.1.19] yes
rac1[172.168.1.18] rac3[172.168.1.20] yes
rac2[172.168.1.19] rac3[172.168.1.20] yes
Result: Node connectivity passed for interface "eth1"
Check: TCP connectivity of subnet "172.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:172.168.1.18 rac2:172.168.1.19 passed
rac1:172.168.1.18 rac3:172.168.1.20 passed
Result: TCP connectivity check passed for subnet "172.168.0.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.8.0".
Subnet mask consistency check passed for subnet "172.168.0.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 1.8334GB (1922488.0KB) 1.5GB (1572864.0KB) passed
rac3 1.8334GB (1922488.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 611.7266MB (626408.0KB) 50MB (51200.0KB) passed
rac3 1.7658GB (1851556.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 4GB (4194296.0KB) 2.7501GB (2883732.0KB) passed
rac3 4GB (4194296.0KB) 2.7501GB (2883732.0KB) passed
Result: Swap space check passed
Check: Free disk space for "rac1:/u01/app/11.2.0/grid,rac1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/u01/app/11.2.0/grid rac1 / 90.5684GB 7.5GB passed
/tmp rac1 / 90.5684GB 7.5GB passed
Result: Free disk space check passed for "rac1:/u01/app/11.2.0/grid,rac1:/tmp"
Check: Free disk space for "rac3:/u01/app/11.2.0/grid,rac3:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/u01/app/11.2.0/grid rac3 / 111.8486GB 7.5GB passed
/tmp rac3 / 111.8486GB 7.5GB passed
Result: Free disk space check passed for "rac3:/u01/app/11.2.0/grid,rac3:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rac1 passed exists(1100)
rac3 passed exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rac1 3 3,5 passed
rac3 3 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac1 hard 65536 65536 passed
rac3 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac1 soft 1024 1024 passed
rac3 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac1 hard 16384 16384 passed
rac3 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac1 soft 2047 2047 passed
rac3 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 x86_64 x86_64 passed
rac3 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 2.6.32-431.el6.x86_64 2.6.9 passed
rac3 2.6.32-431.el6.x86_64 2.6.9 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 250 250 250 passed
rac3 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 32000 32000 32000 passed
rac3 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 100 100 100 passed
rac3 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 128 128 128 passed
rac3 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 4398046511104 4398046511104 984313856 passed
rac3 4398046511104 4398046511104 984313856 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 4096 4096 4096 passed
rac3 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1073741824 1073741824 2097152 passed
rac3 1073741824 1073741824 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 6815744 6815744 6815744 passed
rac3 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
rac3 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 262144 262144 262144 passed
rac3 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 4194304 4194304 4194304 passed
rac3 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 262144 262144 262144 passed
rac3 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1048576 1048576 1048576 passed
rac3 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac1 1048576 1048576 1048576 passed
rac3 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 make-3.81-20.el6 make-3.80 passed
rac3 make-3.81-20.el6 make-3.80 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 binutils-2.20.51.0.2-5.36.el6 binutils-2.15.92.0.2 passed
rac3 binutils-2.20.51.0.2-5.36.el6 binutils-2.15.92.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 gcc(x86_64)-4.4.7-17.el6 gcc(x86_64)-3.4.6 passed
rac3 gcc(x86_64)-4.4.7-17.el6 gcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.105 passed
rac3 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 glibc(x86_64)-2.12-1.192.el6 glibc(x86_64)-2.3.4-2.41 passed
rac3 glibc(x86_64)-2.12-1.192.el6 glibc(x86_64)-2.3.4-2.41 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
rac3 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 elfutils-libelf(x86_64)-0.164-2.el6 elfutils-libelf(x86_64)-0.97 passed
rac3 elfutils-libelf(x86_64)-0.164-2.el6 elfutils-libelf(x86_64)-0.97 passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 elfutils-libelf-devel-0.164-2.el6 elfutils-libelf-devel-0.97 passed
rac3 elfutils-libelf-devel-0.164-2.el6 elfutils-libelf-devel-0.97 passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 glibc-common-2.12-1.192.el6 glibc-common-2.3.4 passed
rac3 glibc-common-2.12-1.192.el6 glibc-common-2.3.4 passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 glibc-devel(x86_64)-2.12-1.192.el6 glibc-devel(x86_64)-2.3.4 passed
rac3 glibc-devel(x86_64)-2.12-1.192.el6 glibc-devel(x86_64)-2.3.4 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 glibc-headers-2.12-1.192.el6 glibc-headers-2.3.4 passed
rac3 glibc-headers-2.12-1.192.el6 glibc-headers-2.3.4 passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 gcc-c++(x86_64)-4.4.7-17.el6 gcc-c++(x86_64)-3.4.6 passed
rac3 gcc-c++(x86_64)-4.4.7-17.el6 gcc-c++(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.105 passed
rac3 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 libgcc(x86_64)-4.4.7-17.el6 libgcc(x86_64)-3.4.6 passed
rac3 libgcc(x86_64)-4.4.7-17.el6 libgcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 libstdc++(x86_64)-4.4.7-17.el6 libstdc++(x86_64)-3.4.6 passed
rac3 libstdc++(x86_64)-4.4.7-17.el6 libstdc++(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 libstdc++-devel(x86_64)-4.4.7-17.el6 libstdc++-devel(x86_64)-3.4.6 passed
rac3 libstdc++-devel(x86_64)-4.4.7-17.el6 libstdc++-devel(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 sysstat-9.0.4-22.el6 sysstat-5.0.5 passed
rac3 sysstat-9.0.4-22.el6 sysstat-5.0.5 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 pdksh-5.2.14-30 pdksh-5.2.14 passed
rac3 missing pdksh-5.2.14 failed
Result: Package existence check failed for "pdksh"
Check: Package existence for "expat(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac1 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed
rac3 expat(x86_64)-2.0.1-11.el6_2 expat(x86_64)-1.95.7 passed
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rac1 passed
rac3 passed
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rac1 passed does not exist
rac3 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rac1 passed
rac3 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was unsuccessful on all the nodes.
2.3 Run addNode.sh as the GI user
Before addNode.sh actually adds the node, it calls the cluvfy utility to verify that the new node meets the prerequisites, and it refuses to continue if the checks fail. Because DNS is not configured in this environment, running addNode.sh directly is guaranteed to fail those checks. So before running addNode.sh, set an environment variable that skips the pre-add node checks. This variable was found inside the addNode.sh script itself:
export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@rac1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}" > /u01/app/grid/add_node.log 2>&1
[root@rac1 ~]# tail -f /u01/app/grid/add_node.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3646 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/11.2.0/grid
New Nodes
Space Requirements
New Nodes
rac3
/: Required 4.45GB : Available 104.16GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Tuesday, June 14, 2016 6:16:50 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Tuesday, June 14, 2016 6:16:53 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Tuesday, June 14, 2016 6:27:28 AM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac3
/u01/app/11.2.0/grid/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac3 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={rac3}" > /u01/app/oracle/add_node.log 2>&1
[root@rac1 ~]# tail -f /u01/app/oracle/add_node.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1916 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.
....
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/oracle/product/11.2.0/db_1
New Nodes
Space Requirements
New Nodes
rac3
/: Required 4.26GB : Available 101.88GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.8.1.0
Oracle ODBC Driverfor Instant Client 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
SSL Required Support Files for InstantClient 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.4.0
Oracle Database Vault J2EE Application 11.2.0.4.0
Oracle Label Security 11.2.0.4.0
Oracle Data Mining RDBMS Files 11.2.0.4.0
Oracle OLAP RDBMS Files 11.2.0.4.0
Oracle OLAP API 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle Database Vault option 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.4.0
Oracle SQL Developer 11.2.0.4.0
Oracle Application Express 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
SQLJ Runtime 11.2.0.4.0
Database Workspace Manager 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0
Enterprise Manager Repository Core Files 10.2.0.4.5
Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0
Enterprise Manager Grid Control Core Files 10.2.0.4.5
Enterprise Manager Common Core Files 10.2.0.4.5
Enterprise Manager Agent Core Files 10.2.0.4.5
RDBMS Required Support Files 11.2.0.4.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.5
Oracle 11g Warehouse Builder Required Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Parser Generator Required Support Files 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Multimedia Annotator 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Sample Schema Data 11.2.0.4.0
Oracle Starter Database 11.2.0.4.0
Oracle Message Gateway Common Files 11.2.0.4.0
Oracle XML Query 11.2.0.4.0
XML Parser for Oracle JVM 11.2.0.4.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.4.0
Enterprise Manager Common Files 10.2.0.4.5
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.4.0
Oracle Containers for Java 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.4.0
Secure Socket Layer 11.2.0.4.0
Oracle Universal Connection Pool 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
Precompiler Common Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
PL/SQL 11.2.0.4.0
Generic Connectivity Common Files 11.2.0.4.0
Oracle Database Gateway for ODBC 11.2.0.4.0
Oracle Programmer 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Enterprise Manager Agent 10.2.0.4.5
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Call Interface (OCI) 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Oracle Net 11.2.0.4.0
Oracle XML Development Kit 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Database Configuration and Upgrade Assistants 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Oracle Enterprise Manager Console DB 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Text 11.2.0.4.0
Oracle Net Services 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
Oracle OLAP 11.2.0.4.0
Oracle Spatial 11.2.0.4.0
Oracle Partitioning 11.2.0.4.0
Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Thursday, June 9, 2016 8:58:09 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, June 9, 2016 8:58:17 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Thursday, June 9, 2016 9:12:28 AM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac3
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac3 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac3 ~]# ps -ef|grep ora
root 2410 1 0 08:43 ? 00:00:04 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid 5593 1 0 08:51 ? 00:00:00 oracle+ASM3 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 5612 1 0 08:51 ? 00:00:00 oracle+ASM3_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 5637 1 0 08:51 ? 00:00:00 oracle+ASM3_asmb_+asm3 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 5742 1 0 08:51 ? 00:00:02 /u01/app/11.2.0/grid/bin/oraagent.bin
root 5746 1 0 08:51 ? 00:00:02 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid 5779 1 0 08:51 ? 00:00:00 oracle+ASM3 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 5784 1 0 08:51 ? 00:00:00 oracle+ASM3 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root 9530 10313 0 09:16 pts/0 00:00:00 grep ora
root 10615 1 0 08:36 ? 00:00:12 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/rac3/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/rac3/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/rac3/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/rac3/tfa_home
grid 24396 1 0 08:43 ? 00:00:02 /u01/app/11.2.0/grid/bin/oraagent.bin
On node 1, use dbca to add the oracle instance to the database:
dbca -> Instance Management -> Add an instance -> select the database and enter the sys user and password -> select the node and instance name -> Finish.
Besides the dbca GUI, dbca can also run as a silent installation:
[oracle@rac1 bin]$ dbca -silent -addInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.
4.2 Add the instance to the CRS resources as the oracle user
Note: run this as the oracle user, and pay attention to the oracle user's group membership.
[oracle@rac1 bin]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATADG/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2,orcl3
Disk Groups: DATADG,SYSTEMDG
Mount point paths:
Services: orcl_taf
Type: RAC
Database is administrator managed
This shows the orcl3 instance is already associated with the database. If it were not, you would run:
[oracle@rac1 bin]$ srvctl add instance -d orcl -i orcl3 -n rac3
5. Modify the Client-side TAF configuration
5.1 Modify the oracle user's tnsnames.ora file
Modify the tnsnames.ora file under the oracle user on all nodes; the changes are as follows:
NODE1_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
NODE2_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
NODE3_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
ORCL_REMOTE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
)
)
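Note that for true client-side TAF the connecting alias also needs a FAILOVER_MODE clause; the entries above only provide connect-time failover. A minimal TAF alias (the name ORCL_TAF_CLIENT is illustrative) would look like:
ORCL_TAF_CLIENT =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
)
)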
5.2 Modify the LOCAL_LISTENER and REMOTE_LISTENER parameters
Run the following:
alter system set LOCAL_LISTENER='NODE1_LOCAL' scope=both sid='orcl1';
alter system set LOCAL_LISTENER='NODE2_LOCAL' scope=both sid='orcl2';
alter system set LOCAL_LISTENER='NODE3_LOCAL' scope=both sid='orcl3';
alter system set REMOTE_LISTENER='ORCL_REMOTE' scope=both sid='*';
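You can then confirm the parameters took effect on each instance, for example on node 3:
[oracle@rac3 ~]$ sqlplus / as sysdba
SQL> show parameter local_listener
SQL> show parameter remote_listener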
6. Modify the Service-Side TAF configuration
[oracle@rac1 admin]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1
[oracle@rac1 admin]$ srvctl config service -d orcl
Service name: orcl_taf
Service is enabled
Server pool: orcl_orcl_taf
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: orcl1
Available instances: orcl2
--Modify the existing service to add the node 3 instance, orcl3
[oracle@rac1 admin]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2,orcl3
[oracle@rac1 admin]$ srvctl config service -d orcl
Service name: orcl_taf
Service is enabled
Server pool: orcl_orcl_taf
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: orcl1,orcl2,orcl3
Available instances:
[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl3
[oracle@rac1 admin]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl3
#orcl2 was not running before, so start it here as well
[oracle@rac1 admin]$ srvctl start service -d orcl -s orcl_taf -i orcl2
[oracle@rac1 admin]$ srvctl status service -d orcl
Service orcl_taf is running on instance(s) orcl1,orcl2,orcl3
7. Verification
[grid@rac3 ~]$ olsnodes -s
rac1 Active
rac2 Active
rac3 Active
[grid@rac3 ~]$ olsnodes -n
rac1 1
rac2 2
rac3 3
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.SYSTEMDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ONLINE ONLINE rac3 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
OFFLINE OFFLINE rac3
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac2
ora.oc4j
1 ONLINE ONLINE rac2
ora.orcl.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
3 ONLINE ONLINE rac3 Open
ora.orcl.orcl_taf.svc
1 ONLINE ONLINE rac1
2 ONLINE ONLINE rac3
3 ONLINE ONLINE rac2
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.rac3.vip
1 ONLINE ONLINE rac3
ora.scan1.vip
1 ONLINE ONLINE rac2
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Thu Jun 9 10:09:31 2016
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> col host_name for a20
SQL> select inst_id,host_name,instance_name,status from gv$instance;
INST_ID HOST_NAME INSTANCE_NAME STATUS
---------- -------------------- ---------------- ------------
1 rac1 orcl1 OPEN
3 rac3 orcl3 OPEN
2 rac2 orcl2 OPEN
[root@rac1 ~]# ./crs_stat.sh
Name Target State Host
------------------------ ---------- --------- -------
ora.DATADG.dg ONLINE ONLINE rac1
ora.LISTENER.lsnr ONLINE ONLINE rac1
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE rac2
ora.SYSTEMDG.dg ONLINE ONLINE rac1
ora.asm ONLINE ONLINE rac1
ora.cvu ONLINE ONLINE rac2
ora.gsd OFFLINE OFFLINE
ora.net1.network ONLINE ONLINE rac1
ora.oc4j ONLINE ONLINE rac2
ora.ons ONLINE ONLINE rac1
ora.orcl.db ONLINE ONLINE rac1
ora.orcl.orcl_taf.svc ONLINE ONLINE rac1
ora.rac1.ASM1.asm ONLINE ONLINE rac1
ora.rac1.LISTENER_RAC1.lsnr ONLINE ONLINE rac1
ora.rac1.gsd OFFLINE OFFLINE
ora.rac1.ons ONLINE ONLINE rac1
ora.rac1.vip ONLINE ONLINE rac1
ora.rac2.ASM2.asm ONLINE ONLINE rac2
ora.rac2.LISTENER_RAC2.lsnr ONLINE ONLINE rac2
ora.rac2.gsd OFFLINE OFFLINE
ora.rac2.ons ONLINE ONLINE rac2
ora.rac2.vip ONLINE ONLINE rac2
ora.rac3.ASM3.asm ONLINE ONLINE rac3
ora.rac3.LISTENER_RAC3.lsnr ONLINE ONLINE rac3
ora.rac3.gsd OFFLINE OFFLINE
ora.rac3.ons ONLINE ONLINE rac3
ora.rac3.vip ONLINE ONLINE rac3
ora.scan1.vip ONLINE ONLINE rac2
8. Summary of adding and removing nodes in 11gR2
Adding a node to 11gR2 RAC consists of three phases:
(1) The first phase copies the GRID HOME to the new node, configures and starts GRID, and updates the OCR and the inventory.
(2) The second phase copies the RDBMS HOME to the new node and updates the inventory.
(3) The third phase uses DBCA to create the new database instance (including the undo tablespace, redo logs, initialization parameters, and so on) and updates the OCR (including registering the new database instance).
Removing a node in 11gR2 walks through the same three phases in reverse order, as sketched below.
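For reference, the main commands in the removal direction are roughly the following. This is only a sketch; consult the official removal procedure before deleting a node:
#1. Remove the instance from a remaining node, as the oracle user
dbca -silent -deleteInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword oracle
#2. Deconfigure and remove the RDBMS and GRID homes on the node being dropped (deinstall/deconfig steps on rac3)
#3. Delete the node from the cluster, from a remaining node as root
crsctl delete node -n rac3
#4. Update the inventory on the remaining nodes, as the grid/oracle user
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"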
Throughout the add/remove process the existing nodes remain online; no downtime is required and client workloads are unaffected. The new node's ORACLE_BASE and ORACLE_HOME paths are created automatically during the addition, so there is no need to create them by hand.
Notes:
(1) Before adding or removing a node, back up the OCR manually; if the operation fails in certain scenarios, the problem can be resolved by restoring the original OCR.
(2) During a normal 11.2 GRID installation the OUI provides an SSH configuration feature, but the addNode.sh script does not, so SSH user equivalence for the oracle and grid users must be configured manually, as sketched below.
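If runSSHSetup.sh is unavailable, equivalence can also be configured by hand with standard OpenSSH tools, repeated for both the grid and oracle users on every node (a minimal sketch):
ssh-keygen -t rsa
ssh-copy-id grid@rac1
ssh-copy-id grid@rac2
ssh-copy-id grid@rac3
# then verify: ssh rac3 date should not prompt for a password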