Adding a Node (rac3) to an Oracle 10g RAC Cluster

Posted by dayong2015 on 2014-05-17
1. Hostname and IP planning:
#Public Network - (eth0) 
192.168.10.1   rac1 
192.168.10.2   rac2 
192.168.10.5   rac3       # public IP of the new node
#Private Interconnect - (eth1) 
10.0.0.1     rac1-priv 
10.0.0.2     rac2-priv 
10.0.0.3     rac3-priv      # private IP of the new node
#Public Virtual IP (VIP) addresses - (eth0) 
192.168.10.3   rac1-vip 
192.168.10.4   rac2-vip 
192.168.10.6   rac3-vip  # virtual IP of the new node
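For reference, a minimal sketch of what the matching RHEL-style interface files on rac3 might look like (the public /24 netmask is taken from the VIP definition used later in this article; the private netmask is an assumption):

# /etc/sysconfig/network-scripts/ifcfg-eth0   (public network)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.5
NETMASK=255.255.255.0
ONBOOT=yes
# /etc/sysconfig/network-scripts/ifcfg-eth1   (private interconnect)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.3
NETMASK=255.255.255.0
ONBOOT=yes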
2. Configure rac3 with the same OS environment as rac1 and rac2, including creating the users, environment variables, SSH equivalence, and so on.
Before the third node can join, it must be prepared so that it qualifies as a member of the RAC environment; this exercise is done on virtual machines:
1) Verify the basic environment
Check the users and groups:

[root@rac3 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
[root@rac3 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
Check the oracle user's environment variables:
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ vi .bash_profile
export ORACLE_SID=orcl3     # change this line for the new node
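Apart from ORACLE_SID, the rest of the profile should be identical to rac1 and rac2. A sketch of the relevant variables, assuming the directory layout used elsewhere in this article (CRS home /u01/app/oracle/crs, database home /u01/app/oracle/db_1):

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORA_CRS_HOME=$ORACLE_BASE/crs
export ORACLE_SID=orcl3
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH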
Verify the directories and their ownership:
[root@rac3 oracle]# chown -R oracle:oinstall crs/
[root@rac3 oracle]# ll
total 4
drwxr-xr-x 2 oracle oinstall 4096 May 17 18:28 crs
Check the kernel parameters:
[root@rac3 oracle]# vi /etc/sysctl.conf
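The values must match the other nodes; as a reference, the commonly documented Oracle 10gR2 minimums look like the following (treat these as an assumption, not values taken from this environment). After editing, run sysctl -p to apply them without a reboot.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144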
Check the oracle user's shell limits:
[root@rac3 oracle]# vi /etc/pam.d/login
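The limits themselves normally live in /etc/security/limits.conf, while /etc/pam.d/login only needs the pam_limits entry that makes them take effect. A sketch with the standard 10g values (copy the real values from rac1/rac2):

# /etc/security/limits.conf
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
# /etc/pam.d/login
session required pam_limits.so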
Verify the hangcheck-timer setup:
[root@rac3 oracle]# vi /etc/rc.local
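The hangcheck-timer module is typically loaded from /etc/rc.local with the same parameters as the existing nodes; the values below are the commonly documented 10g settings and are shown here as an assumption:

# /etc/rc.local
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180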
Verify the partitions and the bound raw devices:
[root@rac3 oracle]# ll /dev/raw/raw*
crw-rw---- 1 oracle dba 162, 1 May 17 18:03 /dev/raw/raw1
crw-rw---- 1 oracle dba 162, 2 May 17 18:03 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 3 May 17 18:03 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 4 May 17 18:03 /dev/raw/raw4
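On a RHEL/OEL 4-style system these bindings come from /etc/sysconfig/rawdevices; the partition names below are placeholders, since the article does not show which shared-disk partitions are mapped. Copy the mapping from rac1/rac2, then restart the service and fix the ownership:

# /etc/sysconfig/rawdevices   (partition names are illustrative)
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb3
/dev/raw/raw4 /dev/sdb4

[root@rac3 ~]# service rawdevices restart
[root@rac3 ~]# chown oracle:dba /dev/raw/raw*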

2) Configure /etc/hosts as follows:
[root@rac3 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
#127.0.0.1              rac1 localhost.localdomain localhost
127.0.0.1       localhost
#::1            localhost6.localdomain6 localhost6
#public ip
192.168.10.1    rac1
192.168.10.2    rac2
192.168.10.5    rac3
#private ip
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv
10.0.0.3        rac3-priv
#virtual ip
192.168.10.3    rac1-vip
192.168.10.4    rac2-vip
192.168.10.6    rac3-vip

Note: the hosts file must be updated not only on the newly added node, but on every node in the same RAC environment.
3) Configure SSH key authentication

Nodes in a RAC environment not only communicate with each other constantly but may also need to access each other's files, so access between nodes must work without the DBA typing a password; we achieve this by configuring SSH.

Start on the newly added node, rac3 (note which user runs the commands):
[oracle@rac3 .ssh]$ ls
authorized_keys  id_dsa  id_dsa.pub  id_rsa  id_rsa.pub  known_hosts
[oracle@rac3 .ssh]$ rm -rf *
[oracle@rac3 .ssh]$ ll
total 0

[oracle@rac3 ~]$ ssh-keygen -t rsa

[oracle@rac3 ~]$ ssh-keygen -t dsa

Then switch to node rac1 and run the following, again as the oracle user (when the remote node is accessed you may be prompted for its password):

[oracle@rac1 ~]$ ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac3 (192.168.10.5)' can't be established.
RSA key fingerprint is 7c:f4:aa:d2:89:fc:0d:1f:ff:33:07:15:21:97:62:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3,192.168.10.5' (RSA) to the list of known hosts.
oracle@rac3's password:
[oracle@rac1 ~]$ ssh rac3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac3's password:

Finally, copy the completed authorized_keys file from rac1 to nodes rac2 and rac3:

[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
authorized_keys                               100% 2982     2.9KB/s   00:00
[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
oracle@rac3's password:
authorized_keys                               100% 2982     2.9KB/s   00:00

Verify the SSH equivalence            # each command must return a date without prompting for a password
[oracle@rac1 .ssh]$ ssh rac1 date
Sat May 17 16:25:22 CST 2014
[oracle@rac1 .ssh]$ ssh rac2 date
Sat May 17 16:25:30 CST 2014
[oracle@rac1 .ssh]$ ssh rac3 date
Sat May 17 16:25:36 CST 2014
[oracle@rac1 .ssh]$ ssh rac1-priv date
Sat May 17 16:26:11 CST 2014
[oracle@rac1 .ssh]$ ssh rac2-priv date
Sat May 17 16:26:20 CST 2014
Run the above verification on all three nodes;
[oracle@rac1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[oracle@rac1 .ssh]$ /usr/bin/ssh-add                   // confirm that both identities are listed
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
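To cover every name pair in one pass (including rac3-priv, which the transcript above omits), a quick loop such as the following can be run from each of the three nodes; every line should print a date without a password prompt:

[oracle@rac1 ~]$ for h in rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv; do ssh $h date; done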
3. Add Clusterware to the new node
1) Check the installation environment

First check the installation environment, again using the runcluvfy.sh script. The script can be run on any node of the existing RAC configuration; here it is run on node rac1:
[oracle@rac1 cluvfy]$ pwd
/u01/app/clusterware/cluvfy
[oracle@rac1 cluvfy]$ ls
cvupack.zip  jrepack.zip  runcluvfy.sh

[oracle@rac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac3 -verbose
The end of the output indicates that the pre-installation checks passed.

2) Install Clusterware on the new node

Installation of Clusterware on the new node is also driven from the existing RAC environment: from $CRS_HOME on any current node, run the oui/bin/addNode.sh script to bring up the graphical installer:

Run on rac1:
[root@rac1 ~]# xhost +
[root@rac1 ~]# su - oracle
[oracle@rac1 bin]$ pwd
/u01/app/oracle/crs/oui/bin
[oracle@rac1 bin]$ ls
addLangs.sh  attachHome.sh  lsnodes    ouica.sh  runConfig.sh  runInstaller.sh
addNode.sh   detachHome.sh  ouica.bat  resource  runInstaller

[oracle@rac1 bin]$ ./addNode.sh
The OUI welcome screen appears.

Specify the new node's information on the node specification screen.

Following the prompts, run the corresponding scripts in order:


[root@rac1 ~]# /u01/app/oracle/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 3: rac3 rac3-priv rac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/oracle/crs/bin/srvctl add nodeapps -n rac3 -A rac3-vip/255.255.255.0/eth0 -o /u01/app/oracle/crs

[root@rac3 app]# /u01/app/oracle/crs/root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
OCR LOCATIONS =  /dev/raw/raw1
OCR backup directory '/u01/app/oracle/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
        rac3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps


Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/crs/log/rac1/racg/ora.rac1.vip.log" for more details
.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/crs/log/rac2/racg/ora.rac2.vip.log" for more details
..
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...

Done.
After the scripts have completed, click Next; the installation-complete screen appears.

Check the CRS status:
[oracle@rac2 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac3
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora.rac3.gsd   application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
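Note that ora.rac3.vip is currently hosted on rac1 (a side effect of the CRS-0233 messages above). One way to move it back to its home node, using the standard 10g node-application commands, is to bounce the node applications for rac3:

[oracle@rac1 bin]$ ./srvctl stop nodeapps -n rac3
[oracle@rac1 bin]$ ./srvctl start nodeapps -n rac3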
3) Next, the new node's ONS (Oracle Notification Services) configuration must be written into the OCR (Oracle Cluster Registry); this is done from node rac1 as shown below.
First look up the ONS remote port used by the third node, rac3:
[oracle@rac3 conf]$ pwd
/u01/app/oracle/crs/opmn/conf
[oracle@rac3 conf]$ cat ons.config
localport=6113
remoteport=6200
loglevel=3
useocr=on
Then run the following on node rac1:
[oracle@rac1 bin]$ pwd
/u01/app/oracle/crs/bin
[oracle@rac1 bin]$ ./racgons add_config rac3:6200
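As an optional check (an assumption, not part of the original procedure), the onsctl utility under the CRS home's opmn/bin directory can be used to verify that the local ONS daemon is up:

[oracle@rac3 bin]$ /u01/app/oracle/crs/opmn/bin/onsctl ping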

At this point the Clusterware configuration of the new node is complete. To check the result of the installation, run the cluvfy command on the new node:
[oracle@rac3 bin]$ pwd
/u01/app/oracle/crs/bin
[oracle@rac3 bin]$ ./cluvfy stage -post crsinst -n rac3 -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac3"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Node reachability check passed from node "rac3".

Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  rac3                                  passed
Result: User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac3                                  running
Result: Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

  Node Name
  ------------------------------------
  rac1
  rac2
  rac3


Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...

ERROR:
OCR integrity is invalid.

OCR integrity check failed.

Checking CRS integrity...

Checking daemon liveness...

Check: Liveness for "CRS daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "CRS daemon".

Checking daemon liveness...

Check: Liveness for "CSS daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "CSS daemon".

Checking daemon liveness...

Check: Liveness for "EVM daemon"
  Node Name                             Running
  ------------------------------------  ------------------------
  rac3                                  yes
Result: Liveness check passed for "EVM daemon".

Liveness of all the daemons
  Node Name     CRS daemon                CSS daemon                EVM daemon
  ------------  ------------------------  ------------------------  ----------
  rac3          yes                       yes                       yes

Checking CRS health...

Check: Health of CRS
  Node Name                             CRS OK?
  ------------------------------------  ------------------------
  rac3                                  yes
Result: CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          yes                       exists                    passed
Result: Check passed.


Checking existence of ONS node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          no                        exists                    passed
Result: Check passed.

Checking existence of GSD node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  rac3          no                        exists                    passed
Result: Check passed.

Post-check for cluster services setup was unsuccessful on all the nodes.    # despite this message (caused by the OCR integrity error above), the node addition itself has succeeded
4. Copy the Oracle database software to the new node rac3

Next, copy the Oracle database software to the new node. The copy can be started from any node of the existing RAC environment; here we run it on node rac2:
[oracle@rac2 bin]$ pwd
/u01/app/oracle/db_1/oui/bin
[oracle@rac2 bin]$ ./addNode.sh

The OUI welcome screen appears.

Select the third node to add.

The installer then copies the Oracle software installed on the second node to the newly added node.

When the following prompt appears, run the indicated script:


[root@rac3 ~]# /u01/app/oracle/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
5. Configure the listener on the new node rac3:

Run netca on rac2 and choose to add a listener, selecting only node rac3.

Check the listener status:
[oracle@rac3 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac3
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.gsd   application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   application    0/0    0/0    ONLINE    ONLINE    rac3
6. Add a database instance on the new node rac3:
[oracle@rac1 ~]$ dbca
Select Oracle Real Application Clusters database.


Select Instance Management.

Select Add an instance.

Add the instance to the currently running database.

The two existing instances of the database are listed.

Accept the default instance name and click Next.

The Instance Storage information is displayed.

DBCA then automatically creates the instance on the new node, and will prompt to create the related ASM instance if needed (if ASM is used for storage, click Yes).
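For reference, what DBCA does here is roughly equivalent to the manual steps sketched below (tablespace name, sizes and redo group numbers are illustrative assumptions; DBCA performs all of this automatically):

SQL> CREATE UNDO TABLESPACE UNDOTBS3 DATAFILE SIZE 200M;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 SIZE 50M, GROUP 6 SIZE 50M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;
SQL> ALTER SYSTEM SET instance_number=3 SCOPE=spfile SID='orcl3';
SQL> ALTER SYSTEM SET thread=3 SCOPE=spfile SID='orcl3';
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS3' SCOPE=spfile SID='orcl3';
[oracle@rac1 ~]$ srvctl add instance -d orcl -i orcl3 -n rac3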


7. Verify the result:
[oracle@rac1 ~]$ export ORACLE_SID=orcl1
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat May 17 22:06:55 2014

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> select  INST_ID, INSTANCE_NAME,STATUS from gv$instance;

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 orcl1            OPEN
         3 orcl3            OPEN
         2 orcl2            OPEN
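The same result can be cross-checked from the Clusterware side; on any node, the following should report all three instances as running:

[oracle@rac1 ~]$ srvctl status database -d orcl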

Source: ITPUB blog, http://blog.itpub.net/29634949/viewspace-1163338/ (please credit the source when reposting).
