Step by Step: Installing a Two-Node Oracle 10g R2 RAC on RHEL 5.5 x86_64 in VirtualBox 4.1.6

Posted by weixin_34344677 on 2012-02-01
1. Set up the single-instance environment
See: http://blog.csdn.net/t0nsha/article/details/7166582

2. Configure name resolution
vim /etc/hosts
127.0.0.1       localhost.localdomain localhost
192.168.2.101   rac1.localdomain        rac1
192.168.2.102   rac2.localdomain        rac2
192.168.0.101   rac1-priv.localdomain   rac1-priv
192.168.0.102   rac2-priv.localdomain   rac2-priv
192.168.2.111   rac1-vip.localdomain    rac1-vip
192.168.2.112   rac2-vip.localdomain    rac2-vip

3. Create the installation directories
mkdir -p /u01/oracle/crs
mkdir -p /u01/oracle/10gR2
chown -R oracle:oinstall /u01
chmod -R 775 /u01

4. Configure environment variables (oracle user on rac1)
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH
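After editing, the profile can be reloaded and the variables spot-checked as the oracle user, for example:
source ~/.bash_profile
echo $ORACLE_SID $ORACLE_BASE $ORACLE_HOME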

5. Create the shared disks
set path=C:\Program Files\Oracle\VirtualBox;%path%
VBoxManage createhd --filename ocr1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename ocr2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot3.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
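These VBoxManage commands run on the host (the set path line assumes a Windows host). To confirm that the disks were created and registered, the media registry can be listed; the exact output format depends on the VirtualBox version:
VBoxManage list hdds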

6. Attach the shared disks to rac1
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable
http://www.oracledistilled.com/virtualbox/creating-shared-drives-in-oracle-vm-virtualbox/

7. Configure the shared disks as shareable
VBoxManage modifyhd ocr1.vdi --type shareable
VBoxManage modifyhd ocr2.vdi --type shareable
VBoxManage modifyhd vot1.vdi --type shareable
VBoxManage modifyhd vot2.vdi --type shareable
VBoxManage modifyhd vot3.vdi --type shareable
VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable
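To verify that a disk is now marked shareable, its properties can be inspected, for example:
VBoxManage showhdinfo ocr1.vdi
The output should report the disk type as shareable.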

8. Clone the second virtual machine
mkdir rac2
VBoxManage clonehd rac1\rac1.vdi rac2\rac2.vdi

Create a new virtual machine, rac2, based on rac2.vdi.
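The new VM can be created in the VirtualBox GUI; a rough command-line equivalent is sketched below, where the memory size, OS type and network settings are assumptions that should mirror the rac1 VM:
VBoxManage createvm --name rac2 --ostype RedHat_64 --register
VBoxManage modifyvm rac2 --memory 2048 --nic1 bridged --nic2 intnet
VBoxManage storagectl rac2 --name "SATA 控制器" --add sata --controller IntelAhci
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 0 --device 0 --type hdd --medium rac2\rac2.vdi
The shared disks are then attached to rac2 in the same way as on rac1: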

VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable

9. Configure environment variables on rac2
vi .bash_profile
export ORACLE_SID=RAC2
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

10. Test name resolution
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv

11. Configure SSH user equivalence
# As the oracle user, generate an RSA key pair on both nodes:
ssh-keygen -t rsa
# In ~/.ssh, append each node's id_rsa.pub to a combined authorized_keys,
# then distribute the file to both nodes:
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
scp authorized_keys rac1:/home/oracle/.ssh/

Run the following four commands on both rac1 and rac2 (at least once each, so the host keys are accepted and cached); if none of them prompts for a password, SSH equivalence is configured correctly:
ssh rac1 date
ssh rac1-priv date
ssh rac2 date
ssh rac2-priv date

12. Configure the shared disks as raw devices
vim  /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="raw[1-8]", OWNER="oracle", GROUP="oinstall", MODE="0660"

13. Verify the cluster setup
cd /clusterware/cluvfy/
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

The following error is caused by a bug and can be ignored:
Could not find a suitable set of interfaces for VIPs.
(http://www.eygle.com/archives/2007/12/oracle10g_rac_linux_cluvfy.html)

14. Install Clusterware
cd /clusterware/
./runInstaller

When /clusterware/runInstaller stops at the prompt below, run
/clusterware/rootpre/rootpre.sh as root on both nodes:
Has 'rootpre.sh' been run by root? [y/n] (n)
# cd /clusterware/rootpre
./rootpre.sh

If the installer complains that the OS release is not supported, edit the following file so that it reports a supported release:
vim /etc/redhat-release
redhat-4
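A minimal sketch of that edit, assuming the 10g installer only checks for the redhat-4 string (keep a backup so the file can be restored afterwards):
cp /etc/redhat-release /etc/redhat-release.orig
echo "redhat-4" > /etc/redhat-release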

Running /u01/oracle/crs/root.sh then fails with:
Failed to upgrade Oracle Cluster Registry configuration
Two things need to be fixed:
1. Apply the patch p4679769_10201_Linux-x86-64.zip (it simply replaces clsfmt.bin on every node).
2. The disks used for OCR and voting had not been partitioned. Partition sdb through sdi with fdisk
(since the disks are shared, partitioning them once on rac1 is enough; the partitions also become
visible on rac2; see the sketch after the quoted forum thread below), then zero out the OCR and
voting raw devices with dd:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256
* a.) Are the RAW devices you are using partitions or full disks? They have to be partitions.
(https://cn.forums.oracle.com/forums/thread.jspa?threadID=1122862&start=0&tstart=0)
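A sketch of the fdisk step, creating a single primary partition spanning the whole disk (run as root on rac1 only; repeat for /dev/sdb through /dev/sdi; the two blank lines accept the default first and last cylinders):
fdisk /dev/sdb <<EOF
n
p
1


w
EOF
partprobe /dev/sdb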

Run root.sh again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Running root.sh on node rac2 hits the following error:
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Two scripts need to be modified:
For the VIPCA utility, alter the $CRS_HOME/bin/vipca script on all nodes to remove LD_ASSUME_KERNEL. After the "if" statement at line 123, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
            unset LD_ASSUME_KERNEL
With the newly inserted line, root.sh should be able to call VIPCA successfully.

For the SRVCTL utility, alter the $CRS_HOME/bin/srvctl scripts on all nodes by adding a line, unset LD_ASSUME_KERNEL, after line 174 as follows:

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <- line to be added

http://docs.oracle.com/cd/B19306_01/relnotes.102/b15666/toc.htm
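Because the line numbers (123 and 174) differ between patch levels, it may be easier to locate the lines to edit first, for example:
grep -n "LD_ASSUME_KERNEL" /u01/oracle/crs/bin/vipca /u01/oracle/crs/bin/srvctl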

After this, /u01/oracle/crs/root.sh has to be run once more.
First remove the cssfatal flag file:
rm -f /etc/oracle/scls_scr/rac1/oracle/cssfatal
Then zero out the OCR and voting raw devices again:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256

When /u01/oracle/crs/root.sh is re-run, node rac2 runs into trouble again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 crs]# pwd
/u01/oracle/crs
[root@rac1 crs]# ssh rac2
root@rac2's password:
Last login: Wed Jan  4 21:34:06 2012 from rac1.localdomain
[root@rac2 ~]# source /home/oracle/.bash_profile
[root@rac2 ~]# cd $ORACLE_HOME
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Running vipca directly then fails with:
[root@rac2 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Solution: define the public and cluster_interconnect interfaces with oifcfg, then re-run vipca:
[root@rac2 bin]# ./oifcfg iflist
eth0  192.168.2.0
eth1  192.168.0.0
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.2.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/192.168.0.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0  192.168.2.0  global  public
eth1  192.168.0.0  global  cluster_interconnect
[root@rac2 bin]# ./vipca
(http://blog.chinaunix.net/space.php?uid=261392&do=blog&id=2138877)

Once Clusterware is installed, run crs_stat -t to check the status of the cluster resources:
[root@rac2 bin]# ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1       
ora.rac1.ons   application    ONLINE    ONLINE    rac1       
ora.rac1.vip   application    ONLINE    ONLINE    rac1       
ora.rac2.gsd   application    ONLINE    ONLINE    rac2       
ora.rac2.ons   application    ONLINE    ONLINE    rac2       
ora.rac2.vip   application    ONLINE    ONLINE    rac2  

15. Install ASM
Back on rac1, start the ASM installation:
[oracle@rac1 database]$ ./runInstaller

Specify the ASM installation path:
OraASM10g_home
/u01/oracle/10gR2/asm

After the installation completes, check the ASM status:
[oracle@rac1 database]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 database]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
[oracle@rac1 database]$

[oracle@rac1 database]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1       
ora....C1.lsnr application    ONLINE    ONLINE    rac1       
ora.rac1.gsd   application    ONLINE    ONLINE    rac1       
ora.rac1.ons   application    ONLINE    ONLINE    rac1       
ora.rac1.vip   application    ONLINE    ONLINE    rac1       
ora....SM2.asm application    ONLINE    ONLINE    rac2       
ora....C2.lsnr application    ONLINE    ONLINE    rac2       
ora.rac2.gsd   application    ONLINE    ONLINE    rac2       
ora.rac2.ons   application    ONLINE    ONLINE    rac2       
ora.rac2.vip   application    ONLINE    ONLINE    rac2       
[oracle@rac1 database]$

16. Install the database software
Back on rac1, start the database installation:
[oracle@rac1 database]$ ./runInstaller

Specify the database home installation path:
OraDb10g_home
/u01/oracle/10gR2/db_1


17. Update .bash_profile:
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$PATH:$ORACLE_HOME/bin


18. Environment variable scripts for managing rac1
crs.env
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

asm.env
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/asm
export PATH=$ORACLE_HOME/bin:$PATH

db.env
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$ORACLE_HOME/bin:$PATH
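With these three files in the oracle user's home directory, switching between the Clusterware, ASM and database homes is just a matter of sourcing the matching file, for example:
source ~/crs.env && crs_stat -t           # Clusterware utilities
source ~/asm.env && sqlplus / as sysdba   # connects to the +ASM1 instance
source ~/db.env && sqlplus / as sysdba    # connects to the RAC1 instance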

 

REF:
Oracle Database 11g Release 2 RAC On Linux Using VirtualBox
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
