Installing a Three-Node Oracle 10.2 RAC on Solaris Raw Devices (Part 2)

Posted by yangtingkun on 2011-01-27

Installing a three-node Oracle 10.2 RAC on Solaris using raw devices.

This part covers the installation of Oracle Clusterware.

Installing a Three-Node Oracle 10.2 RAC on Solaris Raw Devices (Part 1): http://yangtingkun.itpub.net/post/468/512772


The previous article finished preparing the operating system. This one describes installing Oracle Clusterware to set up the RAC environment.

Unpack the Oracle Clusterware installation archive with cpio -idmv < 10gr2_cluster_sol.cpio, then change into the extracted directory, enter the cluvfy directory, and run the following pre-installation check:

$ cd cluster_disk
$ cd cluvfy

bash-2.03$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2,racnode3

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "racnode1".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Node connectivity check passed for subnet "172.25.0.0" with node(s) racnode3,racnode2,racnode1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) racnode3,racnode2,racnode1.

Suitable interfaces for the private interconnect on subnet "172.25.0.0":
racnode3 ce0:172.25.198.226
racnode2 ce0:172.25.198.223
racnode1 ce0:172.25.198.222

ERROR:
Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.


Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check failed.
Check failed on nodes:
        racnode3,racnode2,racnode1
System architecture check passed.
Operating system version check failed.
Check failed on nodes:
        racnode3
Package existence check passed for "SUNWarc".
Package existence check passed for "SUNWbtool".
Package existence check passed for "SUNWhea".
Package existence check passed for "SUNWlibm".
Package existence check passed for "SUNWlibms".
Package existence check passed for "SUNWsprot".
Package existence check passed for "SUNWsprox".
Package existence check passed for "SUNWtoo".
Package existence check passed for "SUNWi1of".
Package existence check passed for "SUNWi1cs".
Package existence check passed for "SUNWi15cs".
Package existence check passed for "SUNWxwfnt".
Package existence check passed for "SUNWlibC".
Package existence check failed for "SUNWscucm:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWudlmr:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWudlm:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWscr:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Package existence check failed for "SUNWscu:3.1".
Check failed on nodes:
        racnode3,racnode2,racnode1
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "oracle".
User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The cause of the VIP error was described in detail in an earlier article and is not repeated here. The swap space failure can be ignored: swap was already checked in the previous article and the systems have enough. The package checks that fail further down are all for Sun Cluster packages; since this RAC installation will use Oracle Clusterware rather than Sun Cluster, those errors can also be ignored. The operating system version failure is reported because racnode3 runs a different OS release from racnode1 and racnode2; this too can be ignored.
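To double-check the swap warning before dismissing it, the physical memory and swap configuration can be confirmed on each node with standard Solaris commands (this check is not part of the original installation log):

# /usr/sbin/prtconf | grep Memory
# /usr/sbin/swap -l
# /usr/sbin/swap -s

swap -l lists the configured swap devices and their free blocks, while swap -s summarizes allocated, reserved, and available swap.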

Before starting the installation, configure the shared storage that Clusterware requires. The shared space carved out on the storage array was divided into multiple raw devices, which are visible under the /dev/rdsk directory.

Because the three test servers use different fibre channel HBAs, the raw device names they see also differ. On racnode1 the devices are:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd99114,0
       1. c2t3d0
          /pci@8,700000/QLGC,qla@3/sd@3,0
       2. c2t3d1
          /pci@8,700000/QLGC,qla@3/sd@3,1
       3. c2t3d2
          /pci@8,700000/QLGC,qla@3/sd@3,2
       4. c2t3d3
          /pci@8,700000/QLGC,qla@3/sd@3,3
       5. c2t3d4
          /pci@8,700000/QLGC,qla@3/sd@3,4
       6. c2t3d5
          /pci@8,700000/QLGC,qla@3/sd@3,5
       7. c2t3d6
          /pci@8,700000/QLGC,qla@3/sd@3,6
       8. c2t3d7
          /pci@8,700000/QLGC,qla@3/sd@3,7
       9. c2t3d8
          /pci@8,700000/QLGC,qla@3/sd@3,8
      10. c2t3d9
          /pci@8,700000/QLGC,qla@3/sd@3,9
      11. c2t3d10
          /pci@8,700000/QLGC,qla@3/sd@3,a
      12. c2t3d11
          /pci@8,700000/QLGC,qla@3/sd@3,b
      13. c2t3d12
          /pci@8,700000/QLGC,qla@3/sd@3,c
      14. c2t3d13
          /pci@8,700000/QLGC,qla@3/sd@3,d

On racnode2 and racnode3 the device names are similar:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9e4b8,0
       1. c1t1d0
          /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9ead5,0
       2. c2t500601603022E66Ad0
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,0
       3. c2t500601603022E66Ad1
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,1
       4. c2t500601603022E66Ad2
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,2
       5. c2t500601603022E66Ad3
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,3
       6. c2t500601603022E66Ad4
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,4
       7. c2t500601603022E66Ad5
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,5
       8. c2t500601603022E66Ad6
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,6
       9. c2t500601603022E66Ad7
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,7

Because the OCR and voting disk shared devices must be visible under the same path name on every node during the installation, the following links are created. On racnode1:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t3d2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t3d2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t3d2s1
# chown oracle:oinstall /dev/rdsk/c2t3d2s3

On racnode2:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s1
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s3

On racnode3:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c1t500601603022E66Ad2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c1t500601603022E66Ad2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c1t500601603022E66Ad2s1
# chown oracle:oinstall /dev/rdsk/c1t500601603022E66Ad2s3
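On each node, it is worth confirming that the links point at the intended slices and that the oracle:oinstall ownership took effect on the underlying devices; the same two commands work everywhere because the /dev/rac names are identical on all three nodes:

# ls -l /dev/rac
# ls -lL /dev/rac/ocr /dev/rac/vot

The -L option makes ls follow the symbolic links, so the second command shows the owner and group of the actual raw devices.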

Note that slice s0 should not be used for the shared raw devices; otherwise, when root.sh is run after the installation, it fails with the error Failed to upgrade Oracle Cluster Registry configuration.
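This is usually because slice 0 starts at cylinder 0, where the Solaris disk label (VTOC) is stored, so writing the OCR there corrupts the label. The slice layout can be verified with prtvtoc; a quick check against the racnode1 device used above:

# prtvtoc /dev/rdsk/c2t3d2s2

In the output, make sure the First Sector of the slices used for the OCR and voting disk (s1 and s3 here) is not 0.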

Now the installation can begin. Start Xmanager, log in to racnode1, and run:

# xhost +
access control disabled, clients can connect from any host
# su - oracle
Sun Microsystems Inc.   SunOS 5.8       Generic Patch   October 2001
$ cd /data/cluster_disk
$ ./runInstaller
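If runInstaller complains that it cannot open the display, DISPLAY usually needs to be exported in the oracle session before launching it; a minimal sketch, where 172.25.198.100:0.0 is only a placeholder for the actual Xmanager workstation display:

$ DISPLAY=172.25.198.100:0.0
$ export DISPLAY
$ ./runInstaller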

After the GUI starts, click Next. Oracle then prompts for the inventory path and the operating system group: the defaults are the /data/oracle/oraInventory directory created earlier and the oinstall group. Click Next.

Next Oracle prompts for the OraCrs10g_home1 path; the default is the ORACLE_HOME path /data/oracle/product/10.2/database. Change it to /data/oracle/product/10.2/crs, add Simplified Chinese to the selected languages, and click Next.

Oracle then automatically checks whether the system meets the installation requirements. If the settings described in the previous article were applied, these checks pass; continue to the next step.

Next comes the cluster configuration. The default Cluster Name is crs; it can be changed or kept. Oracle automatically lists the network configuration of the node running the installer, and the information for racnode2 and racnode3 must be added by hand: racnode2, racnode2-priv, racnode2-vip and racnode3, racnode3-priv, racnode3-vip. Click Next.

The next screen lists the available network interfaces; check that the PUBLIC and PRIVATE assignments match the hosts file. Because the public addresses start with 172.25, an Oracle bug treats them as private IPs and marks both interfaces as PRIVATE, so the interface on subnet 172.25.0.0 must be changed to PUBLIC manually. After making the change, click Next.

Next comes the OCR configuration. Because the OCR shared disk comes from the storage array, which already provides RAID protection of its own, External Redundancy is selected here. Enter the /dev/rac/ocr path set up earlier in the OCR Location field and click Next.

For the Voting Disk configuration, choose External Redundancy for the same reason, enter the configured /dev/rac/vot in the Voting Disk Location field, and click Next.

The summary page appears; click Install to start the installation.

After the installation completes, two scripts must be run as root on each node in turn. The first script produces the same output whether it is run on racnode1, racnode2, or racnode3:

# . /data/oracle/oraInventory/orainstRoot.sh
Changing permissions of /data/oracle/oraInventory to 770.
Changing groupname of /data/oracle/oraInventory to oinstall.
The execution of the script is complete

For the second script, run it on racnode1 first:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
ln: cannot create /data/oracle/product/10.2/crs/lib/libskgxn2.so: File exists
ln: cannot create /data/oracle/product/10.2/crs/lib32/libskgxn2.so: File exists
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Now formatting voting device: /dev/rac/vot
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
CSS is inactive on these nodes.
        racnode2
        racnode3
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Then on racnode2:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
CSS is inactive on these nodes.
        racnode3
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Finally, run the script on racnode3:

# /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
node 3: racnode3 racnode3-priv racnode3
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
        racnode3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "ce0" is not public. Public interfaces should be used to configure virtual IPs.

The script ends with an error. This is the Oracle bug mentioned several times above: the PUBLIC interface is treated as private, so the VIPs cannot be configured. The problem will be described in detail in the problem summary of the final article; only the workaround is given here.
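Besides the GUI approach below, on 10.2 the interface classification recorded in the OCR can also be inspected and corrected from the command line with the oifcfg utility; a minimal sketch, assuming the ce0 interface and the 172.25.0.0 public subnet used in this installation:

# cd /data/oracle/product/10.2/crs/bin
# ./oifcfg getif
# ./oifcfg setif -global ce0/172.25.0.0:public
# ./oifcfg getif

Once the interface is registered as public, vipca no longer rejects it when configuring the VIPs.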

The simplest fix, however, is to launch the vipca GUI and configure the node applications manually:

# cd /data/oracle/product/10.2/crs/bin/
# ./vipca

Open a terminal in Xmanager and enter the commands above to start the vipca GUI. Click Next; all available network interfaces are listed. Since ce0 is configured as the PUBLIC interface, select ce0 and click Next. On the configuration screen, enter racnode1-vip, racnode2-vip, and racnode3-vip in the IP Alias Name fields and 172.25.198.224, 172.25.198.225, and 172.25.198.227 in the IP Address fields. If the configuration is correct, Oracle fills in the remaining fields automatically once the first IP is entered. Click Next to reach the summary page, verify the settings, and click Finish.

Oracle then performs six steps: Create VIP application resource, Create GSD application resource, Create ONS application resource, Start VIP application resource, Start GSD application resource, and Start ONS application resource.

Once all of them succeed, click OK to finish the VIPCA configuration.
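The node applications can now also be verified from the command line with the standard 10.2 CRS tools (output not included in the original article):

# /data/oracle/product/10.2/crs/bin/srvctl status nodeapps -n racnode1
# /data/oracle/product/10.2/crs/bin/crs_stat -t

The first command reports the VIP, GSD, and ONS status for a single node; crs_stat -t prints a summary table of every resource registered in the cluster.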

Now return to the Clusterware installation screen left open earlier and click OK.

Oracle then attempts to start two configuration assistants and finally runs a verification utility. When all the checks complete, the installer jumps to the end-of-installation screen; click Exit to finish the Clusterware installation.
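As a final sanity check, cluvfy can be run again in post-installation mode from the same staging directory used at the beginning of this article; a minimal sketch:

$ cd /data/cluster_disk/cluvfy
$ ./runcluvfy.sh stage -post crsinst -n racnode1,racnode2,racnode3

This verifies that the Clusterware stack, OCR, and voting disk are reachable and consistent across all three nodes.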


From the ITPUB blog. Link: http://blog.itpub.net/4227/viewspace-686424/. When reposting, please cite the source; otherwise legal liability may be pursued.
