Migrating 10g RAC on Solaris 10 (Part 3)
This article covers the third part of the migration: installing Oracle Clusterware.
Installing Clusterware
Verifying that the system meets the Clusterware installation requirements
Run the following script as the oracle user on pre1:
$ ./runcluvfy.sh stage -pre crsinst -n pre1,pre2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "pre1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "172.0.2.0" with node(s) pre2,pre1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) pre2,pre1.
Suitable interfaces for VIP on subnet "172.0.2.0":
pre2 ce0:172.0.2.3
pre1 ce0:172.0.2.1
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
pre2 ce1:10.0.0.2
pre1 ce1:10.0.0.1
Node connectivity check passed.
Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Operating system version check passed.
Package existence check passed for "SUNWarc".
Package existence check passed for "SUNWbtool".
Package existence check passed for "SUNWhea".
Package existence check passed for "SUNWlibm".
Package existence check passed for "SUNWlibms".
Package existence check passed for "SUNWsprot".
Package existence check passed for "SUNWsprox".
Package existence check passed for "SUNWtoo".
Package existence check passed for "SUNWi1of".
Package existence check passed for "SUNWi1cs".
Package existence check passed for "SUNWi15cs".
Package existence check passed for "SUNWxwfnt".
Package existence check passed for "SUNWlibC".
Package existence check failed for "SUNWscucm:3.1".
Check failed on nodes:
pre2,pre1
Package existence check failed for "SUNWudlmr:3.1".
Check failed on nodes:
pre2,pre1
Package existence check failed for "SUNWudlm:3.1".
Check failed on nodes:
pre2,pre1
Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check failed on nodes:
pre2,pre1
Package existence check failed for "SUNWscr:3.1".
Check failed on nodes:
pre2,pre1
Package existence check failed for "SUNWscu:3.1".
Check failed on nodes:
pre2,pre1
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "oracle".
User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
The pre-check reports failures here, but they do not block the Clusterware installation: as explained earlier, the missing packages (SUNWscucm, SUNWudlm, ORCLudlm, SUNWscr and so on) are Sun Cluster and UDLM components that are only required when RAC runs on top of Sun Cluster.
Performing the Clusterware installation
Start a VNC session and launch the installer:
bash-3.00# xhost +
access control disabled, clients can connect from any host
bash-3.00# su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
bash-3.00$ export DISPLAY=172.0.2.1:1.0
Re-run ssh-agent and load the keys, so that the installer's SSH connections do not prompt for a passphrase:
bash-3.00$ cd /backup/soft/cluster/
bash-3.00$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Identity added: /export/home/oracle/.ssh/id_rsa (/export/home/oracle/.ssh/id_rsa)
Identity added: /export/home/oracle/.ssh/id_dsa (/export/home/oracle/.ssh/id_dsa)
$ pwd
/backup/soft/cluster
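Before starting the installer it is worth confirming that user equivalence now works without any prompt, since OUI copies files to the remote node over SSH. A quick check from pre1 might look like:

```shell
# Each command should print the remote date with no password or
# passphrase prompt; OUI will fail to copy files if any prompt appears.
ssh pre1 date
ssh pre2 date
```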
Launch the installer:
$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be 5.8, 5.9 or 5.10. Actual 5.10
Passed
Checking Temp space: must be greater than 250 MB. Actual 10917 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11324 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536 Passed
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2007-09-22_12-31-16AM. Please wait ...$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
Checking installer requirements...
Checking operating system version: must be 5.8, 5.9 or 5.10. Actual 5.10
Passed
Checking Temp space: must be greater than 250 MB. Actual 10934 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11333 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536 Passed
All installer requirements met.
On the welcome screen click Next. On the next screen choose the inventory directory and the operating system group; the defaults are /oracle/app/oraInventory and oinstall. Accept the defaults and click Next.
Enter the path for OraCrs10g_home1: /oracle/app/product/10.2/crs, add Simplified Chinese to the product languages, then click Next.
The installer then runs a series of prerequisite checks; if they all pass, click Next.
Enter the cluster name; the default, crs, is used here.
Configure the following under Cluster Nodes:
public node name    private node name    virtual host name
pre1                priv-pre1            vip-pre1
pre2                priv-pre2            vip-pre2
The names entered here must match the entries in /etc/hosts on both nodes.
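The public and private addresses below are taken from the cluvfy output earlier; the VIP addresses shown are placeholders only and must be replaced with the addresses actually assigned to vip-pre1 and vip-pre2. A matching /etc/hosts excerpt on both nodes would look like:

```
# /etc/hosts (excerpt) -- public, private and virtual names for both nodes
172.0.2.1    pre1
172.0.2.3    pre2
10.0.0.1     priv-pre1
10.0.0.2     priv-pre2
172.0.2.11   vip-pre1    # placeholder VIP address
172.0.2.12   vip-pre2    # placeholder VIP address
```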
Click Next to review the network interface usage; if the displayed assignments are incorrect, enter the correct network information.
Click Next and enter the OCR location, /dev/rdsk/c3t0d3s5, choosing external redundancy.
Click Next and enter the voting disk location, /dev/rdsk/c3t0d3s6, again choosing external redundancy.
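If OUI reports that it cannot access these devices, check the ownership and permissions of the raw slices on both nodes before proceeding. A typical pre-installation setup (a sketch following the 10gR2 install guide; verify the exact ownerships against the documentation for your release) is:

```shell
# On every node: OCR device owned by root:oinstall,
# voting disk owned by the oracle software owner.
chown root:oinstall /dev/rdsk/c3t0d3s5     # OCR
chmod 640 /dev/rdsk/c3t0d3s5
chown oracle:oinstall /dev/rdsk/c3t0d3s6   # voting disk
chmod 660 /dev/rdsk/c3t0d3s6
```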
Click Next to start the installation.
When the installation completes, run the following two scripts as root on every node; run root.sh on one node at a time, pre1 first:
/oracle/app/oraInventory/orainstRoot.sh
/oracle/app/product/10.2/crs/root.sh
Run them as root on pre1:
bash-3.00# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory to 770.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete
bash-3.00# /oracle/app/product/10.2/crs/root.sh
WARNING: directory '/oracle/app/product/10.2' is not owned by root
WARNING: directory '/oracle/app/product' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/product/10.2' is not owned by root
WARNING: directory '/oracle/app/product' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: pre1 priv-pre1 pre1
node 2: pre2 priv-pre2 pre2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/rdsk/c3t0d3s6
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
pre2
CSS is inactive on these nodes.
pre1
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
Then run them as root on pre2:
bash-3.00# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory to 770.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete
bash-3.00# /oracle/app/product/10.2/crs/root.sh
WARNING: directory '/oracle/app/product/10.2' is not owned by root
WARNING: directory '/oracle/app/product' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/product/10.2' is not owned by root
WARNING: directory '/oracle/app/product' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: pre1 priv-pre1 pre1
node 2: pre2 priv-pre2 pre2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
pre1
pre2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
Return to the installer window and click OK. Oracle performs some further configuration and then runs cluvfy again to check the CRS status on all nodes; if everything passes, click Exit.
This completes the Clusterware installation.
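Once both root.sh runs have completed, the stack can be verified from either node. A quick check, using the CRS home installed above, might look like:

```shell
CRS_HOME=/oracle/app/product/10.2/crs

$CRS_HOME/bin/olsnodes -n        # should list pre1 and pre2 with node numbers
$CRS_HOME/bin/crsctl check crs   # CSS, CRS and EVM daemons should report healthy
$CRS_HOME/bin/crs_stat -t        # VIP, GSD and ONS resources ONLINE on both nodes

# Optionally repeat the cluvfy check as a post-installation stage:
# ./runcluvfy.sh stage -post crsinst -n pre1,pre2
```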
From the ITPUB blog: http://blog.itpub.net/231499/viewspace-63864/. Please credit the source when reprinting; otherwise legal liability may be pursued.