Oracle RAC basic management commands

Posted by fufuh2o on 2010-03-23


RAC Oracle Clusterware command set
Grouped by layer:
1. Node layer: olsnodes
2. Network layer: oifcfg
3. Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig
4. Application layer: srvctl, onsctl, crs_stat
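Each layer can be spot-checked with one command. A minimal health-check sketch, assuming the Clusterware bin directory is on the PATH:

```shell
olsnodes -n -p -i   # node layer: nodes with number, private name and VIP
oifcfg getif        # network layer: how each interface is classified
crsctl check crs    # cluster layer: health of the CSS/CRS/EVM daemons
crs_stat -t         # application layer: status of all registered resources
```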


# Detailed usage

Node layer
The olsnodes command; every option is described by -help:
[root@dmk01 bin]# ./olsnodes -help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect name with the node name
                -i print virtual IP name with the node name
                <node> print information for the specified node
                -l print information for the local node
                -g turn on logging
                -v run in verbose mod

-n show each node's node number
-p show each node's private interconnect interface name
-i show each node's VIP
-g turn on logging
-v verbose output
[root@dmk01 bin]# pwd
/oracle/app/product/11.1/crs/bin
[root@dmk01 bin]# ./olsnodes -n -p -i
dmk01   1       dmk01-priv      dmk01-vip
dmk02   2       dmk02-priv      dmk02-vip
dmk03   3       dmk03-priv      dmk03-vip
dmk04   4       dmk04-priv      dmk04-vip
dmk05   5       dmk05-priv      dmk05-vip
dmk06   6       dmk06-priv      dmk06-vip
dmk07   7       dmk07-priv      dmk07-vip
dmk08   8       dmk08-priv      dmk08-vip
dmk09   9       dmk09-priv      dmk09-vip
dmk10   10      dmk10-priv      dmk10-vip
dmk11   11      dmk11-priv      dmk11-vip
dmk12   12      dmk12-priv      dmk12-vip
dmk13   13      dmk13-priv      dmk13-vip
dmk14   14      dmk14-priv      dmk14-vip
dmk15   15      dmk15-priv      dmk15-vip
dmk16   16      dmk16-priv      dmk16-vip
dmk17   17      dmk17-priv      dmk17-vip
dmk18   18      dmk18-priv      dmk18-vip
dmk19   19      dmk19-priv      dmk19-vip
dmk20   20      dmk20-priv      dmk20-vip
dmk21   21      dmk21-priv      dmk21-vip
dmk22   22      dmk22-priv      dmk22-vip
dmk23   23      dmk23-priv      dmk23-vip
dmk24   24      dmk24-priv      dmk24-vip

 

Network layer

 

[oracle@dmk01 ~]$ oifcfg iflist   # list the network interfaces
ib0  192.168.26.0
bond0  10.87.25.0
bond1  192.168.25.0

[oracle@dmk01 ~]$ oifcfg getif   # show each interface's attributes
bond0  10.87.25.0  global  public                 <- Oracle Net and VIP traffic
ib0  192.168.26.0  global  cluster_interconnect   <- heartbeat/interconnect


[oracle@dmk01 ~]$ oifcfg getif -global -dmk22   # view node dmk22's configuration
bond0  10.87.25.0  global  public
ib0  192.168.26.0  global  cluster_interconnect

 

[oracle@dmk01 ~]$ oifcfg getif -type public  # view interfaces of a given type (public here; cluster_interconnect is the other type)
bond0  10.87.25.0  global  public


oifcfg setif -xxxx      # register a network interface
oifcfg delif -global    # remove an interface definition
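The setif arguments take the form interface/subnet:type. A sketch reusing the interface names from the getif output above (treat these as examples, not a recipe for this cluster):

```shell
# register a public interface cluster-wide
oifcfg setif -global bond0/10.87.25.0:public
# register the private interconnect
oifcfg setif -global ib0/192.168.26.0:cluster_interconnect
# remove a cluster-wide interface definition
oifcfg delif -global ib0/192.168.26.0
```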


Cluster layer
Covers crsctl, ocrcheck, ocrdump and ocrconfig; the last three operate on the OCR disk.

[oracle@dmk01 ~]$ crsctl
Usage: crsctl check crs - checks the viability of the Oracle Clusterware
       crsctl check cssd        - checks the viability of Cluster Synchronization Services
       crsctl check crsd        - checks the viability of Cluster Ready Services
       crsctl check evmd        - checks the viability of Event Manager
       crsctl check cluster [-node ] - checks the viability of CSS across nodes
       crsctl set css - sets a parameter override
       crsctl get css - gets the value of a Cluster Synchronization Services parameter
       crsctl unset css - sets the Cluster Synchronization Services parameter to its default
       crsctl query css votedisk - lists the voting disks used by Cluster Synchronization Services
       crsctl add css votedisk - adds a new voting disk
       crsctl delete css votedisk - removes a voting disk
       crsctl enable crs - enables startup for all Oracle Clusterware daemons
       crsctl disable crs - disables startup for all Oracle Clusterware daemons
       crsctl start crs [-wait] - starts all Oracle Clusterware daemons
       crsctl stop crs [-wait] - stops all Oracle Clusterware daemons. Stops Oracle Clusterware managed resources in case of cluster.
       crsctl start resources - starts Oracle Clusterware managed resources
       crsctl stop resources - stops Oracle Clusterware managed resources
       crsctl debug statedump css - dumps state info for Cluster Synchronization Services objects
       crsctl debug statedump crs - dumps state info for Cluster Ready Services objects
       crsctl debug statedump evm - dumps state info for Event Manager objects
       crsctl debug log css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
       crsctl debug log crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
       crsctl debug log evm [module:level] {,module:level} ... - turns on debugging for Event Manager
       crsctl debug log res [resname:level] ... - turns on debugging for Event Manager
       crsctl debug trace css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
       crsctl debug trace crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
       crsctl debug trace evm [module:level] {,module:level} ... - turns on debugging for Event Manager
       crsctl query crs softwareversion [] - lists the version of Oracle Clusterware software installed
       crsctl query crs activeversion - lists the Oracle Clusterware operating version
       crsctl lsmodules css - lists the Cluster Synchronization Services modules that can be used for debugging
       crsctl lsmodules crs - lists the Cluster Ready Services modules that can be used for debugging
       crsctl lsmodules evm - lists the Event Manager modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by adding a 'trace'
 argument at the very front. Example: crsctl trace check css


Common crsctl commands
# Check CRS status
[oracle@dmk01 ~]$ crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
# Check the cssd component
[oracle@dmk01 ~]$ crsctl check cssd
Cluster Synchronization Services appears healthy
# Check the crsd component
[oracle@dmk01 ~]$ crsctl check crsd
Cluster Ready Services appears healthy
# Check the evmd component
[oracle@dmk01 ~]$ crsctl check evmd
Event Manager appears healthy
[oracle@dmk01 ~]$

# Enable or disable automatic startup of the CRS stack (run as root; auto-start is the default). Under the hood this edits /etc/oracle/scls_scr/NODE_NAME/root/crsstart
crsctl disable crs
crsctl enable crs

# Start/stop CRS
crsctl stop crs
crsctl start crs

# Query the voting disk locations
crsctl query css votedisk

# View a CSS parameter
crsctl get css parameter_name
For example:
[oracle@dmk01 ~]$ crsctl get css misscount
60
# Set a CSS parameter
crsctl set css parameter_name value
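For instance, the misscount value read above (the network heartbeat timeout, in seconds) could be overridden and reverted like this. A sketch only, run as root; the value is illustrative, and CSS timeouts should not be changed casually on a production cluster:

```shell
crsctl get css misscount        # read the current value
crsctl set css misscount 100    # override it (example value)
crsctl unset css misscount      # restore the default
```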

# CRS is made up of three services: crs, css and evm. Each service consists of a set of modules; crsctl can trace each module and record the output in a log.
[oracle@dmk01 ~]$ crsctl lsmodules css
The following are the Cluster Synchronization Services modules::
    CSSD
    COMMCRS
    COMMNS
[oracle@dmk01 ~]$ crsctl lsmodules crs
The following are the CRS modules::
    CRSUI
    CRSCOMM
    CRSRTI
    CRSMAIN
    CRSPLACE
    CRSAPP
    CRSRES
    CRSCOMM
    CRSOCR
    CRSTIMER
    CRSEVT
    CRSD
    CLUCLS
    CLSVER
    CSSCLNT
    COMMCRS
    COMMNS
[oracle@dmk01 ~]$ crsctl lsmodules evm
The following are the Event Manager modules::
   EVMD
   EVMDMAIN
   EVMCOMM
   EVMEVT
   EVMAPP
   EVMAGENT
   CRSOCR
   CLUCLS
   CSSCLNT
   COMMCRS
   COMMNS

 


# Turn on tracing of the CSSD module (run as root); results are written to ocssd.log
crsctl debug log css "cssd:1"

 

# Maintaining voting disks: Oracle requires more than half of the voting disks to be available, otherwise the cluster goes down immediately
# 1. Adding a voting disk requires the DB, ASM and CRS stack to be fully stopped first, and the -force flag
# Query the voting disk locations
crsctl query css votedisk
# Stop CRS on all nodes, as root
crsctl stop crs
# Add a voting disk, as root ("crsctl delete css votedisk" removes one). Since a majority must remain available, if you started with one disk add two more for a total of three; with only two disks, losing one brings the cluster down
crsctl add css votedisk /path   -force
# Verify the result after adding, as root
crsctl query css votedisk
# Start the CRS stack, as root
crsctl start crs


# OCR commands
# The cluster-wide configuration lives on shared storage: the OCR disk. Only one node, the master node, may write to the OCR disk; every node keeps a copy of the OCR in memory,
# and a single OCR process reads from that in-memory copy. When the OCR content changes, the master node's OCR process synchronizes the change to the OCR processes on the other nodes.
# Oracle backs up the OCR every 4 hours, keeping the last 3 backups plus the last backup of the previous day and of the previous week. Backups are made by the master node's crsd process; default backup dest = $CRS_HOME/crs/cdata/cluster_name
# Each backup's filename reflects its place in time; the first backup is called backup00.ocr

# ocrdump prints the OCR contents in readable form (it is not a backup)
# ocrdump options: -stdout prints to the screen; a filename writes the contents to that file; -keyname prints only the given key and its subkeys; -xml prints XML output
# ocrdump -stdout -keyname SYSTEM.css -xml   prints the SYSTEM.css key
# A log ocrdump_<pid>.log is written under $CRS_HOME/log/<hostname>/client; check it if anything goes wrong


# ocrcheck verifies the consistency of the OCR contents; it writes ocrcheck_<pid>.log under $CRS_HOME/log/<hostname>/client
ocrcheck

# ocrconfig maintains the OCR disks; at most two are allowed: one primary OCR and one mirror OCR
[oracle@dmk10 ~]$ ocrconfig -help   # view usage
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.

Synopsis:
        ocrconfig [option]
        option:
                -export <filename> [-s online]
                                                    - Export cluster register contents to a file
                -import <filename>                  - Import cluster registry contents from a file
                -upgrade [<user> [<group>]]
                                                    - Upgrade cluster registry from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade cluster registry to the specified version
                -backuploc <dirname>                - Configure periodic backup location
                -showbackup [auto|manual]           - Show backup information
                -manualbackup                       - Perform OCR backup
                -restore <filename>                 - Restore from physical backup
                -replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair ocr|ocrmirror <filename>    - Repair local OCR configuration
                -help                               - Print out this help information

Note:
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.


# View OCR backups
ocrconfig -showbackup
# Change the backup location; the default is $CRS_HOME/crs/cdata/cluster_name
ocrconfig -backuploc <dirname>
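A sketch with a made-up directory:

```shell
# point the periodic backups at a new location (path is hypothetical)
ocrconfig -backuploc /oracle/ocr_backup
# confirm by listing the backup information again
ocrconfig -showbackup
```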

# Backup and restore with export/import
# Oracle recommends backing up the OCR with the export command before cluster changes (before add node/delete node). After replace or restore operations, run "cluvfy comp ocr -n all" for a full check
# Stop CRS on all nodes, as root
crsctl stop crs
# Export the OCR, as root
ocrconfig -export /path/file.exp   (e.g. test.exp)
# Check CRS status, as root
crsctl check crs
# Now suppose the OCR is corrupted (storage damage or similar), so its contents are broken
# Check OCR consistency as root; at this point ocrcheck and the cluvfy tool will both report failures
ocrcheck
runcluvfy.sh comp ocr -n all
# Restore the OCR contents with import, as root
ocrconfig -import /oracle/test.exp
# Check the OCR again with ocrcheck or the cluvfy tool
# If the checks pass, start CRS as root
crsctl start crs
# After startup, check CRS status as root
crsctl check crs

 


# Moving the OCR
# Check whether an OCR backup exists
ocrconfig -showbackup
# If there is no backup, run one export as a backup, as root
ocrconfig -export /path/file.exp -s online
# View the current OCR configuration
ocrcheck
# With only a primary OCR and no mirror OCR, the OCR location cannot be changed directly; add a mirror OCR first, then make the change
# Add a mirror OCR
ocrconfig -replace ocrmirror <filename>
# Check whether the add succeeded, as root
ocrcheck
# Change the primary OCR location, as root
ocrconfig -replace ocr /new/location
# Confirm the change succeeded, as root
ocrcheck
# After the steps above, /etc/oracle/ocr.loc on all nodes is synchronized automatically; if it is not, edit it by hand:
ocrconfig_loc=/new/location
ocrmirrorconfig_loc=xxxxxxxxxxx

local_only=FALSE

 

# Application layer: a RAC database is made up of a number of resources; each resource is a complete service consisting of one process or a group of processes
# crs_stat shows the status of every resource maintained by CRS
crs_stat -t
# View a given resource's status with -p or -v: -p is very detailed, -v includes the allowed restart count, restarts already used, the failure threshold and the failure count
crs_stat resource_name -p
# View the permission definition of each resource; the correct values are oracle:dba rwxrwxr-- (user, group, permissions)
crs_stat -ls

# onsctl
# Manages and configures ONS (Oracle Notification Service), the foundation of Oracle Clusterware's FAN event push model
# 10g introduced the push mechanism, FAN: when certain events occur on the server side, the server actively notifies clients so they learn of the change as early as possible. The mechanism relies on ONS. (Earlier, clients polled the server periodically to determine service state: a pull model.)

# Using onsctl requires the ONS service to be configured
oracle@HA5-DZ01:[/home/oracle] onsctl                                                                                                                          
usage: /oracle/app/oracle/product/10.2.0/crs/bin/onsctl start|stop|ping|reconfig|debug

start                            - Start opmn only.
stop                             - Stop ons daemon
ping                             - Test to see if ons daemon is running
debug                            - Display debug information for the ons daemon
reconfig                         - Reload the ons configuration
help                             - Print a short syntax description (this).
detailed                         - Print a verbose syntax description.

# RAC uses the ONS configuration file under $CRS_HOME: $CRS_HOME/opmn/conf/ons.config
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] more ons.config
localport=6100    # local port, used to talk to clients running on this host
remoteport=6200   # remote port, used to talk to remote clients

loglevel=3        # ONS tracing level, 1-9; default 3
logfile=/path     # trace file location; default $CRS_HOME/opmn/logs/opmn.log

# useocr and nodes together determine which remote nodes' ONS daemons the local ONS daemon talks to
# node format: hostname/ip:port
# useocr=on means the information is stored in the OCR; off means it is read from the nodes setting (single instance: useocr=off)
# useocr=off   -- read the nodes setting
# nodes=rac1:6200,rac2:6200   -- the local ONS talks to port 6200 on nodes rac1 and rac2
# With useocr=on the data is stored in the OCR under the DATABASE.ONS_HOSTS key, where hosts and ports are visible, e.g. DATABASE.ONS_HOSTS.rac1, DATABASE.ONS_HOSTS.rac1.PORT
ocrdump -xml test.xml -keyname DATABASE.ONS_HOSTS


# Configuring ONS: edit the ONS config file directly, or, when useocr=on, configure it via racgons as root; add_config adds entries, remove_config removes them
racgons add_config rac3:6200,rac4:6200
racgons remove_config rac3:6200,rac4:6200

# onsctl commands: start, stop, debug, reload the configuration file
onsctl start|stop|debug|reconfig|detailed
# A running ONS process does not necessarily mean ONS is working correctly; confirm with ping
# Check the process state at the OS level
[oracle@dmk10 conf]$ ps -aef|grep ons
oracle   13173 15458  0 14:04 pts/1    00:00:00 grep ons
oracle   24922     1  0 Mar08 ?        00:00:00 /oracle/app/product/11.1/crs/opmn/bin/ons -d
oracle   24923 24922  0 Mar08 ?        00:00:05 /oracle/app/product/11.1/crs/opmn/bin/ons -d
# Confirm the ONS service state
$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 3
onscfg[0]
   {node = nbidw7, port = 6251}
Setting remote port from OCR repository to 6251
Adding remote host nbidw7:6251
onscfg[1]
   {node = nbidw8, port = 6251}
Adding remote host nbidw8:6251
onscfg[2]
   {node = nbidw9, port = 6251}
Adding remote host nbidw9:6251
ons is not running ...   <- not started
$
# Start ONS
onsctl start
# Confirm the ONS state again (it should report "ons is running")
onsctl ping
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] onsctl ping
ons is running ...

# debug shows the details: the ONS listener table and all connections
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] onsctl debug
HTTP/1.1 200 OK
Content-Length: 1285
Content-Type: text/html
Response:


======== NS ========

Listeners:

 NAME    BIND ADDRESS   PORT   FLAGS   SOCKET
------- --------------- ----- -------- ------
Local   127.000.000.001  6100 00000142      6
Remote  010.096.019.037  6200 00000101      7
Request     No listener

Server connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----

Client connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----
        11 127.000.000.001  6100 0001001a          0               1     0
        17 127.000.000.001  6100 0001001a          0               1     1

Pending connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----
         0 127.000.000.001  6100 00020812          0               1     0

Worker Ticket: 37/37, Idle: 60

   THREAD   FLAGS
  -------- --------
         2 00000012
         3 00000012
         4 00000012

Resources:

  Notifications:
    Received: 15, in Receive Q: 0, Processed: 15, in Process Q: 0

  Pools:
    Message: 24/25 (1), Link: 25/25 (1), Subscription: 24/25 (1)
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf]
onsctl debug

 


# SRVCTL can operate on database, instance, ASM, service, listener, and node applications (GSD, ONS, VIP)

oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl -help
Usage: srvctl <command> <object> [<options>]
    command: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|instance|service|nodeapps|asm|listener
For detailed help on each command and object and its options use:
    srvctl <command> <object> -h


# View database configuration: lists every DB registered in the OCR (some environments register multiple DBs in one OCR)
srvctl config database
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database
HNSMS
# -d shows one database's configuration, e.g. which nodes it runs on
srvctl config database -d HNSMS
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database -d HNSMS
ha5-dz01 HNSMS1 /oracle/app/oracle/product/10.2.0/database
ha5-dz02 HNSMS2 /oracle/app/oracle/product/10.2.0/database
# -a shows more detail
srvctl config database -d HNSMS -a
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database -d HNSMS -a
ha5-dz01 HNSMS1 /oracle/app/oracle/product/10.2.0/database
ha5-dz02 HNSMS2 /oracle/app/oracle/product/10.2.0/database
DB_NAME: HNSMS
ORACLE_HOME: /oracle/app/oracle/product/10.2.0/database
SPFILE: /dev/vx/rdsk/vg_db01/dz01_05G_006
DOMAIN: null
DB_ROLE: null
START_OPTIONS: null
POLICY:  AUTOMATIC
ENABLE FLAG: DB ENABLED

# View the node application configuration
srvctl config nodeapps -n node_name
[oracle@dmk10 conf]$ srvctl config nodeapps -n dmk10
VIP exists.: /dmk10-vip/10.87.25.35/255.255.255.0/bond0
GSD exists.
ONS daemon exists.
Listener exists.
# -a shows the VIP, -g the GSD, -s the ONS, -l the listener
srvctl config nodeapps -n node_name -a
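Querying each component separately with those flags (node name taken from the earlier output):

```shell
srvctl config nodeapps -n dmk10 -a   # VIP configuration
srvctl config nodeapps -n dmk10 -g   # GSD
srvctl config nodeapps -n dmk10 -s   # ONS daemon
srvctl config nodeapps -n dmk10 -l   # listener
```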

# View the listeners
srvctl config listener -n NODE_name
[oracle@dmk10 conf]$ srvctl config listener -n dmk09
dmk09 LISTENER_DMK09
[oracle@dmk10 conf]$ srvctl config listener -n dmk10
dmk10 LISTENER_DMK10


# View ASM: prints each node's ASM instance name and its $ORACLE_HOME
srvctl config asm -n node_name
[oracle@dmk10 conf]$ srvctl config asm -n dmk10
+ASM10 /oracle/app/product/11.1/db

# View services: show the configuration of all services of a DB
srvctl config service -d database_name -a
[oracle@dmk10 conf]$ srvctl config service -d rac -a
masamk1 PREF: rac1 rac2 rac3 rac4 rac5 rac6 AVAIL:  TAF: NONE
masamk2 PREF: rac7 rac8 rac9 rac10 rac11 rac12 AVAIL:  TAF: NONE
masamk3 PREF: rac1 rac2 rac3 rac4 rac5 rac6 rac7 rac8 rac9 rac10 rac11 rac12 AVAIL:  TAF: NONE

# -s shows one service's configuration, e.g. -s masamk1
# -a shows the TAF policy
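Combining both flags for one of the services listed above:

```shell
# configuration and TAF policy of the masamk1 service only
srvctl config service -d rac -s masamk1 -a
```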

 

 

# Use add to register objects and remove to delete them; removing a db or instance is interactive
# Application-layer resources are mostly registered in the OCR by the GUI tools: VIP and ONS in the final stage of installation, db and ASM during dbca, the listener via netca
# Manually register a db
srvctl add database -d database_name -o $ORACLE_HOME
# Register an instance
srvctl add instance -d database_name -n node_name -i instance_name
# Add a service, using 4 options:
# -s: service name
# -r: preferred instance names
# -a: available (backup) instance names
# -p: TAF policy (NONE (the default), BASIC or PRECONNECT)
srvctl add service -d database_name -s service_name -r instance_name -a instance_name -p TAF_POLICY


# enable/disable objects
# By default db, instance, service and ASM all start when CRS starts
# Configure the db to start with CRS (disable turns this off)
srvctl enable database -d database_name
# Verify the configuration: ENABLE FLAG: DB ENABLED means it worked, and POLICY should be AUTOMATIC so the db starts with CRS
srvctl config database -d DATABASE_NAME -a
# Disable automatic startup of a single instance
srvctl disable instance -d database_name -i INSTANCE_NAME

 

# Prevent a service from running on a particular instance
srvctl disable service -d database_name -s service_name -i instance_name


# Starting, stopping and inspecting objects
# srvctl is the recommended way to start and stop the db; its advantage is that the runtime information in the OCR stays current
# srvctl start database -d database_name   (starts to open by default)

# Start to a different state, per instance; -o passes the startup option
srvctl start database -d database_name -i instance_name -o mount
srvctl start database -d database_name -i instance_name -o nomount
# Stop objects, per instance
srvctl stop instance -d database_name -i instance_name -o immediate
srvctl stop instance -d database_name -i instance_name -o abort

# Start a service on an instance
srvctl start service -d database_name -s service_name -i instance_name
# View the service status
srvctl status service -d database_name -v
# Stop a service on an instance
srvctl stop service -d database_name -s service_name -i instance_name


For the full startup syntax and options run:
srvctl start database|asm|instance|service|listener|nodeapps -h


# View status
srvctl status service -d database_name -v
srvctl status database -d database_name
srvctl status instance -d database_name -i instance_name


# Tracing srvctl
# To trace srvctl, set the OS environment variable SRVM_TRACE=true:
export SRVM_TRACE=true
# then run srvctl as usual; the trace is printed to the screen

 

********* Recovery *********
# If the OCR and voting disks are all damaged and there is no backup, the OCR and voting disks must be reinitialized
# Stop the CRS stack on all nodes
crsctl stop crs
# On every node run $CRS_HOME/install/rootdelete.sh, as root
# On any one node run $CRS_HOME/install/rootdeinstall.sh (one node is enough)
# On that same node run $CRS_HOME/root.sh
# Run $CRS_HOME/root.sh on the remaining nodes (watch the output on the last node: ons, gsd, vip creating/starting)
# Re-run netca to configure the listeners and confirm they are registered in the CRS OCR (at this point the listener, ONS, GSD and VIP are all registered)
crs_stat -t -v

# If ASM is used, add ASM to the OCR
srvctl add asm -n node_name -i asm_instance_name -o $ORACLE_HOME
# Start ASM
srvctl start asm -n node_name
# If ORA-27550 appears while starting ASM, RAC may be unable to determine which NIC to use as the private interconnect (heartbeat)
# Fix it with a parameter in the ASM pfile; it must be set in every node's ASM pfile
+ASM1.cluster_interconnects='heartbeat (private network) IP address'
+ASM2.cluster_interconnects='heartbeat (private network) IP address'
# Add the database object to the OCR
srvctl add database -d database_name -o $ORACLE_HOME (full path)
# Add the instance objects to the OCR, one add per instance
srvctl add instance -d database_name -i instance_name -n node_name
# Set the dependency between each instance and its ASM instance (instance_name maps to asm_instance_name); repeat for every instance
srvctl modify instance -d database_name -i instance_name -s asm_instance_name

# Start the db
srvctl start database -d database_name
# If ORA-27550 appears, resolve it the same way as for ASM above
# Modify each database instance's spfile; set the parameter for every instance, matching each node's private IP
alter system set cluster_interconnects='heartbeat (private network) IP address' scope=spfile sid='INSTANCE_NAME';
alter system set cluster_interconnects='heartbeat (private network) IP address' scope=spfile sid='INSTANCE_NAME';

# Restart the db
srvctl start database -d database_name

From the ITPUB blog, link: http://blog.itpub.net/12020513/viewspace-630224/. When republishing, please credit the source; otherwise legal liability will be pursued.
