Basic RAC Maintenance Commands

Posted by liqilin0429 on 2012-12-12

RAC Maintenance

Configuring OEM

Oracle Enterprise Manager (Database Control) has been configured; you can use it to view the database configuration and current status.

The URL is: https://racnode1:1158/em

[oracle@racnode1 ~]$ emctl status dbconsole 
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0 
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://racnode1:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running. 
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log
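In a monitoring script, the status text above can be checked directly. A minimal sketch, with the sample status line hard-coded for illustration; on a real node you would capture it from `emctl status dbconsole`:

```shell
#!/bin/sh
# Sketch: decide from emctl output whether Database Control is up.
# Returns success when the status text reports "is running".
em_up() {
    printf '%s\n' "$1" | grep -q 'is running'
}

# Sample status line embedded for illustration; on the node you would
# capture it with: status=$(emctl status dbconsole)
status='Oracle Enterprise Manager 11g is running.'
if em_up "$status"; then
    echo "Database Control is up"
else
    echo "Database Control is down"
fi
```

If the check fails, Database Control is started with `emctl start dbconsole` as the oracle user.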

Checking the health of the cluster (clusterized command)

Run the following command as the grid user:

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle instances (database status)

[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2
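The output above is easy to check from a script, for example to alert when any instance is down. A sketch that parses a sample of the `srvctl status database` output (embedded here so the snippet is self-contained):

```shell
#!/bin/sh
# Sketch: verify that every instance reported by srvctl is running.
# Prints the number of lines that do NOT say "is running".
count_down() {
    printf '%s\n' "$1" | grep -c -v 'is running'
}

# Sample output embedded for illustration; on a real node use:
#   status=$(srvctl status database -d racdb)
status='Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2'

down=$(count_down "$status")
if [ "$down" -eq 0 ]; then
    echo "all instances running"
else
    echo "$down instance(s) not running"
fi
```

Note that srvctl reports "Instance ... is not running on node ..." for a down instance, which the `grep -v 'is running'` filter counts as a problem line.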

A single Oracle instance (status of a specific instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1

 

Node Applications

Node applications (status)

[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2

Node applications (configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168,
listening port 2016

List all configured databases

[oracle@racnode1 ~]$ srvctl config database
racdb

Database (configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services: 
Database is enabled
Database is administrator managed

 

ASM

ASM (status)

[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM (configuration)

$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

 

TNS

TNS listener (status)

[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2

TNS listener (configuration)

[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:  
 /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

 

SCAN

SCAN (status)

[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1

SCAN (configuration)

[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187

 

VIP

VIP (status of a specific node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1 

[oracle@racnode1 ~]$ srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
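Running the same status query for each node is a common loop. A dry-run sketch over the node names used in this article (it only prints the commands, so it is safe to run anywhere):

```shell
#!/bin/sh
# Dry-run sketch: query the VIP status of each node in turn.
check_vips() {
    for node in "$@"; do
        # Drop the echo prefix to execute for real on a cluster node.
        echo "would run: srvctl status vip -n $node"
    done
}

check_vips racnode1 racnode2
```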

VIP (configuration of a specific node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0 

[oracle@racnode1 ~]$ srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Node application configuration (VIP, GSD, ONS, listener)

[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:  
 /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verify clock synchronization across all cluster nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose 

Verifying Clock Synchronization across the cluster nodes  

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed 

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes 
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode1                              passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed 

Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  racnode1                              Active                  
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes:  "[racnode1]" 
Result: Check of clock time offsets passed  

Oracle Cluster Time Synchronization Services check passed 

Verification of Clock Synchronization across the cluster nodes was successful.

All running instances in the cluster (SQL)


SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id; 
 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 racdb1     YES OPEN    ACTIVE       NORMAL    racnode1
       2        2 racdb2     YES OPEN    ACTIVE       NORMAL    racnode2

All database files and the ASM disk groups they reside in (SQL)


select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile; 
NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429 

19 rows selected.

ASM disk volumes (SQL)
SELECT path
FROM   v$asm_disk;
 
PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1


Starting/Stopping the Cluster

Oracle Grid Infrastructure was installed by the grid user, and the Oracle RAC software was installed by the oracle user. A fully functional clustered database named racdb is up and running.

All services (including Oracle Clusterware, ASM, the network, SCAN, VIPs, Oracle Database, and so on) should start automatically each time the Linux nodes are rebooted.

At times you may need to shut down the Oracle services on a node for maintenance and restart the Oracle Clusterware stack later. You may also find that Enterprise Manager is not running and needs to be started. The stop/start operations must be performed as root.

Stopping the Oracle Clusterware stack on the local server

Use the crsctl stop cluster command on the racnode1 node to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on
'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has
completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

 

Note: After running the crsctl stop cluster command, if any of the resources managed by Oracle Clusterware are still running, the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying the -all option. The following command stops the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
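For a planned maintenance window, the shutdown steps are typically wrapped in a small script. A hedged sketch that only prints the commands it would run (a dry-run, so it is safe to test outside a cluster; GRID_HOME and the database name match this article's environment, and the -f option is the unconditional-stop flag described in the note above):

```shell
#!/bin/sh
# Dry-run sketch of a node maintenance shutdown sequence.
# GRID_HOME and DB_NAME match this article's environment; adjust as needed.
GRID_HOME=/u01/app/11.2.0/grid
DB_NAME=racdb

run() {
    # Dry-run: only print the command; remove the echo to execute for real.
    echo "would run: $*"
}

# 1. Stop the database instances cleanly first.
run srvctl stop database -d "$DB_NAME"
# 2. Then stop the Clusterware stack on the local node (-f forces any
#    remaining managed resources to stop).
run "$GRID_HOME/bin/crsctl" stop cluster -f
```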

 

Starting the Oracle Clusterware stack on the local server

Use the crsctl start cluster command on the racnode1 node to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

 

Note: You can start the Oracle Clusterware stack on all servers in the cluster by specifying the -all option.

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

 

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by spaces:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

 

Starting/Stopping All Instances with SRVCTL

Use the following commands to start/stop all instances and their associated services:

[oracle@racnode1 ~]$ srvctl stop database -d racdb 
[oracle@racnode1 ~]$ srvctl start database -d racdb
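To bounce instances one at a time instead of the whole database (for example, during a rolling change), `srvctl stop instance` / `srvctl start instance` can be looped over the instance names. A dry-run sketch; the instance names racdb1 and racdb2 are the ones shown earlier in this article:

```shell
#!/bin/sh
# Dry-run sketch of a rolling restart, one instance at a time.
DB_NAME=racdb

rolling_restart() {
    for inst in "$@"; do
        # Remove the echo prefixes to actually execute on a real cluster.
        echo "would run: srvctl stop instance -d $DB_NAME -i $inst"
        echo "would run: srvctl start instance -d $DB_NAME -i $inst"
    done
}

rolling_restart racdb1 racdb2
```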

Use the lvscan command on the Openfiler server to check the status of all logical volumes:

[root@openfiler ~]# lvscan

  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk1' [4.00 GB] inherit

  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk2' [4.00 GB] inherit

  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk3' [4.00 GB] inherit

  ACTIVE            '/dev/rac.sharedisk1/ractest_dbfile1' [11.72 GB] inherit

  ACTIVE            '/dev/rac.sharedisk1/fra1' [8.16 GB] inherit

Note: on a working system, the status of every logical volume should be set to ACTIVE; any volume reported as inactive must be reactivated.
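One way to act on that note is to list any volumes lvscan does not report as ACTIVE. A sketch that parses sample lvscan output (embedded here for illustration; on the Openfiler server you would capture the real output instead):

```shell
#!/bin/sh
# Sketch: list logical volumes that lvscan does not report as ACTIVE.
not_active() {
    printf '%s\n' "$1" | awk '$1 != "ACTIVE" {print $2}'
}

# Sample lvscan output embedded for illustration; on the server you
# would capture it with: out=$(lvscan)
out="  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk1' [4.00 GB] inherit
  inactive          '/dev/rac.sharedisk1/fra1' [8.16 GB] inherit"

# Note: awk keeps the quotes lvscan prints around the device path.
not_active "$out"
```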

Tips

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"
Password: *********

 

Check the Oracle TNS listener processes on both nodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER

Confirming Oracle ASM functionality for Oracle Clusterware files

If you installed the OCR and voting disk files on Oracle ASM, then, as the Grid Infrastructure installation owner, use the following command syntax to confirm that the installed Oracle ASM instance is running:

[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.

Check the Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120         
         Used space (kbytes)      :       2404         
         Available space (kbytes) :     259716         
         ID                       : 1259866904          
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
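The ocrcheck output lends itself to scripted monitoring, for example alerting when OCR free space drops. A sketch that parses the sample lines shown above (embedded here so the snippet is self-contained):

```shell
#!/bin/sh
# Sketch: extract the available OCR space (kbytes) from ocrcheck output.
avail_kb() {
    printf '%s\n' "$1" | awk -F: '/Available space/ {gsub(/ /, "", $2); print $2}'
}

# Sample ocrcheck lines embedded for illustration; on a real node use:
#   out=$(ocrcheck)
out='Total space (kbytes)     :     262120
Used space (kbytes)      :       2404
Available space (kbytes) :     259716'

echo "OCR available: $(avail_kb "$out") KB"
```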

Check the voting disks

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

Source: ITPUB blog. Link: http://blog.itpub.net/20976446/viewspace-750971/. If you reprint this article, please credit the source; otherwise legal responsibility may be pursued.
