[Oracle] 11g R2 RAC Daily Inspection -- Grid
1. Inspecting the RAC Database
1.1 List the databases
[grid@node1 ~]$ srvctl config database
racdb
[grid@node1 ~]$
1.2 List the database instances
[grid@node1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node1
Instance racdb2 is running on node node2
1.3 Database configuration
[grid@node1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA
Services:
Database is enabled
Database is administrator managed
[grid@node1 ~]$
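For a scripted daily check, the configuration dump above can be reduced to the handful of fields usually worth recording. A minimal sketch, assuming a saved copy of the output (the temp-file path and field selection are illustrative, not part of the original procedure; in practice pipe the live `srvctl` command into the `grep`):

```shell
#!/bin/sh
# Sample output copied from the `srvctl config database -d racdb -a` run above.
cat <<'EOF' > /tmp/db_config.txt
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Database role: PRIMARY
Management policy: AUTOMATIC
Database instances: racdb1,racdb2
Disk Groups: DATA
EOF
# Keep only the fields a daily report typically records.
grep -E '^(Database unique name|Database role|Database instances):' /tmp/db_config.txt
```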
2. Inspecting Grid
2.1 Cluster name
[grid@node1 ~]$ cemutlo -n
scan-cluster
[grid@node1 ~]$
2.2 Check the cluster stack status
[grid@node1 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@node1 ~]$
2.3 Cluster resources
[grid@node1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.eons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$
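During a daily check the main thing to look for in this listing is any resource whose state is OFFLINE. A minimal filter sketch (the temp-file name is an assumption; in practice pipe `crsctl status res -t` straight into the `awk`):

```shell
#!/bin/sh
# Trimmed sample of the `crsctl status res -t` output above.
cat <<'EOF' > /tmp/crs_res.txt
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.gsd
               OFFLINE OFFLINE      node1
ora.oc4j
      1        OFFLINE OFFLINE
EOF
# Remember the last resource name seen; print it next to any OFFLINE line
# so unhealthy resources stand out in the report.
awk '/^ora\./ {name=$0} /OFFLINE/ {print name ":" $0}' /tmp/crs_res.txt
```

Note that `ora.gsd` and `ora.oc4j` being OFFLINE is normal on a default 11g R2 installation, so a real report would typically whitelist them.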
More detailed resource status on host node1:
[grid@node1 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node1                    Started
ora.crsd
      1        ONLINE  ONLINE       node1
ora.cssd
      1        ONLINE  ONLINE       node1
ora.cssdmonitor
      1        ONLINE  ONLINE       node1
ora.ctssd
      1        ONLINE  ONLINE       node1                    ACTIVE:0
ora.diskmon
      1        ONLINE  ONLINE       node1
ora.evmd
      1        ONLINE  ONLINE       node1
ora.gipcd
      1        ONLINE  ONLINE       node1
ora.gpnpd
      1        ONLINE  ONLINE       node1
ora.mdnsd
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$
More detailed resource status on host node2:
[grid@node2 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node2                    Started
ora.crsd
      1        ONLINE  ONLINE       node2
ora.cssd
      1        ONLINE  ONLINE       node2
ora.cssdmonitor
      1        ONLINE  ONLINE       node2
ora.ctssd
      1        ONLINE  ONLINE       node2                    ACTIVE:-11700
ora.diskmon
      1        ONLINE  ONLINE       node2
ora.evmd
      1        ONLINE  ONLINE       node2
ora.gipcd
      1        ONLINE  ONLINE       node2
ora.gpnpd
      1        ONLINE  ONLINE       node2
ora.mdnsd
      1        ONLINE  ONLINE       node2
[grid@node2 ~]$
2.4 Check the node applications
[grid@node1 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
eONS is enabled
eONS daemon is running on node: node1
eONS daemon is running on node: node2
[grid@node1 ~]$
2.5 Check SCAN
Check the SCAN IP configuration:
[grid@node1 ~]$ srvctl config scan
SCAN name: scan-cluster.com, Network: 1/192.168.0.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster/192.168.0.24
SCAN VIP name: scan2, IP: /scan-cluster/192.168.0.25
SCAN VIP name: scan3, IP: /scan-cluster/192.168.0.26
[grid@node1 ~]$
Check where the SCAN IPs are actually running and their status:
[grid@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
[grid@node1 ~]$
Check the SCAN listener configuration:
[grid@node1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
[grid@node1 ~]$
Check the SCAN listener status:
[grid@node1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
[grid@node1 ~]$
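A healthy 11g R2 cluster normally runs all three SCAN VIPs. A quick sketch of an automated sanity check on saved `srvctl status scan` output (the file name and the hard-coded expected count of 3 are assumptions for this illustration):

```shell
#!/bin/sh
# Sample output copied from the `srvctl status scan` run above.
cat <<'EOF' > /tmp/scan_status.txt
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
EOF
# Count the "is running" lines and flag anything short of the expected 3.
running=$(grep -c 'is running' /tmp/scan_status.txt)
if [ "$running" -eq 3 ]; then
    echo "SCAN OK: $running VIPs running"
else
    echo "SCAN WARNING: only $running of 3 VIPs running"
fi
```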
2.6 Check the VIPs and listeners
Check the VIP configuration:
[grid@node1 ~]$ srvctl config vip -n node1
VIP exists.:node1
VIP exists.: /node1-vip/192.168.0.21/255.255.255.0/eth0
[grid@node1 ~]$ srvctl config vip -n node2
VIP exists.:node2
VIP exists.: /node2-vip/192.168.0.31/255.255.255.0/eth0
[grid@node1 ~]$
Check the VIP status:
[grid@node1 ~]$ srvctl status nodeapps
or
[grid@node1 ~]$ srvctl status vip -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
[grid@node1 ~]$ srvctl status vip -n node2
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
[grid@node1 ~]$
Check the local listener configuration:
[grid@node1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
Check the local listener status:
[grid@node1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node1,node2
[grid@node1 ~]$
2.7 Check ASM
Check the ASM status:
[grid@node1 ~]$ srvctl status asm -a
ASM is running on node1,node2
ASM is enabled.
Check the ASM configuration:
[grid@node1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
[grid@node1 ~]$
Check the disk groups:
[grid@node1 ~]$ srvctl status diskgroup -g DATA
Disk Group DATA is running on node1,node2
[grid@node1 ~]$
View the ASM disks:
[root@node1 bin]# oracleasm listdisks
VOL1
VOL2
[root@node1 bin]#
View the mapping between physical disks and ASM disks:
[root@node1 bin]# oracleasm querydisk -v -p VOL1
Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm"
[root@node1 bin]#
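When recording the disk mapping in a report, the block device behind each volume can be pulled out of the `oracleasm querydisk -v -p` output mechanically. A small sketch against a saved copy of the output above (the temp-file name is an assumption; in practice loop over `oracleasm listdisks` and pipe each `querydisk` result into the `awk`):

```shell
#!/bin/sh
# Sample output copied from the `oracleasm querydisk -v -p VOL1` run above.
cat <<'EOF' > /tmp/vol1.txt
Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm"
EOF
# The device path is the field before the colon on the LABEL line.
awk -F: '/LABEL=/ {print $1}' /tmp/vol1.txt
```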
2.8 Check clock synchronization between the cluster nodes
Check time synchronization on node node1:
[grid@node1 ~]$ cluvfy comp clocksync -verbose
.......
Verification of Clock Synchronization across the cluster nodes was successful.
[grid@node1 ~]$
Check time synchronization on node node2:
[grid@node2 ~]$ cluvfy comp clocksync -verbose
..............
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  node2         -89900.0                  failed
Result:
PRVF-9661 : Time offset is NOT within the specified limits on the following nodes:
"[node2]"
PRVF-9652 : Cluster Time Synchronization Services check failed
Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes.
[grid@node2 ~]$
Note: the server clock on node2 has a problem; the -89900.0 ms offset means it is roughly 90 seconds off the cluster reference time, far beyond the 1000 ms limit.
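The pass/fail logic of the PRVF-9661 check above is just a magnitude comparison against the 1000 ms limit, which can be reproduced in shell (the variable names are illustrative; the offset and limit values are taken from the cluvfy report):

```shell
#!/bin/sh
offset=-89900   # msecs, node2's offset from the cluvfy report above
limit=1000      # msecs, the Reference Time Offset Limit
abs=${offset#-}                 # strip the sign to get the magnitude
if [ "$abs" -le "$limit" ]; then
    echo "offset ${offset} ms: within limit"
else
    echo "offset ${offset} ms: exceeds ${limit} ms limit"
fi
```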
At this point, the routine inspection of Grid is essentially complete.
Reposted from polestar; please contact me if this infringes any rights.
From "ITPUB Blog", link: http://blog.itpub.net/30327022/viewspace-2132713/. Please credit the source when reposting; otherwise legal liability may be pursued.