ORACLE 11G RAC -- Clusterware Tool Set (1)

Posted by oracle_db on 2012-08-15
Purpose: become familiar with the Clusterware tool set.

Node-layer tools

1. olsnodes: shows which nodes the cluster consists of.
The -n option prints each node's node number:
[root@rac1 bin]# pwd
/u01/app/11.2.0/11ggrid/bin
[root@rac1 bin]# ./olsnodes -n
rac1    1
rac2    2
[root@rac1 bin]#
The -i option prints each node's VIP:
[root@rac1 bin]# ./olsnodes -i
rac1    rac1vip
rac2    rac2vip
[root@rac1 bin]#
The -p option prints the private interconnect address (combined here with -n and -l for the local node):
[root@rac1 bin]# ./olsnodes -n -l -p
rac1    1       192.168.137.101
The -g option turns on logging:
[root@rac1 bin]# ./olsnodes -g
rac1
rac2
The -v option runs in verbose (debug) mode:
[root@rac1 bin]# ./olsnodes -v
lang init : Initializing LXL global
main: Initializing CLSS context
memberlist: No of cluster members configured = 256
memberlist: Allocated mem for lease node vector.
memberlist: Leased NodeList entries used = 2.
memberlist: Getting information for nodenum = 1
memberlist: node_name = rac1
memberlist: ctx->lsdata->node_num = 1
print data: Printing the node data
rac1
memberlist: Getting information for nodenum = 2
memberlist: node_name = rac2
memberlist: ctx->lsdata->node_num = 2
print data: Printing the node data
rac2
main: olsnodes executed successfully
term: Terminating LSF
[root@rac1 bin]#
An invalid option combination prints the full usage message, which documents the remaining options:
[root@rac1 bin]# ./olsnodes -n -p
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
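Because olsnodes emits plain tabular output, it is easy to post-process in scripts. A minimal sketch that reports any node whose status column (from `olsnodes -n -s`) is not Active; the command output is hard-coded as sample data here so the snippet runs outside a cluster, and on a real node you would pipe `./olsnodes -n -s` into the awk filter instead:

```shell
# Sample output of `olsnodes -n -s` (name, number, status); on a live
# node replace this variable with the real command's output.
sample="rac1 1 Active
rac2 2 Inactive"

# Print the name of every node whose status column is not "Active".
printf '%s\n' "$sample" | awk '$3 != "Active" {print $1}'
```

On the healthy two-node cluster shown above this filter would print nothing, since both nodes report Active.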
2. Network-layer tool: oifcfg. The network layer is made up of each node's network components. First, look at the current network configuration:
[root@rac1 bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#public ip:
192.168.44.101 rac1
192.168.44.102 rac2

#private ip:
192.168.137.101 rac1priv
192.168.137.102 rac2priv

#vip
192.168.44.201 rac1vip
192.168.44.202 rac2vip

#scan
192.168.44.200 racscan

#opfiler

192.168.44.251 openfiler
192.168.137.251 openfilerpriv

oifcfg is used to define and modify the network interface attributes that the Oracle cluster needs. Usage:
[root@rac1 bin]# pwd
/u01/app/11.2.0/11ggrid/bin
[root@rac1 bin]# ./oifcfg -help

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [{-node <nodename> | -global} [<if_name>[/<subnet>]]]
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public }

[root@rac1 bin]#
Examples:
iflist lists the network interfaces and their subnets:
[root@rac1 bin]# ./oifcfg iflist
eth0  192.168.44.0
eth1  192.168.137.0
[root@rac1 bin]#
getif shows each configured interface's attributes:
[root@rac1 bin]# ./oifcfg getif
eth0  192.168.44.0  global  public
eth1  192.168.137.0  global  cluster_interconnect
[root@rac1 bin]#
Note: "global" is the interface configuration mode. An interface can be configured either global or node-specific. global means the configuration is identical on every node (a symmetric configuration); node-specific means the nodes are configured differently (asymmetric).
Check for node-specific (asymmetric) configuration; the empty output here means none exists:
[root@rac1 bin]# ./oifcfg getif -node rac1
[root@rac1 bin]# ./oifcfg getif -node rac2
[root@rac1 bin]#
Show the interface configuration for a given type:
[root@rac1 bin]# ./oifcfg getif -type public
eth0  192.168.44.0  global  public
[root@rac1 bin]# ./oifcfg getif -type private
eth1  192.168.137.0  global  cluster_interconnect
[root@rac1 bin]# ./oifcfg getif -type cluster_interconnect
eth1  192.168.137.0  global  cluster_interconnect
[root@rac1 bin]#
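Note that oifcfg records subnet addresses (192.168.44.0, 192.168.137.0), not host addresses. The subnet is simply the bitwise AND of a host IP with its netmask; a small POSIX-shell sketch, assuming the /24 mask (255.255.255.0) that matches the addressing used above:

```shell
# Derive the subnet address oifcfg expects by ANDing each octet of a
# host IP with the corresponding octet of the netmask.
ip="192.168.137.101"
mask="255.255.255.0"

# Split the dotted quads into octets using IFS.
old_IFS=$IFS; IFS=.
set -- $ip;   i1=$1; i2=$2; i3=$3; i4=$4
set -- $mask; m1=$1; m2=$2; m3=$3; m4=$4
IFS=$old_IFS

echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
```

For 192.168.137.101 with a /24 mask this yields 192.168.137.0, matching the cluster_interconnect entry shown by getif.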
setif adds a new interface configuration and delif deletes one. Both modify the cluster's network configuration, so use them with caution if you are not sure what you are doing.
3. Cluster layer: maintains the shared devices within the cluster and presents a complete cluster-state view to the application layer above. The main commands are crsctl, ocrcheck, ocrdump, and ocrconfig.

crsctl is used to check the CRS stack and the status of each CRS daemon, to manage the voting disk, and to trace CRS modules.
Usage:
[root@rac1 bin]# ./crsctl
Usage: crsctl <command> <object> [<options>]
    command: enable|disable|config|start|stop|relocate|replace|stat|add|delete|modify|getperm|setperm|check|set|get|unset|debug|lsmodules|query|pin|unpin
For complete usage, use:
    crsctl [-h | --help]
For detailed help on each command and object and its options use:
    crsctl <command> <object> -h  e.g. crsctl relocate resource -h
[root@rac1 bin]#
Check the overall status of CRS:
[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 bin]#
Check the status of each daemon:
[root@rac1 bin]# ./crsctl check cssd
CRS-272: This command remains for backward compatibility only
Cluster Synchronization Services is online
[root@rac1 bin]# ./crsctl check crsd
CRS-272: This command remains for backward compatibility only
Cluster Ready Services is online
[root@rac1 bin]# ./crsctl check evmd
CRS-272: This command remains for backward compatibility only
Event Manager is online
[root@rac1 bin]#
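The three checks above are easy to wrap in a loop. In the sketch below a stub function stands in for ./crsctl so the loop can run anywhere; on a cluster node you would delete the stub and invoke the real binary:

```shell
# Stub standing in for ./crsctl; remove this function on a real node
# so the actual binary is invoked instead.
crsctl() { echo "Cluster Synchronization Services is online"; }

# Report OK / NOT OK for each of the three daemons.
for svc in cssd crsd evmd; do
  if crsctl check "$svc" | grep -q "is online"; then
    echo "$svc OK"
  else
    echo "$svc NOT OK"
  fi
done
```

With the stub in place this prints "OK" for all three services; against the real binary the grep matches the "... is online" lines shown above.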
For maintenance you can disable and re-enable automatic startup of the CRS stack (by default it starts with the operating system):
[root@rac1 bin]# ./crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@rac1 bin]# ./crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@rac1 bin]#
Stop and start the CRS stack (the first start attempt here fails because the stack is already running):
[root@rac1 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@rac1 bin]# ./crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on 'rac1'
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2677: Stop of 'ora.eons' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 bin]#
[root@rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 bin]#
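A common maintenance pattern combines these commands: stop the stack, disable autostart so a reboot during patching does not bring it back, then re-enable and restart when done. Sketched below with an echo stub so the sequence is visible without a cluster; on a real node remove the stub and run the commands as root from the Grid Infrastructure bin directory:

```shell
# Echo stub so the sequence can be demonstrated anywhere; delete this
# function on a real node to run the actual binary.
crsctl() { echo "crsctl $*"; }

crsctl stop crs       # shut the stack down on this node
crsctl disable crs    # keep it down across reboots during maintenance
# ... OS patching or other maintenance work here ...
crsctl enable crs     # restore autostart
crsctl start crs      # bring the stack back up now
```

The stub simply echoes each invocation, which makes the intended ordering explicit: disable before the maintenance window, enable and start after it.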
Show the voting disk location:
[oracle@rac1 bin]$ ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   29eb9e0e7d664f58bf9b733f6c435fa8 (/dev/oracleasm/disks/CRS001) [CRS]
Located 1 voting disk(s).
[oracle@rac1 bin]$
View and modify CRS parameters; for example, to view the hostname:
[oracle@rac1 bin]$ ./crsctl get hostname
rac1
[oracle@rac1 bin]$
Trace CRS modules for diagnostic purposes.
CRS consists of three services: CRS, CSS, and EVM. Each service is made up of a set of modules, and crsctl can enable tracing for each module individually.
List the modules belonging to each of the three services:
[root@rac1 bin]# pwd
/u01/app/11.2.0/11ggrid/bin
[root@rac1 bin]# ./crsctl lsmodules css
The following are the Cluster Synchronization Services modules::
    CSSD
    COMMCRS
    COMMNS
    CLSF
    SKGFD
[root@rac1 bin]# ./crsctl lsmodules crs
List CRSD Debug Module: AGENT
List CRSD Debug Module: AGFW
List CRSD Debug Module: CLSFRAME
List CRSD Debug Module: CLSVER
List CRSD Debug Module: CLUCLS
List CRSD Debug Module: COMMCRS
List CRSD Debug Module: COMMNS
List CRSD Debug Module: CRSAPP
List CRSD Debug Module: CRSCCL
List CRSD Debug Module: CRSCEVT
List CRSD Debug Module: CRSCOMM
List CRSD Debug Module: CRSD
List CRSD Debug Module: CRSEVT
List CRSD Debug Module: CRSMAIN
List CRSD Debug Module: CRSOCR
List CRSD Debug Module: CRSPE
List CRSD Debug Module: CRSPLACE
List CRSD Debug Module: CRSRES
List CRSD Debug Module: CRSRPT
List CRSD Debug Module: CRSRTI
List CRSD Debug Module: CRSSE
List CRSD Debug Module: CRSSEC
List CRSD Debug Module: CRSTIMER
List CRSD Debug Module: CRSUI
List CRSD Debug Module: CSSCLNT
List CRSD Debug Module: OCRAPI
List CRSD Debug Module: OCRASM
List CRSD Debug Module: OCRCAC
List CRSD Debug Module: OCRCLI
List CRSD Debug Module: OCRMAS
List CRSD Debug Module: OCRMSG
List CRSD Debug Module: OCROSD
List CRSD Debug Module: OCRRAW
List CRSD Debug Module: OCRSRV
List CRSD Debug Module: OCRUTL
List CRSD Debug Module: SuiteTes
List CRSD Debug Module: UiServer
[root@rac1 bin]# ./crsctl lsmodules evm
List EVMD Debug Module: CLSVER
List EVMD Debug Module: CLUCLS
List EVMD Debug Module: COMMCRS
List EVMD Debug Module: COMMNS
List EVMD Debug Module: CRSOCR
List EVMD Debug Module: CSSCLNT
List EVMD Debug Module: EVMAGENT
List EVMD Debug Module: EVMAPP
List EVMD Debug Module: EVMCOMM
List EVMD Debug Module: EVMD
List EVMD Debug Module: EVMDMAIN
List EVMD Debug Module: EVMEVT
List EVMD Debug Module: OCRAPI
List EVMD Debug Module: OCRCLI
List EVMD Debug Module: OCRMSG
[root@rac1 bin]#
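Trace levels are set per module, so a loop is convenient when several modules are involved. The module names below come from the lsmodules output above; the actual crsctl invocation is left commented out so the loop itself runs anywhere, and on a real node it would be run as root from the Grid Infrastructure bin directory:

```shell
# Build a "MODULE:level" spec for several CSS modules and show the
# command that would be issued; uncomment the crsctl line on a real node.
for module in CSSD COMMCRS COMMNS; do
  spec="${module}:1"
  echo "./crsctl set log css \"$spec\""
  # ./crsctl set log css "$spec"
done
```

This prints one `crsctl set log css` command per module, mirroring the single-module example that follows.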
Trace a specific module, here CSSD (the trailing number is the trace level):
[root@rac1 bin]# ./crsctl debug log css "CSSD:1"
CRS-4151: DEPRECATED: use crsctl set log {css|crs|evm}
Configuration parameter trace is now set to 1.
Set CRSD Debug Module: CSSD  Level: 1
[root@rac1 bin]#
The increased trace level then shows up in ocssd.log:
[root@rac1 bin]# tail -f /u01/app/11.2.0/11ggrid/log/rac1/cssd/ocssd.log
2012-08-15 23:43:15.970: [    CSSD][2898250640]clssnmSendingThread: sent 5 status msgs to all nodes
2012-08-15 23:43:16.727: [    CSSD][2919230352]clssgmTagize: version(1), type(13), tagizer(0x80cf3ac)
2012-08-15 23:43:16.727: [    CSSD][2919230352]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 1
2012-08-15 23:43:18.730: [    CSSD][2919230352]clssgmTagize: version(1), type(13), tagizer(0x80cf3ac)
2012-08-15 23:43:18.731: [    CSSD][2919230352]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 1
2012-08-15 23:43:20.751: [    CSSD][2919230352]clssgmTagize: version(1), type(13), tagizer(0x80cf3ac)
2012-08-15 23:43:20.751: [    CSSD][2919230352]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 1
2012-08-15 23:43:20.978: [    CSSD][2898250640]clssnmSendingThread: sending status msg to all nodes
2012-08-15 23:43:20.978: [    CSSD][2898250640]clssnmSendingThread: sent 5 status msgs to all nodes
2012-08-15 23:43:22.024: [    CSSD][3020884880]clssscSetDebugLevel: The logging level is set to 1

From the ITPUB blog: http://blog.itpub.net/15720542/viewspace-741056/. Please credit the source when reproducing; otherwise legal liability may be pursued.