Step-by-Step Oracle 11gR2 RAC + DG: Grid Installation (Part 4)
-
Building Oracle 11gR2 RAC + DG step by step on RHEL 6.5 + VMware Workstation 10: Grid installation (Part 4)
This step is important: it is where ASM is installed. If the shared disks from the previous part were not prepared correctly, the root scripts may fail, but such failures can be resolved.
-
Grid installation process
Download the software, upload it to the server, and unzip it:
[root@rac1 share]# ll
total 3398288
-rwxrwxrwx 1 root root 1358454646 Dec 14 2011 p10404530_112030_Linux-x86-64_1of7.zip
-rwxrwxrwx 1 root root 1142195302 May 25 2012 p10404530_112030_Linux-x86-64_2of7.zip
-rwxrwxrwx 1 root root 979195792 May 26 2012 p10404530_112030_Linux-x86-64_3of7.zip
[root@rac1 share]# unzip p10404530_112030_Linux-x86-64_1of7.zip -d /tmp/ && unzip p10404530_112030_Linux-x86-64_2of7.zip -d /tmp/
[root@rac1 share]# unzip p10404530_112030_Linux-x86-64_3of7.zip -d /tmp/
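After unzipping, the grid installation files should be available under /tmp (the /tmp/grid path is inferred from the directory listing later in this article). A quick, optional sanity check so the grid user can read everything before launching the installer; the chown is an assumption of this write-up, not output from the original run:
[root@rac1 share]# ls /tmp/grid
[root@rac1 share]# chown -R grid:oinstall /tmp/grid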
-
Install the cvuqdisk patch package
Install and verify the cvuqdisk package; it must be installed on both nodes:
Install the operating system package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and when the utility runs (manually, or automatically at the end of the Oracle Grid Infrastructure installation) you will receive the error "Package cvuqdisk not installed". Use the cvuqdisk RPM that matches your hardware architecture (for example, x86_64 or i386).
The cvuqdisk RPM is included in the rpm directory on the Oracle Grid Infrastructure installation media.
Set the environment variable CVUQDISK_GRP to the group that owns cvuqdisk (oinstall in this article):
export CVUQDISK_GRP=oinstall
Use the CVU to verify that the Oracle Clusterware requirements are met
Remember to run it as the grid user on the node where the Oracle installation will be performed (racnode1). In addition, SSH connectivity via user equivalence must be configured for the grid user.
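A quick way to confirm that user equivalence works is to run a remote command as grid from each node and check that no password prompt appears (node names as used in this article):
[grid@rac1 ~]$ ssh rac2 date
[grid@rac2 ~]$ ssh rac1 date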
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
[root@rac1 Packages]# rpm -q cvuqdisk
cvuqdisk-1.0.7-1.x86_64
Copy it to the second node:
[root@rac1 rpm]# scp cvuqdisk-1.0.9-1.rpm root@192.168.59.136:/tmp
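The package must then be installed on rac2 as well; a minimal sketch, assuming the rpm landed in /tmp on rac2 as in the scp above:
[root@rac2 ~]# export CVUQDISK_GRP=oinstall
[root@rac2 ~]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
[root@rac2 ~]# rpm -q cvuqdisk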
-
Cluster hardware check: pre-installation verification
This step is rather slow, so be patient.
It only needs to be run on one of the nodes.
Before installing Grid, it is recommended to use the CVU (Cluster Verification Utility) to check the pre-installation environment for CRS.
① Use the CVU to check the CRS pre-installation environment
Run one of the following commands from the grid software directory:
Use the CVU to verify the hardware and operating system setup:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
cluvfy stage -pre crsinst -n node1,node2,node3 -fixup -verbose
Items that do not pass are reported as failed; fix them as appropriate:
[grid@rac1 grid]$ ll
total 15
drwxrwxrwx 1 root root 4096 Aug 16 2009 doc
drwxrwxrwx 1 root root 4096 Aug 15 2009 install
drwxrwxrwx 1 root root 0 Aug 15 2009 response
drwxrwxrwx 1 root root 0 Aug 15 2009 rpm
-rwxrwxrwx 1 root root 3795 Jan 28 2009 runcluvfy.sh
-rwxrwxrwx 1 root root 3227 Aug 15 2009 runInstaller
drwxrwxrwx 1 root root 0 Aug 15 2009 sshsetup
drwxrwxrwx 1 root root 8192 Aug 15 2009 stage
-rwxrwxrwx 1 root root 4228 Aug 17 2009 welcome.html
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
------------------------------------ ------------------------
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed
rac1 passed
Verification of the hosts config file successful
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.128.152 192.168.128.0 0.0.0.0 192.168.128.2 00:0C:29:EC:A0:64 1500
eth1 10.10.10.152 10.0.0.0 0.0.0.0 192.168.128.2 00:0C:29:EC:A0:6E 1500
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.128.151 192.168.128.0 0.0.0.0 192.168.128.2 00:0C:29:2F:A8:C3 1500
eth1 10.10.10.151 10.0.0.0 0.0.0.0 192.168.128.2 00:0C:29:2F:A8:CD 1500
Check: Node connectivity of subnet "192.168.128.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2:eth0 rac1:eth0 yes
Result: Node connectivity passed for subnet "192.168.128.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "192.168.128.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:192.168.128.151 rac2:192.168.128.152 passed
Result: TCP connectivity check passed for subnet "192.168.128.0"
Check: Node connectivity of subnet "10.0.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2:eth1 rac1:eth1 yes
Result: Node connectivity passed for subnet "10.0.0.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "10.0.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:10.10.10.151 rac2:10.10.10.152 passed
Result: TCP connectivity check passed for subnet "10.0.0.0"
Interfaces found on subnet "192.168.128.0" that are likely candidates for VIP are:
rac2 eth0:192.168.128.152
rac1 eth0:192.168.128.151
Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:10.10.10.152
rac1 eth1:10.10.10.151
Result: Node connectivity check passed
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 999.85MB (1023844.0KB) 1.5GB (1572864.0KB) failed
rac1 999.85MB (1023844.0KB) 1.5GB (1572864.0KB) failed
Result: Total memory check failed
Check: Available memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 878.98MB (900076.0KB) 50MB (51200.0KB) passed
rac1 717.45MB (734672.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
Result: Swap space check failed
Check: Free disk space for "rac2:/tmp"
Path Node Name Mount point Available Required Comment
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac2 / 14.5GB 1GB passed
Result: Free disk space check passed for "rac2:/tmp"
Check: Free disk space for "rac1:/tmp"
Path Node Name Mount point Available Required Comment
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac1 / 14.06GB 1GB passed
Result: Free disk space check passed for "rac1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Comment
---------------- ------------ ------------ ------------ ----------------
rac2 yes yes yes passed
rac1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Comment
------------ ------------------------ ------------------------ ----------
rac2 5 3,5 passed
rac1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
rac2 hard 65536 65536 passed
rac1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
rac2 soft 1024 1024 passed
rac1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
rac2 hard 16384 16384 passed
rac1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
rac2 soft 2047 2047 passed
rac1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 x86_64 x86_64 passed
rac1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 2.6.18-348.el5 2.6.18 passed
rac1 2.6.18-348.el5 2.6.18 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 250 250 passed
rac1 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 32000 32000 passed
rac1 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 100 100 passed
rac1 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 128 128 passed
rac1 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 68719476736 536870912 passed
rac1 68719476736 536870912 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 4096 4096 passed
rac1 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 2147483648 2097152 passed
rac1 2147483648 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 6815744 6815744 passed
rac1 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 between 9000 & 65500 between 9000 & 65500 passed
rac1 between 9000 & 65500 between 9000 & 65500 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 262144 262144 passed
rac1 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 4194304 4194304 passed
rac1 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 262144 262144 passed
rac1 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 1048586 1048576 passed
rac1 1048586 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
rac2 1048576 1048576 passed
rac1 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make-3.81"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 make-3.81-3.el5 make-3.81 passed
rac1 make-3.81-3.el5 make-3.81 passed
Result: Package existence check passed for "make-3.81"
Check: Package existence for "binutils-2.17.50.0.6"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 binutils-2.17.50.0.6-20.el5_8.3 binutils-2.17.50.0.6 passed
rac1 binutils-2.17.50.0.6-20.el5_8.3 binutils-2.17.50.0.6 passed
Result: Package existence check passed for "binutils-2.17.50.0.6"
Check: Package existence for "gcc-4.1"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 gcc-4.1.2-54.el5 gcc-4.1 passed
rac1 gcc-4.1.2-54.el5 gcc-4.1 passed
Result: Package existence check passed for "gcc-4.1"
Check: Package existence for "libaio-0.3.106 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libaio-0.3.106-5 (i386) libaio-0.3.106 (i386) passed
rac1 libaio-0.3.106-5 (i386) libaio-0.3.106 (i386) passed
Result: Package existence check passed for "libaio-0.3.106 (i386)"
Check: Package existence for "libaio-0.3.106 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64) passed
rac1 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64) passed
Result: Package existence check passed for "libaio-0.3.106 (x86_64)"
Check: Package existence for "glibc-2.5-24 (i686)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 glibc-2.5-107 (i686) glibc-2.5-24 (i686) passed
rac1 glibc-2.5-107 (i686) glibc-2.5-24 (i686) passed
Result: Package existence check passed for "glibc-2.5-24 (i686)"
Check: Package existence for "glibc-2.5-24 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 glibc-2.5-107 (x86_64) glibc-2.5-24 (x86_64) passed
rac1 glibc-2.5-107 (x86_64) glibc-2.5-24 (x86_64) passed
Result: Package existence check passed for "glibc-2.5-24 (x86_64)"
Check: Package existence for "compat-libstdc++-33-3.2.3 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 missing compat-libstdc++-33-3.2.3 (i386) failed
rac1 missing compat-libstdc++-33-3.2.3 (i386) failed
Result: Package existence check failed for "compat-libstdc++-33-3.2.3 (i386)"
Check: Package existence for "compat-libstdc++-33-3.2.3 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64) passed
rac1 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64) passed
Result: Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Check: Package existence for "elfutils-libelf-0.125 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64) passed
rac1 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64) passed
Result: Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Check: Package existence for "elfutils-libelf-devel-0.125"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed
rac1 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed
WARNING:
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac2: elfutils-libelf-devel-0.137-3.el5 (i386),elfutils-libelf-devel-0.137-3.el5 (x86_64)
WARNING:
PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac1: elfutils-libelf-devel-0.137-3.el5 (i386),elfutils-libelf-devel-0.137-3.el5 (x86_64)
Result: Package existence check passed for "elfutils-libelf-devel-0.125"
Check: Package existence for "glibc-common-2.5"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 glibc-common-2.5-107 glibc-common-2.5 passed
rac1 glibc-common-2.5-107 glibc-common-2.5 passed
Result: Package existence check passed for "glibc-common-2.5"
Check: Package existence for "glibc-devel-2.5 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 missing glibc-devel-2.5 (i386) failed
rac1 missing glibc-devel-2.5 (i386) failed
Result: Package existence check failed for "glibc-devel-2.5 (i386)"
Check: Package existence for "glibc-devel-2.5 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 glibc-devel-2.5-107 (x86_64) glibc-devel-2.5 (x86_64) passed
rac1 glibc-devel-2.5-107 (x86_64) glibc-devel-2.5 (x86_64) passed
Result: Package existence check passed for "glibc-devel-2.5 (x86_64)"
Check: Package existence for "glibc-headers-2.5"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 glibc-headers-2.5-107 glibc-headers-2.5 passed
rac1 glibc-headers-2.5-107 glibc-headers-2.5 passed
Result: Package existence check passed for "glibc-headers-2.5"
Check: Package existence for "gcc-c++-4.1.2"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 gcc-c++-4.1.2-54.el5 gcc-c++-4.1.2 passed
rac1 gcc-c++-4.1.2-54.el5 gcc-c++-4.1.2 passed
Result: Package existence check passed for "gcc-c++-4.1.2"
Check: Package existence for "libaio-devel-0.3.106 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 missing libaio-devel-0.3.106 (i386) failed
rac1 missing libaio-devel-0.3.106 (i386) failed
Result: Package existence check failed for "libaio-devel-0.3.106 (i386)"
Check: Package existence for "libaio-devel-0.3.106 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64) passed
rac1 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64) passed
Result: Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Check: Package existence for "libgcc-4.1.2 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libgcc-4.1.2-54.el5 (i386) libgcc-4.1.2 (i386) passed
rac1 libgcc-4.1.2-54.el5 (i386) libgcc-4.1.2 (i386) passed
Result: Package existence check passed for "libgcc-4.1.2 (i386)"
Check: Package existence for "libgcc-4.1.2 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libgcc-4.1.2-54.el5 (x86_64) libgcc-4.1.2 (x86_64) passed
rac1 libgcc-4.1.2-54.el5 (x86_64) libgcc-4.1.2 (x86_64) passed
Result: Package existence check passed for "libgcc-4.1.2 (x86_64)"
Check: Package existence for "libstdc++-4.1.2 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-4.1.2-54.el5 (i386) libstdc++-4.1.2 (i386) passed
rac1 libstdc++-4.1.2-54.el5 (i386) libstdc++-4.1.2 (i386) passed
Result: Package existence check passed for "libstdc++-4.1.2 (i386)"
Check: Package existence for "libstdc++-4.1.2 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-4.1.2-54.el5 (x86_64) libstdc++-4.1.2 (x86_64) passed
rac1 libstdc++-4.1.2-54.el5 (x86_64) libstdc++-4.1.2 (x86_64) passed
Result: Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Check: Package existence for "libstdc++-devel-4.1.2 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-devel-4.1.2-54.el5 (x86_64) libstdc++-devel-4.1.2 (x86_64) passed
rac1 libstdc++-devel-4.1.2-54.el5 (x86_64) libstdc++-devel-4.1.2 (x86_64) passed
Result: Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Check: Package existence for "sysstat-7.0.2"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 sysstat-7.0.2-12.el5 sysstat-7.0.2 passed
rac1 sysstat-7.0.2-12.el5 sysstat-7.0.2 passed
Result: Package existence check passed for "sysstat-7.0.2"
Check: Package existence for "unixODBC-2.2.11 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 unixODBC-2.2.11-10.el5 (i386) unixODBC-2.2.11 (i386) passed
rac1 unixODBC-2.2.11-10.el5 (i386) unixODBC-2.2.11 (i386) passed
Result: Package existence check passed for "unixODBC-2.2.11 (i386)"
Check: Package existence for "unixODBC-2.2.11 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 unixODBC-2.2.11-10.el5 (x86_64) unixODBC-2.2.11 (x86_64) passed
rac1 unixODBC-2.2.11-10.el5 (x86_64) unixODBC-2.2.11 (x86_64) passed
Result: Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Check: Package existence for "unixODBC-devel-2.2.11 (i386)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 unixODBC-devel-2.2.11-10.el5 (i386) unixODBC-devel-2.2.11 (i386) passed
rac1 unixODBC-devel-2.2.11-10.el5 (i386) unixODBC-devel-2.2.11 (i386) passed
Result: Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Check: Package existence for "unixODBC-devel-2.2.11 (x86_64)"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 unixODBC-devel-2.2.11-10.el5 (x86_64) unixODBC-devel-2.2.11 (x86_64) passed
rac1 unixODBC-devel-2.2.11-10.el5 (x86_64) unixODBC-devel-2.2.11 (x86_64) passed
Result: Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Check: Package existence for "ksh-20060214"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 ksh-20100621-12.el5 ksh-20060214 passed
rac1 ksh-20100621-12.el5 ksh-20060214 passed
Result: Package existence check passed for "ksh-20060214"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 does not exist passed
rac1 does not exist passed
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 0022 0022 passed
rac1 0022 0022 passed
Result: Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@rac1 grid]$
After resolving all of the failed items, re-run the check until every issue is cleared.
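In this run the failed items were the missing 32-bit packages (compat-libstdc++-33, glibc-devel, libaio-devel) plus the memory and swap warnings. One way the packages could be installed on each node, assuming a configured yum repository (on EL5 the 32-bit architecture suffix is .i386):
[root@rac1 ~]# yum install -y compat-libstdc++-33.i386 glibc-devel.i386 libaio-devel.i386
[root@rac2 ~]# yum install -y compat-libstdc++-33.i386 glibc-devel.i386 libaio-devel.i386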
-
Start the installation
Installation log: /u01/app/oraInventory/logs/installActions2014-06-05_06-12-27AM.log
First start the Xmanager - Passive software, then set up the Xshell session as follows:
[grid@rhel_linux_asm grid]$ clear
[grid@rhel_linux_asm grid]$ export DISPLAY=192.168.1.100:0.0   --- this IP address is the address of the local workstation (ipconfig)
[grid@rhel_linux_asm grid]$ xhost +
access control disabled, clients can connect from any host
[grid@rhel_linux_asm grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@rhel_linux_asm grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 31642 MB Passed
Checking swap space: must be greater than 150 MB. Actual 383 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-04-29_10-53-18PM. Please wait ...[grid@rhel_linux_asm grid]$
Screenshots follow:
Install the Grid Infrastructure software
The GUI steps are shown below:
The SCAN Name must be the SCAN name configured earlier in /etc/hosts; otherwise an error is reported:
The correct configuration is:
-
Create the ASM disk group
If a disk shows the following state (MEMBER), it has already been used and must be reinitialized, i.e. simply repartition it:
Run fdisk /dev/sdb on the disk, then resynchronize the partition table on both nodes with partprobe.
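If repartitioning alone does not clear the old ASM header, a common alternative is to zero out the first few megabytes of the device and then rescan; the device name below is only an example for this setup, and wiping it destroys whatever is on the disk:
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdb bs=1M count=10
[root@rac1 ~]# partprobe
[root@rac2 ~]# partprobe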
All of the checks here should eventually pass; at least 1.5 GB of memory per node is recommended.
This step is fairly time-consuming:
You can also follow the installation progress from the size of the Grid home on the rac2 node; after the copy finishes it is about 2.9 GB:
Path: /u01/app/11.2.0
-
Run the root scripts
At about 76% the installer prompts you to run the root scripts, as follows:
Run the scripts on the local node first; only after they complete successfully, run them on the other node.
If a script fails at this step, deconfigure and run it again:
/u01/app/grid/11.2.0/crs/install/roothas.pl -deconfig -force -verbose
/u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force
/u01/app/grid/11.2.0/root.sh
Log directory: /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/
Deconfiguration log file: hadelete.log
root.sh log file: rootcrs_rac2.log
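While root.sh is running you can follow its progress from a second terminal by tailing the corresponding log in that directory; the per-node file name below (rootcrs_rac1.log) is an assumption based on the rac2 name above:
[root@rac1 ~]# tail -f /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_rac1.log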
On node rac1:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]# /u01/app/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2014-06-04 12:03:15: Parsing the host name
2014-06-04 12:03:15: Checking for super user privileges
2014-06-04 12:03:15: User has super user privileges
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 271f9e0c141e4f06bf2cf3938f95d2b8.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 271f9e0c141e4f06bf2cf3938f95d2b8 (/dev/ocrb) [CRS]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
rac1 2014/06/04 12:11:19 /u01/app/grid/11.2.0/cdata/rac1/backup_20140604_121119.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1795 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 ~]#
On node rac2:
[root@rac2 soft]# /oracle/app/grid/product/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwriteit? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-08-02 14:32:28: Parsing the host name
2010-08-02 14:32:28: Checking for super user privileges
2010-08-02 14:32:28: User has super user privileges
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on
node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
rac2 2010/08/02 14:37:51 /oracle/app/grid/product/11.2.0/cdata/rac2/backup_20100802_143751.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1202MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
-
Execution log from the 11.2.0.3 installation:
At this point the clusterware services have been started, and the ASM instances are up on both nodes. You can now query the resources with crs_stat -t; the result contains 20 rows:
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.OCR.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type ONLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
[grid@rac1 ~]$
-
Verification
Confirm that the Grid installation succeeded
CRS status
[grid@rac01 ~]$ crs_stat -t    (or: crsctl stat res -t)
Name Type Target State Host
------------------------------------------------------------
ora.CRSDG.dg ora....up.type ONLINE ONLINE rac01
ora....ER.lsnr ora....er.type ONLINE ONLINE rac01
ora....N1.lsnr ora....er.type ONLINE ONLINE rac01
ora.asm ora.asm.type ONLINE ONLINE rac01
ora.eons ora.eons.type ONLINE ONLINE rac01
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac01
ora....SM1.asm application ONLINE ONLINE rac01
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application OFFLINE OFFLINE
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip ora....t1.type ONLINE ONLINE rac01
ora....SM2.asm application ONLINE ONLINE rac02
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application OFFLINE OFFLINE
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip ora....t1.type ONLINE ONLINE rac02
ora.scan1.vip ora....ip.type ONLINE ONLINE rac01
The four services listed as OFFLINE above (ora.gsd, ora.oc4j, ora.rac01.gsd and ora.rac02.gsd) are optional in 11g and are offline by default; they can be ignored.
Voting disk status
[grid@rac01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 7b8903f49cc84fa8bf06d199bdf5dfe3 (ORCL:DISK01) [CRSDG]
OCR status
[grid@rac01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2264
Available space (kbytes) : 259856
ID : 1510360228
Device/File Name : +CRSDG
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
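That last line only means ocrcheck was run as the grid user; running the same command as root also performs the logical corruption check (the Grid home path below is assumed from the listener output later in this section):
[root@rac01 ~]# /u01/app/grid/11.2/bin/ocrcheck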
Test the GI installation
On node1:
[root@node1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:79:33:95
          inet addr:192.168.1.51  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe79:3395/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:977978 errors:0 dropped:1345 overruns:0 frame:0
          TX packets:2525875 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:106995897 (102.0 MiB)  TX bytes:3573509233 (3.3 GiB)
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:79:33:95
          inet addr:192.168.1.151  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth0:3    Link encap:Ethernet  HWaddr 00:0C:29:79:33:95
          inet addr:192.168.1.58  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth0:4    Link encap:Ethernet  HWaddr 00:0C:29:79:33:95
          inet addr:192.168.1.59  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:79:33:9F
          inet addr:172.168.1.51  Bcast:172.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe79:339f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:728960 errors:0 dropped:1345 overruns:0 frame:0
          TX packets:13833 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:54104908 (51.5 MiB)  TX bytes:7561084 (7.2 MiB)
eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:79:33:9F
          inet addr:169.254.201.146  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:13162 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7783412 (7.4 MiB)  TX bytes:7783412 (7.4 MiB)
[root@node1 ~]# ps -ef|egrep -i "asm|listener"
grid     24390     1  0 10:03 ?        00:00:00 asm_pmon_+ASM1
grid     24392     1  0 10:03 ?        00:00:00 asm_psp0_+ASM1
grid     24394     1  1 10:03 ?        00:00:18 asm_vktm_+ASM1
grid     24398     1  0 10:03 ?        00:00:00 asm_gen0_+ASM1
grid     24400     1  0 10:03 ?        00:00:00 asm_diag_+ASM1
grid     24402     1  0 10:03 ?        00:00:00 asm_ping_+ASM1
grid     24404     1  0 10:03 ?        00:00:02 asm_dia0_+ASM1
grid     24406     1  0 10:03 ?        00:00:02 asm_lmon_+ASM1
grid     24408     1  0 10:03 ?        00:00:01 asm_lmd0_+ASM1
grid     24410     1  0 10:03 ?        00:00:02 asm_lms0_+ASM1
grid     24414     1  0 10:03 ?        00:00:00 asm_lmhb_+ASM1
grid     24416     1  0 10:03 ?        00:00:00 asm_mman_+ASM1
grid     24418     1  0 10:03 ?        00:00:00 asm_dbw0_+ASM1
grid     24420     1  0 10:03 ?        00:00:00 asm_lgwr_+ASM1
grid     24422     1  0 10:03 ?        00:00:00 asm_ckpt_+ASM1
grid     24424     1  0 10:03 ?        00:00:00 asm_smon_+ASM1
grid     24426     1  0 10:03 ?        00:00:00 asm_rbal_+ASM1
grid     24428     1  0 10:03 ?        00:00:00 asm_gmon_+ASM1
grid     24430     1  0 10:03 ?        00:00:00 asm_mmon_+ASM1
grid     24432     1  0 10:03 ?        00:00:00 asm_mmnl_+ASM1
grid     24434     1  0 10:03 ?        00:00:00 asm_lck0_+ASM1
grid     24436     1  0 10:03 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     24466     1  0 10:03 ?        00:00:01 oracle+ASM1_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     24471     1  0 10:03 ?        00:00:00 asm_asmb_+ASM1
grid     24473     1  0 10:03 ?        00:00:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     24876     1  0 10:04 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     25269     1  0 10:05 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
grid     25283     1  0 10:05 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
grid     26105     1  0 10:15 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid     28183 28182  0 10:21 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root     28263  2146  0 10:26 pts/2    00:00:00 egrep -i asm|listener
On node2:
[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:5C:FC:76
          inet addr:192.168.1.52  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe5c:fc76/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3068626 errors:0 dropped:1348 overruns:0 frame:0
          TX packets:185731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3505670277 (3.2 GiB)  TX bytes:39520990 (37.6 MiB)
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:5C:FC:76
          inet addr:192.168.1.57  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:5C:FC:76
          inet addr:192.168.1.152  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:5C:FC:80
          inet addr:172.168.1.52  Bcast:172.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe5c:fc80/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:729233 errors:0 dropped:1348 overruns:0 frame:0
          TX packets:15630 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:53620798 (51.1 MiB)  TX bytes:8883597 (8.4 MiB)
eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:5C:FC:80
          inet addr:169.254.30.23  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6049 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6049 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2377782 (2.2 MiB)  TX bytes:2377782 (2.2 MiB)
[root@node2 ~]# ps -ef|egrep -i "asm|listener"
grid     21049     1  0 10:09 ?        00:00:00 asm_pmon_+ASM2
grid     21051     1  0 10:09 ?        00:00:00 asm_psp0_+ASM2
grid     21053     1  1 10:09 ?        00:00:14 asm_vktm_+ASM2
grid     21057     1  0 10:09 ?        00:00:00 asm_gen0_+ASM2
grid     21059     1  0 10:09 ?        00:00:00 asm_diag_+ASM2
grid     21061     1  0 10:09 ?        00:00:00 asm_ping_+ASM2
grid     21063     1  0 10:09 ?        00:00:01 asm_dia0_+ASM2
grid     21065     1  0 10:09 ?        00:00:01 asm_lmon_+ASM2
grid     21067     1  0 10:09 ?        00:00:00 asm_lmd0_+ASM2
grid     21069     1  0 10:09 ?        00:00:02 asm_lms0_+ASM2
grid     21073     1  0 10:09 ?        00:00:00 asm_lmhb_+ASM2
grid     21075     1  0 10:09 ?        00:00:00 asm_mman_+ASM2
grid     21077     1  0 10:09 ?        00:00:00 asm_dbw0_+ASM2
grid     21079     1  0 10:09 ?        00:00:00 asm_lgwr_+ASM2
grid     21081     1  0 10:09 ?        00:00:00 asm_ckpt_+ASM2
grid     21083     1  0 10:09 ?        00:00:00 asm_smon_+ASM2
grid     21085     1  0 10:09 ?        00:00:00 asm_rbal_+ASM2
grid     21087     1  0 10:09 ?        00:00:00 asm_gmon_+ASM2
grid     21089     1  0 10:09 ?        00:00:00 asm_mmon_+ASM2
grid     21091     1  0 10:09 ?        00:00:00 asm_mmnl_+ASM2
grid     21093     1  0 10:09 ?        00:00:00 asm_lck0_+ASM2
grid     21095     1  0 10:09 ?        00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     21128     1  0 10:09 ?        00:00:00 oracle+ASM2_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     21130     1  0 10:09 ?        00:00:00 asm_asmb_+ASM2
grid     21132     1  0 10:09 ?        00:00:00 oracle+ASM2_asmb_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     21271     1  0 10:09 ?        00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     21326     1  0 10:09 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid     22068     1  0 10:15 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root     23551  1979  0 10:26 pts/2    00:00:00 egrep -i asm|listener
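Besides the ASM and listener processes, the core clusterware daemons should be running on both nodes as well; a quick, node-independent check:
[root@node2 ~]# ps -ef | grep -E 'ohasd|crsd|ocssd|evmd' | grep -v grep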
Check the CRS status
[grid@node2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources. The crs_stat command is deprecated in 11gR2; crsctl stat res -t is recommended instead.
[grid@node2 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2
ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node1
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1
[grid@node2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
Check the cluster nodes
[grid@node2 ~]$ olsnodes -n
node1   1
node2   2
Check the CRS version
[grid@node2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
Check the Oracle Cluster Registry (OCR)
[grid@node2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2588
         Available space (kbytes) :     259532
         ID                       : 1606856820
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Check the voting disk
[grid@node2 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   4b4ef03676d84facbf55c02b8c058a07 (/dev/asm-diskc) [CRS]
Located 1 voting disk(s).
Check ASM
[grid@node2 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
[grid@node2 ~]$ srvctl status asm
ASM is running on node2,node1
[grid@node2 ~]$ uname -p
x86_64
[grid@node2 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 10:45:13 2012
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> set linesize 100
SQL> show parameter spfile
NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
spfile                               string                 +CRS/cluster-scan/asmparameterfile/registry.253.803296901
SQL> select path from v$asm_disk;
PATH
----------------------------------------------------------------------------------------------------
/dev/asm-diskg
/dev/asm-diskf
/dev/asm-diske
/dev/asm-diskb
/dev/asm-diskc
/dev/asm-diskd
6 rows selected.
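From the same SQL*Plus session you can also confirm the disk group state and free space; a small follow-up query using standard v$asm_diskgroup columns:
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;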
ASM disk group configuration
Check the listener status
[grid@rac01 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 16-MAR-2011 16:24:36
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 16-MAR-2011 14:27:14
Uptime 0 days 1 hr. 57 min. 27 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/11.2/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/rac01/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.211)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.213)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@rac01 ~]$
If the +ASM service is not listed, the listener needs to be reconfigured.
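Before reconfiguring, it is often enough to simply restart the listener through the clusterware as the grid user; a minimal sketch using the node name from this section:
[grid@rac01 ~]$ srvctl stop listener -n rac01
[grid@rac01 ~]$ srvctl start listener -n rac01
[grid@rac01 ~]$ lsnrctl status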
-
Create additional disk groups
Run the asmca command as the grid user.
Use asmca to create the DATADG and FRADG disk groups.
DATA is created successfully; next, create FLASHDG:
FLASHDG is created successfully; exit ASMCA.
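As an alternative to the asmca GUI, additional disk groups can also be created from SQL*Plus in the ASM instance; a sketch only, since the disk paths and redundancy level below are assumptions for this environment:
SQL> create diskgroup DATADG external redundancy disk '/dev/asm-diskd';
SQL> create diskgroup FRADG external redundancy disk '/dev/asm-diske';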
Verify:
-
Source: ITPUB blog, link: http://blog.itpub.net/29337971/viewspace-1819885/. Please credit the source when reposting.