ORACLE RAC 11.2.0.4 on RHEL 6.8: PRVF-9992 During Installation, and ORA-19504 & ORA-15001 from DBCA
Whether on bare-metal servers or on VMware/VirtualBox virtual machines, I have installed plenty of 10g and 11g RAC clusters, yet yesterday was the first time I ran into PRVF-9992: Group of device "/dev/mapper/datadg" did not match the expected group.
I had previously installed Oracle RAC 11.2.0.4 on RHEL 6.6 in Xiangtan, and that installation went through without a hitch using exactly the same method as this Hengyang installation, so the warning left me quite puzzled.
Operating system: Oracle Linux RHEL 6.8
Database version: Oracle 11.2.0.4
Hardware: Dell R730 server
Symptom: PRVF-9992
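For reference, the storage checks that the installer runs can be repeated by hand with the Cluster Verification Utility; a minimal sketch, assuming the grid media is unpacked under /home/grid/grid (a hypothetical staging path) and the node names rac1,rac2 used throughout this install:
[grid@rac1 ~]$ cd /home/grid/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
[grid@rac1 grid]$ # or only the shared-storage accessibility component for the offending device:
[grid@rac1 grid]$ ./runcluvfy.sh comp ssa -n rac1,rac2 -s /dev/mapper/datadg -verbose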
For this installation the shared cluster disks are presented through device-mapper multipath. The configuration before the change:
[root@rac1 soft]# cat /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    # don't count normal SATA devices as multipaths
    device {
        vendor "ATA"
    }
    # don't count 3ware devices as multipaths
    device {
        vendor "3ware"
    }
    device {
        vendor "AMCC"
    }
    # nor highpoint devices
    device {
        vendor "HPT"
    }
    wwid "361866da084d8d1002032f80e0b9458e7"
    wwid "*"
}
blacklist_exceptions {
    wwid "3600a098000abf3880000025d58a036e7"
    wwid "3600a098000abf3880000025658a0367d"
    wwid "3600a098000abf3880000026158a0371a"
    wwid "3600a098000abf3880000025f58a03705"
    wwid "3600a098000abf3880000025b58a036c4"
    wwid "3600a098000abf3880000026358a03730"
    wwid "3600a098000abf3880000025958a036a0"
}
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000025d58a036e7"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000025658a0367d"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000026158a0371a"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000025f58a03705"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000025b58a036c4"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000026358a03730"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000abf3880000025958a036a0"
        mode 0600
    }
}
In the configuration above every mapped LUN carries uid 0 / gid 0, i.e. the devices come up owned by root:root, which is precisely the group mismatch that PRVF-9992 complains about. The configuration after the change, giving the ASM LUNs uid/gid 600 and friendly aliases:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    # don't count normal SATA devices as multipaths
    device {
        vendor "ATA"
    }
    # don't count 3ware devices as multipaths
    device {
        vendor "3ware"
    }
    device {
        vendor "AMCC"
    }
    # nor highpoint devices
    device {
        vendor "HPT"
    }
    wwid "361866da084d8d1002032f80e0b9458e7"
    wwid "*"
}
blacklist_exceptions {
    wwid "3600a098000abf3880000025d58a036e7"
    wwid "3600a098000abf3880000025658a0367d"
    wwid "3600a098000abf3880000026158a0371a"
    wwid "3600a098000abf3880000025f58a03705"
    wwid "3600a098000abf3880000025b58a036c4"
    wwid "3600a098000abf3880000026358a03730"
    wwid "3600a098000abf3880000025958a036a0"
}
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "3600a098000a1157d000002d25798865a"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias ocr1
        wwid "3600a098000a1157d000002d457988679"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias arch
        wwid "3600a098000a1157d000002cf57988639"
        mode 0600
    }
    multipath {
        uid 0
        gid 0
        wwid "3600a098000a1157d000002de579886d4"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias data3
        wwid "3600a098000a1157d000002e0579886e9"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias ocr2
        wwid "3600a098000a1157d000002d65798868a"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias ocr3
        wwid "3600a098000a1157d000002d85798869a"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias data1
        wwid "3600a098000a1157d000002da579886ac"
        mode 0600
    }
    multipath {
        uid 600
        gid 600
        alias data2
        wwid "3600a098000a1157d000002dc579886bf"
        mode 0600
    }
}
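The uid/gid value 600 assumes that the grid user and its ASM groups were created with that numeric id; if your ids differ, the devices will still come up with the wrong ownership. A quick check:
[root@rac1 ~]# id grid
[root@rac1 ~]# getent group asmadmin oinstall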
On top of the multipath configuration, udev rules set the owner, group and mode of the mapped devices:
[root@rac1 rules.d]# cat 12-mulitpath-privs.rules
ENV{DM_NAME}=="arch*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="ocr*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
[root@rac1 rules.d]#
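For completeness, these changes can be pushed into effect without waiting for a reboot; a rough sketch of the stock RHEL 6 sequence, using the device names above:
[root@rac1 ~]# service multipathd restart            # re-read /etc/multipath.conf
[root@rac1 ~]# multipath -v2                         # rebuild the maps
[root@rac1 ~]# udevadm control --reload-rules        # pick up 12-mulitpath-privs.rules
[root@rac1 ~]# udevadm trigger --type=devices --action=change
[root@rac1 ~]# ls -l /dev/dm-*                       # ownership should now be grid:asmadmin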
Even after rebooting the hosts, restarting multipathd and reloading the udev rules, the grid installer still raised the PRVF-9992 warning. Unexpectedly, once PRVF-9992 was ignored, the grid installation went through successfully:
[root@rac1 Packages]# /u01/oracle/app/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/app/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/oracle/app/grid/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 Packages]# /u01/oracle/app/grid/home/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/oracle/app/grid/home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/oracle/app/grid/home/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group clsdg created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 5edddd8fca634f07bfbf345f7c10f0c2.
Successful addition of voting disk 7c64e7be3a1d4f9fbf9f5983896a0adb.
Successful addition of voting disk 47924c4f2dda4f36bf3fca4111247e3f.
Successfully replaced voting disk group with +clsdg.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 5edddd8fca634f07bfbf345f7c10f0c2 (/dev/mapper/crs1) [CLSDG]
2. ONLINE 7c64e7be3a1d4f9fbf9f5983896a0adb (/dev/mapper/crs2) [CLSDG]
3. ONLINE 47924c4f2dda4f36bf3fca4111247e3f (/dev/mapper/crs3) [CLSDG]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CLSDG.dg' on 'rac1'
CRS-2676: Start of 'ora.CLSDG.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 Packages]#
[root@rac2 ~]# /u01/oracle/app/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/oracle/app/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/oracle/app/grid/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]# /u01/oracle/app/grid/home/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/oracle/app/grid/home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/oracle/app/grid/home/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#
[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CLSDG.dg ora....up.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
[grid@rac1 ~]$
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLSDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[grid@rac1 ~]$
[grid@rac1 ~]$ lsnrctl status racscan
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 13-FEB-2017 22:26:25
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=10.141.101.144)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 13-FEB-2017 22:18:10
Uptime 0 days 0 hr. 8 min. 15 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle/app/grid/home/network/admin/listener.ora
Listener Log File /u01/oracle/app/grid/home/log/diag/tnslsnr/rac1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.141.101.144)(PORT=1521)))
The listener supports no services
The command completed successfully
[grid@rac1 ~]$
Incidentally, "The listener supports no services" above is expected at this point, since no database exists yet. Which brings us to the second problem: DBCA hit ORA-19504 & ORA-15001 while creating the database.
Initially DATADG was created on a single 4 TB piece of shared storage, but when inspected with asmca as the grid user, the total size showed 4 TB while the free size was only 2 TB, most likely because releases before 12.1 cap an individual ASM disk at 2 TB.
DATADG was therefore split into DATADG1 and DATADG2 of 2 TB each, after which asmca showed total size and free size in agreement.
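The same numbers can be read straight from the ASM instance, which helps when asmca output looks suspicious; a minimal sketch using the standard v$asm views:
[grid@rac1 ~]$ sqlplus -s / as sysasm <<'EOF'
select name, state, total_mb, free_mb from v$asm_diskgroup;
select name, path, os_mb, total_mb from v$asm_disk order by name;
EOF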
Retrying DBCA, however, still failed at 9% with the same ORA-19504 & ORA-15001. Suspecting a multipath problem, each mapping stanza was changed to mode 0660 with an explicit failover path-grouping policy, for example the arch LUN:
multipath {
    uid 600
    gid 600
    alias arch
    wwid "3600a098000a1157d000002cf57988639"
    mode 0660
    path_grouping_policy failover
}
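After reloading multipathd the new policy can be verified per map; a short sketch (look for the policy='failover' field in the output):
[root@rac1 ~]# service multipathd restart
[root@rac1 ~]# multipath -ll arch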
The host clocks were also re-synchronized by hand, roughly as sketched below. (The operating systems had not been installed by me; once grid was up I noticed the clocks of the two hosts differed by more than 10 minutes.)
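A minimal sketch of the manual sync, assuming 10.141.101.1 is a reachable NTP server on site (the address is hypothetical; with no NTP configured, the 11.2 cluster time service runs in active mode, but a skew of 10-plus minutes is still worth fixing at the OS level):
[root@rac1 ~]# service ntpd stop          # stop ntpd if it happens to be running
[root@rac1 ~]# ntpdate -u 10.141.101.1    # hypothetical NTP server; substitute the site's own
[root@rac1 ~]# hwclock --systohc          # persist the corrected time to the hardware clock
(repeat the same three commands on rac2)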
Then, after shutting the cluster down as grid and rebooting both operating systems, DBCA run as the oracle user completed successfully:
[grid@rac2 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.ARCHDG.dg ora....up.type ONLINE ONLINE rac1
ora.CLSDG.dg ora....up.type ONLINE ONLINE rac1
ora.DATADG1.dg ora....up.type ONLINE ONLINE rac1
ora.DATADG2.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora.orcl.db ora....se.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac2
[grid@rac2 ~]$
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 13 18:34:39 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ WRITE
SQL>
SQL> select status from v$instance;
STATUS
------------
OPEN
SQL>
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCHDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.CLSDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATADG1.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATADG2.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.orcl.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
[grid@rac1 ~]$
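As a final sanity check, the new database and the SCAN can also be queried through srvctl; a sketch, with the db_unique_name orcl taken from the ora.orcl.db resource above:
[grid@rac1 ~]$ srvctl status database -d orcl
[grid@rac1 ~]$ srvctl config scan_listener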
Source: ITPUB blog, http://blog.itpub.net/29357786/viewspace-2133482/