Oracle 19c RAC Patching: A Guide to Avoiding the Pitfalls
Introduction: notes on problems encountered while patching a 19.3 RAC, first from 19.3 to 19.5 and then from 19.5 to 19.6.
A freshly installed 19.3 RAC needed patching. The latest RU at the time was 19.6, and since the newest release can be less stable I chose the next-most-recent one, 19.5. The first cluster went smoothly; the later ones all hit problems of various sizes, recorded here.
19.3 carries a fairly serious CRS-6015 error. It is a bug and was only fixed in 19.6, so after patching four clusters to 19.5 I had to patch them all again to 19.6. To spare yourself that, I strongly recommend going straight to 19.6.
a) Download the RU 19.6 patch p30463609_190000_Linux-x86-64.zip, which contains the GI, DB, and OJVM cumulative incremental patches.
b) Patch installation order: GI -> DB -> OJVM.
c) 19.6 can be applied directly on top of 19.5; there is no need to uninstall first.
d) The GI and DB patches are applied as root; only OJVM is applied as the oracle user.
I. Patch Installation Procedure
1. Check the environment
Since this was a fresh install I skip the checks here; the procedure is in the patch README.html.
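For reference, a minimal pre-check sketch, run as each home's owner (the -phBaseDir below assumes the RU has already been unzipped to /tmp/ru19.5 as in step 2):
#OPatch must meet the minimum version stated in the README
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch version
#Check the GI home for conflicts against the RU before applying
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /tmp/ru19.5/30116789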
2. Unzip the patch bundle
I downloaded the GI RU, which contains both the GI and DB patches, and unzipped it under /tmp.
[root@xydb8node1 ~]# unzip p30116789_190000_Linux-x86-64.zip -d /tmp/ru19.5
[root@xydb8node1 ~]# chmod -R 777 /tmp/ru19.5
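The README also expects the latest OPatch in every home before patching. A hedged sketch, assuming the standard OPatch download p6880880_190000_Linux-x86-64.zip sits in /tmp (move the old OPatch directory aside first if you want a fallback):
[root@xydb8node1 ~]# unzip -o /tmp/p6880880_190000_Linux-x86-64.zip -d /u01/app/19.3.0/grid
[root@xydb8node1 ~]# unzip -o /tmp/p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.3.0/db_1
#Restore ownership, since the unzip ran as root
[root@xydb8node1 ~]# chown -R grid:oinstall /u01/app/19.3.0/grid/OPatch
[root@xydb8node1 ~]# chown -R oracle:oinstall /u01/app/oracle/product/19.3.0/db_1/OPatch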
3. Apply the GI patch first (finish node 1, then node 2), using opatchauto.
Use the GI home's opatchauto for GI and the Oracle home's opatchauto for the DB. Remember that both run as root; always use the full path, as switching PATH back and forth is error-prone.
[root@xydb8node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/ru19.5/30116789
4. Verify the GI patch
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lspatches
30125133;Database Release Update : 19.5.0.0.191015 (30125133)
30122167;ACFS RELEASE UPDATE 19.5.0.0.0 (30122167)
30122149;OCW RELEASE UPDATE 19.5.0.0.0 (30122149)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
OPatch succeeded.
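Besides lspatches, it is worth confirming the stack itself is healthy on each node before moving on; a minimal check sketch:
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/bin/crsctl check crs
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/bin/crsctl stat res -t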
5. Apply the DB patch (finish node 1, then node 2), using opatchauto.
[root@xydb8node1 ~]# /u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto apply /tmp/ru19.5/30116789 -oh /u01/app/oracle/product/19.3.0/db_1
6. Verify the DB patch
[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch lspatches
30125133;Database Release Update : 19.5.0.0.191015 (30125133)
30122149;OCW RELEASE UPDATE 19.5.0.0.0 (30122149)
OPatch succeeded.
7. Apply the OJVM patch (finish node 1, then node 2)
[oracle@xydb8node1 ~]$ cd /tmp/ru19.6/30463609/30484981/
[oracle@xydb8node1 30484981]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch apply
#Answer y at the prompts.
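Because OJVM is applied with plain opatch rather than opatchauto, the SQL portion of the patch is not run for you. A hedged follow-up sketch, run once per database with the instances back up (datapatch itself is standard; the registry query just confirms the result):
[oracle@xydb8node1 ~]$ cd /u01/app/oracle/product/19.3.0/db_1/OPatch
[oracle@xydb8node1 OPatch]$ ./datapatch -verbose
#Then confirm in SQL*Plus that the patches registered cleanly
SQL> select patch_id, action, status from dba_registry_sqlpatch order by action_time;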
8. Rollback procedure
#GI rollback
/u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/grid_path/30116789 -oh /u01/app/19.3.0/grid
#DB rollback
/u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto rollback /tmp/grid_path/30116789 -oh /u01/app/oracle/product/19.3.0/db_1
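After a rollback, lspatches from the corresponding home should no longer list the RU; a quick verification sketch:
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lspatches
[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch lspatches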
9. Notes
Either node can go first; there is no hard requirement to patch node 1 before node 2, it is simply habit. Patching can hit assorted permission and other problems; the ones I ran into are recorded below so that others can sidestep them.
II. Errors Encountered
Error No. 1
Patch: /tmp/grid_path/30116789/30122149
Log: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-09_17-44-51PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Mon Mar 9 17:45:31 2020
Time taken to complete the session 1 minute, 16 seconds
opatchauto failed with error code 42
Problem:
A permission error raised while applying the DB patch. I did not dig into the root cause; 19c patching throws up all kinds of permission issues.
Fix:
[root@xydb8node1 ~]# chmod 777 /u01/app/oraInventory/ContentsXML/oui-patch.xml
#resume continues the installation from the point where the previous attempt failed.
[root@xydb8node1 ~]# /u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto resume
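chmod 777 gets past the error, but a tighter setting is probably sufficient; a hedged alternative, assuming grid:oinstall owns the central inventory:
[root@xydb8node1 ~]# chown grid:oinstall /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@xydb8node1 ~]# chmod 664 /u01/app/oraInventory/ContentsXML/oui-patch.xml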
Error No. 2
2020-03-10 11:18:18.961 [CSSDMONITOR(150856)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 150856
2020-03-10T11:18:19.092125+08:00
Errors in file /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc (incident=41):
CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/grid/diag/crs/xydb8node2/crs/incident/incdir_41/ohasd_i41.trc
2020-03-10 11:18:19.081 [OHASD(147218)]CRS-6015: Oracle Clusterware has experienced an internal error. Details at (:CLSGEN00100:) {0:0:2} in /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc.
2020-03-10 11:18:19.106 [OHASD(147218)]CRS-8505: Oracle Clusterware OHASD process with operating system process ID 147218 encountered internal error CRS-06015
Trace log: /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc
An excerpt from the error log follows:
2020-03-10 11:18:19.057 :CRSSHARED:4034262784: [ INFO] [F-ALGO]{0:0:2} getIpcPath returning (ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))
2020-03-10 11:18:19.058 :GIPCXCPT:4038465280: gipcInternalConnectSync: failed sync request, addr 0x7f9c9405c720 [000000000000b814] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)
2020-03-10 11:18:19.058 :GIPCXCPT:4038465280: gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ] failed sync connect endp 0x7f9c9405b2a0 [000000000000b80d] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9c9405e350, sendp 0x7f9c9405e100 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9c9405c720 [000000000000b814] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000
2020-03-10 11:18:19.058 :UiServer:4034262784: [ INFO] {0:0:2} GIPC address: clsc://(ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))
2020-03-10 11:18:19.058 : GIPC:4034262784: sgipcnDSBindHelper: file /var/tmp/.oracle/sOHASD_UI_SOCKET_lock is locked by PID 147162
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcmodNetworkProcessBind: failed to bind endp 0x7f9c8c000950 [000000000000b819] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x562c5e71a240, ready 0, wobj 0x7f9c8c03b390, sendp 0x7f9c8c03b140 status 13flags 0xa1000712, flags-2 0x0, usrFlags 0x20 }, addr 0x7f9c8c039460 [000000000000b81b] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x5 }
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcmodNetworkProcessBind: slos op : sgipcnDSBindHelper
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcmodNetworkProcessBind: slos dep : Resource temporarily unavailable (11)
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcmodNetworkProcessBind: slos loc : lockf
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcmodNetworkProcessBind: slos info: failed to grab a lock for (/var/tmp/.oracle/sOHASD_UI_SOCKET_lock)
2020-03-10 11:18:19.058 :GIPCXCPT:4034262784: gipcListenF [initServerSocket : clsSocket.cpp : 584]: EXCEPTION[ ret gipcretAddressInUse (20) ] failed to listen on endp 0x7f9c8c000950 [000000000000b819] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x562c5e71a240, ready 0, wobj 0x7f9c8c03b390, sendp 0x7f9c8c03b140 status 13flags 0xa1000712, flags-2 0x0, usrFlags 0x20 }, flags 0x0
2020-03-10 11:18:19.058 :UiServer:4034262784: [ ERROR] {0:0:2} SS(0x7f9c8c000eb0)GIPC Fatal Listen Error. gipc ret: gipcretAddressInUse. Address=clsc://(ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))
2020-03-10 11:18:19.059 : CLSCEVT:4038465280: (:CLSCE0047:)clsce_publish_internal 0x7f9c94038da0 EvmConnCreate failed with status = 13, try = 0
2020-03-10 11:18:19.060 :GIPCXCPT:4038465280: gipcInternalConnectSync: failed sync request, addr 0x7f9c9405c7e0 [000000000000b834] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)
2020-03-10 11:18:19.060 :GIPCXCPT:4038465280: gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ] failed sync connect endp 0x7f9c9405b360 [000000000000b82d] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9c9405e330, sendp 0x7f9c9405e0e0 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9c9405c7e0 [000000000000b834] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000
2020-03-10 11:18:19.061 : CLSCEVT:4038465280: (:CLSCE0047:)clsce_publish_internal 0x7f9c94038da0 EvmConnCreate failed with status = 13, try = 1
2020-03-10 11:18:19.061 : CRSEVT:4038465280: [ INFO] {0:0:2} ClusterPubSub::publish Error posting to event stream. Connection will be retried on next publish [4]
2020-03-10 11:18:19.081 : CRSRPT:4038465280: [ INFO] {0:0:2} ClusterConnectException caught CRS_SERVER_STATE_CHANGE for xydb8node2
Trace file /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc
Oracle Database 19c Clusterware Release 19.0.0.0.0 - Production
Version 19.6.0.0.0 Copyright 1996, 2019 Oracle. All rights reserved.
DDE: Flood control is not active
2020-03-10T11:18:19.092594+08:00
Incident 41 created, dump file: /u01/app/grid/diag/crs/xydb8node2/crs/incident/incdir_41/ohasd_i41.trc
CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []
2020-03-10 11:18:19.107 :GIPCXCPT:421820160: gipcInternalConnectSync: failed sync request, addr 0x7f9d10022eb0 [000000000000b867] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)
2020-03-10 11:18:19.107 :GIPCXCPT:421820160: gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ] failed sync connect endp 0x7f9d10021a30 [000000000000b860] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9d1005d250, sendp 0x7f9d1005d000 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9d10022eb0 [000000000000b867] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000
2020-03-10 11:18:19.108 : CLSCEVT:421820160: (:CLSCE0047:)clsce_publish_internal 0x562c5e45bb90 EvmConnCreate failed with status = 13, try = 0
2020-03-10 11:18:19.108 :GIPCXCPT:421820160: gipcInternalConnectSync: failed sync request, addr 0x7f9d10022e70 [000000000000b878] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)
2020-03-10 11:18:19.109 :GIPCXCPT:421820160: gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ] failed sync connect endp 0x7f9d10021a10 [000000000000b871] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9d1005d230, sendp 0x7f9d1005cfe0 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9d10022e70 [000000000000b878] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000
2020-03-10 11:18:19.110 : CLSCEVT:421820160: (:CLSCE0047:)clsce_publish_internal 0x562c5e45bb90 EvmConnCreate failed with status = 13, try = 1
2020-03-10 11:18:19.171 : CRSCOMM:4059477760: [ INFO] IpcL: Accepted connection 45931 from user root member number 3
Symptom:
The cluster installs normally, but after installation a restart of one of its nodes may fail to come up, and the CRS alert log throws CRS-6015 together with gipcInternalConnectSync: failed sync request errors.
Fix:
A MOS search shows this is a bug. In my testing it is not fixed in 19.5 but is fixed in the latest 19.6 RU, so for a newly installed 19.3 RAC I recommend upgrading straight to 19.6.
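The trace above also names the immediate culprit: the OHASD UI socket lock is held by another process (PID 147162 in this run). A small diagnostic sketch to see who is holding it, assuming the fuser utility is available:
#Show the holder of the socket lock named in the trace
[root@xydb8node2 ~]# fuser -v /var/tmp/.oracle/sOHASD_UI_SOCKET_lock
[root@xydb8node2 ~]# ps -fp 147162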
Error No. 3
[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/ru19.6/30463609/30501910 -oh /u01/app/19.3.0/grid
OPatchauto session is initiated at Tue Mar 10 15:37:44 2020
OPATCHAUTO-72083: Performing bootstrap operations failed.
OPATCHAUTO-72083: The bootstrap execution failed because failed to detect Grid Infrastructure setup due to null.
OPATCHAUTO-72083: Fix the reported problem and re-run opatchauto.
OPatchauto session completed at Tue Mar 10 15:38:07 2020
Time taken to complete the session 0 minute, 23 seconds
opatchauto bootstrapping failed with error code 255.
Analysis:
If the shell session drops in the middle of a normal patch run and you then re-run the original command, you get this error.
Fix:
Do not re-run the earlier command; use resume instead, as below, after which the session carried on normally.
[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume
OPatchauto session is initiated at Tue Mar 10 15:40:33 2020
Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-10_03-40-34PM.log
Resuming existing session with id E7W9
Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid
Checking shared status of home.....
Starting CRS service on home /u01/app/19.3.0/grid
Error No. 4
Failed to start CRS service on home /u01/app/19.3.0/grid
Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : xydb7node1->/u01/app/19.3.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: xydb7node1.
Command failed: /u01/app/19.3.0/grid/perl/bin/perl -I/u01/app/19.3.0/grid/perl/lib -I/u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install -I/u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/xag /u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output:
Using configuration parameter file: /u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/xydb7node1/crsconfig/crs_postpatch_xydb7node1_2020-03-10_03-41-09PM.log
2020/03/10 15:41:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-2032 : Group of file "/etc/oracleafd.conf" did not match the expected value on node "xydb7node1". [Expected = "oinstall(1001)" ; Found = "asmadmin(1005)"]
PRVH-0116 : Path "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" with permissions "rw-r--r--" does not have execute permissions for the owner, file's group, and others on node "xydb7node1".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "xydb7node1". [Expected = "grid(1002)" ; Found = "root(0)"]
PRVG-2032 : Group of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "xydb7node1". [Expected = "oinstall(1001)" ; Found = "root(0)"]
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".
PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so.1.0" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".
PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so.1.0" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/clntshcore.map" with permissions "rw-r-----" does not have read permissions for others on node "xydb7node1".
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/clntsh.map" with permissions "rw-r-----" does not have read permissions for others on node "xydb7node1".
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libocci.so" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".
PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libocci.so" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".
PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libocci.so.19.1" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".
PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libocci.so.19.1" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
2020/03/10 15:46:52 CLSRSC-117: Failed to start Oracle Clusterware stack
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Tue Mar 10 15:46:54 2020
Time taken to complete the session 6 minutes, 21 seconds
opatchauto failed with error code 42
Analysis:
Another file permission problem; set the permissions as the checks demand. Checking the GI version directly with lspatches showed it was already 19.6, so it would probably have worked without the changes, but I made them as required anyway.
Fix:
Fix the permissions on these two files (plus the execute bit, sketched after the commands below) and resume; you are then quite likely to hit the CRS-6015 error next.
[root@xydb7node1 ~]# chown grid:oinstall /etc/oracleafd.conf
[root@xydb7node1 ~]# chown grid:oinstall /u01/app/19.3.0/grid/crs/install/cmdllroot.sh
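The chown commands cover the ownership findings, but PRVH-0116 also complained that cmdllroot.sh is not executable; a hedged extra step (0755 is an assumption about the intended mode):
[root@xydb7node1 ~]# chmod 755 /u01/app/19.3.0/grid/crs/install/cmdllroot.sh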
[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume
Error No. 5
[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume
OPatchauto session is initiated at Tue Mar 10 16:06:47 2020
Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-10_04-06-47PM.log
Resuming existing session with id E7W9
Checking shared status of home.....
Starting CRS service on home /u01/app/19.3.0/grid
=====> After resume, the session hung here; the alert log showed the following errors:
2020-03-10 16:07:22.095 [OHASD(126635)]CRS-6015: Oracle Clusterware has experienced an internal error. Details at (:CLSGEN00100:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.
2020-03-10T16:07:22.106550+08:00
Errors in file /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc (incident=9):
CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/grid/diag/crs/xydb7node1/crs/incident/incdir_9/ohasd_i9.trc
2020-03-10 16:07:22.120 [OHASD(126635)]CRS-8505: Oracle Clusterware OHASD process with operating system process ID 126635 encountered internal error CRS-06015
2020-03-10 16:10:51.606 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/orarootagent_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.
2020-03-10 16:10:51.638 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/oraagent_grid'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.
2020-03-10 16:12:51.684 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/cssdagent_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.
2020-03-10 16:12:51.705 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/cssdmonitor_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.
Analysis:
Reaching this point means the GI patch binaries were installed successfully; the session then hung while starting the CRS stack. To let the patch finish cleanly I did not break out with Ctrl+C, and instead found a way to restart CRS for it. (This error is the same bug, so I will not gloss over it here.)
Fix:
Open a second shell window, stop HAS, then start it again; the exact steps:
[root@xydb7node1 ~]# crsctl stop has -f
[root@xydb7node1 ~]# crsctl start has
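Before returning to the hung opatchauto session, it is worth confirming the stack actually came back; a minimal check sketch:
[root@xydb7node1 ~]# crsctl check has
[root@xydb7node1 ~]# crsctl check crs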
After a short wait the GI patch install completed successfully, as follows:
CRS service started successfully on home /u01/app/19.3.0/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:xydb7node1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /tmp/ru19.6/30463609/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.
Patch: /tmp/ru19.6/30463609/30501910/30489632
Reason: This patch is already been applied, so not going to apply again.
Patch: /tmp/ru19.6/30463609/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.
Patch: /tmp/ru19.6/30463609/30501910/30655595
Reason: This patch is already been applied, so not going to apply again.
OPatchauto session completed at Tue Mar 10 16:26:39 2020
Time taken to complete the session 19 minutes, 53 seconds
Summary
Those are the problems encountered across the whole 19.3 RAC patching exercise; I hope the notes help. I had not expected the CRS-6015 bug to go unfixed until 19.6, with none of the earlier RUs addressing it. For Oracle, which I had always pictured as polished, that was a surprise, and a reminder to keep studying. Finally, thanks to the experts at 雲和恩墨 (Enmotech) for their strong support.
From the ITPUB blog: http://blog.itpub.net/31556440/viewspace-2681158/ (please credit the source when reposting).