Oracle Restart (single instance): root.sh fails with "roothas.pl line 377" [Repost]
Running root.sh for Oracle Restart fails:
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-03-23 21:33:32.447:
[client(10642)]CRS-2101:The OLR was formatted using version 3.
2019-03-23 21:33:38.163:
[client(10669)]CRS-1001:The OCR was formatted using version 3.
ohasd failed to start at /u01/app/grid/11.2.0.4/crs/install/roothas.pl line 377, <ALERTLOG> line 4.
/u01/app/grid/11.2.0.4/perl/bin/perl -I/u01/app/grid/11.2.0.4/perl/lib -I/u01/app/grid/11.2.0.4/crs/install /u01/app/grid/11.2.0.4/crs/install/roothas.pl execution failed
This error is caused by Bug 18370031 - RC SCRIPTS (/ETC/RC.D/RC, /ETC/INIT.D/) ON OL7 FOR CLUSTERWARE: on RHEL/OL 7, startup is handled by systemd, which does not process /etc/inittab, so the inittab entry added by root.sh never starts ohasd.
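For background: on RHEL/OL 6 and earlier, root.sh adds a respawn entry for /etc/init.d/init.ohasd to /etc/inittab and init starts it; on RHEL/OL 7 that entry is ignored, so root.sh times out waiting for ohasd. A patched root.sh instead registers a systemd unit named oracle-ohasd.service (see the "Adding Clusterware entries to oracle-ohasd.service" line near the end of this post). The snippet below is only an illustrative sketch of what such a unit looks like; the exact file installed by the patched scripts may differ, so follow one of the two methods in this post rather than hand-writing it.
# /etc/systemd/system/oracle-ohasd.service  -- illustrative sketch only, not the patch's exact file
[Unit]
Description=Oracle High Availability Services
After=syslog.target network.target
[Service]
# init.ohasd is installed into /etc/init.d by root.sh; run it in the foreground and respawn it
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target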
There are two ways to resolve this error.
#################################
Method 1
#################################
Method 1 does not require reinstalling Grid; you only reconfigure the existing Grid home.
1. Roll back (deconfigure) the Grid configuration
Run roothas.pl using the absolute path of the perl binary shipped in the Grid home:
[root@asm01 11.2.0.4]# /u01/app/grid/11.2.0.4/perl/bin/perl /u01/app/grid/11.2.0.4/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle Restart stack
2. Rerun root.sh (the CRS-4639/CRS-4000 messages above are expected because OHAS was not running; the last line confirms the stack was deconfigured successfully):
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to inittab
asm01 2019/03/23 21:58:33 /u01/app/grid/11.2.0.4/cdata/asm01/backup_20190323_215833.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
3. While root.sh from step 2 is running, open a new terminal window. As soon as the script prints "Adding Clusterware entries to inittab", run the following command in that window:
[root@asm01 ~]# dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 40.3934 s, 0.0 kB/s
After the root.sh script has completed successfully, press Ctrl-C to cancel the dd command.
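The timing window is easy to miss. As a convenience (this is a sketch, not part of the original post), the second window can simply wait for the named pipe to appear and then run the same dd; cancel it with Ctrl-C after root.sh finishes, exactly as above:
[root@asm01 ~]# # wait for /var/tmp/.oracle/npohasd to appear, then run the same dd as above
[root@asm01 ~]# while [ ! -p /var/tmp/.oracle/npohasd ]; do sleep 1; done; dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1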
####################################
Method 2: Fix by applying a patch
####################################
The patching approach consists of the following steps:
(1) Deinstall Grid
(2) Reinstall Grid
(3) Download and apply the patch p18370031_112040_Linux-x86-64.zip
(4) Rerun root.sh
1. Deinstall Grid
[root@asm01 deinstall]# ./deinstall
You must not be logged in as root to run ./deinstall.
Log in as Oracle user and rerun ./deinstall.
[root@asm01 deinstall]# pwd
/u01/app/grid/11.2.0.4/deinstall
[root@asm01 deinstall]# su - grid
Last login: Sat Mar 23 21:58:25 CST 2019 on pts/1
[grid@asm01 ~]$ cd /u01/app/grid/11.2.0.4/deinstall
[grid@asm01 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/grid/11.2.0.4
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/grid/11.2.0.4
Checking for sufficient temp space availability on node(s) : 'asm01'
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-03-23_10-19-18-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2019-03-23_10-19-20-PM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/grid/11.2.0.4.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA
ASM diskstring : /dev/asm*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you want to modify above information (y|n) [n]: y
Specify the ASM Diagnostic Destination [/u01/app/grid]:
Specify the diskstring [/dev/asm*]:
Specify the diskgroups that are managed by this ASM instance [+DATA]:
De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/grid/11.2.0.4
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/grid/11.2.0.4
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-03-23_10-19-16-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-03-23_10-19-16-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2019-03-23_10-19-32-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-03-23_10-20-08-PM.log
De-configuring Oracle Restart enabled listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
Run the following command as the root user or the administrator on node "asm01".
/tmp/deinstall2019-03-23_10-19-13PM/perl/bin/perl -I/tmp/deinstall2019-03-23_10-19-13PM/perl/lib -I/tmp/deinstall2019-03-23_10-19-13PM/crs/install /tmp/deinstall2019-03-23_10-19-13PM/crs/install/roothas.pl -force -deconfig -paramfile "/tmp/deinstall2019-03-23_10-19-13PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/grid/11.2.0.4' from the central inventory on the local node : Done
Delete directory '/u01/app/grid/11.2.0.4' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2019-03-23_10-19-13PM' on node 'asm01'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
Oracle Restart was already stopped and de-configured on node "asm01"
Oracle Restart is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/grid/11.2.0.4' from the central inventory on the local node.
Successfully deleted directory '/u01/app/grid/11.2.0.4' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
2. After the deinstall completes, reinstall the Grid software. Stop at the point where the installer asks you to run root.sh; before running it, apply the patch as the grid user.
3. After downloading p18370031_112040_Linux-x86-64.zip, apply the patch to the Grid home.
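A possible staging sequence before the opatch run below (a sketch; the download location /tmp and the choice to extract under the OPatch directory are assumptions, made so that the relative path 18370031/ used in the next command resolves):
[grid@asm01 ~]$ export ORACLE_HOME=/u01/app/grid/11.2.0.4
[grid@asm01 ~]$ cd $ORACLE_HOME/OPatch
[grid@asm01 OPatch]$ unzip -q /tmp/p18370031_112040_Linux-x86-64.zip    # assumed download path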
[grid@asm01 OPatch]$ ./opatch napply -local 18370031/
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/grid/11.2.0.4
Central Inventory : /u01/app/oraInventory
from : /u01/app/grid/11.2.0.4/oraInst.loc
OPatch version : 11.2.0.3.4
OUI version : 11.2.0.4.0
Log file location : /u01/app/grid/11.2.0.4/cfgtoollogs/opatch/opatch2019-03-23_22-31-46PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 18370031
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/grid/11.2.0.4')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/u01/app/grid/11.2.0.4'
Patching component oracle.crs, 11.2.0.4.0...
Verifying the update...
Patch 18370031 successfully applied.
Log file location: /u01/app/grid/11.2.0.4/cfgtoollogs/opatch/opatch2019-03-23_22-31-46PM_1.log
OPatch succeeded.
4. After the patch has been applied successfully, run the root.sh script:
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to oracle-ohasd.service
asm01 2019/03/23 22:34:17 /u01/app/grid/11.2.0.4/cdata/asm01/backup_20190323_223417.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
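Optional post-checks (not part of the original post): confirm that the stack is up and that root.sh registered the systemd unit named in the "Adding Clusterware entries to oracle-ohasd.service" line above.
[grid@asm01 ~]$ crsctl check has
[root@asm01 ~]# systemctl status oracle-ohasd.service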
Source: ITPUB Blog, link: http://blog.itpub.net/31547066/viewspace-2902487/. If reposting, please cite the source; otherwise legal action may be pursued.