Oracle Restart: root.sh on a single-instance install fails at roothas.pl line 377 [reposted]
Running root.sh for Oracle Restart fails:
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-03-23 21:33:32.447:
[client(10642)]CRS-2101:The OLR was formatted using version 3.
2019-03-23 21:33:38.163:
[client(10669)]CRS-1001:The OCR was formatted using version 3.
ohasd failed to start at /u01/app/grid/11.2.0.4/crs/install/roothas.pl line 377, <ALERTLOG> line 4.
/u01/app/grid/11.2.0.4/perl/bin/perl -I/u01/app/grid/11.2.0.4/perl/lib -I/u01/app/grid/11.2.0.4/crs/install /u01/app/grid/11.2.0.4/crs/install/roothas.pl execution failed
This error is caused by bug 18370031 (RC SCRIPTS (/etc/rc.d/rc, /etc/init.d/) ON OL7 FOR CLUSTERWARE): on RHEL 7, services are managed by systemd rather than by /etc/inittab, so the ohasd entry that root.sh adds to inittab is never started.
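For context, the fix registers ohasd with systemd instead of inittab (the patched root.sh output later in this article prints "Adding Clusterware entries to oracle-ohasd.service"). A widely documented manual workaround on RHEL/OL 7 is a unit file along these lines; the unit name and path here are illustrative, not necessarily the exact file the patch installs:

```ini
# /usr/lib/systemd/system/ohas.service -- illustrative name and path
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
# init.ohasd is installed by root.sh; "run" keeps it in the foreground
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
```

It would then be activated with `systemctl daemon-reload` and `systemctl enable --now ohas.service` before root.sh reaches the ohasd startup step.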
There are two ways to resolve this error.
#################################
Method 1: reconfigure Oracle Restart
#################################
Method 1 does not require reinstalling grid; the existing grid home is simply reconfigured.
1. Roll back the grid configuration
Run roothas.pl using the absolute path of the perl binary shipped in the grid home:
[root@asm01 11.2.0.4]# /u01/app/grid/11.2.0.4/perl/bin/perl /u01/app/grid/11.2.0.4/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle Restart stack
2. Re-run root.sh
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to inittab
asm01 2019/03/23 21:58:33 /u01/app/grid/11.2.0.4/cdata/asm01/backup_20190323_215833.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
3. While root.sh is running, have a second window open; as soon as the script prints "Adding Clusterware entries to inittab",
run the following command in the second window:
[root@asm01 ~]# dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 40.3934 s, 0.0 kB/s
After root.sh has completed successfully, press Ctrl-C to stop the dd command.
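The manual dd trick above requires timing the command by hand. As a convenience, it can be wrapped in a small helper that waits for the pipe to appear first. This is only a sketch, not part of the original procedure; `feed_ohasd_pipe` is a made-up name, and the pipe path is the one root.sh uses on this platform:

```shell
#!/bin/sh
# feed_ohasd_pipe PIPE [TIMEOUT_SECONDS]
# Waits until PIPE exists as a named pipe, then reads one block from it,
# which is exactly what the manual dd command does for ohasd.
feed_ohasd_pipe() {
    pipe=$1
    timeout=${2:-300}
    elapsed=0
    while [ ! -p "$pipe" ]; do
        sleep 1
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $pipe" >&2
            return 1
        fi
    done
    # Opening the pipe for reading is what unblocks ohasd startup. As in
    # the manual method, interrupt dd with Ctrl-C once root.sh finishes.
    dd if="$pipe" of=/dev/null bs=1024 count=1
}

# Usage (as root, in the second window, before running root.sh):
#   feed_ohasd_pipe /var/tmp/.oracle/npohasd
```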
####################################
Method 2: fix by applying the patch
####################################
The patching approach consists of the following steps:
(1) Deinstall grid
(2) Reinstall the grid software
(3) Download and apply patch p18370031_112040_Linux-x86-64.zip
(4) Re-run root.sh
1. Deinstall grid
[root@asm01 deinstall]# ./deinstall
You must not be logged in as root to run ./deinstall.
Log in as Oracle user and rerun ./deinstall.
[root@asm01 deinstall]# pwd
/u01/app/grid/11.2.0.4/deinstall
[root@asm01 deinstall]# su - grid
Last login: Sat Mar 23 21:58:25 CST 2019 on pts/1
[grid@asm01 ~]$ cd /u01/app/grid/11.2.0.4/deinstall
[grid@asm01 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/grid/11.2.0.4
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/grid/11.2.0.4
Checking for sufficient temp space availability on node(s) : 'asm01'
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-03-23_10-19-18-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2019-03-23_10-19-20-PM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/grid/11.2.0.4.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA
ASM diskstring : /dev/asm*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you want to modify above information (y|n) [n]: y
Specify the ASM Diagnostic Destination [/u01/app/grid]:
Specify the diskstring [/dev/asm*]:
Specify the diskgroups that are managed by this ASM instance [+DATA]:
De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/grid/11.2.0.4
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/grid/11.2.0.4
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-03-23_10-19-16-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-03-23_10-19-16-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2019-03-23_10-19-32-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-03-23_10-20-08-PM.log
De-configuring Oracle Restart enabled listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
Run the following command as the root user or the administrator on node "asm01".
/tmp/deinstall2019-03-23_10-19-13PM/perl/bin/perl -I/tmp/deinstall2019-03-23_10-19-13PM/perl/lib -I/tmp/deinstall2019-03-23_10-19-13PM/crs/install /tmp/deinstall2019-03-23_10-19-13PM/crs/install/roothas.pl -force -deconfig -paramfile "/tmp/deinstall2019-03-23_10-19-13PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/grid/11.2.0.4' from the central inventory on the local node : Done
Delete directory '/u01/app/grid/11.2.0.4' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2019-03-23_10-19-13PM' on node 'asm01'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
Oracle Restart was already stopped and de-configured on node "asm01"
Oracle Restart is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/grid/11.2.0.4' from the central inventory on the local node.
Successfully deleted directory '/u01/app/grid/11.2.0.4' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
2. After the deinstall completes, reinstall the grid software; before running root.sh, apply the patch to the grid home as the grid user.
3. After downloading patch p18370031_112040_Linux-x86-64.zip, apply it to the grid home:
[grid@asm01 OPatch]$ ./opatch napply -local 18370031/
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/grid/11.2.0.4
Central Inventory : /u01/app/oraInventory
from : /u01/app/grid/11.2.0.4/oraInst.loc
OPatch version : 11.2.0.3.4
OUI version : 11.2.0.4.0
Log file location : /u01/app/grid/11.2.0.4/cfgtoollogs/opatch/opatch2019-03-23_22-31-46PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 18370031
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/grid/11.2.0.4')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/u01/app/grid/11.2.0.4'
Patching component oracle.crs, 11.2.0.4.0...
Verifying the update...
Patch 18370031 successfully applied.
Log file location: /u01/app/grid/11.2.0.4/cfgtoollogs/opatch/opatch2019-03-23_22-31-46PM_1.log
OPatch succeeded.
4. After the patch has been applied successfully, run root.sh:
[root@asm01 11.2.0.4]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm01 successfully pinned.
Adding Clusterware entries to oracle-ohasd.service
asm01 2019/03/23 22:34:17 /u01/app/grid/11.2.0.4/cdata/asm01/backup_20190323_223417.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Source: ITPUB blog, http://blog.itpub.net/31547066/viewspace-2902487/. Please credit the source when reprinting.