Oracle 11g R2 RAC Deinstallation
For research or testing purposes we may have installed 11g R2 Grid Infrastructure and a RAC database on a platform. Because of the special way GI is deployed, we cannot uninstall the GI and RAC database software simply by deleting CRS_HOME and running a handful of scripts. Fortunately, 11g R2 introduces a new software-removal feature, deinstall: running the deinstall script conveniently removes the various configuration files that Oracle software products leave on the system.
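Both the database home and the GI home ship their own copy of the tool under a deinstall subdirectory; a quick way to confirm it is in place (homes as used in the steps below) is:

[oracle@vrh2 ~]$ ls $ORACLE_HOME/deinstall/deinstall
[grid@vrh1 ~]$ ls $ORA_CRS_HOME/deinstall/deinstall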
The detailed uninstallation steps are as follows:
1. Migrate the existing databases off the platform, or take physical or logical backups of them. If a database is no longer of any value, use DBCA to delete it and its related services.
Log in to the system as the oracle user, start the DBCA GUI, and choose RAC database:
[oracle@vrh2 ~]$ dbca
On step 1 of 2 (Operations), choose Delete a Database.
On step 2 of 2 (List of cluster databases), select the database to be deleted.
Delete every database in the cluster environment, one by one.
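If no X display is available, the same deletion can be done with DBCA in silent mode. A minimal sketch, where PROD and the SYSDBA credentials are placeholders for your environment:

# run as the oracle user on one node; PROD is a placeholder database name
[oracle@vrh2 ~]$ dbca -silent -deleteDatabase -sourceDB PROD -sysDBAUserName sys -sysDBAPassword oracle_4U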
2. Log in to any node as the oracle user and run the deinstall script under the $ORACLE_HOME/deinstall directory.
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
[root@vrh2 ~]# su - oracle
[oracle@vrh2 ~]$ cd $ORACLE_HOME/deinstall
[oracle@vrh2 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /g01/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
Install check configuration START
Checking for existence of the Oracle home location /s01/orabase/product/11.2.0/dbhome_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /s01/orabase
Checking for existence of central inventory location /g01/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /g01/11.2.0/grid
The following nodes are part of this cluster: vrh1,vrh2
Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END
Network Configuration check config START
Network de-configuration trace file location:
/g01/oraInventory/logs/netdc_check2011-08-31_11-19-25-PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [CRS_LISTENER]:
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /g01/oraInventory/logs/databasedc_check2011-08-31_11-19-39-PM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured in this Oracle home []:
Database Check Configuration END
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /g01/oraInventory/logs/emcadc_check2011-08-31_11-19-46-PM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /g01/oraInventory/logs//ocm_check131.log
Oracle Configuration Manager check END
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /g01/11.2.0/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:vrh1,vrh2
Oracle Home selected for de-install is: /s01/orabase/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /g01/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: CRS_LISTENER
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
vrh1 : Oracle Home exists with CCR directory, but CCR is not configured
vrh2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2011-08-31_11-19-23-PM.out'
Any error messages from this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2011-08-31_11-19-23-PM.err'
######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /g01/oraInventory/logs/emcadc_clean2011-08-31_11-19-46-PM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /g01/oraInventory/logs/databasedc_clean2011-08-31_11-20-00-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /g01/oraInventory/logs/netdc_clean2011-08-31_11-20-00-PM.log
De-configuring RAC listener(s): CRS_LISTENER
De-configuring listener: CRS_LISTENER
Stopping listener: CRS_LISTENER
Listener stopped successfully.
Unregistering listener: CRS_LISTENER
Listener unregistered successfully.
Listener de-configured successfully.
De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /g01/oraInventory/logs//ocm_clean131.log
Oracle Configuration Manager clean END
Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/s01/orabase/product/11.2.0/dbhome_1' on the local node : Done
Delete directory '/s01/orabase' on the local node : Done
Detach Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the remote nodes 'vrh1' : Done
Delete directory '/s01/orabase/product/11.2.0/dbhome_1' on the remote nodes 'vrh1' : Done
Delete directory '/s01/orabase' on the remote nodes 'vrh1' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-19-18PM' on node 'vrh2'
Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-19-18PM' on node 'vrh1'
Oracle install clean END
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: CRS_LISTENER
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/s01/orabase/product/11.2.0/dbhome_1' on the local node.
Successfully deleted directory '/s01/orabase' on the local node.
Successfully detached Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the remote nodes 'vrh1'.
Successfully deleted directory '/s01/orabase/product/11.2.0/dbhome_1' on the remote nodes 'vrh1'.
Successfully deleted directory '/s01/orabase' on the remote nodes 'vrh1'.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
The deinstall script above removes the RDBMS software under $ORACLE_HOME on every node and deregisters the uninstalled RDBMS software from the central inventory. Note that this operation is irreversible!
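To double-check that the RDBMS home really has been deregistered, you can look for it in the central inventory shown in the log above; a minimal check (both commands should come back empty or report a missing directory):

[oracle@vrh2 ~]$ grep dbhome_1 /g01/oraInventory/ContentsXML/inventory.xml
[oracle@vrh2 ~]$ ls -d /s01/orabase/product/11.2.0/dbhome_1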
3. As the root user, run "$ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force" on every node except the last one. For example, in a two-node cluster you run the command on only one node:
[root@vrh1 ~]# $ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /vrh1-vip/192.168.1.162/192.168.1.0/255.255.255.0/eth0, hosting node vrh1
VIP exists: /vrh2-vip/192.168.1.164/192.168.1.0/255.255.255.0/eth0, hosting node vrh2
VIP exists: /vrh3-vip/192.168.1.166/192.168.1.0/255.255.255.0/eth0, hosting node vrh3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'vrh1'
CRS-2677: Stop of 'ora.registry.acfs' on 'vrh1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vrh1'
CRS-2673: Attempting to stop 'ora.crsd' on 'vrh1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'vrh1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'vrh1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'vrh1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'vrh1'
CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'vrh1'
CRS-2677: Stop of 'ora.oc4j' on 'vrh1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'vrh2'
CRS-2676: Start of 'ora.oc4j' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'vrh1'
CRS-2677: Stop of 'ora.asm' on 'vrh1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'vrh1' has completed
CRS-2677: Stop of 'ora.crsd' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'vrh1'
CRS-2673: Attempting to stop 'ora.evmd' on 'vrh1'
CRS-2673: Attempting to stop 'ora.asm' on 'vrh1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vrh1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'vrh1'
CRS-2677: Stop of 'ora.asm' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'vrh1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'vrh1'
CRS-2677: Stop of 'ora.cssd' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'vrh1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'vrh1'
CRS-2677: Stop of 'ora.diskmon' on 'vrh1' succeeded
CRS-2677: Stop of 'ora.crf' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vrh1'
CRS-2677: Stop of 'ora.gipcd' on 'vrh1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vrh1'
CRS-2677: Stop of 'ora.gpnpd' on 'vrh1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vrh1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
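Before moving on to the last node, it can be worth confirming that no clusterware daemons survived the deconfiguration on this node; a quick check (no output expected once the stack is fully down):

[root@vrh1 ~]# ps -ef | egrep 'ohasd|crsd|cssd|evmd|mdnsd|gpnpd|gipcd' | grep -v egrep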
4. On the last node, run "$ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" as the root user. This command also clears the OCR and the voting disks:
[root@vrh2 ~]# $ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /vrh2-vip/192.168.1.164/192.168.1.0/255.255.255.0/eth0, hosting node vrh2
VIP exists: /vrh3-vip/192.168.1.166/192.168.1.0/255.255.255.0/eth0, hosting node vrh3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'vrh2'
CRS-2677: Stop of 'ora.registry.acfs' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'vrh2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'vrh2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'vrh2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'vrh2'
CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'vrh2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'vrh2'
CRS-2677: Stop of 'ora.oc4j' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'vrh2'
CRS-2677: Stop of 'ora.asm' on 'vrh2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'vrh2' has completed
CRS-2677: Stop of 'ora.crsd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'vrh2'
CRS-2673: Attempting to stop 'ora.evmd' on 'vrh2'
CRS-2673: Attempting to stop 'ora.asm' on 'vrh2'
CRS-2677: Stop of 'ora.asm' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'vrh2'
CRS-2677: Stop of 'ora.evmd' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'vrh2'
CRS-2677: Stop of 'ora.cssd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'vrh2'
CRS-2677: Stop of 'ora.diskmon' on 'vrh2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vrh2'
CRS-2676: Start of 'ora.cssdmonitor' on 'vrh2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'vrh2'
CRS-2672: Attempting to start 'ora.diskmon' on 'vrh2'
CRS-2676: Start of 'ora.diskmon' on 'vrh2' succeeded
CRS-2676: Start of 'ora.cssd' on 'vrh2' succeeded
CRS-4611: Successful deletion of voting disk +SYSTEMDG.
ASM de-configuration trace file location: /tmp/asmcadc_clean2011-08-31_11-55-52-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2011-08-31_11-55-52-PM.log for details.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vrh2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vrh2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'vrh2'
CRS-2673: Attempting to stop 'ora.asm' on 'vrh2'
CRS-2677: Stop of 'ora.asm' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'vrh2'
CRS-2677: Stop of 'ora.cssd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vrh2'
CRS-2673: Attempting to stop 'ora.diskmon' on 'vrh2'
CRS-2677: Stop of 'ora.gipcd' on 'vrh2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vrh2'
CRS-2677: Stop of 'ora.diskmon' on 'vrh2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'vrh2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vrh2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
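Optionally, if the ASM disks behind the DATA, FRA and SYSTEMDG disk groups are to be reused for a fresh installation, their headers can be wiped now that the stack is deconfigured. This is only a sketch with placeholder device names (/dev/sdb1 and /dev/sdc1 are not from this environment); verify the devices carefully before zeroing anything:

# WARNING: destroys the ASM header on the device; double-check device names first
[root@vrh2 ~]# dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
[root@vrh2 ~]# dd if=/dev/zero of=/dev/sdc1 bs=1M count=100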
5. On any node, as the Grid Infrastructure owner, run the "$ORA_CRS_HOME/deinstall/deinstall" script:
[root@vrh1 ~]# su - grid
[grid@vrh1 ~]$ cd $ORA_CRS_HOME
[grid@vrh1 grid]$ cd deinstall/
[grid@vrh1 deinstall]$ cat deinstall
#!/bin/sh
#
# $Header: install/utl/scripts/db/deinstall /main/3 2010/05/28 20:12:57 ssampath Exp $
#
# Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
#
# NAME
# deinstall - wrapper script that calls deinstall tool.
#
# DESCRIPTION
# This script will set all the necessary variables and call the tools
# entry point.
#
# NOTES
#
#
# MODIFIED (MM/DD/YY)
# mwidjaja 04/29/10 - XbranchMerge mwidjaja_bug-9579184 from
#                     st_install_11.2.0.1.0
# mwidjaja 04/15/10 - Added SHLIB_PATH for HP-PARISC
# mwidjaja 01/14/10 - XbranchMerge mwidjaja_bug-9269768 from
#                     st_install_11.2.0.1.0
# mwidjaja 01/14/10 - Fix help message for params
# ssampath 12/24/09 - Fix for bug 9227535. Remove legacy version_check
#                     function
# ssampath 12/01/09 - XbranchMerge ssampath_bug-9167533 from
#                     st_install_11.2.0.1.0
# ssampath 11/30/09 - Set umask to 022.
# prsubram 10/12/09 - XbranchMerge prsubram_bug-9005648 from main
# prsubram 10/08/09 - Compute ARCHITECTURE_FLAG in the script
# prsubram 09/15/09 - Setting LIBPATH for AIX
# prsubram 09/10/09 - Add AIX specific code check java version
# prsubram 09/10/09 - Change TOOL_DIR to BOOTSTRAP_DIR in java cmd
#                     invocation of bug#8874160
# prsubram 09/08/09 - Change the default shell to /usr/xpg4/bin/sh on
#                     SunOS
# prsubram 09/03/09 - Removing -d64 for client32 homes for the bug8859294
# prsubram 06/22/09 - Resolve port specific id cmd issue
# ssampath 06/02/09 - Fix for bug 8566942
# ssampath 05/19/09 - Move removal of /tmp/deinstall to java
#                     code.
# prsubram 04/30/09 - Fix for the bug#8474891
# mwidjaja 04/29/09 - Added user check between the user running the
#                     script and inventory owner
# ssampath 04/29/09 - Changes to make error message better when deinstall
#                     tool is invoked from inside ORACLE_HOME and -home
#                     is passed.
# ssampath 04/15/09 - Fix for bug 8414555
# prsubram 04/09/09 - LD_LIBRARY_PATH is ported for sol,hp-ux & aix
# mwidjaja 03/26/09 - Disallow -home for running from OH
# ssampath 03/24/09 - Fix for bug 8339519
# wyou 02/25/09 - restructure the ohome check
# wyou 02/25/09 - change the error msg for directory existance check
# wyou 02/12/09 - add directory existance check
# wyou 02/09/09 - add the check for the writablity for the oracle
#                 home passed-in
# ssampath 01/21/09 - Add oui/lib to LD_LIBRARY_PATH
# poosrini 01/07/09 - LOG related changes
# ssampath 11/24/08 - Create /main/osds/unix branch
# dchriste 10/30/08 - eliminate non-generic tools like 'cut'
# ssampath 08/18/08 - Pickup srvm.jar from JLIB directory.
# ssampath 07/30/08 - Add http_client.jar and OraCheckpoint.jar to
#                     CLASSPATH
# ssampath 07/08/08 - assistantsCommon.jar and netca.jar location has
#                     changed.
# ssampath 04/11/08 - If invoking the tool from installed home, JRE_HOME
#                     should be set to $OH/jdk/jre.
# ssampath 04/09/08 - Add logic to instantiate ORA_CRS_HOME, JAVA_HOME
#                     etc.,
# ssampath 04/03/08 - Pick up ldapjclnt11.jar
# idai 04/03/08 - remove assistantsdc.jar and netcadc.jar
# bktripat 02/23/07 -
# khsingh 07/18/06 - add osdbagrp fix
# khsingh 07/07/06 - fix regression
# khsingh 06/20/06 - fix bug 5228203
# bktripat 06/12/06 - Fix for bug 5246802
# bktripat 05/08/06 -
# khsingh 05/08/06 - fix tool to run from any parent directory
# khsingh 05/08/06 - fix LD_LIBRARY_PATH to have abs. path
# ssampath 05/01/06 - Fix for bug 5198219
# bktripat 04/21/06 - Fix for bug 5074246
# khsingh 04/11/06 - fix bug 5151658
# khsingh 04/08/06 - Add WA for bugs 5006414 & 5093832
# bktripat 02/08/06 - Fix for bug 5024086 & 5024061
# bktripat 01/24/06 -
# mstalin 01/23/06 - Add lib to pick libOsUtils.so
# bktripat 01/19/06 - adding library changes
# rahgupta 01/19/06 -
# bktripat 01/19/06 -
# mstalin 01/17/06 - Modify the assistants deconfig jar file name
# rahgupta 01/17/06 - updating emcp classpath
# khsingh 01/17/06 - export ORACLE_HOME
# khsingh 01/17/06 - fix for CRS deconfig.
# hying 01/17/06 - netcadc.jar
# bktripat 01/16/06 -
# ssampath 01/16/06 -
# bktripat 01/11/06 -
# clo 01/10/06 - add EMCP entries
# hying 01/10/06 - netcaDeconfig.jar
# mstalin 01/09/06 - Add OraPrereqChecks.jar
# mstalin 01/09/06 -
# khsingh 01/09/06 -
# mstalin 01/09/06 - Add additional jars for assistants
# ssampath 01/09/06 - removing parseOracleHome temporarily
# ssampath 01/09/06 -
# khsingh 01/08/06 - fix for CRS deconfig
# ssampath 12/08/05 - added java version check
# ssampath 12/08/05 - initial run,minor bugs fixed
# ssampath 12/07/05 - Creation
#
#MACROS
if [ -z "$UNAME" ]; then UNAME="/bin/uname"; fi
if [ -z "$ECHO" ]; then ECHO="/bin/echo"; fi
if [ -z "$AWK" ]; then AWK="/bin/awk"; fi
if [ -z "$ID" ]; then ID="/usr/bin/id"; fi
if [ -z "$DIRNAME" ]; then DIRNAME="/usr/bin/dirname"; fi
if [ -z "$FILE" ]; then FILE="/usr/bin/file"; fi

if [ "`$UNAME`" = "SunOS" ]
then
   if [ -z "${_xpg4ShAvbl_deconfig}" ]
   then
      _xpg4ShAvbl_deconfig=1
      export _xpg4ShAvbl_deconfig
      /usr/xpg4/bin/sh $0 "$@"
      exit $?
   fi
   AWK="/usr/xpg4/bin/awk"
fi

# Set umask to 022 always.
umask 022

INSTALLED_VERSION_FLAG=true
ARCHITECTURE_FLAG=64
TOOL_ARGS=$* # initialize this always.

# Since the OTN and the installed version of the tool is same, only way to
# differentiate is through the instantated variable ORA_CRS_HOME. If it is
# NOT instantiated, then the tool is a downloaded version.
# Set HOME_VER to true based on the value of $INSTALLED_VERSION_FLAG
if [ x"$INSTALLED_VERSION_FLAG" = x"true" ]
then
   ORACLE_HOME=/g01/11.2.0/grid
   HOME_VER=1 # HOME_VER
   TOOL_ARGS="$ORACLE_HOME $TOOL_ARGS"
else
   HOME_VER=0
fi

# Save current working directory
CURR_DIR=`pwd`

# If CURR_DIR is different from TOOL_DIR get that location and cd into it.
TOOL_REL_PATH=`$DIRNAME $0`
cd $TOOL_REL_PATH

DOT=`$ECHO $TOOL_REL_PATH | $AWK -F'/' '{ print $1}'`

if [ "$DOT" = "." ]; then
   TOOL_DIR=$CURR_DIR/$TOOL_REL_PATH
elif [ `expr "$DOT" : '.*'` -gt 0 ]; then
   TOOL_DIR=$CURR_DIR/$TOOL_REL_PATH
else
   TOOL_DIR=$TOOL_REL_PATH
fi

# Check if this script is run as root. If so, then error out.
# This is fix for bug 5024086.
RUID=`$ID|$AWK -F\( '{print $2}'|$AWK -F\) '{print $1}'`
if [ ${RUID} = "root" ];then
   $ECHO "You must not be logged in as root to run $0."
   $ECHO "Log in as Oracle user and rerun $0."
   exit $ROOT_USER
fi

# DEFINE FUNCTIONS BELOW
computeArchFlag()
{
   TOOL_HOME=$1
   case `$UNAME` in
      HP-UX)
         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK -F\- '{print $2}' | $AWK '{print $1}'`" = "64" ];then
            ARCHITECTURE_FLAG="-d64"
         fi
         ;;
      AIX)
         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK '{print $1}' | $AWK -F\- '{print $1}'`" = "64" ];then
            ARCHITECTURE_FLAG="-d64"
         fi
         ;;
      *)
         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK '{print $2}' | $AWK -F\- '{print $1}'`" = "64" ];then
            ARCHITECTURE_FLAG="-d64"
         fi
         ;;
   esac
}

if [ $HOME_VER = 1 ]; then
   $ECHO "Checking for required files and bootstrapping ..."
$ECHO "Please wait ..." TEMP_LOC=`$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/deinstall/bootstrap.pl $HOME_VER $TOOL_ARGS` TOOL_DIR=$TEMP_LOC else TEMP_LOC=`$TOOL_DIR/perl/bin/perl $TOOL_DIR/bootstrap.pl $HOME_VER $TOOL_ARGS` fi computeArchFlag $TOOL_DIR $TOOL_DIR/perl/bin/perl $TOOL_DIR/deinstall.pl $HOME_VER $TEMP_LOC $TOOL_DIR $ARCHITECTURE_FLAG $TOOL_ARGS [grid@vrh1 deinstall]$ ./deinstall Checking for required files and bootstrapping ... Please wait ... Location of logs /tmp/deinstall2011-08-31_11-59-55PM/logs/ ############ ORACLE DEINSTALL & DECONFIG TOOL START ############ ######################### CHECK OPERATION START ######################### Install check configuration START Checking for existence of the Oracle home location /g01/11.2.0/grid Oracle Home type selected for de-install is: CRS Oracle Base selected for de-install is: /g01/orabase Checking for existence of central inventory location /g01/oraInventory Checking for existence of the Oracle Grid Infrastructure home /g01/11.2.0/grid The following nodes are part of this cluster: vrh1,vrh2,vrh3 Install check configuration END Skipping Windows and .NET products configuration check Checking Windows and .NET products configuration END Traces log file: /tmp/deinstall2011-08-31_11-59-55PM/logs//crsdc.log Enter an address or the name of the virtual IP used on node "vrh1"[vrh1-vip] > The following information can be collected by running "/sbin/ifconfig -a" on node "vrh1" Enter the IP netmask of Virtual IP "192.168.1.162" on node "vrh1"[255.255.255.0] > Enter the network interface name on which the virtual IP address "192.168.1.162" is active > Enter an address or the name of the virtual IP used on node "vrh2"[vrh2-vip] > The following information can be collected by running "/sbin/ifconfig -a" on node "vrh2" Enter the IP netmask of Virtual IP "192.168.1.164" on node "vrh2"[255.255.255.0] > Enter the network interface name on which the virtual IP address "192.168.1.164" is active > Enter an address or the name of the virtual IP used on node "vrh3"[vrh3-vip] > The following information can be collected by running "/sbin/ifconfig -a" on node "vrh3" Enter the IP netmask of Virtual IP "192.168.1.166" on node "vrh3"[255.255.255.0] > Enter the network interface name on which the virtual IP address "192.168.1.166" is active > Enter an address or the name of the virtual IP[] > Network Configuration check config START Network de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/ netdc_check2011-09-01_12-01-50-AM.log Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]: Network Configuration check config END Asm Check Configuration START ASM de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/ asmcadc_check2011-09-01_12-01-51-AM.log ASM configuration was not detected in this Oracle home. 
Was ASM configured in this Oracle home (y|n) [n]:
ASM was not detected in the Oracle Home
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /g01/11.2.0/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:vrh1,vrh2,vrh3
Oracle Home selected for de-install is: /g01/11.2.0/grid
Inventory Location where the Oracle home registered is: /g01/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2011-08-31_11-59-55PM/logs/deinstall_deconfig2011-09-01_12-01-15-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2011-08-31_11-59-55PM/logs/deinstall_deconfig2011-09-01_12-01-15-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/asmcadc_clean2011-09-01_12-02-00-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/netdc_clean2011-09-01_12-02-00-AM.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
Stopping listener: LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "vrh3".
/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "vrh2".
/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "vrh1".
/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands

During the deinstall run you are asked to execute the commands above as the root user on all of the nodes:

su - root
[root@vrh3 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
ACFS-9200: Supported
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle clusterware stack on this node
[root@vrh2 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
ACFS-9200: Supported
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
You must kill crs processes or reboot the system to properly cleanup the processes started by Oracle clusterware
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@vrh1 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Adding daemon to inittab
crsexcl failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2011-08-31 23:36:55.813
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:38:23.855
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:39:03.873
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:39:11.707
[/g01/11.2.0/grid/bin/orarootagent.bin(4559)]CRS-5822:Agent '/g01/11.2.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:2:27} in /g01/11.2.0/grid/log/vrh1/agent/crsd/orarootagent_root/orarootagent_root.log.
2011-08-31 23:39:12.725
[ctssd(4067)]CRS-2405:The Cluster Time Synchronization Service on host vrh1 is shutdown by user
2011-08-31 23:39:12.764
[mdnsd(3868)]CRS-5602:mDNS service stopping by request.
2011-08-31 23:39:13.987
[/g01/11.2.0/grid/bin/orarootagent.bin(3892)]CRS-5016:Process "/g01/11.2.0/grid/bin/acfsload" spawned by agent "/g01/11.2.0/grid/bin/orarootagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/g01/11.2.0/grid/log/vrh1/agent/ohasd/orarootagent_root/orarootagent_root.log"
2011-08-31 23:39:27.121
[cssd(3968)]CRS-1603:CSSD on node vrh1 shutdown by user.
2011-08-31 23:39:27.130
[ohasd(3639)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE
2011-08-31 23:39:31.926
[gpnpd(3880)]CRS-2329:GPNPD on node vrh1 shutdown.
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : scan_listener object is not supported in Oracle Restart
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : scan_listener object is not supported in Oracle Restart
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : scan object is not supported in Oracle Restart
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : scan object is not supported in Oracle Restart
Usage: srvctl [command] [object] []
commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
srvctl [command] -h or
srvctl [command] [object] -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
ACFS-9200: Supported
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
Adding daemon to inittab
crsexcl failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:38:23.855
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:39:03.873
[ctssd(4067)]CRS-2408:The clock on host vrh1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2011-08-31 23:39:11.707
[/g01/11.2.0/grid/bin/orarootagent.bin(4559)]CRS-5822:Agent '/g01/11.2.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:2:27} in /g01/11.2.0/grid/log/vrh1/agent/crsd/orarootagent_root/orarootagent_root.log.
2011-08-31 23:39:12.725
[ctssd(4067)]CRS-2405:The Cluster Time Synchronization Service on host vrh1 is shutdown by user
2011-08-31 23:39:12.764
[mdnsd(3868)]CRS-5602:mDNS service stopping by request.
2011-08-31 23:39:13.987
[/g01/11.2.0/grid/bin/orarootagent.bin(3892)]CRS-5016:Process "/g01/11.2.0/grid/bin/acfsload" spawned by agent "/g01/11.2.0/grid/bin/orarootagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/g01/11.2.0/grid/log/vrh1/agent/ohasd/orarootagent_root/orarootagent_root.log"
2011-08-31 23:39:27.121
[cssd(3968)]CRS-1603:CSSD on node vrh1 shutdown by user.
2011-08-31 23:39:27.130
[ohasd(3639)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE
2011-08-31 23:39:31.926
[gpnpd(3880)]CRS-2329:GPNPD on node vrh1 shutdown.
[client(13099)]CRS-10001:01-Sep-11 00:11 ACFS-9200: Supported
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
crsctl delete for vds in SYSTEMDG ... failed
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Go back to the terminal where deinstall was originally started and press Enter:

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Press Enter after you finish running the above commands
<----------------------------------------
Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home '/g01/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/g01/11.2.0/grid' on the local node : Done
Delete directory '/g01/oraInventory' on the local node : Done
Delete directory '/g01/orabase' on the local node : Done
Detach Oracle home '/g01/11.2.0/grid' from the central inventory on the remote nodes 'vrh3,vrh2' : Done
Delete directory '/g01/11.2.0/grid' on the remote nodes 'vrh2,vrh3' : Done
Delete directory '/g01/oraInventory' on the remote nodes 'vrh3' : Done
Delete directory '/g01/oraInventory' on the remote nodes 'vrh2' : Failed <<<<
The directory '/g01/oraInventory' could not be deleted on the nodes 'vrh2'.
Delete directory '/g01/orabase' on the remote nodes 'vrh2' : Done
Delete directory '/g01/orabase' on the remote nodes 'vrh3' : Done
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-59-55PM' on node 'vrh1'
Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-59-55PM' on node 'vrh2'
Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-59-55PM' on node 'vrh3'
Oracle install clean END
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "vrh3"
Oracle Clusterware is stopped and successfully de-configured on node "vrh2"
Oracle Clusterware is stopped and successfully de-configured on node "vrh1"
Oracle Clusterware is stopped and de-configured successfully.
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home '/g01/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/g01/11.2.0/grid' on the local node.
Successfully deleted directory '/g01/oraInventory' on the local node.
Successfully deleted directory '/g01/orabase' on the local node.
Successfully detached Oracle home '/g01/11.2.0/grid' from the central inventory on the remote nodes 'vrh3,vrh2'.
Successfully deleted directory '/g01/11.2.0/grid' on the remote nodes 'vrh2,vrh3'.
Successfully deleted directory '/g01/oraInventory' on the remote nodes 'vrh3'.
Failed to delete directory '/g01/oraInventory' on the remote nodes 'vrh2'.
Successfully deleted directory '/g01/orabase' on the remote nodes 'vrh2'.
Successfully deleted directory '/g01/orabase' on the remote nodes 'vrh3'.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'vrh1,vrh3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'vrh1 vrh3 vrh2 ' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
When deinstall completes it prompts you to run "rm -rf /etc/oraInst.loc" and "rm -rf /opt/ORCLfmap" on the nodes it lists; simply do as instructed.
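A minimal sketch of that final cleanup, assuming root ssh equivalence between the nodes (otherwise just run the two rm commands locally on each listed node); the node lists come from the deinstall summary above:

[root@vrh1 ~]# for h in vrh1 vrh3; do ssh $h 'rm -rf /etc/oraInst.loc'; done
[root@vrh1 ~]# for h in vrh1 vrh2 vrh3; do ssh $h 'rm -rf /opt/ORCLfmap'; done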
Once the scripts above have completed, GI has been removed from every node, /etc/inittab has been restored to its non-GI version, and the CRS-related scripts under /etc/init.d have been removed as well.
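A last spot check on each node confirms those two points (on Linux the 11.2 GI init scripts are init.ohasd and ohasd); both commands should come back empty or report that the files no longer exist:

[root@vrh1 ~]# grep -i ohasd /etc/inittab
[root@vrh1 ~]# ls /etc/init.d/init.ohasd /etc/init.d/ohasd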
From the ITPUB blog: http://blog.itpub.net/30484956/viewspace-2130348/. If you repost this article, please credit the source; otherwise legal liability may be pursued.