Re-adding a Node to an Oracle 11g RAC Cluster
Environment:
SUSE 11 SP4
Oracle 11.2.0.4.180116 RAC
During the Oracle 11g RAC installation, a hardware fault required the operating system on node 1 to be reinstalled. At that point the clusterware had been fully installed on both nodes, but no database had been created yet.
The detailed procedure follows:
grid@XXXXXrac2:~> olsnodes -s -t
XXXXXrac1 Inactive Unpinned
XXXXXrac2 Active Unpinned
grid@XXXXXrac2:~> crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE XXXXXrac2
ora.DATA.dg
ONLINE ONLINE XXXXXrac2
ora.asm
ONLINE ONLINE XXXXXrac2 Started
ora.gsd
OFFLINE OFFLINE XXXXXrac2
ora.net1.network
ONLINE ONLINE XXXXXrac2
ora.ons
ONLINE ONLINE XXXXXrac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE XXXXXrac2
ora.cvu
1 ONLINE ONLINE XXXXXrac2
ora.XXXXXrac1.vip
1 ONLINE INTERMEDIATE XXXXXrac2 FAILED OVER
ora.XXXXXrac2.vip
1 ONLINE ONLINE XXXXXrac2
ora.oc4j
1 ONLINE ONLINE XXXXXrac2
ora.scan1.vip
1 ONLINE ONLINE XXXXXrac2
-- Delete node 1:
/oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1 (run as root on node 2)
XXXXXrac2:~ # /oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1
CRS-4661: Node XXXXXrac1 successfully deleted.
XXXXXrac2:~ #
grid@XXXXXrac2:~> olsnodes -s -t
XXXXXrac2 Active Unpinned
-- Remove node 1's VIP configuration
XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl stop vip -i XXXXXrac1 -f
XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl remove vip -i XXXXXrac1 -f
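The two steps above remove the node from the OCR and then clean up its VIP. A minimal sketch of a sanity check before the VIP removal, assuming `olsnodes -s -t` output in the `name state pin` format shown earlier (the function reads the output from stdin so the parsing can be exercised offline):

```shell
#!/bin/sh
# Sketch: check whether a node name still appears in `olsnodes -s -t`
# output (lines of the form "<name> <Active|Inactive> <Pinned|Unpinned>").
node_still_listed() {
    # $1 = node name to look for; reads olsnodes output from stdin.
    # Exits 0 if the node is still registered, non-zero otherwise.
    awk -v n="$1" '$1 == n { found = 1 } END { exit !found }'
}

# On a live cluster one would pipe the real command in, e.g.:
#   olsnodes -s -t | node_still_listed XXXXXrac1 && echo "still registered"
```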
-- Update the inventory on node 2
grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/xxxx/grid "CLUSTER_NODES=XXXXXrac2" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
oracle@XXXXXrac2:~> /oracle/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=XXXXXrac2"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
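After `-updateNodeList` completes, the central inventory should list only the surviving node for each home. A hedged sketch of how one might confirm this, assuming the 11.2 inventory format with `<NODE NAME="..."/>` entries under `/oracle/app/oraInventory/ContentsXML/inventory.xml` (the path comes from this setup):

```shell
#!/bin/sh
# Sketch: list the node names recorded in an 11.2-style inventory.xml.
# Reads the XML on stdin and prints one node name per line, so the
# parsing can be tested without a real inventory.
nodes_in_inventory() {
    grep -o 'NODE NAME="[^"]*"' | sed 's/NODE NAME="//; s/"//'
}

# On the real system, e.g.:
#   nodes_in_inventory < /oracle/app/oraInventory/ContentsXML/inventory.xml
```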
-- Verify the node removal
grid@XXXXXrac2:~> cluvfy stage -post nodedel -n XXXXXrac1 -verbose
Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "XXXXXrac2"
CRS integrity check passed
Result:
Node removal check passed
Post-check for node removal was successful.
========== Add the node back
Install the operating system on the host.
Prepare the base environment for the cluster installation.
-- Configure SSH user equivalence
Set up passwordless SSH for both the grid and oracle users:
mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh XXXXXrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh XXXXXrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys XXXXXrac2:~/.ssh
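Note that re-running the append steps above leaves duplicate entries in `authorized_keys`. A small helper sketch to de-duplicate the file, plus the usual non-interactive check that equivalence actually works (`ssh -o BatchMode=yes` fails rather than prompting for a password):

```shell
#!/bin/sh
# Sketch: remove duplicate lines from an authorized_keys file in place,
# keeping the first occurrence of each key, and tighten its permissions.
dedup_authorized_keys() {
    # $1 = path to the authorized_keys file
    awk '!seen[$0]++' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
    chmod 600 "$1"
}

# After distributing keys, verify equivalence from each node for each user:
#   ssh -o BatchMode=yes XXXXXrac2 date
```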
-- Add the grid home
[export IGNORE_PREADDNODE_CHECKS=Y # optional: skips the pre-addnode checks]
grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={XXXXXrac1-vip}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "XXXXXrac2"
Checking user equivalence...
User equivalence check passed for user "grid"
......
Saving inventory on nodes (Monday, March 19, 2018 4:13:26 PM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/oracle/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'XXXXXrac1'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/oraInventory/orainstRoot.sh #On nodes XXXXXrac1
/oracle/xxxx/grid/root.sh #On nodes XXXXXrac1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oracle/xxxx/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
-- Run the scripts on node 1
XXXXXrac1:~ # /oracle/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
XXXXXrac1:~ #
XXXXXrac1:~ #
XXXXXrac1:~ # /oracle/xxxx/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/xxxx/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/xxxx/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to /etc/inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node XXXXXrac2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
-- Verify that the grid addition succeeded
grid@XXXXXrac2:~> cluvfy stage -post nodeadd -n XXXXXrac1
Performing post-checks for node addition
Checking node reachability...
Node reachability check passed from node "XXXXXrac2"
Checking user equivalence...
User equivalence check passed for user "grid"
....
-- Add the RDBMS home
oracle@XXXXXrac2:~> /oracle/app/oracle/product/11.2.0/db_1/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "XXXXXrac2"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "XXXXXrac1" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes XXXXXrac1 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oracle/app/oracle/product/11.2.0/db_1
New Nodes
Space Requirements
New Nodes
......
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/oracle/product/11.2.0/db_1/root.sh #On nodes XXXXXrac1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oracle/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
Switch to root and run the root script on node 1.
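Once the root script finishes, it is worth confirming that both nodes report Active before moving on to database creation. A sketch in the same spirit as the earlier checks, again parsing `olsnodes -s -t` output from stdin:

```shell
#!/bin/sh
# Sketch: succeed only if every node line in `olsnodes -s -t` output
# reports Active in the second column.
all_nodes_active() {
    awk '$2 != "Active" { bad = 1 } END { exit bad }'
}

# On a live cluster, e.g.:
#   olsnodes -s -t | all_nodes_active && echo "cluster complete"
```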
Source: ITPUB blog, http://blog.itpub.net/24585765/viewspace-2152021/ (please credit the source when republishing).