Rebuilding Node 2 of an 11g RAC Cluster

Posted by lmxx2020 on 2024-03-04

Node 2's operating system was corrupted after a reboot and the node has to be rebuilt. The procedure is described below.

Part 1: Remove node 2's information from the cluster

1. Delete the instance

Use dbca interactively or in silent mode:

[oracle@rac1 ~]$ dbca -silent -deleteinstance -nodelist rac2 -gdbname orcl -instancename orcl2 -sysdbausername sys -sysdbapassword oracle

In the GUI: dbca → Instance Management → Delete Instance → select the database and enter the credentials → select the inactive instance → confirm the deletion.

2. Check the database instance status

[oracle@rac1 ~]$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /oracle/app/product/11.2.0/db_1

Oracle user: oracle

Spfile: +DATA/orcl/spfileorcl.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: orcl

Database instances: orcl1

Disk Groups: DATA

Services:

Database is administrator managed

sqlplus / as sysdba

SQL> select inst_id,instance_name from gv$instance;

   INST_ID INSTANCE_NAME
---------- ----------------
         1 orcl1

3. On the surviving node, update the inventory node list as the oracle user

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/oraInventory

'UpdateNodeList' was successful.

4. Remove node 2's VIP from the cluster

Stop node 2's VIP:

cd $GRID_HOME/bin

[root@rac1 bin]# ./srvctl stop vip -i rac2-vip

Remove node 2's VIP:

[root@rac1 bin]# ./srvctl remove vip -i rac2-vip -f

5. Check the node status

Check the cluster resource status:

[grid@rac1 ~]$crs_stat -t

[grid@rac1 ~]$crsctl stat res -t

You can see that node 2's VIP resources have been removed.

Check the cluster node information:

[grid@rac1 ~]$ olsnodes -s -t

rac1 Active Unpinned

rac2 Inactive Unpinned

(If node 2 is in the pinned state, run this first: [grid@rac1 ~]$ crsctl unpin css -n rac2)

6. Delete the node

[root@rac1 bin]# $GRID_HOME/bin/crsctl delete node -n rac2

CRS-4661: Node rac2 successfully deleted.

7. Update the GI inventory

su - grid

cd $ORACLE_HOME/oui/bin

[grid@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}" CRS=TRUE -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8191 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/oraInventory

'UpdateNodeList' was successful.

Part 2: Re-add node 2 to the cluster

1. Prerequisites

(1) Create the same OS users and groups as on node 1, with identical user and group IDs.

(2) Configure the hosts file so the new node's entries match the existing node's.

(3) Configure the kernel parameters and user limits to match the existing node, and configure the network.

(4) Create the required directories and make sure ownership matches (adapt the paths to your environment; this is very important):

mkdir /oracle/app

mkdir /oracle/grid/crs_1

mkdir /oracle/gridbase

mkdir /oracle/oraInventory

chown oracle:oinstall /oracle/app

chown grid:oinstall /oracle/grid/

chown grid:oinstall /oracle/grid/crs_1

chown grid:oinstall /oracle/gridbase

chown grid:oinstall /oracle/oraInventory

chmod 770 /oracle/oraInventory

(5) Check multipathing and disk permissions; a quick check is sketched below.
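A minimal sanity check for item (5), assuming device-mapper multipath with ASM disks presented under /dev/mapper (adjust the device names and the expected owner, e.g. grid:asmadmin, to your environment):

[root@rac2 ~]# id grid; id oracle        # UIDs/GIDs must match node 1
[root@rac2 ~]# multipath -ll             # all paths should show as active/ready
[root@rac2 ~]# ls -l /dev/mapper/        # ASM disks must be readable/writable by the grid user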

2. Set up SSH user equivalence and install the cluster rpm package

Go to the sshsetup directory under the unzipped grid software:

[root@rac1 sshsetup]# cd /oracle/grid/grid/sshsetup

SSH for the grid user:

./sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase

SSH for the oracle user:

./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -noPromptPassphrase
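Once both scripts finish, it is worth confirming that passwordless SSH works in both directions for each user, for example:

[grid@rac1 ~]$ ssh rac2 date
[grid@rac2 ~]$ ssh rac1 date

(repeat the same check as the oracle user)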

Copy the rpm package from the grid software directory to node 2:

[grid@rac1 ~]$ scp cvuqdisk-1.0.9-1.rpm 192.168.40.102:/home/grid

Install the rpm package on node 2 as root:

[root@rac2 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm

If the oinstall group does not exist, the installation may fail; in that case set the owning group manually first, e.g. export CVUQDISK_GRP=dba, then rerun the rpm install.

3. Check whether rac2 meets the RAC installation prerequisites

1. Check the network and storage:

[grid@rac1 ~]$ cluvfy stage -post hwos -n rac2

Check: TCP connectivity of subnet "10.0.0.0"

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.40.101             rac2:10.0.0.3                   failed

ERROR:

PRVF-7617 : Node connectivity between "rac1 : 192.168.40.101" and "rac2 : 10.0.0.3" failed

Result: TCP connectivity check failed for subnet "10.0.0.0"

Result: Node connectivity check failed

If the above error appears, it can be ignored: cluvfy is testing TCP connectivity between rac1's public address and rac2's private address, which sit on different subnets.
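To satisfy yourself that the interconnect itself is healthy, a direct check against the names in this cluster's hosts file is enough:

[grid@rac1 ~]$ ping -c 3 rac2-priv
[grid@rac1 ~]$ ping -c 3 rac2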

2. Compare rpm packages, disk space, etc. with the existing node:

[grid@rac1 ~]$ cluvfy comp peer -n rac2

3. Run the overall pre-nodeadd check:

[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose

4. Add the node

As the grid user, run the following from $GRID_HOME/oui/bin.

Skip the pre-checks that addnode runs (we are not using DNS or NTP, and if the addnode pre-checks fail, the node cannot be added):

Before running it, delete small log files from the grid home, especially audit and trace logs, otherwise the copy to the new node is very slow.
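In 11.2 the bulk of these files usually accumulates in the ASM audit directory inside the grid home. A hedged cleanup example (paths assume the grid home used in this article; review what matches before deleting):

[grid@rac1 ~]$ find /oracle/grid/crs_1/rdbms/audit -name '*.aud' -mtime +7 -delete
[grid@rac1 ~]$ du -sh /oracle/grid/crs_1/log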

export IGNORE_PREADDNODE_CHECKS=Y

[grid@rac1 bin]$ /oracle/grid/crs_1/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac2-priv}"

Each script in the list below is followed by a list of nodes.

/oracle/oraInventory/orainstRoot.sh #On nodes rac2

/oracle/grid/crs_1/root.sh #On nodes rac2

To execute the configuration scripts:

  1. Open a terminal window

  2. Log in as "root"

  3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/grid/crs_1 was unsuccessful.

Please check '/tmp/silentInstall.log' for more details.

Run the scripts from the prompt above:

(1)[root@rac2 oracle]# /oracle/oraInventory/orainstRoot.sh

(2)[root@rac2 oracle]# /oracle/grid/crs_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /oracle/grid/crs_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /oracle/grid/crs_1/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to upstart

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

PRKO-2190 : VIP exists for node rac2, VIP name rac2-vip

PRKO-2420 : VIP is already started on node(s): rac2

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

5. Extend the database home to the new node (run on node 1)

As the oracle user:

cd $ORACLE_HOME/oui/bin

[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"

Run the script on node 2:

[root@rac2 oracle]# /oracle/app/product/11.2.0/db_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /oracle/app/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

6. Create the instance on node 2 (run on node 1)

For a database in a Data Guard configuration, db_unique_name must first be changed to match db_name, otherwise dbca reports an error; after the instance has been added, change db_unique_name back.
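A minimal sketch of that workaround, assuming a hypothetical original value of orcl_pri (db_unique_name is a static parameter, so each change needs scope=spfile and a restart):

SQL> alter system set db_unique_name='orcl' scope=spfile sid='*';
SQL> -- restart the database, run the dbca -addInstance step below, then revert:
SQL> alter system set db_unique_name='orcl_pri' scope=spfile sid='*';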

[oracle@rac1 bin]$ dbca -silent -addInstance -nodeList rac2 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword "oracle"

Adding instance

1% complete

2% complete

6% complete

13% complete

20% complete

27% complete

28% complete

34% complete

41% complete

48% complete

54% complete

66% complete

Completing instance management.

76% complete

100% complete

Look at the log file "/oracle/app/cfgtoollogs/dbca/orcl/orcl.log" for further details.

7. Verify the cluster status

[grid@rac2 ~]$ crs_stat -t

Name Type Target State Host
------------------------------------------------------------
ora.DATA.dg ora....up.type ONLINE ONLINE rac1

ora....ER.lsnr ora....er.type ONLINE ONLINE rac1

ora....N1.lsnr ora....er.type ONLINE ONLINE rac1

ora.OCRVOT.dg ora....up.type ONLINE ONLINE rac1

ora.asm ora.asm.type ONLINE ONLINE rac1

ora.cvu ora.cvu.type ONLINE ONLINE rac1

ora.gsd ora.gsd.type OFFLINE OFFLINE

ora....network ora....rk.type ONLINE ONLINE rac1

ora.oc4j ora.oc4j.type ONLINE ONLINE rac1

ora.ons ora.ons.type ONLINE ONLINE rac1

ora.orcl.db ora....se.type ONLINE ONLINE rac1

ora....SM1.asm application ONLINE ONLINE rac1

ora....C1.lsnr application ONLINE ONLINE rac1

ora.rac1.gsd application OFFLINE OFFLINE

ora.rac1.ons application ONLINE ONLINE rac1

ora.rac1.vip ora....t1.type ONLINE ONLINE rac1

ora....SM2.asm application ONLINE ONLINE rac2

ora....C2.lsnr application ONLINE ONLINE rac2

ora.rac2.gsd application OFFLINE OFFLINE

ora.rac2.ons application ONLINE ONLINE rac2

ora.rac2.vip ora....t1.type ONLINE ONLINE rac2

ora....ry.acfs ora....fs.type ONLINE ONLINE rac1

ora.scan1.vip ora....ip.type ONLINE ONLINE rac1

[grid@rac2 ~]$ srvctl status db -d orcl

Instance orcl1 is running on node rac1

Instance orcl2 is running on node rac2

SQL> select inst_id,status from gv$instance;

   INST_ID STATUS
---------- ------------
         3 OPEN
         1 OPEN

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE
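As an optional final check, cluvfy also provides a post-nodeadd stage that validates the new node end to end:

[grid@rac1 ~]$ cluvfy stage -post nodeadd -n rac2 -verbose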


Source: ITPUB blog, https://blog.itpub.net/22967847/viewspace-3007977/ (please credit the source when reprinting).
