Building Oracle 10g RAC
Step 1: Configure time synchronization
A The first node acts as the time server ([root@node1 etc]# vi ntp.conf)
a Configure as follows:
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
b Enable the service, check its status, then start it
[root@node1 ~]# chkconfig ntpd on
[root@node1 ~]# service ntpd status
ntpd is stopped
[root@node1 ~]# service ntpd start
Starting ntpd: [ OK ]
[root@node1 ~]# service ntpd status
ntpd (pid 4192) is running...
B Keep the second node's time consistent with the first node ([root@node2 etc]# vi ntp.conf)
a Modify the configuration as follows:
server 192.168.189.138 prefer # node1 is the time server
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
b Change the following line:
# restrict -6 default kod nomodify notrap nopeer noquery
to:
restrict -6 default kod nomodify notrap noquery
c Stop and then start the service
[root@node2 etc]# service ntpd stop
Shutting down ntpd: [ OK ]
[root@node2 etc]# service ntpd start
Starting ntpd: [ OK ]
d Verify time consistency
[root@node2 etc]# date;ntpdate 192.168.189.138
Wed May 25 10:23:51 CST 2011
25 May 10:23:51 ntpdate[3690]: the NTP socket is in use, exiting
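Note: the "NTP socket is in use" message only means ntpd is already running on node2, so ntpdate will not set the clock; the date output above is what shows the two nodes agree. To check synchronization without touching the clock, the standard ntp client tools can be queried instead, for example:
[root@node2 etc]# ntpdate -q 192.168.189.138
[root@node2 etc]# ntpq -p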
Step 2: Configure user equivalence
A On the first node
[root@node1 ~]# su - oracle
[oracle@node1 ~]$ mkdir ~/.ssh
[oracle@node1 ~]$ chmod 700 ~/.ssh
[oracle@node1 ~]$ ssh-keygen -t rsa
[oracle@node1 ~]$ ssh-keygen -t dsa
B On the second node
[root@node2 ~]# su - oracle
[oracle@node2 ~]$ mkdir ~/.ssh
[oracle@node2 ~]$ chmod 700 ~/.ssh
[oracle@node2 ~]$ ssh-keygen -t rsa
[oracle@node2 ~]$ ssh-keygen -t dsa
C On the first node
[oracle@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
D On the second node
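No further commands are needed here: the scp above already placed the combined authorized_keys on node2. As an extra precaution (not part of the original steps), strict permissions can be set on both nodes, since ssh ignores an authorized_keys file that is group- or world-writable:
[oracle@node1 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node2 ~]$ chmod 600 ~/.ssh/authorized_keys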
E Test the equivalence (each command should return the date without prompting for a password; the first run may ask to accept the host key)
[oracle@node1 ~]$ ssh node1 date
[oracle@node1 ~]$ ssh node2 date
[oracle@node1 ~]$ ssh node1-priv date
[oracle@node1 ~]$ ssh node2-priv date
F Load the keys into the agent (effective only for the current session)
On the first node:
[oracle@node1 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@node1 ~]$ /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
On the second node:
[oracle@node2 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@node2 ~]$ /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
Step 3: Configure ASM
A On the first node
[root@node1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
Marking disk "VOL1" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Marking disk "VOL2" as an ASM disk: [ OK ]
Marking disk "VOL1" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Marking disk "VOL2" as an ASM disk: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
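To double-check that each volume maps back to the intended device, the ASMLib script's querydisk command can be used (an optional check, not part of the original steps):
[root@node1 ~]# /etc/init.d/oracleasm querydisk VOL1
[root@node1 ~]# /etc/init.d/oracleasm querydisk VOL2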
B On the second node
[root@node2 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
Step 4: Install CRS
A Check which packages and prerequisites are missing
[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
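If the check reports node connectivity problems, that component can be re-verified on its own (an optional follow-up using cluvfy's component mode):
[oracle@node1 cluvfy]$ ./runcluvfy.sh comp nodecon -n node1,node2 -verbose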
B If running root.sh on the second node hits the following error:
/opt/ora10g/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0:
cannot open shared object file: No such file or directory
apply the following workaround:
a Modify the vipca file
[root@node2 opt]# vi /opt/ora10g/product/10.2.0/crs_1/bin/vipca
Find the following content:
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
#End workaround
Add a new line after the fi:
unset LD_ASSUME_KERNEL
b Modify the srvctl file
[root@node2 opt]# vi /opt/ora10g/product/10.2.0/crs_1/bin/srvctl
Find the following content:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
Similarly, add a new line after it:
unset LD_ASSUME_KERNEL
Finally, rerun root.sh on node2.
C If, after running root.sh on the two nodes, you hit the following error:
OUI-25031 some of the configuration assistants failed
run vipca manually on the second node as root:
[root@node2 bin]# pwd
/opt/ora10g/product/10.2.0/crs_1/bin
[root@node2 bin]# ./vipca
After the VIP configuration completes, go back to the installer and confirm.
D Use the following commands to check whether the CRS installation succeeded
[root@node2 ~]# /opt/ora10g/product/10.2.0/crs_1/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node2 bin]# ./olsnodes
node1
node2
[root@node1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.node1.gsd application ONLINE UNKNOWN node1
ora.node1.ons application ONLINE UNKNOWN node1
ora.node1.vip application ONLINE ONLINE node1
ora.node2.gsd application ONLINE UNKNOWN node2
ora.node2.ons application ONLINE UNKNOWN node2
ora.node2.vip application ONLINE ONLINE node2
When the output looks like the above (UNKNOWN states), simply stop and restart the service on node NODE1:
[root@node1 bin]# service init.crs stop
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
[root@node1 bin]# service init.crs start
Startup will be queued to init within 90 seconds.
Check the result:
[root@node1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora.node2.gsd application ONLINE UNKNOWN node2
ora.node2.ons application ONLINE UNKNOWN node2
ora.node2.vip application ONLINE ONLINE node2
When node2's resources likewise show UNKNOWN, stop and restart the service on node NODE2 in the same way:
[root@node2 bin]# service init.crs stop
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
[root@node2 bin]# service init.crs start
Startup will be queued to init within 90 seconds.
Check the result:
[root@node2 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
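The node applications can also be checked per node with srvctl (an alternative to crs_stat, run from the same CRS bin directory):
[root@node1 bin]# ./srvctl status nodeapps -n node1
[root@node1 bin]# ./srvctl status nodeapps -n node2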
Step 5: Install the Oracle 10g database software
Step 6: Configure the listener
Confirm that the listener configuration succeeded:
[oracle@node1 database]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
[oracle@node1 database]$
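Each listener can also be queried directly with lsnrctl. Assuming the node-specific listener names that netca typically creates for RAC (an assumption; substitute the actual names):
[oracle@node1 ~]$ lsnrctl status LISTENER_NODE1
[oracle@node2 ~]$ lsnrctl status LISTENER_NODE2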
Step 7: Create ASM
Step 8: Create the database
Verify:
[oracle@node1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE OFFLINE
ora....E1.lsnr application 0/5 0/0 ONLINE OFFLINE
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node2
ora....SM2.asm application 0/5 0/0 ONLINE OFFLINE
ora....E2.lsnr application 0/5 0/0 ONLINE OFFLINE
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node1
ora.racdb.db application 0/1 0/1 OFFLINE OFFLINE
ora....b1.inst application 0/5 0/0 ONLINE OFFLINE
ora....b2.inst application 0/5 0/0 ONLINE OFFLINE
Start and stop all the related resources at once with these commands:
[root@node2 bin]# ./crs_stop -all
[root@node2 bin]# ./crs_start -all
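Individual resources can also be managed with srvctl rather than stopping everything (a sketch assuming the database name racdb, which matches the SERVICE_NAME in the tnsnames.ora below; the ora....b1.inst/ora....b2.inst resources above are its two instances):
[oracle@node1 ~]$ srvctl start asm -n node1
[oracle@node1 ~]$ srvctl start asm -n node2
[oracle@node1 ~]$ srvctl start database -d racdb
[oracle@node1 ~]$ srvctl status database -d racdb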
Step 9: Verify
Copy the following entries into the client's hosts file (C:\Windows\System32\drivers\etc\hosts):
192.168.189.138 node1
192.168.189.139 node2
192.168.100.138 node1-vip
192.168.100.139 node2-vip
Add the following to the client's tnsnames.ora:
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE =
        (TYPE = session)
        (METHOD = basic)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
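Before testing, the new entry can be verified from the client with the standard Oracle client tools:
C:\> tnsping RACDB
C:\> sqlplus system/123456@RACDB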
A Test failover
a Connect to the RAC database
sqlplus system/123456@RACDB
b Confirm which instance the session is currently connected to
select instance_name from v$instance;
c Shut down the instance the session is connected to (connect as sysdba on that node and run):
shutdown abort;
d Wait a moment, then check the connected instance again; the session should have failed over to the surviving instance
select instance_name from v$instance;
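Whether the session really failed over can also be confirmed from the standard failover columns in v$session:
select failover_type, failover_method, failed_over from v$session where username = 'SYSTEM';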
B Test load balancing
a test.sh
#!/bin/sh
# Usage: ./test.sh <tns_alias> <iterations>
count=0
while [ $count -lt $2 ]
do
count=`expr $count + 1`
sqlplus -s username/password@$1 @test.sql
sleep 1
done
b View the instance name (this is the query test.sql runs):
select instance_name from v$instance;
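A minimal test.sql wrapping this query might look like this (an illustrative sketch; the exit is required so that sqlplus -s returns control to the loop):
-- test.sql
select instance_name from v$instance;
exit;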
c Run the script
./test.sh racdb 1000
d After the script finishes, check how many sessions were created on each instance:
select inst_id, count(*) from gv$session group by inst_id;
Source: ITPUB blog, http://blog.itpub.net/20976446/viewspace-696365/