Oracle 10gR2 (10.2.0.5) RAC Installation on Linux
Part 1: Preparation
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC
1. Pre-installation preparation
- 1.1 Install the operating system on the servers
- 1.2 Oracle installation media
- 1.3 Shared storage planning
- 1.4 Network planning
2. Host configuration
- 2.1 Use yum to install the oracle-validated package to simplify host configuration
- 2.2 Shared storage configuration
- 2.3 Configure /etc/hosts
- 2.4 Configure Oracle user equivalence
- 2.5 Create software directories
- 2.6 Configure user environment variables
- 2.7 Disable the firewall and SELinux on all nodes
- 2.8 Synchronize system time across nodes
Notes on installing Oracle 10gR2 RAC on Linux:
Part 1: Preparation
Part 2: Clusterware installation and upgrade
Part 3: Database installation and upgrade
1. Pre-installation Preparation
1.1 Install the operating system on the servers
Prepare two identically configured servers and install the same version of Linux on both. Keep the installation DVD or ISO image.
Here I use OEL 5.7, with identical filesystem layouts on both servers. The OEL 5.7 ISO image is stored on the servers for setting up a local yum repository later.
1.2 Oracle installation media
The 10.2.0.1 clusterware and database media, plus the 10.2.0.5 patch set:
-rwxr-xr-x 1 root root 302M Dec 24 13:07 10201_clusterware_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 724M Dec 24 13:08 10201_database_linux_x86_64.cpio.gz
-rwxr-xr-x 1 root root 1.2G Dec 24 13:10 p8202632_10205_Linux-x86-64.zip
Download these from support.oracle.com with your own MOS account; they only need to be uploaded to node 1.
1.3 Shared storage planning
Carve shared LUNs that both hosts can see out of the storage.
In my lab environment the shared LUNs are simulated with openfiler:
5 LUNs of 100M each, for OCR and voting disks;
3 LUNs of 10G each, for DATA;
2 LUNs of 5G each, for FRA.
For openfiler usage, see the separate note: Configuring RAC Shared Storage with Openfiler.
1.4 Network planning
A public network and a private network are needed.
Public network: physical NIC eth0 (public IP and VIP); 4 IP addresses are required.
Private network: physical NIC eth1 (private IP); 2 internal IP addresses are required.
Production servers usually have at least four NICs; the recommendation is to bond them in pairs, one bond for the public network and one for the private network, as in the sketch below.
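A minimal bonding sketch for OEL 5 (the device names, IP address, and bonding mode here are assumptions; adapt them to your NICs):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.27
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"
# each slave NIC, e.g. ifcfg-eth0 and ifcfg-eth2:
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
On OEL 5 you may also need "alias bond0 bonding" in /etc/modprobe.conf so the bonding module is loaded.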
2. Host Configuration
2.1 Use yum to install the oracle-validated package to simplify host configuration
Since the OS is OEL 5.7, the dependent package installation, kernel parameter tuning, and user/group creation can all be simplified with the oracle-validated package; see the separate note: Using yum install oracle-validated on OEL to simplify host configuration. A local-repo sketch follows.
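A minimal sketch of the local yum setup (the ISO path and mount point are assumptions):
mount -o loop /u01/media/OEL57.iso /mnt/iso
cat > /etc/yum.repos.d/local.repo <<EOF
[oel57-local]
name=OEL 5.7 local
baseurl=file:///mnt/iso/Server
gpgcheck=0
enabled=1
EOF
yum install -y oracle-validated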
2.2 Shared storage configuration
The openfiler host in my lab is at 192.168.1.12. All ten planned LUNs are mapped to the target iqn.2006-01.com.openfiler:rac10g.
[root@oradb28 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.12
192.168.1.12:3260,1 iqn.2006-01.com.openfiler:rac10g
#Log in to the iscsi target manually
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 -l
#Configure automatic login
iscsiadm -m node -T iqn.2006-01.com.openfiler:rac10g -p 192.168.1.12 --op update -n node.startup -v automatic
#Restart the iscsi service
service iscsi stop
service iscsi start
Note: when installing 10g RAC, make sure every LUN carved from the shared storage is seen under the same device name on all nodes.
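A quick way to double-check is to compare the scsi_id of each device on both nodes; the IDs must match for the same LUN (EL5 scsi_id syntax, the same form used for the udev rules below; in this lab the ten shared LUNs are sda through sdj):
for d in a b c d e f g h i j; do
  echo "sd$d: `scsi_id -g -u -s /block/sd$d`"
done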
[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8, 0 Jan 2 22:40 /dev/sda
brw-r----- 1 root disk 8, 16 Jan 2 22:40 /dev/sdb
brw-r----- 1 root disk 8, 32 Jan 2 22:40 /dev/sdc
brw-r----- 1 root disk 8, 48 Jan 2 22:40 /dev/sdd
brw-r----- 1 root disk 8, 64 Jan 2 22:40 /dev/sde
brw-r----- 1 root disk 8, 80 Jan 2 22:40 /dev/sdf
brw-r----- 1 root disk 8, 96 Jan 2 22:40 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan 2 22:40 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan 2 22:40 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan 2 22:40 /dev/sdj
[root@oradb28 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8, 0 Jan 2 22:41 /dev/sda
brw-r----- 1 root disk 8, 16 Jan 2 22:41 /dev/sdb
brw-r----- 1 root disk 8, 32 Jan 2 22:41 /dev/sdc
brw-r----- 1 root disk 8, 48 Jan 2 22:41 /dev/sdd
brw-r----- 1 root disk 8, 64 Jan 2 22:41 /dev/sde
brw-r----- 1 root disk 8, 80 Jan 2 22:41 /dev/sdf
brw-r----- 1 root disk 8, 96 Jan 2 22:41 /dev/sdg
brw-r----- 1 root disk 8, 112 Jan 2 22:41 /dev/sdh
brw-r----- 1 root disk 8, 128 Jan 2 22:41 /dev/sdi
brw-r----- 1 root disk 8, 144 Jan 2 22:41 /dev/sdj
Here sda, sdb, sdc, sdd, and sde are the 100M LUNs; create a single partition on each of the five. (In my tests, binding the whole unpartitioned disks as raw devices caused root.sh to fail after the clusterware installation with "Failed to upgrade Oracle Cluster Registry configuration"; after partitioning and binding the partitions as raw devices, root.sh completed normally.) A partitioning sketch follows.
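A non-interactive partitioning sketch (destructive; it assumes the five 100M LUNs really are /dev/sda through /dev/sde, so verify first, and run the loop on one node only):
for d in sda sdb sdc sdd sde; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d   # one primary partition spanning the LUN
done
partprobe   # then run partprobe on the other node so the new sd?1 partitions appear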
[root@oradb27 ~]# ls -lh /dev/sd*
brw-r----- 1 root disk 8, 0 Jan 3 09:36 /dev/sda
brw-r----- 1 root disk 8, 1 Jan 3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan 3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan 3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan 3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan 3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan 3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan 3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan 3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan 3 09:36 /dev/sde1
[root@oradb28 crshome_1]# ls -lh /dev/sd*
brw-r----- 1 root disk 8, 0 Jan 3 09:36 /dev/sda
brw-r----- 1 root disk 8, 1 Jan 3 09:36 /dev/sda1
brw-r----- 1 root disk 8, 16 Jan 3 09:36 /dev/sdb
brw-r----- 1 root disk 8, 17 Jan 3 09:36 /dev/sdb1
brw-r----- 1 root disk 8, 32 Jan 3 09:36 /dev/sdc
brw-r----- 1 root disk 8, 33 Jan 3 09:36 /dev/sdc1
brw-r----- 1 root disk 8, 48 Jan 3 09:36 /dev/sdd
brw-r----- 1 root disk 8, 49 Jan 3 09:36 /dev/sdd1
brw-r----- 1 root disk 8, 64 Jan 3 09:36 /dev/sde
brw-r----- 1 root disk 8, 65 Jan 3 09:36 /dev/sde1
1) Use udev to bind raw devices for OCR and the voting disks
Edit the rules file and append the following:
# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sda1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw*", OWNER=="oracle", GROUP=="oinstall", MODE=="0660"
Run start_udev:
[root@oradb27 rules.d]# start_udev
Starting udev: [ OK ]
[root@oradb27 rules.d]# ls -l /dev/raw*
crw-rw---- 1 oracle oinstall 162, 0 Jan 2 22:37 /dev/rawctl
/dev/raw:
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan 2 23:11 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan 2 23:11 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan 2 23:11 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan 2 23:11 raw4
crw-rw---- 1 oracle oinstall 162, 5 Jan 2 23:11 raw5
[root@oradb27 rules.d]#
Copy the 60-raw.rules file to node 2:
[root@oradb27 rules.d]# scp /etc/udev/rules.d/60-raw.rules oradb28:/etc/udev/rules.d/
Run start_udev on node 2 as well.
Note: if the raw devices were used before, their headers may need to be wiped with dd:
dd if=/dev/zero of=/dev/raw/raw1 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw2 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw3 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw4 bs=1048576 count=10
dd if=/dev/zero of=/dev/raw/raw5 bs=1048576 count=10
2) Use udev to bind ASM devices for the DATA and FRA disk groups. The rule lines can be generated with this loop:
for i in f g h i j;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
done
The actual session looks like this:
[root@oradb27 rules.d]# for i in f g h i j;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455279366c36366a2d5a4243752d58394a33", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45525453586652542d67786f682d594c4a66", NAME="asm-diskg", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c455232586c3151572d62504e412d3343547a", NAME="asm-diskh", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c45527061334151682d4666656d2d5a6a4c67", NAME="asm-diski", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="14f504e46494c4552495649757a352d675251532d47744353", NAME="asm-diskj", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@oradb27 rules.d]#
Paste the generated lines into a new rules file:
[root@oradb27 rules.d]# vi 99-oracle-asmdevices.rules
[root@oradb27 rules.d]# start_udev
Starting udev: [ OK ]
[root@oradb27 rules.d]# ls -lh /dev/asm*
brw-rw---- 1 oracle oinstall 8, 80 Jan 2 23:18 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8, 96 Jan 2 23:18 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan 2 23:18 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan 2 23:18 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan 2 23:18 /dev/asm-diskj
#Copy 99-oracle-asmdevices.rules to node 2 and run start_udev there
[root@oradb27 rules.d]# scp 99-oracle-asmdevices.rules oradb28:/etc/udev/rules.d/99-oracle-asmdevices.rules
[root@oradb28 ~]# start_udev
Starting udev: [ OK ]
[root@oradb28 ~]# ls -l /dev/asm*
brw-rw---- 1 oracle oinstall 8, 80 Jan 2 23:20 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8, 96 Jan 2 23:20 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 Jan 2 23:20 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 Jan 2 23:20 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 Jan 2 23:20 /dev/asm-diskj
2.3 Configure /etc/hosts
Populate /etc/hosts on node 1 according to the network plan:
#public ip
192.168.1.27 oradb27
192.168.1.28 oradb28
#private ip
10.10.10.27 oradb27-priv
10.10.10.28 oradb28-priv
#virtual ip
192.168.1.57 oradb27-vip
192.168.1.58 oradb28-vip
Then scp the /etc/hosts file to node 2:
scp /etc/hosts oradb28:/etc/
2.4 Configure Oracle user equivalence
#Run on all nodes (as the oracle user):
ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
#Run on node 1:
ssh 192.168.1.27 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh 192.168.1.28 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys 192.168.1.28:~/.ssh/
#Verify ssh equivalence on all nodes:
ssh 192.168.1.27 date;ssh 192.168.1.28 date;
ssh oradb27 date;ssh oradb28 date;
ssh oradb27-priv date;ssh oradb28-priv date;
If anything about the ssh mutual-trust setup is unclear, see the separate note on configuring SSH mutual trust on Linux.
2.5 Create software directories
mkdir -p /u01/app/oracle/product/10.2.0.5/dbhome_1
mkdir -p /u01/app/oracle/product/10.2.0.5/crshome_1
chown -R oracle:oinstall /u01/app
2.6 Configure user environment variables
Node 1: vi /home/oracle/.bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac1
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""
Node 2: vi /home/oracle/.bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0.5/crshome_1
export ORACLE_SID=jyrac2
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
alias sql="sqlplus \"/as sysdba\""
2.7 Disable the firewall and SELinux on all nodes
On each node, check and disable the firewall and SELinux:
service iptables status
service iptables stop
chkconfig iptables off
getenforce
setenforce 0
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.
2.8 Synchronize system time across nodes
service ntpd stop
date
#If the time is wrong, set it with the following syntax (MMDDhhmmYYYY)
date 072310472015   # set the date to 2015-07-23 10:47:00
hwclock -w   # write the system time to the hardware clock
hwclock -r   # read the hardware clock back to verify
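If an NTP server is reachable, a one-off sync is a simpler alternative (the server address here is an assumption):
ntpdate 192.168.1.1   # sync system time from the NTP server
hwclock -w            # persist it to the hardware clock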
At this point, the host-side preparation work is complete.
Part 2: Clusterware Installation and Upgrade
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC
3. Install Clusterware
- 3.1 Unpack the clusterware installation media
- 3.2 Install clusterware
- 3.3 Run the scripts as root when prompted
- 3.4 Run vipca manually (may not be needed)
4. Upgrade Clusterware
- 4.1 Unpack the patch set
- 4.2 Upgrade clusterware
- 4.3 Run the scripts as root when prompted
3. Install Clusterware
3.1 Unpack the clusterware installation media
Grant ownership of the directory holding the Oracle installation media to the oracle user:
[root@oradb27 media]# chown -R oracle:oinstall /u01/media/
Unpack the media as the oracle user:
[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio
Run the pre-installation check:
[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh
No OraCM running
3.2 Install clusterware
Start the clusterware installation over an X session (Xmanager; XQuartz on macOS):
[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini
Change the following section,
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
appending redhat-5, i.e.:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller
3.3 Run the scripts as root when prompted
Run on node 1:
#First attempt: the five LUNs /dev/sd{a,b,c,d,e} had not yet been partitioned
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
#After partitioning each of the five LUNs into sd{a,b,c,d,e}1, the script succeeds
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oradb27
CSS is inactive on these nodes.
oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]#
Oracle's official solution for this error is in the MOS note: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):
Before running the root.sh on the first node in the cluster do the following:
- Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
- Do the following steps as stated in the patch README to fix the problem:
Note: clsfmt.bin need only be replaced on the 1st node of the cluster
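The README steps amount to roughly the following (a sketch only; the location of clsfmt.bin inside the patch is an assumption, so follow the actual README):
# on node 1 only, before running root.sh
cp $ORA_CRS_HOME/bin/clsfmt.bin $ORA_CRS_HOME/bin/clsfmt.bin.bak   # back up the original
cp <patch_dir>/clsfmt.bin $ORA_CRS_HOME/bin/clsfmt.bin             # patched binary from Patch 4679769
chmod 755 $ORA_CRS_HOME/bin/clsfmt.bin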
Run on node 2:
[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oradb27
oradb28
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@oradb28 crshome_1]#
To fix the error above, edit the vipca and srvctl files under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:
[root@oradb28 bin]# ls -l vipca
-rwxr-xr-x 1 oracle oinstall 5343 Jan 3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl
-rwxr-xr-x 1 oracle oinstall 5828 Jan 3 09:44 srvctl
adding the following line after the block that sets LD_ASSUME_KERNEL:
unset LD_ASSUME_KERNEL
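For reference, the relevant section of vipca looks roughly like this (a sketch of the 10.2 scripts, not a verbatim copy; srvctl is analogous):
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL   # <-- the added line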
Then re-run /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
No further errors were reported, but it also did not show vipca running to completion.
3.4 Run vipca manually (may not be needed)
If vipca ran successfully in step 3.3, this step is unnecessary;
if it did not, run vipca manually on the last node.
Running vipca manually here hit another error:
[root@oradb28 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Check the network interface information and register it manually:
[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0 192.168.1.0
eth1 10.10.10.0
[root@oradb28 bin]# ifconfig
eth0 Link encap:Ethernet HWaddr 06:CB:72:01:07:88
inet addr:192.168.1.28 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2196870487 (2.0 GiB) TX bytes:43268497 (41.2 MiB)
eth1 Link encap:Ethernet HWaddr 22:1A:5A:DE:C1:21
inet addr:10.10.10.28 Bcast:10.10.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1315035 (1.2 MiB) TX bytes:1219689 (1.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:65167 (63.6 KiB) TX bytes:65167 (63.6 KiB)
[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage
Name:
oifcfg - Oracle Interface Configuration Tool.
Usage: oifcfg iflist [-p [-n]]
oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
oifcfg [-help]
<nodename> - name of the host, as known to a communications network
<if_name> - name by which the interface is configured in the system
<subnet> - subnet address of the interface
<if_type> - type of the interface { cluster_interconnect | public | storage }
[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0 192.168.1.0 global public
[root@oradb28 bin]#
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
[root@oradb28 bin]#
Once oifcfg getif returns the interface information, running vipca again succeeds.
Returning to the clusterware installer screen, it then also reports success.
At this point the cluster status should be completely normal:
[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 bin]$
[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb28 ~]$
4. Upgrade Clusterware
4.1 Unpack the patch set
[oracle@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[oracle@oradb27 media]$ cd Disk1/
[oracle@oradb27 Disk1]$ pwd
/u01/media/Disk1
4.2 Upgrade clusterware
Start the clusterware upgrade over an X session (XQuartz):
ssh -X oracle@192.168.1.27
[oracle@oradb27 Disk1]$ ./runInstaller
During the pre-installation checks of the upgrade, one kernel parameter did not meet the requirement:
Checking for rmem_default=1048576; found rmem_default=262144. Failed <<<<
Adjust /etc/sysctl.conf accordingly and apply it with sysctl -p.
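A minimal sketch of the fix (append to /etc/sysctl.conf on all nodes, then apply):
# /etc/sysctl.conf
net.core.rmem_default = 1048576
# apply without reboot
sysctl -p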
4.3 Run the scripts as root when prompted
1. Log in as the root user.
2. As the root user, perform the following tasks:
a. Shutdown the CRS daemons by issuing the following command:
/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
b. Run the shell script located at:
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
This script will automatically start the CRS daemons on the
patched node upon completion.
3. After completing this procedure, proceed to the next node and repeat.
That is, run the following on each node in turn:
/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Run on node 1:
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb27 bin]#
Run on node 2:
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb28 bin]#
The upgrade succeeded; confirm that the CRS active version is 10.2.0.5 and the cluster status is normal:
[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb27 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 ~]$
This completes the clusterware installation (10.2.0.1) and upgrade (10.2.0.5).
Part 3: Database Installation and Upgrade
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC
5. Install the Database Software
- 5.1 Unpack the installation media
- 5.2 Install the database software
- 5.3 Run the script as root
6. Upgrade the Database Software
- 6.1 Upgrade the database software
- 6.2 Run the script as root
7. Create the Database
- 7.1 Create the listener
- 7.2 Create ASM
- 7.3 Create the database
Oracle 10gR2 RAC installation guide on Linux:
Part 1: Preparation
Part 2: Clusterware installation and upgrade
Part 3: Database installation and upgrade
5. Install the Database Software
5.1 Unpack the installation media
[oracle@oradb27 ~]$ cd /u01/media/
[oracle@oradb27 media]$ gunzip 10201_database_linux_x86_64.cpio.gz
[oracle@oradb27 media]$ cpio -idmv < 10201_database_linux_x86_64.cpio
[oracle@oradb27 media]$ cd database/
[oracle@oradb27 database]$ ls
doc install response runInstaller stage welcome.html
[oracle@oradb27 database]$ cd install/
[oracle@oradb27 install]$ ls
addLangs.sh addNode.sh images lsnodes oneclick.properties oraparam.ini oraparamsilent.ini resource response unzip
[oracle@oradb27 install]$ vi oraparam.ini
#Find the "[Certified Versions]" section, append redhat-5 to the end of the line, then save and quit
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
5.2 Install the database software
Connect to node 1 with XQuartz and launch the graphical database software installation.
[oracle@oradb27 database]$ pwd
/u01/media/database
[oracle@oradb27 database]$ ls
doc install response runInstaller stage welcome.html
[oracle@oradb27 database]$ ./runInstaller
5.3 Run the script as root
As root, run the following script on all nodes when prompted:
/u01/app/oracle/product/10.2.0.5/dbhome_1/root.sh
Then continue to the next step; the installer reports completion:
The following J2EE Applications have been deployed and are accessible at the URLs listed below.
iSQL*Plus URL:
http://oradb27:5561/isqlplus
iSQL*Plus DBA URL:
http://oradb27:5561/isqlplus/dba
This completes the 10.2.0.1 database software installation.
6. Upgrade the Database Software
6.1 Upgrade the database software
[oracle@oradb27 Disk1]$ pwd
/u01/media/Disk1
[oracle@oradb27 Disk1]$ ls
install patch_note.htm response runInstaller stage
[oracle@oradb27 Disk1]$ ./runInstaller
6.2 Run the script as root on all nodes
As root, run the following on all nodes:
/u01/app/oracle/product/10.2.0.5/dbhome_1/root.sh
After the script completes, return to the installer GUI to finish:
The iSQL*Plus URL is:
http://oradb27:5560/isqlplus
The iSQL*Plus DBA URL is:
http://oradb27:5560/isqlplus/dba
This completes the upgrade of the database software to 10.2.0.5.
7. Create the Database
7.1 Create the listener
Create the listener with netca.
Afterwards, the cluster resources include the new listener resources:
[oracle@oradb27 Disk1]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....27.lsnr application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....28.lsnr application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 Disk1]$
7.2 Create ASM
Use dbca to configure the ASM disk groups, adding the DATA and FRA disk groups.
Creating the disk groups also creates the ASM instances; afterwards, the cluster resources include the ASM instance resources:
[oracle@oradb27 Disk1]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE oradb27
ora....27.lsnr application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE oradb28
ora....28.lsnr application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
7.3 Create the database
Create the database with dbca.
Afterwards, the cluster resources include the database instance resources and the database resource:
[oracle@oradb27 Disk1]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.jyrac.db application 0/0 0/1 ONLINE ONLINE oradb28
ora....c1.inst application 0/5 0/0 ONLINE ONLINE oradb27
ora....c2.inst application 0/5 0/0 ONLINE ONLINE oradb28
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE oradb27
ora....27.lsnr application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE oradb28
ora....28.lsnr application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 Disk1]$
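As a compact alternative check, srvctl can report the instance status (the database name jyrac matches the ora.jyrac.db resource above):
srvctl status database -d jyrac   # should report instances jyrac1 and jyrac2 running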
Verify the database version and availability:
[oracle@oradb27 Disk1]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Tue Jan 3 20:59:21 2017
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> select open_mode from v$database;
OPEN_MODE
----------
READ WRITE
SQL>
SQL> col comp_name for a45
SQL> set linesize 120
SQL> select comp_name, status, version from dba_registry;
COMP_NAME STATUS VERSION
--------------------------------------------- ---------------------- ------------------------------
Spatial VALID 10.2.0.5.0
Oracle interMedia VALID 10.2.0.5.0
OLAP Catalog VALID 10.2.0.5.0
Oracle Enterprise Manager VALID 10.2.0.5.0
Oracle XML Database VALID 10.2.0.5.0
Oracle Text VALID 10.2.0.5.0
Oracle Expression Filter VALID 10.2.0.5.0
Oracle Rules Manager VALID 10.2.0.5.0
Oracle Workspace Manager VALID 10.2.0.5.0
Oracle Data Mining VALID 10.2.0.5.0
Oracle Database Catalog Views VALID 10.2.0.5.0
COMP_NAME STATUS VERSION
--------------------------------------------- ---------------------- ------------------------------
Oracle Database Packages and Types VALID 10.2.0.5.0
JServer JAVA Virtual Machine VALID 10.2.0.5.0
Oracle XDK VALID 10.2.0.5.0
Oracle Database Java Packages VALID 10.2.0.5.0
OLAP Analytic Workspace VALID 10.2.0.5.0
Oracle OLAP API VALID 10.2.0.5.0
Oracle Real Application Clusters VALID 10.2.0.5.0
18 rows selected.
This completes the Oracle 10gR2 (10.2.0.5) RAC installation series on Linux.