Silent installation of Oracle 11g RAC on virtual machines

Posted by ahfhuang on 2017-03-10

I. Prepare two Linux machines (two NICs, several shared disks)
1. Linux version
$ uname -a
Linux rac1 2.6.32-642.15.1.el6.x86_64 #1 SMP Fri Feb 24 14:31:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
2. Prepare the Oracle installation packages
Official download page: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
p13390677_112040_Linux-x86-64_1of7.zip
p13390677_112040_Linux-x86-64_2of7.zip
p13390677_112040_Linux-x86-64_3of7.zip

II. Pre-installation preparation
1. Install the necessary packages with yum (for reference only; run on every node), as follows:
$ cd /etc/yum.repos.d/
$ mv  CentOS-Base.repo  CentOS-Base.repo.bak
Switch the repository to the Aliyun mirror:
$ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
Clear the yum cache:
$ yum clean all
Cache the repository metadata locally to speed up searching and installing packages:
$ yum makecache

$ yum install -y binutils*
$ yum install -y compat-libstdc*
$ yum install -y compat-libcap*
$ yum install -y elfutils-libelf*
$ yum install -y gcc*
$ yum install -y glibc*
$ yum install -y ksh*
$ yum install -y libaio*
$ yum install -y libgcc*
$ yum install -y libstdc*
$ yum install -y make*
$ yum install -y sysstat*
$ yum install -y libXp*
$ yum install -y glibc-kernheaders*
$ yum install -y libaio-devel*
$ yum install -y unixODBC*
$ yum install -y unixODBC-devel*
$ yum install -y pdksh*
$ yum install -y rsh*
$ yum install -y cvuqdisk*
$ yum install -y compat-libcap1
$ yum install -y libcap*
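The per-package yum calls above can be collapsed into one transaction, which is faster and lets yum resolve dependencies in a single pass. A minimal sketch (package list assembled from the commands above; print-only, so pipe the output to a root shell to actually install):

```shell
# Assemble the prerequisite list from the per-package commands above
# and emit a single yum transaction instead of ~25 separate ones.
pkgs="binutils compat-libstdc++-33 compat-libcap1 elfutils-libelf \
elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel ksh \
libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat \
libXp unixODBC unixODBC-devel"
# Print the consolidated command; run it as root to install.
echo "yum install -y $pkgs"
```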

Note: later on, the GI and Oracle installer prerequisite checks will report any packages still missing; download and install them at that point. A package may conflict with one already installed; either remove the old package or force the installation.
$ rpm -vih pdksh-5.2.14-37.el5_8.1.x86_64.rpm
warning: pdksh-5.2.14-37.el5_8.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY
error: Failed dependencies:
 pdksh conflicts with ksh-20120801-33.el6.x86_64
$ rpm -e ksh-20120801-33.el6.x86_64
$ rpm -vih pdksh-5.2.14-37.el5_8.1.x86_64.rpm
warning: pdksh-5.2.14-37.el5_8.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY
Preparing...                ########################################### [100%]
   1:pdksh                  ########################################### [100%]

The cvuqdisk-1.0.9-1.rpm package can only be installed successfully after the oinstall group has been created:
[root@localhost ~]# rpm -vih cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
Group oinstall not found in /etc/group
oinstall : Group doesn't exist.
Please define environment variable CVUQDISK_GRP with the correct group to be used
error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1
error:   install: %pre scriptlet failed (2), skipping cvuqdisk-1.0.9-1

If you run into the following problem:
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: PackageKit
    Memory :  57 M RSS (365 MB VSZ)
    Started: Fri Mar 10 03:42:13 2017 - 30:41 ago
force-release the yum lock by removing its pid file:
$ rm -f /var/run/yum.pid

Verify the required packages:
$ yum install binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio-devel libaio libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel pdksh ksh compat-libcap1

2. Disable the firewall (run on every node)
Temporarily (lost after a reboot):
$ service iptables stop
Permanently:
$ chkconfig iptables off

3. Disable SELinux (run on every node)
$ sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
Take effect immediately:
$ setenforce 0

4. Change the hostname
rac1-> sed -i "s/HOSTNAME=localhost.localdomain/HOSTNAME=rac1/" /etc/sysconfig/network
Take effect immediately:
rac1-> hostname rac1
On the second node:
rac2-> sed -i "s/HOSTNAME=localhost.localdomain/HOSTNAME=rac2/" /etc/sysconfig/network
Take effect immediately:
rac2-> hostname rac2

5. Modify the /etc/pam.d/login file (run on every node)
This prevents local logins from being bounced back to the login prompt:
$ echo "session required pam_limits.so" >> /etc/pam.d/login

6. Modify the sysctl.conf file (run on every node; the lines commented with # below are this machine's defaults)
#kernel.shmmax = 68719476736
#kernel.shmall = 4294967296
$ cat >> /etc/sysctl.conf <<EOF
fs.file-max = 6815744
fs.aio-max-nr = 1048576
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

Note: what the parameters mean
  shmmax: the maximum size, in bytes, of a single shared memory segment. The default (32 MB) is far too low for Oracle; set it larger than SGA_MAX_SIZE.
  shmall: the total amount of shared memory the system may use, in pages. With Linux's 4 KB page size, the default is 2097152.
  Note: shmall is the total shared memory allowed; shmmax is the limit for a single segment. Both can be set to 90% of physical RAM. For example, with 16 GB of RAM,
      16*1024*1024*1024*90% = 15461882265, so shmall = 15461882265/4k (see getconf PAGESIZE) = 3774873.
  shmmin: the minimum size of a shared memory segment
  shmmni: the system-wide maximum number of shared memory segments; the default is 4096.
  shmseg: the maximum number of shared memory segments per process
  fs.aio-max-nr: the maximum number of outstanding asynchronous I/O requests.
  fs.file-max: the maximum number of file handles the system can have open at once; this directly limits the maximum number of concurrent connections.
  sem: the semaphore settings.
  net.ipv4.ip_local_port_range: the range of local ports used for UDP and TCP connections, i.e. the IPv4 ports available to applications.
  net.core.rmem_max: the maximum socket receive buffer size
  net.core.wmem_max: the maximum socket send buffer size

Apply the settings:
$ sysctl -p
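The 90%-of-RAM sizing rule from the note above can be reproduced with a few lines of shell arithmetic. This is an illustrative sketch: the 16 GB memory size and 4 KB page size are assumed so that the result matches the worked example; on a real node, take mem_bytes from /proc/meminfo and the page size from getconf PAGESIZE.

```shell
# Sketch of the 90%-of-RAM shmmax/shmall sizing from the note above.
mem_bytes=$((16 * 1024 * 1024 * 1024))   # assumed: 16 GB of RAM
page_size=4096                           # getconf PAGESIZE on typical x86_64
shmmax=$((mem_bytes * 90 / 100))         # one segment may use up to 90% of RAM
shmall=$((shmmax / page_size))           # total shared memory, in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```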

7. Create the users (run on every node)
7.1 Create the groups and users
$ /usr/sbin/groupadd -g 501 oinstall
$ /usr/sbin/groupadd -g 502 dba
$ /usr/sbin/groupadd -g 507 oper
$ /usr/sbin/groupadd -g 504 asmadmin
$ /usr/sbin/groupadd -g 505 asmoper
$ /usr/sbin/groupadd -g 506 asmdba
$ /usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
$ /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
$ cat /etc/group

7.2 Set the user passwords:
$ passwd grid
$ passwd oracle
$ id oracle
$ id grid

7.3 Create the directories and set ownership
$ mkdir -p /u01/app/grid
$ mkdir -p /u01/app/11.2.0/grid
$ mkdir -p /u01/app/oraInventory
$ chown -R grid:oinstall /u01/app/oraInventory
$ chown -R grid:oinstall /u01/app
$ mkdir -p /u01/app/oracle
$ chown -R oracle:oinstall /u01/app/oracle
$ chmod -R 775 /u01

8. Append the following to /etc/profile (run on every node)
$ vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
   if [ $SHELL = "/bin/ksh" ]; then
      ulimit -p 16384
      ulimit -n 65536
   else
      ulimit -u 16384 -n 65536
   fi
  umask 022
fi
export GRID_HOME=/u01/app/11.2.0/grid
export PATH=$GRID_HOME/bin:$PATH

9. Add the following to /etc/security/limits.conf (run on every node):
$ vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 2048
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 2048
oracle hard nofile 65536
oracle soft stack 10240

Note: soft is the value in effect for the current session; hard is the maximum value that can be set.
   grid soft nproc 2047   # (soft) limits user grid to at most 2047 processes
   grid hard nproc 16384  # (hard) limits user grid to at most 16384 processes
   nofile -- maximum number of open files
   stack  -- maximum stack size
   nproc  -- maximum number of processes
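Before the installer's prerequisite checks run, it can help to confirm that every expected limits.conf entry is actually present. The helper below is hypothetical, demonstrated here against a small sample file rather than the real /etc/security/limits.conf; point limits_file at the real path on an actual node.

```shell
# Hypothetical sanity check for the limits.conf entries from step 9.
limits_file=./limits.conf.sample        # stand-in for /etc/security/limits.conf
cat > "$limits_file" <<'EOF'
grid soft nproc 2047
grid hard nproc 16384
oracle soft nofile 2048
EOF

check_limit() {
  # args: user type item value -> prints "ok: ..." or "MISSING: ..."
  if grep -qE "^$1[[:space:]]+$2[[:space:]]+$3[[:space:]]+$4\$" "$limits_file"; then
    echo "ok: $1 $2 $3 $4"
  else
    echo "MISSING: $1 $2 $3 $4"
  fi
}

check_limit grid soft nproc 2047
check_limit oracle soft nofile 2048
check_limit oracle hard nofile 65536   # not in the sample file above
```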

10. Create the disks (VMware)
10.1 Create the disk files
Open a Windows command prompt
$ cmd
cd to the directory where vmware-vdiskmanager.exe is installed and run:
E:\>cd vm
E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr1.vmdk
Creating disk 'E:\CentOS1\sharedisk\ocr1.vmdk'
  Create: 100% done.
Virtual disk creation successful.

E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr2.vmdk
E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr3.vmdk
E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr4.vmdk
E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\data1.vmdk
E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\data2.vmdk
E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\fra.vmdk

10.2 Attach the disks to the virtual machines

Edit VM 1's settings -> Add -> Hard Disk -> Next -> Use an existing virtual disk -> File (choose a disk file created above) -> Finish -> Keep existing format
Copy the shared-disk configuration below from node 1 into the other node's .vmx file as well: vi E:\vm\CentOS2.vmx

scsi1:0.present = "TRUE"
scsi1:0.fileName = "E:\CentOS1\sharedisk\ocr1.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "E:\CentOS1\sharedisk\ocr2.vmdk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "E:\CentOS1\sharedisk\ocr3.vmdk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "E:\CentOS1\sharedisk\ocr4.vmdk"
scsi1:4.present = "TRUE"
scsi1:4.fileName = "E:\CentOS1\sharedisk\data1.vmdk"
scsi1:5.present = "TRUE"
scsi1:5.fileName = "E:\CentOS1\sharedisk\fra.vmdk"
scsi1:6.present = "TRUE"
scsi1:6.fileName = "E:\CentOS1\sharedisk\data2.vmdk"
disk.locking="false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

Edit VM 2's settings -> Add -> Hard Disk -> Next -> Use an existing virtual disk -> File (choose a disk file created above) -> Finish -> Keep existing format


10.3 Partition the disks; they do not need to be mounted (run on every node)
$ fdisk -l
$ fdisk /dev/sdb
$ fdisk /dev/sdc
$ fdisk /dev/sdd
$ fdisk /dev/sde
$ fdisk /dev/sdf
$ fdisk /dev/sdg
$ fdisk /dev/sdh
Key sequence: m -> n -> p -> 1 -> Enter -> Enter -> p -> w
-------- or use the loop below instead:
for i in b c d e f g h
do
fdisk /dev/sd$i <<EOF
m
n
p
1


p
w
EOF
done
-----------

11. Configure and verify the network (add one extra network adapter to each server, in host-only mode, for the private interconnect; run on all nodes)
11.1 Update the hosts configuration (run on every node)
cat >> /etc/hosts <<EOF
192.168.91.140  rac1.burton.com      rac1
192.168.214.130 rac1-priv.burton.com rac1-priv
192.168.91.152  rac1-vip.burton.com  rac1-vip
192.168.91.142  rac2.burton.com      rac2
192.168.214.131 rac2-priv.burton.com rac2-priv
192.168.91.153  rac2-vip.burton.com  rac2-vip
192.168.91.154  scan-ip.burton.com   scan-ip
EOF

11.2 Edit the NIC configuration (on the other nodes, only IPADDR and HWADDR need to change)
$ ifconfig
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
BROADCAST=192.168.91.255
IPADDR=192.168.91.140
NETMASK=255.255.255.0
ONBOOT="yes"
TYPE="Ethernet"
GATEWAY=192.168.91.2
HWADDR="00:0C:29:2B:0C:0B"
USERCTL="no"
IPV6INIT="no"
PEERDNS="yes"
DNS1=114.114.114.114
DNS2=8.8.8.8

$ vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
BROADCAST=192.168.214.255
IPADDR=192.168.214.130
NETMASK=255.255.255.0
ONBOOT="yes"
TYPE="Ethernet"
HWADDR="00:0C:29:2B:0C:15"
USERCTL="no"
IPV6INIT="no"
PEERDNS="yes"
DNS1=114.114.114.114
DNS2=8.8.8.8

GATEWAY: see route -n
HWADDR: see cat /etc/udev/rules.d/70-persistent-net.rules

11.3 Restart networking (run on every node)
$ service network restart

12. Set up SSH mutual trust (for both the oracle and grid users)
12.1 Generate the user's SSH keys (run on every node)
$ su - grid
$ mkdir -p ~/.ssh
$ cd ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
The following only needs to be done on the first node.
The public keys are collected in the authorized_keys file; write the local node's keys first:
$ cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub>>~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.91.142)' can't be established.
RSA key fingerprint is bb:41:f5:d0:5f:84:8a:0d:90:a5:29:cb:0c:b1:12:cf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.91.142' (RSA) to the list of known hosts.
$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@rac2's password:
$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
grid@rac2's password:
authorized_keys                               100% 1980     1.9KB/s   00:00 

12.2 Verify on both nodes
$ ssh rac1 date
$ ssh rac2 date
$ ssh rac1-priv date
$ ssh rac2-priv date
Repeat the same steps as the oracle user:
su - oracle
.....
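The verification in 12.2 amounts to probing every public and private alias. A sketch that builds the probe list (print-only here; on a real node, drop the echo and confirm that each command returns a date with no password prompt, for both grid and oracle):

```shell
# Build the equivalence probes from step 12.2: one ssh date per alias.
# BatchMode makes ssh fail instead of prompting, which is exactly what
# the Oracle installers require.
hosts="rac1 rac2 rac1-priv rac2-priv"
for h in $hosts; do
  echo "ssh -o BatchMode=yes $h date"
done
```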

13. Install oracleasm (run on every node)
Upload the corresponding rpm packages and install them.
Dependency issue: error: Failed dependencies:
        oracleasm >= 1.0.4 is needed by oracleasmlib-2.0.4-1.el6.x86_64
$ yum install kmod-oracleasm* -y
$ yum install oracleasm* -y
$ yum localinstall -y kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
$ yum localinstall -y oracleasmlib-2.0.4-1.el6.x86_64.rpm
$ yum localinstall -y oracleasm-support-2.1.8-1.el6.x86_64.rpm
$ rpm -vih oracleasm*
Check the installed packages:
$ rpm -qa|grep oracleasm
kmod-oracleasm-2.0.8-13.el6_8.x86_64
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64

14. Set up the shared disks
14.1 Run on both nodes (as root)
It is best to reboot first; without a reboot, some parameters may not yet be in effect and the ASM setup can fail.
oracleasm configure -i  or  /etc/init.d/oracleasm configure
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y

14.2 Start the driver (if it fails, reboot the system and try again)
$ /etc/init.d/oracleasm start
Check the log:
$ cat /var/log/oracleasm

14.3 Create the shared disks
rac1-> service oracleasm createdisk FRA /dev/sdb1
rac1-> service oracleasm createdisk DATA1 /dev/sdc1
rac1-> service oracleasm createdisk DATA2 /dev/sdd1
rac1-> service oracleasm createdisk OCR_VOTE1 /dev/sde1
rac1-> service oracleasm createdisk OCR_VOTE2 /dev/sdf1
rac1-> service oracleasm createdisk OCR_VOTE3 /dev/sdg1
rac1-> service oracleasm createdisk OCR_VOTE4 /dev/sdh1
Run on the other node:
rac2-> oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRA"
Instantiating disk "DATA1"
Instantiating disk "DATA2"
Instantiating disk "OCR_VOTE1"
Instantiating disk "OCR_VOTE2"
Instantiating disk "OCR_VOTE3"
Instantiating disk "OCR_VOTE4"
Help:
$ oracleasm -h
List the disks:
$ /etc/init.d/oracleasm listdisks
DATA1
DATA2
FRA
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3
OCR_VOTE4

15. Disable the NTP server (run on every node)
Stop the ntp time synchronization service; this setting is required for the time synchronization check (new in 11gR2):
$ /sbin/service ntpd stop
$ chkconfig ntpd off
$ mv /etc/ntp.conf /etc/ntp.conf.bak
$ rm /var/run/ntpd.pid

Note: here we rely on Oracle RAC's built-in CTSS clock synchronization. For notes on switching from CTSS to NTP synchronization, see:
    http://blog.itpub.net/25116248/viewspace-1152989/
    In any case, first set the time zone and synchronize the clocks of the two servers.

16. Adjusting /dev/shm (skip if your setup does not need it)
Handling insufficient /dev/shm shared memory.
Solution:
To enlarge /dev/shm, change this default line in /etc/fstab:
tmpfs /dev/shm tmpfs defaults 0 0
to:
tmpfs /dev/shm tmpfs defaults,size=2048m 0 0
The size parameter also accepts G as a unit: size=2G.
Remount /dev/shm for the change to take effect:
$ mount -o remount /dev/shm
or:
$ umount /dev/shm
$ mount -a
Verify the change immediately with "df -h".
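A common reason to resize /dev/shm is that an 11g instance using MEMORY_TARGET fails with ORA-00845 when /dev/shm is smaller than MEMORY_TARGET. A sketch of deriving the fstab line from a target size (the 1600 MB figure and the 25% headroom are assumptions for illustration, not values from this install):

```shell
# Derive an fstab size= option for /dev/shm from an assumed MEMORY_TARGET.
# /dev/shm must be at least MEMORY_TARGET or the instance raises ORA-00845.
memory_target_mb=1600                               # assumed target
shm_mb=$(( (memory_target_mb * 125 + 99) / 100 ))   # ~25% headroom, rounded up
echo "tmpfs /dev/shm tmpfs defaults,size=${shm_mb}m 0 0"
```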

17. Set the environment variables
17.1 Environment variables for the oracle user on node 1
oracle user (on node 2, use ORACLE_SID=burton2)
$ su - oracle
$ vi ~/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=burton1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
umask 022
$ source ~/.bash_profile

17.2 Environment variables for the grid user on node 1
grid user (on node 2, use ORACLE_SID=+ASM2)
$ su - grid
$ vi ~/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:$PATH
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export PATH=$ORACLE_HOME/bin:$PATH
umask 022
$ source ~/.bash_profile

18. Grant X display access to the oracle and grid users (whenever possible, log in directly as the executing user rather than switching over from root)
Run as root:
$ xhost +SI:localuser:grid
$ xhost +SI:localuser:oracle

III. Install the GI software and create the disk groups (run on the primary node)
1. Use the CVU tool to run the following command and validate the pre-installation environment:
rac1-> cd /home/grid/grid
rac1-> ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -r 11gR2 -verbose

Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
  rac1                                  yes
Result: Node reachability check passed from node "rac1"

.............

Checking user equivalence...
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.

Note: you may hit issue 1 here: File "/etc/resolv.conf" is not consistent across nodes. See the final section for details.

2. Edit the response file.
2.1 In the Grid Infrastructure installation media, find the response directory and edit grid_install.rsp according to the hints inside the file.
The changes made to grid_install.rsp are shown below (for reference only; do not copy verbatim):
rac1-> cd /home/grid/grid/response
rac1-> cp ./grid_install.rsp ./gi_install.rsp

Full configuration (# marks entries left at their defaults):
rac1-> vi ./gi_install.rsp

#oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=rac1
INVENTORY_LOCATION=/u01/app/oraInventory
#SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=scan-ip.burton.com
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=rac-cluster
#oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip
oracle.install.crs.config.networkInterfaceList=eth0:192.168.91.0:1,eth1:192.168.214.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
#oracle.install.crs.config.useIPMI=false
oracle.install.asm.SYSASMPassword=oracle4U
oracle.install.asm.diskGroup.name=OCRVOTE
#oracle.install.asm.diskGroup.redundancy=NORMAL
#oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=oracle4U
#oracle.install.asm.upgradeASM=false
oracle.installer.autoupdates.option=SKIP_UPDATES

Notes:
  http://blog.chinaunix.net/xmlrpc.php?r=blog/article&id=4681351&uid=29655480 (comparing with the GUI installer makes this clearer)
  http://blog.itpub.net/22128702/viewspace-730567/ (detailed explanation of the settings)
 (1) Example: oracle.install.crs.config.networkInterfaceList=bnxe2:192.168.129.192:1,bnxe3:10.31.130.200:2
      1 means public, 2 means private, 3 means the NIC is not used by the cluster; bnxe2 and bnxe3 are device names as shown by ifconfig -a
 (2) Example: if the public/private device names differ between the two machines, rename the network device (e.g. eth0 -> eth2):
      a. Edit the /etc/udev/rules.d/70-persistent-net.rules file (eth0 -> eth2)
        $ cat /etc/udev/rules.d/70-persistent-net.rules
        # PCI device 0x15ad:0x07b0 (vmxnet3)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2c:b4:4a", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
        # PCI device 0x8086:0x100f (e1000)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2c:b4:40", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
      b. Rename the ifcfg file and change the DEVICE entry inside it (eth0 -> eth2)
        $ mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth2
        $ vi /etc/sysconfig/network-scripts/ifcfg-eth2
          DEVICE=eth2
      c. Reboot the system
        $ reboot

2.2 Review the effective configuration:
rac1-> cat /home/grid/grid/response/gi_install.rsp | grep -v ^# | grep -v ^$

3. Silently install the Grid Infrastructure software.
As the grid user, change to the Grid Infrastructure installation media directory and run the following command to start the silent installation:
rac1-> cd /home/grid/grid/
rac1-> /home/grid/grid/runInstaller -responseFile /home/grid/grid/response/gi_install.rsp -silent -ignorePrereq -showProgress

.................................
As a root user, execute the following script(s):
 1. /u01/app/oraInventory/orainstRoot.sh
 2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac1, rac2]
Execute /u01/app/11.2.0/grid/root.sh on the following nodes:
[rac1, rac2]

..................................................   100% Done.

Execute Root Scripts successful.
As install user, execute the following script to complete the configuration.
 1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

  Note:
 1. This script must be run on the same host from where installer was run.
 2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Successfully Setup Software.

Note: run ./runInstaller -help to see the full list of parameters.

3.1 Run the orainstRoot.sh and root.sh scripts on every node, one node at a time, in node order
$ /u01/app/oraInventory/orainstRoot.sh
$ /u01/app/11.2.0/grid/root.sh
Run tail -f /u01/app/11.2.0/grid/install/root_rac1_2017-03-06_14-52-36.log to follow the detailed output.

3.2 Finally, run the configuration command on the installing node to complete the setup:
rac1-> su - grid
rac1-> cd /u01/app/11.2.0/grid/cfgtoollogs
# Create a response file holding the ASM passwords; it is needed while the password files are generated
rac1-> vi cfgrsp.properties
oracle.assistants.asm|S_ASMPASSWORD=oracle4U
oracle.assistants.asm|S_ASMMONITORPASSWORD=oracle4U
rac1-> chmod 700 cfgrsp.properties
rac1-> ./configToolAllCommands RESPONSE_FILE=./cfgrsp.properties
Setting the invPtrLoc to /u01/app/11.2.0/grid/oraInst.loc
perform - mode is starting for action: configure
perform - mode finished for action: configure
......
You can see the log file: /u01/app/11.2.0/grid/cfgtoollogs/oui/configActions2017-03-06_03-47-00-PM.log

4. Add the disk groups (RAC1). The OCR disk group was already created during the GI installation; here the other two disk groups are added. To demonstrate a different syntax,
   one group is also extended by adding a disk separately. (run as the grid user)
rac1-> asmca -silent -createDiskGroup -sysAsmPassword oracle4U -diskString '/dev/oracleasm/disks/*' -diskGroupName FRA -diskList '/dev/oracleasm/disks/FRA' -redundancy EXTERNAL -compatible.asm 11.2 -compatible.rdbms 11.2
Disk Group FRA created successfully.
rac1-> asmca -silent -createDiskGroup -sysAsmPassword oracle4U -diskString '/dev/oracleasm/disks/*' -diskGroupName DATA -diskList '/dev/oracleasm/disks/DATA1' -redundancy EXTERNAL -compatible.asm 11.2 -compatible.rdbms 11.2
Disk Group DATA created successfully.
rac1-> asmca -silent -addDisk -sysAsmPassword oracle4U -diskGroupName DATA -diskList '/dev/oracleasm/disks/DATA2'
Disks added successfully to disk group DATA
Or create the disk groups this way instead (run as the grid user):
rac1-> sqlplus / as sysasm
SQL> create diskgroup FRA external redundancy disk '/dev/oracleasm/disks/FRA';
SQL> create diskgroup DATA  external redundancy disk '/dev/oracleasm/disks/DATA1';
SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/DATA2';

5. Check the Clusterware environment:
rac1-> crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

IV. Install the database software
1. Pre-installation validation (RAC1)
Validate with the cluvfy tool before the actual installation. Because only the GI software is installed at this point, use the copy of the tool under the grid home for now.
Redirect the results to a file for easier reading.
rac1-> cd /u01/app/11.2.0/grid/bin
rac1-> ./cluvfy stage -pre dbinst -n rac1,rac2 -verbose

You may run into ERROR: PRVF-4657 and PRVF-4664 here; see the final section for details.

2. Configure the response file (the installation packages are in the home directory)
rac1-> su - oracle
rac1-> cd /home/oracle/database/response
rac1-> cp db_install.rsp db_in.rsp
rac1-> chmod 755 db_in.rsp

rac1-> sed -i "s/oracle.install.option=/oracle.install.option=INSTALL_DB_SWONLY/" ./db_in.rsp
rac1-> sed -i "s/ORACLE_HOSTNAME=/ORACLE_HOSTNAME=rac1/" ./db_in.rsp
rac1-> sed -i "s/UNIX_GROUP_NAME=/UNIX_GROUP_NAME=oinstall/" ./db_in.rsp
rac1-> sed -i "s|INVENTORY_LOCATION=|INVENTORY_LOCATION=/u01/app/oraInventory|" ./db_in.rsp
rac1-> sed -i "s|ORACLE_HOME=|ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1|" ./db_in.rsp
rac1-> sed -i "s|ORACLE_BASE=|ORACLE_BASE=/u01/app/oracle|" ./db_in.rsp
rac1-> sed -i "s/oracle.install.db.InstallEdition=/oracle.install.db.InstallEdition=EE/" ./db_in.rsp
rac1-> sed -i "s/oracle.install.db.DBA_GROUP=/oracle.install.db.DBA_GROUP=dba/" ./db_in.rsp
rac1-> sed -i "s/oracle.install.db.OPER_GROUP=/oracle.install.db.OPER_GROUP=oper/" ./db_in.rsp
rac1-> sed -i "s/oracle.install.db.CLUSTER_NODES=/oracle.install.db.CLUSTER_NODES=rac1,rac2/" ./db_in.rsp
rac1-> sed -i "s/oracle.install.db.isRACOneInstall=/oracle.install.db.isRACOneInstall=false/" ./db_in.rsp
rac1-> sed -i "s/SECURITY_UPDATES_VIA_MYORACLESUPPORT=/SECURITY_UPDATES_VIA_MYORACLESUPPORT=false/" ./db_in.rsp
rac1-> sed -i "s/DECLINE_SECURITY_UPDATES=/DECLINE_SECURITY_UPDATES=true/" ./db_in.rsp
rac1-> sed -i "s/oracle.installer.autoupdates.option=/oracle.installer.autoupdates.option=SKIP_UPDATES/" ./db_in.rsp
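Since sed -i edits the file in place, it can be worth dry-running the substitutions against a stub before touching the real file. A sketch using a four-line stand-in for db_install.rsp (stub keys only; the real response file has many more entries):

```shell
# Dry-run a subset of the sed edits from step 2 against a stub file,
# to confirm each substitution lands where expected.
cat > ./db_in.rsp.stub <<'EOF'
oracle.install.option=
ORACLE_HOSTNAME=
oracle.install.db.InstallEdition=
oracle.install.db.CLUSTER_NODES=
EOF
sed -i "s/oracle.install.option=/oracle.install.option=INSTALL_DB_SWONLY/" ./db_in.rsp.stub
sed -i "s/ORACLE_HOSTNAME=/ORACLE_HOSTNAME=rac1/" ./db_in.rsp.stub
sed -i "s/oracle.install.db.InstallEdition=/oracle.install.db.InstallEdition=EE/" ./db_in.rsp.stub
sed -i "s/oracle.install.db.CLUSTER_NODES=/oracle.install.db.CLUSTER_NODES=rac1,rac2/" ./db_in.rsp.stub
grep -v '=$' ./db_in.rsp.stub     # every key should now carry a value
```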

3. Review the configuration:
rac1-> cat /home/oracle/database/response/db_in.rsp | sed -n '/^[^#]/p'

4. Install the database software
Note: the -responseFile argument must be an absolute path
rac1-> cd /home/oracle/database/
rac1-> ./runInstaller -silent -force -showProgress -ignorePrereq -responseFile /home/oracle/database/response/db_in.rsp

5. The final step is to run the following script as root, on every node, in node order:
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh

V. Create the database
1. Unless stated otherwise, the operations in this part are performed as the oracle user.
Pre-installation validation (RAC1):
rac1-> cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
rac1-> ./cluvfy stage -pre dbcfg -n rac1,rac2 -d $ORACLE_HOME

Performing pre-checks for database configuration
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.10.0"
Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/u01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "rac1:/u01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
WARNING:
PRVF-7584 : Multiple versions of package "elfutils-libelf" found on node rac2: elfutils-libelf(x86_64)-0.152-1.el6,elfutils-libelf(x86_64)-0.164-2.el6
WARNING:
PRVF-7584 : Multiple versions of package "elfutils-libelf" found on node rac1: elfutils-libelf(x86_64)-0.152-1.el6,elfutils-libelf(x86_64)-0.164-2.el6
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "pdksh"
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "rac2,rac1"
Checking existence of ONS node application (optional)
ONS node application check passed
Time zone consistency check passed
Pre-check for database configuration was successful.

The validation succeeds, with two warnings caused by having two versions of the elfutils-libelf package installed.

2. Silent installation configuration (RAC1): review dbca.rsp, then either modify it or create a separate silent-install response file
2.1 Inspect the parameters in dbca.rsp:
rac1-> cat /home/oracle/database/response/dbca.rsp | grep -v ^# | grep -v ^$
2.2 Copy and edit the response file, keeping only the needed parameters; the biggest difference from a single-instance install is the NODELIST parameter:
rac1-> cd /home/oracle/database/response
rac1-> vi racdbca.rsp
rac1-> chmod 750 racdbca.rsp
[GENERAL]
RESPONSEFILE_VERSION = "11.2.0"
OPERATION_TYPE="createDatabase"
[CREATEDATABASE]
GDBNAME="burton"
SID="burton"
TEMPLATENAME="General_Purpose.dbc"
NODELIST=rac1,rac2
SYSPASSWORD="oracle4U"
SYSTEMPASSWORD="oracle4U"
STORAGETYPE=ASM
DISKGROUPNAME=DATA
RECOVERYGROUPNAME=FRA
CHARACTERSET="AL32UTF8"
NATIONALCHARACTERSET="UTF8"

3. Silently create the database (RAC1)
Note: the -responseFile argument must be an absolute path
rac1-> cd $ORACLE_HOME/bin
rac1-> $ORACLE_HOME/bin/dbca -silent -responseFile /home/oracle/database/response/racdbca.rsp

VI. Create the listener
rac1-> $ORACLE_HOME/bin/netca /silent /responseFile /home/oracle/database/response/netca.rsp

Parsing command line arguments:
    Parameter "silent" = true
    Parameter "responsefile" = /home/oracle/database/response/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Default listener "LISTENER" is already configured in Grid Infrastructure home: /u01/app/11.2.0/grid
Oracle Net Services configuration successful. The exit code is 0

Note: after netca finishes, there is no listener.ora in the oracle home; it lives in the grid home at /u01/app/11.2.0/grid/network/admin/listener.ora,
    and dynamic registration is used.

SQL> show parameter listener

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
listener_networks       string
local_listener        string  (ADDRESS=(PROTOCOL=TCP)(HOST= 192.168.91.152)(PORT=1521))
remote_listener        string  scan-ip.burton.com:1521

 

VII. Enable archive logging:
1. Create the archive directory
rac1-> su - grid
rac1-> asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      9992     8250                0            8250              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      4996     4666                0            4666              0             N  FRA/
MOUNTED  NORMAL  N         512   4096  1048576      2997     2071              999             536              0             Y  OCRVOTE/
ASMCMD> cd data/burton
ASMCMD> mkdir arch
ASMCMD> cd arch
ASMCMD> pwd
+data/burton/arch

2. Set the archive destination parameters
rac1-> su - oracle
rac1-> sqlplus / as sysdba
SQL> alter system set log_archive_dest_1='location=+data/burton/arch' scope=spfile sid='burton1';
SQL> alter system set log_archive_dest_1='location=+data/burton/arch' scope=spfile sid='burton2';
SQL> alter system set log_archive_format='burton_%t_%s_%r.arc' scope=spfile;

3. Enable archivelog mode
3.1 Shut down the database on all nodes (can be run from any node)
rac1-> srvctl stop database -d burton -o immediate
A single instance will then be started in mount state.

3.2 Verify the database is down
rac1-> srvctl status database -d burton
Instance burton1 is not running on node rac1
Instance burton2 is not running on node rac2

3.3 Start the first instance in mount state
rac1-> srvctl start instance -d burton -i burton1 -o mount
Switch the database to archivelog mode and open it:
rac1-> sqlplus / as sysdba
SQL> alter database archivelog;
SQL> alter database open;
Check the archiving status:
SQL> archive log list;
Start the other node; since the control file is on shared ASM storage, the other instance picks up the modified control file:
rac1-> srvctl start instance -d burton -i burton2
Check the cluster status (run as root):
rac1-> crsctl stat res -t

VIII. Basic cluster operations
1. Oracle 11g RAC shutdown sequence
1.1 Stop the EM service
su - oracle
export ORACLE_UNQNAME=burton
emctl status dbconsole
emctl stop dbconsole
emctl status dbconsole

1.2 Stop the database
srvctl stop database -d burton -o immediate
Stop a single instance:
srvctl stop instance -d burton -i burton2 -o immediate
srvctl stop asm -n rac2 -i +ASM2

1.3 Stop the listeners
srvctl status listener
srvctl stop listener
srvctl status listener
Stop a single listener:
srvctl stop listener -n rac2

1.4 Stop CRS (run on every node)
su - root
/u01/app/11.2.0/grid/bin/crsctl stop crs

1.5 Check the processes
ps -ef | grep lsnr |grep -v grep
ps -ef | grep crs |grep -v grep
ps -ef | grep smon |grep -v grep


2. Startup sequence
2.1 Start CRS (run on both nodes; if the operating system was rebooted, this step can be skipped, since CRS starts at boot)
su - root
cd /u01/app/11.2.0/grid/bin
./crsctl start crs
./crs_stat -t -v

2.2 Start the database
su - oracle
srvctl start database -d burton -o open
Start a single instance:
srvctl start instance -d burton -i burton1

2.3 Check the database and listener status
srvctl status database -d burton
srvctl status listener
srvctl start listener -n rac2

2.4 Check CRS
/u01/app/11.2.0/grid/bin/crs_stat -t -v

2.5 Check the processes
ps -ef | grep lsnr |grep -v grep
ps -ef | grep crs |grep -v grep
ps -ef | grep smon |grep -v grep


Problems encountered along the way:
ERROR 1: the crfclust.bdb file grows too large (Bug 20186278)
== check
cd /u01/app/11.2.0/grid/bin
./crsctl stat res ora.crf -init -t
== stop
./crsctl stop res ora.crf -init
== delete
cd /u01/app/11.2.0/grid/crf/db/rac1
rm -rf *.bdb
== start
cd /u01/app/11.2.0/grid/bin
./crsctl start res ora.crf -init


Issue 1:
1.1 When an error occurs, redirect the log to a file to make troubleshooting easier.
Problem:
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  failed
  rac1                                  passed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2
File "/etc/resolv.conf" is not consistent across nodes
1.2 Solutions:
(1) On the DNS server, edit /etc/named.conf and add a file "/dev/null"; entry to the root zone:
zone "." IN {
        type hint;
//      file "named.ca";
        file "/dev/null";
};

(2) Add the following parameters on each RAC node:
vi /etc/resolv.conf
#search prudentwoo.com
nameserver 114.114.114.114
nameserver 8.8.8.8
options rotate
options timeout:2
options attempts:5

Issue 2:
Redirecting the output to a file reveals the following errors:
ERROR:
PRVG-1101 : SCAN name "scan-ip.burton.com" failed to resolve
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  scan-ip.burton.com  192.168.91.154            failed                    NIS Entry

ERROR:
PRVF-4657 : Name resolution setup check for "scan-ip.burton.com" (IP address: 192.168.91.154) failed
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip.burton.com"
Verification of SCAN VIP and Listener setup failed
Checking VIP configuration.
This error may appear when no DNS (or GNS) server is configured (exact cause to be confirmed).
Workaround:
The general consensus online is that it can be safely ignored, so that is how it is handled for now.

Source: ITPUB blog, link: http://blog.itpub.net/30590361/viewspace-2135123/. When reprinting, please credit the source; otherwise legal liability may be pursued.
