Migrating 10g RAC on Solaris 10 (Part 2)

Posted by space6212 on 2019-07-21
I recently migrated a RAC database; the work involved many pieces, including building the RAC environment, configuring ASM, migrating the database, and upgrading.
This article is the second part of that work: the preparation for installing RAC.

Preparation

Adding groups and users

Add the same groups and user on all nodes.

Run on pre1:

bash-3.00# groupadd oinstall

bash-3.00# groupadd dba

bash-3.00# mkdir -p /export/home/oracle

bash-3.00# useradd -u 200 -g oinstall -G dba -d /export/home/oracle oracle

bash-3.00# passwd oracle

New Password:

Re-enter new Password:

passwd: password successfully changed for oracle

The RAC installation also uses the nobody user. Check whether it exists; if not, it has to be created (a sketch follows the check):

bash-3.00# id nobody

uid=60001(nobody) gid=60001(nobody)
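If nobody is missing, it can be created; a minimal sketch, assuming the conventional Solaris uid and gid of 60001:

bash-3.00# groupadd -g 60001 nobody

bash-3.00# useradd -u 60001 -g nobody nobody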

On pre2:

The RAC installer verifies that users are consistent across the nodes, including their UIDs and GIDs.

First look up the oracle user's details on pre1:

bash-3.00# id -a oracle

uid=200(oracle) gid=100(oinstall) groups=101(dba)

Then run on pre2:

bash-3.00# groupadd -g 100 oinstall

bash-3.00# groupadd -g 101 dba

bash-3.00# mkdir -p /export/home/oracle

bash-3.00# useradd -u 200 -g oinstall -G dba -d /export/home/oracle oracle

bash-3.00# passwd oracle

New Password:

Re-enter new Password:

passwd: password successfully changed for oracle

bash-3.00# id -a oracle

uid=200(oracle) gid=100(oinstall) groups=101(dba)

bash-3.00# id nobody

uid=60001(nobody) gid=60001(nobody)

Configuring the network

RAC requires each node to have at least two physical NICs and three IPs: a public IP, a private IP, and a virtual IP (VIP). The public IP and the VIP must be on the same subnet.

On pre1:

Edit the /etc/hosts file and add the following:

127.0.0.1 localhost

172.0.2.1 pre1 loghost

172.0.2.2 vip-pre1

10.0.0.1 priv-pre1

172.0.2.3 pre2

172.0.2.4 vip-pre2

10.0.0.2 priv-pre2

On pre2, add the following to /etc/hosts:

127.0.0.1 localhost

172.0.2.1 pre1

172.0.2.2 vip-pre1

10.0.0.1 priv-pre1

172.0.2.3 pre2 loghost

172.0.2.4 vip-pre2

10.0.0.2 priv-pre2

Use dladm to list the NICs on the server:

bash-3.00# dladm show-link

ce0 type: legacy mtu: 1500 device: ce0

ce1 type: legacy mtu: 1500 device: ce1

This output shows two NICs: ce0 and ce1.

bash-3.00# ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

inet 127.0.0.1 netmask ff000000

ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

inet 172.0.2.1 netmask ffffff00 broadcast 172.0.2.255

ether 0:3:ba:2c:da:de

This shows that only one NIC is currently configured; the other has no IP assigned yet.

bash-3.00# ifconfig ce1 plumb

bash-3.00# ifconfig ce1 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255 up

So that the IP is bound to the NIC automatically after a reboot, create the /etc/hostname.ce1 file:

bash-3.00# vi /etc/hostname.ce1

priv-pre1

Edit /etc/netmasks, adding the network numbers and their masks:

bash-3.00# chmod o+w /etc/netmasks

bash-3.00# vi /etc/netmasks

172.0.2.0 255.255.255.0

10.0.0.0 255.255.255.0
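Since /etc/netmasks was made world-writable only for editing, it is worth restoring the permission afterwards:

bash-3.00# chmod o-w /etc/netmasks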

Set the default route:

bash-3.00# vi /etc/defaultrouter

172.0.2.252

Do the same on the other node:

bash-3.00# ifconfig ce1 plumb

bash-3.00# ifconfig ce1 10.0.0.2 netmask 255.255.255.0 broadcast 10.0.0.255 up

bash-3.00# vi /etc/hostname.ce1

priv-pre2

bash-3.00# vi /etc/netmasks

172.0.2.0 255.255.255.0

10.0.0.0 255.255.255.0

bash-3.00# vi /etc/defaultrouter

172.0.2.252
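Before moving on, it is worth confirming that all the names resolve and the addresses respond from each node; Solaris ping prints a simple liveness message, for example:

bash-3.00$ ping pre2

pre2 is alive

bash-3.00$ ping priv-pre2

priv-pre2 is alive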

Configuring SSH user equivalence

While installing Clusterware, Oracle needs to copy files to the other node, so SSH user equivalence must be configured between the two nodes so that they can connect to each other without a password. (Run these steps as the oracle user.)

- Generate RSA and DSA keys on all nodes

You will press Enter a few times during the process to accept the defaults.

bash-3.00$ id

uid=200(oracle) gid=100(oinstall)

bash-3.00$ mkdir ~/.ssh

bash-3.00$ chmod 700 ~/.ssh

bash-3.00$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /export/home/oracle/.ssh/id_rsa.

Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

b0:c1:4b:92:b4:05:ff:4e:79:a0:61:89:ab:8d:7b:f8 oracle@pre1

bash-3.00$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /export/home/oracle/.ssh/id_dsa.

Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

8f:e3:dc:ca:ff:05:b6:5f:a6:c6:25:9f:3a:35:d1:2a oracle@pre1

- Add the keys to the authorization file

This series of steps only needs to be run on one of the nodes (here, pre1).

First create the authorization file, which sshd reads at login and which will hold all the nodes' public keys:

bash-3.00$ touch ~/.ssh/authorized_keys

Append each node's public keys to the file created in the previous step:

bash-3.00$ cd ~/.ssh

bash-3.00$ ls

authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub

bash-3.00$ ssh pre1 cat /export/home/oracle/.ssh/id_rsa.pub >> authorized_keys

The authenticity of host 'pre1 (172.0.2.1)' can't be established.

RSA key fingerprint is 0e:3d:ae:3a:49:88:ad:bb:e5:0a:c3:2a:02:35:b2:19.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'pre1,172.0.2.1' (RSA) to the list of known hosts.

Password:

bash-3.00$ ssh pre2 cat /export/home/oracle/.ssh/id_rsa.pub >> authorized_keys

The authenticity of host 'pre2 (172.0.2.3)' can't be established.

RSA key fingerprint is ef:9c:17:53:50:e5:b6:23:d0:89:a5:d8:ef:69:e3:a8.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'pre2,172.0.2.3' (RSA) to the list of known hosts.

Password:

bash-3.00$ ssh pre1 cat /export/home/oracle/.ssh/id_dsa.pub >> authorized_keys

bash-3.00$ ssh pre2 cat /export/home/oracle/.ssh/id_dsa.pub >> authorized_keys

Password:

- Copy the authorization file with the public keys from pre1 to pre2

bash-3.00$ scp authorized_keys pre2:`pwd`

Password:

authorized_keys 100% |*********************************************************************************| 1644 00:00

bash-3.00$

- Set permissions on the authorization file

Run on every node:

bash-3.00$ chmod 600 ~/.ssh/authorized_keys

- Enable user equivalence

On the node where you will run the OUI (here, pre1), as the oracle user:

bash-3.00$ exec /usr/bin/ssh-agent $SHELL

$ ssh-add

Identity added: /export/home/oracle/.ssh/id_rsa (/export/home/oracle/.ssh/id_rsa)

Identity added: /export/home/oracle/.ssh/id_dsa (/export/home/oracle/.ssh/id_dsa)

- Verify the SSH configuration

As the oracle user, run the following on every node:

ssh pre1 date

ssh pre2 date

ssh priv-pre1 date

ssh priv-pre2 date

If each command prints the date without asking for a password, the SSH configuration is working. The commands above must be run on both nodes, and the first run of each prompts you to answer yes. (A loop covering all four checks is sketched below.)

If you skip these commands, the Clusterware installation will fail even though SSH equivalence is configured, with the error:

The specified nodes are not clusterable

This is because, even with equivalence configured, each host must be answered yes to once before access to the other servers is truly prompt-free.
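The four checks can be wrapped in a small loop and run from each node; a minimal sketch:

$ for h in pre1 pre2 priv-pre1 priv-pre2; do ssh $h date; done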

Installing required system packages and patches

The packages and patches needed vary with the database and OS versions. Installing 10g RAC on Solaris 10 SPARC requires the following system packages:

SUNWarc

SUNWbtool

SUNWhea

SUNWlibC

SUNWlibm

SUNWlibms

SUNWsprot

SUNWtoo

SUNWi1of

SUNWxwfnt

You can check whether the required packages are installed with the following command (a sketch for adding a missing package follows the output):

bash-3.00# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWxwfnt

system SUNWarc Lint Libraries (usr)

system SUNWbtool CCS tools bundled with SunOS

system SUNWhea SunOS Header Files

system SUNWi1of ISO-8859-1 (Latin-1) Optional Fonts

system SUNWlibC Sun Workshop Compilers Bundled libC

system SUNWlibm Math & Microtasking Library Headers & Lint Files (Usr)

system SUNWlibms Math & Microtasking Libraries (Usr)

system SUNWsprot Solaris Bundled tools

system SUNWtoo Programming Tools

system SUNWxwfnt X Window System platform required fonts
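If pkginfo reports a package as missing, it can be installed from the Solaris installation media with pkgadd; a sketch, assuming the media is mounted at /cdrom/cdrom0:

bash-3.00# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWi1of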

Installing Oracle 10g RAC on Solaris 10 SPARC requires no OS patches.

Setting environment variables and creating the matching directories

The environment variables must be set on every node.

On pre1, add the following to ~/.profile:

umask 022

ORACLE_SID=prerac1

ORACLE_BASE=/oracle/app

ORACLE_HOME=$ORACLE_BASE/product/10.2/database

ORA_CRS_HOME=$ORACLE_BASE/product/10.2/crs

NLS_LANG='SIMPLIFIED CHINESE_CHINA.ZHS16GBK'

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_SID ORACLE_BASE ORACLE_HOME ORA_CRS_HOME NLS_LANG PATH

On pre2, add the following to ~/.profile:

umask 022

ORACLE_SID=prerac2

ORACLE_BASE=/oracle/app

ORACLE_HOME=$ORACLE_BASE/product/10.2/database

ORA_CRS_HOME=$ORACLE_BASE/product/10.2/crs

NLS_LANG='SIMPLIFIED CHINESE_CHINA.ZHS16GBK'

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_SID ORACLE_BASE ORACLE_HOME ORA_CRS_HOME NLS_LANG PATH

On both nodes, create the corresponding directories and set their ownership (these must match the environment variables above):

mkdir -p /oracle/app/product/10.2/{database,crs}

chown -R oracle:oinstall /oracle

chmod -R 775 /oracle
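After logging in again as oracle, the variables and directories can be spot-checked on each node; for example:

$ env | grep ORA

$ ls -ld $ORACLE_HOME $ORA_CRS_HOME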

Setting kernel parameters

On both nodes, append the following to /etc/system, then reboot:

set noexec_user_stack=1

set semsys:seminfo_semmni=100

set semsys:seminfo_semmns=1024

set semsys:seminfo_semmsl=256

set semsys:seminfo_semvmx=32767

set shmsys:shminfo_shmmax=4294967295

set shmsys:shminfo_shmmin=1

set shmsys:shminfo_shmmni=100

set shmsys:shminfo_shmseg=10

Set ndd network parameters to make the interconnect more efficient:

# ndd -set /dev/udp udp_xmit_hiwat 65536

# ndd -set /dev/udp udp_recv_hiwat 65536

To make the change persist across reboots, create the /etc/init.d/nddudp file:

# vi /etc/init.d/nddudp

ndd -set /dev/udp udp_xmit_hiwat 65536

ndd -set /dev/udp udp_recv_hiwat 65536
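It is also good practice to make the script executable and owned by root:sys, matching the other init scripts:

# chmod 744 /etc/init.d/nddudp

# chown root:sys /etc/init.d/nddudp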

Then create links to it under the /etc/rc1.d, /etc/rc2.d, and /etc/rcS.d directories; the link names must start with S70 or S71:

# ln -s -f /etc/init.d/nddudp /etc/rc1.d/S70nddudp

# ln -s -f /etc/init.d/nddudp /etc/rc2.d/S70nddudp

# ln -s -f /etc/init.d/nddudp /etc/rcS.d/S70nddudp

After all this, reboot the servers for the settings to take effect. Solaris 10 can also change these kernel parameters dynamically through resource controls, with no reboot; that is not covered in detail here, but a brief sketch follows.
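For reference, the resource-control route looks roughly like this; a sketch only, using the project user.oracle (the default project for the oracle user) and mirroring the shmmax value set above:

# projadd -U oracle -K "project.max-shm-memory=(priv,4294967295,deny)" user.oracle

# prctl -n project.max-shm-memory -i project user.oracle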

Checking memory and disk

Oracle requires at least 1 GB of physical memory, swap space of 1.5 to 2 times physical memory, and more than 400 MB free in /tmp.

Check memory and disk space on all nodes with the following commands:

$ /usr/sbin/prtconf | grep "Memory size"

Memory size: 4096 Megabytes

$ /usr/sbin/swap -s

total: 165384k bytes allocated + 30456k reserved = 195840k used, 11421296k available

$ df -k /tmp

Filesystem kbytes used avail capacity Mounted on

swap 11419720 112 11419608 1% /tmp

Check the CPU architecture to make sure the software you downloaded matches it:

$ /bin/isainfo -kv

64-bit sparcv9 kernel modules

Granting the oracle user access to the raw devices

A raw device can be thought of as a partition. Following the earlier design, four raw devices are needed in total:

OCR -- /dev/rdsk/c3t0d3s5

voting -- /dev/rdsk/c3t0d3s6

ASM disks -- /dev/rdsk/c3t0d0s6 and /dev/rdsk/c3t0d2s6

These raw devices must be handed over to the oracle user.

Run as root on every node:

chown oracle:dba /dev/rdsk/c3t0d3s5

chown oracle:dba /dev/rdsk/c3t0d3s6

chown oracle:dba /dev/rdsk/c3t0d0s6

chown oracle:dba /dev/rdsk/c3t0d2s6

chmod 660 /dev/rdsk/c3t0d3s5

chmod 660 /dev/rdsk/c3t0d3s6

chmod 660 /dev/rdsk/c3t0d0s6

chmod 660 /dev/rdsk/c3t0d2s6

Note: the /dev/rdsk/cXtYdZsN entries are symlinks. After the chown, ls -l on /dev/rdsk/c3t0d3s5 still shows root:root, but the permissions of the underlying device node have in fact changed (see the check below).
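To inspect the device node itself rather than the link, use ls with the -L option, which follows symlinks; for example:

# ls -lL /dev/rdsk/c3t0d3s5

After the commands above, this should report oracle:dba with mode crw-rw----.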

Using cluvfy to verify the environment against RAC requirements

- Verify that the network meets the requirements:

$ ./runcluvfy.sh comp nodecon -n pre1,pre2 -verbose

Verifying node connectivity

ERROR:

User equivalence unavailable on all the nodes.

Verification cannot proceed.

Verification of node connectivity was unsuccessful on all the nodes.

This error is easy to hit on Solaris: Oracle looks for the ssh and scp commands in /usr/local/bin, but they actually live in /usr/bin.

The fix is equally simple: create links in /usr/local/bin pointing to /usr/bin/ssh and /usr/bin/scp.

On the node where cluvfy will be run:

As root, create the links:

bash-3.00# mkdir -p /usr/local/bin

bash-3.00# ln -s -f /usr/bin/ssh /usr/local/bin/ssh

bash-3.00# ln -s -f /usr/bin/scp /usr/local/bin/scp

As the oracle user, enable SSH user equivalence again:

$ exec /usr/bin/ssh-agent $SHELL

$ /usr/bin/ssh-add

Identity added: /export/home/oracle/.ssh/id_rsa (/export/home/oracle/.ssh/id_rsa)

Identity added: /export/home/oracle/.ssh/id_dsa (/export/home/oracle/.ssh/id_dsa)

Running the verification again now succeeds:

bash-3.00$ ./runcluvfy.sh comp nodecon -n pre1,pre2

Verifying node connectivity

Checking node connectivity...

Node connectivity check passed for subnet "172.0.2.0" with node(s) pre2,pre1.

Node connectivity check passed for subnet "10.0.0.0" with node(s) pre2,pre1.

Suitable interfaces for VIP on subnet "172.0.2.0":

pre2 ce0:172.0.2.3 ce0:172.0.2.4

pre1 ce0:172.0.2.1 ce0:172.0.2.2

Suitable interfaces for the private interconnect on subnet "10.0.0.0":

pre2 ce1:10.0.0.2

pre1 ce1:10.0.0.1

Node connectivity check passed.

Verification of node connectivity was successful.

- Verify that the system meets the RAC installation requirements

On pre1, run the following command as the oracle user:

$ ./runcluvfy.sh comp sys -n pre1,pre2 -p crs -osdba crs -orainv oinstall

Verifying system requirement

Checking system requirements for 'crs'...

Total memory check passed.

Free disk space check passed.

Swap space check passed.

System architecture check passed.

Operating system version check passed.

Package existence check passed for "SUNWarc".

Package existence check passed for "SUNWbtool".

Package existence check passed for "SUNWhea".

Package existence check passed for "SUNWlibm".

Package existence check passed for "SUNWlibms".

Package existence check passed for "SUNWsprot".

Package existence check passed for "SUNWsprox".

Package existence check passed for "SUNWtoo".

Package existence check passed for "SUNWi1of".

Package existence check passed for "SUNWi1cs".

Package existence check passed for "SUNWi15cs".

Package existence check passed for "SUNWxwfnt".

Package existence check passed for "SUNWlibC".

Package existence check failed for "SUNWscucm:3.1".

Check failed on nodes:

pre2,pre1

Package existence check failed for "SUNWudlmr:3.1".

Check failed on nodes:

pre2,pre1

Package existence check failed for "SUNWudlm:3.1".

Check failed on nodes:

pre2,pre1

Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".

Check failed on nodes:

pre2,pre1

Package existence check failed for "SUNWscr:3.1".

Check failed on nodes:

pre2,pre1

Package existence check failed for "SUNWscu:3.1".

Check failed on nodes:

pre2,pre1

Group existence check failed for "crs".

Check failed on nodes:

pre2,pre1

Group existence check passed for "oinstall".

User existence check passed for "oracle".

User existence check passed for "nobody".

System requirement failed for 'crs'

Verification of system requirement was unsuccessful on all the nodes.

Some of the checks above did not pass, but everything that failed relates to Sun Cluster. Since we are using CRS here, those errors can be ignored.

With that, the preparation work is complete; the next part covers installing Clusterware.

Source: ITPUB blog, http://blog.itpub.net/231499/viewspace-63863/
