Oracle 10g ASM+RAC Installation

Posted by 邱東陽 on 2014-06-03

 

RAC -- Real Application Clusters

Cluster computing falls into three categories:

1. High-performance computing clusters

Computing tasks are distributed across nodes to raise aggregate compute capacity, mainly in scientific computing, chiefly through parallel computation.

2. High-availability clusters

The goal is to improve system availability: hardware and software fault tolerance are combined to keep the overall service available. Following SOA principles, resources are offered as a pooled service.

3. Load-balancing clusters

Load is spread as evenly as is sensible across the cluster nodes; each node handles a share of the traffic, and the distribution is rebalanced dynamically as load changes.

 

 

 

 

Installing RAC

OS: Red Hat Enterprise Linux 4 Update 8, 64-bit

DB: Oracle 10.2.0.1

 

 

 

Creating users and groups

 

 

Node 1

[root@rac1 ~]# groupadd -g 1001 dba

[root@rac1 ~]# groupadd -g 1002 oinstall

[root@rac1 ~]# useradd -u 1001 -g oinstall -G dba oracle

[root@rac1 ~]# passwd oracle

[root@rac1 ~]# id nobody

uid=99(nobody) gid=99(nobody) groups=99(nobody)

[root@rac1 ~]#

 

Node 2

[root@rac2 ~]# groupadd -g 1001 dba

[root@rac2 ~]# groupadd -g 1002 oinstall

[root@rac2 ~]# useradd -u 1001 -g oinstall -G dba oracle

[root@rac2 ~]# passwd oracle

[root@rac2 ~]# id nobody

uid=99(nobody) gid=99(nobody) groups=99(nobody)

[root@rac2 ~]#
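The same UIDs and GIDs must exist on every node. A small sketch that prints the commands above once per node (node names as in this install), so the output can be reviewed and then run as root on each host:

```shell
# Print the group/user creation commands for each cluster node.
# UID/GID values (1001/1002) match the transcript above; review the
# output, then run it as root on the corresponding host.
emit_user_setup() {
  for node in "$@"; do
    printf '# on %s (as root)\n' "$node"
    printf 'groupadd -g 1001 dba\n'
    printf 'groupadd -g 1002 oinstall\n'
    printf 'useradd -u 1001 -g oinstall -G dba oracle\n'
  done
}

emit_user_setup rac1 rac2
```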

 

 

Network configuration

 

Node 1

[root@rac1 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 08:00:27:C0:3B:F2 

          inet addr:192.168.56.50  Bcast:192.168.56.255  Mask:255.255.255.0

          inet6 addr: fe80::a00:27ff:fec0:3bf2/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:162 errors:0 dropped:0 overruns:0 frame:0

          TX packets:107 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:14353 (14.0 KiB)  TX bytes:11093 (10.8 KiB)

 

eth1      Link encap:Ethernet  HWaddr 08:00:27:6E:75:FC 

          inet addr:10.10.10.1  Bcast:10.10.10.255  Mask:255.255.255.0

          inet6 addr: fe80::a00:27ff:fe6e:75fc/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:53 errors:0 dropped:0 overruns:0 frame:0

          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:4064 (3.9 KiB)  TX bytes:714 (714.0 b)

 

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:12 errors:0 dropped:0 overruns:0 frame:0

          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:900 (900.0 b)  TX bytes:900 (900.0 b)

[root@rac1 ~]# vi /etc/hosts

 

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

192.168.56.50    rac1

192.168.56.52    vip1

10.10.10.1       priv1

192.168.56.51    rac2

192.168.56.53    vip2

10.10.10.2       priv2

 

Node 2

[root@rac2 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 08:00:27:D6:3E:6D 

          inet addr:192.168.56.51  Bcast:192.168.56.255  Mask:255.255.255.0

          inet6 addr: fe80::a00:27ff:fed6:3e6d/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:95 errors:0 dropped:0 overruns:0 frame:0

          TX packets:73 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:8535 (8.3 KiB)  TX bytes:7713 (7.5 KiB)

 

eth1      Link encap:Ethernet  HWaddr 08:00:27:5B:1A:14 

          inet addr:10.10.10.2  Bcast:10.10.10.255  Mask:255.255.255.0

          inet6 addr: fe80::a00:27ff:fe5b:1a14/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:24 errors:0 dropped:0 overruns:0 frame:0

          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:1492 (1.4 KiB)  TX bytes:714 (714.0 b)

 

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:12 errors:0 dropped:0 overruns:0 frame:0

          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:900 (900.0 b)  TX bytes:900 (900.0 b)

[root@rac2 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

192.168.56.50    rac1

192.168.56.52    vip1

10.10.10.1       priv1

192.168.56.51    rac2

192.168.56.53    vip2

10.10.10.2       priv2
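Both nodes must carry identical name entries. A quick sanity check (a sketch; the hostnames are the ones configured above) that each RAC name appears exactly once in a hosts file:

```shell
# Verify each given hostname appears exactly once as the last field
# of a line in the hosts file (the layout used in /etc/hosts above).
check_hosts() {
  file=$1; shift
  for name in "$@"; do
    n=$(grep -c "[[:space:]]${name}\$" "$file" || true)
    if [ "$n" -ne 1 ]; then
      echo "BAD: $name listed $n times in $file"
      return 1
    fi
  done
  echo "hosts file OK"
}
```

Run it on each node as `check_hosts /etc/hosts rac1 vip1 priv1 rac2 vip2 priv2`.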

 

Configuring secure shell user equivalence


 

 

Node 1

[root@rac1 oracle]# su - oracle

[oracle@rac1 ~]$ mkdir .ssh

[oracle@rac1 ~]$ chmod 700 .ssh

[oracle@rac1 ~]$ cd .ssh

[oracle@rac1 .ssh]$ ssh-keygen -t rsa     -- generate the key pair; leave the passphrase empty

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):   -- press Enter at every prompt

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

ae:49:e6:cd:a6:7e:98:e0:26:6f:c3:6b:e6:6d:77:56 oracle@rac1

[oracle@rac1 .ssh]$

 

Node 2

[root@rac2 ~]# su - oracle

[oracle@rac2 ~]$ mkdir .ssh

[oracle@rac2 ~]$ chmod 700 .ssh

[oracle@rac2 ~]$ cd .ssh

[oracle@rac2 .ssh]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

70:42:f6:f3:24:a3:a7:a0:51:05:74:b6:0a:6c:c5:35 oracle@rac2

[oracle@rac2 .ssh]$

 

 

Build the authorized_keys file containing the public keys of both node 1 and node 2

 

Node 1

[oracle@rac1 .ssh]$ ls

id_rsa  id_rsa.pub

[oracle@rac1 .ssh]$ ssh rac1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

The authenticity of host 'rac1 (192.168.56.50)' can't be established.

RSA key fingerprint is d1:04:e6:04:3c:85:b7:93:f9:aa:5f:49:70:58:88:20.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac1,192.168.56.50' (RSA) to the list of known hosts.

oracle@rac1's password:   -- node 1 oracle user's password

[oracle@rac1 .ssh]$

[oracle@rac1 .ssh]$ ssh rac2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

The authenticity of host 'rac2 (192.168.56.51)' can't be established.

RSA key fingerprint is f5:eb:27:6d:85:5e:46:cd:d0:9b:41:f4:0e:35:e8:1c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac2,192.168.56.51' (RSA) to the list of known hosts.

oracle@rac2's password:   -- node 2 oracle user's password

[oracle@rac1 .ssh]$

 

Copy the generated authorized_keys file to node 2

[oracle@rac1 .ssh]$ cat authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEApcThdWeDZnLnG9LCenyOOstB9OEZuU9pwAQh1Mx1dtPN2/92CaveIizpqdgj7v

hH1LQgvBGOBKRKcwVERaKcTjTx0ymu0d0QHlCB6ukXLC0W88Rq4dnyP7L/WEn5x5sDhPXyPusgQy9l9aK9zPVguoheAHDIAs5BH0b8XJ044fs= oracle@rac1

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAtZW18RkmewHxM4eYt4RfixiqW1GUP4UzXp495AsjNyGpuMXaEtlwJDYpMJQgxDeRYzpj

+//Ip1EYQ5HnJFb4DrHqCMPyxza/fcMvCK0nqVyTfp+pHGOjxnCg+nEHHD3hY9O93UIy7ctNHj1NMDB0pnTGXhba7OALmPT1Zuo9o68= oracle@rac2

[oracle@rac1 .ssh]$ scp authorized_keys rac2:/home/oracle/.ssh/

oracle@rac2's password:   -- node 2 oracle user's password

authorized_keys                               100%  458     0.5KB/s   00:00   

[oracle@rac1 .ssh]$

 

Set the permissions of the authorized_keys file on both nodes

[oracle@rac1 .ssh]$ chmod 600 authorized_keys

[oracle@rac2 .ssh]$ chmod 600 authorized_keys

 

Verify that the key pair works

Node 1

[oracle@rac1 .ssh]$ ssh rac1 date

Tue May 20 02:53:43 CST 2014

[oracle@rac1 .ssh]$ ssh rac2 date    -- no password prompt: the equivalence works

Tue May 20 02:53:55 CST 2014

[oracle@rac1 .ssh]$

Node 2

[oracle@rac2 .ssh]$ ssh rac1 date

The authenticity of host 'rac1 (192.168.56.50)' can't be established.

RSA key fingerprint is d1:04:e6:04:3c:85:b7:93:f9:aa:5f:49:70:58:88:20.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac1,192.168.56.50' (RSA) to the list of known hosts.

Tue May 20 02:55:39 CST 2014

[oracle@rac2 .ssh]$ ssh rac2 date

The authenticity of host 'rac2 (192.168.56.51)' can't be established.

RSA key fingerprint is f5:eb:27:6d:85:5e:46:cd:d0:9b:41:f4:0e:35:e8:1c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac2,192.168.56.51' (RSA) to the list of known hosts.

Tue May 20 02:55:45 CST 2014

[oracle@rac2 .ssh]$
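Once equivalence is in place, a non-interactive loop can re-check it from each node without risk of hanging at a password prompt. This is a sketch: `BatchMode=yes` makes ssh fail instead of asking, so a broken setup shows up as FAIL:

```shell
# Report, for each node, whether passwordless SSH works from here.
# BatchMode disables password prompts, so a missing key shows up as
# FAIL instead of hanging the script.
check_equiv() {
  for node in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
      echo "OK   $node"
    else
      echo "FAIL $node"
    fi
  done
}
```

Run `check_equiv rac1 rac2` as oracle on both nodes before launching the installer; the Clusterware OUI aborts if any node pair still prompts.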

 

Verifying the required packages

 

Node 1

[root@rac1 ~]# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common gnome-libs libstdc++ libstdc++-devel make

binutils-2.15.92.0.2-25

compat-db-4.1.25-9

compat-db-4.1.25-9

control-center-2.8.0-12.rhel4.5

gcc-3.4.6-11.0.1

gcc-c++-3.4.6-11.0.1

glibc-2.3.4-2.43

glibc-2.3.4-2.43

glibc-common-2.3.4-2.43

gnome-libs-1.4.1.2.90-44.2

libstdc++-3.4.6-11.0.1

libstdc++-3.4.6-11.0.1

libstdc++-devel-3.4.6-11.0.1

libstdc++-devel-3.4.6-11.0.1

make-3.80-7.EL4

 

Node 2

[root@rac2 ~]# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common gnome-libs libstdc++ libstdc++-devel make

binutils-2.15.92.0.2-25

compat-db-4.1.25-9

compat-db-4.1.25-9

control-center-2.8.0-12.rhel4.5

gcc-3.4.6-11.0.1

gcc-c++-3.4.6-11.0.1

glibc-2.3.4-2.43

glibc-2.3.4-2.43

glibc-common-2.3.4-2.43

gnome-libs-1.4.1.2.90-44.2

libstdc++-3.4.6-11.0.1

libstdc++-3.4.6-11.0.1

libstdc++-devel-3.4.6-11.0.1

make-3.80-7.EL4

[root@rac2 ~]#

The full 10gR2 prerequisite list includes a few more packages; check them the same way:

rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common gnome-libs libstdc++ libstdc++-devel make pdksh sysstat xscreensaver libaio openmotif21

 

 

 

Configuring kernel parameters

 

Node 1

[root@rac1 ~]# vi /etc/sysctl.conf    -- add the following parameters

kernel.shmall = 2097152

 

kernel.shmmax = 2147483648

 

kernel.shmmni = 4096

 

kernel.sem = 250 32000 100 128     

 

fs.file-max = 65536

 

net.ipv4.ip_local_port_range = 1024 65000

 

net.core.rmem_default = 262144

 

net.core.rmem_max = 1048576

 

net.core.wmem_default = 262144

 

net.core.wmem_max = 1048576

 

[root@rac1 ~]# /sbin/sysctl -p    -- apply the new parameters
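To confirm the values actually took effect, a sketch that compares running parameters against the expected list (`SYSCTL_CMD` is a hypothetical hook of mine so the getter can be faked in testing; multi-value keys such as kernel.sem are left out to keep the comparison simple):

```shell
# Compare running kernel parameters against expected values read as
# "key value" pairs on stdin; flag any mismatch.
SYSCTL_CMD=${SYSCTL_CMD:-"sysctl -n"}
check_params() {
  while read -r key want; do
    got=$($SYSCTL_CMD "$key" 2>/dev/null || true)
    if [ "$got" = "$want" ]; then
      echo "OK   $key = $got"
    else
      echo "DIFF $key: want '$want', got '$got'"
    fi
  done
}

check_params <<'EOF'
kernel.shmmax 2147483648
kernel.shmmni 4096
fs.file-max 65536
EOF
```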

 

Repeat the same steps on node 2.

 

 

Setting shell limits for the oracle user

 

 

 

Node 1

[root@rac1 ~]# vi /etc/security/limits.conf      -- add the following lines

oracle              soft    nproc   2047

 

oracle               hard    nproc   16384

 

oracle               soft    nofile  1024

 

oracle               hard    nofile  65536

 

 

Node 2: same configuration.

 

 

 

Node 1

[root@rac1 ~]# vi /etc/pam.d/login        -- add the following line

session    required     /lib/security/pam_limits.so

Node 2

[root@rac2 ~]# vi /etc/pam.d/login       -- add the following line

 

session    required     /lib/security/pam_limits.so

 

 

 

 

Node 1

Adjust according to the oracle user's default shell.

For the Bourne, Bash, or Korn shell:

[root@rac1 ~]# vi /etc/profile     -- add the following

if [ $USER = "oracle" ]; then

 

        if [ $SHELL = "/bin/ksh" ]; then

 

              ulimit -p 16384

 

              ulimit -n 65536

 

        else

 

              ulimit -u 16384 -n 65536

 

        fi

 

fi

 

For the C shell (csh or tcsh):

[root@rac1 ~]# vi /etc/csh.login   -- add the following

if ( $USER == "oracle" ) then

 

        limit maxproc 16384

 

        limit descriptors 65536

 

endif

 

Node 2: same configuration.

 

 

Configuring time synchronization

 

Configure node 1 as the time server:
[root@rac1 ~]#  vi /etc/ntp.conf
 
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 11 
broadcastdelay 0.008
driftfile /var/lib/ntp/drift
Start the NTP service:
[root@rac1 ~]#  /etc/init.d/ntpd start
Starting ntpd:                                             [  OK  ]
Make NTP start with the system:
[root@rac1 ~]#  chkconfig --level 345 ntpd on
[root@rac1 ~]# chkconfig --list |grep ntp
ntpd            0:off 1:off 2:off 3:on 4:on 5:on 6:off

Node 2 configuration (disable ntpd and sync from node 1 via cron instead):
[root@rac2 ~]#  chkconfig --list |grep ntpd
ntpd            0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@rac2 ~]#  chkconfig --level 345 ntpd off  
 
[root@rac2 ~]#  chkconfig --list |grep ntpd
ntpd            0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@rac2 ~]#  service ntpd stop
Shutting down ntpd:
 
[root@rac2 ~]# vi /etc/crontab
*/5 * * * * root /usr/sbin/ntpdate 192.168.56.50   # sync from node 1 every 5 minutes

 

 

 

Configuring environment variables

 

Node 1

[oracle@rac1 ~]$ vi .bash_profile

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

 

umask 022

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1

export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs

export ORACLE_SID=rac1

export PATH=$ORACLE_HOME/bin:$PATH

export TNS_ADMIN=$ORACLE_HOME/network/admin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export CLASSPATH=$ORACLE_HOME/JRE

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export TEMP=/tmp

export TMPDIR=/tmp

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

[oracle@rac1 ~]$
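A quick sketch to confirm that the profile actually exports everything the installer relies on, run as oracle after logging in again:

```shell
# Report which of the given environment variables are set.
# ORACLE_SID must differ per node (rac1 vs rac2); the rest match.
check_env() {
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -n "$val" ]; then
      echo "OK $v=$val"
    else
      echo "MISSING $v"
    fi
  done
}

check_env ORACLE_BASE ORACLE_HOME ORA_CRS_HOME ORACLE_SID
```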

 

Node 2

[root@rac2 ~]# vi .bash_profile

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

 

umask 022

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1

export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs

export ORACLE_SID=rac2

export PATH=$ORACLE_HOME/bin:$PATH

export TNS_ADMIN=$ORACLE_HOME/network/admin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export CLASSPATH=$ORACLE_HOME/JRE

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export TEMP=/tmp

export TMPDIR=/tmp

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

PATH=$PATH:$HOME/bin

".bash_profile" 26L, 854C written

[root@rac2 ~]#

 

Partitioning the shared disk

 

 

[root@rac1 mnt]# fdisk /dev/sdc

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-522, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-522, default 522):

Using default value 522

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

[root@rac1 mnt]#

[root@rac1 mnt]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1         130     1044193+  83  Linux

/dev/sda2             131        2610    19920600   8e  Linux LVM

 

Disk /dev/sdb: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1         500     4016218+  83  Linux

/dev/sdb2             501        1044     4369680   83  Linux

 

Disk /dev/sdc: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1         522     4192933+  83  Linux

[root@rac1 mnt]#

 

Choosing a storage option

 

OCFS2 

Only the OCFS2 installation steps are mentioned here; the install I actually performed uses ASM + raw devices.

Download the packages:

 

https://oss.oracle.com/projects/ocfs2-tools/files/

https://oss.oracle.com/projects/ocfs2/files/

 

 

 

 

[root@rac1 ~]# ls

anaconda-ks.cfg    ocfs2-2.6.9-89.0.11.EL-1.2.9-1.el4.x86_64.rpm

Desktop             ocfs2console-1.2.4-1.x86_64.rpm

install.log         ocfs2-tools-1.2.4-1.x86_64.rpm

install.log.syslog

[root@rac1 ~]#

[root@rac1 ~]# rpm -ivh ocfs2-tools-1.2.4-1.x86_64.rpm

[root@rac1 ~]# rpm -ivh ocfs2-2.6.9-89.0.11.EL-1.2.9-1.el4.x86_64.rpm

[root@rac1 ~]# rpm -ivh ocfs2console-1.2.4-1.x86_64.rpm

Install the same packages on node 2.

 

 

Create the partition

[root@rac1 ~]# fdisk /dev/sdc   -- the disk is shared, so partitioning on one node is enough

 

Reboot the machine.

 

Open Xmanager - Passive and set the DISPLAY variable

 

[root@rac1 ~]# export DISPLAY=192.168.56.1:0.0

Start the graphical console:

[root@rac1 ~]# ocfs2console

Under Tasks, format the two volumes;
then under Cluster, choose Configure Nodes and add both nodes.

Next choose Cluster -> Propagate Configuration

to copy node 1's configuration file to node 2,

entering node 2's root password when prompted.

Verify:

[root@rac1 ~]# cat /etc/ocfs2/cluster.conf

[root@rac2 ~]# cat /etc/ocfs2/cluster.conf

 

Create the mount points and mount the volumes

[root@rac1 ~]# mkdir -p /orac/orahome

[root@rac1 ~]# mkdir -p /orac/oradata

[root@rac1 ~]# mount -t ocfs2 /dev/sdc1 /orac/orahome

[root@rac1 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdc2 /orac/oradata

[root@rac1 ~]# mounted.ocfs2 -f   -- show OCFS2 volume status

Start OCFS2 at boot (run on both nodes)

[root@rac1 /]# /etc/init.d/o2cb configure

[root@rac1 /]# vi /etc/fstab

/dev/sdc1               /orac/orahome   ocfs2   _netdev                      0 0

/dev/sdc2               /orac/oradata   ocfs2   _netdev,datavolume,nointr    0 0

Mount on node 2:

[root@rac2 /]# /etc/init.d/o2cb online

[root@rac2 ~]# mount -t ocfs2 /dev/sdc1 /orac/orahome

[root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdc2 /orac/oradata

 

Fix ownership and permissions

[root@rac1 orcl]# chown root.oinstall crs

[root@rac1 orcl]# chown oracle.oinstall oradata

[root@rac1 orcl]# chown oracle.oinstall orahome

[root@rac1 orcl]# chmod 775 crs

[root@rac1 orcl]# chmod 775 oradata

[root@rac1 orcl]# chmod 775 orahome

 

 

 

 

 

ASM

 

Associate raw devices with the shared-disk partitions on every node

[root@rac1 ~]# vi /etc/sysconfig/rawdevices

 

/dev/raw/raw1  /dev/sdb1

/dev/raw/raw2  /dev/sdb2

Restart the raw devices service

[root@rac1 ~]# /sbin/service rawdevices restart

Assigning devices:

           /dev/raw/raw1  -->   /dev/sdb1

/dev/raw/raw1:  bound to major 8, minor 17

           /dev/raw/raw2  -->   /dev/sdb2

/dev/raw/raw2:  bound to major 8, minor 18

done

[root@rac1 ~]#

On every node, set the raw devices' owner and permissions

[root@rac1 ~]# chown root:oinstall /dev/raw/raw1

[root@rac1 ~]# chmod 660 /dev/raw/raw1

[root@rac1 ~]# chown oracle:oinstall /dev/raw/raw2

[root@rac1 ~]# chmod 644 /dev/raw/raw2

 

[root@rac1 ~]#  vi /etc/rc.local    -- make the settings persist across reboots

 

touch /var/lock/subsys/local

chown root:oinstall /dev/raw/raw1

chmod 660 /dev/raw/raw1

chown oracle:oinstall /dev/raw/raw2

chmod 644 /dev/raw/raw2

 

Install ASMLib on every node

[root@rac1 RPMS]# rpm -ivh  oracleasm-support-2.1.3-1.el4.x86_64.rpm

 

[root@rac1 RPMS]# rpm -ivh oracleasm-2.6.9-89.0.0.0.1.EL-2.0.5-1.el4.x86_64.rpm

 

[root@rac1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el4.x86_64.rpm

 

Configure ASMLib on every node

[root@rac1 mnt]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: oracle

Default group to own the driver interface []: dba

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

[root@rac1 mnt]#

[root@rac1 mnt]# /etc/init.d/oracleasm enable

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

[root@rac1 mnt]#

 

Create the ASM disks

[root@rac1 ~]# /etc/init.d/oracleasm  createdisk VOL1 /dev/sdc1

Marking disk "VOL1" as an ASM disk: [  OK  ]

[root@rac1 ~]#

List all configured disks:

[root@rac1 mnt]# /etc/init.d/oracleasm listdisks

VOL1

VOL2

[root@rac1 mnt]#
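With more than a couple of volumes the createdisk calls are easy to mistype; a dry-run sketch that prints one command per volume/partition pair (the VOL2 mapping below is illustrative, not from the transcript):

```shell
# Emit one oracleasm createdisk command per "VOLUME partition" pair.
# Review the output, then pipe it to sh as root on ONE node only;
# the other nodes just need /etc/init.d/oracleasm scandisks.
emit_createdisk() {
  while read -r vol dev; do
    echo "/etc/init.d/oracleasm createdisk $vol $dev"
  done
}

emit_createdisk <<'EOF'
VOL1 /dev/sdc1
VOL2 /dev/sdd1
EOF
```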

Create the installation directory

[root@rac1 ~]# mkdir -p /u01/app/oracle

[root@rac1 oracle]# chown oracle:oinstall /u01 -R

Reboot the OS.

 

Starting the installation

 

Enable user equivalence on every node

# su - oracle 

 

[oracle@rac2 ~]$ export DISPLAY=192.168.56.1:0.0

[oracle@rac2 ~]$  exec /usr/bin/ssh-agent $SHELL

[oracle@rac2 ~]$ /usr/bin/ssh-add

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

[oracle@rac2 ~]$

 

 

First decompress the .cpio.gz archives with gunzip,

then unpack the resulting cpio archives with: cpio -idcmv < <archive name>

 

1565626 blocks

[oracle@rac1 ~]$ ls

10201_clusterware_linux_x86_64.cpio  clusterware

10201_database_linux_x86_64.cpio     database

[oracle@rac1 ~]$

Install Clusterware on node 1

[oracle@rac1 ~]$ cd clusterware/

[oracle@rac1 clusterware]$ ls

cluvfy  install   rootpre  runInstaller  upgrade

doc     response  rpm      stage         welcome.html

[oracle@rac1 clusterware]$ ./runInstaller

********************************************************************************

 

Has 'rootpre.sh' been run by root? [y/n] (n)

Enter y here.

 

CRS installation path: must match the variable set earlier.

All system checks pass; click Next.

The cluster configuration here must agree with /etc/hosts; add an entry for every node.

When all nodes are added, click Next.

Change the eth0 interface type to Public.

Click Next once the change is made.

Choose a redundancy level to suit your needs; this screen sets the OCR location. Normal redundancy adds an OCR mirror location, while external redundancy keeps only the single OCR location; the effect is the same. I chose external redundancy.

Choose the redundancy level the same way on the next screen, which sets the voting disk location.

A summary of all your settings appears: click Install.

Wait for the installation to finish.

The installer then asks you to run two scripts as root -- be sure to run them on node 1 first:

[root@rac1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oracle/oraInventory to 770.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@rac1 ~]#

[root@rac2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oracle/oraInventory to 770.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@rac2 ~]#

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

WARNING: directory '/u01' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

 

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

WARNING: directory '/u01' is not owned by root

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: rac1 priv1 rac1

node 2: rac2 priv2 rac2

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /dev/raw/raw2

Format of 1 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        rac1

CSS is inactive on these nodes.

        rac2

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

[root@rac1 ~]#

[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

WARNING: directory '/u01' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

 

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

WARNING: directory '/u01' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: rac1 priv1 rac1

node 2: rac2 priv2 rac2

clscfg: Arguments check out successfully.

 

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        rac1

        rac2

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

[root@rac2 ~]#

vipca must be configured manually at this point:

[root@rac1 ~]#  cd /u01/app/oracle/product/10.2.0/crs/bin

[root@rac1 bin]# export DISPLAY=192.168.56.1:0.0

[root@rac1 bin]# ./vipca

When the configuration completes, return to the installer and click OK; an automatic verification runs and reports success.

The vipca steps are described again in the "安裝錯誤" (installation errors) section below.

 

 

Next, install the database software

 

Run on node 1:

[oracle@rac1 clusterware]$ cd ..

[oracle@rac1 ~]$ ls

10201_clusterware_linux_x86_64.cpio  clusterware

10201_database_linux_x86_64.cpio     database

[oracle@rac1 ~]$ cd database/

[oracle@rac1 database]$ ls

doc  install  response  runInstaller  stage  welcome.html

[oracle@rac1 database]$ ./runInstaller

 

Choose the edition: Enterprise, Standard, or Custom. I chose Enterprise.

Choose the installation path: the ORACLE_HOME environment variable.

Choose a cluster installation and select all nodes.

The prerequisite checks run here. Mine failed: swap was only 2047 MB, short of the required 3006 MB.

Fix it on every node:

[root@rac1 ~]# free -m

             total       used       free     shared    buffers     cached

Mem:          2007       1511        495          0         32       1304

-/+ buffers/cache:        174       1832

Swap:         2047          0       2047

[root@rac1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VGoracle-LVroot

                       14G  5.6G  6.9G  45% /

/dev/sda1            1012M   39M  923M   4% /boot

none                 1004M     0 1004M   0% /dev/shm

[root@rac1 ~]# mkdir /swapimage

[root@rac1 ~]# cd /swapimage/

[root@rac1 swapimage]# dd if=/dev/zero of=/swapimage/swap bs=1024 count=1024000

1024000+0 records in

1024000+0 records out

[root@rac1 swapimage]# mkswap /swapimage/swap

Setting up swapspace version 1, size = 1048571 kB

[root@rac1 swapimage]# free -m

             total       used       free     shared    buffers     cached

Mem:          2007       1978         28          0         11       1773

-/+ buffers/cache:        193       1813

Swap:         3047          0       2047
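Note that the transcript runs `mkswap` but never shows the activation step, and the extra space would be lost at reboot. A complete sequence would be the following (a sketch, printed rather than executed so it can be reviewed first; run the lines as root):

```shell
# Print the full add-a-swap-file sequence, including the swapon step
# not shown in the transcript and a persistent /etc/fstab entry.
cat <<'EOF'
dd if=/dev/zero of=/swapimage/swap bs=1024 count=1024000
mkswap /swapimage/swap
swapon /swapimage/swap
echo '/swapimage/swap swap swap defaults 0 0' >> /etc/fstab
EOF
```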

Choose "Install database software only".

Click Install.

The installation can now be retried.

Run the root script:

[root@rac1 ~]# cd /u01/app/oracle/product/10.2.0/db_1/

[root@rac1 db_1]# ./root.sh

Running Oracle10 root.sh script...

 

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

 

[root@rac1 db_1]#

Configure the listener with netca

Accept the defaults throughout.

At the end, you can configure local naming.

 

Create the database

[oracle@rac1 ~]$ dbca

 

Installation errors

 

Running root.sh on the second node reports an error:

Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

 

Workaround: configure vipca manually

root@rac1 # cd /u01/app/oracle/product/10.2.0/crs/bin
root@rac1 # ./vipca

Launch a terminal through Xmanager and start the vipca GUI. Click Next; all available network interfaces are listed. Since eth0 is configured as the public interface, select eth0 and click Next. On the configuration screen, enter vip1 and vip2 under IP Alias Name and the VIP addresses under IP Address. If your configuration is correct, Oracle fills in the remaining three fields automatically after you complete the first IP. Click Next to reach the summary page; after checking it, click Finish.

Oracle then runs six steps: Create VIP application resource, Create GSD application resource, Create ONS application resource, Start VIP application resource, Start GSD application resource, and Start ONS application resource.

Once all succeed, click OK to finish the VIPCA configuration.

 

A prerequisite check fails:

Checking Network Configuration requirements ...

Check complete. The overall result of this check is: Not executed <<<<

Recommendation: Oracle supports installations on systems with DHCP-assigned public IP addresses. However, the primary network interface on the system should be configured with a static IP address in order for the Oracle Software to function properly. See the Installation Guide for more details on installing the software on systems configured with DHCP. (In short: use static IP addresses.)

 

 

Workaround:

Set eth0 to a static address on every node:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

IPADDR=192.168.56.50

NETMASK=255.255.255.0

GATEWAY=192.168.56.1

#HWADDR=00:0c:29:c9:31:a9

ONBOOT=yes

TYPE=Ethernet

 

 

 

Configuring the client

 

On Windows, edit the C:\Windows\System32\drivers\etc\hosts file

and add the entries from /etc/hosts on the Linux nodes:

192.168.56.50           rac1

192.168.56.52           vip1

10.10.10.1              priv1

192.168.56.51           rac2

192.168.56.53           vip2

10.10.10.2              priv2

 

From a Windows cmd prompt, ping vip1 and vip2 to check that they respond.

Once the pings succeed, edit tnsnames.ora in the Oracle 10g client

and add the rac entry from /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora on the Linux host:

 

RAC =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = vip1)(PORT = 1521))

    (ADDRESS = (PROTOCOL = TCP)(HOST = vip2)(PORT = 1521))

    (LOAD_BALANCE = yes)

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = rac)

      (FAILOVER_MODE =

        (TYPE = SELECT)

        (METHOD = BASIC)

        (RETRIES = 180)

        (DELAY = 5)

      )

    )

  )
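Mismatched parentheses in tnsnames.ora are a classic cause of ORA-12154; a tiny sanity-check sketch:

```shell
# Check that '(' and ')' counts match in a tnsnames.ora file.
balanced() {
  opens=$(($(tr -cd '(' < "$1" | wc -c)))
  closes=$(($(tr -cd ')' < "$1" | wc -c)))
  if [ "$opens" -eq "$closes" ]; then
    echo "balanced: $opens pairs"
  else
    echo "UNBALANCED: $opens '(' vs $closes ')'"
  fi
}
```

Use it as `balanced $ORACLE_HOME/network/admin/tnsnames.ora` (or against the client-side copy on Windows via a similar check).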

 

Failover test query

The following SQL shows a session's failover type, failover method, and whether a failover has occurred; it is used throughout this example.

 

 

Sql>COLUMN instance_name    FORMAT a13 

Sql>COLUMN host_name        FORMAT a9 

Sql>COLUMN failover_method  FORMAT a15 

Sql>COLUMN failed_over      FORMAT a11 

Sql>SELECT instance_name,host_name, 

 NULL AS failover_type,NULL AS failover_method, 

 NULL AS failed_over 

 FROM v$instance 

UNION 

 SELECT NULL,NULL,failover_type,failover_method,failed_over 

 FROM v$session WHERE username = 'SYSTEM';

 

 

 

 

 

Basic cluster commands

 

Stopping the Oracle RAC 10g environment

First stop the Oracle instance. Once the instance (and its related services) are down, stop the ASM instance. Finally, stop the node applications (virtual IP, GSD, TNS listener, and ONS).

$ export ORACLE_SID=rac1 

$ emctl stop dbconsole 

$ srvctl stop instance -d rac -i rac1

$ srvctl stop asm -n rac1 

$ srvctl stop nodeapps -n rac1 
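Since the ordering matters, it can help to wrap the sequence in a function. This is a sketch of mine, not an Oracle-supplied tool; the `DRYRUN` switch is my addition so the commands can be previewed first:

```shell
# Stop one RAC node in the required order: instance, then ASM,
# then node applications. Set DRYRUN=1 to print instead of run.
stop_rac_node() {
  db=$1; inst=$2; node=$3
  run() { if [ -n "$DRYRUN" ]; then echo "$@"; else "$@"; fi; }
  run srvctl stop instance -d "$db" -i "$inst"
  run srvctl stop asm -n "$node"
  run srvctl stop nodeapps -n "$node"
}

DRYRUN=1 stop_rac_node rac rac1 rac1
```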

Starting the Oracle RAC 10g environment

First start the node applications (virtual IP, GSD, TNS listener, and ONS). Once they are up, start the ASM instance; finally, start the Oracle instance (and its related services) and the Enterprise Manager database console.

$ export ORACLE_SID=rac1 

 

$ srvctl start nodeapps -n rac1 

 

$ srvctl start asm -n rac1 

 

$ srvctl start instance -d rac -i rac1 

 

$ emctl start dbconsole 

 

Starting/stopping all instances with SRVCTL

 

 

 

$ srvctl start database -d rac 

$ srvctl stop database -d rac 

 

Status of all instances and services

 

 

 

$ srvctl status database -d rac 

Status of a single instance

 

 

$ srvctl status instance -d rac -i rac2 

Status of a named service across the database

 

 

$ srvctl status service -d rac -s rac 

Status of node applications on a particular node

 

 

$ srvctl status nodeapps -n rac1 

Status of the ASM instance

 

 

$ srvctl status asm -n rac1 

List all configured databases

 

 

$ srvctl config database 

Show the configuration of the RAC database

 

 

$ srvctl config database -d rac 

Show all services of the specified cluster database

 

 

$ srvctl config service -d rac

Show the node application configuration (VIP, GSD, ONS, listener)

 

 

$ srvctl config nodeapps -n rac1 -a -g -s -l 

VIP exists.: /vip-rac1/192.168.1.200/255.255.255.0/eth0:eth1 

GSD exists. 

ONS daemon exists. 

Listener exists. 

Show the ASM instance configuration

 

 

$ srvctl config asm -n rac1 

+ASM1 /home/oracle/product/10.2.0/db_1

 

From the ITPUB blog; original link: http://blog.itpub.net/29532781/viewspace-1174777/. Please credit the source when republishing.
