10G RAC RAW+ASM rhel-server-5.5-x86_64

Posted by Appleses on 2016-01-30

Since my instructor at school only covered the 11g RAC installation, I wanted to try 10g on my own. I ran into a lot of errors along the way and borrowed from many documents written by earlier authors; no plagiarism is intended, this is compiled purely for my own learning. There may still be plenty of mistakes, and corrections are welcome.
Here I provide all of the installation packages, disc images, and so on that I needed during the install, to save you the long hunt I went through to find them.
Virtual machine image:
http://pan.baidu.com/s/1dDvNcop
ASM software:
http://pan.baidu.com/s/1hGbz4
10g linux x86_64 clusterware:
http://pan.baidu.com/s/1hqHvDGg
rhel-server-5.5-x86_64-dvd.iso
http://pan.baidu.com/s/1gdor2L1
10g Oracle software for Linux x86_64:
http://pan.baidu.com/s/1bn6a6qb



1. Modify the IP addresses and hostnames

Add the following entries to /etc/hosts on both nodes:

127.0.0.1               localhost

192.168.6.103      rh1

192.168.6.104      rh1-vip

10.10.10.3             rh1-priv

 

192.168.6.105      rh2

192.168.6.106      rh2-vip

10.10.10.5             rh2-priv

 

[root@rh1 u01]# vi /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=rh1

 

[root@rh1 u01]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

ONBOOT=yes

IPADDR=192.168.6.103

GATEWAY=192.168.6.1

NETMASK=255.255.255.0

 

[root@rh1 u01]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=static

ONBOOT=yes

IPADDR=10.10.10.3

NETMASK=255.255.255.0
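With both interface files in place, here is a quick sketch of my own for applying and checking the change (standard RHEL 5 commands; adjust the prompt for each node):

[root@rh1 ~]# service network restart                  # re-read ifcfg-eth0 and ifcfg-eth1
[root@rh1 ~]# ifconfig eth0 | grep 'inet addr'         # expect 192.168.6.103
[root@rh1 ~]# ifconfig eth1 | grep 'inet addr'         # expect 10.10.10.3
[root@rh1 ~]# ping -c 2 10.10.10.5                     # node 2's private address, once node 2 is up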


2. Add swap space

[root@rh1 u01]# dd if=/dev/zero of=/u01/swap1 bs=1024k count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 29.9432 seconds, 71.7 MB/s

 

[root@rh1 u01]# mkswap -c /u01/swap1

Setting up swapspace version 1, size = 2147479 kB

 

[root@rh1 u01]# swapon /u01/swap1

 

[root@rh1 u01]# vi /etc/fstab


LABEL=/              /           ext3     defaults           1 1
LABEL=/u01           /u01        ext3     defaults           1 2
tmpfs                /dev/shm    tmpfs    defaults,size=2g   0 0
devpts               /dev/pts    devpts   gid=5,mode=620     0 0
sysfs                /sys        sysfs    defaults           0 0
proc                 /proc       proc     defaults           0 0
LABEL=SWAP-sda2      swap        swap     defaults           0 0
/dev/sda3            /soft       ext3     defaults           0 0
/u01/swap1           swap        swap     defaults           0 0



 

[root@rh2 /]# free -m

             total       used       free     shared    buffers     cached

Mem:          1341       1312         28          0          4       1062

-/+ buffers/cache:        244       1096

Swap:         4095          0       4095
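As a sanity check, the short sketch below (standard commands, nothing specific to this setup) confirms the swap file is active and will survive a reboot:

[root@rh1 ~]# swapon -s               # /u01/swap1 should be listed with type "file"
[root@rh1 ~]# grep swap1 /etc/fstab   # the entry added above makes it permanent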


3. Create the users, groups, and directories, set permissions, and configure the oracle user's environment variables

groupadd -g 200 oinstall

groupadd -g 201 dba

useradd -u 200 -g oinstall -G dba oracle

passwd oracle

 

Node 1:

su - oracle

vi .bash_profile

PATH=$PATH:$HOME/bin

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1

export ORA_CRS_HOME=/u01/crs_1

export ORACLE_SID=prod1

export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin

export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

export PATH=${PATH}:$ORACLE_BASE/common/oracle/bin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export EDITOR=vi

 

On node 2, set ORACLE_SID=prod2; everything else stays the same.

 

Create the directories:

mkdir -p /u01/app/oracle/product/10.2.0/db_1

mkdir -p /u01/crs_1

mkdir -p /u01/app/oraInventory

chmod -R 775 /u01/app/oracle/product/10.2.0/db_1

chmod -R 775 /u01/app/oraInventory/

chmod -R 775 /u01/crs_1/
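One suggested addition of my own: the installers run as oracle and must be able to write into these directories, so they should also be owned by oracle:oinstall (the steps above only set the mode). A sketch, assuming the paths created above:

chown -R oracle:oinstall /u01/app
chown -R oracle:oinstall /u01/crs_1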


4. Set up user equivalence (passwordless SSH)

[oracle@rh1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

32:8a:e1:8c:0d:70:86:8e:88:97:5a:3a:15:fd:a3:dd oracle@rh1

[oracle@rh1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

d8:88:23:9b:6f:cd:74:81:8f:f9:13:16:98:5f:8a:8f oracle@rh1

Run the same commands on node 2:

[oracle@rh2 ~]$ ssh-keygen -t rsa

[oracle@rh2 ~]$ ssh-keygen -t dsa

 

Node 1:

[oracle@rh1 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

[oracle@rh1 ~]$ cat .ssh/id_dsa.pub >> .ssh/authorized_keys

[oracle@rh1 ~]$ ssh rh2 cat .ssh/id_rsa.pub >> .ssh/authorized_keys  # this appends node 2's id_rsa.pub to node 1's authorized_keys file

oracle@rh2's password:

[oracle@rh1 ~]$ ssh rh2 cat .ssh/id_dsa.pub >> .ssh/authorized_keys

oracle@rh2's password:

[oracle@rh1 ~]$ scp .ssh/authorized_keys rh2:~/.ssh  # copy the authorized_keys file, which now holds both nodes' keys, to node 2 so each side has the full set

oracle@rh2's password:

authorized_keys 100% 1988 1.9KB/s 00:00


Verify the trust relationship:

ssh rh1 date

ssh rh1-priv date

ssh rh2 date

ssh rh2-priv date

Verify on both nodes (the first connection to each address asks you to accept the host key; answer yes so later checks run without prompts).
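A small sketch of my own for running all four checks in one go from a node (repeat it on the other node); the first pass also caches the host keys:

[oracle@rh1 ~]$ for h in rh1 rh1-priv rh2 rh2-priv; do ssh $h date; done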


5. Modify kernel parameters

[root@rh1:/root]# vi /etc/sysctl.conf   # append the following parameters at the end

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 536870912

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

[root@rh1:/root]# sysctl -p   # takes effect immediately

 

[root@rh1:/root]# vi /etc/security/limits.conf               # shell resource limits (processes, open files) for the oracle user

oracle  soft    nproc   2047

oracle  hard    nproc   16384

oracle  soft    nofile  1024

oracle  hard    nofile  65536

oracle  soft    stack   10240

 

 

[root@rh1:/root]# vi /etc/pam.d/login         # the PAM configuration file for login

Add:

session required /lib/security/pam_limits.so           # activates the limits above automatically at login

 

[root@rh1:/root]# vi /etc/profile

Add:

 

if [ $USER = "oracle" ]; then

        if [ $SHELL = "/bin/ksh" ]; then

                ulimit -p 16384

                ulimit -n 65536

        else

                ulimit -u 16384 -n 65536

        fi

fi
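To confirm the limits actually take effect, a quick check after logging in again as oracle (standard bash ulimit flags):

[root@rh1 ~]# su - oracle
[oracle@rh1 ~]$ ulimit -n    # expect 65536 (open files)
[oracle@rh1 ~]$ ulimit -u    # expect 16384 (max user processes)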


6. Time synchronization

Use NTP.

vi /etc/ntp.conf

Node 1 acts as the server:

broadcastclient

driftfile /var/lib/ntp/drift

server 127.127.1.0

Node 2 is the client:

driftfile /var/lib/ntp/drift

server 192.168.6.103      # node 1's IP
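For reference, a fuller sketch of the two /etc/ntp.conf files; the restrict and fudge lines are common additions I am assuming here rather than something required by the steps above:

# node 1 (server) /etc/ntp.conf
restrict default nomodify notrap
server 127.127.1.0                 # local clock driver as the time source
fudge  127.127.1.0 stratum 10
broadcastclient
driftfile /var/lib/ntp/drift

# node 2 (client) /etc/ntp.conf
server 192.168.6.103 prefer        # node 1's public IP
driftfile /var/lib/ntp/drift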

 

To make the NTP service start automatically at boot, run:

# chkconfig ntpd on   

Commands to start/stop/restart NTP:

# /etc/init.d/ntpd start   

# /etc/init.d/ntpd stop   

# /etc/init.d/ntpd restart   

#service ntpd restart

To write the synchronized time back to the hardware clock (CMOS):

vi /etc/sysconfig/ntpd   

SYNC_HWCLOCK=yes   

 

After every configuration change, restart the service for it to take effect.

Use the following command to check whether the NTP service is running; it should return a process ID:

# pgrep ntpd   

Use the following command to check the synchronization status against the time server:

# ntpq -p   

ntpstat also shows some synchronization status, and netstat -ntlup shows which ports are in use.

 

After setup, the client needs about 5-10 minutes before it starts updating its time from the server.

 

Some additional notes (fairly important)

 

1. What is the driftfile in the configuration file?

Every system clock's frequency has a small error, which is why a machine drifts out of accuracy after running for a while. NTP automatically measures the clock's error and corrects it, but this is a lengthy process, so it records the measured error in the driftfile first. That way the previous calculation is not lost even after a reboot.

 

2. How do I synchronize the hardware clock?

NTP normally only synchronizes the system clock. If we also want to synchronize the RTC (hwclock), we just need to turn on the option below.

Code:

# vi /etc/sysconfig/ntpd

SYNC_HWCLOCK=yes


7. Partition the shared disk

 

For OCR we carve out two 100 MB partitions to hold the OCR configuration, and for the voting disk three 100 MB partitions. The ASM data disk is split in two: 10 GB for data files and 5 GB for the flash recovery area. The backup area gets a single partition.

Partitioning only needs to be done on one node, because the disks are shared.

[root@rh1 soft]# fdisk /dev/sdc

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

 

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +100m

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 2

First cylinder (14-2610, default 14):

Using default value 14

Last cylinder or +size or +sizeM or +sizeK (14-2610, default 2610): +100m

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 3

First cylinder (27-2610, default 27):

Using default value 27

Last cylinder or +size or +sizeM or +sizeK (27-2610, default 2610): +100m

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

e

Selected partition 4

First cylinder (40-2610, default 40):

Using default value 40

Last cylinder or +size or +sizeM or +sizeK (40-2610, default 2610):

Using default value 2610

 

Command (m for help): n

First cylinder (40-2610, default 40):

Using default value 40

Last cylinder or +size or +sizeM or +sizeK (40-2610, default 2610): +100m

 

Command (m for help): n

First cylinder (53-2610, default 53):

Using default value 53

Last cylinder or +size or +sizeM or +sizeK (53-2610, default 2610): +100m

 

Command (m for help): n

First cylinder (66-2610, default 66):

Using default value 66

Last cylinder or +size or +sizeM or +sizeK (66-2610, default 2610): +10g

 

Command (m for help): n

First cylinder (1283-2610, default 1283):

Using default value 1283

Last cylinder or +size or +sizeM or +sizeK (1283-2610, default 2610): +5g

 

Command (m for help): n

First cylinder (1892-2610, default 1892):

Using default value 1892

Last cylinder or +size or +sizeM or +sizeK (1892-2610, default 2610):

Using default value 2610

 

Command (m for help): p

 

Disk /dev/sdc: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks     Id  System
/dev/sdc1               1          13      104391     83  Linux
/dev/sdc2              14          26      104422+    83  Linux
/dev/sdc3              27          39      104422+    83  Linux
/dev/sdc4              40        2610    20651557+     5  Extended
/dev/sdc5              40          52      104391     83  Linux
/dev/sdc6              53          65      104391     83  Linux
/dev/sdc7              66        1282     9775521     83  Linux
/dev/sdc8            1283        1891     4891761     83  Linux
/dev/sdc9            1892        2610     5775336     83  Linux
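Since node 2 already had /dev/sdc attached while the partitions were being created, it may not see the new partition table until it re-reads it. A hedged sketch (partprobe ships with RHEL 5's parted package; a reboot of node 2 works too):

[root@rh2 ~]# partprobe /dev/sdc
[root@rh2 ~]# fdisk -l /dev/sdc      # should now list sdc1 through sdc9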


8. Configure the raw devices

 

A raw device is a device accessed in character mode, meaning reads and writes bypass the buffer cache. Under Linux, disks are only exposed as block devices; to access them in character mode you must configure the raw device service, and the oracle user must have permission to access these raw devices.

 

Do the following on both nodes:

 

1. Edit the /etc/udev/rules.d/60-raw.rules file

 

[root@rh1 soft]# vi /etc/udev/rules.d/60-raw.rules

ACTION=="add", KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", KERNEL=="sdc2",RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", KERNEL=="sdc3",RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", KERNEL=="sdc5",RUN+="/bin/raw /dev/raw/raw4 %N"

ACTION=="add", KERNEL=="sdc6",RUN+="/bin/raw /dev/raw/raw5 %N"

ACTION=="add",KERNEL=="raw[1-5]", OWNER="oracle", GROUP="oinstall", MODE="660"

 

2. Restart the udev service:

[root@rac1 ~]# start_udev

Starting udev:         [  OK  ]

 

3. Check the raw devices:

 

[root@rh1 soft]# ls -lrt /dev/raw

total 0

crw-rw---- 1 oracle oinstall 162, 5 May 17 15:02 raw5

crw-rw---- 1 oracle oinstall 162, 3 May 17 15:02 raw3

crw-rw---- 1 oracle oinstall 162, 2 May 17 15:02 raw2

crw-rw---- 1 oracle oinstall 162, 1 May 17 15:02 raw1

crw-rw---- 1 oracle oinstall 162, 4 May 17 15:02 raw4
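An optional double-check of my own that each binding points at the intended partition; raw -qa queries the same bindings the udev rules created:

[root@rh1 soft]# raw -qa             # each /dev/raw/rawN should report the major/minor of its sdc partition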


9. Install the ASM software
Attaching the ASM packages here: I downloaded them many times and configure always failed after installation, until I finally found a set that works. The earlier versions were probably simply wrong for this kernel.
http://pan.baidu.com/s/1bnthLYB

Install it on both nodes.

[root@rh2 asm]# rpm -ivh *

warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-support      ########################################### [ 20%]

   2:oracleasm-2.6.18-194.el########################################### [ 40%]

   3:oracleasm-2.6.18-194.el########################################### [ 60%]

   4:oracleasm-2.6.18-194.el########################################### [ 80%]

   5:oracleasmlib           ########################################### [100%]

After installation, run configure on every node (answers: oracle, dba, y, y).

[root@rh2 asm]# service oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface [oracle]:

Default group to own the driver interface [dba]:

Start Oracle ASM library driver on boot (y/n) [y]:

Scan for Oracle ASM disks on boot (y/n) [y]:

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver:                     [  OK  ]

Scanning the system for Oracle ASMLib disks:      [  OK  ]

                                                          

Create the ASM disks

Create them on node 1:

[root@rh1 asm]# service oracleasm createdisk ASM_DATA1 /dev/sdc7

Marking disk "ASM_DATA1" as an ASM disk:                   [  OK  ]

[root@rh1 asm]# service oracleasm createdisk ASM_DATA2 /dev/sdc8

Marking disk "ASM_DATA2" as an ASM disk:                   [  OK  ]

[root@rh1 asm]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

 

Node 2 only needs to scan:

[root@rh2 asm]# service oracleasm listdisks

[root@rh2 asm]# service oracleasm scandisks

Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@rh2 asm]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

10. Install Clusterware
http://download.oracle.com/otn/linux/oracle10g/10201/10201_clusterware_linux_x86_64.cpio.gz

Oracle 10g was released before RHEL 5 was supported, so either edit the oraparam.ini file inside the installation directory and change redhat-4 to redhat-5 (locate it with find . -name oraparam.ini), or temporarily change /etc/redhat-release to report release 4.
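A sketch of the oraparam.ini edit; the exact contents of the [Certified Versions] line can differ between media, so treat the value below as illustrative:

[oracle@rh1 clusterware]$ find . -name oraparam.ini      # locate it in the unpacked media
[oracle@rh1 clusterware]$ vi install/oraparam.ini
# in the [Certified Versions] section, append redhat-5 to the existing list, e.g.:
# LINUX=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5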

 

Has 'rootpre.sh' been run by root? [y/n] (n)

[root@rh1 rootpre]# sh rootpre.sh

No OraCM running

Running rootpre.sh gives this message.

Posts online say it can be ignored.

 

The pre-installation verification then reports another issue:

[oracle@rh1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rh1,rh2 -verbose

ERROR:

Could not find a suitable set of interfaces for VIPs.

Posts online say this can be ignored; running vipca later takes care of it.

The paths must match the directories we created earlier.





By default only one node is shown here; the second node has to be added manually.



Specify the interface types. The installer scans in every available network interface, but we only actually need eth0 and eth1, so manually mark the unused interfaces as Do Not Use, and remember to change eth0 to Public!














Run the scripts as root on both nodes, finishing one node completely before starting the next.
orainstRoot.sh

Changing permissions of /u01/app/oracle/oraInventory to 770.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.

The execution of the script is complete

 

root.sh

WARNING: directory '/u01' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

 

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/u01' is not owned by root

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node :

node 1: rh1 rh1-priv rh1

node 2: rh2 rh2-priv rh2

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /dev/raw/raw3

Now formatting voting device: /dev/raw/raw4

Now formatting voting device: /dev/raw/raw5

Format of 3 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

rh1

CSS is inactive on these nodes.

rh2

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

 

Running root.sh on node 2 threw an error:

Running vipca(silent) for configuring nodeapps

/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

 

Fixing this takes three steps:

1. On every node, edit the srvctl and vipca files in the $CRS_HOME/bin directory: add an unset LD_ASSUME_KERNEL statement before the ARGUMENTS="" line in vipca and after the export LD_ASSUME_KERNEL line in srvctl.

2. Use the oifcfg tool in $CRS_HOME/bin to configure the public and private (interconnect) interfaces.

3. On any one node, as root, run vipca manually. Once the correct private IP and VIP information has been configured, the CRS installation can complete. The procedure is as follows:


1.1 Edit the vipca file and add the unset LD_ASSUME_KERNEL line (marked below) before the ARGUMENTS="" line.

[root@rh2 ~]# cd /u01/crs_1/
[root@rh2 crs_1]# cd bin/
[root@rh2 bin]# cp vipca vipca.bak
[root@rh2 bin]# vi vipca


#!/bin/sh
#
# $Header: vipca.sbs 10-dec-2004.15:30:55 khsingh Exp $
#
# vipca
#
# Copyright (c) 2001, 2004, Oracle. All rights reserved.
#
# NAME
# vipca - Node Apps and VIPs Configuration Assistant
#
# DESCRIPTION
# Oracle Cluster Node Applications Configuration Assistant is
# used to configure the Node Applications and the virtual IPs.
#
# MODIFIED (MM/DD/YY)
# khsingh 12/10/04 - fix LINUX workaround for bug 4054430
# khsingh 11/22/04 - remove obsolete files
# rxkumar 11/29/04 - fix bug4024708
# khsingh 10/07/04 - add workaround for bug (3937317)
# khsingh 09/27/04 - changes for PLE (3914991)
# khsingh 09/13/04 - add orahome arg back
# khsingh 08/16/04 - remove orahome arg
# khsingh 12/07/03 - change oembase to oemlt
# khsingh 10/31/03 - fix ice browser
# khsingh 10/29/03 - add jewt var
# khsingh 08/08/03 - fix ==
# jtellez 06/10/03 - change to srvm_trace
# rdasari 06/02/03 - set LD_LIBRARY_PATH appropriately for 32 and 64 bit solaris platforms
# jtellez 11/15/02 - change SRVM_DEFS to SRVM_PROPERTY_DEFS
# jtellez 11/04/02 - Add srvm_defs
# jtellez 10/10/02 - fix srvmhas
# jtellez 10/04/02 - srvmhas to jlib
# jtellez 09/24/02 - add tracing
# jtellez 09/09/02 - add versions to jars
# rdasari 08/07/02 - use java instead of jre
# jtellez 08/08/02 - enhance comment
# jtellez 08/06/02 - add GUI jars
# rdasari 08/01/02 - use jdk131
# jtellez 07/26/02 - add srvmhas.jar to classpath
# jtellez 07/29/02 - add gui jars
# jtellez 07/24/02 - jtellez_vipca
# jtellez 7/24/02 - creation
#
#!/bin/sh


# Properties to pass directly to java
if [ "X$SRVM_PROPERTY_DEFS" = "X" ]
then
SRVM_PROPERTY_DEFS=""
fi


# Check for tracing
if [ "X$SRVM_TRACE" != "X" ]
then
SRVM_PROPERTY_DEFS="$SRVM_PROPERTY_DEFS -DTRACING.ENABLED=true -DTRACING.LEVEL=2"
fi


# External Directory Variables set by the Installer
JREDIR=/u01/crs/oracle/product/10.2.0/crs/jdk/jre/
ORACLE_HOME=/u01/crs/oracle/product/10.2.0/crs
export ORACLE_HOME;
EMBASE_FILE=oemlt-10_1_0.jar


# GUI jars
EWTJAR=$JLIBDIR/$EWT_FILE
JEWTJAR=$JLIBDIR/$JEWT_FILE
ICEJAR=$JLIBDIR/$ICE_BROWSER5_FILE
EMBASEJAR=$JLIBDIR/$EMBASE_FILE
SHAREJAR=$JLIBDIR/$SHARE_FILE
HELPJAR=$JLIBDIR/$HELP_FILE
GUIJARS=$EWTJAR:$JEWTJAR:$SHAREJAR:$EMBASEJAR:$HELPJAR:$ICEJAR


# Set Classpath for Net Configuration Assistant
CLASSPATH=$JREJAR:$JRECLASSES:$OPSMJAR:$SRVMHASJAR:$VIPCAJAR:$GUIJARS


#Used for specifying any platforms specific Java options
JRE_OPTIONS=""


# Set the shared library path for JNI shared libraries
# A few platforms use an environment variable other than LD_LIBRARY_PATH
PLATFORM=`uname`
case $PLATFORM in
HP-UX) SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/srvm/lib32:$SHLIB_PATH
export SHLIB_PATH
;;
AIX) LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/srvm/lib32:$LIBPATH
export LIBPATH
;;
Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH


#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
#End workaround
;;
SunOS) MACH_HARDWARE=`/bin/uname -i`
    case $MACH_HARDWARE in
    i86pc)
        LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
        export LD_LIBRARY_PATH
        ;;
    *)
        LD_LIBRARY_PATH_64=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH_64
        export LD_LIBRARY_PATH_64
        JRE_OPTIONS="-d64"
        ;;
    esac
    ;;
OSF1) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
    ;;
Darwin) DYLD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$DYLD_LIBRARY_PATH
    export DYLD_LIBRARY_PATH
    ;;
*) if [ -d $ORACLE_HOME/lib32 ];
    then
        LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/srvm/lib32:$LD_LIBRARY_PATH
    else
        LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
    fi
    export LD_LIBRARY_PATH
    ;;
esac

unset LD_ASSUME_KERNEL        # <-- the line to add

ARGUMENTS=""
NUMBER_OF_ARGUMENTS=$#
if [ $NUMBER_OF_ARGUMENTS -gt 0 ]; then
    ARGUMENTS=$*
fi

# Run Vipca
exec $JRE $JRE_OPTIONS $SRVM_PROPERTY_DEFS -classpath $CLASSPATH oracle.ops.vipca.VipCA -orahome $ORACLE_HOME $ARGUMENTS
"vipca" 167L, 5034C written


1.2 Edit the srvctl file and add the unset LD_ASSUME_KERNEL line (marked below) after the export LD_ASSUME_KERNEL line.


[root@rh2 bin]# cp srvctl srvctl.bak

[root@rh2 bin]# vi srvctl

 

#!/bin/sh

#

# $Header: srvctl.sbs 29-nov-2004.11:56:24 rxkumar Exp $

#

# srvctl

#

# Copyright (c) 2000, 2004, Oracle. All rights reserved.

#

# NAME

# srvctl - Oracle Server Control Utility

#

# DESCRIPTION

# Oracle Server Control Utility can be used to administer a RAC database,

# i.e., to modify the configuration information of a RAC

# database server as well as to do start/stop/status operations on the

# instances of the server.

#

# MODIFIED (MM/DD/YY)

# rxkumar 11/29/04 - fix bug4024708

# dliu 11/18/04 - replace OH

# khsingh 10/07/04 - add workaround for bug (3937317)

# khsingh 09/27/04 - update case statement (3914991)

# gdyoung 09/17/04 - ;;

# gdyoung 08/20/04 - ple/st script merging

# dliu 08/04/04 - get them work on linux

# dliu 11/20/03 - support for trace

# dliu 11/12/03 - unset ORA_CRSDEBUG

# bhamadan 09/18/03 - replacing s_jre131Location with s_jreLocation

# khsingh 06/25/03 - remove policy file

# rxkumar 06/03/03 - add srvmasm.jar

# rdasari 06/02/03 - set LD_LIBRARY_PATH appropriately for 32 and 64 bit solaris platforms

# dliu 02/21/03 - add i18n.jar

# dliu 11/13/02 - use ORA_CRS_UI_FMT to turn on output capture

# dliu 10/17/02 - turn on output capture

# jtellez 10/04/02 - make policy ==

# surchatt 09/06/02 - puttint policy file location

# rdasari 08/07/02 - use java instead of jre

# rdasari 08/01/02 - use jdk131

# jtellez 07/26/02 - add srvmhas.jar to classpath

# rdasari 05/09/01 - changing the header information

# rdasari 03/22/01 - changing to ops to srv.

# dliu 03/02/01 - use "$@" for argument list. this is critical for correct interpretation of arguments with spaces in them..

# dliu 02/26/01 - fix bug #1656127: SHLIB_PATH change.

# dliu 02/23/01 - replace $ORACLE_HOME in classpath with an install variable..

# jcreight 11/08/00 - define OPSMJAR, not OPSJAR

"srvctl" 171L, 5554C

        JRE_OPTIONS="-d64"
        ;;
    esac
    ;;
OSF1) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
    ;;
Darwin)
    DYLD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$DYLD_LIBRARY_PATH
    export DYLD_LIBRARY_PATH
    ;;
*) if [ -d $ORACLE_HOME/lib32 ];
    then
        LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/srvm/lib32:$LD_LIBRARY_PATH
    else
        LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
    fi
    export LD_LIBRARY_PATH
    ;;
esac

# turn off crs debug flag that would otherwise interfere with crs profile
# modification
ORA_CRSDEBUG=0
export ORA_CRSDEBUG

# environment variable to turn on trace: set SRVM_TRACE to turn it on.
if [ "X$SRVM_TRACE" != "X" ]
then
    TRACE="-DTRACING.ENABLED=true -DTRACING.LEVEL=2"
else
    TRACE=
fi

if [ "X$SRVM_TRACE" != "X" ]
then
    echo $JRE $JRE_OPTIONS -classpath $CLASSPATH $TRACE oracle.ops.opsctl.OPSCTLDriver "$@"
fi

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL        # <-- the line to add

# Run ops control utility
"srvctl" 173L, 5578C written

 

 

2. On any one node, use oifcfg to configure the public and cluster interconnect networks

[root@rh2 bin]# ./oifcfg setif -global eth0/192.168.6.0:public

[root@rh2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

[root@rh2 bin]# ./oifcfg getif

eth0  192.168.6.0  global  public

eth1  10.10.10.0  global  cluster_interconnect

-- Note that the last field here ends in 0; it denotes a subnet, not a host address. Once set on one node, the other nodes can see it too.

[root@rh1 bin]# ./oifcfg getif

eth0  192.168.6.0  global  public

eth1  10.10.10.0  global  cluster_interconnect



At this point there is no need to go back and rerun root.sh. Instead, run the vipca command manually to register the required node applications for both nodes. It does not matter which node you run it on; here I ran vipca as root on node 2:




Fill in each node's VIP alias and IP address in the blank fields (in fact you only need to type node 1's VIP alias and click another field, and the rest is filled in automatically), then click Next.








Once this part finishes you can click OK to exit. If the manual vipca run succeeds, node 2's root.sh has effectively completed its job as well; the next step is to return to node 1 and carry on with the remaining installer steps:

Click OK.

The post-installation verification begins.


A small side note: on node 1 I carelessly ran orainstRoot.sh as the oracle user. Although I later reran it as root, the third of the final checks still failed because /etc/oraInst.loc had the wrong permissions and could not be read. After correcting the permissions, Retry succeeded.

 

[root@rh1 etc]# ls -l |grep -i ora

drwxr-xr-x  3 root oinstall   4096 May 18 12:56 oracle

-rwxr-x---  1 root root         63 May 18 12:56 oraInst.loc

[root@rh1 etc]# chmod 644 /etc/oraInst.loc

[root@rh1 etc]# ls -l |grep -i ora

drwxr-xr-x  3 root oinstall   4096 May 18 12:56 oracle

-rw-r--r--  1 root root         63 May 18 12:56 oraInst.loc


Finally, adjust the root user's environment variables:
vi .bash_profile
Add $ORA_CRS_HOME/bin to PATH.
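A minimal sketch of the change, assuming the CRS home created earlier at /u01/crs_1:

# appended to /root/.bash_profile
export ORA_CRS_HOME=/u01/crs_1
export PATH=$PATH:$ORA_CRS_HOME/bin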

Check with crs_stat -t.

That completes the Clusterware installation!



11. Install the Oracle 10gR2 database software

1. Check the prerequisite packages. Oracle 10g requires the following packages:

binutils-2.15.92.0.2-10.EL4

compat-db-4.1.25-9

control-center-2.8.0-12

gcc-3.4.3-9.EL4

gcc-c++-3.4.3-9.EL4

glibc-2.3.4-2

glibc-common-2.3.4-2

gnome-libs-1.4.1.2.90-44.1

libstdc++-3.4.3-9.EL4

libstdc++-devel-3.4.3-9.EL4

make-3.80-5

pdksh-5.2.14-30

sysstat-5.0.5-1

xscreensaver-4.18-5.rhel4.2

libaio-0.3.96

 

To see which versions of these packages are installed on your system, run the following command:

rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common gnome-libs libstdc++ libstdc++-devel make pdksh sysstat xscreensaver libaio openmotif21



Set up a yum repository and install any missing packages with yum (yum install -y <package-name>):

[root@rh1 u01]# cd /etc/yum.repos.d/

[root@rh1 yum.repos.d]# ls

rhel-debuginfo.repo

[root@rh1 yum.repos.d]# cp rhel-debuginfo.repo yum.repo

[root@rh1 yum.repos.d]# vi yum.repo

[Base]

name=Red Hat Enterprise Linux

baseurl=file:///media/Server

enabled=1

gpgcheck=0

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
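The baseurl above points at the mounted RHEL DVD, so the disc or ISO has to be mounted there first; a hedged sketch (adjust the ISO path to wherever you keep it):

[root@rh1 ~]# mount -o loop /soft/rhel-server-5.5-x86_64-dvd.iso /media    # or: mount /dev/cdrom /media
[root@rh1 ~]# ls /media/Server | head                                      # the RPMs yum will install from
[root@rh1 ~]# yum install -y compat-db sysstat libaio                      # example; install whichever packages rpm -q reported missing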







Tick the small checkboxes next to the "warning" and "not executed" items, then click Next.



Install the software only (no database yet).




Run the script as root on both nodes, finishing one node completely before starting the next.

[root@rh1 database]# /u01/app/oracle/product/10.2.0/db_1/root.sh

Running Oracle10 root.sh script...

 

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.





12. Configure the listener

Once the database software is installed, the next step is to configure a listener on both nodes. The listener plays a very important role in Oracle RAC; if it is not configured correctly, creating the instances in RAC mode later will run into problems. Log in to node 1 as oracle and run netca to open the network configuration assistant and walk through the whole listener configuration.

Run netca as oracle, open the network configuration screens, choose the Cluster configuration option, and click Next.














Run crs_stat -t and you can see that the two newly configured listeners are already started.

13. Create the ASM instance

1. Run the DBCA command.


2. Choose Configure Automatic Storage Management to create the ASM instance.




3. Select all nodes.



4. Enter the password. A RAC spfile must be kept on shared storage. For the parameter file we choose the first option, the initialization parameter file; it could also be placed on one of the raw devices we created.

Here the password is set to oracle.









5. Once the ASM instance has been created, use Create New to create the ASM disk groups. We use ASM_DATA1 to create a DATA group and ASM_DATA2 to create a FLASH_RECOVERY_AREA group.




For Redundancy, External is normally chosen, which means ASM adds no redundancy of its own. Normal means mirroring and requires at least two failure groups; High means triple mirroring and requires three failure groups.








Continue with Create.





6. After creation you can see the group state is MOUNTED; an ASM disk group must be mounted before it can be used.

 



14. Create the database with DBCA

dbca















Choose ASM for storage and select the DATA and RCY (recovery) groups we just created.












Here you can manually add or remove specific tablespaces, control files, redo log files, and so on; I keep the system defaults and click Next.



Change the character set.





Click Next through the remaining screens.

The installation starts.





When it is done, check the cluster status:

[root@rh1 database]# crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.prod.db    application    ONLINE    ONLINE    rh1        

ora....d1.inst application    ONLINE    ONLINE    rh1        

ora....d2.inst application    ONLINE    ONLINE    rh2        

ora....SM1.asm application    ONLINE    ONLINE    rh1        

ora....H1.lsnr application    ONLINE    ONLINE    rh1        

ora.rh1.gsd    application    ONLINE    ONLINE    rh1        

ora.rh1.ons    application    ONLINE    ONLINE    rh1        

ora.rh1.vip    application    ONLINE    ONLINE    rh1        

ora....SM2.asm application    ONLINE    ONLINE    rh2        

ora....H2.lsnr application    ONLINE    ONLINE    rh2        

ora.rh2.gsd    application    ONLINE    ONLINE    rh2        

ora.rh2.ons    application    ONLINE    ONLINE    rh2        

ora.rh2.vip    application    ONLINE    ONLINE    rh2    

 

[root@rh2 bin]# ./crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.prod.db    application    ONLINE    ONLINE    rh1        

ora....d1.inst application    ONLINE    ONLINE    rh1        

ora....d2.inst application    ONLINE    ONLINE    rh2        

ora....SM1.asm application    ONLINE    ONLINE    rh1        

ora....H1.lsnr application    ONLINE    ONLINE    rh1        

ora.rh1.gsd    application    ONLINE    ONLINE    rh1        

ora.rh1.ons    application    ONLINE    ONLINE    rh1        

ora.rh1.vip    application    ONLINE    ONLINE    rh1        

ora....SM2.asm application    ONLINE    ONLINE    rh2        

ora....H2.lsnr application    ONLINE    ONLINE    rh2        

ora.rh2.gsd    application    ONLINE    ONLINE    rh2        

ora.rh2.ons    application    ONLINE    ONLINE    rh2        

ora.rh2.vip    application    ONLINE    ONLINE    rh2     

 

 

[oracle@rh1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.1.0 - Production on Sun May 18 16:22:57 2014

 

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP and Data Mining options

 

SQL> select status from gv$instance;

 

STATUS

------------

OPEN

OPEN


Source: ITPUB blog, link: http://blog.itpub.net/26736162/viewspace-1627708/. Please cite the source if you repost this.
