Oracle RAC Cluster Database Setup
Commands and configuration for preparing the system environment and building a RAC cluster database:
The most important part of building RAC is the preparatory work of setting up the system environment.
As the saying goes, sharpening the axe does not delay the woodcutting, and nowhere does that apply
better than to RAC preparation. Repeated test builds show that when the environment is configured
correctly up front, the later installation of the Grid Infrastructure (GI) software, the Oracle
database software, and the database itself proceeds without errors. Even a small slip during
environment configuration disrupts the installation, adds troubleshooting work to find and fix the
errors, and can prevent the GI or Oracle software from installing at all, let alone building the RAC
database, wasting the entire preparatory effort.
Virtual machine: VM VirtualBox
Operating system: Red Hat Enterprise Linux 5.5, 32-bit
Node       Public IP        VIP              Private IP
node1      192.168.56.11    192.168.56.31    10.10.10.11
node2      192.168.56.12    192.168.56.32    10.10.10.12
rac_scan   192.168.56.25
----Host network, memory and storage configuration:
--Node 1:
[root@node1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1
[root@node1 ~]#
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.11
NETMASK=255.255.255.0
GATEWAY=192.168.56.1
[root@node1 ~]#
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.10.10.11
NETMASK=255.255.255.0
[root@node1 ~]#
--Node 2:
[root@node2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2
[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.12
NETMASK=255.255.255.0
GATEWAY=192.168.56.1
[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.10.10.12
NETMASK=255.255.255.0
[root@node2 ~]#
--Configure the hosts file:
[root@node1 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost
::1 localhost6.localdomain6 localhost6
192.168.56.11 node1
192.168.56.31 node1-vip
10.10.10.11 node1-priv
192.168.56.12 node2
192.168.56.32 node2-vip
10.10.10.12 node2-priv
192.168.56.25 rac_scan
# Configure the same entries on both nodes.
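With the hosts file in place on both nodes, a quick sanity sketch: ping the public and private names, and check that the VIP and SCAN names at least resolve (they will not answer pings until Grid Infrastructure brings them up).
# run on each node: public and private names should answer now
for h in node1 node1-priv node2 node2-priv; do
    ping -c 1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"
done
# VIP and SCAN names only need to resolve at this stage
getent hosts node1-vip node2-vip rac_scan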
----Create groups and users:
--Remove any pre-existing oracle user and groups:
[root@node1 ~]# cd /var/spool/mail
[root@node1 mail]# ls
oracle rpc tom
[root@node1 mail]# rm -rf oracle
[root@node1 mail]# cd /home
[root@node1 home]# ls
oracle tom
[root@node1 home]# rm -rf oracle/
[root@node1 home]# cd
[root@node1 ~]#
[root@node1 ~]# userdel oracle
[root@node1 ~]# groupdel dba
[root@node1 ~]# groupdel oinstall
[root@node1 ~]# groupdel oper
groupdel: group oper does not exist
[root@node1 ~]#
# Remove the old user and groups the same way on both nodes.
--Create the users and groups:
[root@node1 ~]#
[root@node1 ~]# groupadd -g 200 oinstall
[root@node1 ~]# groupadd -g 201 dba
[root@node1 ~]# groupadd -g 202 oper
[root@node1 ~]# groupadd -g 203 asmadmin
[root@node1 ~]# groupadd -g 204 asmoper
[root@node1 ~]# groupadd -g 205 asmdba
[root@node1 ~]# useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
[root@node1 ~]# useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
[root@node1 ~]#
[root@node2 ~]# cd /var/spool/mail
[root@node2 mail]# rm -rf oracle
[root@node2 mail]# cd /home
[root@node2 home]# rm -rf oracle/
[root@node2 home]# cd
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# userdel oracle
[root@node2 ~]# groupdel dba
[root@node2 ~]# groupdel oinstall
[root@node2 ~]# groupdel oper
groupdel: group oper does not exist
[root@node2 ~]#
[root@node2 ~]# groupadd -g 200 oinstall
[root@node2 ~]# groupadd -g 201 dba
[root@node2 ~]# groupadd -g 202 oper
[root@node2 ~]# groupadd -g 203 asmadmin
[root@node2 ~]# groupadd -g 204 asmoper
[root@node2 ~]# groupadd -g 205 asmdba
[root@node2 ~]# useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
[root@node2 ~]# useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
[root@node2 ~]#
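Since mismatched UIDs or group memberships between nodes cause installer failures later, it is worth comparing the accounts now. Expected output, given the useradd commands above:
# run on both nodes; the output must match exactly across nodes
id oracle
# uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),205(asmdba)
id grid
# uid=201(grid) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),203(asmadmin),204(asmoper),205(asmdba)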
----Create the installation directories and set ownership and permissions:
--Node 1:
[root@node1 ~]# pwd
/root
[root@node1 ~]# mkdir -p /u01/app/oraInventory
[root@node1 ~]# chown -R grid:oinstall /u01/app/oraInventory/
[root@node1 ~]# chmod -R 775 /u01/app/oraInventory/
[root@node1 ~]# mkdir -p /u01/11.2.0/grid
[root@node1 ~]# chown -R grid:oinstall /u01/11.2.0/grid/
[root@node1 ~]# chmod -R 775 /u01/11.2.0/grid/
[root@node1 ~]# mkdir -p /u01/app/oracle
[root@node1 ~]# mkdir -p /u01/app/oracle/cfgtoollogs
[root@node1 ~]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@node1 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@node1 ~]# chmod -R 775 /u01/app/oracle
[root@node1 ~]#
----------------------------
--Node 2:
[root@node2 ~]# pwd
/root
[root@node2 ~]# mkdir -p /u01/app/oraInventory
[root@node2 ~]# chown -R grid:oinstall /u01/app/oraInventory/
[root@node2 ~]# chmod -R 775 /u01/app/oraInventory/
[root@node2 ~]# mkdir -p /u01/11.2.0/grid
[root@node2 ~]# chown -R grid:oinstall /u01/11.2.0/grid/
[root@node2 ~]# chmod -R 775 /u01/11.2.0/grid/
[root@node2 ~]# mkdir -p /u01/app/oracle
[root@node2 ~]# mkdir -p /u01/app/oracle/cfgtoollogs
[root@node2 ~]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@node2 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@node2 ~]# chmod -R 775 /u01/app/oracle
[root@node2 ~]#
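A quick check that ownership and permissions came out as intended on each node:
# expected: oraInventory and the grid home owned by grid:oinstall,
# the oracle tree by oracle:oinstall, all with mode 775
ls -ld /u01/app/oraInventory /u01/11.2.0/grid /u01/app/oracle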
----Set passwords for the oracle and grid users:
[root@node1 ~]#
[root@node1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]#
------------------------
[root@node2 ~]#
[root@node2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node2 ~]# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node2 ~]#
# Set them the same way on both nodes.
----Modify kernel parameters:
--Append the following to /etc/sysctl.conf:
[root@node1 ~]# vi /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
... ...
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
--Apply the kernel parameter changes:
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
[root@node1 ~]#
# Same on both nodes.
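Note that sysctl applies duplicate keys in file order, so for the shmmax/shmall pairs that appear twice in the output above, the appended values win. A spot-check sketch that the required values are actually active:
# run on both nodes after sysctl -p
sysctl -n kernel.shmmax kernel.shmall kernel.shmmni fs.file-max fs.aio-max-nr
sysctl -n kernel.sem net.ipv4.ip_local_port_range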
--Append resource limits to /etc/security/limits.conf:
[root@node1 ~]# vi /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
... ...
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
--Enable the PAM limits module in /etc/pam.d/login:
[root@node1 ~]# vi /etc/pam.d/login
session required /lib/security/pam_limits.so
# Same on both nodes.
--Append a ulimit block to /etc/profile:
[root@node1 ~]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
# Same on both nodes.
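To confirm the whole limits stack (limits.conf, pam_limits, and the /etc/profile block) works end to end, log in freshly as oracle and as grid on each node and check:
ulimit -u    # max user processes: expect 16384 (raised by the /etc/profile block)
ulimit -n    # open files: expect 65536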
----Disable the system NTP service so that Oracle's own Cluster Time Synchronization Service (CTSS) is used instead:
--Stop and disable the relevant services:
[root@node1 ~]#
[root@node1 ~]# chkconfig ntpd off
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 ~]# chkconfig sendmail off
[root@node1 ~]#
# Same on both nodes.
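The GI installer checks both the service and the config file, so confirm ntpd is fully off; a sketch (stop the daemon too, if it was running):
service ntpd stop       # stops a running daemon, if any
chkconfig --list ntpd   # every runlevel should show off
ls /etc/ntp.conf        # should report: No such file or directory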
--Verify that the two nodes' clocks differ by no more than about 20 seconds:
[root@node1 ~]#
[root@node1 ~]# date
Fri Oct 28 12:23:11 CST 2016
[root@node1 ~]#
[root@node2 ~]#
[root@node2 ~]# date
Fri Oct 28 12:23:20 CST 2016
[root@node2 ~]#
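Reading two date outputs works, but the drift can also be computed directly from node1; a sketch (needs ssh access to node2, which is set up below):
# prints the node2-minus-node1 clock offset in seconds
echo $(( $(ssh node2 date +%s) - $(date +%s) ))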
----Switch to the oracle and grid users and set their environment variables (all nodes):
# node1: ORACLE_SID=prod1 (oracle), ORACLE_SID=+ASM1 (grid)
# node2: ORACLE_SID=prod2 (oracle), ORACLE_SID=+ASM2 (grid)
---oracle user:
--Node 1:
[oracle@node1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=prod1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[oracle@node1 ~]$ . .bash_profile
[oracle@node1 ~]$
--Node 2:
[oracle@node2 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=prod2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[oracle@node2 ~]$ . .bash_profile
[oracle@node2 ~]$
---grid user:
--Node 1:
[grid@node1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/11.2.0/grid
export GRID_HOME=/u01/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[grid@node1 ~]$ . .bash_profile
[grid@node1 ~]$
--Node 2:
[grid@node2 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/11.2.0/grid
export GRID_HOME=/u01/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[grid@node2 ~]$ . .bash_profile
[grid@node2 ~]$
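After sourcing the profiles, verify each account sees the intended SID and home; a sketch for node 1 (node 2 should show prod2 and +ASM2):
# as oracle: expect prod1 /u01/app/oracle/product/11.2.0/db_1
echo $ORACLE_SID $ORACLE_HOME
# as grid: expect +ASM1 /u01/11.2.0/grid
echo $ORACLE_SID $ORACLE_HOME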
----Configure shared storage:
---Managed through ASM:
1) OCR disk: stores the CRS resource configuration
2) Voting disk: the quorum disk, records node membership state
3) Data disks: hold datafiles, controlfiles, redo log files, the spfile, etc.
4) Recovery area: holds flashback database logs, archive logs, RMAN backups, etc.
--Check the disk layout and sizes:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
[root@node1 ~]#
--Partition the shared disk:
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 3263.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
--Create the partitions:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3263, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (124-3263, default 124):
Using default value 124
Last cylinder or +size or +sizeM or +sizeK (124-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (247-3263, default 247):
Using default value 247
Last cylinder or +size or +sizeM or +sizeK (247-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Selected partition 4
First cylinder (370-3263, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-3263, default 3263):
Using default value 3263
Command (m for help): n
First cylinder (370-3263, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-3263, default 3263): +7G
Command (m for help): n
First cylinder (1222-3263, default 1222):
Using default value 1222
Last cylinder or +size or +sizeM or +sizeK (1222-3263, default 3263): +7G
Command (m for help): n
First cylinder (2074-3263, default 2074):
Using default value 2074
Last cylinder or +size or +sizeM or +sizeK (2074-3263, default 3263): +3G
Command (m for help): n
First cylinder (2440-3263, default 2440):
Using default value 2440
Last cylinder or +size or +sizeM or +sizeK (2440-3263, default 3263): +3G
Command (m for help): n
First cylinder (2806-3263, default 2806):
Using default value 2806
Last cylinder or +size or +sizeM or +sizeK (2806-3263, default 3263): +1G
Command (m for help): n
First cylinder (2929-3263, default 2929): +1G
Value out of range.
First cylinder (2929-3263, default 2929):
Using default value 2929
Last cylinder or +size or +sizeM or +sizeK (2929-3263, default 3263): +1G
Command (m for help): n
First cylinder (3052-3263, default 3052):
Using default value 3052
Last cylinder or +size or +sizeM or +sizeK (3052-3263, default 3263):
Using default value 3263
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#
# Partitioning only needs to be done on node 1.
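If node 2 was already running while node 1 partitioned the disk, its kernel may still hold the old, empty partition table. A sketch of forcing a re-read on node 2 without rebooting, assuming the shared disk appears there as /dev/sdb as well:
# on node2: re-read the partition table from the shared disk
partprobe /dev/sdb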
--Check the resulting partition table:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 123 987966 83 Linux
/dev/sdb2 124 246 987997+ 83 Linux
/dev/sdb3 247 369 987997+ 83 Linux
/dev/sdb4 370 3263 23246055 5 Extended
/dev/sdb5 370 1221 6843658+ 83 Linux
/dev/sdb6 1222 2073 6843658+ 83 Linux
/dev/sdb7 2074 2439 2939863+ 83 Linux
/dev/sdb8 2440 2805 2939863+ 83 Linux
/dev/sdb9 2806 2928 987966 83 Linux
/dev/sdb10 2929 3051 987966 83 Linux
/dev/sdb11 3052 3263 1702858+ 83 Linux
[root@node1 ~]#
--Check the disk on node2; because the storage is shared, node2 already sees the new partitions:
[root@node2 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 123 987966 83 Linux
/dev/sdb2 124 246 987997+ 83 Linux
/dev/sdb3 247 369 987997+ 83 Linux
/dev/sdb4 370 3263 23246055 5 Extended
/dev/sdb5 370 1221 6843658+ 83 Linux
/dev/sdb6 1222 2073 6843658+ 83 Linux
/dev/sdb7 2074 2439 2939863+ 83 Linux
/dev/sdb8 2440 2805 2939863+ 83 Linux
/dev/sdb9 2806 2928 987966 83 Linux
/dev/sdb10 2929 3051 987966 83 Linux
/dev/sdb11 3052 3263 1702858+ 83 Linux
[root@node2 ~]#
---Install the ASMLib software:
--Create an asm directory and upload the rpm packages into it:
[root@node1 ~]#
[root@node1 ~]# mkdir asm
[root@node1 ~]# ls
anaconda-ks.cfg asm Desktop install.log install.log.syslog
[root@node1 ~]#
[root@node1 ~]# cd asm
[root@node1 asm]# rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
(three rpm packages transferred, 0 errors)
# Upload complete.
--Install the rpm packages:
[root@node1 asm]#
[root@node1 asm]# ls
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm   # this driver rpm must match the running kernel version; check it with: uname -a
oracleasm-support-2.1.8-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
[root@node1 asm]#
[root@node1 asm]#
[root@node1 asm]# rpm -ivh *
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-194.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@node1 asm]#
# Installation finished; same on both nodes.
---Configure oracleasm (same on both nodes):
[root@node1 asm]#
[root@node1 asm]# service oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 asm]#
[root@node1 asm]#
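Before creating disks, it is worth confirming the driver configuration was written and loaded:
# settings written by 'service oracleasm configure'
cat /etc/sysconfig/oracleasm
# confirms the kernel driver is loaded and its filesystem is mounted
service oracleasm status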
---Create the Oracle ASM disks:
--Node 1: # createdisk runs on one node only (node 1 here); node 2 just rescans, as shown below.
[root@node1 asm]#
[root@node1 asm]# service oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
[root@node1 asm]# service oracleasm createdisk OCR_VOTE1 /dev/sdb1
Marking disk "OCR_VOTE1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE2 /dev/sdb2
Marking disk "OCR_VOTE2" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE3 /dev/sdb3
Marking disk "OCR_VOTE3" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA1 /dev/sdb5
Marking disk "ASM_DATA1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA2 /dev/sdb6
Marking disk "ASM_DATA2" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY1 /dev/sdb7
Marking disk "ASM_RCY1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY2 /dev/sdb8
Marking disk "ASM_RCY2" as an ASM disk: [ OK ]
[root@node1 asm]#
[root@node1 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3
[root@node1 asm]#
--Node 2:
[root@node2 asm]#
[root@node2 asm]# service oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 asm]#
[root@node2 asm]#
[root@node2 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3
[root@node2 asm]#
# The ASM disks created on node 1 are now visible on node 2.
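If a label were missing on the second node, querydisk helps trace a label back to its device; a sketch:
# confirm a label exists and see which block device carries it
service oracleasm querydisk OCR_VOTE1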
----Set up SSH user equivalence between the hosts:
--Establish trust between the oracle and grid users of the two nodes (via SSH public/private keys).
--Generate key pairs (oracle and grid users on all nodes):
---oracle user:
--Node 1, oracle user:
[root@node1 ~]# su - oracle
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh-keygen -t rsa   # generate the RSA key pair; leave the passphrase empty
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
09:7d:4d:26:a5:3c:40:24:55:bd:25:5f:cd:e3:5f:73 oracle@node1
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh-keygen -t dsa   # generate the DSA key pair; leave the passphrase empty
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
32:3f:5e:7a:fc:19:78:cf:39:24:89:6e:80:dd:7a:65 oracle@node1
[oracle@node1 ~]$
[oracle@node1 ~]$ ls .ssh
id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node1 ~]$
--Node 2, oracle user:
[root@node2 ~]# su - oracle
[oracle@node2 ~]$
[oracle@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
22:28:28:eb:b0:fa:43:00:71:f7:ca:a2:53:ed:38:ca oracle@node2
[oracle@node2 ~]$
[oracle@node2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
04:3c:bf:64:db:e3:db:9b:19:90:45:d4:06:dd:71:30 oracle@node2
[oracle@node2 ~]$
[oracle@node2 ~]$ ls .ssh
id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node2 ~]$
---Merge the public keys into authorized_keys and copy it to node 2:
[oracle@node1 ~]$ cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[oracle@node1 ~]$ cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat .ssh/id_rsa.pub >>.ssh/authorized_keys
The authenticity of host 'node2 (192.168.56.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.56.12' (RSA) to the list of known hosts.
oracle@node2's password:
[oracle@node1 ~]$ ssh node2 cat .ssh/id_dsa.pub >>.ssh/authorized_keys
oracle@node2's password:
[oracle@node1 ~]$ scp .ssh/authorized_keys node2:~/.ssh
oracle@node2's password:
authorized_keys 100% 1992 2.0KB/s 00:00
[oracle@node1 ~]$
--Inspect the generated key files:
--Node 1:
[oracle@node1 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node1 ~]$
--Node 2:
[oracle@node2 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node2 ~]$
---Verify the equivalence (the first connection to each name answers the host-key prompt; the repeat must return the date without any prompt):
--Node 1:
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh node2 date
Fri Oct 28 13:16:54 CST 2016
[oracle@node1 ~]$ ssh node2-priv date
The authenticity of host 'node2-priv (10.10.10.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2-priv,10.10.10.12' (RSA) to the list of known hosts.
Fri Oct 28 13:17:05 CST 2016
[oracle@node1 ~]$ ssh node2-priv date
Fri Oct 28 13:17:10 CST 2016
[oracle@node1 ~]$ ssh node1 date
The authenticity of host 'node1 (192.168.56.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.56.11' (RSA) to the list of known hosts.
Fri Oct 28 13:17:38 CST 2016
[oracle@node1 ~]$ ssh node1 date
Fri Oct 28 13:17:42 CST 2016
[oracle@node1 ~]$ ssh node1-priv date
The authenticity of host 'node1-priv (10.10.10.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-priv,10.10.10.11' (RSA) to the list of known hosts.
Fri Oct 28 13:17:54 CST 2016
[oracle@node1 ~]$ ssh node1-priv date
Fri Oct 28 13:17:57 CST 2016
[oracle@node1 ~]$
[oracle@node1 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node1 ~]$
--Node 2:
[oracle@node2 ~]$ ssh node1 date
The authenticity of host 'node1 (192.168.56.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.56.11' (RSA) to the list of known hosts.
Fri Oct 28 13:19:23 CST 2016
[oracle@node2 ~]$ ssh node1 date
Fri Oct 28 13:19:26 CST 2016
[oracle@node2 ~]$ ssh node1-priv date
The authenticity of host 'node1-priv (10.10.10.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-priv,10.10.10.11' (RSA) to the list of known hosts.
Fri Oct 28 13:19:51 CST 2016
[oracle@node2 ~]$ ssh node1-priv date
Fri Oct 28 13:19:54 CST 2016
[oracle@node2 ~]$ ssh node2 date
The authenticity of host 'node2 (192.168.56.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.56.12' (RSA) to the list of known hosts.
Fri Oct 28 13:20:09 CST 2016
[oracle@node2 ~]$ ssh node2 date
Fri Oct 28 13:20:12 CST 2016
[oracle@node2 ~]$ ssh node2-priv date
The authenticity of host 'node2-priv (10.10.10.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2-priv,10.10.10.12' (RSA) to the list of known hosts.
Fri Oct 28 13:20:23 CST 2016
[oracle@node2 ~]$ ssh node2-priv date
Fri Oct 28 13:20:26 CST 2016
[oracle@node2 ~]$
[oracle@node2 ~]$
[oracle@node2 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node2 ~]$
# SSH equivalence is now in place between the oracle users on the two nodes. Repeat the same steps as the grid user on both nodes to establish grid-to-grid equivalence.
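Testing each hop by hand is tedious; a compact check, run as oracle and again as grid on each node, that fails loudly if any connection still prompts:
# BatchMode forbids interactive prompts, so missing trust shows up as an error
for h in node1 node1-priv node2 node2-priv; do
    ssh -o BatchMode=yes $h date || echo "$h: equivalence NOT working"
done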
----Configure a local yum repository pointing at the installation DVD:
[root@node1 ~]# cd /etc/yum.repos.d
[root@node1 yum.repos.d]# ls
rhel-debuginfo.repo
[root@node1 yum.repos.d]# cp rhel-debuginfo.repo yum.repo
[root@node1 yum.repos.d]# vi yum.repo
[Base]
name=Red Hat Enterprise Linux
baseurl=file:///media/Server
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# Same on both nodes.
--Attach the installation DVD to the VM (a VirtualBox GUI operation; details omitted).
--Mount the DVD:
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# mount /dev/hdc /media
mount: block device /dev/hdc is write-protected, mounting read-only
[root@node1 yum.repos.d]#
# Same on both nodes.
--Install the required packages with yum (the "failed to stat /media/RHEL_5.5 i386 DVD" messages below are harmless repo-path noise; each transaction still ends with Complete!):
[root@node1 yum.repos.d]# yum install libaio* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Base | 1.3 kB 00:00
Base/primary | 753 kB 00:00
Base 2348/2348
Setting up Install Process
Package libaio-0.3.106-5.i386 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
libaio-devel i386 0.3.106-5 Base 12 k
Transaction Summary
===============================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 12 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libaio-devel 1/1
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installed:
libaio-devel.i386 0:0.3.106-5
Complete!
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# yum install syssta* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package sysstat.i386 0:7.0.2-3.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
sysstat i386 7.0.2-3.el5 Base 170 k
Transaction Summary
===============================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 170 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : sysstat 1/1
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installed:
sysstat.i386 0:7.0.2-3.el5
Complete!
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# yum install unixO* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package unixODBC.i386 0:2.2.11-7.1 set to be updated
---> Package unixODBC-devel.i386 0:2.2.11-7.1 set to be updated
---> Package unixODBC-kde.i386 0:2.2.11-7.1 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
unixODBC i386 2.2.11-7.1 Base 832 k
unixODBC-devel i386 2.2.11-7.1 Base 737 k
unixODBC-kde i386 2.2.11-7.1 Base 558 k
Transaction Summary
===============================================================================================
Install 3 Package(s)
Upgrade 0 Package(s)
Total download size: 2.1 M
Downloading Packages:
-----------------------------------------------------------------------------------------------
Total 429 MB/s | 2.1 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : unixODBC 1/3
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installing : unixODBC-kde 2/3
Installing : unixODBC-devel 3/3
Installed:
unixODBC.i386 0:2.2.11-7.1 unixODBC-devel.i386 0:2.2.11-7.1 unixODBC-kde.i386 0:2.2.11-7.1
Complete!
--Verify the installation; there should be three unixODBC packages:
[root@node1 yum.repos.d]# rpm -qa |grep -i odbc
unixODBC-kde-2.2.11-7.1
unixODBC-devel-2.2.11-7.1
unixODBC-2.2.11-7.1
[root@node1 yum.repos.d]#
# yum installs succeeded; same on both nodes.
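Only a handful of packages are installed above; the 11gR2 install guide lists a longer prerequisite set for RHEL 5. A sketch that reports anything still missing (package names taken from the 11.2 documentation; install any stragglers from the same DVD repo):
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
         libaio-devel libgcc libstdc++ libstdc++-devel make sysstat \
         unixODBC unixODBC-devel; do
  rpm -q $p >/dev/null || echo "$p is missing"
done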
----Install the GI software:
--As the grid user, create a soft directory (the installer runs on one node only and copies the software to the other node itself):
--Upload the installation media into the soft directory:
[grid@node1 ~]$ pwd
/home/grid
[grid@node1 ~]$ mkdir soft
[grid@node1 ~]$ cd soft/
[grid@node1 soft]$ pwd
/home/grid/soft
[grid@node1 soft]$
[grid@node1 soft]$ rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
100% 957843 KB 5949 KB/s 00:02:41 0 Errors
[grid@node1 soft]$ ls
linux_11gR2_grid.zip
[grid@node1 soft]$
# Upload complete.
--Unzip the installation package:
[grid@node1 soft]$
[grid@node1 soft]$ unzip linux_11gR2_grid.zip
... ...
creating: grid/stage/properties/
inflating: grid/stage/properties/oracle.crs_Complete.properties
creating: grid/stage/sizes/
inflating: grid/stage/sizes/oracle.crs11.2.0.1.0Complete.sizes.properties
inflating: grid/stage/OuiConfigVariables.xml
inflating: grid/stage/fastcopy.xml
[grid@node1 soft]$
[grid@node1 soft]$ ls
grid linux_11gR2_grid.zip
[grid@node1 soft]$
# Unzipped successfully.
--Enter the grid directory:
[grid@node1 soft]$ cd grid/
[grid@node1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@node1 grid]$
--First run the Cluster Verification Utility against the planned GI environment:
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed
node1 passed
Verification of the hosts config file successful
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.56.12 192.168.56.0 0.0.0.0 192.168.56.1 08:00:27:FB:15:AB 1500
eth1 10.10.10.12 10.10.10.0 0.0.0.0 192.168.56.1 08:00:27:59:BC:90 1500
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.56.11 192.168.56.0 0.0.0.0 192.168.56.1 08:00:27:FB:15:AA 1500
eth1 10.10.10.11 10.0.0.0 0.0.0.0 192.168.56.1 08:00:27:E1:66:38 1500
Check: Node connectivity of subnet "192.168.56.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) node2,node1
Check: TCP connectivity of subnet "192.168.56.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:192.168.56.11 node2:192.168.56.12 passed
Result: TCP connectivity check passed for subnet "192.168.56.0"
... ...   # scan every item on both nodes for failed checks; if there are none, the environment is ready
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
node2 does not exist passed
node1 does not exist passed
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 0022 0022 passed
node1 0022 0022 passed
Result: Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
[grid@node1 grid]$
[grid@node1 grid]$
# Verification finished with no problems.
----With the pre-install checks clean, launch the graphical installer to install the GI software.
---Start an X server on the workstation (Xshell is used here), point DISPLAY at it, and launch the installer:
[grid@node1 grid]$ export DISPLAY=192.168.56.101:0.0
[grid@node1 grid]$
[grid@node1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@node1 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 80 MB. Actual 51512 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3935 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-10-28_02-02-45PM. Please wait ...
---Near the end of the GI installation, the installer prompts for two scripts to be run as root on both nodes; run orainstRoot.sh on both nodes first, then root.sh to completion on node 1 before starting it on node 2:
[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh    # node 1
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh    # node 2
[root@node1 ~]# /u01/11.2.0/grid/root.sh                # node 1
[root@node2 ~]# /u01/11.2.0/grid/root.sh                # node 2
# GI installation complete.
---Then verify the cluster resources:
[grid@node1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE node1
ora....N1.lsnr ora....er.type ONLINE ONLINE node1
ora....VOTE.dg ora....up.type ONLINE ONLINE node1
ora.asm ora.asm.type ONLINE ONLINE node1
ora.eons ora.eons.type ONLINE ONLINE node1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application OFFLINE OFFLINE
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip ora....t1.type ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application OFFLINE OFFLINE
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip ora....t1.type ONLINE ONLINE node2
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE node1
ora....ry.acfs ora....fs.type ONLINE ONLINE node1
ora.scan1.vip ora....ip.type ONLINE ONLINE node1
---Check the following four services; when all are online, proceed to installing the database software:
[grid@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@node1 ~]$
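crsctl check crs reports only the local node; two cluster-wide checks available in 11.2 complement it before moving on:
# CSS/CRS/EVM state on every node, not just the local one
crsctl check cluster -all
# confirms both nodes joined the cluster, with node numbers
olsnodes -n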
----With the cluster resources verified, install the Oracle database software (the installer runs on a single node):
---As the oracle user, create a soft directory, upload the two installation packages into it, and unzip them:
---Unzip the two database software packages:
[oracle@node1 soft]$ unzip linux_11gR2_database_1of2.zip
creating: database/stage/sizes/
extracting: database/stage/sizes/oracle.server11.2.0.1.0EE.sizes.properties
extracting: database/stage/sizes/oracle.server11.2.0.1.0SE.sizes.properties
extracting: database/stage/sizes/oracle.server11.2.0.1.0Custom.sizes.properties
inflating: database/stage/OuiConfigVariables.xml
inflating: database/stage/oracle.server.11_2_0_1_0.xml
inflating: database/stage/fastcopy.xml
[oracle@node1 soft]$
[oracle@node1 soft]$
[oracle@node1 soft]$ unzip linux_11gR2_database_2of2.zip
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup6.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup5.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup2.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup10.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup11.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup7.jar
[oracle@node1 soft]$
---After unzipping, enter the database directory:
[oracle@node1 soft]$ ls
database linux_11gR2_database_1of2.zip linux_11gR2_database_2of2.zip
[oracle@node1 soft]$
[oracle@node1 soft]$ cd database/
[oracle@node1 database]$ ls
doc install response rpm runInstaller sshsetup stage welcome.html
---Set DISPLAY and launch the graphical installer for the database software:
[oracle@node1 database]$ export DISPLAY=192.168.56.101:0.0
[oracle@node1 database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 80 MB. Actual 43893 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3791 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-10-29_07-40-13AM. Please wait ...
... ...
# Oracle database software installed successfully.
---Run the following script as root on each node when the installer prompts:
[root@node1 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
# Script finished.
----Use ASMCA to create the two remaining ASM disk groups (from the ASM_DATA and ASM_RCY disks; the OCR/voting group was created during the GI installation):
[grid@node1 ~]$ export DISPLAY=192.168.56.101:0.0
[grid@node1 ~]$
[grid@node1 ~]$ asmca
# Disk groups created.
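To confirm the new disk groups are mounted, as the grid user:
# lists all mounted disk groups with their state, redundancy and free space
asmcmd lsdg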
----Set DISPLAY in the terminal session (SecureCRT here) and launch DBCA to create the cluster database:
[oracle@node1 ~]$ export DISPLAY=192.168.56.101:0.0
[oracle@node1 ~]$
[oracle@node1 ~]$ dbca
... ...
# Cluster database created.
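A closing sanity sketch, assuming the database was created with the name prod (inferred from the prod1/prod2 SIDs set earlier; substitute the actual name chosen in DBCA):
# 'prod' is an assumed database name; both instances should report as running
srvctl status database -d prod
# full resource picture, now including the database resource
crs_stat -t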
搭建RAC最重要的是前期工作,就是配備系統搭建環境。
俗話有說:磨刀不誤砍柴工。這句話用在RAC的前期工作
最恰當不過了。經過多次的搭建測試表明:只要前期的
搭建系統環境配置無誤,後面安裝GI軟體,oracle軟體以及建庫
就不會遇到各種報錯。若中間環境配置的過程中稍有操作誤差,
就會影響RAC叢集資料庫的安裝工作,增加安裝的工作量,要排查
錯誤,解決錯誤,甚至安裝不到GI軟體,或者安裝不到oracle軟體,
這樣,就別說搭建RAC資料庫了,也使前期的配置工作盡廢。
使用虛擬機器VM-VirtualBox
作業系統:redhat 5.5 32位
節點 ip ip-vip ip-priv
Node1 192.168.56.11 192.168.56.31 192.168.100.21
Node2 192.168.56.12 192.168.56.32 192.168.100.22
Rac_scan 192.168.56.25
----主機系統記憶體、網路儲存等配置:
--節點1:
[root@node1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1
[root@node1 ~]#
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.11
NETMASK=255.255.255.0
GATEWAY=192.168.56.1
[root@node1 ~]#
[root@node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.21
GATEWAY=255.255.255.0
[root@node1 ~]#
--節點2:
[root@node2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2
[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.12
NETMASK=255.255.255.0
GATEWAY=192.168.56.1
[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.22
NETMASK=255.255.255.0
[root@node2 ~]#
--設定hosts檔案:
[root@node1 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost
::1 localhost6.localdomain6 localhost6
192.168.56.11 node1
192.168.56.31 node1-vip
192.168.100.21 node1-priv
192.168.56.12 node2
192.168.56.32 node2-vip
192.168.100.22 node2-priv
192.168.56.25 rac_scan
~
#兩個節點一樣設定。
----新增組或者使用者:
--刪除已存在的使用者或者使用者組:
[root@node1 ~]# cd /var/spool/mail
[root@node1 mail]# ls
oracle rpc tom
[root@node1 mail]# rm -rf oracle
[root@node1 mail]# cd /home
[root@node1 home]# ls
oracle tom
[root@node1 home]# rm -rf oracle/
[root@node1 home]# cd \
[root@node1 home]# cd \
>
[root@node1 ~]#
[root@node1 ~]# userdel oracle
[root@node1 ~]# groupdel dba
[root@node1 ~]# groupdel oinstall
[root@node1 ~]# groupdel oper
groupdel: group oper does not exist
[root@node1 ~]#
#刪除原有的使用者或者組,兩個節點都是這樣操作。
--新增使用者或使用者組:
[root@node1 ~]#
[root@node1 ~]# groupadd -g 200 oinstall
[root@node1 ~]# groupadd -g 201 dba
[root@node1 ~]# groupadd -g 202 oper
[root@node1 ~]# groupadd -g 203 asmadmin
[root@node1 ~]# groupadd -g 204 asmoper
[root@node1 ~]# groupadd -g 205 asmdba
[root@node1 ~]# useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
[root@node1 ~]# useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
[root@node1 ~]#
[root@node2 ~]# cd /var/spool/mail
[root@node2 mail]# rm -rf oracle
[root@node2 mail]# cd /home
[root@node2 home]# rm -rf oracle/
[root@node2 home]# cd \
>
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# userdel oracle
[root@node2 ~]# groupdel dba
[root@node2 ~]# groupdel oinstall
[root@node2 ~]# groupdel oper
groupdel: group oper does not exist
[root@node2 ~]#
[root@node2 ~]# groupadd -g 200 oinstall
[root@node2 ~]# groupadd -g 201 dba
[root@node2 ~]# groupadd -g 202 oper
[root@node2 ~]# groupadd -g 203 asmadmin
[root@node2 ~]# groupadd -g 204 asmoper
[root@node2 ~]# groupadd -g 205 asmdba
[root@node2 ~]# useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
[root@node2 ~]# useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
[root@node2 ~]#
----建立相關目錄並授權指令碼:
--節點1:
[root@node1 ~]# pwd
/root
[root@node1 ~]# mkdir -p /u01/app/oraInventory
[root@node1 ~]# chown -R grid:oinstall /u01/app/oraInventory/
[root@node1 ~]# chmod -R 775 /u01/app/oraInventory/
[root@node1 ~]# mkdir -p /u01/11.2.0/grid
[root@node1 ~]# chown -R grid:oinstall /u01/11.2.0/grid/
[root@node1 ~]# chmod -R 775 /u01/11.2.0/grid/
[root@node1 ~]# mkdir -p /u01/app/oracle
[root@node1 ~]# mkdir -p /u01/app/oracle/cfgtoollogs
[root@node1 ~]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@node1 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@node1 ~]# chmod -R 775 /u01/app/oracle
[root@node1 ~]#
----------------------------
--節點2:
[root@node2 ~]# pwd
/root
[root@node2 ~]# mkdir -p /u01/app/oraInventory
[root@node2 ~]# chown -R grid:oinstall /u01/app/oraInventory/
[root@node2 ~]# chmod -R 775 /u01/app/oraInventory/
[root@node2 ~]# mkdir -p /u01/11.2.0/grid
[root@node2 ~]# chown -R grid:oinstall /u01/11.2.0/grid/
[root@node2 ~]# chmod -R 775 /u01/11.2.0/grid/
[root@node2 ~]# mkdir -p /u01/app/oracle
[root@node2 ~]# mkdir -p /u01/app/oracle/cfgtoollogs
[root@node2 ~]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@node2 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@node2 ~]# chmod -R 775 /u01/app/oracle
[root@node2 ~]#
----設定oracle使用者和grid使用者密碼:
[root@node1 ~]#
[root@node1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]#
------------------------
[root@node2 ~]#
[root@node2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node2 ~]# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node2 ~]#
#兩個節點都同樣設定。
----修改核心引數:
--新增核心檔案內容1:
[root@node1 ~]# vi /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
... ...
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
--核心引數修改生效:
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
[root@node1 ~]#
#兩個節點同樣的操作。
--新增核心檔案內容2:
[root@node1 ~]# vi /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
... ...
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
--新增核心檔案內容3:
[root@node1 ~]# vi /etc/pam.d/login
session required /lib/security/pam_limits.so
#兩個加點同樣操作。
--新增核心檔案內容4:
[root@node1 ~]# vi /etc/profile :
if [ $USER = "oracle" ]||[ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
#兩個節點同樣操作。
----關閉系統ntp服務,採用oracle 自帶的時間同步服務:
--停止部分系統服務:
[root@node1 ~]#
[root@node1 ~]# chkconfig ntpd off
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 ~]# chkconfig sendmail off
[root@node1 ~]#
#兩個節點同樣操作。
--校驗連個節點時間相差20s內:
[root@node1 ~]#
[root@node1 ~]# date
Fri Oct 28 12:23:11 CST 2016
[root@node1 ~]#
[root@node2 ~]#
[root@node2 ~]# date
Fri Oct 28 12:23:20 CST 2016
[root@node2 ~]#
----進入oracle與grid使用者分別修改環境變數(所有節點):
#node1 ORACLE_SID=prod1 ORACLE_SID=+ASM1
#node2 ORACLE_SID=prod2 ORACLE_SID=+ASM2
---oracle使用者:
--節點1:
[oracle@node1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=prod1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
~
[oracle@node1 ~]$ . .bash_profile
[oracle@node1 ~]$
--節點2:
[oracle@node2 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=prod2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
[oracle@node2 ~]$ . .bash_profile
[oracle@node2 ~]$
---grid使用者:
--節點1:
[grid@node1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/11.2.0/grid
export GRID_HOME=/u01/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
~
~
".bash_profile" 23L, 484C written
[grid@node1 ~]$ . .bash_profile
[grid@node1 ~]$
--節點2:
[grid@node2 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export EDITOR=vi
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/11.2.0/grid
export GRID_HOME=/u01/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
~
~
".bash_profile" 23L, 484C written
[grid@node2 ~]$ . .bash_profile
[grid@node2 ~]$
----配置共享儲存:
---透過ASM管理:
1)OCR DISK :儲存CRS資源配置資訊
2)VOTE DISK:仲裁盤,記錄節點狀態
3)Data Disk:存放datafile、controlfile、redologfile、spfile 等
4)Recovery Area:存放flashback database log、archive log、rman backup等
--檢視磁碟大小情況:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
[root@node1 ~]#
--分配磁碟分割槽:
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 3263.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
--分盤操作:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3263, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (124-3263, default 124):
Using default value 124
Last cylinder or +size or +sizeM or +sizeK (124-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (247-3263, default 247):
Using default value 247
Last cylinder or +size or +sizeM or +sizeK (247-3263, default 3263): +1G
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Selected partition 4
First cylinder (370-3263, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-3263, default 3263):
Using default value 3263
Command (m for help): n
First cylinder (370-3263, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-3263, default 3263): +7G
Command (m for help): n
First cylinder (1222-3263, default 1222):
Using default value 1222
Last cylinder or +size or +sizeM or +sizeK (1222-3263, default 3263): +7G
Command (m for help): n
First cylinder (2074-3263, default 2074):
Using default value 2074
Last cylinder or +size or +sizeM or +sizeK (2074-3263, default 3263): +3G
Command (m for help): n
First cylinder (2440-3263, default 2440):
Using default value 2440
Last cylinder or +size or +sizeM or +sizeK (2440-3263, default 3263): +3G
Command (m for help): n
First cylinder (2806-3263, default 2806):
Using default value 2806
Last cylinder or +size or +sizeM or +sizeK (2806-3263, default 3263): +1G
Command (m for help): n
First cylinder (2929-3263, default 2929): +1G
Value out of range.
First cylinder (2929-3263, default 2929):
Using default value 2929
Last cylinder or +size or +sizeM or +sizeK (2929-3263, default 3263): +1G
Command (m for help): n
First cylinder (3052-3263, default 3052):
Using default value 3052
Last cylinder or +size or +sizeM or +sizeK (3052-3263, default 3263):
Using default value 3263
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#
#分割槽只需在節點1操作。
--檢視磁碟分割槽情況:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 123 987966 83 Linux
/dev/sdb2 124 246 987997+ 83 Linux
/dev/sdb3 247 369 987997+ 83 Linux
/dev/sdb4 370 3263 23246055 5 Extended
/dev/sdb5 370 1221 6843658+ 83 Linux
/dev/sdb6 1222 2073 6843658+ 83 Linux
/dev/sdb7 2074 2439 2939863+ 83 Linux
/dev/sdb8 2440 2805 2939863+ 83 Linux
/dev/sdb9 2806 2928 987966 83 Linux
/dev/sdb10 2929 3051 987966 83 Linux
/dev/sdb11 3052 3263 1702858+ 83 Linux
[root@node1 ~]#
--在node2上檢視磁碟,由於是共享的,所有node2檢視的磁碟已經分好區:
[root@node2 ~]# fdisk -l
Disk /dev/sda: 68.8 GB, 68862869504 bytes
255 heads, 63 sectors/track, 8372 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 8372 67143667+ 8e Linux LVM
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 123 987966 83 Linux
/dev/sdb2 124 246 987997+ 83 Linux
/dev/sdb3 247 369 987997+ 83 Linux
/dev/sdb4 370 3263 23246055 5 Extended
/dev/sdb5 370 1221 6843658+ 83 Linux
/dev/sdb6 1222 2073 6843658+ 83 Linux
/dev/sdb7 2074 2439 2939863+ 83 Linux
/dev/sdb8 2440 2805 2939863+ 83 Linux
/dev/sdb9 2806 2928 987966 83 Linux
/dev/sdb10 2929 3051 987966 83 Linux
/dev/sdb11 3052 3263 1702858+ 83 Linux
[root@node2 ~]#
--- ASMLib software setup:
-- Create the asm directory and upload the rpm packages:
[root@node1 ~]#
[root@node1 ~]# mkdir asm
[root@node1 ~]# ls
anaconda-ks.cfg asm Desktop install.log install.log.syslog
[root@node1 ~]#
[root@node1 ~]# cd asm
[root@node1 asm]# rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
  100%     126 KB  126 KB/s 00:00:01       0 Errors
  100%      13 KB   13 KB/s 00:00:01       0 Errors
  100%      83 KB   83 KB/s 00:00:01       0 Errors
# Upload complete.
-- Install the rpm packages:
[root@node1 asm]#
[root@node1 asm]# ls
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm   # this rpm's version must match the system kernel version; check the kernel version with: uname -a
oracleasm-support-2.1.8-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
[root@node1 asm]#
[root@node1 asm]#
[root@node1 asm]# rpm -ivh *
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-194.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@node1 asm]#
# Installation complete; run the same steps on node2.
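# A quick sanity check before configuring the driver (a sketch): confirm the running kernel release matches the version embedded in the oracleasm driver rpm, and that all three packages are installed:
[root@node1 asm]# uname -r                  # should print 2.6.18-194.el5
[root@node1 asm]# rpm -qa | grep oracleasm  # should list all three packages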
--- Configure Oracle ASMLib (same operation on both nodes):
[root@node1 asm]#
[root@node1 asm]# service oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 asm]#
[root@node1 asm]#
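# Optionally confirm the driver is loaded before creating disks (a sketch using the same init script):
[root@node1 asm]# service oracleasm status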
--- Create the Oracle ASM disks:
-- Node 1:  # disk creation is done on one node only (node1 here); node2 runs a different command afterwards (scandisks, shown below).
[root@node1 asm]#
[root@node1 asm]# service oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
[root@node1 asm]# service oracleasm createdisk OCR_VOTE1 /dev/sdb1
Marking disk "OCR_VOTE1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE2 /dev/sdb2
Marking disk "OCR_VOTE2" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk OCR_VOTE3 /dev/sdb3
Marking disk "OCR_VOTE3" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA1 /dev/sdb5
Marking disk "ASM_DATA1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_DATA2 /dev/sdb6
Marking disk "ASM_DATA2" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY1 /dev/sdb7
Marking disk "ASM_RCY1" as an ASM disk: [ OK ]
[root@node1 asm]# service oracleasm createdisk ASM_RCY2 /dev/sdb8
Marking disk "ASM_RCY2" as an ASM disk: [ OK ]
[root@node1 asm]#
[root@node1 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3
[root@node1 asm]#
-- Node 2:
[root@node2 asm]#
[root@node2 asm]# service oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 asm]#
[root@node2 asm]#
[root@node2 asm]# service oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_RCY1
ASM_RCY2
OCR_VOTE1
OCR_VOTE2
OCR_VOTE3
[root@node2 asm]#
# The ASM disks created on node1 are now visible on node2.
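# To double-check a single mapping, querydisk can be run on either node (a sketch):
[root@node2 asm]# service oracleasm querydisk OCR_VOTE1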
---- Establish mutual trust between the hosts:
-- Set up trust between the oracle and grid users across the nodes (via ssh public/private keys).
-- Generate key pairs (for the oracle and grid users on all nodes):
--- oracle user:
-- oracle user on node 1:
[root@node1 ~]# su - oracle
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh-keygen -t rsa   # generate the RSA key pair; leave the passphrase empty #
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
09:7d:4d:26:a5:3c:40:24:55:bd:25:5f:cd:e3:5f:73 oracle@node1
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh-keygen -t dsa   # generate the DSA key pair; leave the passphrase empty #
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
32:3f:5e:7a:fc:19:78:cf:39:24:89:6e:80:dd:7a:65 oracle@node1
[oracle@node1 ~]$
[oracle@node1 ~]$ ls .ssh
id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node1 ~]$
-- oracle user on node 2:
[root@node2 ~]# su - oracle
[oracle@node2 ~]$
[oracle@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
22:28:28:eb:b0:fa:43:00:71:f7:ca:a2:53:ed:38:ca oracle@node2
[oracle@node2 ~]$
[oracle@node2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
04:3c:bf:64:db:e3:db:9b:19:90:45:d4:06:dd:71:30 oracle@node2
[oracle@node2 ~]$
[oracle@node2 ~]$ ls .ssh
id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node2 ~]$
--- Configure the trust relationship:
[oracle@node1 ~]$ cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[oracle@node1 ~]$ cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat .ssh/id_rsa.pub >>.ssh/authorized_keys
The authenticity of host 'node2 (192.168.56.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.56.12' (RSA) to the list of known hosts.
oracle@node2's password:
[oracle@node1 ~]$ ssh node2 cat .ssh/id_dsa.pub >>.ssh/authorized_keys
oracle@node2's password:
[oracle@node1 ~]$ scp .ssh/authorized_keys node2:~/.ssh
oracle@node2's password:
authorized_keys 100% 1992 2.0KB/s 00:00
[oracle@node1 ~]$
-- The key files and authorized_keys can now be checked:
-- Node 1:
[oracle@node1 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node1 ~]$
-- Node 2:
[oracle@node2 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@node2 ~]$
--- Verify the trust relationship:
-- Node 1:
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh node2 date
Fri Oct 28 13:16:54 CST 2016
[oracle@node1 ~]$ ssh node2-priv date
The authenticity of host 'node2-priv (10.10.10.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2-priv,10.10.10.12' (RSA) to the list of known hosts.
Fri Oct 28 13:17:05 CST 2016
[oracle@node1 ~]$ ssh node2-priv date
Fri Oct 28 13:17:10 CST 2016
[oracle@node1 ~]$ ssh node1 date
The authenticity of host 'node1 (192.168.56.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.56.11' (RSA) to the list of known hosts.
Fri Oct 28 13:17:38 CST 2016
[oracle@node1 ~]$ ssh node1 date
Fri Oct 28 13:17:42 CST 2016
[oracle@node1 ~]$ ssh node1-priv date
The authenticity of host 'node1-priv (10.10.10.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-priv,10.10.10.11' (RSA) to the list of known hosts.
Fri Oct 28 13:17:54 CST 2016
[oracle@node1 ~]$ ssh node1-priv date
Fri Oct 28 13:17:57 CST 2016
[oracle@node1 ~]$
[oracle@node1 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node1 ~]$
-- Node 2:
[oracle@node2 ~]$ ssh node1 date
The authenticity of host 'node1 (192.168.56.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.56.11' (RSA) to the list of known hosts.
Fri Oct 28 13:19:23 CST 2016
[oracle@node2 ~]$ ssh node1 date
Fri Oct 28 13:19:26 CST 2016
[oracle@node2 ~]$ ssh node1-priv date
The authenticity of host 'node1-priv (10.10.10.11)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-priv,10.10.10.11' (RSA) to the list of known hosts.
Fri Oct 28 13:19:51 CST 2016
[oracle@node2 ~]$ ssh node1-priv date
Fri Oct 28 13:19:54 CST 2016
[oracle@node2 ~]$ ssh node2 date
The authenticity of host 'node2 (192.168.56.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.56.12' (RSA) to the list of known hosts.
Fri Oct 28 13:20:09 CST 2016
[oracle@node2 ~]$ ssh node2 date
Fri Oct 28 13:20:12 CST 2016
[oracle@node2 ~]$ ssh node2-priv date
The authenticity of host 'node2-priv (10.10.10.12)' can't be established.
RSA key fingerprint is 25:cb:8a:67:4a:41:eb:1d:39:1e:ba:8f:0d:24:05:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2-priv,10.10.10.12' (RSA) to the list of known hosts.
Fri Oct 28 13:20:23 CST 2016
[oracle@node2 ~]$ ssh node2-priv date
Fri Oct 28 13:20:26 CST 2016
[oracle@node2 ~]$
[oracle@node2 ~]$
[oracle@node2 ~]$ ls .ssh
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts
[oracle@node2 ~]$
# The oracle users on the two nodes now trust each other. Repeat the same steps under the grid user on both nodes to establish mutual trust between the grid users.
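# A compact way to run the same verification for the grid user, once its keys have been exchanged, is a small loop over all the host names (a sketch; run it on both nodes and answer yes to any first-connection prompts):
[grid@node1 ~]$ for h in node1 node1-priv node2 node2-priv; do ssh $h date; done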
---- Configure a local yum repository:
[root@node1 ~]# cd /etc/yum.repos.d
[root@node1 yum.repos.d]# ls
rhel-debuginfo.repo
[root@node1 yum.repos.d]# cp rhel-debuginfo.repo yum.repo
[root@node1 yum.repos.d]# vi yum.repo
[Base]
name=Red Hat Enterprise Linux
baseurl=file:///media/Server
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# Same operation on both nodes.
-- Attach the installation DVD in the virtual machine.
# This is done through the GUI, so the details are omitted.
-- Mount the DVD:
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# mount /dev/hdc /media
mount: block device /dev/hdc is write-protected, mounting read-only
[root@node1 yum.repos.d]#
# Same operation on both nodes.
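# With the DVD mounted, the new repository definition can be verified before installing anything (a sketch):
[root@node1 yum.repos.d]# yum clean all
[root@node1 yum.repos.d]# yum repolist    # the Base repository should be listed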
-- Install the required packages with yum:
[root@node1 yum.repos.d]# yum install libaio* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Base | 1.3 kB 00:00
Base/primary | 753 kB 00:00
Base 2348/2348
Setting up Install Process
Package libaio-0.3.106-5.i386 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
libaio-devel i386 0.3.106-5 Base 12 k
Transaction Summary
===============================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 12 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libaio-devel 1/1
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installed:
libaio-devel.i386 0:0.3.106-5
Complete!
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# yum install syssta* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package sysstat.i386 0:7.0.2-3.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
sysstat i386 7.0.2-3.el5 Base 170 k
Transaction Summary
===============================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 170 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : sysstat 1/1
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installed:
sysstat.i386 0:7.0.2-3.el5
Complete!
[root@node1 yum.repos.d]#
[root@node1 yum.repos.d]# yum install unixO* -y
Loaded plugins: rhnplugin, security
Repository rhel-debuginfo is listed more than once in the configuration
Repository rhel-debuginfo-beta is listed more than once in the configuration
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package unixODBC.i386 0:2.2.11-7.1 set to be updated
---> Package unixODBC-devel.i386 0:2.2.11-7.1 set to be updated
---> Package unixODBC-kde.i386 0:2.2.11-7.1 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================
Package Arch Version Repository Size
===============================================================================================
Installing:
unixODBC i386 2.2.11-7.1 Base 832 k
unixODBC-devel i386 2.2.11-7.1 Base 737 k
unixODBC-kde i386 2.2.11-7.1 Base 558 k
Transaction Summary
===============================================================================================
Install 3 Package(s)
Upgrade 0 Package(s)
Total download size: 2.1 M
Downloading Packages:
-----------------------------------------------------------------------------------------------
Total 429 MB/s | 2.1 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : unixODBC 1/3
error: failed to stat /media/RHEL_5.5 i386 DVD: No such file or directory
Installing : unixODBC-kde 2/3
Installing : unixODBC-devel 3/3
Installed:
unixODBC.i386 0:2.2.11-7.1 unixODBC-devel.i386 0:2.2.11-7.1 unixODBC-kde.i386 0:2.2.11-7.1
Complete!
-- Verify the installation; there should be three packages:
[root@node1 yum.repos.d]# rpm -qa |grep -i odbc
unixODBC-kde-2.2.11-7.1
unixODBC-devel-2.2.11-7.1
unixODBC-2.2.11-7.1
[root@node1 yum.repos.d]#
# yum installation successful; same operation on both nodes.
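# A one-line check that the packages installed in this step are all in place (a sketch; extend the list with any other prerequisite rpms your environment needs):
[root@node1 yum.repos.d]# for p in libaio libaio-devel sysstat unixODBC unixODBC-devel; do rpm -q $p; done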
---- Install the GI (Grid Infrastructure) software:
-- Switch to the grid user and create a soft directory (the installation runs from one node only; the installer propagates the software to the other node).
-- After creating the soft directory, upload the installation media into it:
[grid@node1 ~]$ pwd
/home/grid
[grid@node1 ~]$ mkdir soft
[grid@node1 ~]$ cd soft/
[grid@node1 soft]$ pwd
/home/grid/soft
[grid@node1 soft]$
[grid@node1 soft]$ rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
  100%  957843 KB  5949 KB/s 00:02:41       0 Errors
[grid@node1 soft]$ ls
linux_11gR2_grid.zip
[grid@node1 soft]$
# Upload complete.
-- Unzip the installation package:
[grid@node1 soft]$
[grid@node1 soft]$ unzip linux_11gR2_grid.zip
... ...
creating: grid/stage/properties/
inflating: grid/stage/properties/oracle.crs_Complete.properties
creating: grid/stage/sizes/
inflating: grid/stage/sizes/oracle.crs11.2.0.1.0Complete.sizes.properties
inflating: grid/stage/OuiConfigVariables.xml
inflating: grid/stage/fastcopy.xml
[grid@node1 soft]$
[grid@node1 soft]$ ls
grid linux_11gR2_grid.zip
[grid@node1 soft]$
# Unzip complete.
-- Enter the grid directory:
[grid@node1 soft]$ cd grid/
[grid@node1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@node1 grid]$
-- First pre-check the GI installation environment with runcluvfy:
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed
node1 passed
Verification of the hosts config file successful
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.56.12 192.168.56.0 0.0.0.0 192.168.56.1 08:00:27:FB:15:AB 1500
eth1 10.10.10.12 10.10.10.0 0.0.0.0 192.168.56.1 08:00:27:59:BC:90 1500
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.56.11 192.168.56.0 0.0.0.0 192.168.56.1 08:00:27:FB:15:AA 1500
eth1 10.10.10.11 10.0.0.0 0.0.0.0 192.168.56.1 08:00:27:E1:66:38 1500
Check: Node connectivity of subnet "192.168.56.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) node2,node1
Check: TCP connectivity of subnet "192.168.56.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:192.168.56.11 node2:192.168.56.12 passed
Result: TCP connectivity check passed for subnet "192.168.56.0"
... ...   # check every item on both nodes for failed entries; if there are none, the environment is ready.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
node2 does not exist passed
node1 does not exist passed
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 0022 0022 passed
node1 0022 0022 passed
Result: Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
[grid@node1 grid]$
[grid@node1 grid]$
# Pre-check complete; no issues found.
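# When the output is long, the failed items can be filtered out directly (a sketch):
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose | grep -iE 'failed|error'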
---- With the environment pre-check clean, start the graphical interface and install the GI software.
--- From the Xshell tool (with an X display available), set the DISPLAY variable and launch the graphical installer:
[grid@node1 grid]$ export DISPLAY=192.168.56.101:0.0
[grid@node1 grid]$
[grid@node1 grid]$ ls
doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@node1 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 80 MB. Actual 51512 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3935 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-10-28_02-02-45PM. Please wait ...
--- Near the end of the graphical GI installation, the installer prompts you to run the following two scripts as root on both nodes:
[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh   # node 1
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh   # node 2
[root@node1 ~]# /u01/11.2.0/grid/root.sh               # node 1
[root@node2 ~]# /u01/11.2.0/grid/root.sh               # node 2
# Installation complete.
--- Then verify the cluster resources:
[grid@node1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE node1
ora....N1.lsnr ora....er.type ONLINE ONLINE node1
ora....VOTE.dg ora....up.type ONLINE ONLINE node1
ora.asm ora.asm.type ONLINE ONLINE node1
ora.eons ora.eons.type ONLINE ONLINE node1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application OFFLINE OFFLINE
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip ora....t1.type ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application OFFLINE OFFLINE
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip ora....t1.type ONLINE ONLINE node2
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE node1
ora....ry.acfs ora....fs.type ONLINE ONLINE node1
ora.scan1.vip ora....ip.type ONLINE ONLINE node1
--- Check the following four services; when all of them are online, proceed to install the database software:
[grid@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@node1 ~]$
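# Two more quick checks before moving on (a sketch, run as the grid user): list the registered cluster nodes and check CRS across the whole cluster:
[grid@node1 ~]$ olsnodes -n
[grid@node1 ~]$ crsctl check cluster -all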
---- After checking the cluster resources, install the Oracle database software (run from a single node):
--- Under the oracle user, create a soft directory, upload the two installation packages into it, and unzip them:
--- Unzip the two Oracle software packages:
[oracle@node1 soft]$ unzip linux_11gR2_database_1of2.zip
creating: database/stage/sizes/
extracting: database/stage/sizes/oracle.server11.2.0.1.0EE.sizes.properties
extracting: database/stage/sizes/oracle.server11.2.0.1.0SE.sizes.properties
extracting: database/stage/sizes/oracle.server11.2.0.1.0Custom.sizes.properties
inflating: database/stage/OuiConfigVariables.xml
inflating: database/stage/oracle.server.11_2_0_1_0.xml
inflating: database/stage/fastcopy.xml
[oracle@node1 soft]$
[oracle@node1 soft]$
[oracle@node1 soft]$ unzip linux_11gR2_database_2of2.zip
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup6.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup5.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup2.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup10.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup11.jar
inflating: database/stage/Components/oracle.sysman.console.db/11.2.0.1.0/1/DataFiles/filegroup7.jar
[oracle@node1 soft]$
--- After unzipping, check and enter the database directory:
[oracle@node1 soft]$ ls
database linux_11gR2_database_1of2.zip linux_11gR2_database_2of2.zip
[oracle@node1 soft]$
[oracle@node1 soft]$ cd database/
[oracle@node1 database]$ ls
doc install response rpm runInstaller sshsetup stage welcome.html
--- Set the DISPLAY variable, start the graphical interface, and install the Oracle software:
[oracle@node1 database]$ export DISPLAY=192.168.56.101:0.0
[oracle@node1 database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 80 MB. Actual 43893 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3791 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-10-29_07-40-13AM. Please wait ...
... ...
# Oracle software installed successfully.
--- Run the following script as root:
[root@node1 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
# Script finished.
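# For a RAC installation the installer normally prompts to run this script on every node, so the same command would also be run on node2 (a sketch, assuming the same ORACLE_HOME path):
[root@node2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh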
---- Use ASMCA to create the remaining two ASM disk groups:
[grid@node1 ~]$ export DISPLAY=192.168.56.101:0.0
[grid@node1 ~]$
[grid@node1 ~]$ asmca
# Disk groups created.
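# The new disk groups can be confirmed from the grid user with asmcmd (a sketch, assuming the grid environment variables are set):
[grid@node1 ~]$ asmcmd lsdg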
---- From SecureCRT, launch the DBCA graphical interface and create the cluster database:
[oracle@node1 ~]$ export DISPLAY=192.168.56.101:0.0
[oracle@node1 ~]$
[oracle@node1 ~]$ dbca
... ...
# Database creation complete.
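# As a final check, srvctl can report the state of the new cluster database (a sketch; racdb is a placeholder for whatever database name was chosen in DBCA):
[oracle@node1 ~]$ srvctl status database -d racdb   # racdb is a hypothetical name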
Source: ITPUB blog, http://blog.itpub.net/31392094/viewspace-2127342/