Setting up Oracle 10g RAC on Linux AS 4.0 (VMware + raw devices)
Linux AS 4.0, Oracle 10.2.0.4.0 RAC, virtual machines, raw devices
1. For the preliminary setup, refer to other documents.
Install the required Linux packages. Only the packages found missing were installed here; the list may differ with other installation choices.
[root@localhost packages]# rpm -ivh glibc-kernheaders-2.4-9.1.100.EL.i386.rpm
warning: glibc-kernheaders-2.4-9.1.100.EL.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:glibc-kernheaders ########################################### [100%]
[root@localhost packages]# rpm -ivh glibc-headers-2.3.4-2.36.i386.rpm
warning: glibc-headers-2.3.4-2.36.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:glibc-headers ########################################### [100%]
[root@localhost packages]# rpm -ivh glibc-devel-2.3.4-2.36.i386.rpm
warning: glibc-devel-2.3.4-2.36.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:glibc-devel ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-gcc-32-3.2.3-47.3.i386.rpm
warning: compat-gcc-32-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:compat-gcc-32 ########################################### [100%]
[root@localhost packages]#
[root@localhost packages]# rpm -ivh compat-libstdc++-33-3.2.3-47.3.i386.rpm
warning: compat-libstdc++-33-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:compat-libstdc++-33 ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-gcc-32-c++-3.2.3-47.3.i386.rpm
warning: compat-gcc-32-c++-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:compat-gcc-32-c++ ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-libgcc-296-2.96-132.7.2.i386.rpm
warning: compat-libgcc-296-2.96-132.7.2.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:compat-libgcc-296 ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-libstdc++-296-2.96-132.7.2.i386.rpm
warning: compat-libstdc++-296-2.96-132.7.2.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:compat-libstdc++-296 ########################################### [100%]
2. Set the hostname (on every node).
[root@localhost sysconfig]# vi network
NETWORKING=yes
HOSTNAME=rac01
3. Create the oracle group and user (on every node):
[root@localhost etc]# groupadd dba
[root@localhost etc]# groupadd oper
[root@localhost etc]# useradd -g dba -G oper oracle
[root@localhost etc]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
Create the directories (on every node):
[root@localhost /]# mkdir -p /u01/product
[root@localhost /]# chown oracle.dba /u01
4. Configure the kernel parameters (on every node):
[root@localhost etc]# vi sysctl.conf
# Added by DBA for Oracle DB
kernel.shmall = 2097152
kernel.shmmax = 545259520
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
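Before relying on a reboot, the values the running kernel reports can be compared with the requirements above. A minimal sketch, not part of the original document (on a live system, `sysctl -p` run as root applies /etc/sysctl.conf immediately):

```shell
# Sketch: compare a required single-valued kernel parameter with the value
# the running kernel reports in /proc/sys. Prints OK or LOW per parameter.
check_param() {   # usage: check_param <proc-sys-path> <required-minimum>
    cur=$(cat "$1" 2>/dev/null || echo 0)
    if [ "$cur" -ge "$2" ] 2>/dev/null; then
        echo "OK  $1 = $cur"
    else
        echo "LOW $1 = $cur (want >= $2)"
    fi
}
# Multi-field parameters such as kernel.sem need a different check.
check_param /proc/sys/kernel/shmmax 545259520
check_param /proc/sys/fs/file-max 65536
```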
5. Set the Oracle environment variables (on every node):
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/bin:/sbin:/usr/bin:/usr/sbin
BASH_ENV=$HOME/.bashrc
export BASH_ENV PATH
unset USERNAME
# Set Oracle Environment
ORACLE_HOME=/u01/product/oracle;export ORACLE_HOME
ORACLE_SID=orcl2; export ORACLE_SID
ORACLE_OWNER=oracle;export ORACLE_OWNER
ORACLE_BASE=/u01/product;export ORACLE_BASE
ORACLE_TERM=vt100;export ORACLE_TERM
#NLS_LANG='traditional chinese_taiwan'.ZHT16BIG5;export NLS_LANG
LD_LIBRARY_PATH=$ORACLE_HOME/lib;export LD_LIBRARY_PATH
ORA_CRS_HOME=/u01/product/crs; export ORA_CRS_HOME
set -u
PS1=`hostname`'$';export PS1
EDITOR=/bin/vi; export EDITOR
JAVA_HOME=/usr/local/java;export JAVA_HOME
ORA_NLS33=/u01/product/oracle/ocommon/nls/admin/data;export ORA_NLS33
CLASSPATH=/u01/product/oracle/jdbc/lib/classes111.zip:/usr/local/java; export CLASSPATH
export DISPLAY=127.0.0.1:0.0
export LD_ASSUME_KERNEL=2.6.9
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$ORA_CRS_HOME/bin:$JAVA_HOME/bin:$PATH:.;export PATH
alias ll='ls -l';
alias ls='ls --color';
alias his='history';
# alias sqlplus='rlwrap sqlplus'
# alias rman='rlwrap rman'
stty erase ^H
umask 022
6. Configure /etc/hosts (on every node):
[root@rac01 etc]# vi hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.161.34.111 rac01
10.1.1.1 pri01
10.161.32.151 vip01
10.161.34.112 rac02
10.1.1.2 pri02
10.161.32.152 vip02
[root@rac02 ~]# cd /etc/security/
[root@rac02 security]# vi limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
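After logging back in as oracle, the shell limits can be checked directly. A quick sketch (pam_limits must be active for limits.conf to take effect):

```shell
# Print the current session's soft limits; after a fresh login as oracle
# they should reflect the limits.conf values set above.
echo "soft nofile: $(ulimit -Sn)"
echo "soft nproc:  $(ulimit -Su)"
```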
[root@rac01 security]# cd /etc/pam.d/
[root@rac01 pam.d]# vi login
Add the following line:
session required pam_limits.so
[root@rac01 etc]# vi grub.conf
Append to the kernel boot line:
selinux=0
7. On both nodes, disable services that slow down booting.
[root@rac02 ~]# chkconfig cups off
[root@rac02 ~]# chkconfig sendmail off
[root@rac02 ~]# chkconfig isdn off
[root@rac02 ~]# chkconfig smartd off
[root@rac02 ~]# chkconfig iptables off
8. Establish SSH user equivalence between the nodes
rac01$mkdir .ssh
rac01$chmod 700 .ssh/
rac01$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
ea:9b:ed:1e:3d:9e:c9:3c:92:6f:b2:1c:ce:d1:5e:b5
rac01$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
78:12:ec:f6:60:24:1a:3a:2a:63:05:67:a1:2a:10:f4
rac02$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
c3:30:11:b3:13:8e:c7:b7:62:87:0b:1f:e6:ef:4b:1f
rac02$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
23:e9:02:32:f5:18:2e:9b:72:50:cf:9e:54:16:26:b9
rac01$cd .ssh/
rac01$
rac01$ssh rac01 cat /home/oracle/.ssh/id_rsa.pub >>authorized_keys
The authenticity of host 'rac01 (10.161.34.111)' can't be established.
RSA key fingerprint is 25:a2:67:c5:a6:58:e3:78:34:0e:36:6d:a5:be:6b:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac01,10.161.34.111' (RSA) to the list of known hosts.
password:
rac01$
rac01$ssh rac01 cat /home/oracle/.ssh/id_dsa.pub >>authorized_keys
rac01$ssh rac02 cat /home/oracle/.ssh/id_rsa.pub >>authorized_keys
The authenticity of host 'rac02 (10.161.34.112)' can't be established.
RSA key fingerprint is 25:a2:67:c5:a6:58:e3:78:34:0e:36:6d:a5:be:6b:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac02,10.161.34.112' (RSA) to the list of known hosts.
password:
rac01$ssh rac02 cat /home/oracle/.ssh/id_dsa.pub >>authorized_keys
password:
rac01$scp authorized_keys rac02:/home/oracle/.ssh/
password:
authorized_keys 100% 1648 1.6KB/s 00:00
rac01$
rac01$chmod 600 authorized_keys
rac02$chmod 600 authorized_keys
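The four key-gathering commands above follow one pattern, so they can be generated in a loop. A sketch (hostnames rac01/rac02 as used in this document; the commands are only printed here, to be run from rac01 as oracle):

```shell
# Sketch: generate the commands that append both nodes' RSA and DSA public
# keys to one authorized_keys file.
gen_key_cmds() {
    for host in rac01 rac02; do
        for key in id_rsa id_dsa; do
            echo "ssh $host cat /home/oracle/.ssh/${key}.pub >> ~/.ssh/authorized_keys"
        done
    done
}
gen_key_cmds
```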
9. Add shared disks for the raw devices:
In the Add Hardware wizard:
a. Virtual device node: choose SCSI 1:0. Be sure to pick a bus different from
the system disk's; the system disk is usually SCSI(0:1), while the shared
disks must be on SCSI(1:x), SCSI(2:x) or SCSI(3:x);
b. Mode: choose Independent, and Persistent for all shared disks.
c. Choose "Allocate all disk space now".
d. Click Finish.
To share the disks between the two virtual machines, the VM configuration
files must also be edited:
http://xjnobadyit.blog.sohu.com/162291611.html
Shut down both virtual machines. Go to D:\VM\Linux4_TestRawDev, open
Linux4_TestRawDev.vmx, and append the following at the end. (Note: no line
may be duplicated in a .vmx file, or an error is reported; if any of the
lines below already exist, do not add them again.)
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
-- This enables the scsi1 bus, sets its sharedBus to virtual and its
-- controller to lsilogic; then add one entry per shared disk:
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "E:\SharedDisk\shareddisk01.vmdk"
scsi1:0.deviceType = "disk"
-- Finally add this block, which defines how VMware treats the shared disk; it is mandatory:
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
Edit Linux4_TestRawDev_rac02.vmx on node 2 and add the equivalent lines.
After saving, start the virtual machines and the newly added disk is visible.
On node 1:
[root@rac01 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1771 10088820 83 Linux
/dev/sda4 1772 1958 1502077+ 5 Extended
/dev/sda5 1772 1958 1502046 82 Linux swap
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
On node 2:
[root@rac02 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1771 10088820 83 Linux
/dev/sda4 1772 1958 1502077+ 5 Extended
/dev/sda5 1772 1958 1502046 82 Linux swap
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
10. Configure the hangcheck-timer module
Locate the module:
[root@rac01 etc]# find /lib/modules -name "hangcheck-timer.ko"
/lib/modules/2.6.9-55.EL/kernel/drivers/char/hangcheck-timer.ko
/lib/modules/2.6.9-55.ELsmp/kernel/drivers/char/hangcheck-timer.ko
Load the module at boot by adding the following to /etc/rc.d/rc.local:
[root@rac01 etc]# modprobe hangcheck-timer
[root@rac01 etc]# vi /etc/rc.d/rc.local
modprobe hangcheck-timer
Set the hangcheck-timer parameters by adding the following to /etc/modprobe.conf:
[root@rac01 etc]# vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Confirm the module loaded successfully:
[root@rac02 u01]# grep Hangcheck /var/log/messages | tail -2
Oct 20 15:23:38 rac02 kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 180 seconds, margin is 60 seconds).
Oct 20 15:23:38 rac02 kernel: Hangcheck: Using monotonic_clock().
11. Partition the shared disk (run on node 1)
[root@rac02 u01]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1771 10088820 83 Linux
/dev/sda4 1772 1958 1502077+ 5 Extended
/dev/sda5 1772 1958 1502046 82 Linux swap
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
12. When creating a VG, all physical volumes (PVs) in the same volume group
must share one physical extent (PE) size, much as all blocks in an Oracle
database share one block size.
[root@rac01 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +10240M
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1247-2610, default 1247):
Using default value 1247
Last cylinder or +size or +sizeM or +sizeK (1247-2610, default 2610): +10240M
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac01 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1771 10088820 83 Linux
/dev/sda4 1772 1958 1502077+ 5 Extended
/dev/sda5 1772 1958 1502046 82 Linux swap
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1246 10008463+ 83 Linux
/dev/sdb2 1247 2492 10008495 83 Linux
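The answers typed into the interactive fdisk session above can be collected into a reply file so the partitioning is repeatable. A sketch that only prints the answers, since feeding them into fdisk rewrites the partition table:

```shell
# Sketch: the fdisk keystrokes from the session above
# (n/p/1/<default>/+10240M, n/p/2/<default>/+10240M, w).
print_fdisk_answers() {
cat <<'EOF'
n
p
1

+10240M
n
p
2

+10240M
w
EOF
}
print_fdisk_answers
```

Saved to a file, these replay the session with `fdisk /dev/sdb < answers.txt` (as root, on node 1 only; the blank lines accept fdisk's default first cylinder).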
13. Configure the raw devices:
Initialize the partitions as physical volumes (run on node 1):
[root@rac01 ~]# pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
[root@rac01 ~]# pvscan
PV /dev/sdb1 lvm2 [9.54 GB]
PV /dev/sdb2 lvm2 [9.54 GB]
Total: 2 [19.09 GB] / in use: 0 [0 ] / in no VG: 2 [19.09 GB]
[root@rac01 ~]# pvdisplay
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 9.54 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 8v1kbs-dPsT-X7jG-qh7p-3AzM-i12P-OhNV4U
--- NEW Physical volume ---
PV Name /dev/sdb2
VG Name
PV Size 9.54 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID KooBSy-RpVJ-JxeY-vp2B-dOLt-UALf-CUl2nN
14. Create the VG (run on node 1):
Build a volume group on top of the PVs; syntax: vgcreate vgname pvname.
[root@rac01 ~]# vgcreate datavg01 /dev/sdb1 /dev/sdb2
Volume group "datavg01" successfully created
[root@rac01 etc]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "datavg01" using metadata type lvm2
[root@rac01 etc]# vgdisplay
--- Volume group ---
VG Name datavg01
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 15
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 14
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 19.09 GB
PE Size 4.00 MB
Total PE 4886
Alloc PE / Size 330 / 1.29 GB
Free PE / Size 4556 / 17.80 GB
VG UUID OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi
15. Create the LVs (run on node 1):
Create logical volumes inside the VG; syntax: lvcreate -n lvname -L size vgname.
The commands can be saved in a file, createlv.txt, and run as a batch:
lvcreate -n ocr2.dbf -L 120m datavg01
lvcreate -n votingdisk2 -L 120m datavg01
lvcreate -n system01.dbf -L 400m datavg01
lvcreate -n sysaux01.dbf -L 300m datavg01
lvcreate -n users01.dbf -L 10m datavg01
lvcreate -n undotbs01.dbf -L 200m datavg01
lvcreate -n temp01.dbf -L 200m datavg01
lvcreate -n spfileorcl.ora -L 10m datavg01
lvcreate -n control01.ctl -L 30m datavg01
lvcreate -n control02.ctl -L 30m datavg01
lvcreate -n control03.ctl -L 30m datavg01
lvcreate -n redo01.log -L 20m datavg01
lvcreate -n redo02.log -L 20m datavg01
lvcreate -n redo03.log -L 20m datavg01
[root@rac01 ~]# sh createlv.txt
Logical volume "ocr.dbf" created
Logical volume "votingdisk" created
Logical volume "system01.dbf" created
Logical volume "sysaux01.dbf" created
Rounding up size to full physical extent 12.00 MB
Logical volume "users01.dbf" created
Logical volume "undotbs01.dbf" created
Logical volume "temp01.dbf" created
Rounding up size to full physical extent 12.00 MB
Logical volume "spfileorcl.ora" created
Rounding up size to full physical extent 32.00 MB
Logical volume "control01.ctl" created
Rounding up size to full physical extent 32.00 MB
Logical volume "control02.ctl" created
Rounding up size to full physical extent 32.00 MB
Logical volume "control03.ctl" created
Logical volume "redo01.log" created
Logical volume "redo02.log" created
Logical volume "redo03.log" created
Note: vgdisplay shows a PE Size of 4.00 MB, so requested sizes that are not
multiples of 4 MB (10M, 30M) are rounded up to multiples of 4 MB (12M, 32M).
To delete an LV, use e.g. # lvremove -f /dev/datavg01/xxx.dbf;
lvextend, lvreduce and lvremove resize or remove LVs.
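The rounding LVM reports above is simply "round up to the next multiple of the PE size" (4 MB in this VG); a small sketch of the arithmetic:

```shell
# Round a requested size (in MB) up to the next multiple of the PE size.
pe=4
round_up() { echo $(( ($1 + pe - 1) / pe * pe )); }
for mb in 10 20 30 120 400; do
    echo "${mb}M requested -> $(round_up $mb)M allocated"
done
```

This reproduces the 10M->12M and 30M->32M adjustments seen in the createlv.txt output.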
[root@rac01 etc]# lvremove /dev/datavg01/data01.dbf
Do you really want to remove active logical volume "data01.dbf"? [y/n]: y
Logical volume "data01.dbf" successfully removed
View the LVs (run on node 1):
[root@rac01 ~]# lvscan
ACTIVE '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
ACTIVE '/dev/datavg01/votingdisk' [20.00 MB] inherit
ACTIVE '/dev/datavg01/system01.dbf' [400.00 MB] inherit
ACTIVE '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
ACTIVE '/dev/datavg01/users01.dbf' [12.00 MB] inherit
ACTIVE '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
ACTIVE '/dev/datavg01/control01.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control02.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control03.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/redo01.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo02.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo03.log' [20.00 MB] inherit
[root@rac01 ~]# lvdisplay
Once created, the new LVs are visible under /dev/datavg01 and /dev/mapper.
Note: an LV is deleted with a command like lvremove /dev/datavg01/sysaux01.dbf.
Run on node 1; the other nodes cannot see the LVs yet:
[root@rac01 ~]# cd /dev/mapper/
[root@rac01 mapper]# ls -al
total 0
drwxr-xr-x 2 root root 340 Oct 22 15:05 .
drwxr-xr-x 10 root root 7100 Oct 22 15:05 ..
crw------- 1 root root 10, 63 Oct 20 11:43 control
brw-rw---- 1 root disk 253, 8 Oct 22 15:05 datavg01-control01.ctl
brw-rw---- 1 root disk 253, 9 Oct 22 15:05 datavg01-control02.ctl
brw-rw---- 1 root disk 253, 10 Oct 22 15:05 datavg01-control03.ctl
brw-rw---- 1 root disk 253, 0 Oct 22 15:03 datavg01-ocr.dbf
brw-rw---- 1 root disk 253, 11 Oct 22 15:05 datavg01-redo01.log
brw-rw---- 1 root disk 253, 12 Oct 22 15:05 datavg01-redo02.log
brw-rw---- 1 root disk 253, 13 Oct 22 15:05 datavg01-redo03.log
brw-rw---- 1 root disk 253, 7 Oct 22 15:05 datavg01-spfileorcl.ora
brw-rw---- 1 root disk 253, 3 Oct 22 15:05 datavg01-sysaux01.dbf
brw-rw---- 1 root disk 253, 2 Oct 22 15:05 datavg01-system01.dbf
brw-rw---- 1 root disk 253, 6 Oct 22 15:05 datavg01-temp01.dbf
brw-rw---- 1 root disk 253, 5 Oct 22 15:05 datavg01-undotbs01.dbf
brw-rw---- 1 root disk 253, 4 Oct 22 15:05 datavg01-users01.dbf
brw-rw---- 1 root disk 253, 1 Oct 22 15:05 datavg01-votingdisk
16. Activate the VG on (or reboot) the other nodes (here only node 2):
Rebooting the other nodes is normally unnecessary; it is enough to activate
the VG and LVs on them:
# vgchange -a y datavgxxx
# lvchange -a y /dev/datavg01/xxxx.dbf
What follows merely demonstrates the state before activation and after a reboot.
Before the other node is restarted, it sees no PVs, VGs or LVs:
[root@rac02 ~]# vgscan
Reading all physical volumes. This may take a while...
No volume groups found
[root@rac02 ~]# pvscan
No matching physical volumes found
After the reboot (the LVs on the other node are inactive):
[root@rac02 mapper]# pvscan
PV /dev/sdb1 VG datavg01 lvm2 [9.54 GB / 8.90 GB free]
PV /dev/sdb2 VG datavg01 lvm2 [9.54 GB / 8.90 GB free]
Total: 2 [19.09 GB] / in use: 2 [19.09 GB] / in no VG: 0 [0 ]
[root@rac02 mapper]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "datavg01" using metadata type lvm2
[root@rac02 mapper]# lvscan
inactive '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
inactive '/dev/datavg01/votingdisk' [20.00 MB] inherit
inactive '/dev/datavg01/system01.dbf' [400.00 MB] inherit
inactive '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
inactive '/dev/datavg01/users01.dbf' [12.00 MB] inherit
inactive '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
inactive '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
inactive '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
inactive '/dev/datavg01/control01.ctl' [32.00 MB] inherit
inactive '/dev/datavg01/control02.ctl' [32.00 MB] inherit
inactive '/dev/datavg01/control03.ctl' [32.00 MB] inherit
inactive '/dev/datavg01/redo01.log' [20.00 MB] inherit
inactive '/dev/datavg01/redo02.log' [20.00 MB] inherit
inactive '/dev/datavg01/redo03.log' [20.00 MB] inherit
Check whether the VG and LVs are active (they are active by default right
after creation, but must be re-activated after an OS restart).
A VG and its LVs cannot be accessed until activated; activate the volume group with:
# vgchange -a y datavg01
The LVs on every node except node 1 show as inactive, so activate them here.
[root@rac02 dev]# vgchange -a y
14 logical volume(s) in volume group "datavg01" now active
[root@rac02 dev]# lvscan
ACTIVE '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
ACTIVE '/dev/datavg01/votingdisk' [20.00 MB] inherit
ACTIVE '/dev/datavg01/system01.dbf' [400.00 MB] inherit
ACTIVE '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
ACTIVE '/dev/datavg01/users01.dbf' [12.00 MB] inherit
ACTIVE '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
ACTIVE '/dev/datavg01/control01.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control02.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control03.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/redo01.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo02.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo03.log' [20.00 MB] inherit
Notes:
vgchange -a y/n # y activates a volume group, n deactivates it
lvchange -a y/n # y activates a logical volume, n deactivates it
17. Create the /dev/raw directory on all nodes (a dedicated raw directory is just for ease of management):
[root@rac01 ~]# cd /dev
[root@rac01 dev]# mkdir -p raw
[root@rac01 dev]# chmod 755 raw
18. Bind the raw devices (all nodes); save the following as a script, boundraw.txt, and run it:
vgscan ---- scan for and display the volume groups on the system
vgchange -a y ---- activate the VGs
/usr/bin/raw /dev/raw/raw1 /dev/datavg01/ocr.dbf
/usr/bin/raw /dev/raw/raw2 /dev/datavg01/votingdisk
/usr/bin/raw /dev/raw/raw3 /dev/datavg01/system01.dbf
/usr/bin/raw /dev/raw/raw4 /dev/datavg01/sysaux01.dbf
/usr/bin/raw /dev/raw/raw5 /dev/datavg01/users01.dbf
/usr/bin/raw /dev/raw/raw6 /dev/datavg01/undotbs01.dbf
/usr/bin/raw /dev/raw/raw7 /dev/datavg01/temp01.dbf
/usr/bin/raw /dev/raw/raw8 /dev/datavg01/spfileorcl.ora
/usr/bin/raw /dev/raw/raw9 /dev/datavg01/control01.ctl
/usr/bin/raw /dev/raw/raw10 /dev/datavg01/control02.ctl
/usr/bin/raw /dev/raw/raw11 /dev/datavg01/control03.ctl
/usr/bin/raw /dev/raw/raw12 /dev/datavg01/redo01.log
/usr/bin/raw /dev/raw/raw13 /dev/datavg01/redo02.log
/usr/bin/raw /dev/raw/raw14 /dev/datavg01/redo03.log
/usr/bin/raw /dev/raw/raw15 /dev/datavg01/ocr2.dbf
/usr/bin/raw /dev/raw/raw16 /dev/datavg01/votingdisk2
[root@rac01 ~]# sh boundraw.txt
Reading all physical volumes. This may take a while...
Found volume group "datavg01" using metadata type lvm2
14 logical volume(s) in volume group "datavg01" now active
/dev/raw/raw1: bound to major 253, minor 0
/dev/raw/raw2: bound to major 253, minor 1
/dev/raw/raw3: bound to major 253, minor 2
/dev/raw/raw4: bound to major 253, minor 3
/dev/raw/raw5: bound to major 253, minor 4
/dev/raw/raw6: bound to major 253, minor 5
/dev/raw/raw7: bound to major 253, minor 6
/dev/raw/raw8: bound to major 253, minor 7
/dev/raw/raw9: bound to major 253, minor 8
/dev/raw/raw10: bound to major 253, minor 9
/dev/raw/raw11: bound to major 253, minor 10
/dev/raw/raw12: bound to major 253, minor 11
/dev/raw/raw13: bound to major 253, minor 12
/dev/raw/raw14: bound to major 253, minor 13
.....
Note: elsewhere the /dev/datavg01/control03.ctl part of a raw binding is
sometimes written as /dev/mapper/datavg01-control03.ctl; the two paths are
linked symbolically, so either form works.
Notes:
# raw -qa lists the current raw bindings,
# raw /dev/raw/raw1 0 0 removes a binding.
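Since the bindings in boundraw.txt follow one ordered list, the script can be generated rather than typed out. A sketch using the 14 LV names from this document (the printed lines still need to be run as root):

```shell
# Sketch: generate the 14 raw-binding lines of boundraw.txt from the
# ordered LV list used above.
gen_bind_cmds() {
    i=1
    for lv in ocr.dbf votingdisk system01.dbf sysaux01.dbf users01.dbf \
              undotbs01.dbf temp01.dbf spfileorcl.ora control01.ctl \
              control02.ctl control03.ctl redo01.log redo02.log redo03.log; do
        echo "/usr/bin/raw /dev/raw/raw$i /dev/datavg01/$lv"
        i=$((i + 1))
    done
}
gen_bind_cmds
```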
[root@rac01 ~]# raw -qa
/dev/raw/raw1: bound to major 253, minor 0
/dev/raw/raw2: bound to major 253, minor 1
/dev/raw/raw3: bound to major 253, minor 2
/dev/raw/raw4: bound to major 253, minor 3
/dev/raw/raw5: bound to major 253, minor 4
/dev/raw/raw6: bound to major 253, minor 5
/dev/raw/raw7: bound to major 253, minor 6
/dev/raw/raw8: bound to major 253, minor 7
/dev/raw/raw9: bound to major 253, minor 8
/dev/raw/raw10: bound to major 253, minor 9
/dev/raw/raw11: bound to major 253, minor 10
/dev/raw/raw12: bound to major 253, minor 11
/dev/raw/raw13: bound to major 253, minor 12
/dev/raw/raw14: bound to major 253, minor 13
19. Change the permissions on the raw devices (run on all nodes; the lines can be put in a script and run as a batch):
/bin/chmod 600 /dev/raw/raw1
/bin/chmod 600 /dev/raw/raw2
/bin/chmod 600 /dev/raw/raw3
/bin/chmod 600 /dev/raw/raw4
/bin/chmod 600 /dev/raw/raw5
/bin/chmod 600 /dev/raw/raw6
/bin/chmod 600 /dev/raw/raw7
/bin/chmod 600 /dev/raw/raw8
/bin/chmod 600 /dev/raw/raw9
/bin/chmod 600 /dev/raw/raw10
/bin/chmod 600 /dev/raw/raw11
/bin/chmod 600 /dev/raw/raw12
/bin/chmod 600 /dev/raw/raw13
/bin/chmod 600 /dev/raw/raw14
....
/bin/chown oracle.dba /dev/raw/raw1
/bin/chown oracle.dba /dev/raw/raw2
/bin/chown oracle.dba /dev/raw/raw3
/bin/chown oracle.dba /dev/raw/raw4
/bin/chown oracle.dba /dev/raw/raw5
/bin/chown oracle.dba /dev/raw/raw6
/bin/chown oracle.dba /dev/raw/raw7
/bin/chown oracle.dba /dev/raw/raw8
/bin/chown oracle.dba /dev/raw/raw9
/bin/chown oracle.dba /dev/raw/raw10
/bin/chown oracle.dba /dev/raw/raw11
/bin/chown oracle.dba /dev/raw/raw12
/bin/chown oracle.dba /dev/raw/raw13
/bin/chown oracle.dba /dev/raw/raw14
...
Check the resulting permissions (on all nodes; both the mode and the owner have changed):
[root@rac01 raw]# pwd
/dev/raw
[root@rac01 raw]# ls -al
total 0
drwxr-xr-x 2 root root 320 Oct 22 17:06 .
drwxr-xr-x 11 root root 7120 Oct 22 16:52 ..
crw------- 1 oracle dba 162, 1 Oct 22 17:04 raw1
crw------- 1 oracle dba 162, 10 Oct 22 17:06 raw10
crw------- 1 oracle dba 162, 11 Oct 22 17:06 raw11
crw------- 1 oracle dba 162, 12 Oct 22 17:06 raw12
crw------- 1 oracle dba 162, 13 Oct 22 17:06 raw13
crw------- 1 oracle dba 162, 14 Oct 22 17:06 raw14
crw------- 1 oracle dba 162, 2 Oct 22 17:06 raw2
crw------- 1 oracle dba 162, 3 Oct 22 17:06 raw3
crw------- 1 oracle dba 162, 4 Oct 22 17:06 raw4
crw------- 1 oracle dba 162, 5 Oct 22 17:06 raw5
crw------- 1 oracle dba 162, 6 Oct 22 17:06 raw6
crw------- 1 oracle dba 162, 7 Oct 22 17:06 raw7
crw------- 1 oracle dba 162, 8 Oct 22 17:06 raw8
crw------- 1 oracle dba 162, 9 Oct 22 17:06 raw9
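The 28 chmod/chown lines of step 19 collapse into a single loop. A sketch, demonstrated here against empty files in a temporary directory rather than the real /dev/raw nodes (the chown is shown as a comment because it needs root):

```shell
# Sketch: set mode 600 on raw1..raw14 in one loop. On the real nodes the
# paths are /dev/raw/rawN and a chown to oracle.dba follows.
d=$(mktemp -d)
for i in $(seq 1 14); do
    touch "$d/raw$i"
    chmod 600 "$d/raw$i"
    # on the real nodes additionally: /bin/chown oracle.dba /dev/raw/raw$i
done
ls -l "$d/raw1"
```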
Notes:
---------------------------------------------------------
On Linux, three directories under /dev are involved:
a. /dev/raw the raw-device directory
b. /dev/mapper/ the block devices the raw devices are bound to
c. /dev/datavg01/ symlinks to those block devices
Oracle can use the devices only after the permissions in all three directories are changed.
On Linux 5 and later, the bindings, ownership and mode can be configured in /etc/udev/rules.d/60-raw.rules:
ACTION=="add", KERNEL=="pv/lvol70", RUN+="/bin/raw /dev/raw/raw70 %N"
ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="dba", MODE="0600"
On Linux 4 and earlier, instead:
Edit /etc/sysconfig/rawdevices as follows so the raw devices are bound at boot, e.g.:
/dev/raw/raw70 /dev/vg01/lvol70
This binds the raw devices through a service started at boot.
Then change line 113 of /etc/udev/permissions.d/50-udev.permissions
from raw/*:root:disk:0660
to
raw/*:oracle:dba:0600
which makes oracle:dba the default owner of the raw devices, with a default mode of 0600.
---------------------------------------------------------
20. Create the Oracle data-file links (run on all nodes)
Create a symlink for each Oracle data file and the parameter file pointing
at its raw device (a mapping from Oracle's logical file names to the raw
devices); this can be scripted as a batch.
ln -s /dev/raw/raw1 /u01/product/oradata/orcl/ocr.dbf
ln -s /dev/raw/raw2 /u01/product/oradata/orcl/votingdisk
ln -s /dev/raw/raw3 /u01/product/oradata/orcl/system01.dbf
ln -s /dev/raw/raw4 /u01/product/oradata/orcl/sysaux01.dbf
ln -s /dev/raw/raw5 /u01/product/oradata/orcl/users01.dbf
ln -s /dev/raw/raw6 /u01/product/oradata/orcl/undotbs01.dbf
ln -s /dev/raw/raw7 /u01/product/oradata/orcl/temp01.dbf
ln -s /dev/raw/raw8 /u01/product/oradata/orcl/spfileorcl.ora
ln -s /dev/raw/raw9 /u01/product/oradata/orcl/control01.ctl
ln -s /dev/raw/raw10 /u01/product/oradata/orcl/control02.ctl
ln -s /dev/raw/raw11 /u01/product/oradata/orcl/control03.ctl
ln -s /dev/raw/raw12 /u01/product/oradata/orcl/redo01.log
ln -s /dev/raw/raw13 /u01/product/oradata/orcl/redo02.log
ln -s /dev/raw/raw14 /u01/product/oradata/orcl/redo03.log
ln -s /dev/raw/raw15 /u01/product/oradata/orcl/ocr2.dbf
ln -s /dev/raw/raw16 /u01/product/oradata/orcl/votingdisk2
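The ln -s block above can likewise be written as a loop over one ordered list (the raw15/raw16 entries extend it the same way). A sketch, demonstrated in a temporary directory instead of the real /u01/product/oradata/orcl:

```shell
# Sketch: create the Oracle-name -> raw-device symlinks in one loop.
orcl=$(mktemp -d)
i=1
for name in ocr.dbf votingdisk system01.dbf sysaux01.dbf users01.dbf \
            undotbs01.dbf temp01.dbf spfileorcl.ora control01.ctl \
            control02.ctl control03.ctl redo01.log redo02.log redo03.log; do
    ln -s "/dev/raw/raw$i" "$orcl/$name"   # dangling links are allowed
    i=$((i + 1))
done
readlink "$orcl/control01.ctl"
```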
Check the symlinks (run on all nodes):
[root@rac01 orcl]# pwd
/u01/product/oradata/orcl
[root@rac01 orcl]#
[root@rac01 orcl]# ls -al
total 72
drwxr-xr-x 2 oracle dba 4096 Oct 24 09:18 .
drwxr-xr-x 3 oracle dba 4096 Oct 24 09:15 ..
lrwxrwxrwx 1 root root 13 Oct 24 09:18 control01.ctl -> /dev/raw/raw9
lrwxrwxrwx 1 root root 14 Oct 24 09:18 control02.ctl -> /dev/raw/raw10
lrwxrwxrwx 1 root root 14 Oct 24 09:18 control03.ctl -> /dev/raw/raw11
lrwxrwxrwx 1 root root 13 Oct 24 09:18 ocr.dbf -> /dev/raw/raw1
lrwxrwxrwx 1 root root 14 Oct 24 09:18 redo01.log -> /dev/raw/raw12
lrwxrwxrwx 1 root root 14 Oct 24 09:18 redo02.log -> /dev/raw/raw13
lrwxrwxrwx 1 root root 14 Oct 24 09:18 redo03.log -> /dev/raw/raw14
lrwxrwxrwx 1 root root 13 Oct 24 09:18 spfileorcl.ora -> /dev/raw/raw8
lrwxrwxrwx 1 root root 13 Oct 24 09:18 sysaux01.dbf -> /dev/raw/raw4
lrwxrwxrwx 1 root root 13 Oct 24 09:18 system01.dbf -> /dev/raw/raw3
lrwxrwxrwx 1 root root 13 Oct 24 09:18 temp01.dbf -> /dev/raw/raw7
lrwxrwxrwx 1 root root 13 Oct 24 09:18 undotbs01.dbf -> /dev/raw/raw6
lrwxrwxrwx 1 root root 13 Oct 24 09:18 users01.dbf -> /dev/raw/raw5
lrwxrwxrwx 1 root root 13 Oct 24 09:18 votingdisk -> /dev/raw/raw2
21. Rebind the raw devices automatically at system restart (all nodes):
Add the manual script above to /etc/rc.local so that the raw devices are
bound again automatically after a reboot.
If /etc/rc.local is not set up (and nothing else binds them), after a reboot you will see:
[root@rac01 dev]# cd raw
-bash: cd: raw: No such file or directory
[root@rac02 dev]# cd raw
-bash: cd: raw: No such file or directory
The auto-binding script follows (the ln -s links persist once created, so they are not put in the boot script):
vgscan
vgchange -a y
/usr/bin/raw /dev/raw/raw1 /dev/datavg01/ocr.dbf
/usr/bin/raw /dev/raw/raw2 /dev/datavg01/votingdisk
/usr/bin/raw /dev/raw/raw3 /dev/datavg01/system01.dbf
/usr/bin/raw /dev/raw/raw4 /dev/datavg01/sysaux01.dbf
/usr/bin/raw /dev/raw/raw5 /dev/datavg01/users01.dbf
/usr/bin/raw /dev/raw/raw6 /dev/datavg01/undotbs01.dbf
/usr/bin/raw /dev/raw/raw7 /dev/datavg01/temp01.dbf
/usr/bin/raw /dev/raw/raw8 /dev/datavg01/spfileorcl.ora
/usr/bin/raw /dev/raw/raw9 /dev/datavg01/control01.ctl
/usr/bin/raw /dev/raw/raw10 /dev/datavg01/control02.ctl
/usr/bin/raw /dev/raw/raw11 /dev/datavg01/control03.ctl
/usr/bin/raw /dev/raw/raw12 /dev/datavg01/redo01.log
/usr/bin/raw /dev/raw/raw13 /dev/datavg01/redo02.log
/usr/bin/raw /dev/raw/raw14 /dev/datavg01/redo03.log
/bin/chmod 600 /dev/raw/raw1
/bin/chmod 600 /dev/raw/raw2
/bin/chmod 600 /dev/raw/raw3
/bin/chmod 600 /dev/raw/raw4
/bin/chmod 600 /dev/raw/raw5
/bin/chmod 600 /dev/raw/raw6
/bin/chmod 600 /dev/raw/raw7
/bin/chmod 600 /dev/raw/raw8
/bin/chmod 600 /dev/raw/raw9
/bin/chmod 600 /dev/raw/raw10
/bin/chmod 600 /dev/raw/raw11
/bin/chmod 600 /dev/raw/raw12
/bin/chmod 600 /dev/raw/raw13
/bin/chmod 600 /dev/raw/raw14
/bin/chown oracle.dba /dev/raw/raw1
/bin/chown oracle.dba /dev/raw/raw2
/bin/chown oracle.dba /dev/raw/raw3
/bin/chown oracle.dba /dev/raw/raw4
/bin/chown oracle.dba /dev/raw/raw5
/bin/chown oracle.dba /dev/raw/raw6
/bin/chown oracle.dba /dev/raw/raw7
/bin/chown oracle.dba /dev/raw/raw8
/bin/chown oracle.dba /dev/raw/raw9
/bin/chown oracle.dba /dev/raw/raw10
/bin/chown oracle.dba /dev/raw/raw11
/bin/chown oracle.dba /dev/raw/raw12
/bin/chown oracle.dba /dev/raw/raw13
/bin/chown oracle.dba /dev/raw/raw14
A problem was hit here: when the permission commands run from rc.local, the
owner and mode of /dev/raw/rawX come out inconsistent, differing after every
reboot; shutting down one node shows the same behaviour. Still unresolved...
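A likely cause of the unstable raw-device ownership described above is udev recreating the /dev/raw nodes with its default rules on every boot, racing with rc.local. A persistent fix is the one already quoted in the remark under step 19: change the raw entry in /etc/udev/permissions.d/50-udev.permissions so udev itself creates the nodes as oracle:dba with mode 0600 (a suggestion, not verified on this system):

```text
# /etc/udev/permissions.d/50-udev.permissions (RHEL 4)
# replace the default raw-device entry raw/*:root:disk:0660 with:
raw/*:oracle:dba:0600
```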
22. Install Clusterware.
The main point is to enter the symlink paths for the OCR (ocr.dbf) and the
voting disk, e.g. /u01/app/product/oradata/orcl/ocr.dbf.
[root@rac01 ~]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
[root@rac01 ~]# sh /u01/app/oracle/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: rac01 pri01 rac01
node 2: rac02 pri02 rac02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw16
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac01
CSS is inactive on these nodes.
rac02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac01 ~]#
[root@rac02 raw]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
[root@rac02 raw]# sh /u01/app/oracle/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: rac01 pri01 rac01
node 2: rac02 pri02 rac02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac01
rac02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@rac02 raw]#
rac01$crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac01.gsd application ONLINE ONLINE rac01
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip application ONLINE ONLINE rac01
ora.rac02.gsd application ONLINE ONLINE rac02
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip application ONLINE ONLINE rac02
rac01$
rac01$vncserver
New 'rac01:2 (oracle)' desktop is rac01:2
Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/rac01:2.log
rac01$crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
23. Install the Oracle RDBMS and run DBCA.
If a volume turns out to be too small, it can be grown:
[root@rac01 ~]# lvresize -L +200M /dev/datavg01/system01.dbf
Extending logical volume system01.dbf to 600.00 MB
Logical volume system01.dbf successfully resized
In DBCA, make sure to choose raw devices, and when editing the datafile paths later on, again
enter the symlink paths and names. If a size is insufficient, change it with lvresize.
24. Working with the raw devices
A. Adding a user tablespace to the database (while the VG still has free space).
(1) Create the LV.
[root@rac01 ~]# lvcreate -n tony_test01.dbf -L 300m datavg01
Logical volume "tony_test01.dbf" created
[root@rac01 ~]# lvscan
...
ACTIVE '/dev/datavg01/ocr2.dbf' [120.00 MB] inherit
ACTIVE '/dev/datavg01/votingdisk2' [120.00 MB] inherit
ACTIVE '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/tony_test01.dbf' [300.00 MB] inherit
...
On node 2 the new LV is not active:
[root@rac02 ~]# lvscan
...
ACTIVE '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
ACTIVE '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
inactive '/dev/datavg01/tony_test01.dbf' [300.00 MB] inherit
...
Although it is inactive there, the LV can still be resized from node 2, and the change is visible on node 1.
So if the LV was created too small, it can be corrected:
[root@rac02 ~]# lvresize -L +100M /dev/datavg01/tony_test01.dbf
Extending logical volume tony_test01.dbf to 400.00 MB
Logical volume tony_test01.dbf successfully resized
(2) Activate the LV on the other node(s).
[root@rac02 etc]# vgchange -a y
18 logical volume(s) in volume group "datavg01" now active
Activating just this one LV with lvchange -a y /dev/datavg01/tony_test01.dbf would also work.
Check the LV activation state:
[root@rac02 etc]# lvscan
ACTIVE '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
ACTIVE '/dev/datavg01/votingdisk' [20.00 MB] inherit
ACTIVE '/dev/datavg01/system01.dbf' [800.00 MB] inherit
ACTIVE '/dev/datavg01/sysaux01.dbf' [800.00 MB] inherit
ACTIVE '/dev/datavg01/users01.dbf' [12.00 MB] inherit
ACTIVE '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
ACTIVE '/dev/datavg01/control01.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control02.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/control03.ctl' [32.00 MB] inherit
ACTIVE '/dev/datavg01/redo01.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo02.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/redo03.log' [20.00 MB] inherit
ACTIVE '/dev/datavg01/ocr2.dbf' [120.00 MB] inherit
ACTIVE '/dev/datavg01/votingdisk2' [120.00 MB] inherit
ACTIVE '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
ACTIVE '/dev/datavg01/tony_test01.dbf' [400.00 MB] inherit
(3) Bind the raw device (run on every node).
# /usr/bin/raw /dev/raw/raw18 /dev/datavg01/tony_test01.dbf
[root@rac01 ~]# cd /dev/mapper/
[root@rac01 mapper]# ls
control datavg01-ocr2.dbf datavg01-spfileorcl.ora datavg01-undotbs01.dbf
datavg01-control01.ctl datavg01-ocr.dbf datavg01-sysaux01.dbf datavg01-users01.dbf
datavg01-control02.ctl datavg01-redo01.log datavg01-system01.dbf datavg01-votingdisk
datavg01-control03.ctl datavg01-redo02.log datavg01-temp01.dbf datavg01-votingdisk2
datavg01-log_date01.dbf datavg01-redo03.log datavg01-tony_test01.dbf
(4) Fix permissions (run on every node).
# /bin/chmod 600 /dev/raw/raw18
# /bin/chown oracle.dba /dev/raw/raw18
[root@rac01 raw]# ls -al raw18
crw------- 1 oracle dba 162, 18 Oct 28 15:03 raw18
(5) Create the symlink (run on every node).
# ln -s /dev/raw/raw18 /u01/product/oradata/orcl/tony_test01.dbf
(6) Add the tablespace (run on node 1).
rac01$sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Oct 28 17:01:37 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> create tablespace tony_test datafile '/u01/product/oradata/orcl/tony_test01.dbf' size 200M ;
Tablespace created.
(7) As with the other LVs, add the following to /etc/rc.local:
/usr/bin/raw /dev/raw/raw18 /dev/datavg01/tony_test01.dbf
/bin/chmod 600 /dev/raw/raw18
/bin/chown oracle.dba /dev/raw/raw18
B. Adding a PV to an existing VG (when the VG is out of space)
[root@rac01 ~]# vgdisplay
--- Volume group ---
VG Name datavg01
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 34
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 18
Open LV 15
Max PV 0
Cur PV 2
Act PV 2
VG Size 19.09 GB
PE Size 4.00 MB
Total PE 4886
Alloc PE / Size 765 / 2.99 GB
Free PE / Size 4121 / 16.10 GB
VG UUID OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi
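As a cross-check on the vgdisplay output above, the reported free space is simply Free PE times PE size:

```shell
# Free PE count and PE size taken from the vgdisplay output above.
free_pe=4121
pe_size_mb=4
free_mb=$(( free_pe * pe_size_mb ))
echo "${free_mb} MB free"    # 16484 MB, i.e. about the 16.10 GB reported
```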
vgdisplay shows the total size of the VG and how much of it is already in use.
There are two ways to get more space: create an additional VG, or extend the existing one.
Here we assume the VG is out of space and a new PV must be added to the existing VG.
(1) Extend the existing VG
Create a new partition (how to add a disk to the VM is not repeated here), for example:
# fdisk /dev/sdc, producing /dev/sdc1
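The interactive fdisk dialogue can also be scripted by feeding it the keystrokes. A sketch, under the assumption that the whole of /dev/sdc becomes one primary partition; the fdisk invocation itself is left commented out because it rewrites the partition table:

```shell
# Keystrokes for fdisk: new partition (n), primary (p), number 1,
# accept the default first and last cylinders (two blank lines),
# then write the table and exit (w).
cat > /tmp/fdisk_input <<'EOF'
n
p
1


w
EOF
# fdisk /dev/sdc < /tmp/fdisk_input   # destructive: run only against the new disk
```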
[root@rac01 orcl]# pvcreate /dev/sdc1 (run on node 1)
Physical volume "/dev/sdc1" successfully created
[root@rac01 orcl]#
[root@rac01 orcl]# pvscan
PV /dev/sdb1 VG datavg01 lvm2 [9.54 GB / 8.20 GB free]
PV /dev/sdb2 VG datavg01 lvm2 [9.54 GB / 7.90 GB free]
PV /dev/sdc1 lvm2 [9.54 GB]
Total: 3 [28.63 GB] / in use: 2 [19.09 GB] / in no VG: 1 [9.54 GB]
[root@rac01 orcl]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name datavg01
PV Size 9.54 GB / not usable 1.89 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 2443
Free PE 2098
Allocated PE 345
PV UUID 8v1kbs-dPsT-X7jG-qh7p-3AzM-i12P-OhNV4U
--- Physical volume ---
PV Name /dev/sdb2
VG Name datavg01
PV Size 9.54 GB / not usable 1.92 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 2443
Free PE 2023
Allocated PE 420
PV UUID KooBSy-RpVJ-JxeY-vp2B-dOLt-UALf-CUl2nN
--- NEW Physical volume ---
PV Name /dev/sdc1
VG Name
PV Size 9.54 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS
At this point the new PV has not yet been added to the VG, so most of its attributes are empty,
and pvscan on the other nodes cannot see the new PV yet.
Add the physical volume /dev/sdc1 to the datavg01 volume group (run on node 1); /dev/sdc1 must be in a usable state:
[root@rac01 orcl]# vgextend datavg01 /dev/sdc1
Volume group "datavg01" successfully extended
Now pvdisplay shows the attributes of the new PV, and vgdisplay shows that the VG has grown
(it was about 20 GB; about 10 GB has now been added):
[root@rac01 orcl]# vgdisplay
--- Volume group ---
VG Name datavg01
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 35
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 18
Open LV 15
Max PV 0
Cur PV 3
Act PV 3
VG Size 28.63 GB
PE Size 4.00 MB
Total PE 7329
Alloc PE / Size 765 / 2.99 GB
Free PE / Size 6564 / 25.64 GB
VG UUID OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi
Note:
------------------------------------------------------------------------------
To remove a physical volume from the VG (note: the last remaining physical volume in a volume group cannot be removed):
# vgreduce datavg01 /dev/sdc1
------------------------------------------------------------------------------
Now check from the other node:
[root@rac02 etc]# pvscan (the uuid below corresponds to /dev/sdc1)
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
PV /dev/sdb1 VG datavg01 lvm2 [9.54 GB / 8.20 GB free]
PV /dev/sdb2 VG datavg01 lvm2 [9.54 GB / 7.90 GB free]
PV unknown device VG datavg01 lvm2 [9.54 GB / 9.54 GB free]
Total: 3 [28.63 GB] / in use: 3 [28.63 GB] / in no VG: 0 [0 ]
pvscan reports "PV unknown device" here, although the size is still shown.
At this point the Oracle RAC status is normal on all nodes:
rac02$crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.orcl.db application ONLINE ONLINE rac02
ora....l1.inst application ONLINE ONLINE rac01
ora....l2.inst application ONLINE ONLINE rac02
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application ONLINE ONLINE rac01
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip application ONLINE ONLINE rac01
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application ONLINE ONLINE rac02
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip application ONLINE ONLINE rac02
rac02$
rac01$crs_stop -all
[root@rac01 init.d]# ./init.crs stop all
[root@rac01 init.d]# vgchange -a n datavg01
0 logical volume(s) in volume group "datavg01" now active
[root@rac01 init.d]# vgexport datavg01
Volume group "datavg01" successfully exported
[root@rac02 lvm]# vgimport datavg01
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find all physical volumes for volume group datavg01.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find all physical volumes for volume group datavg01.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find all physical volumes for volume group datavg01.
Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
Couldn't find all physical volumes for volume group datavg01.
Volume group "datavg01" not found
Still not working. After restarting node 2, the newly added PV can be seen.
[root@rac01 init.d]# pvscan
PV /dev/sdb1 is in exported VG datavg01 [9.54 GB / 8.20 GB free]
PV /dev/sdb2 is in exported VG datavg01 [9.54 GB / 7.90 GB free]
PV /dev/sdc1 is in exported VG datavg01 [9.54 GB / 9.54 GB free]
Total: 3 [28.63 GB] / in use: 3 [28.63 GB] / in no VG: 0 [0 ]
Check the VG state on node 1 (note the extra "exported" flag):
[root@rac01 init.d]# vgscan
Reading all physical volumes. This may take a while...
Found exported volume group "datavg01" using metadata type lvm2
Run vgimport on node 2 and check with vgscan:
[root@rac02 ~]# vgimport datavg01
Volume group "datavg01" successfully imported
[root@rac02 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "datavg01" using metadata type lvm2
Checking the VG state on node 1 at the same time, the "exported" keyword is gone:
[root@rac01 init.d]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "datavg01" using metadata type lvm2
After vgimport on the other nodes, the raw device bindings, permissions and so on must be redone
by hand. Before the vgimport, the VG obviously could not be recognized on those nodes, so the
binding commands in rc.local naturally had no effect (the /dev/raw path is gone, as shown below):
[root@rac02 dev]# cd raw
-bash: cd: raw: No such file or directory
Use lvscan on every node to check whether the LVs are active, and activate the VG on every node:
[root@rac01 init.d]# vgchange -a y datavg01
18 logical volume(s) in volume group "datavg01" now active
[root@rac02 etc]# vgchange -a y datavg01
18 logical volume(s) in volume group "datavg01" now active
Re-run the commands from rc.local once by hand, and confirm that the devices under /dev/raw exist with the correct permissions.
Start the RAC services:
rac01$crs_start -all
or
[root@rac01 init.d]# ./init.crs start
Startup will be queued to init within 90 seconds.
Check the status:
rac01$crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.orcl.db application ONLINE ONLINE rac01
ora....l1.inst application ONLINE ONLINE rac01
ora....l2.inst application ONLINE ONLINE rac02
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application ONLINE ONLINE rac01
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip application ONLINE ONLINE rac01
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application ONLINE ONLINE rac02
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip application ONLINE ONLINE rac02
Appendix:
------------------------------------------------------------
Commonly used raw-device / LVM commands:
pvcreate (create a physical volume)
pvdisplay (show physical volume information)
pvscan (scan for physical volumes)
pvmove (move physical volume data)
pvmove /dev/hda1 /dev/hda2 (move the data on /dev/hda1 to /dev/hda2)
pvmove /dev/hda1 (move the data on /dev/hda1 to other physical volumes)
pvremove (remove a physical volume)
vgcreate (create a volume group)
vgdisplay (show volume group information)
vgscan (scan for volume groups)
vgextend (extend a volume group) vgextend vg0 /dev/hda2 (add physical volume /dev/hda2 to the vg0 volume group)
vgreduce (remove a physical volume from a volume group) vgreduce vg0 /dev/hda2 (remove physical volume /dev/hda2 from volume group vg0)
vgchange (activate/deactivate a volume group) vgchange -a y vg0 (activate volume group vg0) vgchange -a n vg0 (the opposite)
vgremove (remove a volume group) vgremove vg0 (remove volume group vg0)
lvcreate (create a logical volume)
lvdisplay (show logical volume information)
lvscan (scan for logical volumes)
lvextend (extend a logical volume) lvextend -L +5G /dev/vg0/data (grow the logical volume /dev/vg0/data by 5 GB; note that -L takes a size, while -l takes a number of extents)
------------------------------------------------------------
---------------------------------------------------------------
References:
http://space.itpub.net/519536/viewspace-557694
http://www.lupaworld.com/home-space-do-blog-uid-26777-id-214009.html
Install Oracle RAC 10g on Oracle Enterprise Linux using VMware Server
LVM fundamentals and the PV / VG / LV / PE / LE relationship diagram
http://hi.baidu.com/suofang/blog/item/02ce933dd837b614bba1676c.html
http://blog.csdn.net/tianlesoftware/article/details/5796962#comments
---------------------------------------------------------------
Source: ITPUB blog, link: http://blog.itpub.net/35489/viewspace-710341/ . Please credit the source when reposting.