Oracle 11g RAC on RedHat 6.5 virtual machines
VMware 9.0 + RedHat 6.5 (rhel-server-6.5-x86_64) + Oracle 11gR2 RAC
It has been a while since I last followed Oracle, and the official release is already at 12c. Since most users are still running 11g RAC, today I installed the latest RedHat release, 6.5, on a virtual machine, cloned it, and installed 11g Grid Infrastructure and the database. There were plenty of stories along the way; I collected many articles from other users and recorded some key steps and error messages here for future reference.
The main question is whether to use udev or the UEK kernel during ASM setup. On the OEL kernel, one view holds:
Oracle Enterprise Linux (OEL) is a Linux distribution whose first version Oracle released in early 2006, known for its good support for Oracle software and hardware. Because Oracle offers the Unbreakable Linux enterprise support program, many people call OEL "Unbreakable Linux". In September 2010, Oracle Enterprise Linux released a new kernel, the Unbreakable Enterprise Kernel, optimized specifically for Oracle software and hardware; most notably, Oracle Database performance on OEL can reportedly improve by more than 75%.
Another view:
Oracle has changed a lot of things, and customers may have requirements about that.
Key steps:
1. Create shared disks for the virtual machines
In the VMware installation directory there is a vmware-vdiskmanager.exe utility:
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\ocr_vote.vmdk"
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\fra.vmdk"
vmware-vdiskmanager.exe -c -s 3072Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\data.vmdk"
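If you want to script the three commands above (for example, to adjust sizes later), the command lines can be generated from a name=size list. A small sketch; the helper name is mine, and nothing is executed, only printed:

```shell
# Print a vmware-vdiskmanager.exe command line for each NAME=SIZE argument.
# The target directory is passed first.
gen_vdisk_cmds() {
  local dir="$1"; shift
  local spec name size
  for spec in "$@"; do
    name=${spec%%=*}
    size=${spec#*=}
    printf 'vmware-vdiskmanager.exe -c -s %s -a lsilogic -t 2 "%s\\%s.vmdk"\n' \
      "$size" "$dir" "$name"
  done
}

gen_vdisk_cmds 'D:\vmware\redhat\sharedisk' ocr_vote=1024Mb fra=1024Mb data=3072Mb
```

Paste the printed lines into a Windows command prompt in the VMware installation directory.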
2. Add the new disks to each virtual machine, setting the bus addresses to scsi1:0, scsi2:0, and scsi3:0.
3. Open the .vmx file in each of the two VM directories and append at the end:
disk.locking="FALSE"
scsi1:0.SharedBus="Virtual"
scsi2:0.SharedBus="Virtual"
scsi3:0.SharedBus="Virtual"
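To keep the two .vmx files consistent, the edits above can be applied with a small idempotent helper. A sketch (the function names are mine; run it against each node's .vmx path while the VMs are powered off):

```shell
# Append a setting to a .vmx file only if that exact line is not already there.
add_vmx_setting() {
  local vmx="$1" line="$2"
  grep -qxF "$line" "$vmx" || echo "$line" >> "$vmx"
}

# Add the shared-bus settings for the three shared disks.
configure_shared_bus() {
  local vmx="$1"
  add_vmx_setting "$vmx" 'disk.locking="FALSE"'
  local bus
  for bus in scsi1:0 scsi2:0 scsi3:0; do
    add_vmx_setting "$vmx" "$bus.SharedBus=\"Virtual\""
  done
}
```

Usage: `configure_shared_bus /path/to/node1.vmx`, then the same for node2. Running it twice leaves the file unchanged.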
3. Create the ASM disk groups
It turns out, however, that the ASMLib packages Oracle provides are all built for the 2.6.18 kernel, i.e. the RedHat 5 kernel; there is no build for the RedHat 6 kernel.
[root@localhost ~]# uname -rm
2.6.32-431.el6.x86_64 x86_64
Everything Oracle provides is for 2.6.18:
Oracle ASMLib 2.0
Intel IA32 (x86) Architecture
Library and Tools
oracleasm-support-2.1.8-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
Drivers for kernel 2.6.18-371.3.1.el5
oracleasm-2.6.18-371.3.1.el5xen-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5debug-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5PAE-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.i686.rpm
The latest oracleasm driver only goes up to oracleasm-2.6.18-238.9.1.el5.
I had eagerly installed RHEL 6.5 to play with 11g RAC, only to find there is no ASMLib package for the RHEL 6 kernel 2.6.32; rather disappointing.
ocfs2 has no RHEL 6 package either.
Before Red Hat Enterprise Linux (RHEL) 6, Oracle always configured ASM through the ASMLib kernel support library.
ASMLib is a kernel support library, implemented as a Linux module, designed specifically for the Oracle Automatic Storage Management feature.
In May 2011, however, Oracle published a statement on ASMLib for Oracle Database, announcing that it would no longer provide ASMLib or related updates for Red Hat Enterprise Linux (RHEL) 6.
In that statement, Oracle said ASMLib updates would be delivered through the Unbreakable Linux Network (ULN) and would be available only to Oracle Linux customers. ULN serves both Oracle and Red Hat customers, but customers who want to use ASMLib must replace the Red Hat kernel with Oracle's.
Software Update Policy for ASMLib running on future releases of Red Hat Enterprise Linux
Red Hat Enterprise Linux 6 (RHEL6)
For RHEL6 or Oracle Linux 6, Oracle will only provide ASMLib software and updates when configured with the Unbreakable Enterprise Kernel (UEK). Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6 or the Red Hat compatible kernel in Oracle Linux 6. ASMLib updates will be delivered via the Unbreakable Linux Network (ULN), which is available to customers with Oracle Linux support. ULN works with both Oracle Linux and Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat kernel with UEK.
So no ASMLib package supporting the Red Hat Enterprise Linux 6 series is available from Oracle, and as the announcement above states, Oracle will not provide one for RHEL 6. In an 11gR2 RAC installation, the OCR and voting disks can no longer be placed on raw devices; only ASM can be used. So how can Oracle 11gR2 RAC be installed on the RHEL 6 series?
Use udev to create the ASM disks
What is udev?
udev is a feature of the Linux 2.6 kernel. It replaced the old devfs and has become the default device management tool on Linux. udev runs as a daemon and manages the device files under /dev by listening for uevents emitted by the kernel. Unlike earlier device management tools, udev runs in user space, not in kernel space.
Query the SCSI IDs of the shared disks:
scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
Create the configuration file:
/etc/udev/rules.d/99-oracle-asmdevices.rules
for i in b c d;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmdba\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
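The loop above shells out to scsi_id while writing the rules file. If you prefer to review the IDs first, the same rules can be generated from values collected earlier. A sketch (the helper name is mine, and the letter=ID pairs are placeholders for real scsi_id output):

```shell
# Generate udev rules for ASM disks from a list of LETTER=SCSI_ID pairs
# collected beforehand with scsi_id. The rules file path is the first argument.
gen_asm_rules() {
  local rules="$1"; shift
  : > "$rules"                      # start from an empty file
  local pair dev id
  for pair in "$@"; do
    dev=${pair%%=*}                 # disk letter, e.g. b
    id=${pair#*=}                   # its scsi_id output
    printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmdba", MODE="0660"\n' \
      "$id" "$dev" >> "$rules"
  done
}

# Usage (IDs are placeholders; substitute the scsi_id output from above):
# gen_asm_rules /etc/udev/rules.d/99-oracle-asmdevices.rules \
#   b=36000c29... c=36000c29... d=36000c29...
```

Afterwards run /sbin/start_udev as before.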
Activate the configuration:
/sbin/start_udev
With VMware you must add disk.EnableUUID = "TRUE" to the .vmx file, otherwise no UUID is returned.
Later I found that on each boot the shared storage could be mounted by only one machine at a time. It turned out that after running /sbin/start_udev, the virtual disks would vanish from fdisk -l.
On the CREATE ASM DISK GROUP screen, choose Change Discovery Path and the ASM disks will appear.
11. Consider using the UEK kernel instead of udev
2. Install the UEK kernel
UEK can be downloaded and installed from:
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora Downloads]# rpm -ivh kernel-uek-firmware-2.6.39-300.17.3.el6uek.noarch.rpm
[root@ora Downloads]# rpm -ivh kernel-uek-2.6.39-300.17.3.el6uek.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasm-support-2.1.5-1.el6.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
kernel-uek-2.6.32-431
5. Create the ASM Disk Volumes
5.1 Configure and load the ASM kernel module
[root@ora ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ora ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
5.2 Create the ASM disks
The disks need to be partitioned first, and a reboot is required after oracleasm configure -i.
[root@ora ~]# oracleasm createdisk CRSVOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk DATAVOL1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk FRAVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# oracleasm createdisk CRSVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
The oracleasm driver's kernel version must match the UEK kernel version, but I could only find packages for 2.6.39-300, while the 6.5 kernel is 2.6.32-431.
The firewall and SELinux must be disabled.
[root@ora ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
DATAVOL2
FRAVOL1
dbca uses oracleasm-discover to locate ASM disks, so run oracleasm-discover first to check whether the four disks just created can be found.
[root@ora ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRSVOL1 [2096753 blocks (1073537536 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL1 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL2 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:FRAVOL1 [62912480 blocks (32211189760 bytes), maxio 512]
Use dmesg and strace, provided by Linux, to pinpoint the problem.
[root@dga01 ~]# dmesg
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
sd 2:0:1:0: [sdb] Attached SCSI disk
sd 3:0:0:0: [sde] Cache data unavailable
sd 3:0:0:0: [sde] Assuming drive cache: write through
sd 3:0:0:0: [sde] Attached SCSI disk
sd 2:0:3:0: [sdd] Cache data unavailable
sd 2:0:3:0: [sdd] Assuming drive cache: write through
sd 2:0:3:0: [sdd] Attached SCSI disk
EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null)
dracut: Mounted root filesystem /dev/sda3
dracut: Loading SELinux policy
type=1404 audit(1363446394.257:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 9 users, 12 roles, 3762 types, 187 bools, 1 sens, 1024 cats
SELinux: 81 classes, 250819 rules
SELinux: Permission audit_access in class file not defined in policy.
SELinux: Permission audit_access in class dir not defined in policy.
SELinux: Permission execmod in class dir not defined in policy.
SELinux: Permission audit_access in class lnk_file not defined in policy.
SELinux: Permission open in class lnk_file not defined in policy.
SELinux: Permission execmod in class lnk_file not defined in policy.
SELinux: Permission audit_access in class chr_file not defined in policy.
SELinux: Permission audit_access in class blk_file not defined in policy.
SELinux: Permission execmod in class blk_file not defined in policy.
SELinux: Permission audit_access in class sock_file not defined in policy.
SELinux: Permission execmod in class sock_file not defined in policy.
SELinux: Permission audit_access in class fifo_file not defined in policy.
SELinux: Permission execmod in class fifo_file not defined in policy.
SELinux: Permission syslog in class capability2 not defined in policy.
SELinux: the above unknown classes and permissions will be allowed
[root@dga01 ~]# strace -f -o asm.out /usr/sbin/oracleasm createdisk OCR /dev/sde1
3714 brk(0) = 0x1677000
3714 brk(0x1698000) = 0x1698000
3714 stat("/dev/sde1", {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 open("/dev/sde1", O_RDWR) = 4
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied)
3714 write(2, "oracleasm-instantiate-disk: ", 28) = 28
3714 write(2, "Unable to create ASM disk \"OCR\":"..., 51) = 51
3714 close(4)
The log mentions SELinux repeatedly, and the strace shows: 3714 mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied)
The problem is probably SELinux or the firewall; check the status of both.
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 ~]# getenforce
Enforcing
Both iptables and SELinux are enabled; try disabling these two services.
Stop the Linux firewall:
[root@dga01 ~]# iptables -F
[root@dga01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@dga01 log]# chkconfig iptables off
Disable the SELinux service:
[root@dga01 ~]# setenforce 0
Edit the SELinux configuration file and change the setting to SELINUX=disabled:
[root@dga01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
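The manual edit above can also be done non-interactively. A sketch that rewrites the SELINUX= line in a config file passed as an argument (the function name is mine; point it at /etc/selinux/config on a real system and reboot for the change to fully apply):

```shell
# Set SELINUX=disabled in an selinux-style config file.
# Only the SELINUX= line is touched; SELINUXTYPE= is left alone.
disable_selinux_cfg() {
  local cfg="$1"
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```

Usage: `disable_selinux_cfg /etc/selinux/config`, together with `setenforce 0` for the running system.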
Check the firewall and SELinux again:
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 log]# getenforce
Permissive
Recreate the ASM disk; this time it completes without problems:
[root@dga01 ~]# oracleasm createdisk OCR /dev/sde1
Writing disk header: done
Instantiating disk: done
4. Set up the hosts file
# Public Network eth0
192.168.189.128 node1.rac.com node1
192.168.189.129 node2.rac.com node2
# Virtual IP
192.168.189.126 node1-vip.rac.com node1-vip
192.168.189.127 node2-vip.rac.com node2-vip
# Private Network eth1
192.168.189.130 node1-priv.rac.com node1-priv
192.168.189.131 node2-priv.rac.com node2-priv
#SCAN IP
192.168.189.125 scan.rac.com scan
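RAC is sensitive to hosts-file mistakes, so a quick duplicate-IP check is worthwhile before continuing. A sketch (the helper name is mine):

```shell
# Print any IP address that appears more than once in a hosts-style file.
# Comment lines and blank lines are ignored; an empty result means no duplicates.
check_hosts_dups() {
  local f="$1"
  awk '!/^[[:space:]]*#/ && NF { print $1 }' "$f" | sort | uniq -d
}
```

Usage: `check_hosts_dups /etc/hosts` on both nodes; expect no output.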
5. Check the required packages
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
numactl-devel-0.9.8.x86_64
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
2.2.2 Install the 64-bit packages in one go from the yum repository
# yum -y install binutils* compat* elfutils* gcc* glibc* libaio* libgcc* libstdc* numactl* sysstat* unixODBC* make* ksh*
2.2.3 Upload and install the 32-bit packages
rpm -ivh unixODBC* compat* glibc* lib*
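Rather than eyeballing the long package list, you can diff it against rpm's own output. A sketch (the helper name is mine; it compares plain package names, not versions):

```shell
# Print each required package name (given as arguments) that does not appear
# in the installed-package list read from stdin, one name per line.
missing_pkgs() {
  local installed req
  installed=$(cat)
  for req in "$@"; do
    printf '%s\n' "$installed" | grep -Fqx "$req" || echo "$req"
  done
}

# Example: feed it the installed names, then the required list:
# rpm -qa --qf '%{NAME}\n' | missing_pkgs binutils gcc ksh libaio sysstat unixODBC
```

An empty result means everything on the list is installed.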
During installation you can use the RHEL 6.5 DVD as a local yum repository.
This applies to both 32-bit and 64-bit RHEL 6.5.
I am using a VMware virtual machine; with the DVD set to connected, the system mounts it at "/media/RHEL_6.5 x86_64 Disc 1" after login.
Unmount it first:
umount /media/RHEL_6.5\ x86_64\ Disc\ 1/
Create the mount directory:
mkdir /mnt/cdrom
Then mount the DVD on /mnt/cdrom:
mount /dev/cdrom /mnt/cdrom
If you are using an iso file, upload it to the server first, for example to /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso, then mount the DVD iso with:
mount -o loop /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso /mnt/cdrom
Generate the yum repo file:
cat > /etc/yum.repos.d/rhel6.repo <<EOF
[rhel6]
name=rhel6
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF
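If you maintain repo files for more than one mount point, the heredoc above can be wrapped in a small helper. A sketch (the function name is mine):

```shell
# Write a minimal yum repo file for a local baseurl.
write_repo() {
  local file="$1" name="$2" baseurl="$3"
  cat > "$file" <<REPO
[$name]
name=$name
baseurl=$baseurl
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
REPO
}
```

Usage: `write_repo /etc/yum.repos.d/rhel6.repo rhel6 file:///mnt/cdrom`.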
Point yum's media handling at the mount point:
sed -i "s#remote = url + '/' + relative#remote = '/mnt/cdrom' + '/' + relative#g" /usr/lib/python2.6/site-packages/yum/yumRepo.py
Import the rpm signing key:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Clean the yum cache:
yum clean all
If the following error appears:
[root@localhost ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Existing lock /var/run/yum.pid: another copy is running as pid 2267.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: PackageKit
Memory : 48 M RSS (365 MB VSZ)
Started: Sat Nov 23 01:28:11 2013 - 10:00 ago
State : Sleeping, pid: 2267
First kill the yum process:
kill -9 2267
and then run:
yum clean all
With that, the local repository setup is complete.
6. Configure passwordless ssh login
ssh-keygen -t rsa
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.128
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.129
7. su - root
xhost +
Otherwise the following error appears:
08-31PM. Please wait ...[grid@node1 grid]$ No protocol specified
Exception in thread "main" java.lang.NoClassDefFoundError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at java.awt.Toolkit$2.run(Toolkit.java:821)
at java.security.AccessController.doPrivileged(Native Method)
at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:804)
at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source)
at com.jgoodies.looks.LookUtils.(Unknown Source)
at com.jgoodies.looks.plastic.PlasticLookAndFeel.(PlasticLookAndFeel.java:122)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:242)
at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783)
at javax.swing.UIManager.setLookAndFeel(UIManager.java:480)
at oracle.install.commons.util.Application.startup(Application.java:758)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
at oracle.install.ivw.crs.driver.CRSInstaller.startup(CRSInstaller.java:96)
at oracle.install.ivw.crs.driver.CRSInstaller.main(CRSInstaller.java:103)
8. su - grid
Install Grid Infrastructure.
9. The installer checks that the eth interface names are identical on both machines.
The eth interfaces must match on both sides; for example, rename eth2 to eth1.
If they differ, edit /etc/udev/rules.d/70-persistent-net.rules.
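The interface rename can be scripted instead of editing the rules file by hand. A sketch (the helper name is mine; point it at /etc/udev/rules.d/70-persistent-net.rules and reboot afterwards):

```shell
# Rename an interface in a persistent-net-rules-style file, e.g. eth2 -> eth1.
# Only the NAME="..." assignment is changed; MAC addresses are left alone.
rename_iface() {
  local rules="$1" from="$2" to="$3"
  sed -i "s/NAME=\"$from\"/NAME=\"$to\"/" "$rules"
}
```

Usage: `rename_iface /etc/udev/rules.d/70-persistent-net.rules eth2 eth1`.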
10. Pre-installation verification script
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2 -verbose
Check the cluster status:
crs_stat -t
asmca
dbca
netca
select instance_name,status from v$instance;
18. Database administration
- Starting and stopping RAC
Oracle RAC starts automatically at boot; for maintenance, use the following commands:
- Stop:
crsctl stop cluster        stops the cluster services on the local node
crsctl stop cluster -all   stops the cluster services on all nodes
- Start:
crsctl start cluster       starts the cluster services on the local node
crsctl start cluster -all  starts the cluster services on all nodes
Note: the commands above must be run as root.
- Checking RAC health
Run as the grid user:
[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
- Check database instance status
[oracle@rac1 ~]$ srvctl status database -d orcl
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
- Check node application status and configuration
[oracle@rac1 ~]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
[oracle@rac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac1
VIP exists.: /rac1-vip/10.160.1.106/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/10.160.1.107/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
/oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- View the database configuration
[oracle@rac1 ~]$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl.lottemart.cn
Oracle home: /oracle/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +ORCL_DATA/orcl/spfileorcl.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: rac1,rac2
Disk Groups: DATA,FLASH
Services:
Database is enabled
Database is administrator managed
- Check ASM status and configuration
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac1,rac2
[oracle@rac1 ~]$ srvctl config asm -a
ASM home: /oracle/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
- Check TNS listener status and configuration
[oracle@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
[oracle@rac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- Check SCAN status and configuration
[oracle@rac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[oracle@rac1 ~]$ srvctl config scan
SCAN name: rac-cluster-scan.rac.localdomain, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-cluster-scan.rac.localdomain
- Check VIP status and configuration
[oracle@rac1 ~]$ srvctl status vip -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
[oracle@rac1 ~]$ srvctl status vip -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
[oracle@rac1 ~]$ srvctl config vip -n rac1
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.11.14/255.255.255.0/eth0
[oracle@rac1 ~]$ srvctl config vip -n rac2
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.15/255.255.255.0/eth0
7.1 Verifying All Cluster Database Information
[grid@11grac1 grid]$ crsctl status resource
NAME=ora.11grac1.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.11grac2.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.CRS.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.DATA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.FRA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER.lsnr TYPE=ora.listener.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER_SCAN1.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.LISTENER_SCAN2.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.asm TYPE=ora.asm.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.eons TYPE=ora.eons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.gsd TYPE=ora.gsd.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.net1.network TYPE=ora.network.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.oc4j TYPE=ora.oc4j.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.ons TYPE=ora.ons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.racdb.db TYPE=ora.database.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.registry.acfs TYPE=ora.registry.acfs.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.scan1.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.scan2.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac1
[grid@11grac1 ~]$ cluvfy comp scan -verbose
Verifying scan
Checking Single Client Access Name (SCAN)...
SCAN VIP name Node Running? ListenerName Port Running?
---------------- ------------ ------------ ------------ ------------ ------------
scanvip 11grac2 true LISTENER 1521 true
Checking name resolution setup for "scanvip"...
SCAN Name IP Address Status Comment
------------ ------------------------ ------------------------ ----------
scanvip 192.168.60.15 passed
scanvip 192.168.60.16 passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
7.2 Verifying Clock Synchronization across the Cluster Nodes
[grid@11grac1 grid]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name                             Status
------------------------------------  ------------------------
11grac1                               passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name                             State
------------------------------------  ------------------------
11grac1                               Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name     Time Offset               Status
------------  ------------------------  ------------------------
11grac1       0.0                       passed
Time offset is within the specified limits on the following set of nodes: "[11grac1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
7.3 Check the Health of the Cluster
[grid@11grac1 grid]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
7.4 Check All Database Status
[oracle@11grac1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node 11grac1
Instance racdb2 is running on node 11grac2
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node 11grac2
7.5 Check Node Application Status/Configuration
[oracle@11grac1 ~]$ srvctl status nodeapps
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
Network is enabled
Network is running on node: 11grac1
Network is running on node: 11grac2
GSD is enabled
GSD is running on node: 11grac1
GSD is running on node: 11grac2
ONS is enabled
ONS daemon is running on node: 11grac1
ONS daemon is running on node: 11grac2
eONS is enabled
eONS daemon is running on node: 11grac1
eONS daemon is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config nodeapps
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 17385, multicast IP address 234.137.253.253, listening port 2016
7.6 List All Configured Database
[oracle@11grac1 ~]$ srvctl config database
racdb
[oracle@11grac1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /11grac/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA,FRA
Services:
Database is enabled
Database is administrator managed
7.7 Check ASM Status/Configuration
[oracle@11grac1 ~]$ srvctl status asm
ASM is running on 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config asm -a
ASM home: /11grac/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
7.8 Check TNS Listener Status/Configuration
[oracle@11grac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /11grac/app/11.2.0/grid on node(s) 11grac2,11grac1
End points: TCP:1521
7.9 Check SCAN Status/Configuration
[oracle@11grac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node 11grac2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl config scan
SCAN name: scanvip, Network: 1/192.168.60.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scanvip.rac.com/192.168.60.15
SCAN VIP name: scan2, IP: /scanvip.rac.com/192.168.60.16
7.10 Check VIP Status/Configuration
[oracle@11grac1 ~]$ srvctl status vip -n 11grac1
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
[oracle@11grac1 ~]$ srvctl status vip -n 11grac2
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config vip -n 11grac1
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
[oracle@11grac1 ~]$ srvctl config vip -n 11grac2
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
7.11 Configuration for Node Applications (VIP, GSD, ONS, Listener)
[oracle@11grac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /11grac/app/11.2.0/grid on node(s) 11grac2,11grac1
End points: TCP:1521
7.12 Check All Services
[oracle@11grac1 ~]$ su - grid -c "crs_stat -t -v"
Password:
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    11grac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    11grac1
ora....ac1.gsd application    0/5    0/0    ONLINE    ONLINE    11grac1
ora....ac1.ons application    0/3    0/0    ONLINE    ONLINE    11grac1
ora....ac1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    11grac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    11grac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    11grac2
ora....ac2.gsd application    0/5    0/0    ONLINE    ONLINE    11grac2
ora....ac2.ons application    0/3    0/0    ONLINE    ONLINE    11grac2
ora....ac2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    11grac2
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    11grac1
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    11grac1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    11grac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    11grac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    11grac2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    11grac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    11grac1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    11grac1
ora.gsd        ora.gsd.type   0/5    0/     ONLINE    ONLINE    11grac1
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    11grac1
ora.oc4j       ora.oc4j.type  0/5    0/0    ONLINE    ONLINE    11grac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    11grac1
ora.racdb.db   ora....se.type 0/2    0/1    ONLINE    ONLINE    11grac1
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    11grac1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    11grac2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    11grac1
7.13 Starting the Oracle Clusterware Stack
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl stop cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster -n 11grac1 11grac2
很久沒有關注ORACLE了,官方發行版已經更新到12c了,考慮到目前大部分使用者還在使用11g rac,今天在虛擬機器上安裝紅帽最新版redhat6.5,並克隆,安裝11G grid以及資料庫,安裝過程中有很多故事,蒐集了很多其他網友的文章,記錄了一些關鍵步驟與報錯資訊,以備以後之用。
主要是ASM過程中使用udev或者UEK核心,關於OEL核心,有的觀點認為
全稱為Oracle Enterprise Linux,簡稱OEL,是Oracle公司在2006年初發布第一個版本,Linux發行版本之一,以對Oracle軟體和硬體支援較好見長。OEL,一般人通常叫法為Oracle企業版Linux,由於Oracle提供的企業級支援計劃UBL(Unbreakable Linux),所以很多人都稱OEL為堅不可摧Linux。2010年9月,Oracle Enterprise Linux釋出新版核心——Unbreakable Enterprise Kernel,專門針對Oracle 軟體與硬體進行最佳化,最重要的是Oracle資料庫跑在OEL上效能可以提升超過75%
有的認為
oracle改了很多東西,客戶可能會有要求。
關鍵步驟:
1.在虛擬機器下建立共享盤
在虛擬機器軟體的安裝目錄下,有個vmware-vdiskmanager.exe檔案
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\ocr_vote.vmdk
vmware-vdiskmanager.exe -c -s 1024Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\fra.vmdk
vmware-vdiskmanager.exe -c -s 3072Mb -a lsilogic -t 2 "D:\vmware\redhat\sharedisk\data.vmdk
2.在虛擬機器中新增硬碟,匯流排分別設成scsi1:0,scsi2:0,scsi3:0
3.分別開啟兩臺虛擬機器目錄中的vmx檔案,在最後一行新增:
disk.locking="FALSE"
scsi1:0.SharedBus="Virtual"
scsi2:0.SharedBus="Virtual"
scsi3:0.SharedBus="Virtual"
3.建立ASM磁碟組
但是發現ORACLE官方提供的asmlib都是基於2.6.18核心,也就是redhat5核心的,沒有提供redhat6核心的版本。
[root@localhost ~]# uname -rm
2.6.32-431.el6.x86_64 x86_64
ORACLE提供的都是2.6.18的
Oracle ASMLib 2.0
Intel IA32 (x86) Architecture
Library and Tools
oracleasm-support-2.1.8-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
Drivers for kernel 2.6.18-371.3.1.el5
oracleasm-2.6.18-371.3.1.el5xen-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5debug-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5PAE-2.0.5-1.el5.i686.rpm
oracleasm-2.6.18-371.3.1.el5-2.0.5-1.el5.i686.rpm
oracleasm最新支援到oracleasm-2.6.18-238.9.1.el5
興致勃勃裝個rhel 6.5玩11gRAC,卻找不到rhel6 核心2.6.32的asm包,有些掃興.
ocfs2也沒有支援rhel6的包
在Red Hat Enterprise Linux (RHEL)6以前,Oracle均是使用ASMLib這個核心支援庫配置ASM。
ASMLIB是一種基於Linux module,專門為Oracle Automatic Storage Management特性設計的核心支援庫(kernel support library)。
但是,在2011年5月,甲骨文發表了一份Oracle資料庫ASMLib的宣告,宣告中稱甲骨文將不再提供Red Hat Enterprise Linux (RHEL)6的ASMLib和相關更新。
甲骨文在這份宣告中表示,ASMLib更新將透過Unbreakable Linux Network (ULN)來發布,並僅對Oracle Linux客戶開放。ULN雖然為甲骨文和紅帽的客戶服務,但如果客戶想要使用ASMlib,就必須使用Oracle的kernel來替換掉紅帽的。
Software Update Policy for ASMLib running on future releases of Red Hat Enterprise Linux
Red Hat Enterprise Linux 6 (RHEL6)
For RHEL6 or Oracle Linux 6, Oracle will only provide ASMLib software and updates when configured Unbreakable Enterprise Kernel (UEK). Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6 or the Red Hat compatible kernel in Oracle Linux 6. ASMLib updates will be delivered via Unbreakable Linux Network(ULN) which is available to customers with Oracle Linux support. ULN works with both Oracle Linux or Red Hat Linux installations, but ASMlib usage will require replacing any Red Hat kernel with UEK
Oracle 的 ASMLib 已經沒有看到支援 Redhat Enterprise Linux 6 系列的 ASMLlib 包了, 前面的新聞也說,Oracle 不會為Redhat Linux 6 系統提供此包。 11gR2 的 RAC 安裝中 OCR, Voting Disk 已經不能使用RAW,只能使用ASM。 那麼Redhat Enterprise Linux 6 系列,如何才能安裝Oracle 11gR2 RAC 呢?
用UDEV來建立ASM磁碟組
udev簡介
什麼是 udev
udev 是 Linux2.6 核心裡的一個功能,它替代了原來的 devfs,成為當前 Linux 預設的裝置管理工具。udev 以守護程式的形式執行,透過偵聽核心發出來的 uevent 來管理 /dev目錄下的裝置檔案。不像之前的裝置管理工具,udev 在使用者空間 (user space) 執行,而不在核心空間 (kernel space) 執行。
/dev/sdd
scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
建立配置檔案
/etc/udev/rules.d/99-oracle-asmdevices.rules
for i in b c d;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmdba\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
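As a sketch of what the loop produces, the snippet below stubs out /sbin/scsi_id (the WWID shown is invented for illustration) so the generated rules can be previewed on any machine:

```shell
#!/bin/sh
# Sketch: the same rule-generation loop, with /sbin/scsi_id stubbed out so the
# generated udev rules can be inspected anywhere. The WWID is made up; on a
# real node it comes from scsi_id against the actual device.
scsi_id_stub() { echo "36000c29f0123456789abcdef012345$1"; }

rules=""
for i in b c d; do
  id=$(scsi_id_stub "$i")
  rules="${rules}KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"$id\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmdba\", MODE=\"0660\"
"
done
printf '%s' "$rules"
```

Each disk gets one rule: udev runs the PROGRAM for every sd* device, and when the output matches RESULT it creates /dev/asm-diskN owned by grid:asmdba with mode 0660.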
Activate the rules:
/sbin/start_udev
With VMware you must also add disk.EnableUUID = "TRUE" to the vmx file, otherwise scsi_id returns no UUID.
I later found that at boot the shared storage could only be mounted by one machine at a time.
It turned out that after /sbin/start_udev runs,
the virtual disks disappear from fdisk -l.
On the CREATE ASM DISK GROUP screen, choose Change Discovery Path and the ASM disks will then show up.
11. Alternatively, skip udev and use the UEK kernel
2. Installing the UEK kernel
UEK can be downloaded and installed from:
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora ~]# wget
[root@ora Downloads]# rpm -ivh kernel-uek-firmware-2.6.39-300.17.3.el6uek.noarch.rpm
[root@ora Downloads]# rpm -ivh kernel-uek-2.6.39-300.17.3.el6uek.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasm-support-2.1.5-1.el6.x86_64.rpm
[root@ora Downloads]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
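The failure mode to watch for here is a version mismatch between the running kernel and the kernel the oracleasm driver was built against. A minimal sketch of that check, using the version strings from this install (the kernel_matches helper is ours, not an Oracle tool):

```shell
#!/bin/sh
# Sketch: the oracleasm kernel driver must match the running kernel exactly.
# Compare uname -r against the kernel version baked into the package name.
kernel_matches() {
  # $1 = running kernel (uname -r), $2 = kernel version in the rpm name
  case "$1" in
    "$2"*) return 0 ;;
    *)     return 1 ;;
  esac
}

running="2.6.32-431.el6.x86_64"   # what RHEL 6.5 actually runs
pkgver="2.6.39-300"               # the only UEK packages found above
if kernel_matches "$running" "$pkgver"; then
  echo "driver matches running kernel"
else
  echo "MISMATCH: boot the matching kernel-uek before oracleasm can load"
fi
```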
(Note: the running RHEL 6.5 kernel is 2.6.32-431, so a driver build matching that version is what is actually needed.)
5. Creating the ASM disk volumes
5.1 Configure and load the ASM kernel module
[root@ora ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ora ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
5.2 Create the ASM disks
The disks must be partitioned first, and a reboot is required after oracleasm configure -i.
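The partitioning step can be scripted rather than typed into fdisk interactively. This is a sketch, not the exact commands from the original install: by default it only prints what it would do (DRYRUN=1), and the piped answer sequence (n, p, 1, two defaults, w) creates one whole-disk partition per device:

```shell
#!/bin/sh
# Sketch: create a single whole-disk partition on each shared disk before
# running oracleasm createdisk. DRYRUN=1 (the default) only prints the plan,
# so this can be previewed safely; unset it on the real node, as root.
DRYRUN=${DRYRUN:-1}
part_disk() {
  if [ "$DRYRUN" = "1" ]; then
    echo "would run: fdisk $1 (n, p, 1, defaults, w)"
  else
    # n=new partition, p=primary, 1=first, blank=default start/end, w=write
    printf 'n\np\n1\n\n\nw\n' | fdisk "$1"
  fi
}
for d in /dev/sdb /dev/sdc /dev/sdd; do
  part_disk "$d"
done
```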
[root@ora ~]# oracleasm createdisk CRSVOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk DATAVOL1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@ora ~]# oracleasm createdisk FRAVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# oracleasm createdisk CRSVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
The running kernel version must match the UEK build the driver was made for, but I could only find 2.6.39-300 packages, while the RHEL 6.5 kernel is 2.6.32-431.
The firewall and SELinux must also be disabled.
[root@ora ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
DATAVOL2
FRAVOL1
dbca uses oracleasm-discover to find the ASM disks, so first run oracleasm-discover and check that the four disks just created are found:
[root@ora ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRSVOL1 [2096753 blocks (1073537536 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL1 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL2 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:FRAVOL1 [62912480 blocks (32211189760 bytes), maxio 512]
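As a quick sanity check on discover output: the block count is in 512-byte sectors, so blocks × 512 should equal the byte count. A small sketch that parses one of the lines above (the sed patterns are ours, for illustration):

```shell
#!/bin/sh
# Sketch: verify that an oracleasm-discover size line is self-consistent,
# i.e. the byte count equals the 512-byte block count times 512.
line='Discovered disk: ORCL:CRSVOL1 [2096753 blocks (1073537536 bytes), maxio 512]'
blocks=$(printf '%s' "$line" | sed 's/.*\[\([0-9]*\) blocks.*/\1/')
bytes=$(printf '%s' "$line"  | sed 's/.*(\([0-9]*\) bytes.*/\1/')
if [ $((blocks * 512)) -eq "$bytes" ]; then
  echo "size fields consistent: $blocks blocks = $bytes bytes"
else
  echo "size fields inconsistent"
fi
```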
Use the dmesg and strace tools that Linux provides to track the problem down.
[root@dga01 ~]# dmesg
sd 2:0:1:0: [sdb] Cache data unavailable
sd 2:0:1:0: [sdb] Assuming drive cache: write through
sd 2:0:1:0: [sdb] Attached SCSI disk
sd 3:0:0:0: [sde] Cache data unavailable
sd 3:0:0:0: [sde] Assuming drive cache: write through
sd 3:0:0:0: [sde] Attached SCSI disk
sd 2:0:3:0: [sdd] Cache data unavailable
sd 2:0:3:0: [sdd] Assuming drive cache: write through
sd 2:0:3:0: [sdd] Attached SCSI disk
EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null)
dracut: Mounted root filesystem /dev/sda3
dracut: Loading SELinux policy
type=1404 audit(1363446394.257:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 2048 avtab hash slots, 250819 rules.
SELinux: 9 users, 12 roles, 3762 types, 187 bools, 1 sens, 1024 cats
SELinux: 81 classes, 250819 rules
SELinux: Permission audit_access in class file not defined in policy.
SELinux: Permission audit_access in class dir not defined in policy.
SELinux: Permission execmod in class dir not defined in policy.
SELinux: Permission audit_access in class lnk_file not defined in policy.
SELinux: Permission open in class lnk_file not defined in policy.
SELinux: Permission execmod in class lnk_file not defined in policy.
SELinux: Permission audit_access in class chr_file not defined in policy.
SELinux: Permission audit_access in class blk_file not defined in policy.
SELinux: Permission execmod in class blk_file not defined in policy.
SELinux: Permission audit_access in class sock_file not defined in policy.
SELinux: Permission execmod in class sock_file not defined in policy.
SELinux: Permission audit_access in class fifo_file not defined in policy.
SELinux: Permission execmod in class fifo_file not defined in policy.
SELinux: Permission syslog in class capability2 not defined in policy.
SELinux: the above unknown classes and permissions will be allowed
[root@dga01 ~]# strace -f -o asm.out /usr/sbin/oracleasm createdisk OCR /dev/sde1
3714 brk(0) = 0x1677000
3714 brk(0x1698000) = 0x1698000
3714 stat("/dev/sde1", {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 open("/dev/sde1", O_RDWR) = 4
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 fstat(4, {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 65), ...}) = 0
3714 mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied)
3714 write(2, "oracleasm-instantiate-disk: ", 28) = 28
3714 write(2, "Unable to create ASM disk \"OCR\":"..., 51) = 51
3714 close(4)
The log repeatedly mentions SELinux, and the strace shows: 3714 mknod("/dev/oracleasm/disks/OCR", S_IFBLK|0600, makedev(8, 65)) = -1 EACCES (Permission denied)
So the problem is probably SELinux or the firewall; check the state of both.
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 ~]# getenforce
Enforcing
Both iptables and SELinux are enabled; try turning these two services off.
Stop the Linux firewall:
[root@dga01 ~]# iptables -F
[root@dga01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@dga01 log]# chkconfig iptables off
Turn off SELinux:
[root@dga01 ~]# setenforce 0
Edit the SELinux config file and set SELINUX=disabled:
[root@dga01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
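The same edit can be made idempotently with sed instead of by hand. A sketch, run here against a temporary copy so it is safe to try; on a real node you would point CFG at /etc/selinux/config:

```shell
#!/bin/sh
# Sketch: set SELINUX=disabled with sed, demonstrated on a temp copy of the
# config. The pattern only touches the SELINUX= line, not SELINUXTYPE=.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"
```

Because the anchor is `^SELINUX=`, SELINUXTYPE=targeted is left untouched, and re-running the command is harmless.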
Check the firewall and SELinux again:
[root@dga01 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@dga01 log]# getenforce
Permissive
Recreate the ASM disk, which now completes cleanly:
[root@dga01 ~]# oracleasm createdisk OCR /dev/sde1
Writing disk header: done
Instantiating disk: done
4. Set up the hosts file
# Public Network eth0
192.168.189.128 node1.rac.com node1
192.168.189.129 node2.rac.com node2
# Virtual IP
192.168.189.126 node1-vip.rac.com node1-vip
192.168.189.127 node2-vip.rac.com node2-vip
# Private Network eth1
192.168.189.130 node1-priv.rac.com node1-priv
192.168.189.131 node2-priv.rac.com node2-priv
#SCAN IP
192.168.189.125 scan.rac.com scan
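A small format check catches slips like a full-width dot inside an IP address. This sketch validates "IP FQDN shortname" triples against a temporary copy (comment lines would need to be filtered out first; on a node you would point HOSTS at the real /etc/hosts):

```shell
#!/bin/sh
# Sketch: check that each hosts entry is "dotted-decimal-IP FQDN shortname".
# Demonstrated on a temp file with a few of the entries from above.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.189.128 node1.rac.com node1
192.168.189.129 node2.rac.com node2
192.168.189.125 scan.rac.com scan
EOF
# Flag any line whose first field is not a dotted-decimal IP, or that does
# not have exactly three fields.
bad=$(awk '$1 !~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ || NF != 3 {print NR": "$0}' "$HOSTS")
if [ -z "$bad" ]; then
  echo "hosts entries look OK"
else
  printf 'bad lines:\n%s\n' "$bad"
fi
```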
5. Check the required packages
binutils-2.17.50.0.6 compat-libstdc++-33-3.2.3 compat-libstdc++-33-3.2.3 (32 bit) elfutils-libelf-0.125 elfutils-libelf-devel-0.125 gcc-4.1.2 gcc-c++-4.1.2 glibc-2.5-24 glibc-2.5-24 (32 bit) glibc-common-2.5 glibc-devel-2.5 glibc-devel-2.5 (32 bit) glibc-headers-2.5 ksh-20060214 libaio-0.3.106 libaio-0.3.106 (32 bit) libaio-devel-0.3.106 libaio-devel-0.3.106 (32 bit) libgcc-4.1.2 libgcc-4.1.2 (32 bit)
libstdc++-4.1.2 libstdc++-4.1.2 (32 bit) libstdc++-devel 4.1.2 make-3.81 numactl-devel-0.9.8.x86_64 sysstat-7.0.2 unixODBC-2.2.11 unixODBC-2.2.11 (32 bit) unixODBC-devel-2.2.11 unixODBC-devel-2.2.11 (32 bit)
2.2.2 Install the x86_64 packages in one shot from the yum repository
# yum -y install binutils* compat* elfutils* gcc* glibc* libaio* libgcc* libstdc* numactl* sysstat* unixODBC* make* ksh*
2.2.3 Upload and install the 32-bit packages
rpm -ivh unixODBC* compat* glibc* lib*
During installation the RHEL 6.5 DVD can serve as a local YUM repository.
This applies to both the 32-bit and 64-bit RHEL 6.5 systems.
I am using a VMware virtual machine; with the DVD set to connected, the system auto-mounts it at "/media/RHEL_6.5 x86_64 Disc 1" after login.
Unmount it first:
umount /media/RHEL_6.5\ x86_64\ Disc\ 1/
Create a mount point:
mkdir /mnt/cdrom
Then mount the DVD on /mnt/cdrom:
mount /dev/cdrom /mnt/cdrom
If you use an iso file instead, upload it to the server first (for example to /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso) and loop-mount it:
mount -o loop /data/src/rhel/6/rhel-server-6.5-x86_64-dvd.iso /mnt/cdrom
Generate the YUM repo file:
cat > /etc/yum.repos.d/rhel6.repo <<EOF
[rhel6]
name=rhel6
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF
Patch yum's repository handling so that package paths resolve under /mnt/cdrom (a workaround commonly used with DVD media repositories):
sed -i "s#remote = url + '/' + relative#remote = '/mnt/cdrom' + '/' + relative#g" /usr/lib/python2.6/site-packages/yum/yumRepo.py
Import the rpm GPG signing key:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Clear the yum cache:
yum clean all
If you see the following error:
[root@localhost ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Existing lock /var/run/yum.pid: another copy is running as pid 2267.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: PackageKit
Memory : 48 M RSS (365 MB VSZ)
Started: Sat Nov 23 01:28:11 2013 - 10:00 ago
State : Sleeping, pid: 2267
kill the yum process holding the lock first:
kill -9 2267
then run again:
yum clean all
That completes the local repository setup.
6. Configure passwordless ssh login
ssh-keygen -t rsa
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.128
ssh-copy-id -i /home/grid/.ssh/id_rsa.pub grid@192.168.189.129
7. As root, allow local X connections before launching the installer:
su - root
xhost +
Otherwise you will hit an error like this:
08-31PM. Please wait ...[grid@node1 grid]$ No protocol specified
Exception in thread "main" java.lang.NoClassDefFoundError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at java.awt.Toolkit$2.run(Toolkit.java:821)
at java.security.AccessController.doPrivileged(Native Method)
at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:804)
at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source)
at com.jgoodies.looks.LookUtils.<clinit>(Unknown Source)
at com.jgoodies.looks.plastic.PlasticLookAndFeel.<clinit>(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:242)
at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783)
at javax.swing.UIManager.setLookAndFeel(UIManager.java:480)
at oracle.install.commons.util.Application.startup(Application.java:758)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
at oracle.install.ivw.crs.driver.CRSInstaller.startup(CRSInstaller.java:96)
at oracle.install.ivw.crs.driver.CRSInstaller.main(CRSInstaller.java:103)
8. Switch to the grid user and install Grid Infrastructure:
su - grid
9. The installer checks that the Ethernet interface names are identical on both machines.
The interfaces must match on both nodes, so rename eth2 to eth1 if needed.
If the names differ, edit /etc/udev/rules.d/70-persistent-net.rules.
10. Run the pre-installation verification script:
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2 -verbose
Check the cluster status:
crs_stat -t
asmca
dbca
netca
select instance_name,status from v$instance;
18. Database administration
- Starting and stopping RAC
Oracle RAC starts automatically at boot; when maintenance is needed, use the following commands:
- Stop:
crsctl stop cluster        (stop the cluster services on this node)
crsctl stop cluster -all   (stop the cluster services on all nodes)
- Start:
crsctl start cluster       (start the cluster services on this node)
crsctl start cluster -all  (start the cluster services on all nodes)
Note: the commands above must be run as root.
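To keep the local-node vs all-nodes variants straight, a tiny wrapper can build the crsctl command line. This sketch (the cluster_cmd helper is ours, not an Oracle tool) only echoes the command, so it can be previewed without a cluster:

```shell
#!/bin/sh
# Sketch: build the crsctl start/stop command, making the
# local-node vs all-nodes distinction explicit. Echoes instead of executing.
cluster_cmd() {
  # $1 = start|stop, $2 = "all" for every node, anything else for local only
  case "$1" in
    start|stop) ;;
    *) echo "usage: cluster_cmd start|stop [all]" >&2; return 1 ;;
  esac
  if [ "$2" = "all" ]; then
    echo "crsctl $1 cluster -all"
  else
    echo "crsctl $1 cluster"
  fi
}
cluster_cmd start all
cluster_cmd stop local
```

On a real node you would run the printed command as root, with the full path to the Grid home's bin/crsctl.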
- Checking RAC health
Run as the grid user:
[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
- Checking database instance status
[oracle@rac1 ~]$ srvctl status database -d orcl
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
- Checking node application status and configuration
[oracle@rac1 ~]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
[oracle@rac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac1
VIP exists.: /rac1-vip/10.160.1.106/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/10.160.1.107/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
/oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- Checking the database configuration
[oracle@rac1 ~]$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl.lottemart.cn
Oracle home: /oracle/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +ORCL_DATA/orcl/spfileorcl.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: rac1,rac2
Disk Groups: DATA,FLASH
Services:
Database is enabled
Database is administrator managed
- Checking ASM status and configuration
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac1,rac2
[oracle@rac1 ~]$ srvctl config asm -a
ASM home: /oracle/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
- Checking TNS listener status and configuration
[oracle@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
[oracle@rac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /oracle/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
- Checking SCAN status and configuration
[oracle@rac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[oracle@rac1 ~]$ srvctl config scan
SCAN name: rac-cluster-scan.rac.localdomain, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-cluster-scan.rac.localdomain
- Checking VIP status and configuration
[oracle@rac1 ~]$ srvctl status vip -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
[oracle@rac1 ~]$ srvctl status vip -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
[oracle@rac1 ~]$ srvctl config vip -n rac1
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.11.14/255.255.255.0/eth0
[oracle@rac1 ~]$ srvctl config vip -n rac2
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.11.15/255.255.255.0/eth0
7.1 Verifying Cluster Database Information
[grid@11grac1 grid]$ crsctl status resource
NAME=ora.11grac1.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.11grac2.vip TYPE=ora.cluster_vip_net1.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.CRS.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.DATA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.FRA.dg TYPE=ora.diskgroup.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER.lsnr TYPE=ora.listener.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.LISTENER_SCAN1.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.LISTENER_SCAN2.lsnr TYPE=ora.scan_listener.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.asm TYPE=ora.asm.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.eons TYPE=ora.eons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.gsd TYPE=ora.gsd.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.net1.network TYPE=ora.network.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.oc4j TYPE=ora.oc4j.type TARGET=ONLINE STATE=ONLINE on 11grac1
NAME=ora.ons TYPE=ora.ons.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.racdb.db TYPE=ora.database.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.registry.acfs TYPE=ora.registry.acfs.type TARGET=ONLINE , ONLINE STATE=ONLINE on 11grac1, ONLINE on 11grac2
NAME=ora.scan1.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac2
NAME=ora.scan2.vip TYPE=ora.scan_vip.type TARGET=ONLINE STATE=ONLINE on 11grac1
[grid@11grac1 ~]$ cluvfy comp scan -verbose
Verifying scan
Checking Single Client Access Name (SCAN)...
SCAN VIP name Node Running? ListenerName Port Running?
---------------- ------------ ------------ ------------ ------------ ------------
scanvip 11grac2 true LISTENER 1521 true
Checking name resolution setup for "scanvip"...
SCAN Name IP Address Status Comment
------------ ------------------------ ------------------------ ----------
scanvip 192.168.60.15 passed
scanvip 192.168.60.16 passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
7.2 Verifying Clock Synchronization across the Cluster Nodes
[grid@11grac1 grid]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
------------------------------------ ------------------------
11grac1 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
------------------------------------ ------------------------
11grac1 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status
------------ ------------------------ ------------------------
11grac1 0.0 passed
Time offset is within the specified limits on the following set of nodes: "[11grac1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
7.3 Check the Health of the Cluster
[grid@11grac1 grid]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
7.4 Check All Database Status
[oracle@11grac1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node 11grac1
Instance racdb2 is running on node 11grac2
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node 11grac2
7.5 Check Node Application Status/Configuration
[oracle@11grac1 ~]$ srvctl status nodeapps
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
Network is enabled
Network is running on node: 11grac1
Network is running on node: 11grac2
GSD is enabled
GSD is running on node: 11grac1
GSD is running on node: 11grac2
ONS is enabled
ONS daemon is running on node: 11grac1
ONS daemon is running on node: 11grac2
eONS is enabled
eONS daemon is running on node: 11grac1
eONS daemon is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config nodeapps
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 17385, multicast IP address 234.137.253.253, listening port 2016
7.6 List All Configured Database
[oracle@11grac1 ~]$ srvctl config database
racdb
[oracle@11grac1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /11grac/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA,FRA
Services:
Database is enabled
Database is administrator managed
7.7 Check ASM Status/Configuration
[oracle@11grac1 ~]$ srvctl status asm
ASM is running on 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config asm -a
ASM home: /11grac/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
7.8 Check TNS Listener Status/Configuration
[oracle@11grac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): 11grac1,11grac2
[oracle@11grac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
7.9 Check SCAN Status/Configuration
[oracle@11grac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node 11grac2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node 11grac1
[oracle@11grac1 ~]$ srvctl config scan
SCAN name: scanvip, Network: 1/192.168.60.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scanvip.rac.com/192.168.60.15
SCAN VIP name: scan2, IP: /scanvip.rac.com/192.168.60.16
7.10 Check VIP Status/Configuration
[oracle@11grac1 ~]$ srvctl status vip -n 11grac1
VIP oravip1 is enabled
VIP oravip1 is running on node: 11grac1
[oracle@11grac1 ~]$ srvctl status vip -n 11grac2
VIP oravip2 is enabled
VIP oravip2 is running on node: 11grac2
[oracle@11grac1 ~]$ srvctl config vip -n 11grac1
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
[oracle@11grac1 ~]$ srvctl config vip -n 11grac2
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
7.11 Configuration for Node Applications (VIP, GSD, ONS, Listener)
[oracle@11grac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:11grac1
VIP exists.: /oravip1/192.168.60.13/255.255.255.0/eth0
VIP exists.:11grac2
VIP exists.: /oravip2/192.168.60.14/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /11grac/app/11.2.0/grid on node(s) 11grac2,11grac1
End points: TCP:1521
7.12 Check All Services
[oracle@11grac1 ~]$ su - grid -c "crs_stat -t -v"
Password:
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE 11grac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE 11grac1
ora....ac1.gsd application 0/5 0/0 ONLINE ONLINE 11grac1
ora....ac1.ons application 0/3 0/0 ONLINE ONLINE 11grac1
ora....ac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE 11grac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE 11grac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE 11grac2
ora....ac2.gsd application 0/5 0/0 ONLINE ONLINE 11grac2
ora....ac2.ons application 0/3 0/0 ONLINE ONLINE 11grac2
ora....ac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE 11grac2
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE 11grac1
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE 11grac1
ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE 11grac1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE 11grac1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE 11grac2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE 11grac1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE 11grac1
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE 11grac1
ora.gsd ora.gsd.type 0/5 0/ ONLINE ONLINE 11grac1
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE 11grac1
ora.oc4j ora.oc4j.type 0/5 0/0 ONLINE ONLINE 11grac1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE 11grac1
ora.racdb.db ora....se.type 0/2 0/1 ONLINE ONLINE 11grac1
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE 11grac1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE 11grac2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE 11grac1
7.13 Starting and Stopping the Oracle Clusterware Stack
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl stop cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster (-all)
[root@11grac1 ~]# /11grac/app/11.2.0/grid/bin/crsctl start cluster -n 11grac1 11grac2
From the ITPUB blog. Original link: http://blog.itpub.net/29819001/viewspace-1272614/. Please credit the source when reposting.