Adding an ASM Disk Group to a RAC Database (1)
Note: in the steps below, commands prefixed with # are run as the root user, and commands prefixed with $ are run as the oracle user.
1. Preparation
The newly added disks must first be partitioned. Start by checking the disks that need attention with the fdisk -l command:
[root@jssdbn1 ~]# fdisk -l
.........................
.........................
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdg doesn't contain a valid partition table

/dev/sdf and /dev/sdg are the two newly added disks. As the output above shows, neither sdf nor sdg has been partitioned yet, so we now partition both with fdisk. The whole interactive session comes down to typing just a few characters: n, p, 1 and w:
[root@jssdbn1 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Run the same steps for sdg; the session is identical, so it is not repeated here. Also note that because these disks sit on shared storage, the partitioning only needs to be performed on one node of the cluster.
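Since every disk gets the identical keystroke sequence (n, p, 1, two defaults, w), the interactive session can also be scripted. A minimal sketch; the device names are the ones from this example, and feeding keystrokes to fdisk this way should be tried on a scratch disk before touching shared storage:

```shell
#!/bin/sh
# Emit the keystroke sequence the interactive fdisk session above uses:
# n (new partition), p (primary), 1 (partition number),
# two empty lines (accept default first/last cylinder), w (write table).
fdisk_keystrokes() {
  printf 'n\np\n1\n\n\nw\n'
}

# On the node (as root, on one node only, since the storage is shared):
#   fdisk_keystrokes | fdisk /dev/sdf
#   fdisk_keystrokes | fdisk /dev/sdg
fdisk_keystrokes
```

Where available, sfdisk or parted provide proper non-interactive interfaces and are safer for automation than piping keystrokes into fdisk.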
Once that is done, run fdisk -l again to check the current partitions:
[root@jssdbn1 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1958 15623212+ 8e Linux LVM
......................
..........................
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 1044 8385898+ 83 Linux
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 1044 8385898+ 83 Linux

Partitioning is complete. Next we need to grant the oracle user and the oinstall group the proper permissions on the raw devices. Readers who have done this before will remember the mechanism: the settings go into the /etc/udev/rules.d/60-raw.rules configuration file, as follows:
[root@jssdbn1 ~]# vi /etc/udev/rules.d/60-raw.rules
Set the contents as follows:
ACTION=="add", KERNEL=="sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1",RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1",RUN+="/bin/raw /dev/raw/raw6 %N"
KERNEL=="raw[1-6]", OWNER="oracle", GROUP="oinstall", MODE="0660"

Then restart udev and check whether the settings have taken effect:
[root@jssdbn1 ~]# start_udev
Starting udev: [ OK ]
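The ls -l listing below confirms the ownership visually; the same check can be scripted so it can run from a health-check job. A sketch that validates a single listing line, with the expected owner, group and mode taken from the rules file above:

```shell
#!/bin/sh
# Validate one line of `ls -l` output for a raw device: it must be a
# character device with mode rw-rw---- owned by oracle:oinstall.
check_raw_line() {
  # Word-split the line into mode, link count, owner, group, ...
  set -- $1
  [ "$1" = "crw-rw----" ] && [ "$3" = "oracle" ] && [ "$4" = "oinstall" ]
}

# On a live node this could be driven by the real listing, e.g.:
#   ls -l /dev/raw/raw* | while read line; do check_raw_line "$line" ...
if check_raw_line "crw-rw---- 1 oracle oinstall 162, 5 Jan 15 07:52 raw5"; then
  echo "raw5 OK"
fi
```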
[root@jssdbn1 ~]# ls /dev/raw* -l
crw------- 1 root root 162, 0 Jan 15 07:35 /dev/rawctl
/dev/raw:
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan 15 07:52 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan 15 07:52 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan 15 07:52 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan 15 07:52 raw4
crw-rw---- 1 oracle oinstall 162, 5 Jan 15 07:51 raw5
crw-rw---- 1 oracle oinstall 162, 6 Jan 15 07:51 raw6

Next, list the ASM disks that currently exist:
[root@jssdbn1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

Create the two new ASM disks as follows:
[root@jssdbn1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdf1
Marking disk "VOL3" as an ASM disk: [ OK ]
[root@jssdbn1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sdg1
Marking disk "VOL4" as an ASM disk: [ OK ]

With node 1 done, switch to node 2 and check:
[root@jssdbn2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@jssdbn2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4

As the output above shows, node 2 has correctly picked up the results of the operations performed on node 1.
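That visual comparison between the two nodes can also be done mechanically, which helps when there are more volumes or more nodes. A sketch that compares two listdisks outputs; the volume names match this example, and on real nodes each list would come from /etc/init.d/oracleasm listdisks (for the remote node, typically over ssh):

```shell
#!/bin/sh
# Succeed only if two newline-separated volume lists contain exactly
# the same names, regardless of order.
same_volumes() {
  [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$2" | sort)" ]
}

# Volume lists as reported by listdisks on each node in this example.
node1_vols="VOL1
VOL2
VOL3
VOL4"
node2_vols="VOL1
VOL2
VOL3
VOL4"

if same_volumes "$node1_vols" "$node2_vols"; then
  echo "nodes agree"
fi
```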
Source: the ITPUB blog, link: http://blog.itpub.net/7607759/viewspace-625450/. Please credit the source when republishing.