Adding an ASM Disk Group to a RAC Database (1)
Note: in the steps below, commands prefixed with # are run as the root user, and commands prefixed with $ are run as the oracle user.
1. Preparation
The newly added disks must be partitioned first. Begin by listing the disks that need attention with fdisk -l:
[root@jssdbn1 ~]# fdisk -l
.........................
.........................
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdg doesn't contain a valid partition table

/dev/sdf and /dev/sdg are the two newly added disks. As the output above shows, neither sdf nor sdg has been partitioned yet, so we now partition both with fdisk. The operation goes as follows (essentially, all you have to type is the characters n, p, 1, and w):
[root@jssdbn1 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Give sdg the same treatment; it is not repeated here. Note also that because these disks sit on shared storage, the partitioning only needs to be performed on one node of the cluster.
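Incidentally, the same keystroke sequence can be fed to fdisk non-interactively. The one-liner below is a sketch that is not in the original article, but it should create the identical single full-size partition on sdg (the blank lines in the printf string accept the default first and last cylinders):

[root@jssdbn1 ~]# printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdg

One caveat: even though the storage is shared, the other node's kernel may keep its old in-memory partition table. If fdisk -l on node 2 does not show sdf1 and sdg1 afterwards, running partprobe /dev/sdf and partprobe /dev/sdg there should make it re-read the tables.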
Once the operation is complete, run fdisk -l again to inspect the current partitions:
[root@jssdbn1 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1958 15623212+ 8e Linux LVM
......................
..........................
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 1044 8385898+ 83 Linux
Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 1044 8385898+ 83 Linux

Partitioning is done. Next we need to give the oracle user and the oinstall group the proper permissions on the raw devices. Anyone who has been through this before will remember it: we make these settings through the /etc/udev/rules.d/60-raw.rules configuration file, as follows:
[root@jssdbn1 ~]# vi /etc/udev/rules.d/60-raw.rules
Set the contents as follows:
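# (Explanatory comments added for clarity; they are not in the original
# article. Lines beginning with # are ignored in udev rules files.)
# Each ACTION=="add" rule binds one newly detected partition to a raw device
# node by invoking /bin/raw; %N expands to a node for the matched device.
# The final rule sets the owner, group, and mode of the bound raw devices.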
ACTION=="add", KERNEL=="sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1",RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1",RUN+="/bin/raw /dev/raw/raw6 %N"
KERNEL=="raw[1-6]", OWNER="oracle", GROUP="oinstall", MODE="0660"然後重新啟動udev,再檢視設定是否已生效:
[root@jssdbn1 ~]# start_udev
Starting udev: [ OK ]
[root@jssdbn1 ~]# ls /dev/raw* -l
crw------- 1 root root 162, 0 Jan 15 07:35 /dev/rawctl
/dev/raw:
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan 15 07:52 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan 15 07:52 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan 15 07:52 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan 15 07:52 raw4
crw-rw---- 1 oracle oinstall 162, 5 Jan 15 07:51 raw5
crw-rw---- 1 oracle oinstall 162, 6 Jan 15 07:51 raw6

Check the ASM disks that currently exist:
[root@jssdbn1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

Create the two new disks as follows:
[root@jssdbn1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdf1
Marking disk "VOL3" as an ASM disk: [ OK ]
[root@jssdbn1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sdg1
Marking disk "VOL4" as an ASM disk: [ OK ]節點1操作完成後,轉到節點2上檢視:
[root@jssdbn2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@jssdbn2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4

The output above shows that node 2 has correctly synchronized the results of the operations performed on node 1.
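Both nodes can now see VOL3 and VOL4, which completes the preparation. As a sketch of where this is heading, the new disks would then be put to use from the ASM instance as the oracle user; the instance name +ASM1 follows the usual RAC convention, and the diskgroup names DATA and DATA2 below are illustrative assumptions, not taken from the original article:

[oracle@jssdbn1 ~]$ export ORACLE_SID=+ASM1
[oracle@jssdbn1 ~]$ sqlplus / as sysdba
SQL> -- the new ASMLib disks should be visible with HEADER_STATUS = PROVISIONED
SQL> select name, path, header_status from v$asm_disk;
SQL> -- either build a new diskgroup from them ...
SQL> create diskgroup DATA2 external redundancy disk 'ORCL:VOL3', 'ORCL:VOL4';
SQL> -- ... or extend an existing diskgroup instead:
SQL> -- alter diskgroup DATA add disk 'ORCL:VOL3', 'ORCL:VOL4';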
From the ITPUB blog: http://blog.itpub.net/7607759/viewspace-625450/ (please cite the source when reprinting).