Adding and Dropping Disks in an ASM Disk Group

The ASM disk group on sunha6 is running short of space while utilization on sunha1 is quite low, so we plan to drop two disks from sunha1_n and add them to sunha6.
Note: we use NORMAL redundancy, so disks must always be added or dropped in pairs, one from each failgroup. Either drop two, add two, or do nothing at all, to avoid trouble.
First, check the ASM disks on sunha1:
[code]
col name for a10
col group_number for 99999
col disk_number for 99999
col state for a10
col failgroup for a15
col path for a50
set linesize 132
select name, group_number , disk_number ,state ,failgroup ,path from v$asm_disk ;
2013-03-07
NAME GROUP_NUMBER DISK_NUMBER STATE FAILGROUP PATH
--------------- ------------ ----------- ---------- ---------- --------------------------------------------------
0 56 NORMAL /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s1
SUNHA1_N_0005 1 5 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6
SUNHA1_N_0004 1 4 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800B0557B32B813E011d0s6
SUNHA1_N_0003 1 3 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6
SUNHA1_N_0000 1 0 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6
SUNHA1_N_0002 1 2 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008D4BB66773C4DF11d0s6
SUNHA1_N_0001 1 1 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
[/code]
For ease of management, we decided to drop the two disks SUNHA1_N_0005 and SUNHA1_N_0000,
one from each of the failgroups SUNHA2_F2 and SUNHA1_F1,
so that the two failgroups keep an equal number of disks.
Run the drop:
[code]
SYS AS SYSDBA at +ASM > alter diskgroup sunha1_n drop disk SUNHA1_N_0005 ;
Diskgroup altered.
SYS AS SYSDBA at +ASM > alter diskgroup sunha1_n drop disk SUNHA1_N_0000 ;
Diskgroup altered.
SYS AS SYSDBA at +ASM > select * from v$asm_operation;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
1 REBAL RUN 1 1 37 105457 1680 62
SYS AS SYSDBA at +ASM > select name, group_number , disk_number ,state ,failgroup ,path from v$asm_disk ;
NAME GROUP_NUMBER DISK_NUMBER STATE FAILGROUP PATH
--------------- ------------ ----------- ---------------- ---------- --------------------------------------------------
0 56 NORMAL /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s1
SUNHA1_N_0005 1 5 DROPPING SUNHA2_F2 /dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6
SUNHA1_N_0004 1 4 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800B0557B32B813E011d0s6
SUNHA1_N_0003 1 3 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6
SUNHA1_N_0000 1 0 DROPPING SUNHA1_F1 /dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6
SUNHA1_N_0002 1 2 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008D4BB66773C4DF11d0s6
SUNHA1_N_0001 1 1 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
62 rows selected.
[/code]
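Incidentally, the two drops above could also be issued as a single statement, so that ASM runs one rebalance pass instead of two (a sketch, using the same disk names):

```sql
-- Drop both disks in one ALTER; ASM rebalances only once
alter diskgroup sunha1_n drop disk SUNHA1_N_0005, SUNHA1_N_0000;
```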
The state of SUNHA1_N_0005 and SUNHA1_N_0000 has changed to DROPPING,
and v$asm_operation shows the disk group rebalancing:
[code]
SYS AS SYSDBA at +ASM > select * from v$asm_operation;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
1 REBAL RUN 1 1 5274 106309 3131 32
SYS AS SYSDBA at +ASM > /
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
1 REBAL RUN 1 1 81424 109699 3483 8
[/code]
Only about 8 minutes of rebalancing left.
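The EST_MINUTES figure is simply the remaining work divided by the current rate, which you can compute yourself from v$asm_operation (a sketch; the columns are the ones shown in the output above):

```sql
-- Remaining minutes = (EST_WORK - SOFAR) / EST_RATE
-- With the last sample above: (109699 - 81424) / 3483, about 8
select group_number,
       (est_work - sofar) / est_rate as est_minutes_left
  from v$asm_operation
 where operation = 'REBAL';
```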
One important caveat: before dropping disks, always confirm that the space remaining in the disk group can hold the data currently stored on the disks being dropped.
[code]
[oracle@sunha2.pc.com ~]$ /data/oracle/crontab/sunha1/ORACLE_MONITOR/check_asm.sh
NAME STATE MOUNT_STAT PATH TOTAL(GB) USED%
-------------------- ---------- -------------- -------------------------------------------------- --------- ----------
SUNHA1_N_0000 NORMAL CACHED /dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6 99.75 50.54
SUNHA1_N_0001 NORMAL CACHED /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6 99.75 50.55
SUNHA1_N_0002 NORMAL CACHED /dev/rdsk/c4t6006016010511E008D4BB66773C4DF11d0s6 99.75 50.58
SUNHA1_N_0003 NORMAL CACHED /dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6 99.75 50.55
SUNHA1_N_0004 NORMAL CACHED /dev/rdsk/c4t6006016023912800B0557B32B813E011d0s6 99.75 50.59
SUNHA1_N_0005 NORMAL CACHED /dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6 99.75 50.53
[oracle@sunha2.pc.com ~]$
[/code]
Before the drop there were six disks, each about 50% used. We had already cleaned up and reclaimed space beforehand, so usage was down to roughly 32% by the time of the drop; after dropping the two disks, overall usage across the disk group sits at about 48%:
[code]
[oracle@sunha2.pc.com ~]$ /data/oracle/crontab/sunha1/ORACLE_MONITOR/check_asm.sh
NAME STATE MOUNT_STAT PATH TOTAL(GB) USED%
-------------------- ---------- -------------- -------------------------------------------------- --------- ----------
SUNHA1_N_0000 DROPPING CACHED /dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6 99.75 6.76
SUNHA1_N_0001 NORMAL CACHED /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6 99.75 48.59
SUNHA1_N_0002 NORMAL CACHED /dev/rdsk/c4t6006016010511E008D4BB66773C4DF11d0s6 99.75 48.58
SUNHA1_N_0003 NORMAL CACHED /dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6 99.75 48.59
SUNHA1_N_0004 NORMAL CACHED /dev/rdsk/c4t6006016023912800B0557B32B813E011d0s6 99.75 48.58
SUNHA1_N_0005 DROPPING CACHED /dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6 99.75 6.76
[/code]
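One way to run this check at the disk group level rather than per disk is v$asm_diskgroup, whose USABLE_FILE_MB column already accounts for normal-redundancy mirroring (a sketch; these columns are standard in 10g and later):

```sql
-- FREE_MB must comfortably exceed the used space on the two disks
-- being dropped; USABLE_FILE_MB is the mirror-aware figure.
select name, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
  from v$asm_diskgroup
 where name = 'SUNHA1_N';
```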
Finally, check whether the rebalance has finished. Remember: the rebalance must be fully complete before you proceed with any further operations, or you risk damaging the disk group.
[code]
SYS AS SYSDBA at +ASM > select * from v$asm_operation ;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
1 REBAL RUN 1 1 111058 111985 3682 0
SYS AS SYSDBA at +ASM > /
no rows selected
SYS AS SYSDBA at +ASM >
SYS AS SYSDBA at +ASM > select name, group_number , disk_number ,state ,failgroup ,path from v$asm_disk ;
NAME GROUP_NUMBER DISK_NUMBER STATE FAILGROUP PATH
--------------- ------------ ----------- ---------------- ---------- --------------------------------------------------
0 56 NORMAL /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s1
0 57 NORMAL /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6
0 58 NORMAL /dev/rdsk/c4t6006016023912800ECE98D798FB1DF11d0s1
SUNHA1_N_0004 1 4 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800B0557B32B813E011d0s6
SUNHA1_N_0003 1 3 NORMAL SUNHA2_F2 /dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6
SUNHA1_N_0002 1 2 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008D4BB66773C4DF11d0s6
SUNHA1_N_0001 1 1 NORMAL SUNHA1_F1 /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
[/code]
v$asm_operation returns no rows, so the rebalance is done,
and SUNHA1_N_0005 and SUNHA1_N_0000 are no longer in the sunha1_n disk group.
The two disks can now be added to sunha6.
When adding, likewise add an equal number of disks to each of the two failgroups.
[code]
[oracle@sunha6 ~]$ cd /dev/rdsk
[oracle@sunha6 rdsk]$ ls /dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6
/dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6
[oracle@sunha6 rdsk]$ ls /dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6
/dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6
[oracle@sunha6 rdsk]$
export ORACLE_SID=+ASM
SYS AS SYSDBA at +ASM > col name for a10
SYS AS SYSDBA at +ASM > col group_number for 99999
SYS AS SYSDBA at +ASM > col disk_number for 99999
SYS AS SYSDBA at +ASM > col state for a10
SYS AS SYSDBA at +ASM > col failgroup for a15
SYS AS SYSDBA at +ASM > col path for a50
SYS AS SYSDBA at +ASM >
SYS AS SYSDBA at +ASM > set linesize 132
SYS AS SYSDBA at +ASM > col name for a20
SYS AS SYSDBA at +ASM > /
NAME GROUP_NUMBER STATE FAILGROUP PATH
-------------------- ------------ ---------- --------------- --------------------------------------------------
SUNHA6_N_0005 1 NORMAL SUNHA6_F1 /dev/rdsk/c4t6006016010511E002C5FFF791CB9DF11d0s6
SUNHA6_N_0007 1 NORMAL SUNHA6_F1 /dev/rdsk/c4t6006016010511E002A5FFF791CB9DF11d0s6
SUNHA6_N_0006 1 NORMAL SUNHA6_F1 /dev/rdsk/c4t6006016010511E002B5FFF791CB9DF11d0s6
SUNHA6_N_0004 1 NORMAL SUNHA6_F1 /dev/rdsk/c4t6006016010511E007020339AC0A5DF11d0s6
SUNHA6_N_0009 1 NORMAL SUNHA6_F2 /dev/rdsk/c4t6006016023912800C8CB861C1FB9DF11d0s6
SUNHA6_N_0011 1 NORMAL SUNHA6_F2 /dev/rdsk/c4t6006016023912800C9CB861C1FB9DF11d0s6
SUNHA6_N_0010 1 NORMAL SUNHA6_F2 /dev/rdsk/c4t6006016023912800CACB861C1FB9DF11d0s6
SUNHA6_N_0008 1 NORMAL SUNHA6_F2 /dev/rdsk/c4t6006016023912800ECE98D798FB1DF11d0s6
[/code]
Note: our storage is split across two separate arrays,
and each array provides disks to only one failgroup. Only with this layout can the database keep running when one storage array fails.
Keep this firmly in mind.
[code]
SYS AS SYSDBA at +ASM >
SYS AS SYSDBA at +ASM > alter diskgroup sunha6_n
2 add failgroup SUNHA6_F1 disk '/dev/rdsk/c4t6006016010511E008F4BB66773C4DF11d0s6' name SUNHA6_N_0003
3 add failgroup SUNHA6_F2 disk '/dev/rdsk/c4t6006016023912800B1557B32B813E011d0s6' name SUNHA6_N_0012;
Diskgroup altered.
SYS AS SYSDBA at +ASM > select * from v$asm_operation ;
GROUP_NUMBER OPERATION STATE POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- -----------
1 REBAL RUN 1 1 122 207681 1560 133
[/code]
That completes the disk addition.
Once the rebalance finishes, we are done.
One more point: we ran this rebalance with only a single slave process; adding a POWER N clause makes N processes rebalance in parallel.
Setting it too high during the day causes heavy load, so we just let it run slowly.
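If a maintenance window opens up, the power of an in-flight rebalance can also be changed with an explicit REBALANCE clause (a sketch; valid power values range from 0 to 11 on 10g):

```sql
-- Raise rebalance parallelism for the current operation
alter diskgroup sunha6_n rebalance power 4;
```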
Source: ITPUB blog, http://blog.itpub.net/133735/viewspace-755458/. Please credit the source when reprinting; unauthorized reproduction may incur legal liability.