ASM Disk Switch
The plan for this week is the preparation work for the ASM disk switch; the LUN carving was already finished last week.
The sunha5_n disk group contains two failure groups, sunha5_f1 and sunha5_f2, which were originally laid out across two separate storage arrays.
Some time ago disk space ran short, and a disk added as a stopgap broke this layout, which made maintenance harder. The purpose of this operation is to restore the original arrangement:
drop disk sunha5_n_0000 and add a new disk to replace it.
The drop itself was easy:
1* select name, failgroup, path from v$asm_disk where name is not null order by name
SYS AS SYSDBA at +ASM > /

NAME                 FAILGROUP       PATH
-------------------- --------------- --------------------------------------------------
SUNHA5_N_0000        SUNHA5_F2       /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
SUNHA5_N_0001        SUNHA5_F2       /dev/rdsk/c4t6006016023912800BA383E8BACBADF11d0s6
SUNHA5_N_0002        SUNHA5_F2       /dev/rdsk/c4t6006016023912800B02840888FB1DF11d0s6
SUNHA5_N_0003        SUNHA5_F2       /dev/rdsk/c4t60060160239128008E54346EACBADF11d0s6
SUNHA5_N_0004        SUNHA5_F1       /dev/rdsk/c4t6006016010511E00128706D3AABADF11d0s6
SUNHA5_N_0005        SUNHA5_F1       /dev/rdsk/c4t6006016010511E002D5FFF791CB9DF11d0s6
SUNHA5_N_0006        SUNHA5_F1       /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6
SUNHA5_N_0007        SUNHA5_F1       /dev/rdsk/c4t6006016010511E00E4F60988C0A5DF11d0s6
SUNHA5_N_0008        SUNHA5_F2       /dev/rdsk/c4t6006016023912800CBCB861C1FB9DF11d0s6
SUNHA5_N_0009        SUNHA5_F1       /dev/rdsk/c4t6006016010511E008E4BB66773C4DF11d0s6

SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n drop disk SUNHA5_N_0000 ;

Diskgroup altered.
Adding the new disk:
SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n add failgroup sunha5_f2 disk '/dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6';
alter diskgroup sunha5_n add failgroup sunha5_f2 disk '/dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15010: name is already used by an existing ASM disk


SYS AS SYSDBA at +ASM > edit
Wrote file afiedt.buf
11 | "afiedt.buf" 2 DD£?110 ×?·? |

alter diskgroup sunha5_n add failgroup sunha5_f2 disk '/dev/rdsk/c4t6006016023912800AF557B32B813E011d0s6'
/
~
The system returned an error and the disk could not be added. Where was the problem?
The error says that the ASM disk name being assigned to the newly added disk is already in use by an existing ASM disk. At the time we could not tell what the problem was, so we decided to roll back first.
SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n undrop disks;

Diskgroup altered.

1* select name,failgroup,path from v$asm_disk order by failgroup ,name
SYS AS SYSDBA at +ASM > /

NAME                 FAILGROUP       PATH
-------------------- --------------- --------------------------------------------------
SUNHA5_N_0004        SUNHA5_F1       /dev/rdsk/c4t6006016010511E00128706D3AABADF11d0s6
SUNHA5_N_0005        SUNHA5_F1       /dev/rdsk/c4t6006016010511E002D5FFF791CB9DF11d0s6
SUNHA5_N_0006        SUNHA5_F1       /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6
SUNHA5_N_0007        SUNHA5_F1       /dev/rdsk/c4t6006016010511E00E4F60988C0A5DF11d0s6
SUNHA5_N_0009        SUNHA5_F1       /dev/rdsk/c4t6006016010511E008E4BB66773C4DF11d0s6
SUNHA5_N_0000        SUNHA5_F2       /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
SUNHA5_N_0001        SUNHA5_F2       /dev/rdsk/c4t6006016023912800BA383E8BACBADF11d0s6
SUNHA5_N_0002        SUNHA5_F2       /dev/rdsk/c4t6006016023912800B02840888FB1DF11d0s6
SUNHA5_N_0003        SUNHA5_F2       /dev/rdsk/c4t60060160239128008E54346EACBADF11d0s6
SUNHA5_N_0008        SUNHA5_F2       /dev/rdsk/c4t6006016023912800CBCB861C1FB9DF11d0s6

The disk is back.
After double-checking, we confirmed that the disk being added was a freshly carved LUN that had never been used, and the permissions were fine, so the problem had to be on the ASM side.
SYS AS SYSDBA at +ASM > select * from v$asm_operation;

GROUP_NUMBER OPERATION  STATE    POWER      ACTUAL     SOFAR      EST_WORK   EST_RATE   EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
           2 REBAL      RUN               1          1      23154      95589       1470          49
It turned out that after the drop, ASM automatically started a rebalance. The drop command issued earlier had not actually removed the disk: its data first has to be migrated to the other disks before the disk is really gone. Meanwhile, when ASM names a newly added disk it always starts from the smallest unused disk name,
so it tried to hand SUNHA5_N_0000 to the new disk while the old disk had not yet been cleared from the cache, which caused the conflict.
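In hindsight, a quick look at v$asm_disk before re-adding would have made this obvious, because the old disk was still sitting there in DROPPING state. A minimal check along these lines (the query is our own, not part of the original session; the columns are standard v$asm_disk columns) would have flagged the name as still in use:

-- any disk still being dropped, or not yet a plain group member, is suspect
select name, state, header_status, path
  from v$asm_disk
 where state <> 'NORMAL'
    or header_status <> 'MEMBER';
-- SUNHA5_N_0000 showing STATE = DROPPING means its name is still taken.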
Once the cause was clear, the fix was straightforward. To save rebalance time we added the new disk first and only then dropped the old one, this time giving the new disk an explicit name and using REBALANCE WAIT; the drawback is that the current command window blocks until the rebalance finishes.
SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n
  add failgroup sunha5_f2 disk '/dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6'
  name 'SUNHA5_N_0010' rebalance wait
/
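As a side note, and not what we did here: ASM also allows the add and the drop to be combined into a single ALTER DISKGROUP statement, so both changes are covered by one rebalance instead of two. A sketch, reusing the same path and disk names as above:

-- add the new disk and drop the old one in one statement: a single rebalance
alter diskgroup sunha5_n
  add failgroup sunha5_f2 disk '/dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6' name 'SUNHA5_N_0010'
  drop disk SUNHA5_N_0000
  rebalance power 5;

Without a WAIT keyword the statement returns immediately and the rebalance continues in the background.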
In another window, drop disk SUNHA5_N_0000:
SYS AS SYSDBA at +ASM > col name for a20
SYS AS SYSDBA at +ASM > col path for a50
SYS AS SYSDBA at +ASM > col failgroup for a10
SYS AS SYSDBA at +ASM > select name ,failgroup ,path from v$asm_disk where name is not null order by failgroup ,name;

NAME                 FAILGROUP  PATH
-------------------- ---------- --------------------------------------------------
SUNHA5_N_0004        SUNHA5_F1  /dev/rdsk/c4t6006016010511E00128706D3AABADF11d0s6
SUNHA5_N_0005        SUNHA5_F1  /dev/rdsk/c4t6006016010511E002D5FFF791CB9DF11d0s6
SUNHA5_N_0006        SUNHA5_F1  /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6
SUNHA5_N_0007        SUNHA5_F1  /dev/rdsk/c4t6006016010511E00E4F60988C0A5DF11d0s6
SUNHA5_N_0009        SUNHA5_F1  /dev/rdsk/c4t6006016010511E008E4BB66773C4DF11d0s6
SUNHA5_N_0000        SUNHA5_F2  /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6
SUNHA5_N_0001        SUNHA5_F2  /dev/rdsk/c4t6006016023912800BA383E8BACBADF11d0s6
SUNHA5_N_0002        SUNHA5_F2  /dev/rdsk/c4t6006016023912800B02840888FB1DF11d0s6
SUNHA5_N_0003        SUNHA5_F2  /dev/rdsk/c4t60060160239128008E54346EACBADF11d0s6
SUNHA5_N_0008        SUNHA5_F2  /dev/rdsk/c4t6006016023912800CBCB861C1FB9DF11d0s6
SUNHA5_N_0010        SUNHA5_F2  /dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6

The new disk has been added.

SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n drop disk SUNHA5_N_0000;

Diskgroup altered.

The old disk is dropped.

SYS AS SYSDBA at +ASM > select * from v$asm_operation ;

GROUP_NUMBER OPERATION  STATE    POWER      ACTUAL     SOFAR      EST_WORK   EST_RATE   EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
           1 REBAL      RUN               1          1       4802     121171       1850          62

SYS AS SYSDBA at +ASM > alter diskgroup sunha5_n rebalance power 5 ;

Diskgroup altered.

Parallel rebalance.

SYS AS SYSDBA at +ASM > select * from v$asm_operation ;

GROUP_NUMBER OPERATION  STATE    POWER      ACTUAL     SOFAR      EST_WORK   EST_RATE   EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
           1 REBAL      RUN               5          5         72      86286       1993
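The drop is only really finished once this second rebalance completes. To confirm it afterwards, one can poll v$asm_operation until it returns no rows and then check that the old LUN's header has been released; these follow-up queries are our own sketch, not captured in the original session:

-- no rows selected here means no rebalance is running any more
select group_number, operation, state, est_minutes from v$asm_operation;

-- once the drop has completed, the old LUN is no longer a member and reports FORMER
select path, header_status, state
  from v$asm_disk
 where path = '/dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6';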
While this was running, our ASM disk group check script successfully picked up the abnormal ASM disk state; its output is below, and a sketch of an equivalent query follows the listing.
NAME STATE MOUNT_STAT PATH TOTAL(GB) USED%
-------------------- ---------- -------------- -------------------------------------------------- --------- ----------
SUNHA5_N_0000 DROPPING CACHED /dev/rdsk/c4t6006016010511E008C4BB66773C4DF11d0s6 99.75 36.8
SUNHA5_N_0001 NORMAL CACHED /dev/rdsk/c4t6006016023912800BA383E8BACBADF11d0s6 99.75 80.49
SUNHA5_N_0002 NORMAL CACHED /dev/rdsk/c4t6006016023912800B02840888FB1DF11d0s6 99.75 80.49
SUNHA5_N_0003 NORMAL CACHED /dev/rdsk/c4t60060160239128008E54346EACBADF11d0s6 99.75 80.5
SUNHA5_N_0004 NORMAL CACHED /dev/rdsk/c4t6006016010511E00128706D3AABADF11d0s6 99.75 80.49
SUNHA5_N_0005 NORMAL CACHED /dev/rdsk/c4t6006016010511E002D5FFF791CB9DF11d0s6 99.75 80.49
SUNHA5_N_0006 NORMAL CACHED /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6 99.75 80.49
SUNHA5_N_0007 NORMAL CACHED /dev/rdsk/c4t6006016010511E00E4F60988C0A5DF11d0s6 99.75 80.49
SUNHA5_N_0008 NORMAL CACHED /dev/rdsk/c4t6006016023912800CBCB861C1FB9DF11d0s6 99.75 80.49
SUNHA5_N_0009 NORMAL CACHED /dev/rdsk/c4t6006016010511E008E4BB66773C4DF11d0s6 99.75 80.49
SUNHA5_N_0010 NORMAL CACHED /dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6 99.75 43.69
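The listing above comes from our check script, which is not reproduced in this post. A query roughly like the following gives a similar report; the GB conversion and used-percentage arithmetic are our own reconstruction, while the columns come straight from v$asm_disk:

col name for a20
col path for a50
-- TOTAL(GB) converts TOTAL_MB to gigabytes; USED% is (total - free) / total
select name,
       state,
       mount_status                                    as mount_stat,
       path,
       round(total_mb / 1024, 2)                       as "TOTAL(GB)",
       round((total_mb - free_mb) / total_mb * 100, 2) as "USED%"
  from v$asm_disk
 where name is not null
 order by name;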
It finally completed:
1* select name ,failgroup ,path from v$asm_disk where name is not null order by failgroup,name
SYS AS SYSDBA at +ASM > /

NAME                 FAILGROUP  PATH
-------------------- ---------- --------------------------------------------------
SUNHA5_N_0004        SUNHA5_F1  /dev/rdsk/c4t6006016010511E00128706D3AABADF11d0s6
SUNHA5_N_0005        SUNHA5_F1  /dev/rdsk/c4t6006016010511E002D5FFF791CB9DF11d0s6
SUNHA5_N_0006        SUNHA5_F1  /dev/rdsk/c4t6006016010511E007C165A71C0A5DF11d0s6
SUNHA5_N_0007        SUNHA5_F1  /dev/rdsk/c4t6006016010511E00E4F60988C0A5DF11d0s6
SUNHA5_N_0009        SUNHA5_F1  /dev/rdsk/c4t6006016010511E008E4BB66773C4DF11d0s6
SUNHA5_N_0001        SUNHA5_F2  /dev/rdsk/c4t6006016023912800BA383E8BACBADF11d0s6
SUNHA5_N_0002        SUNHA5_F2  /dev/rdsk/c4t6006016023912800B02840888FB1DF11d0s6
SUNHA5_N_0003        SUNHA5_F2  /dev/rdsk/c4t60060160239128008E54346EACBADF11d0s6
SUNHA5_N_0008        SUNHA5_F2  /dev/rdsk/c4t6006016023912800CBCB861C1FB9DF11d0s6
SUNHA5_N_0010        SUNHA5_F2  /dev/rdsk/c4t6006016023912800AE557B32B813E011d0s6