Expanding an ASM Disk Group in an Oracle RAC Environment
Pay attention to the following points in a production environment:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ 1. Add the UDEV disks in a rolling fashion, node by node
+ 2. ASM memory size
+ 3. Do not set the rebalance power (ASM_POWER_LIMIT) too high (a quick check follows below)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
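A quick way to review the current settings behind points 2 and 3 above on one of the ASM instances; this is only a sketch, and the values are environment-specific:
[grid@node1 ~]$ export ORACLE_SID=+ASM1
[grid@node1 ~]$ sqlplus / as sysasm
SQL> show parameter memory_target
SQL> show parameter asm_power_limit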
Test environment
--OS: CentOS 6.10
--Database: Oracle 11g RAC
--Disk binding: udev
Test objective
--Storage space is running low; a disk needs to be added to disk group DATA
Preparation
1. Check the database, ASM, and Grid Infrastructure logs on each node for error messages (typical locations are sketched below)
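A minimal sketch of such a check on node1 (repeat on node2 for the *2 instances). The database alert log path follows the default 11.2 diagnostic layout under the oracle user's ORACLE_BASE, the ASM path assumes the grid user's ORACLE_BASE is /u01/app/grid, and the last path uses the Grid home shown later in this article; adjust all three to your own installation:
[root@node1 ~]# tail -100 /u01/app/oracle/diag/rdbms/racnode/RacNode1/trace/alert_RacNode1.log
[root@node1 ~]# tail -100 /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
[root@node1 ~]# tail -100 /u01/app/11.2.0/grid/log/node1/alertnode1.log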
2. Check the service status on each node (switch to the grid user)
--List the configured database
[grid@node2 ~]$ srvctl config database
RacNode
[grid@node2 ~]$
--Instance status
[grid@node2 ~]$ srvctl status database -d RacNode
Instance RacNode1 is running on node node1
Instance RacNode2 is running on node node2
[grid@node2 ~]$
--Cluster status:
[grid@node2 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@node2 ~]$
--Cluster resource status
[grid@node2 ~]$ crsctl status res -t
3. Check the ASM disk groups and disks
[grid@node1 ~]$ export ORACLE_SID=+ASM1
[grid@node1 ~]$ sqlplus /nolog
SQL> conn /as sysasm
--Show the databases that use the ASM disk groups
SQL> col INSTANCE_NAME format a20
SQL> col SOFTWARE_VERSION format a20
SQL> select * from gv$asm_client order by 1,2;
INST_ID GROUP_NUMBER INSTANCE_NAME DB_NAME STATUS SOFTWARE_VERSION COMPATIBLE_VERSION
---------- ------------ -------------------- -------- ------------ -------------------- ---------------------
1 1 RacNode1 RacNode CONNECTED 11.2.0.4.0 11.2.0.4.0
1 1 +ASM1 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
1 3 +ASM1 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
2 1 RacNode2 RacNode CONNECTED 11.2.0.4.0 11.2.0.4.0
2 1 +ASM2 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
2 3 +ASM2 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
SQL>
--Show the disk groups
SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb,allocation_unit_size/1024/1024 unit_mb from v$asm_diskgroup order by 1;
GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB USABLE_FILE_MB UNIT_MB
------------ ------------------------------ ----------- ------ ---------- ---------- -------------- ----------
1 DATA MOUNTED NORMAL 4096 477 -273 1
2 FLASH MOUNTED EXTERN 2048 1951 1951 1
3 OCRVOTE MOUNTED NORMAL 3072 2146 561 1
SQL>
--Show the disks
SQL> col NAME format a25
SQL> col PATH format a40
SQL> col FAILGROUP format a25
SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;
GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DA
------------ ------------------------- ---------------------------------------- ------- -------- ------- ---------- ---------- ------------------------- ---------
1 DATA_0000 /dev/asm-data01 CACHED NORMAL UNKNOWN 1024 118 DATA_0000 11-DEC-18
1 DATA_0001 /dev/asm-data02 CACHED NORMAL UNKNOWN 1024 125 DATA_0001 11-DEC-18
1 DATA_0002 /dev/asm-data03 CACHED NORMAL UNKNOWN 1024 115 DATA_0002 11-DEC-18
1 DATA_0003 /dev/asm-data04 CACHED NORMAL UNKNOWN 1024 119 DATA_0003 11-DEC-18
2 FLASH_0000 /dev/asm-arch01 CACHED NORMAL UNKNOWN 1024 975 FLASH_0000 11-DEC-18
2 FLASH_0001 /dev/asm-arch02 CACHED NORMAL UNKNOWN 1024 976 FLASH_0001 11-DEC-18
3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0000 11-DEC-18
3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0001 11-DEC-18
3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED NORMAL UNKNOWN 1024 716 OCRVOTE_0002 11-DEC-18
SQL>
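The same disk group and disk information can also be pulled quickly with asmcmd from the grid user; a minimal sketch (output omitted here):
[grid@node1 ~]$ asmcmd lsdg
[grid@node1 ~]$ asmcmd lsdsk -k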
4. UDEV configuration at the OS level
[root@node2 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB132e6928-d49d18d4", NAME="asm-ocrvote01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB58647ee7-b466963a", NAME="asm-ocrvote02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB4c771d58-f17105b9", NAME="asm-ocrvote03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB88c685cb-3a4633f4", NAME="asm-data01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1b30fd18-af14e003", NAME="asm-data02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB67a4423a-e151f28b", NAME="asm-data03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB81f04366-170fc910", NAME="asm-data04", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB9a0f6e52-bcfcb52c", NAME="asm-arch01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1f2bac75-512a46ae", NAME="asm-arch02", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@node2 ~]#
[root@node2 ~]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin 8, 16 Dec 12 17:22 /dev/asm-arch01
brw-rw---- 1 grid asmadmin 8, 32 Dec 12 16:21 /dev/asm-arch02
brw-rw---- 1 grid asmadmin 8, 48 Dec 12 17:22 /dev/asm-data01
brw-rw---- 1 grid asmadmin 8, 64 Dec 12 17:22 /dev/asm-data02
brw-rw---- 1 grid asmadmin 8, 80 Dec 12 17:22 /dev/asm-data03
brw-rw---- 1 grid asmadmin 8, 96 Dec 12 17:22 /dev/asm-data04
brw-rw---- 1 grid asmadmin 8, 112 Dec 12 17:22 /dev/asm-ocrvote01
brw-rw---- 1 grid asmadmin 8, 128 Dec 12 17:22 /dev/asm-ocrvote02
brw-rw---- 1 grid asmadmin 8, 144 Dec 12 17:22 /dev/asm-ocrvote03
[root@node2 ~]#
Procedure
1. Stop the application workload against the database
2. Check database sessions and stop the listeners (a session-check sketch follows)
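A minimal sketch of the session check, connecting to one instance as SYSDBA; the filter on USERNAME is an assumption and should be adapted to your own application accounts:
[oracle@node1 ~]$ export ORACLE_SID=RacNode1
[oracle@node1 ~]$ sqlplus / as sysdba
SQL> select inst_id, username, status, count(*) from gv$session where username is not null group by inst_id, username, status order by 1,2;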
--Check the listener status on each node
[grid@node1 ~]$ srvctl status listener -n node1
Listener LISTENER is enabled on node(s): node1
Listener LISTENER is running on node(s): node1
[grid@node1 ~]$ srvctl status listener -n node2
Listener LISTENER is enabled on node(s): node2
Listener LISTENER is running on node(s): node2
[grid@node1 ~]$
--Disable listener autostart
[grid@node1 ~]$ srvctl disable listener -n node1
[grid@node1 ~]$ srvctl disable listener -n node2
--Stop the listeners
[grid@node1 ~]$ srvctl stop listener -n node1
[grid@node1 ~]$ srvctl stop listener -n node2
--Listener status after stopping and disabling autostart
[grid@node1 ~]$ srvctl status listener -n node1
Listener LISTENER is disabled on node(s): node1
Listener LISTENER is not running on node(s): node1
[grid@node1 ~]$ srvctl status listener -n node2
Listener LISTENER is disabled on node(s): node2
Listener LISTENER is not running on node(s): node2
[grid@node1 ~]$
3. Shut down the database
--Check the database configuration
[grid@node1 ~]$ srvctl config database -d RacNode
Database unique name: RacNode
Database name: RacNode
Oracle home: /u01/app/oracle/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RacNode/spfileRacNode.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RacNode
Database instances: RacNode1,RacNode2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[grid@node1 ~]$
--Disable database autostart (run as root)
[root@node2 ~]# cd /u01/app/11.2.0/grid/bin
[root@node2 bin]# ./srvctl disable database -d RacNode
[root@node2 bin]#
--Shut down the database
[grid@node1 ~]$ srvctl stop database -d RacNode
[grid@node1 ~]$
--Check the database status after shutdown
[grid@node1 ~]$ srvctl status database -d RacNode
Instance RacNode1 is not running on node node1
Instance RacNode2 is not running on node node2
[grid@node1 ~]$
4. Shut down the clusterware
--Check whether clusterware autostart is enabled on each node
[root@node1 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1 bin]#
[root@node2 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
--Disable autostart on each node
[root@node1 bin]# ./crsctl disable has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node1 bin]#
[root@node2 bin]# ./crsctl disable has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node2 bin]#
--Verify that autostart is now disabled on each node
[root@node1 bin]# ./crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node1 bin]#
[root@node2 bin]# ./crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node2 bin]#
--Stop the clusterware stack on each node (a verification sketch follows the two commands)
[root@node1 bin]# ./crsctl stop has
[root@node2 bin]# ./crsctl stop has
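Before the storage work starts, it is worth confirming that the stack really is down on every node; a minimal sketch (crsctl typically reports CRS-4639 when High Availability Services is not running):
[root@node1 bin]# ./crsctl check has
[root@node1 bin]# ps -ef | grep -E 'ocssd|crsd|evmd' | grep -v grep
Repeat the same check on node2.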
5. Add the disks at the storage/OS level (done with the storage engineer's help)
6. Bind the new disks with UDEV on each node
--Identify the new device names (a size check is sketched after the list)
fdisk -l
/dev/sdk
/dev/sdl
/dev/sdm
/dev/sdn
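Before binding them, it helps to confirm that these really are the new, empty LUNs of the expected size; a short sketch using the device letters identified above:
[root@node1 ~]# for i in k l m n; do fdisk -l /dev/sd$i | grep "^Disk /dev"; done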
--Generate the binding rules (NAME is set per disk here so that the generated lines match the asm-data05 to asm-data08 rules shown below)
n=5
for i in k l m n
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-data0$n\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
n=$((n+1))
done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB00e4d091-5990307e", NAME="asm-data05", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB8bc8125c-72dacc92", NAME="asm-data06", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBc195c913-00f6c68e", NAME="asm-data07", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB720e4cca-32c20936", NAME="asm-data08", OWNER="grid", GROUP="asmadmin", MODE="0660"
--On each node, append the generated rules to the rules file 99-oracle-asmdevices.rules (one way to do this is sketched below)
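A sketch of one way to distribute the rules, assuming root ssh equivalence between the nodes; because the LUNs are shared, the scsi_id values, and therefore the finished rules file, should be identical on both nodes (pasting the four lines into each node's file by hand works just as well):
[root@node1 rules.d]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules      # append the four generated lines
[root@node1 rules.d]# scp /etc/udev/rules.d/99-oracle-asmdevices.rules node2:/etc/udev/rules.d/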
--Restart udev
[root@node1 rules.d]# start_udev
[root@node2 rules.d]# start_udev
--Confirm the new devices and make sure every node sees the same, consistent set of disks (a read test is sketched after the listing)
[root@node2 rules.d]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin 8, 16 Dec 13 16:02 /dev/asm-arch01
brw-rw---- 1 grid asmadmin 8, 32 Dec 13 16:02 /dev/asm-arch02
brw-rw---- 1 grid asmadmin 8, 48 Dec 13 16:02 /dev/asm-data01
brw-rw---- 1 grid asmadmin 8, 64 Dec 13 16:02 /dev/asm-data02
brw-rw---- 1 grid asmadmin 8, 80 Dec 13 16:02 /dev/asm-data03
brw-rw---- 1 grid asmadmin 8, 96 Dec 13 16:02 /dev/asm-data04
brw-rw---- 1 grid asmadmin 8, 160 Dec 13 16:13 /dev/asm-data05
brw-rw---- 1 grid asmadmin 8, 176 Dec 13 16:13 /dev/asm-data06
brw-rw---- 1 grid asmadmin 8, 192 Dec 13 16:13 /dev/asm-data07
brw-rw---- 1 grid asmadmin 8, 208 Dec 13 16:13 /dev/asm-data08
brw-rw---- 1 grid asmadmin 8, 112 Dec 13 16:02 /dev/asm-ocrvote01
brw-rw---- 1 grid asmadmin 8, 128 Dec 13 16:02 /dev/asm-ocrvote02
brw-rw---- 1 grid asmadmin 8, 144 Dec 13 16:02 /dev/asm-ocrvote03
[root@node2 rules.d]#
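Before handing the new devices to ASM, a quick read test as the grid user confirms that the paths and permissions work on each node (reads only; this does not write to the disks):
[grid@node2 ~]$ for d in /dev/asm-data05 /dev/asm-data06 /dev/asm-data07 /dev/asm-data08; do dd if=$d of=/dev/null bs=1M count=10 && echo "$d readable"; done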
7. Start the clusterware
[root@node1 bin]# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@node2 bin]# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@node2 bin]#
--Check that all cluster components have started normally
[grid@node2 ~]$ crsctl status res -t
At this point the listener and database resources are still down, because they were disabled before the clusterware restart.
8. Expand the ASM disk group
--Check whether ASM can see the newly bound disks (connect as SYSASM as in the preparation step; see also the header-status check after the listing)
SQL> set line 200
SQL> col NAME format a25
SQL> col PATH format a40
SQL> col FAILGROUP format a25
SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;
GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DA
------------ ------------------------- ---------------------------------------- ------- -------- ------- ---------- ---------- ------------------------- ---------
0 /dev/asm-data07 CLOSED NORMAL UNKNOWN 0 0
0 /dev/asm-data08 CLOSED NORMAL UNKNOWN 0 0
0 /dev/asm-data06 CLOSED NORMAL UNKNOWN 0 0
0 /dev/asm-data05 CLOSED NORMAL UNKNOWN 0 0
1 DATA_0000 /dev/asm-data01 CACHED NORMAL UNKNOWN 1024 118 DATA_0000 11-DEC-18
1 DATA_0001 /dev/asm-data02 CACHED NORMAL UNKNOWN 1024 125 DATA_0001 11-DEC-18
1 DATA_0002 /dev/asm-data03 CACHED NORMAL UNKNOWN 1024 115 DATA_0002 11-DEC-18
1 DATA_0003 /dev/asm-data04 CACHED NORMAL UNKNOWN 1024 119 DATA_0003 11-DEC-18
2 FLASH_0000 /dev/asm-arch01 CACHED NORMAL UNKNOWN 1024 975 FLASH_0000 11-DEC-18
2 FLASH_0001 /dev/asm-arch02 CACHED NORMAL UNKNOWN 1024 976 FLASH_0001 11-DEC-18
3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0000 11-DEC-18
3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0001 11-DEC-18
3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED NORMAL UNKNOWN 1024 716 OCRVOTE_0002 11-DEC-18
SQL>
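It can also help to confirm the header status of the candidate disks before adding them; a short sketch (brand-new disks are expected to show CANDIDATE or PROVISIONED):
SQL> select path,header_status,mode_status,os_mb from v$asm_disk where group_number=0;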
--Add disks to disk group DATA
SQL> alter diskgroup DATA add disk '/dev/asm-data05' rebalance power 5;
SQL> alter diskgroup DATA add disk '/dev/asm-data06' rebalance power 5;
Note: rebalance power takes a value from 1 to 11 (up to 1024 when the disk group's COMPATIBLE.ASM is 11.2.0.2 or higher); the higher the value, the faster the rebalance completes, but the greater the impact on the running system. Choose a level that suits the workload at the time; two variations are sketched below.
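Two related variations, shown here only as a sketch and not part of the original run: adding both disks in a single statement triggers just one rebalance instead of two, and the power of a rebalance that is already running can be changed on the fly:
SQL> alter diskgroup DATA add disk '/dev/asm-data05','/dev/asm-data06' rebalance power 5;
SQL> alter diskgroup DATA rebalance power 2;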
--Monitor rebalance progress (a more detailed query is sketched after this one)
select * from v$asm_operation;
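A slightly more detailed version of that check across both ASM instances (a sketch); when the query returns no rows, the rebalance has finished:
SQL> select inst_id,group_number,operation,state,power,sofar,est_work,est_minutes from gv$asm_operation;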
--Check the result (in this test, some of the newly bound disks were not added); the disk group totals can be re-checked as sketched after the listing
SQL> col name format a20
SQL> set line 200
SQL> col name format a20
SQL> col path format a40
SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;
GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DA
------------ -------------------- ---------------------------------------- ------- -------- ------- ---------- ---------- ------------------------------ ---------
0 /dev/asm-data08 CLOSED NORMAL UNKNOWN 0 0
0 /dev/asm-data06 CLOSED NORMAL UNKNOWN 0 0
0 /dev/asm-data07 CLOSED NORMAL UNKNOWN 0 0
1 DATA_0000 /dev/asm-data01 CACHED NORMAL UNKNOWN 1024 300 DATA_0000 11-DEC-18
1 DATA_0001 /dev/asm-data02 CACHED NORMAL UNKNOWN 1024 297 DATA_0001 11-DEC-18
1 DATA_0002 /dev/asm-data03 CACHED NORMAL UNKNOWN 1024 298 DATA_0002 11-DEC-18
1 DATA_0003 /dev/asm-data04 CACHED NORMAL UNKNOWN 1024 298 DATA_0003 11-DEC-18
1 DATA_0004 /dev/asm-data05 CACHED NORMAL UNKNOWN 1024 306 DATA_0004 13-DEC-18
2 FLASH_0000 /dev/asm-arch01 CACHED NORMAL UNKNOWN 1024 975 FLASH_0000 11-DEC-18
2 FLASH_0001 /dev/asm-arch02 CACHED NORMAL UNKNOWN 1024 976 FLASH_0001 11-DEC-18
3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0000 11-DEC-18
3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED NORMAL UNKNOWN 1024 715 OCRVOTE_0001 11-DEC-18
3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED NORMAL UNKNOWN 1024 716 OCRVOTE_0002 11-DEC-18
SQL>
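It is also worth re-running the disk group query from the preparation step to confirm that TOTAL_MB and USABLE_FILE_MB of DATA have grown (USABLE_FILE_MB was negative before the expansion); a sketch:
SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb from v$asm_diskgroup order by 1;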
9. Start the database
--Re-enable the database resource (ora.racnode.db) first; srvctl cannot start a resource while it is disabled
[root@node1 bin]# ./srvctl enable database -d RacNode
[root@node1 bin]#
[grid@node2 ~]$ srvctl start database -d RacNode
[grid@node2 ~]$ srvctl status database -d RacNode
Instance RacNode1 is running on node node1
Instance RacNode2 is running on node node2
[grid@node2 ~]$
10. Start the listeners
--Re-enable the listener resources first; srvctl cannot start the listeners while they are disabled
[grid@node2 ~]$ srvctl enable listener -n node1
[grid@node2 ~]$ srvctl enable listener -n node2
[grid@node2 ~]$ srvctl start listener -n node1
[grid@node2 ~]$ srvctl start listener -n node2
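Finally, remember that Oracle High Availability Services autostart was disabled in step 4 and has not been re-enabled above; if the stack should start automatically after a host reboot, re-enable it as root on each node:
[root@node1 bin]# ./crsctl enable has
[root@node2 bin]# ./crsctl enable has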
Source: ITPUB blog, http://blog.itpub.net/31520497/viewspace-2695613/. Please credit the source when reposting.