Repairing the ASM Disk Header (Part 1)
The customer's database is an Oracle 11gR2 RAC for AIX, with the data managed by ASM. This is a real case: the customer deployed an IBM SVC storage virtualization product, which required the existing Oracle database storage to be mapped to the SVC device first and then re-mapped through SVC to the servers. Normally this is a simple, low-risk operation, but the IBM field engineer gratuitously assigned PVIDs to the disks re-mapped through SVC, destroying the ASM disk headers and leaving every ASM disk group unable to mount.
See "How a Disk's PVID Destroys ASM Disks on AIX": http://blog.itpub.net/23135684/viewspace-1079207/
This series uses that scenario to discuss recovery techniques. The test environment is an Oracle 11.2.0.3.0 Restart database with data managed by ASM. This first article examines whether the md_backup and md_restore commands in ASMCMD can solve the problem.
I. Checking the test environment.
# pwd
/u01/app/oracle/product/11.2.0/db_1
# cd /u01/app/11.2.0/grid/bin
# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
ONLINE ONLINE localhost
ora.FRA01.dg
ONLINE ONLINE localhost
ora.LISTENER.lsnr
ONLINE ONLINE localhost
ora.OCRVDISK.dg
ONLINE ONLINE localhost
ora.asm
ONLINE ONLINE localhost Started
ora.ons
OFFLINE OFFLINE localhost
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE localhost
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE localhost
ora.orcl.db
1 ONLINE ONLINE localhost Open
All related services are in the ONLINE state.
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 none None
hdisk3 none None
hdisk4 none None
Five disks are attached to the server.
# su - grid
$ asmcmd -p
ASMCMD [+] > lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 51200 49637 0 49637 0 N DATA01/
MOUNTED EXTERN N 512 4096 1048576 51200 50980 0 50980 0 N FRA01/
MOUNTED EXTERN N 512 4096 1048576 2048 1989 0 1989 0 N OCRVDISK/
ASM contains three disk groups: DATA01, FRA01, and OCRVDISK.
The mapping between each disk group and its disks is as follows:
ASMCMD [+] > lsdsk -G ocrvdisk
Path
/dev/rhdisk2
ASMCMD [+] > lsdsk -G data01
Path
/dev/rhdisk3
ASMCMD [+] > lsdsk -G fra01
Path
/dev/rhdisk4
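When scripting health checks around ASMCMD, the whitespace-delimited `lsdg` output is easy to parse. A minimal sketch in Python; the sample text is a copy of the output above, and the parser assumes nothing beyond whitespace-delimited columns (safe here because no field contains spaces):

```python
# Parse `asmcmd lsdg` output into one dict per disk group.
LSDG = """\
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     51200    49637                0           49637              0             N  DATA01/
MOUNTED  EXTERN  N         512   4096  1048576     51200    50980                0           50980              0             N  FRA01/
MOUNTED  EXTERN  N         512   4096  1048576      2048     1989                0            1989              0             N  OCRVDISK/
"""

def parse_lsdg(text):
    lines = text.strip().splitlines()
    headers = lines[0].split()
    groups = {}
    for line in lines[1:]:
        row = dict(zip(headers, line.split()))
        groups[row["Name"].rstrip("/")] = row
    return groups

groups = parse_lsdg(LSDG)
print(groups["DATA01"]["Free_MB"])   # -> 49637
```

Keeping a snapshot like this from before any storage change gives you a baseline to compare against after the fact.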
Next, check the database status:
# su - oracle
$ sql
SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 19 15:37:14 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL> set linesize 200
SQL> col name format a50
SQL> select name ,status from v$datafile;
NAME STATUS
-------------------------------------------------- -------
+DATA01/orcl/datafile/system.256.856020443 SYSTEM
+DATA01/orcl/datafile/sysaux.257.856020443 ONLINE
+DATA01/orcl/datafile/undotbs1.258.856020443 ONLINE
+DATA01/orcl/datafile/users.259.856020443 ONLINE
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
-------------------------------------------------- --------------------
ORCL READ WRITE
SQL> select status from v$instance;
STATUS
------------
OPEN
II. Backing up ASM metadata with the md_backup command.
# su - grid
$ asmcmd -p
Run help md_backup to see detailed usage information.
ASMCMD [+] > help md_backup
md_backup
The md_backup command creates a backup file containing metadata
for one or more disk groups.
Volume and Oracle Automatic Storage Management Cluster File System
(Oracle ACFS) file system information is not backed up.
md_backup backup_file [-G diskgroup [,diskgroup,...]]
The options for the md_backup command are described below.
backup_file - Specifies the backup file in which you want to
store the metadata.
-G diskgroup - Specifies the disk group name of the disk group
that must be backed up
By default all the mounted disk groups are included in the backup file,
which is saved in the current working directory.
The first example shows the use of the backup command when you run it
without the disk group option. This example backs up all of the mounted
disk groups and creates the backup image in the current working
directory. The second example creates a backup of disk group DATA and
FRA. The backup that this example creates is saved in the
/tmp/dgbackup20090716 file.
ASMCMD [+] > md_backup /tmp/dgbackup20090716
ASMCMD [+] > md_backup /tmp/dgbackup20090716 -G DATA,FRA
Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ASM
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ARCHIVELOG/2009_07_13
Current alias directory path: ORCL/BACKUPSET/2009_07_14
Current alias directory path: ORCL/ARCHIVELOG/2009_07_14
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ONLINELOG
Back up the metadata of all ASM disk groups:
ASMCMD [+] > md_backup /tmp/dggroup20140819
Disk group metadata to be backed up: OCRVDISK
Disk group metadata to be backed up: DATA01
Disk group metadata to be backed up: FRA01
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ASM
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ONLINELOG
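The file md_backup produces is plain text, so it can be inspected before any restore: it consists of "@"-prefixed section headers followed by attribute lines. A minimal parser sketch; the section and attribute names in the sample are illustrative placeholders, since the exact layout varies by version:

```python
# Sketch: walk an md_backup-style text file into (section, attrs) records.
# The sample content is synthetic -- section/attribute names are
# placeholders, not the exact 11.2 on-disk format.
SAMPLE = """\
@diskgroup_set
DGNAME=DATA01
DGTYPE=EXTERN
@diskgroup_directory
DGNAME=DATA01
PATH=ORCL/DATAFILE
"""

def parse_md_backup(text):
    sections = []
    current = None
    for line in text.splitlines():
        if line.startswith("@"):
            current = {"section": line[1:], "attrs": {}}
            sections.append(current)
        elif current is not None and "=" in line:
            key, _, val = line.partition("=")
            current["attrs"][key] = val
    return sections

sections = parse_md_backup(SAMPLE)
print([s["section"] for s in sections])
```

Being able to read the backup file is useful when deciding between a --full and a --nodg restore later on.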
III. Assigning PVIDs to the ASM disks.
1). Stop all CRS services:
# pwd
/u01/app/11.2.0/grid/bin
# ./crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'localhost'
CRS-2673: Attempting to stop 'ora.OCRVDISK.dg' on 'localhost'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'localhost'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'localhost'
CRS-2677: Stop of 'ora.orcl.db' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.DATA01.dg' on 'localhost'
CRS-2673: Attempting to stop 'ora.FRA01.dg' on 'localhost'
CRS-2677: Stop of 'ora.DATA01.dg' on 'localhost' succeeded
CRS-2677: Stop of 'ora.FRA01.dg' on 'localhost' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'localhost' succeeded
CRS-2677: Stop of 'ora.OCRVDISK.dg' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'localhost'
CRS-2677: Stop of 'ora.asm' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'localhost'
CRS-2677: Stop of 'ora.cssd' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'localhost'
CRS-2677: Stop of 'ora.evmd' on 'localhost' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'localhost' has completed
CRS-4133: Oracle High Availability Services has been stopped.
# ps -ef | grep d.bin
# ps -ef | grep ora
root 7536792 4456600 0 15:42:41 pts/0 0:00 grep ora
oracle 8650932 7667784 0 15:23:16 pts/1 0:00 -ksh
2). Assign a PVID to every ASM disk:
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 none None
hdisk3 none None
hdisk4 none None
# chdev -l hdisk2 -a pv=yes
hdisk2 changed
# chdev -l hdisk3 -a pv=yes
hdisk3 changed
# chdev -l hdisk4 -a pv=yes
hdisk4 changed
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 00cc1ad4f0022b6c None
hdisk3 00cc1ad4f0024a5c None
hdisk4 00cc1ad4f0026653 None
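The new PVIDs above are exactly what breaks ASM discovery: AIX writes the PVID record into the first block of the disk, which is also where the ASM disk header (the kfbh block header plus the "ORCLDISK" provision string) lives. A minimal simulation, with the offsets chosen for illustration only (the real layout is what kfed prints):

```python
# Simulate an AIX PVID write clobbering the ASM disk header.
# Offsets are illustrative; the point is that both structures occupy
# block 0 of the disk, so writing one destroys the other.
BLOCK = 4096
MAGIC_OFF = 32  # where this model places the "ORCLDISK" provision string

def make_asm_header():
    blk = bytearray(BLOCK)
    blk[1] = 0x01                               # kfbh.type = disk header (model)
    blk[MAGIC_OFF:MAGIC_OFF + 8] = b"ORCLDISK"
    return blk

def write_pvid(blk):
    # chdev -l hdiskX -a pv=yes writes a PVID record at the start of
    # the disk; model it as the 16-byte record matching lspv above.
    blk[0:16] = bytes.fromhex("00cc1ad4f0022b6c") + b"\x00" * 8

def header_is_valid(blk):
    # ASM discovery checks the block header before the provision string.
    return blk[1] == 0x01 and blk[MAGIC_OFF:MAGIC_OFF + 8] == b"ORCLDISK"

disk = make_asm_header()
print(header_is_valid(disk))   # True: ASM would discover this disk
write_pvid(disk)
print(header_is_valid(disk))   # False: header clobbered
```

In the real incident this overwrite is why every subsequent mount attempt fails with ORA-15063, as the alert log below shows.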
3). Restart all CRS services:
# pwd
/u01/app/11.2.0/grid/bin
# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
After a while the ASM instance starts normally, but none of the disk groups can be brought ONLINE:
#./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
ONLINE OFFLINE localhost
ora.FRA01.dg
ONLINE OFFLINE localhost
ora.LISTENER.lsnr
ONLINE ONLINE localhost
ora.OCRVDISK.dg
ONLINE OFFLINE localhost
ora.asm
ONLINE ONLINE localhost Started
ora.ons
OFFLINE OFFLINE localhost
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE localhost
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE localhost
ora.orcl.db
1 ONLINE OFFLINE Instance Shutdown
4). Check the ASM instance alert log:
# su - grid
$ cd $ORACLE_HOME/log/diag/asm/+asm/+ASM/trace
$ ls *.log
alert_+ASM.log
$ tail -200 alert*.log
......
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/11.2.0/grid/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =0
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NOTE: Volume support enabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option.
ORACLE_HOME = /u01/app/11.2.0/grid
System name: AIX
Node name: localhost
Release: 1
Version: 6
Machine: 00CC1AD44C00
WARNING: using default parameter settings without any parameter file
Tue Aug 19 15:45:03 2014
PMON started with pid=2, OS id=7143492
Tue Aug 19 15:45:03 2014
PSP0 started with pid=3, OS id=7929858
Tue Aug 19 15:45:04 2014
VKTM started with pid=4, OS id=4259946 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Tue Aug 19 15:45:04 2014
GEN0 started with pid=5, OS id=7077996
Tue Aug 19 15:45:04 2014
DIAG started with pid=6, OS id=6684742
Tue Aug 19 15:45:04 2014
DIA0 started with pid=7, OS id=6357128
Tue Aug 19 15:45:04 2014
MMAN started with pid=8, OS id=7012498
Tue Aug 19 15:45:04 2014
DBW0 started with pid=9, OS id=5636096
Tue Aug 19 15:45:04 2014
LGWR started with pid=10, OS id=5373998
Tue Aug 19 15:45:04 2014
CKPT started with pid=11, OS id=5898392
Tue Aug 19 15:45:05 2014
SMON started with pid=12, OS id=5308618
Tue Aug 19 15:45:05 2014
RBAL started with pid=13, OS id=8454162
Tue Aug 19 15:45:05 2014
GMON started with pid=14, OS id=12583112
Tue Aug 19 15:45:05 2014
MMON started with pid=15, OS id=5111896
Tue Aug 19 15:45:05 2014
MMNL started with pid=16, OS id=5046518
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Tue Aug 19 15:45:05 2014
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:2} */
SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */
SQL> ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group DATA01 number=1 incarn=0xbea398e5
NOTE: cache began mount (first) of group DATA01 number=1 incarn=0xbea398e5
Tue Aug 19 15:45:05 2014
SQL> ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
ERROR: no read quorum in group: required 2, found 0 disks
Tue Aug 19 15:45:06 2014
ALTER SYSTEM SET local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))))' SCOPE=MEMORY SID='+ASM';
NOTE: cache dismounting (clean) group 1/0xBEA398E5 (DATA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7274670, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xBEA398E5 (DATA01)
NOTE: cache ending mount (fail) of group DATA01 number=1 incarn=0xbea398e5
NOTE: cache deleting context for group DATA01 1/0xbea398e5
GMON dismounting group 1 at 2 for pid 17, osid 7274670
ERROR: diskgroup DATA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA01"
ERROR: ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:06 2014
SQL> ALTER DISKGROUP OCRVDISK MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group FRA01 number=1 incarn=0xc1e398e7
NOTE: cache began mount (first) of group FRA01 number=1 incarn=0xc1e398e7
Tue Aug 19 15:45:06 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xC1E398E7 (FRA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7798844, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xC1E398E7 (FRA01)
NOTE: cache ending mount (fail) of group FRA01 number=1 incarn=0xc1e398e7
NOTE: cache deleting context for group FRA01 1/0xc1e398e7
GMON dismounting group 1 at 4 for pid 18, osid 7798844
ERROR: diskgroup FRA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "FRA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FRA01"
ERROR: ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group OCRVDISK number=1 incarn=0xca3398ea
NOTE: cache began mount (first) of group OCRVDISK number=1 incarn=0xca3398ea
SQL> ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
SQL> ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:07 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xCA3398EA (OCRVDISK)
NOTE: messaging CKPT to quiesce pins Unix process pid: 9896018, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xCA3398EA (OCRVDISK)
NOTE: cache ending mount (fail) of group OCRVDISK number=1 incarn=0xca3398ea
NOTE: cache deleting context for group OCRVDISK 1/0xca3398ea
GMON dismounting group 1 at 6 for pid 19, osid 9896018
ERROR: diskgroup OCRVDISK was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "OCRVDISK" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCRVDISK"
ERROR: ALTER DISKGROUP OCRVDISK MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group DATA01 number=1 incarn=0xd22398ed
NOTE: cache began mount (first) of group DATA01 number=1 incarn=0xd22398ed
Tue Aug 19 15:45:08 2014
NOTE: No asm libraries found in the system
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xD22398ED (DATA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7798844, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xD22398ED (DATA01)
NOTE: cache ending mount (fail) of group DATA01 number=1 incarn=0xd22398ed
NOTE: cache deleting context for group DATA01 1/0xd22398ed
GMON dismounting group 1 at 8 for pid 18, osid 7798844
ERROR: diskgroup DATA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA01"
ERROR: ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group FRA01 number=1 incarn=0xd37398f0
NOTE: cache began mount (first) of group FRA01 number=1 incarn=0xd37398f0
Tue Aug 19 15:45:09 2014
NOTE: No asm libraries found in the system
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xD37398F0 (FRA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7274670, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xD37398F0 (FRA01)
NOTE: cache ending mount (fail) of group FRA01 number=1 incarn=0xd37398f0
NOTE: cache deleting context for group FRA01 1/0xd37398f0
GMON dismounting group 1 at 10 for pid 17, osid 7274670
ERROR: diskgroup FRA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "FRA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FRA01"
ERROR: ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:09 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ASM Health Checker found 1 new failures
ASM Health Checker found 1 new failures
IV. Restoring ASM disk group metadata with the md_restore command.
# su - grid
$ asmcmd -p
ASMCMD [+] > lsdg
ASMCMD [+] > lsdsk
ASMCMD [+] >
Run help md_restore to see detailed usage information:
ASMCMD [+] > help md_restore
md_restore
This command restores a disk group metadata backup.
md_restore backup_file [--silent][--full|--nodg|--newdg -o 'old_diskgroup:new_diskgroup [,...]'][-S sql_script_file] [-G 'diskgroup [,diskgroup...]']
The options for the md_restore command are described below.
backup_file - Reads the metadata information from
backup_file.
--silent - Ignore errors. Normally, if md_restore
encounters an error, it will stop.
Specifying this flag ignores any errors.
--full - Specifies to create a disk group and restore
metadata.
--nodg - Specifies to restore metadata only.
--newdg -o old_diskgroup:new_diskgroup - Specifies to create a disk
group with a different name when restoring
metadata. The -o option is required
with --newdg.
-S sql_script_file - Write SQL commands to the specified SQL
script file instead of executing the commands.
-G diskgroup - Select the disk groups to be restored.
If no disk groups are defined, then all
disk groups will be restored.
The first example restores the disk group DATA from the backup script
and creates a copy. The second example takes an existing disk group
DATA and restores its metadata. The third example restores disk group
DATA completely but the new disk group that is created is called DATA2.
The fourth example restores from the backup file after applying the
overrides defined in the override.sql script file
ASMCMD [+] > md_restore --full -G data --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore --nodg -G data --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore -S override.sql --silent /tmp/dgbackup20090714
1). First, try restoring the metadata with the --nodg option:
ASMCMD [+] > md_restore --nodg --silent /tmp/dggroup20140819
Current Diskgroup metadata being restored: OCRVDISK
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "OCRVDISK" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
Current Diskgroup metadata being restored: DATA01
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "DATA01" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
Current Diskgroup metadata being restored: FRA01
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "FRA01" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
As this example shows, md_restore --nodg can only restore metadata into an ASM disk group that already exists and is MOUNTED.
2). Next, try restoring with the --full option:
ASMCMD [+] > md_restore --full --silent /tmp/dggroup20140819
Current Diskgroup metadata being restored: OCRVDISK
Diskgroup OCRVDISK created!
System template PARAMETERFILE modified!
System template XTRANSPORT modified!
System template CONTROLFILE modified!
System template DATAFILE modified!
System template BACKUPSET modified!
System template FLASHFILE modified!
System template CHANGETRACKING modified!
System template DATAGUARDCONFIG modified!
System template ARCHIVELOG modified!
System template TEMPFILE modified!
System template OCRFILE modified!
System template DUMPSET modified!
System template ONLINELOG modified!
System template FLASHBACK modified!
System template AUTOBACKUP modified!
System template ASMPARAMETERFILE modified!
Directory +OCRVDISK/ASM re-created!
Directory +OCRVDISK/ASM/ASMPARAMETERFILE re-created!
Current Diskgroup metadata being restored: DATA01
Diskgroup DATA01 created!
System template PARAMETERFILE modified!
System template CHANGETRACKING modified!
System template CONTROLFILE modified!
System template DATAFILE modified!
System template DATAGUARDCONFIG modified!
System template FLASHFILE modified!
System template XTRANSPORT modified!
System template TEMPFILE modified!
System template BACKUPSET modified!
System template OCRFILE modified!
System template ARCHIVELOG modified!
System template DUMPSET modified!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template FLASHBACK modified!
System template ASMPARAMETERFILE modified!
Directory +DATA01/ORCL re-created!
Directory +DATA01/ORCL/CONTROLFILE re-created!
Directory +DATA01/ORCL/DATAFILE re-created!
Directory +DATA01/ORCL/ONLINELOG re-created!
Directory +DATA01/ORCL/PARAMETERFILE re-created!
Directory +DATA01/ORCL/TEMPFILE re-created!
Current Diskgroup metadata being restored: FRA01
Diskgroup FRA01 created!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template CONTROLFILE modified!
System template DATAGUARDCONFIG modified!
System template CHANGETRACKING modified!
System template DUMPSET modified!
System template BACKUPSET modified!
System template ASMPARAMETERFILE modified!
System template DATAFILE modified!
System template FLASHBACK modified!
System template OCRFILE modified!
System template FLASHFILE modified!
System template PARAMETERFILE modified!
System template TEMPFILE modified!
System template XTRANSPORT modified!
System template ARCHIVELOG modified!
Directory +FRA01/ORCL re-created!
Directory +FRA01/ORCL/CONTROLFILE re-created!
Directory +FRA01/ORCL/ONLINELOG re-created!
With --full, md_restore first re-creates each ASM disk group and then restores its metadata.
ASMCMD [+] > lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 51200 51148 0 51148 0 N DATA01/
MOUNTED EXTERN N 512 4096 1048576 51200 51148 0 51148 0 N FRA01/
MOUNTED EXTERN N 512 4096 1048576 2048 1996 0 1996 0 N OCRVDISK/
ASMCMD [+] > lsdsk -G data01
Path
/dev/rhdisk3
ASMCMD [+] > lsdsk -G fra01
Path
/dev/rhdisk4
ASMCMD [+] > lsdsk -G ocrvdisk
Path
/dev/rhdisk2
ASMCMD [+] > ls
DATA01/
FRA01/
OCRVDISK/
ASMCMD [+] > cd data01/ORCL/orcl/
ASMCMD [+data01/orcl] > ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
ASMCMD [+data01/orcl] > cd datafile
ASMCMD [+data01/orcl/datafile] > ls
ASMCMD [+data01/orcl/datafile] > cd ..
ASMCMD [+data01/orcl] > cd controlfile
ASMCMD [+data01/orcl/controlfile] > ls
The metadata has been restored, but all the ASM files are gone.
Clearly, md_backup and md_restore only back up and restore metadata; the ASM files inside the disk groups are lost, so this approach does not achieve what we need.
Two further points are worth noting:
1). --newdg behaves essentially the same as --full, except that it renames the disk group; it still creates the disk group first and then restores the metadata.
2). On AIX, before restoring metadata or re-creating an ASM disk group, clear the PVID from each disk first with chdev -l hdiskX -a pv=clear. This is a good habit!
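The lsdg figures quantify the loss: before the PVID accident DATA01 reported 49637 MB free out of 51200, while after md_restore --full it reports 51148 MB free, so the restored group contains little more than freshly created metadata:

```python
# Used space in DATA01 (Total_MB - Free_MB), from the lsdg output above.
before_used = 51200 - 49637   # before the PVID was assigned
after_used  = 51200 - 51148   # after md_restore --full
print(before_used, after_used)  # 1563 52
```

Roughly 1.5 GB of database files existed before the corruption; after the restore only about 52 MB of overhead remains.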
--end--
Source: ITPUB blog, http://blog.itpub.net/23135684/viewspace-1260502/. Please credit the source when republishing.