Oracle 12c New Feature: ASMFD (ASM Filter Driver)

Posted by lhrbest on 2019-08-15




https://www.oracle.com/technetwork/cn/articles/database/asmfd-2398572-zhs.html

https://docs.oracle.com/en/database/oracle/oracle-database/18/ostmg/administer-filter-driver.html#GUID-7B046A8B-06ED-48D6-81B0-3A29F45BA372


Starting with Oracle 12.1.0.2, ASMFD can replace udev-rule-based binding of ASM disk devices, and it also filters out invalid I/O operations. ASMFD is a replacement for ASMLIB and udev; in fact, ASMFD itself still uses udev underneath.

ASMFD can be configured during Grid Infrastructure installation, or afterwards.

To check whether the operating system version supports ASMFD, run:

acfsdriverstate -orahome $ORACLE_HOME supported


ASMFD was introduced back in 12.1. It removes the need to configure ASM disks manually and, more importantly, protects the disks from being overwritten by non-Oracle operations, such as the dd and echo commands.





The environment below is based on RHEL 7.4; the test migrates UDEV-managed storage to AFD disk paths.


I. AFD configuration adjustments

1. Add the Grid environment variables as root


[root@rac1 ~]# export ORACLE_HOME=/u01/app/12.2.0/grid

[root@rac1 ~]# export ORACLE_BASE=/tmp



2. Get the current ASM disk group discovery path


[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget

parameter:/dev/asm*

profile:/dev/asm*



3. Add the AFD discovery path


[root@rac1 ~]# asmcmd dsset '/dev/asm*','AFD:*'


[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget   

parameter:/dev/asm*, AFD:*

profile:/dev/asm*,AFD:*
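The discovery string is a comma-separated list of shell-style glob patterns, and a candidate device path is accepted when it matches any of them. The sketch below illustrates that matching in plain shell; `matches_discovery` is a made-up helper for illustration only, not Oracle's actual matcher:

```shell
# matches_discovery PATH DISCOVERY_STRING
# Exit 0 when PATH matches any comma-separated glob in DISCOVERY_STRING.
# Illustration only -- Oracle's real matching happens inside ASM.
matches_discovery() {
  _path=$1 _pats=$2
  set -f                         # keep the patterns literal while splitting
  _oldifs=$IFS; IFS=','
  _rc=1
  for _p in $_pats; do
    case $_path in ($_p) _rc=0 ;; esac
  done
  IFS=$_oldifs; set +f
  return $_rc
}

matches_discovery /dev/asm_crs '/dev/asm*,AFD:*' && echo matched   # prints "matched"
```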



4. Check the cluster node information


[root@rac1 ~]# olsnodes -a

rac1    Hub

rac2    Hub



The following steps must be executed on all nodes.

5. Stop CRS


[root@rac1 ~]# crsctl stop crs


6. Install Oracle AFD

On node 1 and node 2, load and configure AFD:


[root@rac1 yum.repos.d]# asmcmd afd_configure


Note: on RedHat/CentOS 7.4 and later, kmod must be upgraded before AFD can be enabled; this was covered in an earlier article:

Fixing the AFD (Oracle ASMFD) feature being unusable on RHEL/CentOS 7.4 and later

https://blog.csdn.net/kiral07/article/details/87629679

The AFD loading output:


AFD-627: AFD distribution files found.

AFD-634: Removing previous AFD installation.

AFD-635: Previous AFD components successfully removed.

AFD-636: Installing requested AFD software.

AFD-637: Loading installed AFD drivers.

AFD-9321: Creating udev for AFD.

AFD-9323: Creating module dependencies - this may take some time.

AFD-9154: Loading 'oracleafd.ko' driver.

AFD-649: Verifying AFD devices.

AFD-9156: Detecting control device '/dev/oracleafd/admin'.

AFD-638: AFD installation correctness verified.

Modifying resource dependencies - this may take some time.



Query the AFD state:


[root@rac1 yum.repos.d]# asmcmd afd_state

ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'



7. Start CRS


[root@rac1 yum.repos.d]# crsctl start crs

CRS-4123: Oracle High Availability Services has been started.



8. Check the current storage devices


[root@rac1 yum.repos.d]# ll /dev/mapper/mpath*

lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathc -> ../dm-1

lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathd -> ../dm-0


The multipath devices mpathc and mpathd are used here.

[root@rac2 ~]# multipath -ll

mpathd (14f504e46494c45526147693538302d577037452d39596459) dm-1 OPNFILER,VIRTUAL-DISK    

size=30G features='0' hwhandler='0' wp=rw

|-+- policy='service-time 0' prio=1 status=active

| `- 34:0:0:1 sdc             8:32  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 35:0:0:1 sde             8:64  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 36:0:0:1 sdg             8:96  active ready running

`-+- policy='service-time 0' prio=1 status=enabled

  `- 37:0:0:1 sdi             8:128 active ready running

mpathc (14f504e46494c45524f444c7844412d717a557a2d6b7a6752) dm-0 OPNFILER,VIRTUAL-DISK    

size=40G features='0' hwhandler='0' wp=rw

|-+- policy='service-time 0' prio=1 status=active

| `- 34:0:0:0 sdb             8:16  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 35:0:0:0 sdd             8:48  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 36:0:0:0 sdf             8:80  active ready running

`-+- policy='service-time 0' prio=1 status=enabled

  `- 37:0:0:0 sdh             8:112 active ready running
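The alias-to-size mapping can be pulled out of `multipath -ll`-style output with a small awk filter. `parse_mpaths` is a made-up helper, fed a trimmed copy of the listing above:

```shell
# Print "alias size" pairs from multipath -ll style output.
parse_mpaths() {
  awk '/^mpath/ { name = $1 }
       /^size=/ { split($1, a, "="); print name, a[2] }'
}

parse_mpaths <<'EOF'
mpathd (14f504e46...) dm-1 OPNFILER,VIRTUAL-DISK
size=30G features='0' hwhandler='0' wp=rw
mpathc (14f504e46...) dm-0 OPNFILER,VIRTUAL-DISK
size=40G features='0' hwhandler='0' wp=rw
EOF
# prints:
# mpathd 30G
# mpathc 40G
```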



9. Add the AFD discovery path


Switch to the grid user:

[root@rac2 ~]# su - grid


Add the storage path with afd_dsset:

[grid@rac2:/home/grid]$asmcmd afd_dsset '/dev/mapper/mpath*'


[grid@rac2:/home/grid]$asmcmd afd_dsget

AFD discovery string: /dev/mapper/mpath*


No AFD labels have been added yet, so the list is empty:

[grid@rac2:/home/grid]$asmcmd afd_lsdsk

There are no labelled devices.



At this point, all the commands from step 5 onward have been executed on every RAC node.


II. Migrating UDEV devices to AFD paths

1. Check the disk group holding the OCR and voting files


[root@rac1 ~]# ocrcheck -config

Oracle Cluster Registry configuration is :

         Device/File Name         :       +CRS

[root@rac1 ~]# crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   4e20265767f54f49bf12bd72f367217f (/dev/asm_crs) [CRS]

Located 1 voting disk(s).



2. Check the udev storage path behind the CRS disk group


[root@rac1 ~]# su - grid


[grid@rac1:/home/grid]$asmcmd lsdsk -G crs

Path

/dev/asm_crs



3. Stop the RAC cluster


[root@rac1 ~]# crsctl stop cluster -all


4. Migrate the udev devices to AFD

Label the CRS disk as asmcrs to move its path from the udev rule to AFD:

[grid@rac1:/home/grid]$asmcmd afd_label asmcrs /dev/mapper/mpathc --migrate       


The CRS disk group disk is now labelled:

[grid@rac1:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc


Note: because the disk group is already in use by ASM, the --migrate option is required for the transfer.


Label the second disk, which belongs to the DATA disk group:

[grid@rac1:/home/grid]$asmcmd afd_label asmdata /dev/mapper/mpathd --migrate    


List the AFD disks; both are now labelled:

[grid@rac1:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd



5. Scan for AFD devices on the remaining nodes


[grid@rac2:/home/grid]$asmcmd afd_scan


[grid@rac2:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd



6. Start the RAC cluster


[root@rac1 ~]# crsctl start cluster -all


7. Query ASM disk information from the ASM instance

The original udev ASM storage paths are still visible:


[root@rac1 ~]# asmcmd lsdsk

Path

AFD:ASMCRS

AFD:ASMDATA


SQL> col name for a10

SQL> col label for a10

SQL> col path for a15

SQL> select NAME,LABEL,PATH from V$ASM_DISK;


NAME       LABEL      PATH

---------- ---------- ---------------

           ASMDATA    /dev/asm_data  ----> former udev ASM path

           ASMCRS     /dev/asm_crs   ----> former udev ASM path

CRS_0000   ASMCRS     AFD:ASMCRS

DATA_0000  ASMDATA    AFD:ASMDATA



8. Modify the discovery path


[grid@rac1:/home/grid]$asmcmd dsget

parameter:/dev/asm*, AFD:*

profile:/dev/asm*,AFD:*


Keep only the AFD path:

[grid@rac1:/home/grid]$asmcmd dsset 'AFD:*'

[grid@rac1:/home/grid]$asmcmd dsget

parameter:AFD:*

profile:AFD:*


Querying again, the devices under the udev path are gone:

SQL> select NAME,LABEL,PATH from V$ASM_DISK;


NAME       LABEL      PATH

---------- ---------- ---------------

CRS_0000   ASMCRS     AFD:ASMCRS

DATA_0000  ASMDATA    AFD:ASMDATA



9. Remove the UDEV rules file


[root@rac1 ~]# ll -hrt /etc/udev/rules.d/

total 12K

-rw-r--r-- 1 root root 297 Nov  3 17:04 99-oracle-asmdevices.rules.old

-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules

-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules


After 99-oracle-asmdevices.rules was renamed, the udev disk devices can no longer be found:

[root@rac1 ~]# ll /dev/asm*

ls: cannot access /dev/asm*: No such file or directory


The disk groups using AFD are unaffected:

[grid@rac2:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd



This completes the configuration and loading of AFD.


III. ASM disk group dd formatting test

Oracle's AFD feature filters out "non-conforming" I/O: any I/O operation that does not originate from Oracle is discarded. The test below uses dd to overwrite an entire ASM disk group.


1. Add a disk group "asmtest" for the dd formatting experiment


[root@rac1 ~]# asmcmd afd_label asmtest /dev/mapper/mpathe

[root@rac1 ~]# asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd

ASMTEST                     ENABLED   /dev/mapper/mpathe



[root@rac1 ~]# su - grid


[grid@rac1:/home/grid]$sqlplus / as sysasm


Create the asmtest disk group:

SQL> create diskgroup asmtest external redundancy disk 'AFD:asmtest';


Diskgroup created.



2. Create a test tablespace asmtest and a test table


SQL>  create tablespace asmtest datafile '+asmtest' size 100m;

SQL>  create table afd (id number) tablespace asmtest;


SQL>  insert into afd values (1);


1 row created.


SQL> commit;


Commit complete.


SQL> select * from afd; 


        ID

----------

         1

         




3. Format with dd

Overwrite the entire "asmtest" disk group, i.e. /dev/mapper/mpathe:


[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe

dd: writing to ‘/dev/mapper/mpathe’: No space left on device

2097153+0 records in

2097152+0 records out

1073741824 bytes (1.1 GB) copied, 70.6 s, 15.2 MB/s
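The destructive pattern above can be rehearsed safely against a throwaway file instead of a real multipath device; `overwrite_demo` is a made-up name and the fake label bytes are only for illustration:

```shell
# Zero a scratch "disk" exactly the way the test zeroes /dev/mapper/mpathe,
# then show the first 4 bytes (hex) to prove the old contents are gone.
overwrite_demo() {
  scratch=$(mktemp)
  printf 'ORCLDISK' > "$scratch"        # pretend metadata at offset 0
  truncate -s 1M "$scratch"
  dd if=/dev/zero of="$scratch" bs=1M count=1 conv=notrunc 2>/dev/null
  head -c 4 "$scratch" | od -An -tx1 | tr -d ' \n'
  rm -f "$scratch"
}

overwrite_demo   # prints 00000000
```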



4. Create a tablespace again


SQL>  create tablespace asmtest2 datafile '+asmtest' size 100m;

SQL>  create table afd2 (id number) tablespace asmtest2;


SQL>  insert into afd2 values (2);


1 row created.


SQL> commit;


Commit complete.


SQL> select * from afd2;


        ID

----------

         2

SQL>  ALTER system checkpoint;


System altered.


No error even after the checkpoint: with filtering enabled, the dd writes were discarded by AFD.



5. Disable the AFD filter


[root@rac1 ~]# asmcmd afd_filter -d

Note: -d disables filtering, -e enables it.

[root@rac1 ~]# asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                     DISABLED   /dev/mapper/mpathc

ASMDATA                    DISABLED   /dev/mapper/mpathd

ASMTEST                    DISABLED   /dev/mapper/mpathe



6. Run the dd format again


[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe

dd: writing to ‘/dev/mapper/mpathe’: No space left on device

2097153+0 records in

2097152+0 records out

1073741824 bytes (1.1 GB) copied, 89.2804 s, 12.0 MB/s



7. Test from the database


SQL> insert into afd values (3);


1 row created.


SQL> commit;


Commit complete.


SQL> alter system checkpoint;

alter system checkpoint

*

ERROR at line 1:

ORA-03113: end-of-file on communication channel

Process ID: 21718

Session ID: 35 Serial number: 63358


The database instance has crashed.



After a restart attempt, the database can no longer be opened:


[oracle@rac1:/home/oracle]$sqlplus / as sysdba


SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 18 14:55:41 2019


Copyright © 1982, 2016, Oracle.  All rights reserved.


Connected to an idle instance.


SQL> startup 

ORA-39510: CRS error performing start on instance 'orcl1' on 'orcl'

CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac1'

CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac2'

CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac1' failed

CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac2' failed

CRS-0215: Could not start resource 'ora.orcl.db 1 1'.

clsr_start_resource:260 status:215

clsrapi_start_db:start_asmdbs status:215



IV. Further investigation

After AFD is configured, the labelled disks appear under /dev/oracleafd/disks:


[root@rac1 ~]# ll /dev/oracleafd/disks/

total 8

-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMCRS

-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMDATA


The content of each file is the path of the corresponding multipath device:

[root@rac1 ~]# cat /dev/oracleafd/disks/ASMCRS 

/dev/mapper/mpathc

[root@rac1 ~]# cat /dev/oracleafd/disks/ASMDATA 

/dev/mapper/mpathd
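Since each file under /dev/oracleafd/disks simply holds the backing device path, a one-line helper can map a label back to its device. The sketch simulates the directory with mktemp, because the real path exists only after AFD is configured; `afd_backing` is a made-up name:

```shell
# afd_backing LABEL [DIR] -> print the block device behind an AFD label.
afd_backing() {
  cat "${2:-/dev/oracleafd/disks}/$1"
}

# Simulate the layout shown above:
d=$(mktemp -d)
printf '/dev/mapper/mpathc\n' > "$d/ASMCRS"
afd_backing ASMCRS "$d"   # prints /dev/mapper/mpathc
rm -rf "$d"
```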



An AFD rules file also appears under the udev rules directory:


[grid@rac1:/home/grid]$ll -hrt /etc/udev/rules.d/

total 12K

-rw-r--r-- 1 root root 297 Nov  3 17:04 99-oracle-asmdevices.rules.old

-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules

-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules



[grid@rac1:/home/grid]$cat /etc/udev/rules.d/53-afd.rules 

#

# AFD devices

KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"

KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"

KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"
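A rough lint for such a rules file: print every non-comment line that does not carry the full KERNEL/OWNER/GROUP/MODE shape shown above. `bad_afd_rules` is a made-up name, and the check is a sketch rather than udev's own parser:

```shell
# Print non-comment lines that do not look like a complete AFD udev rule;
# empty output means the file passes this rough check.
bad_afd_rules() {
  grep -Ev '^(#|$)' |
    grep -vE '^KERNEL=="[^"]+", OWNER="[^"]+", GROUP="[^"]+", MODE="[0-9]{4}"$' || true
}

bad_afd_rules <<'EOF'
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"
EOF
# (no output: every rule line is well-formed)
```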


V. Summary

Oracle's AFD feature redirects "dangerous" I/O operations away from ASM disks. How exactly it does this is not documented, but at bottom it still loads the disk devices through udev rules in the kernel.






Oracle 12c R2 New Feature: Automatic ASMFD Configuration

Overview

ASMFD was introduced back in 12.1. It removes the need to configure ASM disks manually and, more importantly, protects the disks from being overwritten by non-Oracle operations such as the dd and echo commands.

For a more detailed introduction, see the official documentation:

-- 12.1 new feature: ASMFD

https://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729

 

In 12.2, ASMFD was enhanced: it can configure disks automatically. After a single command, a disk is ready for ASM to use, which is very convenient.

Example

2.1 Create the directory

[root@rac1 software]# mkdir -p /u01/app/12.2.0/grid
[root@rac1 software]# chown grid:oinstall /u01/app/12.2.0/grid

2.2 Unzip the GRID software

Unzip as the grid user:

[grid@rac1 software]# cd /u01/app/12.2.0/grid
[grid@rac1 grid]# unzip -q /software/grid_home_image.zip

2.3 Configure shared disks for ASMFD

2.3.1 Log in as root and set ORACLE_HOME and ORACLE_BASE

[root@rac1 grid]# su - root
[root@rac1 grid]# export ORACLE_HOME=/u01/app/12.2.0/grid
[root@rac1 grid]# export ORACLE_BASE=/tmp
[root@rac1 grid]# echo $ORACLE_BASE
/tmp
[root@rac1 grid]# echo $ORACLE_HOME
/u01/app/12.2.0/grid

2.3.2 Use ASMCMD to provision disks for ASMFD

Initializing a disk as shown below means there is no need to bind it and grant permissions via udev or ASMLIB:

[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA1 /dev/sdb --init
[root@rac1 grid]#

2.3.3 Verify that the disk is labelled for ASMFD use

[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdb
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                /dev/sdb

The GRID installation that follows can then use this disk (/dev/sdb) directly.

Note: the name /dev/sdb may change after a reboot, which is why Oracle binds the disk with a label.

2.3.4 Reset ORACLE_BASE

unset ORACLE_BASE

2.4 Start the GRID installation

./gridSetup.sh

For more information, see the official documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tdprc/installing-oracle-grid.html#GUID-72D1994F-E838-415A-8E7D-30EA780D74A8


Operating systems supported by ASMFD (ASM Filter Driver)

ASMFD 12.1.0.2 Supported Platforms

(RHCK = RedHat Compatible Kernel; UEK = Unbreakable Enterprise Kernel)

Vendor               Version  Update/Kernel                                              Arch    Bug or PSU
--------------------------------------------------------------------------------------------------------------------------------
Oracle Linux (RHCK)  5        Update 3 and later, 2.6.18 kernel series                   X86_64  12.1.0.2.3 DB PSU Patch
Oracle Linux (UEK)   5        Update 3 and later, 2.6.39-100 and later UEK 2.6.39        X86_64  12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux (RHCK)  6        All Updates, 2.6.32-71 and later 2.6.32 RHCK kernels       X86_64  12.1.0.2.3 DB PSU Patch
Oracle Linux (UEK)   6        All Updates, 2.6.39-100 and later UEK 2.6.39 kernels       X86_64  12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux (UEK)   6        All Updates, 3.8.13-13 and later UEK 3.8.13 kernels        X86_64  12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux (UEK)   6        All Updates, 4.1.12 and later UEK 4.1.12 kernels           X86_64  12.1.0.2.160719 (Base Bug 22810422)
Oracle Linux (RHCK)  7        Update 0, RHCK 3.10.0-123                                  X86_64  12.1.0.2.3 (Base Bug 18321597)
Oracle Linux (RHCK)  7        Update 1 and later, 3.10.0-229 and later RHCK kernels      X86_64  12.1.0.2.160119 (Base Bug 21233961)
Oracle Linux (UEK)   7        All Updates, 3.8.13-35 and later UEK 3.8.13 kernels        X86_64  12.1.0.2.3 (Base Bug 18321597) (See Note 1)
Oracle Linux (UEK)   7        All Updates, 4.1.12 and later UEK 4.1.12 kernels           X86_64  12.1.0.2.160719 (Base Bug 22810422)
RedHat Linux         5        Update 3 and later, 2.6.18 kernel series                   X86_64  12.1.0.2.3 DB PSU Patch
RedHat Linux         6        All Updates, 2.6.32-279 and later RedHat kernels           X86_64  12.1.0.2.3 DB PSU Patch
RedHat Linux         7        Update 0, 3.10.0-123 kernel                                X86_64  12.1.0.2.3 (Base Bug 18321597)
RedHat Linux         7        Update 1 and later, 3.10.0-229 and later RedHat kernels    X86_64  12.1.0.2.160119 (Base Bug 21233961)
RedHat Linux         7        Update 4                                                   X86_64  12.1.0.2.170718 ACFSPSU + Patch 26247490
Novell SLES          11       SP2                                                        X86_64  Base
Novell SLES          11       SP3                                                        X86_64  Base
Novell SLES          11       SP4                                                        X86_64  12.1.0.2.160419 (Base Bug 21231953)
Novell SLES          12       SP1                                                        X86_64  12.1.0.2.170117 ACFSPSU (Base Bug 23321114)
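For scripting against this matrix, a few rows can be collapsed into a case-based lookup. `min_psu` and its distro keys are made up, and only a handful of rows from the table are carried over:

```shell
# min_psu DISTRO VERSION -> the required PSU/patch from the table above.
min_psu() {
  case "$1:$2" in
    ol-rhck:5|ol-rhck:6|rhel:5|rhel:6) echo "12.1.0.2.3 DB PSU Patch" ;;
    rhel:7.4)  echo "12.1.0.2.170718 ACFSPSU + Patch 26247490" ;;
    sles:12.1) echo "12.1.0.2.170117 ACFSPSU (Base Bug 23321114)" ;;
    *)         echo "unknown"; return 1 ;;
  esac
}

min_psu rhel 7.4   # prints 12.1.0.2.170718 ACFSPSU + Patch 26247490
```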

 

12c New Feature: ASMFD

Starting with 12.1.0.2, Oracle introduced ASMFD (ASM Filter Driver), which is available only on Linux. After installing Grid Infrastructure you can decide whether to configure it. If you previously used ASMLIB (roughly, labelling devices to identify disks) or udev (dynamic device management), then after migrating to ASMFD you must uninstall ASMLIB or disable the udev rules. The filter driver screens out invalid requests, preventing accidental overwrites by non-Oracle I/O and thereby keeping the system safe and stable.


The official documentation describes ASMFD as follows:

This feature is available on Linux systems starting with Oracle Database 12c Release 1 (12.1.0.2).

Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks. After installation of Oracle Grid Infrastructure, you can optionally configure Oracle ASMFD for your system. If ASMLIB is configured for an existing Oracle ASM installation, then you must explicitly migrate the existing ASMLIB configuration to Oracle ASMFD.

The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.


The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.


This article uses an Oracle Restart environment (tested on 12.1.0.2.0) to show how to install and configure ASMFD: first install GI (Install Software Only), then configure ASMFD, label the ASMFD disks, create the ASM instance, create an ASM disk group on the AFD disks, and create an spfile and migrate it into the disk group. Finally, the filter is tested both enabled and disabled.
For details see: http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm

Configure Oracle Restart (SIHA):
[root@db1 ~]# /orgrid/oracle/product/121/root.sh 
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= orgrid
    ORACLE_HOME=  /orgrid/oracle/product/121

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install  /orgrid/oracle/product/121/crs/install/roothas.pl    # run this script to configure HAS; it does not require a GUI

To configure Grid Infrastructure for a Cluster execute the following command as orgrid user:

/orgrid/oracle/product/121/crs/config/config.sh 

Install GI choosing software-only. To configure RAC instead, run the config.sh script (which must be run in GUI mode); it prompts for cluster and SCAN information. Try it if you are interested.

This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@db1 ~]# 
[root@db1 ~]# /orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install /orgrid/oracle/product/121/crs/install/roothas.pl
Using configuration parameter file: /orgrid/oracle/product/121/crs/install/crsconfig_params
LOCAL ADD MODE 
Creating OCR keys for user 'orgrid', privgrp 'asmadmin'..
Operation successful.
LOCAL ONLY MODE 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node db1 successfully pinned.
2016/05/16 22:10:54 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

db1     2016/05/16 22:11:11     /orgrid/oracle/product/121/cdata/db1/backup_20160516_221111.olr     0     
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1'
CRS-2673: Attempting to stop 'ora.evmd' on 'db1'
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/05/16 22:12:19 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

Bind the disks with udev, reload the rules, and verify:
[root@db1 ~]# cd /etc/udev/rules.d/
[root@db1 rules.d]# cat 99-oracle-asmdevices.rules 
KERNEL=="sdb1",NAME="asmdisk1",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb2",NAME="asmdisk2",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb3",NAME="asmdisk3",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb4",NAME="asmdisk4",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc1",NAME="asmdisk5",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc2",NAME="asmdisk6",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc3",NAME="asmdisk7",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc4",NAME="asmdisk8",OWNER="orgrid",GROUP="asmadmin",MODE="0660"  
[root@db1 rules.d]# 
[root@db1 rules.d]# udevadm control --reload-rules
[root@db1 rules.d]# udevadm trigger
[root@db1 rules.d]# ls -l /dev/asmdisk*
brw-rw---- 1 orgrid asmadmin 8, 17 May 16 23:03 /dev/asmdisk1
brw-rw---- 1 orgrid asmadmin 8, 18 May 16 23:03 /dev/asmdisk2
brw-rw---- 1 orgrid asmadmin 8, 19 May 16 23:03 /dev/asmdisk3
brw-rw---- 1 orgrid asmadmin 8, 20 May 16 23:03 /dev/asmdisk4
brw-rw---- 1 orgrid asmadmin 8, 33 May 16 23:03 /dev/asmdisk5
brw-rw---- 1 orgrid asmadmin 8, 34 May 16 23:03 /dev/asmdisk6
brw-rw---- 1 orgrid asmadmin 8, 35 May 16 23:03 /dev/asmdisk7
brw-rw---- 1 orgrid asmadmin 8, 36 May 16 23:03 /dev/asmdisk8
[root@db1 rules.d]#
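Rule files like 99-oracle-asmdevices.rules above follow a fixed pattern, so they can be generated from a device list instead of typed by hand (a sketch using the kernel names from this example):

```shell
# Generate udev rules in the style of 99-oracle-asmdevices.rules from an
# ordered list of kernel device names.
rules=$(
  i=1
  for dev in sdb1 sdb2 sdb3 sdb4 sdc1 sdc2 sdc3 sdc4; do
    printf 'KERNEL=="%s",NAME="asmdisk%d",OWNER="orgrid",GROUP="asmadmin",MODE="0660"\n' "$dev" "$i"
    i=$((i + 1))
  done
)
echo "$rules"
```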

For more about udev, see http://www.ibm.com/developerworks/cn/linux/l-cn-udev/


Check whether ASMFD is installed:
[root@db1 ~]# export ORACLE_HOME=/orgrid/oracle/product/121
[root@db1 ~]# export ORACLE_SID=+ASM
[root@db1 ~]# export PATH=$ORACLE_HOME/bin:$PATH

[root@db1 ~]#  $ORACLE_HOME/bin/asmcmd afd_state

Connected to an idle instance.

ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'db1'
[root@db1 ~]# 

Install ASMFD (the CRS (RAC) / HAS (SIHA) stack must be stopped first):
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
ASMCMD-9523: command cannot be used when Oracle Clusterware stack is up
[root@db1 ~]# crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1'
CRS-2673: Attempting to stop 'ora.evmd' on 'db1'
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
[root@db1 ~]#

Check the ASMFD state:
[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'db1'
[root@db1 ~]# /orgrid/oracle/product/121/bin/crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@db1 ~]#
[orgrid@db1 ~]$
[orgrid@db1 bin]$ pwd
/orgrid/oracle/product/121/bin
[orgrid@db1 bin]$ ls -ltr afd*
-rwxr-x--- 1 orgrid asmadmin     1000 May 23  2014 afdroot
-rwxr-xr-x 1 orgrid asmadmin 72836515 Jul  1  2014 afdboot
-rwxr-xr-x 1 orgrid asmadmin   184403 Jul  1  2014 afdtool.bin
-rwxr-x--- 1 orgrid asmadmin      766 May 16 23:29 afdload
-rwxr-x--- 1 orgrid asmadmin     1254 May 16 23:29 afddriverstate
-rwxr-xr-x 1 orgrid asmadmin     2829 May 16 23:29 afdtool
[root@db1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[root@db1 ~]#
After a successful installation you can see several afd* binaries, as well as the resource ora.driver.afd.

Label the disks with afd_label:
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK1 /dev/asmdisk1
Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK2 /dev/asmdisk2
Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
[orgrid@db1 bin]$ asmcmd
Connected to an idle instance.
ASMCMD> afd_label ASMDISK3 /dev/asmdisk3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
ASMDISK3                    ENABLED   /dev/asmdisk3
ASMCMD>

[root@db1 rules.d]# ls -ltr|tail -5
-rw-r--r--. 1 root root  789 Mar 10 05:18 70-persistent-cd.rules
-rw-r--r--. 1 root root  341 Mar 10 05:25 99-vmware-scsi-udev.rules
-rw-r--r--  1 root root  190 May 16 22:11 55-usm.rules
-rw-r--r--  1 root root  600 May 16 23:03 99-oracle-asmdevices.rules
-rw-r--r--  1 root root  230 May 17 00:31 53-afd.rules
[root@db1 rules.d]#
[orgrid@db1 rules.d]$ pwd
/etc/udev/rules.d
[root@db1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="orgrid", GROUP="asmadmin", MODE="0660"
[root@db1 rules.d]# cat 55-usm.rules
#
# ADVM devices
KERNEL=="asm/*",      GROUP="asmadmin", MODE="0770"
KERNEL=="asm/.*",     GROUP="asmadmin", MODE="0770"
#
# ACFS devices
KERNEL=="ofsctl",     GROUP="asmadmin", MODE="0664"
[root@db1 rules.d]#

After installation, a few extra files appear under the udev rules directory; ASMFD in fact still uses udev.

Create the ASM instance (asmca can also do this):
[orgrid@db1 dbs]$ srvctl add asm
[orgrid@db1 dbs]$ ps -ef|grep pmon
orgrid    42414  36911  0 14:26 pts/2    00:00:00 grep pmon
[orgrid@db1 dbs]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               OFFLINE OFFLINE      db1                      STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 dbs]$
[orgrid@db1 ~]$ cat $ORACLE_HOME/dbs/init*.ora
*.asm_power_limit=1
*.diagnostic_dest='/orgrid/grid_base'
*.instance_type='asm'
*.large_pool_size=12M
*.memory_target=1024M
*.remote_login_passwordfile='EXCLUSIVE'
[orgrid@db1 ~]$
[orgrid@db1 ~]$ ps -ef|grep pmon
orgrid    42724  42694  0 14:30 pts/2    00:00:00 grep pmon
[orgrid@db1 ~]$ srvctl start asm
[orgrid@db1 ~]$ ps -ef|grep pmon
orgrid    42807      1  0 14:30 ?        00:00:00 asm_pmon_+ASM
orgrid    42888  42694  0 14:31 pts/2    00:00:00 grep pmon
[orgrid@db1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               ONLINE  ONLINE       db1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       db1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 ~]$

Create a disk group with asmca:
[orgrid@db1 ~]$ asmca -silent  -sysAsmPassword oracle -asmsnmpPassword oracle -createDiskGroup -diskString 'AFD:*' -diskGroupName DATA_AFD -disk 'AFD:ASMDISK1' -disk 'AFD:ASMDISK2' -redundancy Normal -au_size 4  -compatible.asm 12.1 -compatible.rdbms 12.1
Disk Group DATA_AFD created successfully.
[orgrid@db1 ~]$ 

Create an spfile and migrate it into the disk group:
[orgrid@db1 ~]$ asmcmd spget
[orgrid@db1 ~]$
[orgrid@db1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue May 17 15:09:26 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> show parameter spf
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL> create spfile='+DATA_AFD' from pfile;
File created.
SQL> show parameter spf
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
[orgrid@db1 ~]$ asmcmd spget
+DATA_AFD/ASM/ASMPARAMETERFILE/registry.253.912092995
[orgrid@db1 ~]$

Back up and remove the udev rules file 99-oracle-asmdevices.rules

Rename 99-oracle-asmdevices.rules to 99-oracle-asmdevices.rules.bak. If the file is not moved aside, the disks previously labeled by ASMFD will no longer be visible after the next reboot.
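The rename step can be scripted. The sketch below demonstrates the backup on a scratch copy; in a real system the file lives in /etc/udev/rules.d/ and the rename must run as root:

```python
# Demonstrate the backup-then-remove step on a scratch copy of the rules file.
# In production the path is /etc/udev/rules.d/99-oracle-asmdevices.rules.
import os
import tempfile

tmpdir = tempfile.mkdtemp()
rules = os.path.join(tmpdir, "99-oracle-asmdevices.rules")
with open(rules, "w") as f:                  # stand-in rules content
    f.write('KERNEL=="sdb1", SYMLINK+="asm-disk1"\n')

os.rename(rules, rules + ".bak")             # mv ...rules ...rules.bak
print(sorted(os.listdir(tmpdir)))            # ['99-oracle-asmdevices.rules.bak']
```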
[orgrid@db1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
[root@db1 ~]# ls -l /dev/oracleafd/disks
total 0
[root@db1 ~]# ls -l /dev/oracleafd/
admin  disks/

Set the disk discovery string
ASMCMD> afd_dsget
AFD discovery string:
ASMCMD> afd_dsset '/dev/sd*'       -- set the ASMFD discovery string to the original physical device paths
ASMCMD> afd_dsget
AFD discovery string: '/dev/sd*'
ASMCMD>
[orgrid@db1 ~]$ asmcmd afd_dsget
AFD discovery string: '/dev/sd*'
[orgrid@db1 ~]$ asmcmd dsget       -- the ASM disk group discovery string is now AFD:*
parameter:AFD:*
profile:AFD:*
[orgrid@db1 ~]$

Reboot the server and verify
[root@db1 ~]# ls -l /dev/oracleafd/disks/
total 12
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK1
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK2
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK3
[root@db1 ~]#
ASMCMD> lsdsk  --candidate   
Path
AFD:ASMDISK2
AFD:ASMDISK3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD>
[orgrid@db1 ~]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 May 17 00:31 ASMDISK3 -> ../../sdb3
[orgrid@db1 ~]$
After the reboot, the devices managed by ASMFD are owned by root.

Enable the Filter feature
ASMCMD> help afd_filter
afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
ASMCMD>
ASMCMD> afd_filter -e /dev/sdb2
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> afd_filter -e
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/sdb1
ASMDISK2                    ENABLED   /dev/sdb2
ASMDISK3                    ENABLED   /dev/sdb3
ASMCMD>
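The afd_lsdsk listing has a fixed three-column layout, so checking that filtering is ENABLED on every labeled disk is easy to script. A minimal sketch, fed with the sample output from above:

```python
# Parse `asmcmd afd_lsdsk` output and verify every disk reports ENABLED.
sample = """\
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/sdb1
ASMDISK2                    ENABLED   /dev/sdb2
ASMDISK3                    ENABLED   /dev/sdb3
"""

def parse_afd_lsdsk(text):
    disks = []
    for line in text.splitlines():
        parts = line.split()
        # Data rows have exactly three fields: label, filtering state, device path.
        if len(parts) == 3 and parts[1] in ("ENABLED", "DISABLED"):
            disks.append({"label": parts[0], "filtering": parts[1], "path": parts[2]})
    return disks

disks = parse_afd_lsdsk(sample)
print(all(d["filtering"] == "ENABLED" for d in disks))  # True when filtering is on everywhere
```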

Create a new disk group DATA_PGOLD
SQL> create diskgroup DATA_PGOLD external redundancy disk 'AFD:ASMDISK3';
Diskgroup created.

SQL> 
[orgrid@db1 ~]$ kfed read AFD:ASMDISK3
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   771071217 ; 0x00c: 0x2df59cf1
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK3 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    843797321 ; 0x00c: 0x324b5349
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                ASMDISK3 ; 0x028: length=8
kfdhdb.grpname:              DATA_PGOLD ; 0x048: length=10
kfdhdb.fgname:                 ASMDISK2 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33035808 ; 0x0a8: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.crestmp.lo:           3231790080 ; 0x0ac: USEC=0x0 MSEC=0x4d SECS=0xa MINS=0x30
kfdhdb.mntstmp.hi:             33035808 ; 0x0b0: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3239631872 ; 0x0b4: USEC=0x0 MSEC=0x237 SECS=0x11 MINS=0x30
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2055 ; 0x0c4: 0x00000807
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33035808 ; 0x0e4: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3231717376 ; 0x0e8: USEC=0x0 MSEC=0x6 SECS=0xa MINS=0x30

Test with dd while the Filter feature is enabled
[root@db1 log]# dd if=/dev/zero of=/dev/sdb3
dd: writing to `/dev/sdb3': No space left on device
4209031+0 records in
4209030+0 records out
2155023360 bytes (2.2 GB) copied, 235.599 s, 9.1 MB/s
[root@db1 log]#

[root@db1 ~]# strings -a /dev/sdb3
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
0        
... (part of the output omitted)
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
[root@db1 ~]#
[root@db1 ~]#

Inspecting /dev/sdb3 with strings shows that its contents were not actually wiped: AFD filtered out the writes.
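What strings does here can be emulated in a few lines: it prints runs of four or more printable ASCII bytes. The sketch below runs against an in-memory buffer shaped like an ASM disk header rather than a real block device:

```python
# A minimal emulation of `strings -a`: extract runs of 4 or more printable
# ASCII bytes from a byte buffer.
import re

def ascii_strings(data, minlen=4):
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % minlen, data)]

# A toy buffer carrying the same markers seen in the real disk header above.
header = b"\x01\x82\x01\x01ORCLDISKASMDISK3\x00\x00ASMDISK3\x00DATA_PGOLD\x00"
print(ascii_strings(header))  # ['ORCLDISKASMDISK3', 'ASMDISK3', 'DATA_PGOLD']
```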


Unmounting and remounting the disk group works normally
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD> umount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
ASMCMD> mount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD>

The errors recorded in /var/log/messages
[root@db1 log]# tail -3 messages
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 start=8418038 seccnt=2  pstart=4209030  pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 start=8418040 seccnt=2  pstart=4209030  pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 s
[root@db1 log]#

Test with the Filter feature disabled
ASMCMD> afd_filter -d
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> exit
[orgrid@db1 ~]$

Back up the first 1024 bytes of the disk and then wipe them; note that an ordinary user has no permission to read the device
[orgrid@db1 ~]$ dd if=/dev/sdb3 of=block1 bs=1024 count=1
dd: opening `/dev/sdb3': Permission denied
[orgrid@db1 ~]$ exit
logout
[root@db1 ~]# dd if=/dev/sdb3 of=block1 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00236493 s, 433 kB/s
[root@db1 ~]# dd if=/dev/zero of=/dev/sdb3 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000458225 s, 2.2 MB/s
[root@db1 ~]# su - orgrid
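The backup/wipe/restore cycle performed with dd above can be simulated safely on a scratch file standing in for /dev/sdb3:

```python
# Simulate the dd round-trip: save the first 1024 bytes, zero them,
# then restore from the backup (dd ... conv=notrunc).
import os, tempfile

BS = 1024
fd, disk = tempfile.mkstemp()
os.close(fd)
with open(disk, "wb") as f:            # fake disk with a recognizable header
    f.write(b"ORCLDISKASMDISK3".ljust(BS, b"\xab") + b"\xcd" * BS)

with open(disk, "rb") as f:            # dd if=disk of=block1 bs=1024 count=1
    block1 = f.read(BS)

with open(disk, "r+b") as f:           # dd if=/dev/zero of=disk bs=1024 count=1
    f.write(b"\x00" * BS)

with open(disk, "r+b") as f:           # dd if=block1 of=disk bs=1024 conv=notrunc
    f.write(block1)

with open(disk, "rb") as f:
    print(f.read(BS) == block1)        # True: header restored intact
os.remove(disk)
```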

Unmount and remount disk group DATA_PGOLD
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD> umount data_pgold
ASMCMD> mount data_pgold
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA_PGOLD" cannot be mounted
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute)
ASMCMD> 
As shown, once the Filter feature is disabled, the protection is lost

Repair with kfed
[root@db1 ~]# /orgrid/oracle/product/121/bin/kfed repair /dev/sdb3
[root@db1 ~]# su - orgrid
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
ASMCMD> mount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD>

Repair using the block backed up earlier with dd
[root@db1 ~]# dd if=block1 of=/dev/sdb2 bs=1024 count=1 conv=notrunc
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000467297 s, 2.2 MB/s
[root@db1 ~]# su - orgrid
[orgrid@db1 ~]$ asmcmd
ASMCMD> mount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/

ASMCMD> exit
[orgrid@db1 ~]$

Add an AFD disk; an ordinary user has no permission to label disks, so root must be used
ASMCMD> help afd_label
afd_label
        To set the given label to the specified disk
ASMCMD>
[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
ORA-15227: could not perform label set/clear operation
ORA-15031: disk specification '/dev/sdb4' matches no disks (DBD ERROR: OCIStmtExecute)
ASMCMD-9513: ASM disk label set operation failed.
[root@db1 ~]# /orgrid/oracle/product/121/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
Connected to an idle instance.
[root@db1 ~]#

Common questions and answers
Q: afd_configure fails with ASMCMD-9524: AFD configuration failed 'ERROR: OHASD start failed'
A: Apply patch p19035573_121020_Generic.zip, which is essentially an updated asmcmdsys.pm file.

Q: When is afd_label --migrate needed?
A: When migrating an existing disk group to ASMFD, add the --migrate option; otherwise it is not required.


Reference

Configure ASMFD

http://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729

http://docs.oracle.com/database/121/OSTMG/GUID-BB2B3A64-4B83-4A6D-816C-6472FAF9B27A.htm#OSTMG95909
Configure in Restart

http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm

http://www.ibm.com/developerworks/cn/linux/l-cn-udev/

https://wiki.archlinux.org/index.php/Udev#Setting_static_device_names

ASMFD 12.2.0.1 Supported Platforms

Vendor                 Ver  Update/Kernel                                          Arch                Bug or PSU
=====================  ===  =====================================================  ==================  ==============================
Oracle Linux (RHCK)    6    All Updates, 2.6.32-71 and later 2.6.32 RHCK kernels   x86_64              Base
Oracle Linux (UEK)     6    All Updates, 2.6.39-100 and later UEK 2.6.39 kernels   x86_64              Base
Oracle Linux (UEK)     6    All Updates, 3.8.13-13 and later UEK 3.8.13 kernels    x86_64              Base
Oracle Linux (UEK)     6    All Updates, 4.1 and later UEK 4.1 kernels             x86_64              Base
Oracle Linux (RHCK)    7    GA release, 3.10.0-123 through 3.10.0-513              x86_64              Base
Oracle Linux (RHCK)    7    Update 3, 3.10.0-514 and later                         x86_64              Base + Patch 25078431
Oracle Linux (RHCK)    7    Update 4, 3.10.0-663 and later RHCK kernels            x86_64              12.2.0.1.180116 (Bug 26247490)
Oracle Linux (UEK)     7    All Updates, 3.8.13-35 and later UEK 3.8.13 kernels    x86_64              Base
Oracle Linux (UEK)     7    All Updates, 4.1 and later UEK 4.1 kernels             x86_64              Base
RedHat Linux           6    All Updates, 2.6.32-279 and later RedHat kernels       x86_64              Base
RedHat Linux           7    GA release, 3.10.0-123 through 3.10.0-513              x86_64              Base
RedHat Linux           7    Update 4, 3.10.0-663 and later RHCK kernels            x86_64              12.2.0.1.180116 (Bug 26247490)
Novell SLES            12   GA, SP1                                                x86_64              Base
Solaris                10   Update 10 or later                                     x86_64 and SPARC64  Base
Solaris                11   Update 10 and later                                    x86_64 and SPARC64  Base

(RHCK = RedHat Compatible Kernel, UEK = Unbreakable Enterprise Kernel)
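The RHEL-compatible 7.x rows above all hinge on the 3.10.0 build number. As a sketch of the table's logic (not an official check), a helper can classify a kernel release string:

```python
# Classify a RHEL-compatible 3.10.0 kernel against the 12.2.0.1 support table:
# builds 123..513 run on the base release, 514..662 need Patch 25078431,
# and 663 or later needs the 12.2.0.1.180116 update.
def afd_requirement(release):
    # e.g. "3.10.0-693.el7.x86_64" -> build number 693
    version, rest = release.split("-", 1)
    if version != "3.10.0":
        return "not covered by this table"
    build = int(rest.split(".")[0])
    if build < 514:
        return "Base"
    if build < 663:
        return "Base + Patch 25078431"
    return "12.2.0.1.180116 (Bug 26247490)"

print(afd_requirement("3.10.0-693.el7.x86_64"))  # the RHEL 7.4 kernel seen in the next section
```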


Enabling AFD (Oracle ASMFD) on RHEL/CentOS 7.4 and later

On RHEL or CentOS 7.4 and later, afd_configure fails with "AFD is not supported". Per MOS, kernels at 7.4 and above require upgrading the kmod package.


1. Load AFD


[root@rac1 ~]# asmcmd afd_configure

ASMCMD-9520: AFD is not 'supported' ----- only reports that AFD is unsupported, with no further detail


2. Check whether AFD supports the kernel version


[root@rac1 ~]# afdroot install

AFD-620: AFD is not supported on this operating system version: '3.10.0-693.el7.x86_64'


[root@rac1 ~]#  acfsdriverstate -orahome $ORACLE_HOME supported

ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-693.el7.x86_64'

ACFS-9201: Not Supported 



[root@rac1 ~]# uname -a       

Linux rac1 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux

---- the current kernel version is indeed unsupported


[root@rac1 ~]# cat /etc/redhat-release 

Red Hat Enterprise Linux Server release 7.4 (Maipo)


3. Check the kmod version


[root@rac2 ~]# rpm -qa|grep kmod

kmod-libs-20-15.el7.x86_64

kmod-20-15.el7.x86_64   ---- version 20-15
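Whether the installed kmod is new enough can be checked by comparing the version-release pair; the fix described below upgrades from 20-15 to 20-21. A small sketch:

```python
# Compare kmod version-release strings like "20-15" / "20-21" to see whether
# the installed package is at least the required level.
def vr_tuple(vr):
    version, release = vr.split("-")
    return (int(version), int(release.split(".")[0]))

installed, required = "20-15", "20-21"
print(vr_tuple(installed) >= vr_tuple(required))  # False: upgrade needed
```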


4. Upgrade kmod


[root@rac1 yum.repos.d]# yum install kmod

Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Local                                                                                           | 3.6 kB  00:00:00     

(1/2): Local/group_gz                                                                           | 166 kB  00:00:00     

(2/2): Local/primary_db                                                                         | 3.1 MB  00:00:00     

Resolving Dependencies

--> Running transaction check

---> Package kmod.x86_64 0:20-15.el7 will be updated

---> Package kmod.x86_64 0:20-21.el7 will be an update

--> Finished Dependency Resolution


Dependencies Resolved


=======================================================================================================================

 Package                   Arch                        Version                        Repository                  Size

=======================================================================================================================

Updating:

 kmod                      x86_64                      20-21.el7                      Local                      121 k


Transaction Summary

=======================================================================================================================

Upgrade  1 Package


Total download size: 121 k

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Warning: RPMDB altered outside of yum.

  Updating   : kmod-20-21.el7.x86_64                                                                               1/2 

  Cleanup    : kmod-20-15.el7.x86_64                                                                               2/2 

  Verifying  : kmod-20-21.el7.x86_64                                                                               1/2 

  Verifying  : kmod-20-15.el7.x86_64                                                                               2/2 


Updated:

  kmod.x86_64 0:20-21.el7                                                                                              


Complete!


5. Check kmod again


[grid@rac1:/home/grid]$rpm -qa|grep kmod

kmod-libs-20-15.el7.x86_64

kmod-20-21.el7.x86_64  ---> upgraded to version 20-21


6. Check the AFD driver state


[root@rac1 yum.repos.d]#  acfsdriverstate -orahome $ORACLE_HOME supported

ACFS-9200: Supported ---- after upgrading kmod, the AFD driver is supported


7. Reconfigure AFD

Load and configure AFD


[root@rac1 yum.repos.d]# asmcmd afd_configure

AFD loading output:


AFD-627: AFD distribution files found.

AFD-634: Removing previous AFD installation.

AFD-635: Previous AFD components successfully removed.

AFD-636: Installing requested AFD software.

AFD-637: Loading installed AFD drivers.

AFD-9321: Creating udev for AFD.

AFD-9323: Creating module dependencies - this may take some time.

AFD-9154: Loading 'oracleafd.ko' driver.

AFD-649: Verifying AFD devices.

AFD-9156: Detecting control device '/dev/oracleafd/admin'.

AFD-638: AFD installation correctness verified.

Modifying resource dependencies - this may take some time.


Query the AFD state


[root@rac1 yum.repos.d]# asmcmd afd_state

ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'


No errors reported; the configuration succeeded


MOS Doc ID 2303388.1:

ACFS and AFD report "Not Supported" after installing appropriate Oracle Grid Infrastructure Patches on RedHat (Doc ID 2303388.1)


About kmod

https://www.linux.org/docs/man8/kmod.html

The man page above gives a brief description of kmod: it manages Linux kernel modules. Users normally do not invoke kmod directly; other system commands call it.

In /sbin, kmod is the target of several symlinks, so many module-management commands are redirected to it:


[grid@rac1:/sbin]$pwd

/sbin

[grid@rac1:/sbin]$ls -alt | grep kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 insmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 lsmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 modinfo -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 modprobe -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 rmmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 depmod -> ../bin/kmod


Oracle ASM Filter Driver (ASMFD) – New Features for Oracle ASM 12.1.0.2

Author: 張樂奕

Published December 2014

What is Oracle ASM Filter Driver (ASMFD)?

In short, it is a new feature that can replace both ASMLIB and udev configuration, and it adds an I/O Filter capability, which is reflected in its name. ASMFD currently works only on Linux and requires the latest Oracle ASM, 12.1.0.2. Because Linux does not guarantee a stable discovery order for block devices, there is no guarantee after a reboot that the original /dev/sda is still sda, so raw device names cannot be used directly as ASM disk paths. ASMLIB addressed this by giving each device a fixed name via a label, which ASM then used to create its disks. Later, downloading ASMLIB began to require a ULN account, so everyone moved to udev instead, and I wrote several articles on setting up udev rules in Linux, for example:

How to use udev for Oracle ASM in Oracle Linux 6

Oracle Datafiles & Block Device & Parted & Udev

Oracle now offers ASMFD, which replaces both ASMLIB and the tedious hand-written udev rules in one step, and, most importantly, adds the I/O Filter capability.

What is the I/O Filter feature?

The documentation states:

The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

In other words: the feature rejects all invalid I/O requests. Its main purpose is to prevent accidental overwrites of the devices underlying ASM disks; the tests below show that even a full-disk zeroing with dd as root is filtered out.

So how is the feature enabled?

An existing ASM setup typically uses ASMLIB or udev-bound devices, so the steps below describe how to relabel those devices with new AFD device names.

--Confirm the state of the ASMFD module (AFD below): not yet installed.
[grid@dbserver1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
 
--Get the current ASM disk discovery string; here it uses udev-bound names
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*
 
--Set the ASM disk string, adding the new AFD path
--When this parameter is set, ASM validates the disks already mounted; if any of them is not covered by the new discovery string, an error is raised.
[grid@dbserver1 ~]$ asmcmd dsset AFD:*
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path '/dev/asm-disk7' is not in the discovery set (DBD ERROR: OCIStmtExecute)
 
--The new path must therefore be appended after the original one; the command runs for a while, depending on the number of ASM disks
[grid@dbserver1 ~]$ asmcmd dsset '/dev/asm*','AFD:*'
 
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*
 
--Check the nodes in the GI environment.
[grid@dbserver1 ~]$ olsnodes -a
dbserver1       Hub
dbserver2       Hub
 
--The following commands must be run on every Hub node, optionally in a rolling fashion. Some require root and some require grid; note the # and $ prompts.
--First stop CRS
[root@dbserver1 ~]# crsctl stop crs
 
--Run the AFD configure step; it unpacks the software, installs it, and loads the driver, which takes some time
[root@dbserver1 ~]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
 
--Afterwards, check the AFD state again; it now shows LOADED.
[root@dbserver1 ~]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
 
--Next, set AFD's own disk discovery string; this is very similar to the old ASMLIB workflow.
--CRS must be running before the AFD discovery string can be set, otherwise the errors below occur. The value is stored in each node's own OLR, so it must be set on every node.
[root@dbserver1 ~]# asmcmd afd_dsget
Connected to an idle instance.
ASMCMD-9511: failed to obtain required AFD disk string from Oracle Local Repository
[root@dbserver1 ~]#
[root@dbserver1 ~]# asmcmd afd_dsset '/dev/sd*'
Connected to an idle instance.
ASMCMD-9512: failed to update AFD disk string in Oracle Local Repository.
 
--Start CRS
[root@dbserver1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
 
--The ASM alert log still shows the disks mounted under their original paths, but libafd has been loaded successfully.
2014-11-20 17:01:04.545000 +08:00
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libafd12.so
ORACLE_BASE from environment = /u03/app/grid
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:9:3} */
NOTE: Diskgroup used for Voting files is:
  CRSDG
Diskgroup with spfile:CRSDG
NOTE: Diskgroup used for OCR is:CRSDG
NOTE: Diskgroups listed in ASM_DISKGROUP are
  DATADG
NOTE: cache registered group CRSDG 1/0xB8E8EA0B
NOTE: cache began mount (first) of group CRSDG 1/0xB8E8EA0B
NOTE: cache registered group DATADG 2/0xB8F8EA0C
NOTE: cache began mount (first) of group DATADG 2/0xB8F8EA0C
NOTE: Assigning number (1,2) to disk (/dev/asm-disk3)
NOTE: Assigning number (1,1) to disk (/dev/asm-disk2)
NOTE: Assigning number (1,0) to disk (/dev/asm-disk1)
NOTE: Assigning number (1,5) to disk (/dev/asm-disk10)
NOTE: Assigning number (1,3) to disk (/dev/asm-disk8)
NOTE: Assigning number (1,4) to disk (/dev/asm-disk9)
NOTE: Assigning number (2,3) to disk (/dev/asm-disk7)
NOTE: Assigning number (2,2) to disk (/dev/asm-disk6)
NOTE: Assigning number (2,1) to disk (/dev/asm-disk5)
NOTE: Assigning number (2,5) to disk (/dev/asm-disk12)
NOTE: Assigning number (2,0) to disk (/dev/asm-disk4)
NOTE: Assigning number (2,6) to disk (/dev/asm-disk13)
NOTE: Assigning number (2,4) to disk (/dev/asm-disk11)
 
--Set afd_ds to the underlying device names of the ASM disks, so hand-written udev rules are no longer needed.
[grid@dbserver1 ~]$ asmcmd afd_dsset '/dev/sd*'
 
[grid@dbserver1 ~]$ asmcmd afd_dsget
AFD discovery string: /dev/sd*
 
--I made a mistake here during testing and set the path to "dev/sd*", missing the leading slash, so no disks were discovered here. If this step already discovers disks in your test, let me know.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
 
--Again, every command up to this point must be run on all cluster nodes.
--Next, switch the ASM disk paths from udev to AFD
--First check the current paths
[root@dbserver1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :     +CRSDG
 
[root@dbserver1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4838a0ee7bfa4fbebf8ff9f58642c965 (/dev/asm-disk1) [CRSDG]
 2. ONLINE   72057097a36e4f02bfc7b5e23672e4cc (/dev/asm-disk2) [CRSDG]
 3. ONLINE   7906e2fb24d24faebf9b82bba6564be3 (/dev/asm-disk3) [CRSDG]
Located 3 voting disk(s).
 
[root@dbserver1 ~]# su - grid
[grid@dbserver1 ~]$ asmcmd lsdsk -G CRSDG
Path
/dev/asm-disk1
/dev/asm-disk10
/dev/asm-disk2
/dev/asm-disk3
/dev/asm-disk8
/dev/asm-disk9
 
--Because the disks holding the OCR are being changed, the cluster must be stopped first.
[root@dbserver1 ~]# crsctl stop cluster -all
 
--A direct relabel fails, because /dev/asm-disk1 is already provisioned for ASM.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1
Connected to an idle instance.
ASMCMD-9513: ASM disk label set operation failed.
disk /dev/asm-disk1 is already provisioned for ASM
 
--The migrate keyword is required for the relabel to succeed.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
Connected to an idle instance.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1
 
--My test ASM has 13 disks in total, so relabel each of them in turn.
asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
asmcmd afd_label asmdisk02 /dev/asm-disk2 --migrate
asmcmd afd_label asmdisk03 /dev/asm-disk3 --migrate
asmcmd afd_label asmdisk04 /dev/asm-disk4 --migrate
asmcmd afd_label asmdisk05 /dev/asm-disk5 --migrate
asmcmd afd_label asmdisk06 /dev/asm-disk6 --migrate
asmcmd afd_label asmdisk07 /dev/asm-disk7 --migrate
asmcmd afd_label asmdisk08 /dev/asm-disk8 --migrate
asmcmd afd_label asmdisk09 /dev/asm-disk9 --migrate
asmcmd afd_label asmdisk10 /dev/asm-disk10 --migrate
asmcmd afd_label asmdisk11 /dev/asm-disk11 --migrate
asmcmd afd_label asmdisk12 /dev/asm-disk12 --migrate
asmcmd afd_label asmdisk13 /dev/asm-disk13 --migrate
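The 13 relabel commands above follow one fixed pattern, so they can be generated rather than typed; printed here for review before running them via asmcmd:

```python
# Generate the relabel command for each of the 13 udev-bound disks.
cmds = [f"asmcmd afd_label asmdisk{i:02d} /dev/asm-disk{i} --migrate"
        for i in range(1, 14)]
print("\n".join(cmds))
```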
 
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK02                   ENABLED   /dev/asm-disk2
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK13                   ENABLED   /dev/asm-disk13
 
--On the other nodes no labeling is needed; a scan suffices, much like the ASMLIB workflow.
[grid@dbserver2 ~]$ asmcmd afd_scan
Connected to an idle instance.
[grid@dbserver2 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK13                   ENABLED   /dev/asm-disk13
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK02                   ENABLED   /dev/asm-disk2
 
--Restart the cluster
[root@dbserver1 ~]# crsctl start cluster -all
 
--The ASM alert log now shows the new names in use. The WARNING means AFD does not yet support Advanced Format disks: ordinary disks use 512-byte sectors, while Advanced Format uses 4K sectors.
2014-11-20 17:46:16.695000 +08:00
* allocate domain 1, invalid = TRUE
* instance 2 validates domain 1
NOTE: cache began mount (not first) of group CRSDG 1/0x508D0B98
NOTE: cache registered group DATADG 2/0x509D0B99
* allocate domain 2, invalid = TRUE
* instance 2 validates domain 2
NOTE: cache began mount (not first) of group DATADG 2/0x509D0B99
WARNING: Library 'AFD Library - Generic , version 3 (KABI_V3)' does not support advanced format disks
NOTE: Assigning number (1,0) to disk (AFD:ASMDISK01)
NOTE: Assigning number (1,1) to disk (AFD:ASMDISK02)
NOTE: Assigning number (1,2) to disk (AFD:ASMDISK03)
NOTE: Assigning number (1,3) to disk (AFD:ASMDISK08)
NOTE: Assigning number (1,4) to disk (AFD:ASMDISK09)
NOTE: Assigning number (1,5) to disk (AFD:ASMDISK10)
NOTE: Assigning number (2,0) to disk (AFD:ASMDISK04)
NOTE: Assigning number (2,1) to disk (AFD:ASMDISK05)
NOTE: Assigning number (2,2) to disk (AFD:ASMDISK06)
NOTE: Assigning number (2,3) to disk (AFD:ASMDISK07)
NOTE: Assigning number (2,4) to disk (AFD:ASMDISK11)
NOTE: Assigning number (2,5) to disk (AFD:ASMDISK12)
NOTE: Assigning number (2,6) to disk (AFD:ASMDISK13)
 
--Check the disk paths; everything is in the AFD style now.
[grid@dbserver1 ~]$ asmcmd lsdsk
Path
AFD:ASMDISK01
AFD:ASMDISK02
AFD:ASMDISK03
AFD:ASMDISK04
AFD:ASMDISK05
AFD:ASMDISK06
AFD:ASMDISK07
AFD:ASMDISK08
AFD:ASMDISK09
AFD:ASMDISK10
AFD:ASMDISK11
AFD:ASMDISK12
AFD:ASMDISK13
 
--The data dictionary, however, still shows the old disk paths.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
 
NAME                 LABEL                           PATH
-------------------- ------------------------------- ---------------------------------------------
                                                     /dev/asm-disk7
                                                     /dev/asm-disk6
                                                     /dev/asm-disk13
                                                     /dev/asm-disk12
                                                     /dev/asm-disk11
                                                     /dev/asm-disk4
                                                     /dev/asm-disk2
                                                     /dev/asm-disk9
                                                     /dev/asm-disk3
                                                     /dev/asm-disk5
                                                     /dev/asm-disk10
                                                     /dev/asm-disk8
                                                     /dev/asm-disk1
CRSDG_0000           ASMDISK01                       AFD:ASMDISK01
CRSDG_0001           ASMDISK02                       AFD:ASMDISK02
CRSDG_0002           ASMDISK03                       AFD:ASMDISK03
DATADG_0000          ASMDISK04                       AFD:ASMDISK04
DATADG_0001          ASMDISK05                       AFD:ASMDISK05
DATADG_0002          ASMDISK06                       AFD:ASMDISK06
DATADG_0003          ASMDISK07                       AFD:ASMDISK07
CRSDG_0003           ASMDISK08                       AFD:ASMDISK08
CRSDG_0004           ASMDISK09                       AFD:ASMDISK09
CRSDG_0005           ASMDISK10                       AFD:ASMDISK10
DATADG_0004          ASMDISK11                       AFD:ASMDISK11
DATADG_0005          ASMDISK12                       AFD:ASMDISK12
DATADG_0006          ASMDISK13                       AFD:ASMDISK13
 
26 rows selected.
 
-- The old paths must be removed from the ASM disk discovery string (note: this is not the same command as setting the AFD discovery path), keeping only the AFD path.
[grid@dbserver1 ~]$ asmcmd dsset 'AFD:*'
[grid@dbserver1 ~]$ asmcmd dsget
parameter:AFD:*
profile:AFD:*
 
-- Restart ASM once more, and everything is normal.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
 
NAME                 LABEL                           PATH
-------------------- ------------------------------- -------------------------------------------------------
CRSDG_0000           ASMDISK01                       AFD:ASMDISK01
CRSDG_0001           ASMDISK02                       AFD:ASMDISK02
CRSDG_0002           ASMDISK03                       AFD:ASMDISK03
DATADG_0000          ASMDISK04                       AFD:ASMDISK04
DATADG_0001          ASMDISK05                       AFD:ASMDISK05
DATADG_0002          ASMDISK06                       AFD:ASMDISK06
DATADG_0003          ASMDISK07                       AFD:ASMDISK07
CRSDG_0003           ASMDISK08                       AFD:ASMDISK08
CRSDG_0004           ASMDISK09                       AFD:ASMDISK09
CRSDG_0005           ASMDISK10                       AFD:ASMDISK10
DATADG_0004          ASMDISK11                       AFD:ASMDISK11
DATADG_0005          ASMDISK12                       AFD:ASMDISK12
DATADG_0006          ASMDISK13                       AFD:ASMDISK13
 
13 rows selected.
 
-- Cleanup: remove the original udev rules file. This must of course be done on every node. From the next server reboot onward, AFD takes over completely.
[root@dbserver1 ~]# mv /etc/udev/rules.d/99-oracle-asmdevices.rules ~oracle/

What else did we find?

As it turns out, AFD itself also uses udev.

[grid@dbserver1 ~]$ cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmdba", MODE="0660"
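In udev, every matching rule applies in file order and later assignments override earlier ones, so the per-disk nodes end up with mode 0660 while the admin/control node keeps 0770. A toy shell sketch of the effective result (the `perms_for` helper is made up here; shell `case` is first-match, so the most specific pattern is listed first, whereas udev reaches the same outcome by letting the later rule override):

```shell
# Effective owner/group/mode per device node, mimicking the rules above.
# (Hypothetical helper; not udev itself.)
perms_for() {
  case "$1" in
    oracleafd/disks/*) echo "grid:asmdba 0660" ;;  # labeled disk nodes
    oracleafd/*)       echo "grid:asmdba 0770" ;;  # admin/control nodes
    *)                 echo "unmatched" ;;
  esac
}
perms_for oracleafd/admin            # grid:asmdba 0770
perms_for oracleafd/disks/ASMDISK01  # grid:asmdba 0660
```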

Labeled disks can be found under /dev/oracleafd/disks.

[root@dbserver2 disks]# ls -l /dev/oracleafd/disks
total 52
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK01
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK02
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK03
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK04
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK05
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK06
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK07
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK08
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK09
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK10
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK11
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK12
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK13

There is one big difference here: all the disk files are now owned by root, and only root has write permission. Many articles claim this is AFD's filter feature in action, because neither the oracle nor the grid user can write directly to the ASM disks any more, which does add a layer of protection. For example, the following command fails with a permission error.

[oracle@dbserver1 disks]$ echo "do some evil" > ASMDISK99
-bash: ASMDISK99: Permission denied

But if you think that is all there is to AFD's protection, you are underestimating Oracle; mere file permissions would hardly justify the word "Filter" in the name. More on that below.

The mapping between AFD disks and the underlying block devices is also visible at the OS level.

[grid@dbserver1 /]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK01 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK02 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK03 -> ../../sde
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK04 -> ../../sdf
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK05 -> ../../sdg
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK06 -> ../../sdh
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK07 -> ../../sdi
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK08 -> ../../sdj
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK09 -> ../../sdk
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK10 -> ../../sdl
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK11 -> ../../sdm
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK12 -> ../../sdn
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK13 -> ../../sdo
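The label-to-device mapping can be recovered programmatically by resolving those symlinks with readlink; a minimal sketch, with a temporary directory standing in for /dev/disk/by-label:

```shell
# Resolve a label symlink back to its device name, exactly as one would
# against /dev/disk/by-label (a temp dir stands in for it here).
labdir=$(mktemp -d)
ln -s ../../sdc "$labdir/ASMDISK01"
dev=$(basename "$(readlink "$labdir/ASMDISK01")")
echo "$dev"    # sdc
rm -rf "$labdir"
```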

After another server reboot, the paths shown by afd_lsdsk have all changed to the underlying devices, but Filtering has become DISABLED. Do not worry that the Label-to-Path mapping differs from the earlier listings: some commands were run on node 1 and some on node 2, and this is itself a demonstration of AFD. No matter in which order each machine discovers the block devices, as long as a device carries an AFD label, the mapping is correct.

ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdd
ASMDISK02                  DISABLED   /dev/sde
ASMDISK03                  DISABLED   /dev/sdf
ASMDISK04                  DISABLED   /dev/sdg
ASMDISK05                  DISABLED   /dev/sdh
ASMDISK06                  DISABLED   /dev/sdi
ASMDISK07                  DISABLED   /dev/sdj
ASMDISK08                  DISABLED   /dev/sdk
ASMDISK09                  DISABLED   /dev/sdl
ASMDISK10                  DISABLED   /dev/sdm
ASMDISK11                  DISABLED   /dev/sdn
ASMDISK12                  DISABLED   /dev/sdo
ASMDISK13                  DISABLED   /dev/sdp
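For scripting, the tabular afd_lsdsk output is easy to reduce to label/path pairs; a hedged sketch using awk on a captured two-disk sample (the `sample` and `pairs` variable names are made up):

```shell
# Extract "label path" pairs from afd_lsdsk-style output, skipping the
# header and the '----'/'====' separator lines.
sample='Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdd
ASMDISK02                  DISABLED   /dev/sde'
pairs=$(printf '%s\n' "$sample" | awk 'NF==3 && $1!="Label" {print $1, $3}')
printf '%s\n' "$pairs"
```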

Finally, time to test the I/O Filter feature itself. I have been waiting for this!

Yes, this is the main event.

First, how to enable or disable the filter. In my tests, enabling or disabling it for an individual disk did not take effect; it could only be toggled at the node (global) level.

[grid@dbserver1 ~]$ asmcmd help afd_filter
afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
 
Synopsis
        afd_filter {-e | -d } [<disk-path>]
 
Description
        The options for afd_filter are described below
 
        -e      -  enable  AFD filtering mode
        -d      -  disable AFD filtering mode
 
Examples
        The following example uses afd_filter to enable AFD filtering
        on a given diskpath.
 
        ASMCMD [+] >afd_filter -e /dev/sdq
 
See Also
       afd_lsdsk afd_state

Enable the Filter feature.

[grid@dbserver1 ~]$ asmcmd afd_filter -e
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn

To be safe and avoid damaging my own test environment, I added a new disk for this test.

[root@dbserver1 ~]# asmcmd afd_label asmdisk99 /dev/sdo
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn
ASMDISK99                   ENABLED   /dev/sdo

Create a new disk group.

[grid@dbserver1 ~]$ sqlplus / as sysasm
SQL> create diskgroup DGTEST external redundancy disk 'AFD:ASMDISK99';
 
Diskgroup created.

First, read the disk header with kfed to verify that it is indeed intact.

[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:ORCLDISKASMDISK99 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    961237833 ; 0x00c: 0x394b5349
kfdhdb.driver.reserved[2]:           57 ; 0x010: 0x00000039
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:               ASMDISK99 ; 0x028: length=9
kfdhdb.grpname:                  DGTEST ; 0x048: length=6
kfdhdb.fgname:                ASMDISK99 ; 0x068: length=9

Now try zeroing the entire disk with dd. The dd command itself returns no error ("No space left on device" simply means it reached the end of the device).

[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 19.9602 s, 10.5 MB/s
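The dd statistics are internally consistent: 409600 full 512-byte records make exactly 200 MiB, which dd reports in decimal units as roughly 210 MB. A quick arithmetic check:

```shell
# Sanity-check dd's numbers: 409600 records x 512 bytes = 200 MiB,
# which in decimal units is ~209.7 MB (dd rounds this to "210 MB").
bytes=$((409600 * 512))
echo "$bytes bytes"                    # 209715200 bytes
echo "$((bytes / 1048576)) MiB"        # 200 MiB
echo "$((bytes / 1000000)) MB (truncated)"
```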

Then dismount and remount the disk group. If the disk had really been zeroed, the mount would certainly fail; instead, it mounts normally.

SQL>  ALTER diskgroup DGTEST dismount;
 
Diskgroup altered.
 
SQL>  ALTER diskgroup DGTEST mount;
 
Diskgroup altered.

Not convinced? Then create a tablespace, insert some data, and issue a checkpoint; everything still works.

SQL> CREATE tablespace test datafile '+DGTEST' SIZE 100M;
 
Tablespace created.
 
SQL> CREATE TABLE t_afd (n NUMBER) tablespace test;

Table created.
 
SQL> INSERT INTO t_afd VALUES(1);

1 row created.
 
SQL> commit;
 
Commit complete.
 
SQL> ALTER system checkpoint;
 
System altered.
 
SQL> SELECT COUNT(*) FROM t_afd;

  COUNT(*)
----------
         1

Strangely, though, reading /dev/sdo directly at the OS level at this point shows that it has been completely zeroed out.

[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000400
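The `*` line in that od output is od's own shorthand for a run of identical lines, easy to reproduce on an ordinary zero-filled file (a temp file stands in for the device here):

```shell
# Reproduce the od output shape on a plain zero-filled file: od collapses
# runs of identical lines into a single '*' marker.
zf=$(mktemp)
dd if=/dev/zero of="$zf" bs=256 count=1 2>/dev/null
out=$(od -c -N 256 "$zf")
printf '%s\n' "$out"
rm -f "$zf"
```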

The strings command likewise finds no meaningful characters at all.

[root@dbserver1 disks]# strings /dev/sdo
[root@dbserver1 disks]#

But do not be fooled by this illusion into thinking the disk has really been wiped. While dd runs, /var/log/messages fills with entries stating clearly that write I/O on ASM-managed devices is not supported; this is where the Filter actually earns its name.

afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=224)  not supported
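The (major=8/minor=224) pair in that log line identifies the device: major 8 is the sd driver, and each disk owns 16 minor numbers, so minor 224 maps to the 15th disk, /dev/sdo. A sketch of the arithmetic:

```shell
# Decode major=8/minor=224: for the sd driver each disk owns 16 minors;
# minor/16 is the disk index (0 -> sda), minor%16 the partition (0 = whole disk).
minor=224
idx=$((minor / 16)); part=$((minor % 16))
letter=$(echo abcdefghijklmnopqrstuvwxyz | cut -c $((idx + 1)))
echo "sd${letter} partition=${part}"   # sdo partition=0
```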

kfed, however, can still read the correct information.

[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:ORCLDISKASMDISK99 ; 0x000: length=17
......

Only after the server is rebooted does all the data come back (restarting only ASM or only the cluster still shows the zeroed data at the OS level). It is not yet clear what redirection technique Oracle uses to achieve this effect.

[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000 001 202 001 001  \0  \0  \0  \0  \0  \0  \0 200   u 177   D   I
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040   O   R   C   L   D   I   S   K   A   S   M   D   I   S   K   9
0000060   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000100  \0  \0 020  \n  \0  \0 001 003   A   S   M   D   I   S   K   9
0000120   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000140  \0  \0  \0  \0  \0  \0  \0  \0   D   G   T   E   S   T  \0  \0
0000160  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000200  \0  \0  \0  \0  \0  \0  \0  \0   A   S   M   D   I   S   K   9
0000220   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000240  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000300  \0  \0  \0  \0  \0  \0  \0  \0 022 257 367 001  \0   X  \0 247
0000320 022 257 367 001  \0   h 036 344  \0 002  \0 020  \0  \0 020  \0
0000340 200 274 001  \0 310  \0  \0  \0 002  \0  \0  \0 001  \0  \0  \0
0000360 002  \0  \0  \0 002  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000400
[root@dbserver1 ~]# 
[root@dbserver1 ~]# strings /dev/sdo | grep ASM
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMPARAMETERFILE
ASMPARAMETERBAKFILE
ASM_STALE

Finally, disable the Filter and test again.

[root@dbserver1 ~]# asmcmd afd_filter -d
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdb
ASMDISK02                  DISABLED   /dev/sdc
ASMDISK03                  DISABLED   /dev/sdd
ASMDISK04                  DISABLED   /dev/sde
ASMDISK05                  DISABLED   /dev/sdf
ASMDISK06                  DISABLED   /dev/sdg
ASMDISK07                  DISABLED   /dev/sdh
ASMDISK08                  DISABLED   /dev/sdi
ASMDISK09                  DISABLED   /dev/sdj
ASMDISK10                  DISABLED   /dev/sdk
ASMDISK11                  DISABLED   /dev/sdl
ASMDISK12                  DISABLED   /dev/sdm
ASMDISK13                  DISABLED   /dev/sdn
ASMDISK99                  DISABLED   /dev/sdo

Zero the entire disk with dd in the same way.

[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 4.46444 s, 47.0 MB/s

Remount the disk group: it fails as expected, and the disk group cannot be mounted.

SQL> alter diskgroup DGTEST dismount;
 
Diskgroup altered.
 
SQL> alter diskgroup DGTEST mount;
alter diskgroup DGTEST mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DGTEST" cannot be mounted
ORA-15040: diskgroup is incomplete

Restarting the database likewise fails: the database cannot open normally because the datafile in the tablespace no longer exists.

SQL> startup
ORACLE instance started.
 
Total System Global Area  838860800 bytes
Fixed Size                  2929936 bytes
Variable Size             385878768 bytes
Database Buffers          226492416 bytes
Redo Buffers                5455872 bytes
In-Memory Area            218103808 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01110: data file 15: '+DGTEST/CDB12C/DATAFILE/test.256.864163075'





About Me

........................................................................................................................

● Author: Xiaomaimiao (小麥苗); parts of the content were compiled from the web — please contact the author for removal in case of infringement

● This article is published simultaneously on itpub, cnblogs, CSDN, and the author's WeChat public account (xiaomaimiaolhr)

● itpub: http://blog.itpub.net/26736162

● cnblogs: http://www.cnblogs.com/lhrbest

● CSDN: https://blog.csdn.net/lihuarongaini

● PDF version, author profile, and cloud-drive link: http://blog.itpub.net/26736162/viewspace-1624453/

● Database written-test and interview question bank with answers: http://blog.itpub.net/26736162/viewspace-2134706/

● DBA Baodian on Toutiao: http://www.toutiao.com/c/user/6401772890/#mid=1564638659405826

........................................................................................................................


● Written in Xi'an, 2019-08-01 06:00 ~ 2019-08-31 24:00

● Last modified: 2019-08-01 06:00 ~ 2019-08-31 24:00

● The content comes from the author's study notes, partly compiled from the web; please excuse any infringement or inaccuracies

● All rights reserved; feel free to share this article, but please keep the attribution when reposting

........................................................................................................................




From the ITPUB blog: http://blog.itpub.net/26736162/viewspace-2653737/. Please credit the source when reposting; unauthorized reproduction may incur legal liability.
