Using OpenFiler to Simulate Storage: Configuring RAC ASM Shared Disks and Multipath for Testing

Posted by strivechao on 2018-06-22

Chapter 1 Overview

 

A previous post, 《Oracle_lhr_RAC 12cR1安裝》, used VMware's own storage rather than multipath. So the last task before the New Year was to study multipath. This article covers the configuration of OpenFiler, iSCSI, and multipath.

Contents of this article:

[Figure: article outline]

 

Chapter 2 Installing OpenFiler

Openfiler is developed on top of rPath Linux and is distributed as a standalone Linux operating system. It is an excellent storage-management OS, open source and free, that manages storage disks through a web interface. It supports the popular network storage technologies IP-SAN and NAS, and protocols such as iSCSI, NFS, SMB/CIFS, and FTP.

The software required for this OpenFiler installation is listed below:

No.  Type       Content

1    openfiler  openfileresa-2.99.1-x86_64-disc1.iso

Note: the author (xiaomaimiao) has uploaded this software to Tencent Weiyun (http://blog.itpub.net/26736162/viewspace-1624453/) for anyone to download. The author has also uploaded the pre-installed virtual machine, with the rlwrap software already integrated, to the cloud drive.

2.1  Installation

The detailed installation is not screenshotted here step by step; full walkthroughs are available online. 1GB of memory (or even less) is fine for OpenFiler, and the disk uses the IDE format. Since multipath will be configured later, two NICs are required. After installation finishes, reboot; the screen looks like this:


 

Note that the URL shown in the box can be opened directly in a browser. The root user can log in for user maintenance; storage maintenance can only be done as the openfiler user. Openfiler is managed remotely through a web interface; here the management URL is https://192.168.59.200:446. The initial administrative username is openfiler (lowercase) and the password is password, which can be changed after logging in.


 

2.2  Basic Configuration

2.2.1  NIC Configuration


Configure static IP addresses for the NICs:

[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0

# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]

DEVICE=eth0

BOOTPROTO=static

BROADCAST=192.168.59.255

HWADDR=00:0C:29:98:1A:CD

IPADDR=192.168.59.200

NETMASK=255.255.255.0

NETWORK=192.168.59.0

ONBOOT=yes

[root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

MTU=1500

USERCTL=no

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.2.200

NETMASK=255.255.255.0

HWADDR=00:0C:29:98:1A:D7

[root@OFLHR ~]# ip a

1: lo: mtu 16436 qdisc noqueue

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

    inet6 fe80::20c:29ff:fe98:1acd/64 scope link

       valid_lft forever preferred_lft forever

3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link

       valid_lft forever preferred_lft forever

[root@OFLHR ~]#

 

 

2.2.2  Adding a Disk

Add a 100GB IDE disk to serve as storage.


[root@OFLHR ~]# fdisk -l

 

Disk /dev/sda: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000adc2c

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *          63      610469      305203+  83  Linux

/dev/sda2          610470    17382329     8385930   83  Linux

/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

 

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/sdb doesn't contain a valid partition table

[root@OFLHR ~]#

 

 

2.3  iSCSI Target Configuration

The openfiler server has two disks: the 10GB disk holds the openfiler operating system, and the 100GB disk will be used for data storage.

2.3.1  Creating Logical Volumes

 

Login URL: https://192.168.59.200:446

Initial username and password: openfiler/password

 

On dedicated storage devices, the LUN (Logical Unit Number) is the most important basic unit. A LUN can be accessed by any host in the SAN, whether through an HBA or iSCSI. Even with software-based iSCSI, different operating systems can access a LUN using a software iSCSI initiator once the OS has booted. In OpenFiler, a LUN is called a Logical Volume (LV), so creating a LUN in OpenFiler means creating an LV.

Once OpenFiler is installed, the next step is to share its disks with virtual machines or other hosts on the network. In a standard SAN this is done at the RAID level, but the benefits and flexibility of a VG go beyond what RAID offers. The following shows how a VG is created in OpenFiler, step by step.

Steps to create a VG:

1) Enter the OpenFiler interface and select the physical disks to use.

2) Format the physical disks to be added as Physical Volumes (PV).

3) Create a VG and add the PV-formatted physical disks to it.

4) Once added, they form one large VG, which the system treats as a single large physical disk.

5) Create logical partitions (LUNs) in this VG, called Logical Volumes in OpenFiler.

6) Specify the file format of the LUN, such as iSCSI, ext3, or NFS, and format it.

7) For iSCSI, further configuration is needed; other file formats can be shared out directly as NAS.
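The web-UI steps above correspond to standard LVM commands. As a rough command-line sketch of what OpenFiler does behind the scenes (the device, VG, and LV names are the ones that appear later in this article; OpenFiler normally runs these for you):

```
# Step 2: format the data disk's partition as a Physical Volume
pvcreate /dev/sdb1

# Step 3: create the volume group and add the PV to it
vgcreate vmlhr /dev/sdb1

# Step 5: carve a 10G logical volume (LUN) out of the VG
lvcreate -L 10G -n lv01 vmlhr
```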

 

 

After logging in, click the Volumes tab.



Click create new physical volumes, then click /dev/sdb.


Click Reset at the bottom right of the page, then click Create. The partition type is Physical volume.


Click Volume Groups.


Enter a name, tick the checkbox for the physical volume, and click Add volume group.


[root@OFLHR ~]# fdisk -l

 

Disk /dev/sda: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000adc2c

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *          63      610469      305203+  83  Linux

/dev/sda2          610470    17382329     8385930   83  Linux

/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

 

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

 

 

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1   209715199   104857599+  ee  GPT

[root@OFLHR ~]# pvs

  PV         VG    Fmt  Attr PSize  PFree

  /dev/sdb1  vmlhr lvm2 a-   95.34g 95.34g

[root@OFLHR ~]#

 

 

Click Add Volume.


 

Fill in the fields, set the size to 10G, and select block (iSCSI, FC, etc.) as the volume type.


Create four logical volumes in this way:


[root@OFLHR ~]# vgs

  VG    #PV #LV #SN Attr   VSize  VFree

  vmlhr   1   4   0 wz--n- 95.34g 55.34g

[root@OFLHR ~]# pvs

  PV         VG    Fmt  Attr PSize  PFree

  /dev/sdb1  vmlhr lvm2 a-   95.34g 55.34g

[root@OFLHR ~]# lvs

  LV   VG    Attr   LSize  Origin Snap%  Move Log Copy%  Convert

  lv01 vmlhr -wi-a- 10.00g                                     

  lv02 vmlhr -wi-a- 10.00g                                     

  lv03 vmlhr -wi-a- 10.00g                                     

  lv04 vmlhr -wi-a- 10.00g                                     

[root@OFLHR ~]# fdisk -l

 

Disk /dev/sda: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000adc2c

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *          63      610469      305203+  83  Linux

/dev/sda2          610470    17382329     8385930   83  Linux

/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

 

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

 

 

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1   209715199   104857599+  ee  GPT

 

Disk /dev/dm-0: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/dm-0 doesn't contain a valid partition table

 

Disk /dev/dm-1: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/dm-1 doesn't contain a valid partition table

 

Disk /dev/dm-2: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/dm-2 doesn't contain a valid partition table

 

Disk /dev/dm-3: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/dm-3 doesn't contain a valid partition table

[root@OFLHR ~]#

 

2.3.2  Starting the iSCSI Target Service


On the Services tab, set iSCSI Target to Enabled and start the service (Start).

 

2.3.3  LUN Mapping


Return to the Volumes tab and click iSCSI Targets.


Click Add.

Select the LUN Mapping tab and click Map.


2.3.4  Network ACL

Since iSCSI runs over an IP network, the computers on the network must be allowed access by IP. The following shows how to connect OpenFiler's IP network with other hosts in the same subnet.

1. Go to System in OpenFiler and scroll straight to the bottom of the page.

2. Under Network Access Configuration, enter a name for this network access entry, e.g. VM_LHR.

3. Enter the host IP range. Note that a single host IP cannot be entered here, or nothing will be able to connect. Enter 192.168.59.0, meaning 192.168.59.1 through 192.168.59.254 can all connect.

4. Select 255.255.255.0 as the Netmask, choose Share in the Type drop-down, and click Update.


Click Update after making the selections.

The authorized subnet is now visible in OpenFiler.

 

iSCSI Targets中,點選 Network ACL 標籤


Set Access to Allow, then click Update.

The storage configuration is now complete.

2.3.5  /etc/initiators.deny

In /etc/initiators.deny, comment out the line iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL:

[root@OFLHR ~]# more /etc/initiators.deny  

 

# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!

#       This configuration file was autogenerated

#       by Openfiler. Any manual changes will be overwritten

#       Generated at: Sat Jan 21 1:49:55 CST 2017

 

 

#iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL

 

 

# End of Openfiler configuration

 

[root@OFLHR ~]#
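Openfiler pairs /etc/initiators.deny with /etc/initiators.allow. If you prefer an explicit whitelist instead of commenting out the deny line, an allow entry might look like the sketch below (the initiator IQN shown is the RAC node's IQN from later in this article; verify the exact allow-file syntax against your Openfiler version):

```
iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 iqn.1994-05.com.redhat:61d32512355
```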

 

 

Chapter 3 Configuring Shared Storage in RAC

 

3.1  Configuring iSCSI on the RAC Nodes

iSCSI (Internet Small Computer System Interface) was developed by IBM. It is a SCSI command set for hardware devices that runs on top of the IP protocol, allowing the SCSI protocol to run over IP networks and be routed over, for example, high-speed Gigabit Ethernet. iSCSI combines the existing SCSI interface with Ethernet so that servers can exchange data with storage devices over an IP network. It is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, and clients, and to build storage area networks (SANs).

iSCSI target: the storage side — the device holding the disks or RAID. A Linux host can also be set up to act as an iSCSI target. Its purpose is to provide "disks" for other hosts to use.

iSCSI initiator: the client side that uses the target, usually a server. A server that wants to connect to an iSCSI target must have the iSCSI initiator software installed before it can use the disks the target provides.

3.1.1  iSCSI target

[root@OFLHR ~]# service iscsi-target start

Starting iSCSI target service: [  OK  ]

[root@OFLHR ~]# more /etc/ietd.conf      

#####   WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT.  ##### 

 

 

Target iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

        HeaderDigest None

        DataDigest None

        MaxConnections 1

        InitialR2T Yes

        ImmediateData No

        MaxRecvDataSegmentLength 131072

        MaxXmitDataSegmentLength 131072

        MaxBurstLength 262144

        FirstBurstLength 262144

        DefaultTime2Wait 2

        DefaultTime2Retain 20

        MaxOutstandingR2T 8

        DataPDUInOrder Yes

        DataSequenceInOrder Yes

        ErrorRecoveryLevel 0

        Lun 0 Path=/dev/vmlhr/lv01,Type=blockio,ScsiSN=22llvD-CacO-MOMA,ScsiId=22llvD-CacO-MOMA,IOMode=wt

        Lun 1 Path=/dev/vmlhr/lv02,Type=blockio,ScsiSN=BgLpy9-u7PH-csDC,ScsiId=BgLpy9-u7PH-csDC,IOMode=wt

        Lun 2 Path=/dev/vmlhr/lv03,Type=blockio,ScsiSN=38KsSC-REKL-yPgW,ScsiId=38KsSC-REKL-yPgW,IOMode=wt

        Lun 3 Path=/dev/vmlhr/lv04,Type=blockio,ScsiSN=aN5blo-NyMp-L4Jl,ScsiId=aN5blo-NyMp-L4Jl,IOMode=wt

 

 

[root@OFLHR ~]# ps -ef|grep iscsi

root       937     2  0 01:01 ?        00:00:00 [iscsi_eh]

root       946     1  0 01:01 ?        00:00:00 iscsid

root       947     1  0 01:01 ?        00:00:00 iscsid

root     13827  1217  0 02:43 pts/1    00:00:00 grep iscsi

[root@OFLHR ~]# cat /proc/net/iet/volume

tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

        lun:0 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv01

        lun:1 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv02

        lun:2 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv03

        lun:3 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv04

[root@OFLHR ~]# cat /proc/net/iet/session

tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

[root@OFLHR ~]#

 

 

3.1.2  iSCSI initiator

3.1.2.1  Installing the iSCSI initiator

Install the iSCSI initiator on each of the two RAC nodes:

[root@raclhr-12cR1-N1 ~]# rpm -qa|grep iscsi

iscsi-initiator-utils-6.2.0.873-10.el6.x86_64

[root@raclhr-12cR1-N1 ~]#

 

 

If it is not installed, install it with yum install iscsi-initiator-utils*.

 

3.1.2.2  iscsiadm

The iSCSI initiator is managed mainly with the iscsiadm command. First, check which targets are available on the machine providing the iSCSI target service:

[root@raclhr-12cR1-N1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.59.200

[  OK  ] iscsid: [  OK  ]

192.168.59.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

192.168.2.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

[root@raclhr-12cR1-N1 ~]# ps -ef|grep iscsi

root      2619     2  0 11:32 ?        00:00:00 [iscsi_eh]

root      2651     1  0 11:32 ?        00:00:00 iscsiuio

root      2658     1  0 11:32 ?        00:00:00 iscsid

root      2659     1  0 11:32 ?        00:00:00 iscsid

root      2978 56098  0 11:33 pts/1    00:00:00 grep iscsi

[root@raclhr-12cR1-N1 ~]#

 

 

At this point you can see the ID and name of the iSCSI target created on the server. For this command, just remember that -p is followed by the address of the iSCSI service (a hostname also works); 3260 is the default service port.

You can then log in to a target; once the login succeeds, all the disks under that target are shared to this host:

[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev

Disk /dev/sda: 21.5 GB, 21474836480 bytes

/dev/sda1   *           1          26      204800   83  Linux

/dev/sda2              26        1332    10485760   8e  Linux LVM

/dev/sda3            1332        2611    10279936   8e  Linux LVM

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

/dev/sdb1               1        1306    10485760   8e  Linux LVM

/dev/sdb2            1306        2611    10485760   8e  Linux LVM

/dev/sdb3            2611        3917    10485760   8e  Linux LVM

/dev/sdb4            3917       13055    73399296    5  Extended

/dev/sdb5            3917        5222    10485760   8e  Linux LVM

/dev/sdb6            5223        6528    10485760   8e  Linux LVM

/dev/sdb7            6528        7834    10485760   8e  Linux LVM

/dev/sdb8            7834        9139    10485760   8e  Linux LVM

/dev/sdb9            9139       10445    10485760   8e  Linux LVM

/dev/sdb10          10445       11750    10485760   8e  Linux LVM

/dev/sdb11          11750       13055    10477568   8e  Linux LVM

Disk /dev/sde: 10.7 GB, 10737418240 bytes

Disk /dev/sdc: 6442 MB, 6442450944 bytes

Disk /dev/sdd: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

[root@raclhr-12cR1-N1 ~]# iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --portal 192.168.59.200:3260 --login

Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] (multiple)

Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] (multiple)

Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] successful.

Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] successful.

[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev

Disk /dev/sda: 21.5 GB, 21474836480 bytes

/dev/sda1   *           1          26      204800   83  Linux

/dev/sda2              26        1332    10485760   8e  Linux LVM

/dev/sda3            1332        2611    10279936   8e  Linux LVM

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

/dev/sdb1               1        1306    10485760   8e  Linux LVM

/dev/sdb2            1306        2611    10485760   8e  Linux LVM

/dev/sdb3            2611        3917    10485760   8e  Linux LVM

/dev/sdb4            3917       13055    73399296    5  Extended

/dev/sdb5            3917        5222    10485760   8e  Linux LVM

/dev/sdb6            5223        6528    10485760   8e  Linux LVM

/dev/sdb7            6528        7834    10485760   8e  Linux LVM

/dev/sdb8            7834        9139    10485760   8e  Linux LVM

/dev/sdb9            9139       10445    10485760   8e  Linux LVM

/dev/sdb10          10445       11750    10485760   8e  Linux LVM

/dev/sdb11          11750       13055    10477568   8e  Linux LVM

Disk /dev/sde: 10.7 GB, 10737418240 bytes

Disk /dev/sdc: 6442 MB, 6442450944 bytes

Disk /dev/sdd: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

Disk /dev/sdf: 10.7 GB, 10737418240 bytes

Disk /dev/sdi: 10.7 GB, 10737418240 bytes

Disk /dev/sdh: 10.7 GB, 10737418240 bytes

Disk /dev/sdl: 10.7 GB, 10737418240 bytes

Disk /dev/sdj: 10.7 GB, 10737418240 bytes

Disk /dev/sdg: 10.7 GB, 10737418240 bytes

Disk /dev/sdk: 10.7 GB, 10737418240 bytes

Disk /dev/sdm: 10.7 GB, 10737418240 bytes

 

 

Eight new disks show up here, yet only four LUNs were mapped in openfiler. Why eight rather than four? Because openfiler has two NICs and the iSCSI target was logged in to twice, once through each IP, so each disk appears twice.
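To confirm that two sd devices really are the same LUN reached over two portals, compare their SCSI identifiers and the by-path symlinks (a sketch; on RHEL 6 scsi_id is in /sbin, and sdf/sdg are the device names from this environment):

```
# The same WWID on both devices means the same LUN over two paths
/sbin/scsi_id --whitelisted --device=/dev/sdf
/sbin/scsi_id --whitelisted --device=/dev/sdg

# The by-path symlinks show which portal each device came through
ls -l /dev/disk/by-path/ | grep iscsi
```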

To view detailed information for each iSCSI session:

# iscsiadm -m session -P 3

[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# iscsiadm -m session -P 3

iSCSI Transport Class version 2.0-870

version 6.2.0-873.10.el6

Target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

        Current Portal: 192.168.59.200:3260,1

        Persistent Portal: 192.168.59.200:3260,1

                **********

                Interface:

                **********

                Iface Name: default

                Iface Transport: tcp

                Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

                Iface IPaddress: 192.168.59.160

                Iface HWaddress:

                Iface Netdev:

                SID: 1

                iSCSI Connection State: LOGGED IN

                iSCSI Session State: LOGGED_IN

                Internal iscsid Session State: NO CHANGE

                *********

                Timeouts:

                *********

                Recovery Timeout: 120

                Target Reset Timeout: 30

                LUN Reset Timeout: 30

                Abort Timeout: 15

                *****

                CHAP:

                *****

                username:

                password: ********

                username_in:

                password_in: ********

                ************************

                Negotiated iSCSI params:

                ************************

                HeaderDigest: None

                DataDigest: None

                MaxRecvDataSegmentLength: 262144

                MaxXmitDataSegmentLength: 131072

                FirstBurstLength: 262144

                MaxBurstLength: 262144

                ImmediateData: No

                InitialR2T: Yes

                MaxOutstandingR2T: 1

                ************************

                Attached SCSI devices:

                ************************

                Host Number: 4  State: running

                scsi4 Channel 00 Id 0 Lun: 0

                        Attached scsi disk sdg          State: running

                scsi4 Channel 00 Id 0 Lun: 1

                        Attached scsi disk sdj          State: running

                scsi4 Channel 00 Id 0 Lun: 2

                        Attached scsi disk sdk          State: running

                scsi4 Channel 00 Id 0 Lun: 3

                        Attached scsi disk sdm          State: running

        Current Portal: 192.168.2.200:3260,1

        Persistent Portal: 192.168.2.200:3260,1

                **********

                Interface:

                **********

                Iface Name: default

                Iface Transport: tcp

                Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

                Iface IPaddress: 192.168.2.100

                Iface HWaddress:

                Iface Netdev:

                SID: 2

                iSCSI Connection State: LOGGED IN

                iSCSI Session State: LOGGED_IN

                Internal iscsid Session State: NO CHANGE

                *********

                Timeouts:

                *********

                Recovery Timeout: 120

                Target Reset Timeout: 30

                LUN Reset Timeout: 30

                Abort Timeout: 15

                *****

                CHAP:

                *****

                username:

                password: ********

                username_in:

                password_in: ********

                ************************

                Negotiated iSCSI params:

                ************************

                HeaderDigest: None

                DataDigest: None

                MaxRecvDataSegmentLength: 262144

                MaxXmitDataSegmentLength: 131072

                FirstBurstLength: 262144

                MaxBurstLength: 262144

                ImmediateData: No

                InitialR2T: Yes

                MaxOutstandingR2T: 1

                ************************

                Attached SCSI devices:

                ************************

                Host Number: 5  State: running

                scsi5 Channel 00 Id 0 Lun: 0

                        Attached scsi disk sdf          State: running

                scsi5 Channel 00 Id 0 Lun: 1

                        Attached scsi disk sdh          State: running

                scsi5 Channel 00 Id 0 Lun: 2

                        Attached scsi disk sdi          State: running

                scsi5 Channel 00 Id 0 Lun: 3

                        Attached scsi disk sdl          State: running

[root@raclhr-12cR1-N1 ~]#

 

After logging in, the new disks can be partitioned, formatted, and then mounted.

After these commands complete, the iSCSI initiator records the information under /var/lib/iscsi:

/var/lib/iscsi/send_targets records each target, and /var/lib/iscsi/nodes records the nodes under each target. The next time the iSCSI initiator starts (service iscsi start), it will automatically log in to each target. To log in to the targets manually again, delete everything under /var/lib/iscsi/send_targets and /var/lib/iscsi/nodes.
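Instead of deleting those directories by hand, iscsiadm can perform the same cleanup directly; a sketch using the target name from this article:

```
# Log out of every session to this target
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --logout

# Remove the cached node records (stored under /var/lib/iscsi/nodes)
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 -o delete
```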

3.2  Multipath

3.2.1  Installing the multipath Software on Both RAC Nodes

1. Install the multipath packages:

[root@raclhr-12cR1-N1 ~]# mount /dev/sr0 /media/lhr/cdrom/

mount: block device /dev/sr0 is write-protected, mounting read-only

[root@raclhr-12cR1-N1 ~]# cd /media/lhr/cdrom/Packages/

[root@raclhr-12cR1-N1 Packages]# ll device-mapper-*.x86_64.rpm

-r--r--r-- 104 root root  168424 Oct 30  2013 device-mapper-1.02.79-8.el6.x86_64.rpm

-r--r--r-- 104 root root  118316 Oct 30  2013 device-mapper-event-1.02.79-8.el6.x86_64.rpm

-r--r--r-- 104 root root  112892 Oct 30  2013 device-mapper-event-libs-1.02.79-8.el6.x86_64.rpm

-r--r--r-- 104 root root  199924 Oct 30  2013 device-mapper-libs-1.02.79-8.el6.x86_64.rpm

-r--r--r--  95 root root  118892 Oct 25  2013 device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

-r--r--r--  95 root root  184760 Oct 25  2013 device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

-r--r--r--  96 root root 2444388 Oct 30  2013 device-mapper-persistent-data-0.2.8-2.el6.x86_64.rpm

[root@raclhr-12cR1-N1 Packages]# ll iscsi*

-r--r--r-- 101 root root 702300 Oct 29  2013 iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm

[root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper

device-mapper-persistent-data-0.2.8-2.el6.x86_64

device-mapper-1.02.79-8.el6.x86_64

device-mapper-event-libs-1.02.79-8.el6.x86_64

device-mapper-event-1.02.79-8.el6.x86_64

device-mapper-libs-1.02.79-8.el6.x86_64

[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-1.02.79-8.el6.x86_64.rpm

warning: device-mapper-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

Preparing...                ########################################### [100%]

        package device-mapper-1.02.79-8.el6.x86_64 is already installed

[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-event-1.02.79-8.el6.x86_64.rpm

warning: device-mapper-event-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

Preparing...                ########################################### [100%]

        package device-mapper-event-1.02.79-8.el6.x86_64 is already installed

[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

error: Failed dependencies:

        device-mapper-multipath-libs = 0.4.9-72.el6 is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

        libmpathpersist.so.0()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

        libmultipath.so()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

warning: device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

Preparing...                ########################################### [100%]

   1:device-mapper-multipath########################################### [100%]

[root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm   

warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

Preparing...                ########################################### [100%]

   1:device-mapper-multipath########################################### [100%]

[root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper

device-mapper-multipath-0.4.9-72.el6.x86_64

device-mapper-persistent-data-0.2.8-2.el6.x86_64

device-mapper-1.02.79-8.el6.x86_64

device-mapper-event-libs-1.02.79-8.el6.x86_64

device-mapper-event-1.02.79-8.el6.x86_64

device-mapper-multipath-libs-0.4.9-72.el6.x86_64

device-mapper-libs-1.02.79-8.el6.x86_64

[root@raclhr-12cR1-N2 Packages]#

 

 

rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

 

 

 

3.2.2  Starting multipath

Load the multipath modules into the kernel:

modprobe dm-multipath

modprobe dm-round-robin

 

Check that the modules are loaded:

[root@raclhr-12cR1-N1 Packages]# lsmod |grep multipath

dm_multipath           17724  1 dm_round_robin

dm_mod                 84209  16 dm_multipath,dm_mirror,dm_log

[root@raclhr-12cR1-N1 Packages]#

 

Set the multipath daemon (multipathd) to start at boot:

[root@raclhr-12cR1-N1 Packages]# chkconfig  --level 2345 multipathd on

[root@raclhr-12cR1-N1 Packages]#

[root@raclhr-12cR1-N1 Packages]# chkconfig  --list|grep multipathd

multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off

[root@raclhr-12cR1-N1 Packages]#

 

Start the multipathd service:

[root@raclhr-12cR1-N1 Packages]# service multipathd restart

ux_socket_connect: No such file or directory

Stopping multipathd daemon: [FAILED]

Starting multipathd daemon: [  OK  ]

[root@raclhr-12cR1-N1 Packages]#

 

3.2.3  Configuring the Multipath Software: /etc/multipath.conf

1. Configure multipath by editing /etc/multipath.conf.

Note: /etc/multipath.conf does not exist by default; generate it with the following command:

/sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y

[root@raclhr-12cR1-N1 ~]# multipath -ll

Jan 23 12:52:54 | /etc/multipath.conf does not exist, blacklisting all devices.

Jan 23 12:52:54 | A sample multipath.conf file is located at

Jan 23 12:52:54 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf

Jan 23 12:52:54 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf

[root@raclhr-12cR1-N1 ~]# multipath -ll

Jan 23 12:53:49 | /etc/multipath.conf does not exist, blacklisting all devices.

Jan 23 12:53:49 | A sample multipath.conf file is located at

Jan 23 12:53:49 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf

Jan 23 12:53:49 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf

[root@raclhr-12cR1-N1 ~]# /sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y

[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# ll /etc/multipath.conf

-rw------- 1 root root 2775 Jan 23 12:55 /etc/multipath.conf

[root@raclhr-12cR1-N1 ~]#
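For ASM disks it is common to give each multipath device a stable, human-readable alias rather than the default mpathN name. A hedged /etc/multipath.conf fragment using two of the WWIDs from this article (the alias names asm-disk01/asm-disk02 are invented for illustration):

```
multipaths {
    multipath {
        wwid  14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
        alias asm-disk01
    }
    multipath {
        wwid  14f504e46494c455242674c7079392d753750482d63734443
        alias asm-disk02
    }
}
```

After editing, reload with service multipathd reload (or multipath -r); the devices then appear as /dev/mapper/asm-disk01 and so on.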

 

 

 

2. View the WWIDs of the LUNs that the storage presents to the server:

[root@raclhr-12cR1-N1 multipath]# multipath -v0

[root@raclhr-12cR1-N1 multipath]# more /etc/multipath/wwids

# Multipath wwids, Version : 1.0

# NOTE: This file is automatically maintained by multipath and multipathd.

# You should not need to edit this file in normal circumstances.

#

# Valid WWIDs:

/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/

/14f504e46494c455242674c7079392d753750482d63734443/

/14f504e46494c455233384b7353432d52454b4c2d79506757/

/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/

[root@raclhr-12cR1-N1 multipath]#
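These WWIDs are not opaque. IET builds them from the string OPNFILER plus the volume's ScsiSN (compare ScsiSN=22llvD-CacO-MOMA for lv01 in /etc/ietd.conf above): drop the leading type digit "1" and hex-decode the rest. A small portable-shell sketch demonstrating this:

```shell
# Decode a hex string to ASCII, two characters at a time
decode_hex() {
  hex=$1
  out=""
  while [ -n "$hex" ]; do
    byte=${hex%"${hex#??}"}                      # first two hex digits
    hex=${hex#??}                                # remainder
    out="$out$(printf "\\$(printf '%03o' "0x$byte")")"
  done
  printf '%s\n' "$out"
}

# WWID of lv01 with the leading "1" removed
decode_hex 4f504e46494c455232326c6c76442d4361634f2d4d4f4d41   # OPNFILER22llvD-CacO-MOMA
```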

 

 

Copy the contents of /etc/multipath/wwids and /etc/multipath/bindings over to node 2:

[root@raclhr-12cR1-N2 ~]# multipath -v0

[root@raclhr-12cR1-N2 ~]# more /etc/multipath/wwids

# Multipath wwids, Version : 1.0

# NOTE: This file is automatically maintained by multipath and multipathd.

# You should not need to edit this file in normal circumstances.

#

# Valid WWIDs:

/14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/

/14f504e46494c455242674c7079392d753750482d63734443/

/14f504e46494c455233384b7353432d52454b4c2d79506757/

/14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/

[root@raclhr-12cR1-N1 ~]# more /etc/multipath/bindings

# Multipath bindings, Version : 1.0

# NOTE: this file is automatically maintained by the multipath program.

# You should not need to edit this file in normal circumstances.

#

# Format:

# alias wwid

#

mpatha 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41

mpathb 14f504e46494c455242674c7079392d753750482d63734443

mpathc 14f504e46494c455233384b7353432d52454b4c2d79506757

mpathd 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c

[root@raclhr-12cR1-N1 ~]#

 

 

 

[root@raclhr-12cR1-N2 ~]#

[root@raclhr-12cR1-N1 multipath]# multipath -ll

mpathd (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

size=10G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 5:0:0:3 sdk 8:160 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 4:0:0:3 sdm 8:192 active ready running

mpathc (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

size=10G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 5:0:0:2 sdj 8:144 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 4:0:0:2 sdl 8:176 active ready running

mpathb (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

size=10G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 4:0:0:1 sdh 8:112 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 5:0:0:1 sdi 8:128 active ready running

mpatha (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

size=10G features='0' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 4:0:0:0 sdf 8:80  active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 5:0:0:0 sdg 8:96  active ready running

[root@raclhr-12cR1-N1 multipath]# fdisk -l | grep dev

Disk /dev/sda: 21.5 GB, 21474836480 bytes

/dev/sda1   *           1          26      204800   83  Linux

/dev/sda2              26        1332    10485760   8e  Linux LVM

/dev/sda3            1332        2611    10279936   8e  Linux LVM

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

/dev/sdb1               1        1306    10485760   8e  Linux LVM

/dev/sdb2            1306        2611    10485760   8e  Linux LVM

/dev/sdb3            2611        3917    10485760   8e  Linux LVM

/dev/sdb4            3917       13055    73399296    5  Extended

/dev/sdb5            3917        5222    10485760   8e  Linux LVM

/dev/sdb6            5223        6528    10485760   8e  Linux LVM

/dev/sdb7            6528        7834    10485760   8e  Linux LVM

/dev/sdb8            7834        9139    10485760   8e  Linux LVM

/dev/sdb9            9139       10445    10485760   8e  Linux LVM

/dev/sdb10          10445       11750    10485760   8e  Linux LVM

/dev/sdb11          11750       13055    10477568   8e  Linux LVM

Disk /dev/sdc: 6442 MB, 6442450944 bytes

Disk /dev/sdd: 10.7 GB, 10737418240 bytes

Disk /dev/sde: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

Disk /dev/sdf: 10.7 GB, 10737418240 bytes

Disk /dev/sdg: 10.7 GB, 10737418240 bytes

Disk /dev/sdh: 10.7 GB, 10737418240 bytes

Disk /dev/sdi: 10.7 GB, 10737418240 bytes

Disk /dev/sdj: 10.7 GB, 10737418240 bytes

Disk /dev/sdk: 10.7 GB, 10737418240 bytes

Disk /dev/sdl: 10.7 GB, 10737418240 bytes

Disk /dev/sdm: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/mpatha: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/mpathb: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/mpathc: 10.7 GB, 10737418240 bytes

Disk /dev/mapper/mpathd: 10.7 GB, 10737418240 bytes

[root@raclhr-12cR1-N1 multipath]#

 

 

3.2.4  Editing /etc/multipath.conf

First, collect each LUN's WWID with scsi_id; the RESULT values printed by the loop below are the WWIDs that go into the multipaths section of /etc/multipath.conf:

for i in f g h i j k l m ;

do

echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted  --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted  --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""

done

 

 

[root@raclhr-12cR1-N1 multipath]# for i in f g h i j k l m ;

> do

> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted  --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted  --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""

> done

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskg",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diski",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskl",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskm",OWNER="grid",GROUP="asmadmin",MODE="0660"

[root@raclhr-12cR1-N1 multipath]#

[root@raclhr-12cR1-N1 multipath]# more   /etc/multipath.conf

defaults {

        find_multipaths yes

        user_friendly_names yes

}

 

blacklist {

      wwid 3600508b1001c5ae72efe1fea025cd2e5

      devnode "^hd[a-z]"

      devnode "^sd[a-e]"

      devnode "^sda"

}

 

multipaths {

       multipath {

               wwid                    14f504e46494c455232326c6c76442d4361634f2d4d4f4d41

               alias                   VMLHRStorage000

               path_grouping_policy    multibus

               path_selector           "round-robin 0"

               failback                manual

               rr_weight               priorities

               no_path_retry           5

      }

       multipath {

               wwid                    14f504e46494c455242674c7079392d753750482d63734443

               alias                   VMLHRStorage001

               path_grouping_policy    multibus

               path_selector           "round-robin 0"

               failback                manual

               rr_weight               priorities

               no_path_retry           5

       }

       multipath {

               wwid                    14f504e46494c455233384b7353432d52454b4c2d79506757

               alias                   VMLHRStorage002

               path_grouping_policy    multibus

               path_selector           "round-robin 0"

               failback                manual

               rr_weight               priorities

               no_path_retry           5

       }

       multipath {

               wwid                    14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c

               alias                   VMLHRStorage003

               path_grouping_policy    multibus

               path_selector           "round-robin 0"

               failback                manual

               rr_weight               priorities

               no_path_retry           5

       }

}

devices {

       device {

               vendor                  "VMWARE"

               product                 "VIRTUAL-DISK"

               path_grouping_policy    multibus

               getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

               path_checker            readsector0

               path_selector           "round-robin 0"

               hardware_handler        "0"

               failback                15

               rr_weight               priorities

               no_path_retry           queue

       }

}

[root@raclhr-12cR1-N1 multipath]#
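Each multipath stanza above binds a WWID to a stable, human-readable alias. To audit that mapping at a glance, a small awk helper can be used — a convenience sketch, not part of the original procedure; it relies on the wwid line preceding the alias line within each stanza, as in the file above:

```shell
# Print "alias wwid" pairs from the multipaths section of /etc/multipath.conf.
# Within each stanza the wwid line comes before the alias line, so remember
# the most recently seen wwid and emit it when the alias appears.
awk '$1 == "wwid"  { w = $2 }
     $1 == "alias" { print $2, w }' /etc/multipath.conf
```

On the configuration shown here this lists VMLHRStorage000 through VMLHRStorage003 with their WWIDs.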

 

 

 

Restart multipathd to apply the configuration:

[root@raclhr-12cR1-N1 ~]# service multipathd restart

ok

Stopping multipathd daemon: [  OK  ]

Starting multipathd daemon: [  OK  ]

[root@raclhr-12cR1-N1 ~]# multipath -ll

VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 5:0:0:3 sdk 8:160 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 4:0:0:3 sdm 8:192 active ready running

VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 5:0:0:2 sdj 8:144 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 4:0:0:2 sdl 8:176 active ready running

VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 4:0:0:1 sdh 8:112 active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 5:0:0:1 sdi 8:128 active ready running

VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| `- 4:0:0:0 sdf 8:80  active ready running

`-+- policy='round-robin 0' prio=1 status=enabled

  `- 5:0:0:0 sdg 8:96  active ready running

[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# multipath -ll|grep LHR

VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

[root@raclhr-12cR1-N1 ~]#

 

Once the multipath configuration is active, the multipath logical devices appear under /dev/mapper:

[root@raclhr-12cR1-N1 ~]# cd /dev/mapper

[root@raclhr-12cR1-N1 mapper]# ll

total 0

crw-rw---- 1 root root 10, 58 Jan 23 12:49 control

lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_soft -> ../dm-3

lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_u01 -> ../dm-2

lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol00 -> ../dm-1

lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol01 -> ../dm-4

lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_rootlhr-Vol02 -> ../dm-0

lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol03 -> ../dm-5

lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage000 -> ../dm-6

lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage001 -> ../dm-7

lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage002 -> ../dm-8

lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage003 -> ../dm-9

[root@raclhr-12cR1-N1 mapper]#

 

This completes the multipath configuration.

3.2.5  Setting permissions on multipath devices

Prior to RHEL 6.2, setting permissions on a multipath device only required adding uid, gid, and mode entries to the device stanza:

uid 1100 #uid

gid 1020 #gid

For example:

        multipath {

                wwid                    360050763008101d4e00000000000000a

                alias                   DATA03

                uid                     501                                               #uid

                gid                     501                                               #gid

}

 

From 6.2 onward, the uid, gid, and mode parameters are removed from the multipath configuration file and permissions must be managed through udev instead. A template file named 12-dm-permissions.rules ships in /usr/share/doc/device-mapper-<version>/; copy it into the /etc/udev/rules.d directory to put it into effect.

[root@raclhr-12cR1-N1 rules.d]# ll /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

-rw-r--r--. 1 root root 3186 Aug 13  2013 /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

[root@raclhr-12cR1-N1 rules.d]#

[root@raclhr-12cR1-N1 rules.d]# ll

total 24

-rw-r--r-- 1 root root  77 Jan 23 18:06 12-dm-permissions.rules

-rw-r--r-- 1 root root 190 Jan 23 15:40 55-usm.rules

-rw-r--r-- 1 root root 549 Jan 23 15:17 70-persistent-cd.rules

-rw-r--r-- 1 root root 585 Jan 23 15:09 70-persistent-net.rules

-rw-r--r-- 1 root root 633 Jan 23 15:46 99-oracle-asmdevices.rules

-rw-r--r-- 1 root root 916 Jan 23 15:16 99-oracleasm.rules

[root@raclhr-12cR1-N1 rules.d]# more /etc/udev/rules.d/12-dm-permissions.rules

ENV{DM_NAME}=="VMLHRStorage*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

[root@raclhr-12cR1-N1 rules.d]#

 

 

Copy /etc/udev/rules.d/12-dm-permissions.rules to node 2 as well.

 

3.2.6  Configuring udev rules

The generation script is as follows:

for i in f g h i j k l m ;

do

echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules

done

 

 

Because each LUN is seen over two paths, the generated rules contain duplicate WWIDs; the duplicate lines should be removed from /etc/udev/rules.d/99-oracleasm.rules.

Run the following on node 1:

[root@raclhr-12cR1-N1 rules.d]# for i in f g h i j k l m ;

> do

> echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules

> done

 

 

Open /etc/udev/rules.d/99-oracleasm.rules and keep a single line per WWID:

[root@raclhr-12cR1-N1 ~]# cat /etc/udev/rules.d/99-oracleasm.rules

KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"

KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"

[root@raclhr-12cR1-N1 ~]#
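Rather than editing by hand, the duplicate rules can also be filtered mechanically: the RESULT=="…" field carries the WWID, so keeping only the first rule per WWID drops the second-path duplicates. A sketch (the output file name is arbitrary):

```shell
# Keep the first udev rule seen for each WWID (the RESULT=="..." field);
# lines without a RESULT field pass through unchanged.
awk 'match($0, /RESULT=="[^"]*"/) {
         key = substr($0, RSTART, RLENGTH)
         if (seen[key]++) next
     }
     { print }' /etc/udev/rules.d/99-oracleasm.rules > /tmp/99-oracleasm.rules.dedup
```

Review the deduplicated copy before moving it back into place.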

 

 

Copy the contents of /etc/udev/rules.d/99-oracleasm.rules to node 2, then restart udev:

[root@raclhr-12cR1-N1 ~]# start_udev

Starting udev: [  OK  ]

[root@raclhr-12cR1-N1 ~]#

[root@raclhr-12cR1-N1 ~]# ll /dev/asm-*

brw-rw---- 1 grid asmadmin   8, 32 Jan 23 15:50 /dev/asm-diskc

brw-rw---- 1 grid asmadmin   8, 48 Jan 23 15:48 /dev/asm-diskd

brw-rw---- 1 grid asmadmin   8, 64 Jan 23 15:48 /dev/asm-diske

brw-rw---- 1 grid asmadmin 253,  7 Jan 23 15:46 /dev/asm-diskf

brw-rw---- 1 grid asmadmin 253,  9 Jan 23 15:46 /dev/asm-diskh

brw-rw---- 1 grid asmadmin 253,  6 Jan 23 15:46 /dev/asm-diskj

brw-rw---- 1 grid asmadmin 253,  8 Jan 23 15:46 /dev/asm-diskk

[root@raclhr-12cR1-N1 ~]#

[grid@raclhr-12cR1-N1 ~]$ $ORACLE_HOME/bin/kfod disks=all s=true ds=true

--------------------------------------------------------------------------------

Disk          Size Header    Path                                     Disk Group   User     Group  

================================================================================

   1:       6144 Mb MEMBER    /dev/asm-diskc                           OCR          grid     asmadmin

   2:      10240 Mb MEMBER    /dev/asm-diskd                           DATA         grid     asmadmin

   3:      10240 Mb MEMBER    /dev/asm-diske                           FRA          grid     asmadmin

   4:      10240 Mb CANDIDATE /dev/asm-diskf                           #            grid     asmadmin

   5:      10240 Mb CANDIDATE /dev/asm-diskh                           #            grid     asmadmin

   6:      10240 Mb CANDIDATE /dev/asm-diskj                           #            grid     asmadmin

   7:      10240 Mb CANDIDATE /dev/asm-diskk                           #            grid     asmadmin

--------------------------------------------------------------------------------

ORACLE_SID ORACLE_HOME                                                         

================================================================================

     +ASM2 /u01/app/12.1.0/grid                                                           

     +ASM1 /u01/app/12.1.0/grid                                                           

[grid@raclhr-12cR1-N1 ~]$ asmcmd

 

ASMCMD> lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/

MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/

MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/

ASMCMD> lsdsk

Path

/dev/asm-diskc

/dev/asm-diskd

/dev/asm-diske

ASMCMD>  lsdsk --candidate -p

Group_Num  Disk_Num      Incarn  Mount_Stat  Header_Stat  Mode_Stat  State   Path

        0         1           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskf

        0         3           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskh

        0         2           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskj

        0         0           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskk

ASMCMD>

 

 

3.3  Creating a disk group on the new disks

CREATE DISKGROUP TESTMUL external redundancy DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

SQL> select path from v$asm_disk;

 

PATH

--------------------------------------------------------------------------------

/dev/asm-diskk

/dev/asm-diskf

/dev/asm-diskj

/dev/asm-diskh

/dev/asm-diske

/dev/asm-diskd

/dev/asm-diskc

 

7 rows selected.

 

SQL> CREATE DISKGROUP TESTMUL external redundancy DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

 

Diskgroup created.

 

SQL>

ASMCMD> lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/

MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/

MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/

MOUNTED  EXTERN  N         512   4096  1048576     20480    20381                0           20381              0             N  TESTMUL/

ASMCMD>

 

[root@raclhr-12cR1-N1 ~]# crsctl stat res -t | grep -2 TESTMUL

               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE

               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE

ora.TESTMUL.dg

               ONLINE  ONLINE       raclhr-12cr1-n1          STABLE

               ONLINE  ONLINE       raclhr-12cr1-n2          STABLE

[root@raclhr-12cR1-N1 ~]#

 

 

3.3.1  Testing the disk group

[oracle@raclhr-12cR1-N1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 23 16:17:28 2017

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

 

SQL> create tablespace TESTMUL datafile '+TESTMUL' size 10M;

 

Tablespace created.

 

SQL> select name from v$datafile;

 

NAME

--------------------------------------------------------------------------------

+DATA/LHRRAC/DATAFILE/system.258.933550527

+DATA/LHRRAC/DATAFILE/undotbs2.269.933551323

+DATA/LHRRAC/DATAFILE/sysaux.257.933550483

+DATA/LHRRAC/DATAFILE/undotbs1.260.933550575

+DATA/LHRRAC/DATAFILE/example.268.933550723

+DATA/LHRRAC/DATAFILE/users.259.933550573

+TESTMUL/LHRRAC/DATAFILE/testmul.256.934042679

 

7 rows selected.

 

SQL>

 

 

Bring down one NIC (eth1) on the storage server:

[root@OFLHR ~]# ip a

1: lo: mtu 16436 qdisc noqueue

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

    inet6 fe80::20c:29ff:fe98:1acd/64 scope link

       valid_lft forever preferred_lft forever

3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

    inet6 fe80::20c:29ff:fe98:1ad7/64 scope link

       valid_lft forever preferred_lft forever

[root@OFLHR ~]# ifconfig eth1 down

[root@OFLHR ~]# ip a

1: lo: mtu 16436 qdisc noqueue

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

    inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

    inet6 fe80::20c:29ff:fe98:1acd/64 scope link

       valid_lft forever preferred_lft forever

3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000

    link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

    inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

[root@OFLHR ~]#

 

 

Check the logs on the RAC node:

[root@raclhr-12cR1-N1 ~]# tail -f /var/log/messages

Jan 23 16:20:51 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

Jan 23 16:20:57 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

Jan 23 16:21:03 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

[root@raclhr-12cR1-N1 ~]# multipath -ll

VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-8 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 5:0:0:3 sdm 8:192 failed faulty running

  `- 4:0:0:3 sdl 8:176 active ready  running

VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-9 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 5:0:0:2 sdj 8:144 failed faulty running

  `- 4:0:0:2 sdk 8:160 active ready  running

VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 4:0:0:1 sdi 8:128 active ready  running

  `- 5:0:0:1 sdh 8:112 failed faulty running

VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 4:0:0:0 sdf 8:80  active ready  running

  `- 5:0:0:0 sdg 8:96  failed faulty running

[root@raclhr-12cR1-N1 ~]#
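With one storage NIC down, every map above still has one active path and one failed path, so I/O continues. A quick tally of path states — a monitoring sketch layered on the same `multipath -ll` output — makes partial path loss easy to spot before it becomes total path loss:

```shell
# Count healthy vs failed paths in `multipath -ll` output.
multipath -ll | awk '/active ready/  { up++ }
                     /failed faulty/ { down++ }
                     END { print up+0 " active, " down+0 " failed" }'
```

In the state shown above this would report 4 active and 4 failed paths.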

 

The tablespace remains accessible:

SQL> create table tt tablespace TESTMUL as select * from dual;

 

Table created.

 

SQL> select * from tt;

 

D

-

X

 

SQL>

 

Likewise, bringing eth1 back up and taking eth0 down leaves the tablespace usable. After restarting the cluster and the storage, everything returns to normal.

Chapter 4  Testing Multipath

A fresh multipath environment (host orcltest below) was set up to exercise multipath.

The simplest test is to write data to the disk with dd while watching each path's throughput and state with iostat, to confirm that failover and load balancing behave as expected. Note that this dd overwrites the target device, so run it only against disposable test LUNs:

# dd if=/dev/zero of=/dev/mapper/mpath0

# iostat -k 2

[root@orcltest ~]# multipath -ll

VMLHRStorage003 (14f504e46494c4552674a61727a472d523449782d5336784e) dm-3 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 35:0:0:2 sdf 8:80  active ready running

  `- 36:0:0:2 sdg 8:96  active ready running

VMLHRStorage002 (14f504e46494c4552506a5a5954422d6f6f4e652d34423171) dm-2 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 35:0:0:3 sdh 8:112 active ready running

  `- 36:0:0:3 sdi 8:128 active ready running

VMLHRStorage001 (14f504e46494c4552324b583573332d774e5a622d696d7334) dm-1 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 35:0:0:1 sdd 8:48  active ready running

  `- 36:0:0:1 sde 8:64  active ready running

VMLHRStorage000 (14f504e46494c45523431576859532d643246412d5154564f) dm-0 OPNFILER,VIRTUAL-DISK

size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 35:0:0:0 sdb 8:16  active ready running

  `- 36:0:0:0 sdc 8:32  active ready running

[root@orcltest ~]# dd if=/dev/zero of=/dev/mapper/VMLHRStorage001

 

 

 

 

Running iostat -k 2 in a second window shows the load spread across both paths:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle

           0.00    0.00    5.23   20.78    0.00   73.99

 

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn

sda               9.00         0.00        92.00          0        184

scd0              0.00         0.00         0.00          0          0

sdb               0.00         0.00         0.00          0          0

sdc               0.00         0.00         0.00          0          0

sdd            1197.50      4704.00     10886.00       9408      21772

sde            1197.50      4708.00     10496.00       9416      20992

sdh               0.00         0.00         0.00          0          0

sdi               0.00         0.00         0.00          0          0

sdf               0.00         0.00         0.00          0          0

sdg               0.00         0.00         0.00          0          0

dm-0              0.00         0.00         0.00          0          0

dm-4              0.00         0.00         0.00          0          0

dm-10             0.00         0.00         0.00          0          0

dm-1           2395.00      9412.00     21382.00      18824      42764

dm-2              0.00         0.00         0.00          0          0

dm-3              0.00         0.00         0.00          0          0

dm-5              0.00         0.00         0.00          0          0

dm-6              0.00         0.00         0.00          0          0

dm-7              0.00         0.00         0.00          0          0

dm-8              0.00         0.00         0.00          0          0

dm-9              0.00         0.00         0.00          0          0
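Note that dm-1 (the VMLHRStorage001 map being written to) shows the combined throughput of its two paths, sdd and sde. That relationship can be checked from a captured sample; the file name iostat_sample.txt is an assumption for illustration:

```shell
# Compare the kB_wrtn/s column (field 4 of `iostat -k` output) of the two
# path devices against the dm device that aggregates them.
awk '$1 == "sdd" || $1 == "sde" { paths += $4 }
     $1 == "dm-1"               { dm = $4 }
     END { printf "paths=%.0f dm=%.0f\n", paths, dm }' iostat_sample.txt
```

On the figures above this prints paths=21382 dm=21382: the two paths split the load roughly evenly and their sum matches the multipath device.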

 

 



4.1  Additional multipath theory

After multipath builds its maps, several device nodes under /dev point to the same path group:

/dev/mapper/mpathn

/dev/mpath/mpathn

/dev/dm-n

Their origins are quite different:

/dev/mapper/mpathn: the multipath virtual devices, created during boot. These are the ones to use — for example, when creating logical volumes.

/dev/mpath/mpathn: created by the udev device manager purely for convenience, so that all multipath devices can be seen in one directory. They simply point at the corresponding dm-n devices, are not guaranteed to be available early enough in the boot sequence, and should not be mounted or used to create logical volumes or file systems.

/dev/dm-n: internal to device-mapper itself; they should never be used directly.

In short, use the device nodes under /dev/mapper/. Those can be partitioned with fdisk or turned into physical volumes (PVs).

 



   That concludes this test of using OpenFiler to simulate storage for RAC ASM shared disks and multipath. 2016 is behind us; today is January 23rd, tomorrow the 24th, and xiaomaimiao is heading home for the Spring Festival, O(∩_∩)O~. Best wishes to all friends and readers for good health, happiness, and success in the new year!

About Me

...............................................................................................................................

Author: xiaomaimiao (小麥苗), focused solely on database technology, and above all on putting it to use

This article is published simultaneously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest), and the author's WeChat public account (xiaomaimiaolhr)

itpub link: http://blog.itpub.net/26736162/viewspace-2132858/

cnblogs link: http://www.cnblogs.com/lhrbest/p/6345157.html

PDF (xiaomaimiao's cloud drive): http://blog.itpub.net/26736162/viewspace-1624453/

● QQ group: 230161599     WeChat group: contact me privately

● To reach me, add QQ 642808185 and mention why you are adding me

● Completed at the Agricultural Bank, 2017-01-22 08:00 ~ 2017-01-23 24:00

● The content comes from xiaomaimiao's study notes, partly compiled from the web; please forgive any infringement or inaccuracies

● All rights reserved. Feel free to share this article; please keep the attribution when reposting

...............................................................................................................................


From the "ITPUB blog"; link: http://blog.itpub.net/25469263/viewspace-2156501/. Please cite the source when reposting; unauthorized reproduction may incur legal liability.
