RAID1 Failure Testing with mdadm

Posted by Felix-ZP on 2018-10-02

Background

Failure testing of a software RAID (mdadm) RAID1 array across a range of scenarios.
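
The array under test was created through the OMV web UI. For reference, a minimal command-line equivalent would look roughly like this (a sketch assuming two spare disks /dev/sdb and /dev/sdc, as used in this test):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf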

I. Initial information

Kernel version:

root@omv30:~# uname -a
Linux omv30 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.6-1~bpo9+1 (2018-09-13) x86_64 GNU/Linux
root@omv30:~# mdadm --version
mdadm - v3.4 - 28th January 2016

After creating the RAID1 array in OMV, query sdb. At this point sdb has the Device UUID ending in 8ac693c5 and is device number 1:

root@omv30:~# mdadm --query /dev/sdb
/dev/sdb: is not an md array
/dev/sdb: device 1 in 2 device undetected raid1 /dev/md0.  Use mdadm --examine for more detail.
root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
           Name : omv30:raid1  (local to host omv30)
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : 64a58fb5:c7e76b1a:29453878:8ac693c5

    Update Time : Mon Oct  1 13:20:56 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2e1fb65b - correct
         Events : 21


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 19:46:42 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       16        1      active sync   /dev/sdb

Configuration file contents:

root@omv30:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=omv30:raid1 UUID=921a8946:b273e00e:3fa4b99d:040a4437

II. Drive letter shuffle tests

1. Drive letter swap test

In VirtualBox, under Storage > SATA, select each of the two disks and swap their SATA port numbers in the attributes panel on the right; this swaps their drive letters.

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 19:52:46 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 29

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

The drive letters have indeed been swapped.

root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
           Name : omv30:raid1  (local to host omv30)
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : 6e545465:3dcf10df:1d5bb938:fe840307

    Update Time : Mon Oct  1 19:52:46 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : a47a7d1f - correct
         Events : 29


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdb[0] sdc[1]
      1047552 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
root@omv30:~# fdisk -l
...(omitted)

Disk /dev/md0: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

After mounting, the data is accessible as normal.
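
To double-check which physical disk ended up with which letter, you can compare each disk's serial number against its md metadata. A quick sketch (assuming the members are sdb and sdc):

for d in /dev/sdb /dev/sdc; do
  udevadm info --query=property "$d" | grep ID_SERIAL_SHORT
  mdadm --examine "$d" | grep -E 'Device UUID|Device Role'
done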

2. Disk position change test

Shut down, add a new disk in the SATA slot previously occupied by sdc, and boot:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 19:52:46 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 29

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       48        1      active sync   /dev/sdd
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdb[0] sdd[1]
      1047552 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

The disk that used to be sdc is now sdd; the RAID1 array remains intact and unaffected.

3. Disk removal test

Shut down, remove one disk, and boot:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 1
    Persistence : Superblock is persistent

          State : inactive

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 29

    Number   Major   Minor   RaidDevice

       -       8       16        -        /dev/sdb
root@omv30:~# mdadm --query /dev/sdb
/dev/sdb: is not an md array
/dev/sdb: device 0 in 2 device undetected raid1 /dev/md0.  Use mdadm --examine for more detail.
root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
           Name : omv30:raid1  (local to host omv30)
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : 6e545465:3dcf10df:1d5bb938:fe840307

    Update Time : Mon Oct  1 19:52:46 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : a47a7d1f - correct
         Events : 29


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb[0](S)
      1047552 blocks super 1.2
       
unused devices: <none>
root@omv30:~# fdisk -l
Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8c9b0fb9

Device     Boot    Start      End  Sectors Size Id Type
/dev/sda1  *        2048 12582911 12580864   6G 83 Linux
/dev/sda2       12584958 16775167  4190210   2G  5 Extended
/dev/sda5       12584960 16775167  4190208   2G 82 Linux swap / Solaris


Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The RAID1 array has become inactive, but the RAID metadata itself is stored on the disk and is not lost.
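
If you need the degraded array running with only the surviving disk, it can be started by stopping the inactive device and re-assembling it in degraded mode. A sketch (assuming the surviving member is /dev/sdb):

mdadm -S /dev/md0
mdadm --assemble --run /dev/md0 /dev/sdb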

4. Conclusions

  • mdadm does not track RAID membership by drive letter (/dev/sdX); no matter how the letters are shuffled, the RAID metadata stays consistent.
  • mdadm identifies member disks by their Device UUID. This differs from hardware RAID enclosures, which record the physical slot number.
  • An mdadm member disk can therefore sit in any enclosure or slot, with no need to track positions. A quick way to inspect the UUID mapping is sketched below.
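
A minimal sketch that prints each member's Device UUID (assuming the candidate disks are sdb through sdd):

for d in /dev/sd[b-d]; do
  echo "$d: $(mdadm --examine "$d" 2>/dev/null | awk -F' : ' '/Device UUID/{print $2}')"
done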

III. RAID degradation and recovery test

Scenario: a RAID1 array is running normally when one disk suddenly fails, and the array is then rebuilt.
Method: either simulate the failure with mdadm --fail, or use VirtualBox's hot-plug feature.
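
For the hot-plug route, the disk can also be detached from the command line rather than the GUI. A sketch (hypothetical VM name omv30; the controller name and port must match the VM's configuration, and hot-plug must be enabled on that port):

VBoxManage storageattach omv30 --storagectl "SATA" --port 1 --device 0 --medium none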

1. Simulating the failure

Initial state:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 20:29:44 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 31

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       48        1      active sync   /dev/sdd

(1) Manually fail sdd:

root@omv30:~# mdadm /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 20:29:59 2018
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 33

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       48        -      faulty   /dev/sdd

If the failed disk has not yet been removed from the array, remove it first (here the hot remove reports an error because the device was already gone):

root@omv30:~# mdadm /dev/md0 -r /dev/sdd
mdadm: hot remove failed for /dev/sdd: No such device or address

(2) Add a new disk:

root@omv30:~# mdadm /dev/md0 --add /dev/sdc   (the new disk is 2 GB; in this test the larger size did not affect the RAID1 rebuild)
mdadm: added /dev/sdc
root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 20:36:22 2018
          State : clean, degraded, recovering 
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 76% complete

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 48

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       32        1      spare rebuilding   /dev/sdc

You can see the rebuild in progress.
Run the command again a moment later:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 20:36:24 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 53

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       2       8       32        1      active sync   /dev/sdc
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc[2] sdb[0]
      1047552 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

The rebuild has completed successfully.
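
On disks of realistic size a rebuild takes much longer; progress can be followed continuously with, for example:

watch -n 5 cat /proc/mdstat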

2. Hot-plug disk test

In VirtualBox's storage settings, enable hot-plug on sdb's SATA port, then boot.
While the system is running, remove the sdb disk in VirtualBox's storage settings, then check the status:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 21:13:45 2018
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 56

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       2       8       32        1      active sync   /dev/sdc

The system log shows the disk going offline and the RAID1 array degrading (having this log trail is one advantage of software RAID):

root@omv30:/var/log# dmesg | tail -20
[  340.551533] md: recovery of RAID array md0
[  345.881625] md: md0: recovery done.
[  657.932091] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[ 2571.324851] ata2: SATA link down (SStatus 0 SControl 300)
[ 2576.667851] ata2: SATA link down (SStatus 0 SControl 300)
[ 2582.044796] ata2: SATA link down (SStatus 0 SControl 300)
[ 2582.044864] ata2.00: disabled
[ 2582.045573] ata2.00: detaching (SCSI 3:0:0:0)
[ 2582.058467] sd 3:0:0:0: [sdb] Synchronizing SCSI cache
[ 2582.058528] sd 3:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 2582.058536] sd 3:0:0:0: [sdb] Stopping disk
[ 2582.058552] sd 3:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 2586.709149] md/raid1:md0: Disk failure on sdb, disabling device.
               md/raid1:md0: Operation continuing on 1 devices.

Add a new disk. The replacement must not be smaller than the existing member, though it may be larger.
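
A quick size sanity check before adding (assuming sdb is the surviving member and sdc the replacement):

blockdev --getsize64 /dev/sdb /dev/sdc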

root@omv30:~# mdadm /dev/md0 --add /dev/sdc

The rebuild starts:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 21:38:36 2018
          State : clean, degraded, recovering 
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 56% complete

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 69

    Number   Major   Minor   RaidDevice State
       3       8       32        0      spare rebuilding   /dev/sdc
       2       8       16        1      active sync   /dev/sdb
root@omv30:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc[3] sdb[2]
      1047552 blocks super 1.2 [2/1] [_U]
      [===================>.]  recovery = 95.0% (996288/1047552) finish=0.0min speed=249072K/sec
      
unused devices: <none>

Rebuild complete:

root@omv30:~# mdadm -D /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct  1 21:38:39 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : omv30:raid1  (local to host omv30)
           UUID : 921a8946:b273e00e:3fa4b99d:040a4437
         Events : 78

    Number   Major   Minor   RaidDevice State
       3       8       32        0      active sync   /dev/sdc
       2       8       16        1      active sync   /dev/sdb
root@omv30:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc[3] sdb[2]
      1047552 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

IV. Recovering the RAID1 array after an OS reinstall

1. Both RAID1 disks healthy

Attach both disks to a freshly installed Ubuntu system and boot.

root@UB13:/home/op# fdisk -l
...(omitted)

Disk /dev/md127: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@UB13:/home/op# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 sdd[1] sdc[0]
      1047552 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

The array was automatically detected and assembled as /dev/md127.

root@UB13:/mnt/md# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sun Sep 30 22:31:39 2018
        Raid Level : raid1
        Array Size : 1047552 (1023.00 MiB 1072.69 MB)
     Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Mon Oct  1 19:31:08 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : omv30:raid1
              UUID : 921a8946:b273e00e:3fa4b99d:040a4437
            Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd

The system detected and assembled the RAID1 array automatically; there was no need to run mdadm --assemble --scan.
/etc/mdadm/mdadm.conf also had the array definition added automatically.
If the device information was not generated automatically, run:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

After mounting, the data is accessible as normal.
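
For example (using the /mnt/md mount point that appears in the prompts above):

mount /dev/md127 /mnt/md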

Renaming md127 to md0 is covered in the next subsection.

2. RAID1 with one failed disk

Only one disk survives, and the RAID1 array must be rebuilt on the new system.

root@op:/home/op# fdisk -l
(no new md device found; detailed output omitted)
root@op:/home/op# mdadm --examine /dev/sdb 
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
           Name : omv30:raid1
  Creation Time : Sun Sep 30 22:31:39 2018
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 4192256 (2047.00 MiB 2146.44 MB)
     Array Size : 1047552 (1023.00 MiB 1072.69 MB)
  Used Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=2097152 sectors
          State : clean
    Device UUID : 6e2fc709:35a8f6fb:d4c0e242:6905437d

    Update Time : Mon Oct  1 21:40:25 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : e65b30a8 - correct
         Events : 78


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@op:/home/op# cat /proc/mdstat 
Personalities : 
unused devices: <none>

This shows the RAID1 metadata is still present on the newly attached disk; the array only needs to be re-assembled:

root@op:/home/op# mdadm --assemble --scan
mdadm: /dev/md/raid1 has been started with 1 drive (out of 2).
root@op:/home/op# fdisk -l
...(omitted)

Disk /dev/md127: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@op:/home/op# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sun Sep 30 22:31:39 2018
        Raid Level : raid1
        Array Size : 1047552 (1023.00 MiB 1072.69 MB)
     Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Mon Oct  1 21:40:25 2018
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : omv30:raid1
              UUID : 921a8946:b273e00e:3fa4b99d:040a4437
            Events : 78

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       2       8       16        1      active sync   /dev/sdb

Add a new disk (again, it must not be smaller than the existing member; larger is fine).
If the array members are partitions rather than whole disks, copy the partition table to the new disk first:

sfdisk -d /dev/sdb | sfdisk /dev/sdc
root@omv30:~# mdadm /dev/md127 --add /dev/sdc

Wait for the sync to finish.
Check /etc/mdadm/mdadm.conf; if the array definition was not generated automatically, run:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

If you reboot the system at this point, md127 no longer appears in fdisk -l, and mdadm -D shows the RAID1 array as inactive.

root@op:/home/op# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 1
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 1

              Name : omv30:raid1
              UUID : 921a8946:b273e00e:3fa4b99d:040a4437
            Events : 78

    Number   Major   Minor   RaidDevice

       -       8       16        -        /dev/sdb

At this point, first delete the stale md127 with mdadm -S /dev/md127,
then run mdadm --assemble --scan again,
and finally re-add the disk to complete the rebuild, as sketched below.
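
The first two steps (the re-add follows right after):

mdadm -S /dev/md127
mdadm --assemble --scan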

root@omv30:~# mdadm /dev/md0 --add /dev/sdc

Rebooting now causes no further problems; the only difference is that the array that was md0 on the original host now appears as md127.
The fix:
edit /etc/mdadm/mdadm.conf and change the second column of the ARRAY line
from /dev/md/raid1
to /dev/md0, as in the example below.
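
Using the UUID from this test, the edit would look like this:

# before:
ARRAY /dev/md/raid1 metadata=1.2 name=omv30:raid1 UUID=921a8946:b273e00e:3fa4b99d:040a4437
# after:
ARRAY /dev/md0 metadata=1.2 name=omv30:raid1 UUID=921a8946:b273e00e:3fa4b99d:040a4437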

Then run:

update-initramfs -u

Reboot, and you're done.

V. Common commands

Check status

mdadm -D /dev/md0
cat /proc/mdstat
mdadm --examine /dev/sdb
mdadm --detail --scan

Stop an md device

mdadm -S /dev/md0

Assemble (activate) an md device

mdadm -A /dev/md0

Re-import an existing array on a new OS

mdadm --assemble --scan

Rebuild (add a replacement disk)

mdadm /dev/md0 --add /dev/sdc
