How to restore RAID after reinstalling Linux

Posted by jolly10 on 2012-01-05

Linux version: Red Hat 4 U5

Disk info: hda with 3 partitions, hdb with 3 partitions, hdc with 1 partition


Test purpose: RAID1 arrays are created on the first two partitions of hda and hdb, while a further partition on each of hda and hdb plus the first partition of hdc make up a RAID5 array. The Linux system is installed on the RAID1 arrays. If the RAID1 file system is destroyed for any reason, how do we restore the RAID5 data after reinstalling Linux on hda and hdb?
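
For reference, arrays with this layout would typically be created with mdadm roughly as follows. This sketch is an editorial addition rather than part of the original test, and the partition numbers simply mirror the fdisk output in Step 1:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1             # RAID1 for /boot
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2             # RAID1 for /
mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc1   # RAID5 for data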

Step 1:

Original system information: the RAID1 and RAID5 arrays coexist on the system.

[root@localhost u01]# fdisk -l

Disk /dev/hda: 160.0 GB, 160040803840 bytes

255 heads, 63 sectors/track, 19457 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hda1 * 1 12 96358+ fd Linux raid autodetect

/dev/hda2 13 3836 30716280 fd Linux raid autodetect

/dev/hda3 3837 6385 20474842+ 83 Linux

/dev/hda4 6386 19457 105000840 5 Extended

/dev/hda5 6386 6639 2040223+ fd Linux raid autodetect

Disk /dev/hdb: 160.0 GB, 160041885696 bytes

255 heads, 63 sectors/track, 19457 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hdb1 * 1 12 96358+ fd Linux raid autodetect

/dev/hdb2 13 3836 30716280 fd Linux raid autodetect

/dev/hdb3 3837 6385 20474842+ 83 Linux

/dev/hdb4 6386 19457 105000840 5 Extended

/dev/hdb5 6386 6639 2040223+ fd Linux raid autodetect


Disk /dev/hdc: 80.0 GB, 80026361856 bytes

255 heads, 63 sectors/track, 9729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hdc1 * 1 2549 20474811 83 Linux

Disk /dev/md0: 98 MB, 98566144 bytes

2 heads, 4 sectors/track, 24064 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 2089 MB, 2089091072 bytes

2 heads, 4 sectors/track, 510032 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 31.4 GB, 31453347840 bytes

2 heads, 4 sectors/track, 7679040 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md3: 41.9 GB, 41932161024 bytes

2 heads, 4 sectors/track, 10237344 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md3 doesn't contain a valid partition table
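
Note that the RAID5 members (hda3, hdb3, hdc1) carry system id 83 (plain "Linux") rather than fd ("Linux raid autodetect"). This is why the kernel will not reassemble md3 by itself after the reinstall in Step 2, and it is what the note at the end of Step 3 addresses.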

[root@localhost u01]# cat /proc/mdstat

Personalities : [raid1] [raid5]

md3 : active raid5 hdc1[2] hdb3[1] hda3[0]

40949376 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]

md2 : active raid1 hdb2[1] hda2[0]

30716160 blocks [2/2] [UU]

md1 : active raid1 hdb5[1] hda5[0]

2040128 blocks [2/2] [UU]

md0 : active raid1 hdb1[1] hda1[0]

96256 blocks [2/2] [UU]

unused devices: <none>


[root@localhost u01]# cat /etc/mdadm.conf

ARRAY /dev/md3 level=raid5 num-devices=3 spares=1 UUID=9452ebe4:bb76f648:ef90a804:31e84ae3

ARRAY /dev/md2 level=raid1 num-devices=2 UUID=37658e4f:fa0cb61d:c41c1e82:f29a4c29

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a36a9a2c:ccfd5746:69812977:64c9201b

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=314d8e18:588fe98f:770b11c9:4d67cd9a

Step 2:

If the RAID1 file system is destroyed, the output below shows the system after Linux has been reinstalled with new RAID1 arrays. The partitions that belong to the RAID5 array must be left untouched during the reinstall.
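
Before going further it is worth confirming that the RAID5 superblocks survived the reinstall. mdadm --examine reads a superblock directly from a member partition; this check is an editorial addition, not part of the original post:

mdadm --examine /dev/hda3   # prints the superblock: level, member list, UUID
mdadm --examine --scan      # lists every array detectable on the attached disks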

[root@localhost ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/md2 29G 2.3G 26G 9% /

/dev/md0 92M 12M 75M 14% /boot

none 2.0G 0 2.0G 0% /dev/shm

[root@localhost ~]# fdisk -l

Disk /dev/hda: 160.0 GB, 160040803840 bytes

255 heads, 63 sectors/track, 19457 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hda1 * 1 12 96358+ fd Linux raid autodetect

/dev/hda2 13 266 2040255 fd Linux raid autodetect

/dev/hda3 3837 6385 20474842+ 83 Linux

/dev/hda4 6386 19457 105000840 5 Extended

/dev/hda5 6386 6639 2040223+ fd Linux raid autodetect

/dev/hda6 6640 10463 30716248+ fd Linux raid autodetect

Disk /dev/hdb: 160.0 GB, 160041885696 bytes

255 heads, 63 sectors/track, 19457 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hdb1 * 1 12 96358+ fd Linux raid autodetect

/dev/hdb2 13 266 2040255 fd Linux raid autodetect

/dev/hdb3 3837 6385 20474842+ 83 Linux

/dev/hdb4 6386 19457 105000840 5 Extended

/dev/hdb5 6386 6639 2040223+ fd Linux raid autodetect

/dev/hdb6 6640 10463 30716248+ fd Linux raid autodetect

Disk /dev/hdc: 80.0 GB, 80026361856 bytes

255 heads, 63 sectors/track, 9729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/hdc1 * 1 2549 20474811 83 Linux

Disk /dev/md0: 98 MB, 98566144 bytes

2 heads, 4 sectors/track, 24064 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Device Boot Start End Blocks Id System

Disk /dev/md2: 31.4 GB, 31453347840 bytes

2 heads, 4 sectors/track, 7679040 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 2089 MB, 2089091072 bytes

2 heads, 4 sectors/track, 510032 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid1]

md1 : active raid1 hdb5[1] hda5[0]

2040128 blocks [2/2] [UU]

md2 : active raid1 hdb6[1] hda6[0]

30716160 blocks [2/2] [UU]

md0 : active raid1 hdb1[1] hda1[0]

96256 blocks [2/2] [UU]

unused devices: <none>

Step 3:

Restore the RAID5 data by reassembling the array.

[root@localhost ~]# mdadm -A /dev/md3 /dev/hda3 /dev/hdb3 /dev/hdc1

mdadm: /dev/md3 has been started with 3 drives.
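
Assembling by explicit device names works here because the names did not change. If they had (for example after moving the disks to another machine), the array's UUID from the Step 1 /etc/mdadm.conf could be used instead, so that mdadm only accepts partitions whose superblock matches:

mdadm --assemble /dev/md3 --uuid=9452ebe4:bb76f648:ef90a804:31e84ae3 /dev/hda3 /dev/hdb3 /dev/hdc1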

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid1] [raid5]

md3 : active raid5 hda3[0] hdc1[2] hdb3[1]

40949376 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 hdb5[1] hda5[0]

2040128 blocks [2/2] [UU]

md2 : active raid1 hdb6[1] hda6[0]

30716160 blocks [2/2] [UU]

md0 : active raid1 hdb1[1] hda1[0]

96256 blocks [2/2] [UU]

unused devices: <none>

[root@localhost /]# mkdir /u01

[root@localhost /]# mount /dev/md3 /u01

[root@localhost /]# cd /u01

[root@localhost u01]# ls

abc lost+found r8168-8.018.00.tar.gz
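
Had there been any doubt about the array's health, a more cautious sequence (an editorial suggestion, not from the original post) would have been to mount it read-only first and remount writable only after inspecting the contents:

mount -o ro /dev/md3 /u01
mount -o remount,rw /u01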

[root@localhost u01]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/md2 29G 2.3G 26G 9% /

/dev/md0 92M 12M 75M 14% /boot

none 2.0G 0 2.0G 0% /dev/shm

/dev/md3 39G 81M 37G 1% /u01

[root@localhost u01]# mdadm --detail --scan

ARRAY /dev/md3 level=raid5 num-devices=3 UUID=9452ebe4:bb76f648:ef90a804:31e84ae3

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a36a9a2c:ccfd5746:69812977:64c9201b

ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99d1a2c2:60959d2f:c2deabee:80cd5fac

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=25c8d055:2de0cc26:7f815553:31404ff8
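
Notice that md3 still carries the same UUID recorded in the Step 1 /etc/mdadm.conf (9452ebe4:bb76f648:ef90a804:31e84ae3), confirming that the assembled array is the original one, while md0 and md2 received new UUIDs when the installer recreated them.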

[root@localhost u01]# cat /etc/mdadm.conf

# mdadm.conf written out by anaconda

DEVICE partitions

MAILADDR root

ARRAY /dev/md2 super-minor=2

ARRAY /dev/md0 super-minor=0

ARRAY /dev/md1 super-minor=1

[root@localhost u01]#

[root@localhost u01]# cp /etc/mdadm.conf /etc/mdadm.conf.bak

[root@localhost u01]# mdadm --detail --scan > /etc/mdadm.conf
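
Note that the single > overwrites the file, discarding the DEVICE and MAILADDR lines that anaconda wrote; many mdadm versions treat a missing DEVICE line as "DEVICE partitions", so this still works, but a variant that preserves the original header (an editorial suggestion) is:

cp /etc/mdadm.conf.bak /etc/mdadm.conf      # put the anaconda header back
mdadm --detail --scan >> /etc/mdadm.conf    # append the ARRAY lines, now including md3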

[root@localhost u01]# vi /etc/fstab

/dev/md3 /u01 ext3 defaults 0 0

For the RAID5 array to be assembled automatically at boot (so that this fstab entry can be mounted at startup), the system id of its member partitions must also be changed from 83 to fd (Linux raid autodetect).
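
A minimal sketch with fdisk, using hda's RAID5 member (partition 3) as the example; the same change applies to /dev/hdb3 and /dev/hdc1:

fdisk /dev/hda
# at the fdisk prompt: t (change a partition's system id), 3 (partition number),
# fd (Linux raid autodetect), then w (write the table and exit)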

From the ITPUB blog, link: http://blog.itpub.net/271283/viewspace-1057041/. Please credit the source when reprinting.
