Creating software RAID on Linux with the mdadm command (repost)

Posted by tonykorn97 on 2007-10-26
RAID is the main technique used today to improve both the safety and the performance of storage. It is usually implemented with a RAID controller card, that is, hardware RAID; beyond that, RAID can also be implemented in software.

This article briefly shows how to implement software RAID (using RAID 0 as the example).

There are two tools for building software RAID: raidtools and mdadm.
raidtools is what RHEL3 used, but I could not find raidtools in RHEL4, only mdadm, so Red Hat apparently favors mdadm.
This article therefore uses mdadm as the example.

1. Check the current disk layout
[root@primary /]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1        13      104391   83  Linux
/dev/sda2              14      1114     8843782+  83  Linux
/dev/sda3            1115      1305     1534207+  82  Linux swap

Disk /dev/sdb: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdc doesn't contain a valid partition table

2. Partition the disks
RAID is normally built from multiple disks. You can also build it from multiple partitions of the same disk, but doing so defeats the purpose. (The example below keeps the default partition type 83, Linux; type fd, Linux raid autodetect, is often used instead so the kernel can auto-assemble arrays at boot.)

[root@primary /]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-102, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@primary /]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-102, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

The partition layout now looks like this:
[root@primary /]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1        13      104391   83  Linux
/dev/sda2              14      1114     8843782+  83  Linux
/dev/sda3            1115      1305     1534207+  82  Linux swap

Disk /dev/sdb: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdb1               1       102      104432   83  Linux

Disk /dev/sdc: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdc1               1       102      104432   83  Linux

3. Create the RAID 0 array

[root@primary /]# mdadm --create /dev/md0 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdb1 /dev/sdc1

mdadm: array /dev/md0 started.

4. Format the RAID device
[root@primary /]# mkfs.ext3 /dev/md0
mke2fs 1.35 (28-Feb-2004)
max_blocks 213647360, rsv_groups = 26080, rsv_gdb = 256
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208640 blocks
10432 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801

Writing inode tables: done
inode.i_blocks = 3586, i_size = 67383296
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

5. Mount the RAID device
[root@primary /]# mount /dev/md0 /opt
[root@primary /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             8.4G  5.7G  2.3G  73% /
/dev/sda1              99M  8.4M   86M   9% /boot
none                  252M     0  252M   0% /dev/shm
/dev/hdc              161M  161M     0 100% /media/cdrom
/dev/md0              198M  5.8M  182M   4% /opt

6. View the RAID details
[root@primary opt]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun Jul 8 22:54:28 2007
Raid Level : raid0
Array Size : 208640 (203.75 MiB 213.65 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sun Jul 8 22:54:29 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 8K

    Number   Major   Minor   RaidDevice   State
       0       8      17         0        active sync   /dev/sdb1
       1       8      33         1        active sync   /dev/sdc1
UUID : a86f0502:df5715c0:fd871bbc:9f75e0ad
Events : 0.1
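The Array Size of 208640 blocks reported above can be reproduced from the fdisk figures. This is a rough sketch assuming version-0.90 metadata, which (as these numbers suggest) rounds each member down to a 64 KB boundary and reserves 64 KB at its end for the superblock:

```shell
# Rough estimate of RAID 0 usable size from the member partition sizes.
# Assumption: 0.90 metadata rounds each member down to a 64 KB boundary,
# then reserves 64 KB for the superblock.
member=104432                        # 1 KB blocks per partition, from fdisk -l
usable=$(( member / 64 * 64 - 64 ))  # usable blocks per member
echo $(( usable * 2 ))               # two striped members -> 208640
```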

7. Generate the mdadm configuration file
mdadm's default configuration file is /etc/mdadm.conf. It exists to make day-to-day management of the array easier; the array works without it, but completing this step avoids unnecessary trouble later.

The mdadm.conf file contains two kinds of lines: lines beginning with DEVICE, which list the devices making up the arrays, and lines beginning with ARRAY, which record each array's name, RAID level, number of active devices, and device UUID.
The information mdadm.conf needs can be obtained with mdadm -Ds:
[root@primary ~]# mdadm -Ds
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=63f24968:d367038d:f207e458:9a803df9
devices=/dev/sdb1,/dev/sdc1
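Note that mdadm -Ds prints the devices= list as an indented continuation line, while mdadm.conf conventionally wants each array's settings on one logical line. A small sketch of folding the two lines together with awk (the scan output is hard-coded here for illustration):

```shell
# Sketch: fold the indented "devices=" continuation from `mdadm -Ds`
# into the ARRAY line it belongs to. Sample input is hard-coded.
scan='ARRAY /dev/md0 level=raid0 num-devices=2 UUID=63f24968:d367038d:f207e458:9a803df9
   devices=/dev/sdb1,/dev/sdc1'
printf '%s\n' "$scan" |
  awk '/^ARRAY/ { arr = $0; next } { sub(/^ +/, ""); print arr " " $0 }'
```

In practice the joined output can be appended to /etc/mdadm.conf (as root) instead of retyping it by hand.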

Edit /etc/mdadm.conf based on the information above, as follows:
[root@primary ~]# more /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=63f24968:d367038d:f207e458:9a803df9

If this file is not configured, mounting the RAID device after a reboot fails with an error:
[root@primary ~]# mount /dev/md0 /opt
/dev/md0: Invalid argument
mount: /dev/md0: can't read superblock
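The mount fails because, without the configuration file, nothing assembled the array at boot, so /dev/md0 has no running array behind it. A sketch of the manual recovery (device names as in this example; requires root):

```shell
# Sketch: assemble the array by hand after a reboot, then mount it.
# Member names follow the example above; run as root.
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mount /dev/md0 /opt
```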

8. Mount automatically at boot
To have the system mount the RAID device automatically after boot, add the following line to /etc/fstab:
vi /etc/fstab
/dev/md0 /opt ext3 defaults 0 0
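An fstab entry has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick sketch that sanity-checks the field count of the new line:

```shell
# Sketch: word-split the new fstab line and count its fields;
# a well-formed entry has exactly six.
line='/dev/md0 /opt ext3 defaults 0 0'
set -- $line     # split into positional parameters
echo $#          # -> 6
```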


This completes the RAID 0 configuration. Other RAID levels can be set up in a similar way; see the mdadm documentation for details.

Creating RAID 10 with mdadm


Combining RAID 0 with RAID 1 is commonly called RAID 10. It provides both solid data safety and good performance, and it is the RAID level enterprises use most often.
Today we look at how to build a software RAID 10 with mdadm.

Test environment for this exercise: VMware + Linux AS4


1. Add four virtual SCSI disks in VMware to use for the experiment
Skipped here.

2. Partition the four virtual disks
Skipped here.

3. Create the arrays
The build order for RAID 10 here is: first create two RAID 0 arrays, then combine the two RAID 0 arrays into one RAID 1 array. (Strictly speaking, this mirror of stripes is RAID 0+1; the name RAID 10 usually refers to striping over mirrors.)
--Create the first RAID 0 array
[root@primary ~]# mdadm --create /dev/md0 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
--Create the second RAID 0 array
[root@primary ~]# mdadm --create /dev/md1 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm: array /dev/md1 started.
--Build a RAID 1 array over the two RAID 0 arrays above
[root@primary ~]# mdadm --create /dev/md2 --level=raid1 --chunk=8 --raid-devices=2 /dev/md0 /dev/md1
mdadm: array /dev/md2 started.
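As an alternative to nesting, mdadm also has a native raid10 level (on 2.6 kernels with the raid10 personality) that stripes over mirrors in a single array. A hedged sketch using the same four partitions (requires root, and the md device must be free):

```shell
# Sketch: single-step native RAID 10 over the same four partitions,
# instead of the nested md0/md1/md2 layout above.
mdadm --create /dev/md3 --level=10 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```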

4. Format the RAID device
Note: for a layered RAID device, only the topmost device needs a filesystem, no matter how many layers the stack contains.
In this example md2 is the topmost device, so formatting it is all that is required.
[root@primary ~]# mkfs.ext3 /dev/md2
mke2fs 1.35 (28-Feb-2004)
max_blocks 213581824, rsv_groups = 26072, rsv_gdb = 256
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208576 blocks
10428 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801

Writing inode tables: done
inode.i_blocks = 3586, i_size = 67383296
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.


5. Check the current RAID status
[root@primary opt]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 md1[1] md0[0]
208576 blocks [2/2] [UU]

md1 : active raid0 sde1[1] sdd1[0]
208640 blocks 8k chunks

md0 : active raid0 sdc1[1] sdb1[0]
208640 blocks 8k chunks

unused devices: <none>
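In /proc/mdstat, the bracketed field such as [UU] shows one character per member: U for an in-sync member, _ for a failed or missing one. A small sketch of checking a status line for degradation (the line is hard-coded here; normally it would be read from /proc/mdstat):

```shell
# Sketch: flag a degraded array by looking for "_" in the [UU] field.
# Status line hard-coded for illustration.
status='208576 blocks [2/2] [UU]'
case "$status" in
  *_*) echo degraded ;;
  *)   echo healthy ;;
esac
```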

6. Mount the RAID device
[root@primary ~]# mount /dev/md2 /opt
[root@primary ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             8.4G  5.8G  2.2G  73% /
/dev/sda1              99M  8.4M   86M   9% /boot
none                  252M     0  252M   0% /dev/shm
/dev/md2              198M  5.8M  182M   4% /opt


7. Configure /etc/mdadm.conf
First obtain the array information:
[root@primary opt]# mdadm -Ds
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=97e0cb8d:3613c0eb:6d2b2a87:be1c8030
devices=/dev/md0,/dev/md1
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=634ab4f9:92d40a05:3b6d00ca:d28a2683
devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=fe4f0d31:32580633:45d6f507:d0b7d41a
devices=/dev/sdb1,/dev/sdc1

Then edit /etc/mdadm.conf and add the following:
[root@primary opt]# vi /etc/mdadm.conf

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/md0 /dev/md1
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=97e0cb8d:3613c0eb:6d2b2a87:be1c8030
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=634ab4f9:92d40a05:3b6d00ca:d28a2683
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=fe4f0d31:32580633:45d6f507:d0b7d41a

8. Mount automatically at boot
To have the system mount the RAID device automatically after boot, add the following line to /etc/fstab:
vi /etc/fstab
/dev/md2 /opt ext3 defaults 0 0

This completes the RAID 10 configuration.

From the ITPUB blog; link: http://blog.itpub.net/312079/viewspace-245787/. Please credit the source when reposting; unauthorized reproduction may be pursued legally.
