Implementing software RAID on Linux with mdadm (repost)
This article briefly shows how to implement RAID in software (using RAID0 as the example).
There are two tools that implement software RAID: raidtools and mdadm.
raidtools is what RHEL3 used, but in RHEL4 I could not find raidtools, only mdadm, so Red Hat apparently favors mdadm.
This article therefore uses mdadm throughout.
1. Check the current disks
[root@primary /]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1114 8843782+ 83 Linux
/dev/sda3 1115 1305 1534207+ 82 Linux swap
Disk /dev/sdb: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdc doesn't contain a valid partition table
2. Partition the disks
A RAID set is normally built from multiple disks. You can also build one from several partitions of the same disk, but that is pointless: all I/O still hits one spindle and a single disk failure loses everything.
[root@primary /]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-102, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@primary /]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-102, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
The partition layout now looks like this:
[root@primary /]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1114 8843782+ 83 Linux
/dev/sda3 1115 1305 1534207+ 82 Linux swap
Disk /dev/sdb: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 102 104432 83 Linux
Disk /dev/sdc: 107 MB, 107374080 bytes
64 heads, 32 sectors/track, 102 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 102 104432 83 Linux
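The interactive fdisk sessions above can also be scripted with sfdisk. A minimal sketch, assuming blank disks as in this setup; the helper below only prints the commands so they can be reviewed before running them as root:

```shell
# Print the sfdisk invocation that creates one full-disk Linux (type 83)
# partition on the given device; ',,83' means default start, full size, type 83.
partition_cmd() {
    printf "echo ',,83' | sfdisk %s\n" "$1"
}

partition_cmd /dev/sdb
partition_cmd /dev/sdc
# Review the printed commands, then execute them as root.
```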
3. Create the RAID0 array
[root@primary /]# mdadm --create /dev/md0 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
4. Format the RAID device
[root@primary /]# mkfs.ext3 /dev/md0
mke2fs 1.35 (28-Feb-2004)
max_blocks 213647360, rsv_groups = 26080, rsv_gdb = 256
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208640 blocks
10432 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801
Writing inode tables: done
inode.i_blocks = 3586, i_size = 67383296
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
5. Mount the RAID device
[root@primary /]# mount /dev/md0 /opt
[root@primary /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 8.4G 5.7G 2.3G 73% /
/dev/sda1 99M 8.4M 86M 9% /boot
none 252M 0 252M 0% /dev/shm
/dev/hdc 161M 161M 0 100% /media/cdrom
/dev/md0 198M 5.8M 182M 4% /opt
6. View RAID details
[root@primary opt]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun Jul 8 22:54:28 2007
Raid Level : raid0
Array Size : 208640 (203.75 MiB 213.65 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Jul 8 22:54:29 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 8K
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
UUID : a86f0502:df5715c0:fd871bbc:9f75e0ad
Events : 0.1
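The UUID shown by mdadm --detail is what later goes into mdadm.conf, so it is handy to pull it out in a script. A sketch using awk; the sample input here is taken from the --detail output above, and in real use you would pipe the live command instead:

```shell
# Extract the value of the "UUID : ..." line from `mdadm --detail` output.
get_uuid() {
    awk -F' : ' '/UUID/ {print $2}'
}

# Real use (as root):  mdadm --detail /dev/md0 | get_uuid
printf 'State : clean\nUUID : a86f0502:df5715c0:fd871bbc:9f75e0ad\nEvents : 0.1\n' | get_uuid
```

Splitting on the literal " : " separator keeps the UUID intact even though it contains colons itself.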
7. Create the mdadm configuration file
mdadm's default configuration file is /etc/mdadm.conf. It exists to simplify day-to-day array management and is not strictly required for the array to work, but completing this step saves trouble later.
mdadm.conf contains two kinds of lines: DEVICE lines, which list the devices that belong to arrays, and ARRAY lines, which record each array's name, level, number of active devices, and UUID.
The information mdadm.conf needs can be obtained with mdadm -Ds:
[root@primary ~]# mdadm -Ds
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=63f24968:d367038d:f207e458:9a803df9
devices=/dev/sdb1,/dev/sdc1
Edit /etc/mdadm.conf based on the output above, as follows:
[root@primary ~]# more /etc/mdadm.conf
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=63f24968:d367038d:f207e458:9a803df9
DEVICE /dev/sdb1 /dev/sdc1
Without this file, the array is not auto-assembled after a reboot, and mounting the RAID device fails:
[root@primary ~]# mount /dev/md0 /opt
/dev/md0: Invalid argument
mount: /dev/md0: can't read superblock
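Even without the config file, the array can be brought back manually after a reboot by naming its member partitions; with the file in place, `mdadm -As` scans /etc/mdadm.conf and assembles everything. A sketch for this setup; the helper only prints the commands, to be run as root:

```shell
# Print the commands that reassemble and remount this array after a reboot.
assemble_cmds() {
    echo "mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1"
    echo "mount /dev/md0 /opt"
}

assemble_cmds
# Review the printed commands, then execute them as root.
```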
8. Mount at boot
To mount the RAID device automatically at boot, add the following line to /etc/fstab:
vi /etc/fstab
/dev/md0 /opt ext3 defaults 0 0
That completes the RAID0 setup. Other RAID levels can be configured in much the same way; see the mdadm man page for details.
Creating RAID10 with mdadm
RAID0 combined with RAID1 is also called RAID10. It provides both data redundancy and good performance, and is the RAID level enterprises use most.
This part covers building a software RAID10 with mdadm.
Test environment: VMware + Linux AS4 (RHEL4).
1. Add four virtual SCSI disks in VMware as the test disks
(omitted)
2. Partition the four disks
(omitted)
3. Create the arrays
The order is: first build two RAID0 arrays, then combine them into one RAID1. (Strictly speaking, striping first and mirroring on top is RAID 0+1; mdadm can also create a native RAID10 directly with --level=10.)
-- Create the first RAID0
[root@primary ~]# mdadm --create /dev/md0 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
-- Create the second RAID0
[root@primary ~]# mdadm --create /dev/md1 --level=raid0 --chunk=8 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm: array /dev/md1 started.
-- Build a RAID1 from the two RAID0 arrays above
[root@primary ~]# mdadm --create /dev/md2 --level=raid1 --chunk=8 --raid-devices=2 /dev/md0 /dev/md1
mdadm: array /dev/md2 started.
4. Format the RAID device
Note: in a layered RAID, only the top-level device needs to be formatted, no matter how many layers the stack has.
Here md2 is the top-level device, so formatting it alone is enough.
[root@primary ~]# mkfs.ext3 /dev/md2
mke2fs 1.35 (28-Feb-2004)
max_blocks 213581824, rsv_groups = 26072, rsv_gdb = 256
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208576 blocks
10428 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801
Writing inode tables: done
inode.i_blocks = 3586, i_size = 67383296
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
5. View the current RAID status
[root@primary opt]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 md1[1] md0[0]
208576 blocks [2/2] [UU]
md1 : active raid0 sde1[1] sdd1[0]
208640 blocks 8k chunks
md0 : active raid0 sdc1[1] sdb1[0]
208640 blocks 8k chunks
unused devices: <none>
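The /proc/mdstat output above is easy to summarize in scripts, for example to list each md device together with its RAID level. A sketch using awk; the sample lines here are copied from the output above, and in real use you would read /proc/mdstat directly:

```shell
# Reduce "mdX : active LEVEL member[n] ..." lines to "mdX LEVEL".
mdstat_summary() {
    awk '/^md[0-9]+ : / {print $1, $4}'
}

# Real use:  mdstat_summary < /proc/mdstat
printf '%s\n' \
  'md2 : active raid1 md1[1] md0[0]' \
  'md1 : active raid0 sde1[1] sdd1[0]' \
  'md0 : active raid0 sdc1[1] sdb1[0]' | mdstat_summary
```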
6. Mount the RAID device
[root@primary ~]# mount /dev/md2 /opt
[root@primary ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 8.4G 5.8G 2.2G 73% /
/dev/sda1 99M 8.4M 86M 9% /boot
none 252M 0 252M 0% /dev/shm
/dev/md2 198M 5.8M 182M 4% /opt
7. Configure /etc/mdadm.conf
First get the array information:
[root@primary opt]# mdadm -Ds
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=97e0cb8d:3613c0eb:6d2b2a87:be1c8030
devices=/dev/md0,/dev/md1
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=634ab4f9:92d40a05:3b6d00ca:d28a2683
devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=fe4f0d31:32580633:45d6f507:d0b7d41a
devices=/dev/sdb1,/dev/sdc1
Then edit /etc/mdadm.conf and add the following:
[root@primary opt]# vi /etc/mdadm.conf
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=97e0cb8d:3613c0eb:6d2b2a87:be1c8030
DEVICE /dev/md0 /dev/md1
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=634ab4f9:92d40a05:3b6d00ca:d28a2683
DEVICE /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=fe4f0d31:32580633:45d6f507:d0b7d41a
DEVICE /dev/sdb1 /dev/sdc1
8. Mount at boot
To mount the RAID device automatically at boot, add the following line to /etc/fstab:
vi /etc/fstab
/dev/md2 /opt ext3 defaults 0 0
This completes the RAID10 setup.
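As noted earlier, newer mdadm versions can also build a native RAID10 in a single step with --level=10, avoiding the nested md0/md1/md2 stack. A sketch for the four test disks; the helper prints the command so it can be reviewed before running it as root:

```shell
# Print a single-step native-RAID10 create command for the given members.
raid10_cmd() {
    echo "mdadm --create /dev/md0 --level=10 --raid-devices=$# $*"
}

raid10_cmd /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Review the printed command, then execute it as root.
```

The native level also lets the kernel manage the whole set as one array, so a failed disk shows up directly in a single mdadm --detail.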
From "ITPUB Blog", link: http://blog.itpub.net/312079/viewspace-245787/. Please credit the source when reposting.