Linux software RAID: configuration, testing, and removal
Software RAID is RAID configured at the operating-system level, and it also protects data. In real production environments, redundancy is more commonly provided by the disk arrays of a storage system and by hardware RAID.
For hardware RAID configuration, see my blog post: http://blog.itpub.net/27771627/viewspace-1246405/
Environment: VMware Workstation
OS version: RHEL 7.0
Three disks were added to the virtual machine; sdb, sdc, and sdd are used for the configuration.
[root@rh ~]# fdisk -l|grep sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 41943039 20458496 8e Linux LVM
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Partition each of the three disks.
[root@rh ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Partition the other two disks in the same way.
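As a shortcut (not used in the session above, and assuming /dev/sdc and /dev/sdd are the same size as /dev/sdb), the partition table can simply be copied with sfdisk instead of repeating the interactive dialogue:
[root@rh ~]# sfdisk -d /dev/sdb | sfdisk /dev/sdc
[root@rh ~]# sfdisk -d /dev/sdb | sfdisk /dev/sdd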
Create the RAID 5 array
[root@rh ~]# mdadm -C /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
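Here -C (--create) creates a new array, -l5 (--level=5) selects RAID 5, and -n3 (--raid-devices=3) sets the number of member devices to three; the equivalent long-form invocation would be:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1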
Check the RAID status; the initial sync completes gradually.
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=========>...........] recovery = 47.8% (5010300/10476032) finish=0.6min speed=135439K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[============>........] recovery = 64.6% (6773352/10476032) finish=0.4min speed=131099K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=============>.......] recovery = 67.7% (7098984/10476032) finish=0.4min speed=128510K/sec
unused devices: <none>
[root@rh ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
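The initial parity sync is now finished ([3/3] [UUU]). Optionally (a step not shown in the original walkthrough), the array definition can be recorded in /etc/mdadm.conf so it is assembled under a stable name at boot:
[root@rh ~]# mdadm --detail --scan >> /etc/mdadm.conf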
Query the array with the following command:
[root@rh ~]# mdadm --query /dev/md0
/dev/md0: 19.98GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
View the details with -D (or the long form, mdadm --detail /dev/md0):
[root@rh ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 2 11:49:09 2014
Raid Level : raid5
Array Size : 20952064 (19.98 GiB 21.45 GB)
Used Dev Size : 10476032 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Aug 2 11:50:32 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : rh.ol.com:0 (local to host rh.ol.com)
UUID : 3c2cfd0d:08646c79:601cde6a:cdf532d7
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
The long-form command shows the same details:
[root@rh ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 2 11:49:09 2014
Raid Level : raid5
Array Size : 20952064 (19.98 GiB 21.45 GB)
Used Dev Size : 10476032 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Aug 2 11:53:44 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : rh.ol.com:0 (local to host rh.ol.com)
UUID : 3c2cfd0d:08646c79:601cde6a:cdf532d7
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
Create an XFS filesystem on /dev/md0
[root@rh ~]# mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0 isize=256 agcount=16, agsize=327296 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=5236736, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
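Note that mkfs.xfs picks up the stripe geometry from the md device: sunit=128 blocks × 4096 bytes = 512 KiB, matching the chunk size, and swidth=256 blocks = 2 × sunit, matching the two data disks of a three-disk RAID 5. The warning at the top only means the log stripe unit is capped at 256 KiB, so mkfs falls back to 32 KiB for the log.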
Create a mount point and mount the array
[root@rh ~]# mkdir /md0
[root@rh ~]# mount /dev/md0 /md0
[root@rh ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 18G 3.5G 15G 20% /
devtmpfs 485M 0 485M 0% /dev
tmpfs 494M 148K 494M 1% /dev/shm
tmpfs 494M 7.2M 487M 2% /run
tmpfs 494M 0 494M 0% /sys/fs/cgroup
/dev/sda1 497M 119M 379M 24% /boot
/dev/sr0 4.0G 4.0G 0 100% /run/media/liuzhen/RHEL-7.0 Server.x86_64
/dev/md0 20G 33M 20G 1% /md0
Add an entry to /etc/fstab so it is mounted automatically at boot
[root@rh ~]# echo /dev/md0 /md0 xfs defaults 1 2 >> /etc/fstab
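The md device name is expected to stay stable here, but an alternative (not used in the original) is to mount by filesystem UUID, which is independent of any md renumbering. Read the UUID with blkid and substitute it for the placeholder below; the last two fields are commonly 0 0 for XFS, since fsck.xfs is a no-op:
[root@rh ~]# blkid /dev/md0
[root@rh ~]# echo 'UUID=<uuid-from-blkid> /md0 xfs defaults 0 0' >> /etc/fstab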
Check the filesystem type and mount options
[root@rh ~]# findmnt /md0
TARGET SOURCE FSTYPE OPTIONS
/md0 /dev/md0 xfs rw,relatime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota
Failure and recovery test: first create some files under the /md0 directory
[root@rh md0]# ls
testRAID testRAID1
[root@rh md0]# pwd
/md0
Mark the /dev/sdb1 member as faulty
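The command that fails the member is not captured in the original output; it would typically be the following, marking the partition (not the whole disk) as faulty:
[root@rh md0]# mdadm /dev/md0 -f /dev/sdb1
Checking /proc/mdstat afterwards shows sdb1 flagged with (F) and the array running degraded: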
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0](F)
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
unused devices: <none>
Shut down, remove the sdb disk from the virtual machine, and add a new disk.
Check the current status after booting back up
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
View the details
[root@rh md0]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Aug 2 11:49:09 2014
Raid Level : raid5
Array Size : 20952064 (19.98 GiB 21.45 GB)
Used Dev Size : 10476032 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Aug 2 20:26:49 2014
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : rh.ol.com:0 (local to host rh.ol.com)
UUID : 3c2cfd0d:08646c79:601cde6a:cdf532d7
Events : 24
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1
Partition the newly added disk, sde
[root@rh md0]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x93106f1e.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rh md0]# mdadm /dev/md0 -a /dev/sde1
mdadm: added /dev/sde1
Watch the recovery progress
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=====>...............] recovery = 28.3% (2966160/10476032) finish=0.9min speed=128963K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[======>..............] recovery = 33.8% (3546676/10476032) finish=0.8min speed=131358K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=========>...........] recovery = 48.3% (5065596/10476032) finish=0.6min speed=140007K/sec
unused devices: <none>
[root@rh md0]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdc1[1] sdd1[3]
20952064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
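Instead of polling /proc/mdstat by hand, the rebuild can be followed with watch, or mdadm can block until it finishes (neither command appears in the original session):
[root@rh md0]# watch -n 5 cat /proc/mdstat
[root@rh md0]# mdadm --wait /dev/md0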
Recovery complete; query the array again
[root@rh md0]# mdadm --query /dev/md0
/dev/md0: 19.98GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
The files on the filesystem are still there
[root@rh md0]# ls
testRAID testRAID1
[root@rh md0]# pwd
/md0
Stopping and removing the software RAID
Unmount the filesystem and remove the corresponding line from /etc/fstab (see the sed sketch below).
[root@rh ~]# umount /md0
[root@rh ~]#
[root@rh ~]#
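The fstab entry added earlier also has to go; either edit the file or, for example, delete the line with sed (this assumes the entry is the only line in /etc/fstab that starts with /dev/md0):
[root@rh ~]# sed -i '/^\/dev\/md0/d' /etc/fstab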
Stop the array
[root@rh ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
Check the RAID status again; the array no longer exists.
[root@rh ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
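To reuse the member partitions cleanly, the md metadata can also be wiped from them (a cleanup step not shown above, assuming the members are still sdc1, sdd1, and sde1):
[root@rh ~]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1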