In real-world work you often run into servers whose disks are too small while the application needs one large partition for data storage. Besides merging multiple disks with a hardware RAID card, we can also achieve this in software.
Experiment environment
- Virtual machine: CentOS 6.6 x64
- Disk 1: /dev/sdb
- Disk 2: /dev/sdc
- Disk 3: /dev/sdd
The detailed disk list is shown below:
Merging disks with LVM
The goal of using LVM (Logical Volume Manager) is:
To combine multiple independent disks into one logical volume and mount it at a single mount point, so that the full capacity of all the disks is available under one directory.
LVM concepts
- PV (Physical Volume)
After a disk has been partitioned but before the partition is formatted with a filesystem, it can be turned into a PV with the pvcreate command. The partition's system ID should be 8e, the type identifier for Linux LVM.
- VG (Volume Group)
Multiple PVs are combined into a volume group with vgcreate; a VG can therefore contain several PVs, acting like a new disk reassembled from several partitions. When a VG is created, all of its space is divided into PEs of a specified size. In LVM, storage is always handled in units of PEs, much like blocks in a filesystem.
- PE (Physical Extent)
The PE is the VG's allocation unit; actual data is stored in PEs.
- LV (Logical Volume)
If a VG is like a disk assembled from several physical disks, then an LV is like a partition, except that it is carved out of the VG. A VG holds many PEs; you can either specify how many PEs go into an LV or simply specify a size. Once created, an LV only needs a filesystem before it can be used.
- LE (Logical Extent)
PEs are physical storage units, while LEs are the corresponding logical units inside an LV; they have the same size as PEs. Carving an LV out of a VG really means allocating PEs from the VG; once allocated to an LV they are called LEs rather than PEs.
LVM can grow and shrink capacity precisely because extents can be added to or removed from an LV.
- LVM storage mechanisms
Because an LV is carved out of a VG, its PEs may come from several PVs. When data is written to an LV, there are two allocation policies:
- Linear mode: data first fills the PEs belonging to one PV, then moves on to the PEs of the next PV.
- Striped mode: data is split into chunks and written across all the PVs backing the LV, similar to RAID 0, so read/write performance is better than in linear mode.
Although striped mode performs better, LVM's focus is capacity expansion rather than performance; if performance is the goal, RAID is still the recommended approach.
- LVM schematic diagram
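The two policies above are chosen at LV-creation time. A minimal sketch, assuming the TestLVM volume group used later in this article (the commands are wrapped in shell functions so the snippet is safe to source without root; -i sets the stripe count and -I the stripe size in KiB):

```shell
# Illustrative only; run the functions as root against a real VG.
make_linear_lv() {
  # Linear (default): PEs fill one PV before moving to the next.
  lvcreate -L 60G -n LinearLV TestLVM
}
make_striped_lv() {
  # Striped across 3 PVs with a 64 KiB stripe size, RAID 0-style.
  lvcreate -L 60G -i 3 -I 64 -n StripedLV TestLVM
}
echo "LV helpers defined"
```

Note that a striped LV cannot later be extended onto fewer PVs than its stripe count, one more reason the article sticks to linear mode.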
Steps to create the LVM
1. Create the PVs (Physical Volume)
[root@Wine ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd
Physical volume "/dev/sdb" successfully created
Physical volume "/dev/sdc" successfully created
Physical volume "/dev/sdd" successfully created
2. View the created PV list
[root@Wine ~]# pvs # list the PVs
PV VG Fmt Attr PSize PFree
/dev/sdb lvm2 --- 40.00g 40.00g
/dev/sdc lvm2 --- 50.00g 50.00g
/dev/sdd lvm2 --- 30.00g 30.00g
or
[root@Wine ~]# pvscan
PV /dev/sdb lvm2 [40.00 GiB]
PV /dev/sdc lvm2 [50.00 GiB]
PV /dev/sdd lvm2 [30.00 GiB]
Total: 3 [120.00 GiB] / in use: 0 [0 ] / in no VG: 3 [120.00 GiB]
[root@Wine ~]# pvdisplay # show detailed PV information
"/dev/sdb" is a new physical volume of "40.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 40.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 9vAxyC-FsAc-S2HA-aCze-MZe5-em24-7th27s
"/dev/sdc" is a new physical volume of "50.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdc
VG Name
PV Size 50.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID HdbCuK-hFkP-QQbr-Naaa-PNzz-WFNw-78uXs3
"/dev/sdd" is a new physical volume of "30.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdd
VG Name
PV Size 30.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID EpPdAf-ku4b-zIMm-V2Np-gnuC-59nj-L0Zd9G
3. Create the VG (Volume Group)
The usage of vgcreate is as follows:
vgcreate [VG name] [device ...]
[root@Wine ~]# vgcreate TestLVM /dev/sdb # create the volume group
Volume group "TestLVM" successfully created
[root@Wine ~]# vgdisplay # show VG details
--- Volume group ---
VG Name TestLVM
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 40.00 GiB
PE Size 4.00 MiB
Total PE 10239
Alloc PE / Size 0 / 0
Free PE / Size 10239 / 40.00 GiB
VG UUID s0gVkf-FScU-7x9v-HIx3-cinR-Sc60-IHgKmn
4. Add PVs to the VG
The usage of vgextend is as follows:
vgextend [VG name] [device ...]
[root@Wine ~]# vgextend TestLVM /dev/sdc /dev/sdd # add the remaining PVs into the same VG
Volume group "TestLVM" successfully extended
[root@Wine ~]# vgdisplay # view the extended VG
--- Volume group ---
VG Name TestLVM
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 119.99 GiB # note the difference from before
PE Size 4.00 MiB
Total PE 30717
Alloc PE / Size 0 / 0
Free PE / Size 30717 / 119.99 GiB
VG UUID s0gVkf-FScU-7x9v-HIx3-cinR-Sc60-IHgKmn
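As a sanity check of the PE bookkeeping: with the default 4 MiB PE size, each PV here also gives up one extent's worth of space to LVM metadata (an assumption that matches these numbers; the exact overhead can vary with metadata settings), which is why the three disks yield 10239 + 12799 + 7679 = 30717 extents rather than a round 30720:

```shell
# Recompute Total PE from the raw disk sizes (PE = 4 MiB; one extent
# per PV assumed lost to metadata, matching the vgdisplay output).
pe() { echo $(( $1 * 1024 / 4 - 1 )); }    # GiB in, usable PEs out
total=$(( $(pe 40) + $(pe 50) + $(pe 30) ))
echo "$total PEs = $(( total * 4 )) MiB"   # prints: 30717 PEs = 122868 MiB
```

122868 MiB is 119.98+ GiB, which vgdisplay rounds to the 119.99 GiB shown above.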
5. Create the LV (Logical Volume)
The usage of lvcreate is as follows:
lvcreate -L [size] -n [LV name] [VG name]
or
lvcreate -l [extents | percent%{VG|FREE|ORIGIN}] -n [LV name] [VG name]
[root@Wine ~]# lvcreate -l 100%VG -nTestData TestLVM # create the LV over the whole VG
Logical volume "TestData" created
[root@Wine ~]# lvscan # list the created LVs
ACTIVE '/dev/TestLVM/TestData' [119.99 GiB] inherit
[root@Wine ~]# lvdisplay # show details of the created LV
--- Logical volume ---
LV Path /dev/TestLVM/TestData
LV Name TestData
VG Name TestLVM
LV UUID 2zvNe9-dtlv-pcWc-oTnJ-6INz-e2dI-vRQ7Vq
LV Write Access read/write
LV Creation host, time Wine, 2018-11-14 11:01:56 +0800
LV Status available
# open 0
LV Size 119.99 GiB
Current LE 30717
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
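Since resizing is LVM's main selling point, here is a sketch of how this LV could later be grown into newly added free extents (commands are shown inside a function rather than executed; they assume the LV ends up formatted as ext4, as done in the next step):

```shell
# Sketch; run grow_testdata as root once the VG has free PEs
# (e.g. after another vgextend) and the LV carries ext4.
grow_testdata() {
  lvextend -l +100%FREE /dev/TestLVM/TestData   # claim all free extents
  resize2fs /dev/TestLVM/TestData               # grow ext4 to match
}
echo "grow_testdata defined"
```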
6. Format the partition
[root@Wine ~]# mkfs -t ext4 /dev/TestLVM/TestData # format the LV
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
7864320 inodes, 31454208 blocks
1572710 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
960 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
7. Create the mount point and mount it
[root@Wine ~]# mkdir /LVM
[root@Wine ~]# mount /dev/TestLVM/TestData /LVM/
[root@Wine ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 79G 9.6G 65G 13% /
tmpfs tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sda1 ext4 190M 32M 149M 18% /boot
/dev/mapper/TestLVM-TestData ext4 118G 60M 112G 1% /LVM # the new LVM mount point
8. Add an automatic mount at boot
[root@Wine ~]# echo "/dev/TestLVM/TestData /LVM ext4 defaults 0 0" >> /etc/fstab
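A small defensive variation on the echo above: checking whether the line already exists keeps repeated runs from stacking duplicate fstab entries (demonstrated against a temp file so the snippet is harmless to run; point FSTAB at /etc/fstab for real use):

```shell
FSTAB=$(mktemp)                                  # stand-in for /etc/fstab
LINE='/dev/TestLVM/TestData /LVM ext4 defaults 0 0'
add_mount() { grep -qxF "$LINE" "$FSTAB" || echo "$LINE" >> "$FSTAB"; }
add_mount
add_mount                                        # second call is a no-op
grep -c . "$FSTAB"                               # prints 1, not 2
rm -f "$FSTAB"
```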
Steps to remove the LVM
1. After backing up the data, unmount the mount point and remove the corresponding entry from /etc/fstab
[root@Wine ~]# umount /LVM/;df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 79G 9.6G 65G 13% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sda1 190M 32M 149M 18% /boot
2. Remove the LV
[root@Wine ~]# lvremove /dev/TestLVM/TestData
Do you really want to remove active logical volume TestData? [y/n]: y
Logical volume "TestData" successfully removed
3. Remove the VG
[root@Wine ~]# vgremove TestLVM
Volume group "TestLVM" successfully removed
4. Remove the PVs
[root@Wine ~]# pvremove /dev/sdb /dev/sdc /dev/sdd
Labels on physical volume "/dev/sdb" successfully wiped
Labels on physical volume "/dev/sdc" successfully wiped
Labels on physical volume "/dev/sdd" successfully wiped
Using software RAID
Creating the software RAID
RAID generally comes in two flavors:
- Hardware RAID: RAID functions implemented by a RAID card attached to multiple disks, or by a RAID controller integrated on the server motherboard
- Software RAID: RAID functions emulated at the software level, achieving the same result as hardware RAID
On Linux, software RAID is usually implemented with the md kernel module.
1. Confirm that the mdadm package is installed
[root@Wine ~]# rpm -q mdadm
mdadm-3.3-6.el6.x86_64
2. Partition the disks that will join the array and set the partition type to RAID
[root@Wine ~]# lsblk # show disks and partitions
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sdb 8:16 0 40G 0 disk
sdd 8:48 0 30G 0 disk
sdc 8:32 0 50G 0 disk
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 200M 0 part /boot
└─sda2 8:2 0 79.8G 0 part /
# create the partition
[root@Wine ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x7bfec905.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n # add a new partition
Command action
e extended
p primary partition (1-4)
p # choose primary
Partition number (1-4): 1 # set the partition number
First cylinder (1-5221, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-5221, default 5221):
Using default value 5221
Command (m for help): l # list the known partition types
0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx
5 Extended 42 SFS 86 NTFS volume set da Non-FS data
6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt
9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access
a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
1e Hidden W95 FAT1
Command (m for help): t # change the partition type
Selected partition 1
Hex code (type L to list codes): fd # set the type to Linux RAID
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p # print the partition table
Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7bfec905
Device Boot Start End Blocks Id System
/dev/sdb1 1 5221 41937651 fd Linux raid autodetect
Command (m for help): w # write the partition table and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
fdisk only handles disks smaller than 2 TB; for larger disks, use parted instead.
Preparing a RAID partition with parted looks like this:
[root@Wine ~]# parted /dev/sdc
GNU Parted 2.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y
(parted) mkpart primary 1 -1
(parted) set 1 raid # the key step
New state? [on]/off? on
(parted) print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdc: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB primary raid
3. Create the RAID array with mdadm
[root@Wine ~]# mdadm --create /dev/md0 --auto yes --level 0 -n3 /dev/sd{b,c,d}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@Wine ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sdb 8:16 0 40G 0 disk
└─sdb1 8:17 0 40G 0 part
└─md0 9:0 0 119.9G 0 raid0
sdd 8:48 0 30G 0 disk
└─sdd1 8:49 0 30G 0 part
└─md0 9:0 0 119.9G 0 raid0
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 50G 0 part
└─md0 9:0 0 119.9G 0 raid0
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 200M 0 part /boot
└─sda2 8:2 0 79.8G 0 part /
Parameters used in this command:
- -C/--create: create a new array
- -a/--auto: allow mdadm to create the device file; -a yes is commonly used so it is created in one step
- -l/--level: RAID level; RAID 0/1/4/5/6/10 etc. are supported
- -n/--raid-devices=: number of active disks in the array
- /dev/md0: the device name of the array
- /dev/sd{b,c,d}1: the physical partitions that make up the array
For more on mdadm, see mdadm -h or man mdadm.
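To illustrate the --level option, here is a sketch of creating redundant arrays instead of the RAID 0 used above (not executed here; /dev/sde1 and the md numbers are hypothetical, and mirroring/parity levels trade capacity for redundancy):

```shell
# Illustrative wrappers; run as root against real, unused partitions.
make_raid1() {
  # Two-way mirror: usable size equals the smallest member.
  mdadm --create /dev/md1 --auto=yes --level=1 --raid-devices=2 /dev/sd{b,c}1
}
make_raid5() {
  # RAID 5 over three members plus one hot spare (-x 1).
  mdadm --create /dev/md2 --auto=yes --level=5 --raid-devices=3 -x 1 /dev/sd{b,c,d,e}1
}
echo "raid helpers defined"
```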
After creation, check the array status:
[root@Wine ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
125722624 blocks super 1.2 512k chunks
unused devices: <none>
or use
[root@Wine ~]# mdadm -D /dev/md0 # show software RAID details
/dev/md0:
Version : 1.2
Creation Time : Wed Nov 14 14:36:11 2018
Raid Level : raid0
Array Size : 125722624 (119.90 GiB 128.74 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Wed Nov 14 14:36:11 2018
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : Wine:0 (local to host Wine)
UUID : 2c8da2fd:7729efbd:5e414dd0:9cfb9f5f
Events : 0
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
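The Array Size line reports KiB first; a quick conversion confirms the two figures in the output agree (partition alignment and the per-member RAID superblocks account for the small shortfall versus the raw 120 GiB):

```shell
# 125722624 KiB expressed in GiB (1 GiB = 1048576 KiB)
awk 'BEGIN { printf "%.2f GiB\n", 125722624 / 1048576 }'   # prints 119.90 GiB
```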
4. Create the md0 configuration file
[root@Wine ~]# echo DEVICE /dev/sd{b,c,d}1 >> /etc/mdadm.conf
[root@Wine ~]# mdadm -Evs >> /etc/mdadm.conf
[root@Wine ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md/0 level=raid0 metadata=1.2 num-devices=3 UUID=2c8da2fd:7729efbd:5e414dd0:9cfb9f5f name=Wine:0
devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
5. Format the RAID partition
[root@Wine ~]# mkfs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
7864320 inodes, 31430656 blocks
1571532 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
960 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
6. Add an automatic mount at boot
[root@Wine ~]# blkid | grep /dev/md # mounting by UUID is recommended here
/dev/md0: UUID="40829115-a1c5-4d5a-af4a-07225a4619fc" TYPE="ext4"
[root@Wine ~]# mkdir /SoftRAID # create the mount point first
[root@Wine ~]# echo "UUID=40829115-a1c5-4d5a-af4a-07225a4619fc /SoftRAID ext4 defaults 0 0" >> /etc/fstab # append the entry to /etc/fstab
[root@Wine ~]# mount -a;df -h # mount everything in fstab and verify
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 79G 9.6G 65G 13% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sda1 190M 32M 149M 18% /boot
/dev/md0 118G 60M 112G 1% /SoftRAID
Removing the software RAID
1. Unmount the mount point
[root@Wine ~]# umount /dev/md0
2. Stop the software RAID device
[root@Wine ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
3. Wipe the RAID superblocks from the member partitions
[root@Wine ~]# mdadm --misc --zero-superblock /dev/sd{b,c,d}1
4. Delete the mdadm configuration file
[root@Wine ~]# rm -f /etc/mdadm.conf
5. Remove the mount entry from /etc/fstab
The above are two common ways to merge the capacity of multiple disks on Linux; treat them as a reference only. In real environments, hardware RAID is still recommended: data is priceless.