Distributed Storage: GlusterFS
Hostname      IP address        Environment
glusterfs01   192.168.81.240    CentOS 7.6, two extra 10G disks
glusterfs02   192.168.81.250    CentOS 7.6, two extra 10G disks
glusterfs03   192.168.81.136    CentOS 7.6, two extra 10G disks
1. What is GlusterFS
GlusterFS is an open-source distributed file system with strong scale-out capability. It can support petabytes of storage capacity and thousands of clients, joining machines over the network into a single parallel network file system. It is scalable, high-performance, and highly available.
Common terms:
pool    the storage resource pool (the trusted cluster as a whole)
peer    a node in the pool
volume  a volume; it must be in the started state before it can be used
brick   a storage unit (a disk/directory); bricks can be added and removed
gluster the management command-line tool
When adding nodes, GlusterFS treats the local machine as localhost by default, so you only need to add the other machines; every node acts as a master (there is no dedicated master node).
By default, GlusterFS brick processes listen on ports from 49152 upward (the glusterd management daemon listens on TCP 24007).
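Each of these concepts maps onto a gluster subcommand. As a quick orientation (all standard gluster CLI commands; web_volume01 is the volume name used later in this article):

gluster pool list                    # list every node in the storage pool
gluster peer status                  # show the state of the other peers
gluster volume list                  # list all volumes
gluster volume info web_volume01     # show a volume's bricks and options
gluster volume status web_volume01   # show whether bricks are online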
2. Install GlusterFS
1) Install with yum
[root@192 ~]# yum install centos-release-gluster -y
[root@192 ~]# yum install glusterfs-server -y
2) Start GlusterFS
Perform the following on all nodes; the mkdir commands create the mount points for the bricks used later:
[root@192 ~]# systemctl start glusterd.service
[root@192 ~]# systemctl enable glusterd.service
[root@192 ~]# mkdir -p /gfs/test1
[root@192 ~]# mkdir -p /gfs/test2
3) Configure hosts resolution, then copy the file to the other two nodes:
[root@glusterfs01 ~]# cat /etc/hosts
192.168.81.240 glusterfs01
192.168.81.250 glusterfs02
192.168.81.136 glusterfs03
[root@glusterfs01 ~]# scp /etc/hosts root@192.168.81.250:/etc/
[root@glusterfs01 ~]# scp /etc/hosts root@192.168.81.136:/etc/
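Before probing peers, it is worth confirming that each node can resolve and reach the others. A minimal check, assuming the three hostnames above:

[root@glusterfs01 ~]# for h in glusterfs01 glusterfs02 glusterfs03; do ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"; done
glusterfs01 ok
glusterfs02 ok
glusterfs03 ok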
3. Format the disks and mount them
The configuration is identical on all nodes: format the disks, then mount them.
1. Format
[root@192 ~]# mkfs.xfs /dev/sdb
[root@192 ~]# mkfs.xfs /dev/sdc
2. Get the disk UUIDs
We write the UUIDs into fstab rather than the device names, so the entries still point at the right disks even if the device names change after a reboot.
[root@192 ~]# blkid /dev/sdb /dev/sdc
/dev/sdb: UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" TYPE="xfs"
/dev/sdc: UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" TYPE="xfs"
3. Edit /etc/fstab
[root@192 ~]# vim /etc/fstab
UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" /gfs/test1 xfs defaults 0 0
UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" /gfs/test2 xfs defaults 0 0
4. Mount
[root@192 ~]# mount -a
[root@192 ~]# df -hT | grep gfs
/dev/sdb xfs 10G 33M 10G 1% /gfs/test1
/dev/sdc xfs 10G 33M 10G 1% /gfs/test2
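On a larger fleet, the blkid-to-fstab step can be scripted instead of copied by hand. A minimal sketch, assuming the same /dev/sdb, /dev/sdc and mount points as above:

[root@192 ~]# for pair in '/dev/sdb /gfs/test1' '/dev/sdc /gfs/test2'; do
>   set -- $pair                                  # $1 = device, $2 = mount point
>   uuid=$(blkid -s UUID -o value "$1")           # extract only the UUID
>   echo "UUID=$uuid $2 xfs defaults 0 0" >> /etc/fstab
> done
[root@192 ~]# mount -a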
4. Add nodes to the storage pool
The pool is essentially the cluster: you add nodes into it, and it contains localhost by default.
Run the following on one node (glusterfs01 here).
Check the current pool list:
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
Add the glusterfs02 and glusterfs03 nodes:
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
[root@glusterfs01 ~]# gluster peer probe glusterfs03
peer probe: success.
Check again after adding them:
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7 glusterfs02 Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e glusterfs03 Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
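gluster peer status gives a more detailed view of the same information, one block per remote peer; the output will look roughly like this:

[root@glusterfs01 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs02
Uuid: 07502cd5-4c18-4bde-9bcf-7f29f2a68af7
State: Peer in Cluster (Connected)

Hostname: glusterfs03
Uuid: 5c76e19c-6141-4e95-9446-b3a424cd5f6e
State: Peer in Cluster (Connected)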
Troubleshooting
If peer probe reports an error here, it is usually because hosts resolution was not configured or the firewall was not stopped.
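To clear both causes, configure /etc/hosts as in section 2 and stop the firewall on every node (standard CentOS 7 commands):

[root@192 ~]# systemctl stop firewalld
[root@192 ~]# systemctl disable firewalld

In production you would more likely keep firewalld running and open TCP 24007 (glusterd) plus the brick port range from 49152 upward instead of disabling it outright.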
5. GlusterFS volume management
The volume type used most in production is the distributed-replicated volume.
A distributed-replicated volume lets you choose the replica count. With replica set to 2, every uploaded file is stored as 2 copies: uploading 10 files actually stores 20 files, which provides a degree of redundancy. Those 20 copies are spread across the various nodes.
5.1. Create a distributed-replicated volume
Stop the firewall on all nodes before creating the volume!
[root@glusterfs01 ~]# gluster volume create web_volume01 replica 2 glusterfs01:/gfs/test1 glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
volume create: web_volume01: success: please start the volume to access data
[root@glusterfs01 ~]# gluster volume list
web_volume01
Breakdown of the command:
gluster                  the management command
volume                   operate on volumes
create                   create a volume
web_volume01             the volume name
replica 2                the replica count
glusterfs01:/gfs/test1   add the /gfs/test1 directory on node glusterfs01 to the volume as a brick
force                    force the creation
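Note that GlusterFS warns that replica 2 volumes are prone to split-brain. Since there are three nodes here, a replica 3 variant of the same command would look like this (web_volume03 is a hypothetical name, not used elsewhere in this article):

[root@glusterfs01 ~]# gluster volume create web_volume03 replica 3 glusterfs01:/gfs/test1 glusterfs02:/gfs/test1 glusterfs03:/gfs/test1 force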
5.2. Delete a volume
Stop the volume first, then delete it.
[root@glusterfs01 ~]# gluster volume stop web_volume01
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: web_volume01: success
[root@glusterfs01 ~]# gluster volume delete web_volume01
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
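Deleting a volume does not wipe the brick directories: the data and GlusterFS extended attributes stay behind, so re-creating a volume on the same paths fails with an "already part of a volume" error. A cleanup sketch for one brick (repeat per brick on each node):

[root@glusterfs01 ~]# setfattr -x trusted.glusterfs.volume-id /gfs/test1    # remove the volume-id xattr
[root@glusterfs01 ~]# setfattr -x trusted.gfid /gfs/test1                   # remove the gfid xattr
[root@glusterfs01 ~]# rm -rf /gfs/test1/.glusterfs                          # remove gluster's internal metadata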
5.3. Project example: create a volume named web_volume01 that clients can use
1) Add the storage nodes
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
[root@glusterfs01 ~]# gluster peer probe glusterfs03
peer probe: success.
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7 glusterfs02 Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e glusterfs03 Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
2) Create a distributed-replicated volume
Create it:
[root@glusterfs01 ~]# gluster volume create web_volume01 replica 2 glusterfs01:/gfs/test1 glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
volume create: web_volume01: success: please start the volume to access data
Check it:
[root@glusterfs01 ~]# gluster volume list
web_volume01
Start the volume:
[root@glusterfs01 ~]# gluster volume start web_volume01
volume start: web_volume01: success
3) View the volume information
[root@glusterfs01 ~]# gluster volume info web_volume01
Volume Name: web_volume01
Type: Distributed-Replicate
Volume ID: 4327e3a1-c48d-4442-9230-f0f53b04b35c
Status: Started //Started means the volume is available
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gfs/test1
Brick2: glusterfs01:/gfs/test2
Brick3: glusterfs02:/gfs/test1
Brick4: glusterfs02:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
4) Mount on the client (create the /data_gfs mount point first if it does not already exist)
[root@glusterfs03 ~]# mount -t glusterfs 192.168.81.240:/web_volume01 /data_gfs
[root@glusterfs03 ~]# df -hT | grep '/data_gfs'
192.168.81.240:/web_volume01 fuse.glusterfs 20G 270M 20G 2% /data_gfs
It shows 20G rather than 40G because this is a replicated volume: with replica 2, usable capacity is half of the raw brick capacity (4 bricks x 10G / 2 = 20G).
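To make the client mount survive reboots, you could add it to the client's /etc/fstab; the backup-volfile-servers mount option lets the client fetch the volume definition from another node if 192.168.81.240 happens to be down. A hedged example entry:

192.168.81.240:/web_volume01 /data_gfs glusterfs defaults,_netdev,backup-volfile-servers=192.168.81.250:192.168.81.136 0 0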
5.4. Verify the distribution
Create a few files on the mounted volume:
[root@glusterfs03 ~]# cp /etc/yum.repos.d/* /data_gfs/
Back on the server nodes, check where the files landed; you will find each file stored in two copies:
[root@glusterfs01 ~]# ls /gfs/*
/gfs/test1:
Centos-7.repo epel.repo
/gfs/test2:
Centos-7.repo epel.repo
[root@glusterfs01 ~]# ssh 192.168.81.250 "ls /gfs/*"
root@192.168.81.250's password:
/gfs/test1:
CentOS-Base.repo
CentOS-Gluster-7.repo
CentOS-Storage-common.repo
/gfs/test2:
CentOS-Base.repo
CentOS-Gluster-7.repo
CentOS-Storage-common.repo
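Instead of ssh-ing into every node, you can also ask GlusterFS directly which bricks hold a file, via the trusted.glusterfs.pathinfo virtual extended attribute on the client mount (getfattr comes from the attr package):

[root@glusterfs03 ~]# getfattr -n trusted.glusterfs.pathinfo /data_gfs/epel.repo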
5.5. Expanding a gluster volume
Syntax: gluster volume add-brick <volume> <node>:<brick-path> force
1) Expand
[root@glusterfs01 ~]# gluster volume add-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 force
volume add-brick: success
2) View the volume information
[root@glusterfs01 ~]# gluster volume info web_volume01
Volume Name: web_volume01
Type: Distributed-Replicate
Volume ID: 4327e3a1-c48d-4442-9230-f0f53b04b35c
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6 #three replica sets of two bricks each, six bricks in total (here: three nodes, two disks per node)
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gfs/test1
Brick2: glusterfs01:/gfs/test2
Brick3: glusterfs02:/gfs/test1
Brick4: glusterfs02:/gfs/test2
Brick5: glusterfs03:/gfs/test1
Brick6: glusterfs03:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
3) Refresh on the client
Just run df again:
[root@glusterfs03 ~]# df -hT | grep '/data_gfs'
192.168.81.240:/web_volume01 fuse.glusterfs 30G 404M 30G 2% /data_gfs
5.6. Rebalance the layout after expansion
Syntax: gluster volume rebalance <volume> start
[root@glusterfs01 ~]# gluster volume rebalance web_volume01 start
volume rebalance: web_volume01: success: Rebalance on web_volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: c8e0f0cf-e1d1-4da5-ae79-90ec6e9db72e
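As the message says, progress can be tracked with the status subcommand until every node reports completed:

[root@glusterfs01 ~]# gluster volume rebalance web_volume01 status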
5.7. Shrinking a volume
Shrinking first migrates all files off the removed bricks.
Syntax: gluster volume remove-brick <volume> <node>:<brick-path> start
[root@glusterfs01 ~]# gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 start
It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: b7ba1075-3bf0-40b3-adaf-9496beee2afc
[root@glusterfs01 ~]# ssh 192.168.81.136 "ls /gfs/*"
root@192.168.81.136's password:
/gfs/test1:
/gfs/test2:
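Note that remove-brick start only begins migrating data off the bricks. Once the status subcommand reports the migration as completed, finalize the removal with commit; until then, the bricks are still part of the volume:

[root@glusterfs01 ~]# gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 status
[root@glusterfs01 ~]# gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 commit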
-----------------------------------
Reposted from:
© Copyright belongs to the author: an original work from the 51CTO blog 江曉龍的技術部落格; please contact the author for reprint authorization, otherwise legal liability may be pursued.
分散式儲存glusterfs詳解 (Distributed Storage GlusterFS Explained)
https://blog.51cto.com/jiangxl/4637873