GlusterFS Distributed Storage Cluster Deployment Notes - Supplement

Posted by 散盡浮華 on 2018-04-10

 

Following up on the previous post, "GlusterFS Distributed Storage Cluster Deployment on CentOS 7", this article records some additional operations to deepen understanding of, and familiarity with, GlusterFS storage administration.

======================== Cleaning up the GlusterFS storage environment ========================

As shown above, this GlusterFS storage cluster has four nodes:
[root@GlusterFS-master ~]# cat /etc/hosts
.......
192.168.10.239  GlusterFS-master
192.168.10.212  GlusterFS-slave
192.168.10.204  GlusterFS-slave2
192.168.10.220  GlusterFS-slave3
 
Now remove the storage directory /opt/gluster/data on all four nodes:
[root@GlusterFS-master ~]# rm -rf /opt/gluster
[root@GlusterFS-slave ~]# rm -rf  /opt/gluster
[root@GlusterFS-slave2 ~]# rm -rf  /opt/gluster
[root@GlusterFS-slave3 ~]# rm -rf  /opt/gluster

[root@GlusterFS-master ~]# gluster volume list
models

[root@GlusterFS-master ~]# gluster volume info
Volume Name: models
Type: Distributed-Replicate
Volume ID: f1945b0b-67d6-4202-9198-639244ab0a6a
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/opt/gluster/data
Brick2: 192.168.10.212:/opt/gluster/data
Brick3: 192.168.10.204:/opt/gluster/data
Brick4: 192.168.10.220:/opt/gluster/data
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 32
performance.flush-behind: on
performance.cache-size: 128MB
features.quota: on
 
Next, delete the models volume created earlier:
[root@GlusterFS-master ~]# gluster volume stop models
[root@GlusterFS-master ~]# gluster volume delete models
 
[root@GlusterFS-master ~]# gluster volume info
No volumes present
 
Check the cluster peers. As shown below, the cluster reports three peers.
Because the command is run on 192.168.10.239, the local node does not list itself.
Running the same command on any other node would show 192.168.10.239 as a peer.
[root@GlusterFS-master ~]# gluster peer status  
Number of Peers: 3
 
Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)
 
Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)
 
Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)
 
Then detach the nodes from the cluster one by one (a node cannot detach itself from its own gluster shell):
[root@GlusterFS-master ~]# gluster                               //these operations can also be run from the interactive gluster shell
gluster> peer detach 192.168.10.220
peer detach: success
gluster> peer detach 192.168.10.204
peer detach: success
gluster> peer detach 192.168.10.212
peer detach: success
gluster> peer detach 192.168.10.239
peer detach: failed: 192.168.10.239 is localhost                //a node cannot detach itself; do it from another node.
gluster>
 
Log in to another node and detach 192.168.10.239 from the cluster:
[root@GlusterFS-slave ~]# gluster
gluster> peer detach 192.168.10.239
peer detach: success
gluster>
 
Check the cluster again; no peers are left:
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 0
 
All of the related commands can also be run from the interactive gluster shell:
[root@GlusterFS-master ~]# gluster
gluster> volume info
No volumes present
gluster> peer status
Number of Peers: 0
gluster>

===================== Creating a virtual partition with dd and setting up the storage directory =====================

First, use dd to create a virtual partition and set up the storage directory.
[root@GlusterFS-master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   36G  1.8G   34G   5% /
devtmpfs                 2.9G     0  2.9G   0% /dev
tmpfs                    2.9G     0  2.9G   0% /dev/shm
tmpfs                    2.9G  8.5M  2.9G   1% /run
tmpfs                    2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   18G   33M   18G   1% /home
tmpfs                    581M     0  581M   0% /run/user/0

Use dd to create a virtual partition, format it, and mount it at /data. (Note: the dd command below writes a new regular file at /dev/vdb1 using /dev/vda1 as input; that file is then formatted and loop-mounted, which is why /dev/loop0 shows up later.)
[root@GlusterFS-master ~]# dd if=/dev/vda1 of=/dev/vdb1
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 2.0979 s, 512 MB/s

[root@GlusterFS-master ~]# du -sh /dev/vdb1
1.0G  /dev/vdb1

[root@GlusterFS-master ~]# mkfs.xfs -f /dev/vdb1                        //formatted as xfs here; ext4 would also work.
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


[root@GlusterFS-master ~]# mkdir /data

[root@GlusterFS-master ~]# mount /dev/vdb1 /data

[root@GlusterFS-master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   36G  1.8G   34G   5% /
devtmpfs                 2.9G   34M  2.8G   2% /dev
tmpfs                    2.9G     0  2.9G   0% /dev/shm
tmpfs                    2.9G  8.5M  2.9G   1% /run
tmpfs                    2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   18G   33M   18G   1% /home
tmpfs                    581M     0  581M   0% /run/user/0
/dev/loop0               976M  2.6M  907M   1% /data

[root@GlusterFS-master ~]# fdisk -l
.......
Disk /dev/loop0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Configure automatic mounting at boot:
[root@GlusterFS-master ~]# echo '/dev/loop0 /data xfs defaults 1 2' >> /etc/fstab
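Note: the fstab entry above refers to /dev/loop0, which only exists because the file-backed filesystem happens to be loop-mounted in the current session; after a reboot the loop device may not be set up again. A more portable variant is sketched below, assuming you are free to pick a backing-file path (the path /opt/gluster-disk.img is only an example):

# create a 1 GB backing file, format it, and mount it through a loop device
dd if=/dev/zero of=/opt/gluster-disk.img bs=1M count=1024
mkfs.xfs -f /opt/gluster-disk.img
mkdir -p /data
mount -o loop /opt/gluster-disk.img /data
# fstab entry that survives reboots (mount sets up the loop device itself)
echo '/opt/gluster-disk.img /data xfs defaults,loop 0 0' >> /etc/fstab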

Then create the gluster storage directory:
[root@GlusterFS-master ~]# mkdir /data/gluster

The steps above must be repeated on all four nodes so that every node has the storage directory environment in place (see the loop sketch below).
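If passwordless SSH between the nodes is available, the per-node setup can be driven from one machine with a small loop (a sketch only; the host list matches /etc/hosts above and the command mirrors the directory-creation step shown in this section):

for node in 192.168.10.239 192.168.10.212 192.168.10.204 192.168.10.220; do
    ssh root@$node "mkdir -p /data/gluster"
done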

================ Creating a distributed (hash) volume and related management operations ================

Next, add the nodes back into the trusted pool. This is done here on the GlusterFS-master node:
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.212
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.204
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.220
peer probe: success. 

[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 3

Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)


Checking from another node shows that GlusterFS-master (192.168.10.239) is also part of the cluster:
[root@GlusterFS-slave ~]# gluster peer status
Number of Peers: 3

Hostname: GlusterFS-master
Uuid: 5dfd40e2-096b-40b5-bee3-003b57a39007
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)

----------------------------------------------------------
Now create the volume. The command is run on GlusterFS-master; with no type specified, a distributed (hash) volume is created by default.
As shown below, the gluster directory under /data on node 192.168.10.212 is created automatically (it does not need to be created by hand beforehand):
[root@GlusterFS-master ~]# gluster volume create gluster_data 192.168.10.212:/data/gluster force
volume create: gluster_data: success: please start the volume to access data

Logging in to node 192.168.10.212 confirms that the gluster directory was indeed created automatically under /data:
[root@GlusterFS-slave ~]# ls /data/gluster
[root@GlusterFS-slave ~]# ls /data/                     //the /data partition is 1G
gluster

Start the volume and check its status:
[root@GlusterFS-master ~]# gluster volume start gluster_data
volume start: gluster_data: success

[root@GlusterFS-master ~]# gluster volume info 
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster

Mounting the volume
Perform the GlusterFS mount on the client machine.
Note: since the brick added above lives on node 192.168.10.212, the client mounts the volume through 192.168.10.212.
[root@Client ~]# mkdir /opt/gfsmount
[root@Client ~]# mount -t glusterfs 192.168.10.212:gluster_data /opt/gfsmount
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       38G  4.3G   33G  12% /
devtmpfs                     1.9G     0  1.9G   0% /dev
tmpfs                        1.9G     0  1.9G   0% /dev/shm
tmpfs                        1.9G  8.6M  1.9G   1% /run
tmpfs                        1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1                   1014M  143M  872M  15% /boot
/dev/mapper/centos-home       19G   33M   19G   1% /home
tmpfs                        380M     0  380M   0% /run/user/0
overlay                       38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.212:gluster_data 1014M   33M  982M   4% /opt/gfsmount

As shown above, the GlusterFS storage is now mounted and its size is 1G (the size of the /data partition that holds the storage directory on 192.168.10.212).
Remember: whichever partition a node's storage directory lives on, that is the space the client gets to use after mounting.

Test writing data under the client mount point:
[root@Client gfsmount]# mkdir test
[root@Client gfsmount]# touch kevin
[root@Client gfsmount]# ls
kevin  test

The data shows up as expected in the storage directory on 192.168.10.212:
[root@GlusterFS-slave ~]# cd /data/gluster/
[root@GlusterFS-slave gluster]# ls
kevin  test

----------------------------------------------------------
Adding bricks (i.e. expanding the volume):
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.239:/data/gluster force
volume add-brick: success
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.204:/data/gluster force
volume add-brick: success
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster force
volume add-brick: success

As before, the gluster directory is created automatically under /data on these three nodes.

Check the volume status:
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

Back on the client, the mount point capacity has grown from 1G to 4G (the sum of the partitions holding the storage directories on the four nodes):
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       38G  4.3G   33G  12% /
devtmpfs                     1.9G     0  1.9G   0% /dev
tmpfs                        1.9G     0  1.9G   0% /dev/shm
tmpfs                        1.9G  8.6M  1.9G   1% /run
tmpfs                        1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1                   1014M  143M  872M  15% /boot
/dev/mapper/centos-home       19G   33M   19G   1% /home
tmpfs                        380M     0  380M   0% /run/user/0
overlay                       38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.212:gluster_data  4.0G  130M  3.9G   4% /opt/gfsmount

Summary of the observations above:
1) The capacity of the client mount point is the sum of the partitions holding the storage directories of the four cluster nodes.
2) A directory created under the client mount point appears in the storage directory (brick) of every node.
3) Files created inside such a directory are hash-distributed across the four bricks (see the layout-inspection sketch after this list).
4) Files created directly at the top level of the client mount point only end up on the brick of the mounted node (here, /data/gluster on 192.168.10.212); the other nodes do not receive them.
5) Removing a brick from the volume can cause data loss, because the removed node still holds part of the data.
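For reference, the hash distribution described in point 3 is driven by per-directory layout ranges that the DHT translator stores as an extended attribute on each brick. They can be inspected read-only with getfattr (requires the attr package); for example, for the kevin directory used in the tests below:

getfattr -n trusted.glusterfs.dht -e hex /data/gluster/kevin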

For example:
a) The client creates a directory kevin under the mount point
[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# mkdir kevin

Check on the nodes:
[root@GlusterFS-master ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave2 ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave3 ~]# ls /data/gluster/
kevin

b) The client creates files inside that directory under the mount point
[root@Client gfsmount]# for i in `seq -w 1 100`; do cp -rp /var/log/messages /opt/gfsmount/kevin/copy-test-$i; done
[root@Client gfsmount]# ls kevin/
copy-test-001  copy-test-014  copy-test-027  copy-test-040  copy-test-053  copy-test-066  copy-test-079  copy-test-092
copy-test-002  copy-test-015  copy-test-028  copy-test-041  copy-test-054  copy-test-067  copy-test-080  copy-test-093
copy-test-003  copy-test-016  copy-test-029  copy-test-042  copy-test-055  copy-test-068  copy-test-081  copy-test-094
copy-test-004  copy-test-017  copy-test-030  copy-test-043  copy-test-056  copy-test-069  copy-test-082  copy-test-095
copy-test-005  copy-test-018  copy-test-031  copy-test-044  copy-test-057  copy-test-070  copy-test-083  copy-test-096
copy-test-006  copy-test-019  copy-test-032  copy-test-045  copy-test-058  copy-test-071  copy-test-084  copy-test-097
copy-test-007  copy-test-020  copy-test-033  copy-test-046  copy-test-059  copy-test-072  copy-test-085  copy-test-098
copy-test-008  copy-test-021  copy-test-034  copy-test-047  copy-test-060  copy-test-073  copy-test-086  copy-test-099
copy-test-009  copy-test-022  copy-test-035  copy-test-048  copy-test-061  copy-test-074  copy-test-087  copy-test-100
copy-test-010  copy-test-023  copy-test-036  copy-test-049  copy-test-062  copy-test-075  copy-test-088
copy-test-011  copy-test-024  copy-test-037  copy-test-050  copy-test-063  copy-test-076  copy-test-089
copy-test-012  copy-test-025  copy-test-038  copy-test-051  copy-test-064  copy-test-077  copy-test-090
copy-test-013  copy-test-026  copy-test-039  copy-test-052  copy-test-065  copy-test-078  copy-test-091

Check on the nodes. The files have been hash-distributed across the bricks of all four nodes:
[root@GlusterFS-master ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-014  copy-test-036  copy-test-045  copy-test-056  copy-test-070  copy-test-075  copy-test-097
copy-test-009  copy-test-020  copy-test-042  copy-test-047  copy-test-062  copy-test-071  copy-test-080
copy-test-010  copy-test-027  copy-test-043  copy-test-053  copy-test-064  copy-test-072  copy-test-084
copy-test-013  copy-test-035  copy-test-044  copy-test-055  copy-test-068  copy-test-074  copy-test-092
[root@GlusterFS-master ~]# ll /data/gluster/kevin/|wc -l
30

[root@GlusterFS-slave ~]# ls /data/gluster/kevin/
copy-test-003  copy-test-018  copy-test-037  copy-test-050  copy-test-061  copy-test-069  copy-test-089
copy-test-005  copy-test-025  copy-test-040  copy-test-058  copy-test-066  copy-test-076  copy-test-091
copy-test-007  copy-test-026  copy-test-049  copy-test-059  copy-test-067  copy-test-085  copy-test-096
[root@GlusterFS-slave ~]# ll /data/gluster/kevin/|wc -l
22

[root@GlusterFS-slave2 gluster]# ls /data/gluster/kevin/
copy-test-004  copy-test-016  copy-test-024  copy-test-046  copy-test-065  copy-test-082  copy-test-088  copy-test-099
copy-test-006  copy-test-017  copy-test-029  copy-test-048  copy-test-078  copy-test-086  copy-test-093
copy-test-015  copy-test-023  copy-test-033  copy-test-052  copy-test-079  copy-test-087  copy-test-095
[root@GlusterFS-slave2 gluster]# ll /data/gluster/kevin/|wc -l
23

[root@GlusterFS-slave3 ~]# ls /data/gluster/kevin/
copy-test-001  copy-test-019  copy-test-030  copy-test-038  copy-test-054  copy-test-073  copy-test-090
copy-test-008  copy-test-021  copy-test-031  copy-test-039  copy-test-057  copy-test-077  copy-test-094
copy-test-011  copy-test-022  copy-test-032  copy-test-041  copy-test-060  copy-test-081  copy-test-098
copy-test-012  copy-test-028  copy-test-034  copy-test-051  copy-test-063  copy-test-083  copy-test-100
[root@GlusterFS-slave3 ~]# ll /data/gluster/kevin/|wc -l
29

c) Files created directly at the top level of the client mount point only end up on the brick of the mounted node
(here, /data/gluster on 192.168.10.212); the other nodes do not receive them.
[root@Client gfsmount]# for i in `seq -w 1 30`; do cp -rp /var/log/messages /opt/gfsmount/haha-test-$i; done
[root@Client gfsmount]# ls
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28

Check on the nodes: only 192.168.10.212 has the 30 files created above; the other three nodes have none of them.
[root@GlusterFS-master ~]# ls /data/gluster/
kevin

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28

[root@GlusterFS-slave2 ~]# ls /data/gluster/
kevin

[root@GlusterFS-slave3 ~]# ls /data/gluster/
kevin

d) Removing a brick causes some data loss, because the removed node still holds part of the data.
Remove one node's brick from the volume:
[root@GlusterFS-master ~]# gluster volume remove-brick gluster_data 192.168.10.220:/data/gluster
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
[root@GlusterFS-master ~]# gluster
gluster> volume remove-brick gluster_data 192.168.10.220:/data/gluster force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
gluster> 

Check the volume info; the mount point capacity drops as well (the space from 192.168.10.220 is gone):
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster

The client mount point capacity has dropped to 3G:
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
.......
192.168.10.212:gluster_data  3.0G   98M  2.9G   4% /opt/gfsmount

[root@Client ~]# ll /opt/gfsmount/kevin|wc -l
73

Some of the 100 files created earlier under the kevin directory at the client mount point are now missing: that part of the data lived on node
192.168.10.220, and since that node's brick has been removed from the volume, the data is lost.

Now pay attention to the following operation!
Adding the previously removed 192.168.10.220 brick back to the volume fails:
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster
volume add-brick: failed: Staging failed on 192.168.10.220. Error: /data/gluster is already part of a volume

The brick directory has to be deleted before it can be added back:
[root@GlusterFS-slave3 ~]# rm -rf /data/gluster
[root@GlusterFS-slave3 ~]# ll /data
total 0

After that, adding the brick succeeds:
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster 
volume add-brick: success
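For reference, the "already part of a volume" error above is raised because the old brick directory still carries GlusterFS extended attributes. Instead of wiping the whole directory, it is usually enough to clear those attributes and the hidden .glusterfs directory on the node that owns the brick; a hedged sketch:

setfattr -x trusted.glusterfs.volume-id /data/gluster
setfattr -x trusted.gfid /data/gluster
rm -rf /data/gluster/.glusterfs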

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

With the brick added back, the client mount point capacity rises to 4G again:
[root@Client ~]# df -h
.......
192.168.10.212:gluster_data  4.0G  131M  3.9G   4% /opt/gfsmount
---------------------------------------------------------------------------------

A rebalance redistributes existing files according to the volume's layout rules.

After running a rebalance, the newly added node receives its share of files and directories.
Note: in production, it is best to run a rebalance when the servers are otherwise idle.

As shown above, the storage directory on the newly re-added node 192.168.10.220 is empty at first:
[root@GlusterFS-slave3 ~]# ls /data/gluster/
[root@GlusterFS-slave3 ~]# 

[root@GlusterFS-master ~]# gluster volume rebalance gluster_data start
volume rebalance: gluster_data: success: Initiated rebalance on volume gluster_data.
Execute "gluster volume rebalance <volume-name> status" to check status.
ID: 49277bfa-df25-45c4-b1fb-cbcf8607a23e

Checking the storage directory on the newly re-added node 192.168.10.220 again, data has now appeared:
[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin

After the rebalance, the data written from the client mount point is distributed evenly across the nodes:
[root@Client ~]# ls /opt/gfsmount/
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28
[root@Client ~]# ls /opt/gfsmount/kevin/
copy-test-002  copy-test-014  copy-test-026  copy-test-043  copy-test-053  copy-test-066  copy-test-076  copy-test-088
copy-test-003  copy-test-015  copy-test-027  copy-test-044  copy-test-055  copy-test-067  copy-test-078  copy-test-089
copy-test-004  copy-test-016  copy-test-029  copy-test-045  copy-test-056  copy-test-068  copy-test-079  copy-test-091
copy-test-005  copy-test-017  copy-test-033  copy-test-046  copy-test-058  copy-test-069  copy-test-080  copy-test-092
copy-test-006  copy-test-018  copy-test-035  copy-test-047  copy-test-059  copy-test-070  copy-test-082  copy-test-093
copy-test-007  copy-test-020  copy-test-036  copy-test-048  copy-test-061  copy-test-071  copy-test-084  copy-test-095
copy-test-009  copy-test-023  copy-test-037  copy-test-049  copy-test-062  copy-test-072  copy-test-085  copy-test-096
copy-test-010  copy-test-024  copy-test-040  copy-test-050  copy-test-064  copy-test-074  copy-test-086  copy-test-097
copy-test-013  copy-test-025  copy-test-042  copy-test-052  copy-test-065  copy-test-075  copy-test-087  copy-test-099

[root@GlusterFS-master ~]# ls /data/gluster/
haha-test-06  haha-test-07  haha-test-15  haha-test-19  haha-test-22  haha-test-28  kevin
[root@GlusterFS-master ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-014  copy-test-036  copy-test-047  copy-test-059  copy-test-069  copy-test-080  copy-test-097
copy-test-003  copy-test-018  copy-test-037  copy-test-049  copy-test-061  copy-test-070  copy-test-084
copy-test-005  copy-test-020  copy-test-040  copy-test-050  copy-test-062  copy-test-071  copy-test-085
copy-test-007  copy-test-025  copy-test-042  copy-test-053  copy-test-064  copy-test-072  copy-test-089
copy-test-009  copy-test-026  copy-test-043  copy-test-055  copy-test-066  copy-test-074  copy-test-091
copy-test-010  copy-test-027  copy-test-044  copy-test-056  copy-test-067  copy-test-075  copy-test-092
copy-test-013  copy-test-035  copy-test-045  copy-test-058  copy-test-068  copy-test-076  copy-test-096

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-04  haha-test-08  haha-test-17  haha-test-20  haha-test-26  haha-test-27  haha-test-29  kevin
[root@GlusterFS-slave ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-017  copy-test-040  copy-test-050  copy-test-064  copy-test-074  copy-test-086  copy-test-097
copy-test-004  copy-test-020  copy-test-042  copy-test-052  copy-test-065  copy-test-075  copy-test-087  copy-test-099
copy-test-006  copy-test-023  copy-test-043  copy-test-053  copy-test-066  copy-test-076  copy-test-088
copy-test-009  copy-test-024  copy-test-044  copy-test-055  copy-test-067  copy-test-078  copy-test-089
copy-test-010  copy-test-027  copy-test-045  copy-test-056  copy-test-068  copy-test-079  copy-test-091
copy-test-013  copy-test-029  copy-test-046  copy-test-058  copy-test-069  copy-test-080  copy-test-092
copy-test-014  copy-test-033  copy-test-047  copy-test-059  copy-test-070  copy-test-082  copy-test-093
copy-test-015  copy-test-035  copy-test-048  copy-test-061  copy-test-071  copy-test-084  copy-test-095
copy-test-016  copy-test-036  copy-test-049  copy-test-062  copy-test-072  copy-test-085  copy-test-096

[root@GlusterFS-slave2 ~]# ls /data/gluster/
haha-test-02  haha-test-09  haha-test-12  haha-test-18  haha-test-23  kevin
haha-test-05  haha-test-10  haha-test-13  haha-test-21  haha-test-24
[root@GlusterFS-slave2 ~]# ls /data/gluster/kevin/
copy-test-004  copy-test-016  copy-test-024  copy-test-046  copy-test-065  copy-test-082  copy-test-088  copy-test-099
copy-test-006  copy-test-017  copy-test-029  copy-test-048  copy-test-078  copy-test-086  copy-test-093
copy-test-015  copy-test-023  copy-test-033  copy-test-052  copy-test-079  copy-test-087  copy-test-095

[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin
[root@GlusterFS-slave3 ~]# ls /data/gluster/kevin/

Check the rebalance status:
[root@GlusterFS-master ~]# gluster volume rebalance gluster_data status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               29       164.8KB           105             0             0            completed               1.00
                          192.168.10.212               43       248.7KB           131             0             0            completed               1.00
                          192.168.10.204                0        0Bytes           105             0             0            completed               0.00
                          192.168.10.220                0        0Bytes           105             0             0            completed               0.00
volume rebalance: gluster_data: success: 

--------------------------------------------------------------------------
Next, unmount the client mount point and stop the volume.
This is a risky operation, but even after the volume is deleted, the data underneath the bricks remains:

[root@Client ~]# umount /opt/gfsmount -lf

[root@GlusterFS-master ~]# gluster volume stop gluster_data
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gluster_data: success

[root@GlusterFS-master ~]# gluster volume delete gluster_data
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gluster_data: success

[root@GlusterFS-master ~]# gluster volume list
No volumes present in cluster

[root@GlusterFS-master ~]# gluster volume info
No volumes present

The gluster_data volume has been deleted, but the data in the storage directories on each node is still there:
[root@GlusterFS-master ~]# ls /data/gluster/
haha-test-06  haha-test-07  haha-test-15  haha-test-19  haha-test-22  haha-test-28  kevin

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-04  haha-test-08  haha-test-17  haha-test-20  haha-test-26  haha-test-27  haha-test-29  kevin

[root@GlusterFS-slave2 ~]# ls /data/gluster/
haha-test-02  haha-test-09  haha-test-12  haha-test-18  haha-test-23  kevin
haha-test-05  haha-test-10  haha-test-13  haha-test-21  haha-test-24

[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin

The client mount point no longer shows the data:
[root@Client ~]# ls /opt/gfsmount/
[root@Client ~]#

To clear the data, log in to each node and delete the contents under the brick:
[root@GlusterFS-master ~]# rm -rf /data/gluster
[root@GlusterFS-slave ~]# rm -rf /data/gluster
[root@GlusterFS-slave2 ~]# rm -rf /data/gluster
[root@GlusterFS-slave3 ~]# rm -rf /data/gluster

================ Creating a replicated volume and related management operations ================

First delete the storage directories used above on each node:
[root@GlusterFS-master ~]# rm -rf /data/gluster
[root@GlusterFS-slave ~]# rm -rf /data/gluster
[root@GlusterFS-slave2 ~]# rm -rf /data/gluster
[root@GlusterFS-slave3 ~]# rm -rf /data/gluster
 
[root@GlusterFS-master ~]# gluster volume info
No volumes present
[root@GlusterFS-master ~]# gluster volume status
No volumes present
[root@GlusterFS-master ~]# gluster volume list
No volumes present in cluster
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 3
 
Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)
 
Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)
 
Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)
 
Detach the nodes from the cluster:
[root@GlusterFS-master ~]# gluster
gluster> peer detach 192.168.10.220
peer detach: success
gluster> peer detach 192.168.10.204   
peer detach: success
gluster> peer detach 192.168.10.212 
peer detach: success
gluster> peer detach 192.168.10.239 
peer detach: failed: 192.168.10.239 is localhost
gluster>
 
[root@GlusterFS-slave ~]# gluster
peer detach: success
 
Check the cluster; no peer information remains:
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 0
----------------------------------------------------------------------
 
Now create a replicated volume, as follows.
First add a node to the trusted pool:
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.212
peer probe: success.
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 1
 
Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)
 
Create the replicated volume with replica 2, meaning two copies of the data:
[root@GlusterFS-master ~]# gluster volume create gluster_share replica 2 192.168.10.239:/data/gluster 192.168.10.212:/data/gluster force
volume create: gluster_share: success: please start the volume to access data
 
[root@GlusterFS-master ~]# gluster volume info
  
Volume Name: gluster_share
Type: Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
 
The storage directory /data/gluster is generated automatically on both nodes:
[root@GlusterFS-master ~]# ll /data/
total 0
drwxr-xr-x. 2 root root 6 Apr 10 12:58 gluster
 
[root@GlusterFS-slave ~]# ll /data/
total 0
drwxr-xr-x. 2 root root 6 Apr 10 12:59 gluster
 
Start the volume and check its status:
[root@GlusterFS-master ~]# gluster volume list
gluster_share
 
[root@GlusterFS-master ~]# gluster volume start gluster_share
volume start: gluster_share: success
 
[root@GlusterFS-master ~]# gluster volume info
  
Volume Name: gluster_share
Type: Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
 
After mounting on the client and creating files, the capacity is only that of a single node, because this is a replicated volume:
[root@Client ~]# mount -t glusterfs 192.168.10.239:gluster_share /opt/gfsmount/
[root@Client ~]# df -h
.......
192.168.10.239:gluster_share 1014M   33M  982M   4% /opt/gfsmount
 
Write some data at the client mount point:
[root@Client gfsmount]# mkdir test
[root@Client gfsmount]# touch kevin grace
 
Because the two nodes are replicas of each other, their contents are identical:
[root@GlusterFS-master ~]# ls /data/gluster/
grace  kevin  test
 
[root@GlusterFS-slave ~]# ls /data/gluster/
grace  kevin  test
 
------------------------------------------------------------------
Simulating accidental deletion of volume metadata
Delete the volume metadata, which lives under the following path:
[root@GlusterFS-master ~]# ls /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/
bricks                                         gluster_share-rebalance.vol  rbstate
cksum                                          gluster_share.tcp-fuse.vol   run
gluster_share.192.168.10.212.data-gluster.vol  info                         snapd.info
gluster_share.192.168.10.239.data-gluster.vol  node_state.info              trusted-gluster_share.tcp-fuse.vol
 
[root@GlusterFS-slave ~]# ls /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/
bricks                                         gluster_share-rebalance.vol  rbstate
cksum                                          gluster_share.tcp-fuse.vol   run
gluster_share.192.168.10.212.data-gluster.vol  info                         snapd.info
gluster_share.192.168.10.239.data-gluster.vol  node_state.info              trusted-gluster_share.tcp-fuse.vol
 
Here we delete the volume metadata on node 192.168.10.212 (GlusterFS-slave) as an example:
[root@GlusterFS-slave ~]# rm -rf /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/
 
[root@GlusterFS-slave ~]# ls /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/
ls: cannot access /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/: No such file or directory
 
Restoring the volume metadata
Sync the volume metadata back: the metadata on 192.168.10.212 was deleted, but the copy on its replica node 192.168.10.239 is intact.
In the command below, "all" means all volume metadata is synced; the volume name gluster_share could be given instead.
 
Special note: this volume metadata should be backed up regularly (a backup sketch follows at the end of this subsection)!
[root@GlusterFS-master ~]# gluster volume sync 192.168.10.239 all     //a node cannot sync from itself
Sync volume may make data inaccessible while the sync is in progress. Do you want to continue? (y/n) y
volume sync: failed: sync from localhost not allowed
 
[root@GlusterFS-slave ~]# gluster volume sync 192.168.10.239 all      //so run it on another node in the cluster
Sync volume may make data inaccessible while the sync is in progress. Do you want to continue? (y/n) y
volume sync: success
 
Checking the volume metadata on node 192.168.10.212 again, the deleted metadata has been restored:
[root@GlusterFS-slave ~]# ls /usr/local/glusterfs/var/lib/glusterd/vols/gluster_share/
bricks                                         gluster_share-rebalance.vol  rbstate
cksum                                          gluster_share.tcp-fuse.vol   run
gluster_share.192.168.10.212.data-gluster.vol  info                         snapd.info
gluster_share.192.168.10.239.data-gluster.vol  node_state.info              trusted-gluster_share.tcp-fuse.vol
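
As noted above, the volume metadata should be backed up regularly on every node. A minimal sketch, assuming the installation prefix used in this environment and a hypothetical backup directory /backup:

mkdir -p /backup
tar czf /backup/glusterd-vols-$(date +%F).tar.gz /usr/local/glusterfs/var/lib/glusterd/vols/
# run it daily from cron (the % sign must be escaped in crontab entries)
echo '0 2 * * * root tar czf /backup/glusterd-vols-$(date +\%F).tar.gz /usr/local/glusterfs/var/lib/glusterd/vols/' >> /etc/crontab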

------------------------------------------------------------------
Adding more nodes
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.204
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.220
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer status 
Number of Peers: 3

Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)

Check the volume info:
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_share
Type: Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster

Expanding the volume (i.e. adding the bricks of the two newly added nodes to the gluster_share volume above)

First unmount the client:
[root@Client ~]# umount /opt/gfsmount/

Then stop the gluster_share volume:
[root@GlusterFS-master ~]# gluster volume stop gluster_share
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gluster_share: success
[root@GlusterFS-master ~]# gluster volume status gluster_share
Volume gluster_share is not started

Then expand the volume:
[root@GlusterFS-master ~]# gluster volume add-brick gluster_share 192.168.10.204:/data/gluster 192.168.10.220:/data/gluster force
volume add-brick: success

Check the volume info again; the new bricks have been added:
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

Then start the gluster_share volume again:
[root@GlusterFS-master ~]# gluster volume start gluster_share
volume start: gluster_share: success

[root@GlusterFS-master ~]# gluster volume status gluster_share
Status of volume: gluster_share
Gluster process                     Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.10.239:/data/gluster          49155   Y   8907
Brick 192.168.10.212:/data/gluster          49156   Y   4049
Brick 192.168.10.204:/data/gluster          49156   Y   11447
Brick 192.168.10.220:/data/gluster          49157   Y   16714
NFS Server on localhost                 N/A N   N/A
Self-heal Daemon on localhost               N/A N   N/A
NFS Server on 192.168.10.212                N/A N   N/A
Self-heal Daemon on 192.168.10.212          N/A N   N/A
NFS Server on 192.168.10.220                N/A N   N/A
Self-heal Daemon on 192.168.10.220          N/A N   N/A
NFS Server on 192.168.10.204                N/A N   N/A
Self-heal Daemon on 192.168.10.204          N/A N   N/A
 
Task Status of Volume gluster_share
------------------------------------------------------------------------------
There are no active volume tasks

As shown above, the Online column for all four bricks is "Y".

Next, remount the storage on the client:
[root@Client ~]# mount -t glusterfs 192.168.10.239:gluster_share /opt/gfsmount/
[root@Client ~]# df -h
.......
192.168.10.239:gluster_share  2.0G   65M  2.0G   4% /opt/gfsmount

The client mount point capacity is now 2G, i.e. the combined size of the two replica pairs.

Writing test data from the client shows that:
1) The storage directories of the nodes newly added to the gluster_share volume do not receive any of the pre-existing data; they only hold data written after the expansion.
2) Because the volume was created with replica 2, without a rebalance the files written from the client only go to the original replica pair, i.e. the two newly added
   nodes receive no file data. Directories created on the client do appear on all four nodes, but files only land on the original two nodes.

Next, run a rebalance. A rebalance requires the volume to have at least two bricks:
[root@GlusterFS-master ~]# gluster volume rebalance gluster_share start
volume rebalance: gluster_share: success: Initiated rebalance on volume gluster_share.
Execute "gluster volume rebalance <volume-name> status" to check status.
ID: 26035833-3c20-4822-b065-7a5e15d30b85

[root@GlusterFS-master ~]# gluster volume rebalance gluster_share status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              103        10.3MB           305             0             0            completed               1.00
                          192.168.10.212                0        0Bytes           210             0             0            completed               0.00
                          192.168.10.204               11        58.9KB           213             0             0            completed               0.00
                          192.168.10.220                0        0Bytes           210             0             0            completed               0.00
volume rebalance: gluster_share: success:

Then delete the data on the client and write again to test:
[root@Client ~]# rm -rf /opt/gfsmount/*
[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# ls
[root@Client gfsmount]# mkdir kevin

All four nodes have the kevin directory:
[root@GlusterFS-master ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave2 ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave3 ~]# ls /data/gluster/
kevin

Then write a batch of files:
[root@Client gfsmount]# for i in `seq -w 1 100`; do cp -rp /var/log/messages /opt/gfsmount/haha-test-$i; done
[root@Client gfsmount]# ls
haha-test-001  haha-test-014  haha-test-027  haha-test-040  haha-test-053  haha-test-066  haha-test-079  haha-test-092
haha-test-002  haha-test-015  haha-test-028  haha-test-041  haha-test-054  haha-test-067  haha-test-080  haha-test-093
haha-test-003  haha-test-016  haha-test-029  haha-test-042  haha-test-055  haha-test-068  haha-test-081  haha-test-094
haha-test-004  haha-test-017  haha-test-030  haha-test-043  haha-test-056  haha-test-069  haha-test-082  haha-test-095
haha-test-005  haha-test-018  haha-test-031  haha-test-044  haha-test-057  haha-test-070  haha-test-083  haha-test-096
haha-test-006  haha-test-019  haha-test-032  haha-test-045  haha-test-058  haha-test-071  haha-test-084  haha-test-097
haha-test-007  haha-test-020  haha-test-033  haha-test-046  haha-test-059  haha-test-072  haha-test-085  haha-test-098
haha-test-008  haha-test-021  haha-test-034  haha-test-047  haha-test-060  haha-test-073  haha-test-086  haha-test-099
haha-test-009  haha-test-022  haha-test-035  haha-test-048  haha-test-061  haha-test-074  haha-test-087  haha-test-100
haha-test-010  haha-test-023  haha-test-036  haha-test-049  haha-test-062  haha-test-075  haha-test-088  kevin
haha-test-011  haha-test-024  haha-test-037  haha-test-050  haha-test-063  haha-test-076  haha-test-089
haha-test-012  haha-test-025  haha-test-038  haha-test-051  haha-test-064  haha-test-077  haha-test-090
haha-test-013  haha-test-026  haha-test-039  haha-test-052  haha-test-065  haha-test-078  haha-test-091

The files are split between the two replica pairs: one set on 192.168.10.239 and 192.168.10.212 (replicas of each other), the other on 192.168.10.204 and 192.168.10.220:
[root@GlusterFS-master ~]# ls /data/gluster/
haha-test-004  haha-test-015  haha-test-028  haha-test-040  haha-test-050  haha-test-070  haha-test-080  haha-test-091
haha-test-007  haha-test-017  haha-test-029  haha-test-042  haha-test-051  haha-test-072  haha-test-081  haha-test-093
haha-test-009  haha-test-019  haha-test-032  haha-test-043  haha-test-052  haha-test-073  haha-test-084  haha-test-094
haha-test-010  haha-test-020  haha-test-033  haha-test-044  haha-test-056  haha-test-075  haha-test-087  haha-test-095
haha-test-011  haha-test-021  haha-test-034  haha-test-047  haha-test-059  haha-test-076  haha-test-088  haha-test-097
haha-test-012  haha-test-022  haha-test-036  haha-test-048  haha-test-061  haha-test-078  haha-test-089  haha-test-100
haha-test-014  haha-test-027  haha-test-037  haha-test-049  haha-test-069  haha-test-079  haha-test-090  kevin

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-004  haha-test-015  haha-test-028  haha-test-040  haha-test-050  haha-test-070  haha-test-080  haha-test-091
haha-test-007  haha-test-017  haha-test-029  haha-test-042  haha-test-051  haha-test-072  haha-test-081  haha-test-093
haha-test-009  haha-test-019  haha-test-032  haha-test-043  haha-test-052  haha-test-073  haha-test-084  haha-test-094
haha-test-010  haha-test-020  haha-test-033  haha-test-044  haha-test-056  haha-test-075  haha-test-087  haha-test-095
haha-test-011  haha-test-021  haha-test-034  haha-test-047  haha-test-059  haha-test-076  haha-test-088  haha-test-097
haha-test-012  haha-test-022  haha-test-036  haha-test-048  haha-test-061  haha-test-078  haha-test-089  haha-test-100
haha-test-014  haha-test-027  haha-test-037  haha-test-049  haha-test-069  haha-test-079  haha-test-090  kevin

[root@GlusterFS-slave2 ~]# ls /data/gluster/
haha-test-001  haha-test-013  haha-test-026  haha-test-041  haha-test-057  haha-test-065  haha-test-077  haha-test-096
haha-test-002  haha-test-016  haha-test-030  haha-test-045  haha-test-058  haha-test-066  haha-test-082  haha-test-098
haha-test-003  haha-test-018  haha-test-031  haha-test-046  haha-test-060  haha-test-067  haha-test-083  haha-test-099
haha-test-005  haha-test-023  haha-test-035  haha-test-053  haha-test-062  haha-test-068  haha-test-085  kevin
haha-test-006  haha-test-024  haha-test-038  haha-test-054  haha-test-063  haha-test-071  haha-test-086
haha-test-008  haha-test-025  haha-test-039  haha-test-055  haha-test-064  haha-test-074  haha-test-092

[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-001  haha-test-013  haha-test-026  haha-test-041  haha-test-057  haha-test-065  haha-test-077  haha-test-096
haha-test-002  haha-test-016  haha-test-030  haha-test-045  haha-test-058  haha-test-066  haha-test-082  haha-test-098
haha-test-003  haha-test-018  haha-test-031  haha-test-046  haha-test-060  haha-test-067  haha-test-083  haha-test-099
haha-test-005  haha-test-023  haha-test-035  haha-test-053  haha-test-062  haha-test-068  haha-test-085  kevin
haha-test-006  haha-test-024  haha-test-038  haha-test-054  haha-test-063  haha-test-071  haha-test-086
haha-test-008  haha-test-025  haha-test-039  haha-test-055  haha-test-064  haha-test-074  haha-test-092

Likewise, files created inside a directory under the client mount point (e.g. /opt/gfsmount/kevin) are split between the two replica pairs:
192.168.10.239/192.168.10.212 (replicas of each other) and 192.168.10.204/192.168.10.220.

================ Restricting GlusterFS access to trusted client IPs ================

Allow access only from 192.168.1.*:
[root@GlusterFS-master ~]# gluster volume set gluster_share auth.allow 192.168.1.*
volume set: success

Check the volume info:
[root@GlusterFS-master ~]# gluster volume info
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster
Options Reconfigured:
auth.allow: 192.168.1.*

Note the last line of the volume info above: only clients with IPs in the 192.168.1.* range are allowed to mount.

Then try to mount from the client at 192.168.10.213; the mount fails:
[root@Client ~]# mount -t glusterfs 192.168.10.239:gluster_share /opt/gfsmount/
[root@Client ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   38G  4.3G   33G  12% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   19G   33M   19G   1% /home
tmpfs                    380M     0  380M   0% /run/user/0
overlay                   38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                       64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm

Next, change the authorization setting:
[root@GlusterFS-master ~]# gluster volume set gluster_share auth.allow 192.168.10.*
volume set: success
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster
Options Reconfigured:
auth.allow: 192.168.10.*

Then test mounting again from the 192.168.10.213 client; this time it succeeds:
[root@Client ~]# mount -t glusterfs 192.168.10.239:gluster_share /opt/gfsmount/
[root@Client ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/centos-root        38G  4.3G   33G  12% /
devtmpfs                      1.9G     0  1.9G   0% /dev
tmpfs                         1.9G     0  1.9G   0% /dev/shm
tmpfs                         1.9G  8.6M  1.9G   1% /run
tmpfs                         1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1                    1014M  143M  872M  15% /boot
/dev/mapper/centos-home        19G   33M   19G   1% /home
tmpfs                         380M     0  380M   0% /run/user/0
overlay                        38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                            64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.239:gluster_share  2.0G   67M  2.0G   4% /opt/gfsmount
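
For reference, auth.allow also accepts a comma-separated list of addresses or wildcards, and there is a matching auth.reject option. A hedged sketch (the addresses below are only examples):

gluster volume set gluster_share auth.allow 192.168.10.213,192.168.10.214
gluster volume set gluster_share auth.reject 192.168.10.100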

================ GlusterFS performance testing tools ================

Start the iperf server on one node, here on GlusterFS-master:
[root@GlusterFS-master ~]# yum install -y iperf
[root@GlusterFS-master ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

The client connects to the GlusterFS-master node to test network throughput:
[root@Client ~]# yum install -y iperf
[root@Client ~]# iperf -c 192.168.10.239
------------------------------------------------------------
Client connecting to 192.168.10.239, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.213 port 55276 connected with 192.168.10.239 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  19.5 GBytes  16.7 Gbits/sec


The node server shows the same information. Because this is a virtual-machine environment, the numbers are inflated.
[root@GlusterFS-master ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55276
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  19.5 GBytes  16.7 Gbits/sec


If more load is needed, the client can send with multiple parallel streams using the -P option; the client-side results are as follows:
[root@Client ~]# iperf -c 192.168.10.239  -P 10
------------------------------------------------------------
Client connecting to 192.168.10.239, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 12] local 192.168.10.213 port 55296 connected with 192.168.10.239 port 5001
[  6] local 192.168.10.213 port 55284 connected with 192.168.10.239 port 5001
[  7] local 192.168.10.213 port 55286 connected with 192.168.10.239 port 5001
[  3] local 192.168.10.213 port 55278 connected with 192.168.10.239 port 5001
[  5] local 192.168.10.213 port 55282 connected with 192.168.10.239 port 5001
[  4] local 192.168.10.213 port 55280 connected with 192.168.10.239 port 5001
[  8] local 192.168.10.213 port 55288 connected with 192.168.10.239 port 5001
[  9] local 192.168.10.213 port 55290 connected with 192.168.10.239 port 5001
[ 11] local 192.168.10.213 port 55294 connected with 192.168.10.239 port 5001
[ 10] local 192.168.10.213 port 55292 connected with 192.168.10.239 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 12]  0.0-10.0 sec  3.18 GBytes  2.73 Gbits/sec
[ 11]  0.0-10.0 sec   616 MBytes   516 Mbits/sec
[ 10]  0.0-10.0 sec  3.26 GBytes  2.80 Gbits/sec
[  6]  0.0-10.0 sec  3.18 GBytes  2.72 Gbits/sec
[  7]  0.0-10.0 sec  3.18 GBytes  2.73 Gbits/sec
[  3]  0.0-10.0 sec   616 MBytes   516 Mbits/sec
[  4]  0.0-10.0 sec  3.07 GBytes  2.63 Gbits/sec
[  8]  0.0-10.0 sec  2.90 GBytes  2.49 Gbits/sec
[  9]  0.0-10.0 sec  2.89 GBytes  2.48 Gbits/sec
[  5]  0.0-10.0 sec  3.06 GBytes  2.62 Gbits/sec
[SUM]  0.0-10.0 sec  25.9 GBytes  22.2 Gbits/sec


Output on the node server:
[root@GlusterFS-master ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55276
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  19.5 GBytes  16.7 Gbits/sec
[  4] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55278
[  5] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55280
[  7] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55284
[  6] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55282
[  8] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55286
[  9] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55288
[ 10] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55290
[ 13] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55294
[ 11] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55292
[ 12] local 192.168.10.239 port 5001 connected with 192.168.10.213 port 55296
[ 13]  0.0-10.5 sec   616 MBytes   491 Mbits/sec
[  4]  0.0-10.5 sec   616 MBytes   491 Mbits/sec
[  5]  0.0-10.5 sec  3.07 GBytes  2.50 Gbits/sec
[  7]  0.0-10.5 sec  3.18 GBytes  2.59 Gbits/sec
[  8]  0.0-10.5 sec  3.18 GBytes  2.59 Gbits/sec
[  9]  0.0-10.5 sec  2.90 GBytes  2.37 Gbits/sec
[  6]  0.0-10.5 sec  3.06 GBytes  2.49 Gbits/sec
[ 10]  0.0-10.5 sec  2.89 GBytes  2.35 Gbits/sec
[ 11]  0.0-10.5 sec  3.26 GBytes  2.65 Gbits/sec
[ 12]  0.0-10.5 sec  3.18 GBytes  2.59 Gbits/sec
[SUM]  0.0-10.5 sec  25.9 GBytes  21.1 Gbits/sec

-----------------------------------------------------
dd test: client write and read speed

[root@Client ~]# mount -t glusterfs 192.168.10.239:gluster_share /opt/gfsmount

[root@Client ~]# df -h
.......
192.168.10.239:gluster_share  2.0G   68M  2.0G   4% /opt/gfsmount

Test write speed (writing a 500M file):
[root@Client ~]# dd if=/dev/zero of=/opt/gfsmount/grace bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.981631 s, 534 MB/s

[root@Client ~]# du -sh /opt/gfsmount/grace 
500M  /opt/gfsmount/grace

Test read speed:
[root@Client ~]# dd if=/opt/gfsmount/grace of=/dev/null bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.00698 s, 521 MB/s

Test again; the virtual-machine results are not very stable:
[root@Client ~]# dd if=/dev/zero of=/opt/gfsmount/grace1 bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.15882 s, 452 MB/s

[root@Client ~]# dd if=/opt/gfsmount/grace1 of=/dev/null bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.0682 s, 491 MB/s
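
Note that a plain dd write is largely absorbed by the page cache, which inflates the numbers, especially on a VM. Adding conv=fdatasync to the write (and direct I/O to the read) gives figures closer to the real storage throughput; a sketch, using a hypothetical file name grace2:

dd if=/dev/zero of=/opt/gfsmount/grace2 bs=1M count=500 conv=fdatasync
dd if=/opt/gfsmount/grace2 of=/dev/null bs=1M count=500 iflag=direct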

================ Handling typical GlusterFS cluster failures =================

1) Inconsistent data on a replicated volume
Symptom: the data on a two-replica volume becomes inconsistent.
Simulation: delete the data from one of the bricks.
Fix: access the files to trigger self-heal:

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster
Options Reconfigured:

On the client:
[root@Client ~]# df -h
.......
192.168.10.239:gluster_share  2.0G  1.1G  961M  53% /opt/gfsmount
[root@Client ~]# rm -rf /opt/gfsmount/*
[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# touch a b c d e f g 

The data written by the client is split into two replica sets across the four nodes:
[root@GlusterFS-master ~]# ls /data/gluster/
a  b  c  e
[root@GlusterFS-slave ~]# ls /data/gluster/
a  b  c  e

[root@GlusterFS-slave2 ~]# ls /data/gluster/
d  f  g
[root@GlusterFS-slave3 ~]# ls /data/gluster/
d  f  g

Simulate the problem:
Delete files on the GlusterFS-slave2 machine:
[root@GlusterFS-slave2 ~]# cd /data/gluster/
[root@GlusterFS-slave2 gluster]# ls
d  f  g
[root@GlusterFS-slave2 gluster]# rm -rf d 
[root@GlusterFS-slave2 gluster]# rm -rf f
[root@GlusterFS-slave2 gluster]# ls
g

GlusterFS-slave3, the replica of GlusterFS-slave2, still has the deleted data:
[root@GlusterFS-slave3 ~]# cd /data/gluster/
[root@GlusterFS-slave3 gluster]# ls
d  f  g

Accessing the files from the client triggers automatic repair:
[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# ls
a  b  c  d  e  f  g
[root@Client gfsmount]# cat d
[root@Client gfsmount]# cat f

Checking the GlusterFS-slave2 node again, the deleted data has been healed automatically:
[root@GlusterFS-slave2 ~]# ls /data/gluster/
d  f  g
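
Besides touching the files from the client mount, the self-heal daemon can be asked to scan and repair the volume explicitly with the standard heal commands; a sketch against the volume used here:

gluster volume heal gluster_share full        # crawl the bricks and repair everything that needs healing
gluster volume heal gluster_share info        # list entries that still need healing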

2) Incorrect node configuration in the GlusterFS cluster
Failure simulation:
Delete part of the configuration on server2.
Configuration location: /usr/local/glusterfs/var/lib/glusterd

Fix:
Trigger repair by syncing the configuration through the gluster tool:
gluster volume sync server1 all

Restoring a replicated-volume brick
Symptom: one brick of a two-replica volume is damaged.
Recovery procedure:
a) Recreate the failed brick directory
b) Set the extended attributes, copying them from the surviving replica brick, e.g.:
# setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /data2
# setfattr -n trusted.glusterfs.dht -v 0x000000010000000000000000ffffffff /data2
# setfattr -n trusted.glusterfs.volume-id -v 0xcc51d546c0af4215a72077ad9378c2ac /data2
(replace the -v values with the ones from your own volume)
c) Restart the glusterd service
d) Trigger data self-heal:
# find /data/glusterd -type f -print0 | xargs -0 head -c1 >/dev/null

Simulate deleting a brick:
[root@GlusterFS-master ~]# gluster volume info
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster
Options Reconfigured:

Delete the brick data on the GlusterFS-slave node:
[root@GlusterFS-slave ~]# ls /data/gluster/
a  b  c  e
[root@GlusterFS-slave ~]# rm -rf /data/gluster
[root@GlusterFS-slave ~]# ll /data/gluster
ls: cannot access /data/gluster: No such file or directory

Next, on GlusterFS-master, the replica node of GlusterFS-slave, fetch the extended attributes (use "yum search getfattr" to find which package provides the getfattr tool):
[root@GlusterFS-master ~]# yum install -y attr.x86_64
[root@GlusterFS-master ~]# cd /data/
[root@GlusterFS-master data]# getfattr -d -m . -e hex gluster/
# file: gluster/
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffc25
trusted.glusterfs.volume-id=0xa9f989bd7edd4089836ad9f742b8d37a

Note that the operations below use the attribute values obtained above.

Recreate the failed brick directory
For the recovery, the extended attribute values are copied from those read on GlusterFS-master (note the volume-id above: 0xa9f989bd7edd4089836ad9f742b8d37a); the order in which they are set does not matter.
[root@GlusterFS-slave ~]# mkdir /data/gluster
[root@GlusterFS-slave ~]# yum install -y attr.x86_64
[root@GlusterFS-slave ~]# getfattr -d -m . -e hex /data/gluster
getfattr: Removing leading '/' from absolute path names
# file: data/gluster
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
[root@GlusterFS-slave ~]# setfattr -n trusted.glusterfs.volume-id -v 0xa9f989bd7edd4089836ad9f742b8d37a /data/gluster
[root@GlusterFS-slave ~]# getfattr -d -m . -e hex /data/gluster
getfattr: Removing leading '/' from absolute path names
# file: data/gluster
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.glusterfs.volume-id=0xa9f989bd7edd4089836ad9f742b8d37a
[root@GlusterFS-slave ~]# setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /data/gluster
[root@GlusterFS-slave ~]# setfattr -n trusted.glusterfs.dht -v 0x0000000100000000000000007ffffc25 /data/gluster
[root@GlusterFS-slave ~]# getfattr -d -m . -e hex /data/gluster
getfattr: Removing leading '/' from absolute path names
# file: data/gluster
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffc25
trusted.glusterfs.volume-id=0xa9f989bd7edd4089836ad9f742b8d37a

Restart the gluster service:
[root@GlusterFS-slave ~]# ps -ef|grep gluster
root      4909     1  0 15:12 ?        00:00:01 /usr/local/glusterfs/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /usr/local/glusterfs/var/lib/glusterd/glustershd/run/glustershd.pid -l /usr/local/glusterfs/var/log/glusterfs/glustershd.log -S /var/run/f9f65f2dbb0c193ecab167839d75699e.socket --xlator-option *replicate*.node-uuid=f8e69297-4690-488e-b765-c1c404810d6a
root      5069  5044  0 17:31 pts/0    00:00:00 grep --color=auto gluster
root     32450     1  0 Apr08 ?        00:00:26 /usr/local/glusterfs/sbin/glusterd
[root@GlusterFS-slave ~]# ps -ef|grep gluster|awk '{print $2}'|xargs kill -9
kill: sending signal to 5071 failed: No such process
[root@GlusterFS-slave ~]# ps -ef|grep gluster
root      5078  5044  0 17:32 pts/0    00:00:00 grep --color=auto gluster
[root@GlusterFS-slave ~]# /usr/local/glusterfs/sbin/glusterd
[root@GlusterFS-slave ~]# ps -ef|grep gluster
root      5080     1 14 17:32 ?        00:00:00 /usr/local/glusterfs/sbin/glusterd
root      5212  5044  0 17:32 pts/0    00:00:00 grep --color=auto gluster

[root@GlusterFS-slave ~]# ls /data/gluster/
[root@GlusterFS-slave ~]# 

Data recovery can be attempted with the following three methods:
a) After restarting the gluster service, check the data on the GlusterFS-slave node. If it has not come back, try the second method.
b) At the client mount point, cat the deleted files or write new data to trigger self-heal. If the data is still not back, try the third method.
c) Restart the gluster_share volume:
[root@GlusterFS-master ~]# gluster volume stop gluster_share
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gluster_share: success

[root@GlusterFS-master ~]# gluster volume start gluster_share
volume start: gluster_share: success

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_share
Type: Distributed-Replicate
Volume ID: a9f989bd-7edd-4089-836a-d9f742b8d37a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/data/gluster
Brick2: 192.168.10.212:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

[root@GlusterFS-master ~]# gluster volume status gluster_share
Status of volume: gluster_share
Gluster process           Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.10.239:/data/gluster      49155 Y 15508
Brick 192.168.10.212:/data/gluster      49156 Y 5249
Brick 192.168.10.204:/data/gluster      49156 Y 12162
Brick 192.168.10.220:/data/gluster      49157 Y 17402
NFS Server on localhost         N/A N N/A
Self-heal Daemon on localhost       N/A Y 15527
NFS Server on 192.168.10.204        N/A N N/A
Self-heal Daemon on 192.168.10.204      N/A Y 12181
NFS Server on 192.168.10.220        N/A N N/A
Self-heal Daemon on 192.168.10.220      N/A Y 17421
NFS Server on 192.168.10.212        N/A N N/A
Self-heal Daemon on 192.168.10.212      N/A Y 5268
 
Task Status of Volume gluster_share
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 26035833-3c20-4822-b065-7a5e15d30b85
Status               : completed 

[root@GlusterFS-master ~]# gluster volume heal gluster_share info
Brick GlusterFS-master:/data/gluster/
Number of entries: 0

Brick GlusterFS-slave:/data/gluster/
Number of entries: 0

Brick GlusterFS-slave2:/data/gluster/
Number of entries: 0

Brick GlusterFS-slave3:/data/gluster/
Number of entries: 0

Check whether the data has been restored:
[root@GlusterFS-slave ~]# cd /data/gluster/
[root@GlusterFS-slave gluster]# ls
a  b  c

Note:
As shown above, after new data is written at the client mount point, the data on the GlusterFS-slave node is restored. If the restored data turns out to be
inconsistent with its replica on GlusterFS-master, simply cat the inconsistent files under the client mount point; this triggers the self-heal mechanism
and GlusterFS-slave automatically recovers the inconsistent data.

================ Essentials of tuning a GlusterFS cluster for production =================

Key system considerations
1) Performance requirements
2) Read/Write mix
3) Throughput / IOPS / availability
4) Workload
5) What application?
6) Large files?
7) Small files?
8) Requirements beyond throughput?

System configuration
1) Choose the volume type appropriate to the workload
2) Volume types:
3) DHT – high performance, no redundancy
4) AFR – high availability, good read performance
5) STP (stripe) – high concurrent reads, poor write performance, no redundancy
6) Protocol / performance:
7) Native – best performance
8) NFS – can perform best for certain applications
9) CIFS – Windows platforms only
10) Data flow:
11) Data paths differ between access protocols
12) The hash + replica (distributed-replicated) model is a must for production

System hardware configuration
1) Node and cluster configuration
2) More CPUs – more concurrent threads
3) More memory – larger cache
4) More network ports – higher throughput
5) Dedicated back-end network for intra-cluster communication
6) NFS/CIFS access needs a dedicated back-end network
7) At least 10GbE is recommended
8) The native protocol is used for inter-node communication

Performance lessons
1) GlusterFS performance depends heavily on the hardware
2) Size the hardware based on a thorough understanding of the application
3) The default parameters target general-purpose workloads
4) GlusterFS exposes several performance tuning parameters
5) When chasing performance problems, rule out disk and network faults first
6) Grouping 6 to 8 disks into one RAID set is recommended

System scale and architecture
1) Performance is, in theory, determined by the hardware configuration
2) CPU / memory / disk / network
3) System scale is driven by performance and capacity requirements
4) 2U/4U storage servers and JBODs are well suited for building bricks
5) Three typical deployment profiles:
6) Capacity-oriented applications
7) 2U/4U storage servers + multiple JBODs
8) Low CPU/RAM/network requirements
9) Mixed performance and capacity applications
10) 2U/4U storage servers + a few JBODs
11) High CPU/RAM, low network
12) Performance-oriented applications
13) 1U/2U storage servers (no JBOD)
14) High CPU/RAM, fast disks/network

System tuning
1) Key tuning parameters (applied with gluster volume set; see the sketch after this list)
2) performance.write-behind-window-size 65535 (bytes)
3) performance.cache-refresh-timeout 1 (seconds)
4) performance.cache-size 1073741824 (bytes)
5) performance.read-ahead off (1GbE only)
6) performance.io-thread-count 24 (number of CPU cores)
7) performance.client-io-threads on (client side)
8) performance.write-behind on
9) performance.flush-behind on
10) cluster.stripe-block-size 4MB (default 128KB)
11) nfs.disable off (enabled by default)
12) The default settings suit mixed workloads
13) Tune per application
14) Understand the hardware/firmware configuration and its impact on performance
15) e.g. CPU frequency, InfiniBand, 10GbE, TCP offload
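
As referenced in item 1, these options are applied per volume with gluster volume set; a minimal sketch against the gluster_share volume used above, using the values listed (tune them to your own workload):

gluster volume set gluster_share performance.cache-size 1073741824
gluster volume set gluster_share performance.cache-refresh-timeout 1
gluster volume set gluster_share performance.io-thread-count 24
gluster volume set gluster_share performance.write-behind on
gluster volume set gluster_share performance.flush-behind on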
 
KVM optimization
1) Use the QEMU-GlusterFS (libgfapi) integration
2) gluster volume set <volume> group virt
3) tuned-adm profile rhs-virtualization
4) KVM host: tuned-adm profile virtual-host
5) Keep images and application data on separate volumes
6) No more than 2 KVM hosts per gluster node (16 guests/host)
7) To improve response time:
8) Reduce /sys/block/vda/queue/nr_requests (see the sysfs sketch after this list)
9) Server/Guest: 128/8 (defaults 256/128)
10) To improve read bandwidth:
11) Increase /sys/block/vda/queue/read_ahead_kb
12) VM readahead: 4096 (default 128)
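
The two block-queue settings in items 8 and 11 are plain sysfs knobs. A sketch of setting them inside a guest whose virtio disk is vda (values taken from the list above; they do not persist across reboots unless added to rc.local or a udev rule):

echo 8 > /sys/block/vda/queue/nr_requests
echo 4096 > /sys/block/vda/queue/read_ahead_kb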
