CentOS 6.4: Cheap, Scalable Cluster Shared Storage with cman + rgmanager + iSCSI + GFS2 + cLVM

Posted by luashin on 2016-02-17
Outline

Preface

I. Environment preparation
II. iSCSI installation and configuration
III. cman and rgmanager cluster installation and configuration
IV. cLVM installation and configuration
V. GFS2 installation and configuration
VI. Testing

Preface
   The previous posts in this series covered high-availability clusters but never shared storage, even though we kept stressing how important shared storage is. This post fills that gap. Unlike the earlier posts, it focuses on the hands-on steps rather than theory; if you need background on the RHCS cluster suite or on iSCSI, a quick web search will cover it. One objection is worth addressing up front: some readers will say that real shared storage means NAS or SAN, so this setup looks narrow. But enterprise NAS and SAN gear easily costs hundreds of thousands, if not more, which most companies cannot justify (there are only so many big shops). For small and mid-sized businesses that want shared storage without NAS or SAN prices, and whose performance requirements are modest, this approach is a solid option. Enough preamble; let's build it.

I. Environment preparation
1. Operating system
CentOS 6.4 X86_64

2. Software versions
scsi-target-utils-1.0.24-3.el6_4.x86_64
iscsi-initiator-utils-6.2.0.873-2.el6.x86_64
cman-3.0.12.1-49.el6_4.1.x86_64
rgmanager-3.0.12.1-17.el6.x86_64
gfs2-utils-3.0.12.1-49.el6_4.1.x86_64
lvm2-cluster-2.02.98-9.el6.x86_64

3. Lab topology


   A quick note: the RHCS cluster suite requires at least three nodes, so this lab uses three cluster nodes plus one shared-storage server. (The shared-storage host doubles as the jump host; its hostname is target.test.com.)

4. Cluster environment
(1) Configure each node's hostname
node1:
[root@node1 ~]# uname -n  
node1.test.com  

[root@node1 ~]# cat /etc/hosts  
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4  
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6  
192.168.18.201    node1.test.com    node1  
192.168.18.202    node2.test.com    node2  
192.168.18.203    node3.test.com    node3
192.168.18.208    target.test.com   target

node2:
[root@node2 ~]# uname -n
node2.test.com
 
[root@node2 ~]# cat /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.18.201    node1.test.com    node1
192.168.18.202    node2.test.com    node2
192.168.18.203    node3.test.com    node3
192.168.18.208    target.test.com   target

node3:
[root@node3 ~]# uname -n
node3.test.com

[root@node3 ~]# cat /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.18.201    node1.test.com    node1
192.168.18.202    node2.test.com    node2
192.168.18.203    node3.test.com    node3
192.168.18.208    target.test.com   target

shared storage:
[root@target ~]# uname -n
target.test.com

[root@target ~]# cat /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.18.201    node1.test.com    node1
192.168.18.202    node2.test.com    node2
192.168.18.203    node3.test.com    node3  
192.168.18.208    target.test.com   target

(2) Set up passwordless SSH trust between each node and the jump host
node1:
[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''  
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

node2:
[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''  
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

node3:
[root@node3 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''  
[root@node3 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

shared storage:
[root@target ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''  
[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1.test.com
[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2.test.com
[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node3.test.com

(3) Synchronize time on each node
node1:
[root@node1 ~]# ntpdate 202.120.2.101

node2:
[root@node2 ~]# ntpdate 202.120.2.101

node3:
[root@node3 ~]# ntpdate 202.120.2.101

shared storage:
[root@target ~]# ntpdate 202.120.2.101

   Have you noticed that the time sync above, and many of the steps that follow, are exactly the same on every node? Wouldn't it be nice to type each command only once? There are many ways to do that; the most common is plain ssh. We already set up SSH trust above, so let's drive everything from the jump host.
[root@target ~]# alias ha='for I in {1..3}; do'   # define an alias for the loop opener, since we'll need it constantly
[root@target ~]# ha ssh node$I 'ntpdate 202.120.2.101'; done   # sync time on every node
20 Aug 14:32:40 ntpdate[14752]: adjust time server 202.120.2.101 offset -0.019162 sec  
20 Aug 14:32:41 ntpdate[11994]: adjust time server 202.120.2.101 offset 0.058863 sec  
20 Aug 14:32:43 ntpdate[1578]: adjust time server 202.120.2.101 offset 0.062831 sec
Note: that is the benefit of a jump host: every configuration step only has to be typed once.
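   The alias only supplies the opening of the for loop; whatever you type after ha becomes the loop body, and the trailing done closes it. Expanded, the one-liner above is simply:
[root@target ~]# for I in {1..3}; do ssh node$I 'ntpdate 202.120.2.101'; done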

(4) Install the yum repositories
[root@target ~]# ha ssh node$I 'rpm -ivh '; done
[root@target ~]# ha ssh node$I 'rpm -ivh '; done
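   The two rpm -ivh commands above are missing their package URLs. Purely as an illustration (the exact repositories are an assumption, not from the original), installing the EPEL release package for CentOS 6 would look something like:
[root@target ~]# ha ssh node$I 'rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'; done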

(5) Disable the firewall and SELinux
[root@target ~]# ha ssh node$I 'service iptables stop'; done

node1:
[root@node1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.  
# SELINUX= can take one of these three values:  
#    enforcing - SELinux security policy is enforced.  
#    permissive - SELinux prints warnings instead of enforcing.  
#    disabled - No SELinux policy is loaded.  
SELINUX=disabled  
# SELINUXTYPE= can take one of these two values:  
#    targeted - Targeted processes are protected,  
#    mls - Multi Level Security protection.  
SELINUXTYPE=targeted

node2:
[root@node2 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.  
# SELINUX= can take one of these three values:  
#    enforcing - SELinux security policy is enforced.  
#    permissive - SELinux prints warnings instead of enforcing.  
#    disabled - No SELinux policy is loaded.  
SELINUX=disabled  
# SELINUXTYPE= can take one of these two values:  
#    targeted - Targeted processes are protected,  
#    mls - Multi Level Security protection.  
SELINUXTYPE=targeted

node3:
[root@node3 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.  
# SELINUX= can take one of these three values:  
#    enforcing - SELinux security policy is enforced.  
#    permissive - SELinux prints warnings instead of enforcing.  
#    disabled - No SELinux policy is loaded.  
SELINUX=disabled  
# SELINUXTYPE= can take one of these two values:  
#    targeted - Targeted processes are protected,  
#    mls - Multi Level Security protection.  
SELINUXTYPE=targeted
That completes the environment preparation. Next, let's configure iSCSI.

II. iSCSI installation and configuration
1. Install the target on the storage server
[root@target ~]# yum install -y scsi-target-utils

2. Configure the target
[root@target ~]# vim /etc/tgt/targets.conf
#<target iqn.2008-09.com.example:server.target4>
#    direct-store /dev/sdd
#    incominguser someuser secretpass12
#</target>
<target iqn.2013-08.com.test:teststore.sdb>        # target name (IQN)
    <backing-store /dev/sdb>                       # the shared disk to export
        vendor_id test                             # vendor ID (arbitrary string)
        lun 6                                      # LUN number
    </backing-store>
    incominguser iscsiuser iscsiuser               # CHAP username and password
    initiator-address 192.168.18.0/24              # network allowed to connect
</target>

3. Start tgtd and enable it at boot
[root@target ~]# service tgtd start  
[root@target ~]# chkconfig tgtd on  
[root@target ~]# chkconfig tgtd --list  
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off

4. Inspect the configured target
[root@target ~]# tgtadm --lld iscsi --mode target --op show  
Target 1: iqn.2013-08.com.test:teststore.sdb  
    System information:  
        Driver: iscsi  
        State: ready  
    I_T nexus information:  
    LUN information:  
        LUN: 0  
            Type: controller  
            SCSI ID: IET    00010000  
            SCSI SN: beaf10  
            Size: 0 MB, Block size: 1  
            Online: Yes  
            Removable media: No  
            Prevent removal: No  
            Readonly: No  
            Backing store type: null  
            Backing store path: None  
            Backing store flags:    
    Account information:  
        iscsiuser  
    ACL information:  
        192.168.18.0/24

5. Install the initiator on each node
[root@target ~]# ha ssh node$I 'yum install -y iscsi-initiator-utils'; done

6. Configure the initiator
node1:
[root@node1 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-08.com.test:node1

[root@node1 ~]# vim /etc/iscsi/iscsid.conf
# change the following three settings
node.session.auth.authmethod = CHAP # enable CHAP authentication
node.session.auth.username = iscsiuser # CHAP username
node.session.auth.password = iscsiuser # CHAP password

node2:
[root@node2 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-08.com.test:node2

[root@node2 ~]# vim /etc/iscsi/iscsid.conf
# change the following three settings
node.session.auth.authmethod = CHAP # enable CHAP authentication
node.session.auth.username = iscsiuser # CHAP username
node.session.auth.password = iscsiuser # CHAP password

node3:
[root@node3 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-08.com.test:node3

[root@node3 ~]# vim /etc/iscsi/iscsid.conf
# change the following three settings
node.session.auth.authmethod = CHAP # enable CHAP authentication
node.session.auth.username = iscsiuser # CHAP username
node.session.auth.password = iscsiuser # CHAP password
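   Since the iscsid.conf change is identical on all three nodes, it can also be pushed from the jump host in one shot. A sketch, assuming the stock commented-out defaults in /etc/iscsi/iscsid.conf:
[root@target ~]# ha ssh node$I "sed -i -e 's/^#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' -e 's/^#node.session.auth.username = username/node.session.auth.username = iscsiuser/' -e 's/^#node.session.auth.password = password/node.session.auth.password = iscsiuser/' /etc/iscsi/iscsid.conf"; done
Note that /etc/iscsi/initiatorname.iscsi must still differ per node, so it is best edited individually as shown above.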

7. Start the initiator on each node and enable it at boot
[root@target ~]# ha ssh node$I 'service iscsi start'; done
[root@target ~]# ha ssh node$I 'chkconfig iscsi on'; done
[root@target ~]# ha ssh node$I 'chkconfig iscsi --list'; done  
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off

8. Discover the target from each node
[root@target ~]# ha ssh node$I 'iscsiadm -m discovery -t st -p 192.168.18.208:3260'; done  
192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb  
192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb  
192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb

9. Log in to the target from each node and list the disks
[root@target ~]# ha ssh node$I 'iscsiadm -m node -T iqn.2013-08.com.test:teststore.sdb -p 192.168.18.208 -l'; done  
[root@target ~]# ha ssh node$I 'fdisk -l'; done
Disk /dev/sda: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x000dfceb
  Device Boot      Start        End      Blocks  Id  System  
/dev/sda1  *          1          26      204800  83  Linux  
Partition 1 does not end on cylinder boundary.  
/dev/sda2              26        1301    10240000  83  Linux  
/dev/sda3            1301        1938    5120000  83  Linux  
/dev/sda4            1938        2611    5405696    5  Extended  
/dev/sda5            1939        2066    1024000  82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x5f3b697c
  Device Boot      Start        End      Blocks  Id  System

Disk /dev/sdd: 21.5 GB, 21474836480 bytes  
64 heads, 32 sectors/track, 20480 cylinders  
Units = cylinders of 2048 * 512 = 1048576 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x0c68b5e3
  Device Boot      Start        End      Blocks  Id  System

Disk /dev/sda: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x000dfceb
  Device Boot      Start        End      Blocks  Id  System  
/dev/sda1  *          1          26      204800  83  Linux  
Partition 1 does not end on cylinder boundary.  
/dev/sda2              26        1301    10240000  83  Linux  
/dev/sda3            1301        1938    5120000  83  Linux  
/dev/sda4            1938        2611    5405696    5  Extended  
/dev/sda5            1939        2066    1024000  82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x00000000

Disk /dev/sdd: 21.5 GB, 21474836480 bytes  
64 heads, 32 sectors/track, 20480 cylinders  
Units = cylinders of 2048 * 512 = 1048576 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x0c68b5e3
  Device Boot      Start        End      Blocks  Id  System

Disk /dev/sda: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x000dfceb
  Device Boot      Start        End      Blocks  Id  System  
/dev/sda1  *          1          26      204800  83  Linux  
Partition 1 does not end on cylinder boundary.  
/dev/sda2              26        1301    10240000  83  Linux  
/dev/sda3            1301        1938    5120000  83  Linux  
/dev/sda4            1938        2611    5405696    5  Extended  
/dev/sda5            1939        2066    1024000  82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x00000000

Disk /dev/sdd: 21.5 GB, 21474836480 bytes  
64 heads, 32 sectors/track, 20480 cylinders  
Units = cylinders of 2048 * 512 = 1048576 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x0c68b5e3
  Device Boot      Start        End      Blocks  Id  System
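   Each node now sees the iSCSI LUN as /dev/sdd. To inspect the sessions themselves rather than the disks, iscsiadm can list them (output omitted here):
[root@target ~]# ha ssh node$I 'iscsiadm -m session'; done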
That completes the iSCSI setup. Next, the cluster itself.

III. cman and rgmanager cluster installation and configuration
1. Install cman and rgmanager on every node
[root@target ~]# ha ssh node$I 'yum install -y cman rgmanager'; done

2. Configure the cluster
(1) Create the cluster and set its name
[root@node1 ~]# ccs_tool create testcluster

(2) Configure the fencing device
[root@node1 ~]# ccs_tool addfence meatware fence_manual
[root@node1 ~]# ccs_tool lsfence
Name            Agent
meatware        fence_manual

(3) Add the cluster nodes
[root@node1 ~]# ccs_tool addnode -n 1 -f meatware node1.test.com  
[root@node1 ~]# ccs_tool addnode -n 2 -f meatware node2.test.com  
[root@node1 ~]# ccs_tool addnode -n 3 -f meatware node3.test.com  
[root@node1 ~]# ccs_tool lsnode
Cluster name: testcluster, config_version: 5
Nodename                        Votes Nodeid Fencetype  
node1.test.com                    1    1    meatware  
node2.test.com                    1    2    meatware  
node3.test.com                    1    3    meatware
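   Under the hood, these ccs_tool commands just edit /etc/cluster/cluster.conf. The resulting file should look roughly like the sketch below (attribute details can vary with the cman version):
<?xml version="1.0"?>
<cluster name="testcluster" config_version="5">
  <clusternodes>
    <clusternode name="node1.test.com" votes="1" nodeid="1">
      <fence><method name="single"><device name="meatware"/></method></fence>
    </clusternode>
    <clusternode name="node2.test.com" votes="1" nodeid="2">
      <fence><method name="single"><device name="meatware"/></method></fence>
    </clusternode>
    <clusternode name="node3.test.com" votes="1" nodeid="3">
      <fence><method name="single"><device name="meatware"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="meatware" agent="fence_manual"/>
  </fencedevices>
</cluster>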

3. Copy the configuration file to the other nodes
[root@node1 cluster]# scp cluster.conf root@node2.test.com:/etc/cluster/
[root@node1 cluster]# scp cluster.conf root@node3.test.com:/etc/cluster/
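   Before starting anything, it is worth confirming that all three copies are identical, for example by comparing checksums from the jump host:
[root@target ~]# ha ssh node$I 'md5sum /etc/cluster/cluster.conf'; done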

4. Start cman on each node
node1:
[root@node1 cluster]# service cman start  
Starting cluster:    
  Checking if cluster has been disabled at boot...        [  OK  ]
  Checking Network Manager...                             [  OK  ]
  Global setup...                                         [  OK  ]
  Loading kernel modules...                               [  OK  ]
  Mounting configfs...                                    [  OK  ]
  Starting cman...                                        [  OK  ]
  Waiting for quorum...                                   [  OK  ]
  Starting fenced...                                      [  OK  ]
  Starting dlm_controld...                                [  OK  ]
  Tuning DLM kernel config...                             [  OK  ]
  Starting gfs_controld...                                [  OK  ]
  Unfencing self...                                       [  OK  ]
  Joining fence domain...                                 [  OK  ]

node2:
[root@node2 cluster]# service cman start  
Starting cluster:    
  Checking if cluster has been disabled at boot...        [  OK  ]
  Checking Network Manager...                             [  OK  ]
  Global setup...                                         [  OK  ]
  Loading kernel modules...                               [  OK  ]
  Mounting configfs...                                    [  OK  ]
  Starting cman...                                        [  OK  ]
  Waiting for quorum...                                   [  OK  ]
  Starting fenced...                                      [  OK  ]
  Starting dlm_controld...                                [  OK  ]
  Tuning DLM kernel config...                             [  OK  ]
  Starting gfs_controld...                                [  OK  ]
  Unfencing self...                                       [  OK  ]
  Joining fence domain...                                 [  OK  ]

node3:
[root@node3 cluster]# service cman start  
Starting cluster:    
  Checking if cluster has been disabled at boot...        [  OK  ]
  Checking Network Manager...                             [  OK  ]
  Global setup...                                         [  OK  ]
  Loading kernel modules...                               [  OK  ]
  Mounting configfs...                                    [  OK  ]
  Starting cman...                                        [  OK  ]
  Waiting for quorum...                                   [  OK  ]
  Starting fenced...                                      [  OK  ]
  Starting dlm_controld...                                [  OK  ]
  Tuning DLM kernel config...                             [  OK  ]
  Starting gfs_controld...                                [  OK  ]
  Unfencing self...                                       [  OK  ]
  Joining fence domain...                                 [  OK  ]

5. Check the listening ports on each node
node1:
[root@node1 cluster]# netstat -ntulp  
Active Internet connections (only servers)  
Proto Recv-Q Send-Q Local Address              Foreign Address            State      PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                  LISTEN      1082/sshd      
tcp        0      0 127.0.0.1:25                0.0.0.0:*                  LISTEN      1158/master    
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                  LISTEN      14610/sshd        
tcp        0      0 :::22                      :::*                        LISTEN      1082/sshd      
tcp        0      0 ::1:25                      :::*                        LISTEN      1158/master      
tcp        0      0 ::1:6010                    :::*                        LISTEN      14610/sshd        
udp        0      0 192.168.18.201:5404        0.0.0.0:*                              15583/corosync  
udp        0      0 192.168.18.201:5405        0.0.0.0:*                              15583/corosync  
udp        0      0 239.192.47.48:5405          0.0.0.0:*                              15583/corosync

node2:
[root@node2 cluster]# netstat -ntulp  
Active Internet connections (only servers)  
Proto Recv-Q Send-Q Local Address              Foreign Address            State      PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                  LISTEN      1082/sshd      
tcp        0      0 127.0.0.1:25                0.0.0.0:*                  LISTEN      1158/master    
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                  LISTEN      14610/sshd        
tcp        0      0 :::22                      :::*                        LISTEN      1082/sshd      
tcp        0      0 ::1:25                      :::*                        LISTEN      1158/master      
tcp        0      0 ::1:6010                    :::*                        LISTEN      14610/sshd        
udp        0      0 192.168.18.202:5404        0.0.0.0:*                              15583/corosync  
udp        0      0 192.168.18.202:5405        0.0.0.0:*                              15583/corosync  
udp        0      0 239.192.47.48:5405          0.0.0.0:*                              15583/corosync

node3:
[root@node3 cluster]# netstat -ntulp  
Active Internet connections (only servers)  
Proto Recv-Q Send-Q Local Address              Foreign Address            State      PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                  LISTEN      1082/sshd      
tcp        0      0 127.0.0.1:25                0.0.0.0:*                  LISTEN      1158/master    
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                  LISTEN      14610/sshd        
tcp        0      0 :::22                      :::*                        LISTEN      1082/sshd      
tcp        0      0 ::1:25                      :::*                        LISTEN      1158/master      
tcp        0      0 ::1:6010                    :::*                        LISTEN      14610/sshd        
udp        0      0 192.168.18.203:5404        0.0.0.0:*                              15583/corosync  
udp        0      0 192.168.18.203:5405        0.0.0.0:*                              15583/corosync  
udp        0      0 239.192.47.48:5405          0.0.0.0:*                              15583/corosync
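   Besides the corosync ports, quorum and membership can be verified on any node with cman's own tools (output omitted here):
[root@node1 cluster]# cman_tool status
[root@node1 cluster]# clustat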
The cluster configuration is complete. Next, cLVM.

IV. cLVM installation and configuration
1. Install cLVM
[root@target ~]# ha ssh node$I 'yum install -y lvm2-cluster'; done

2. Enable clustered LVM
[root@target ~]# ha ssh node$I 'lvmconf --enable-cluster'; done

3. Verify that clustered locking is enabled
[root@target ~]# ha ssh node$I 'grep "locking_type = 3" /etc/lvm/lvm.conf'; done  
    locking_type = 3  
    locking_type = 3  
    locking_type = 3
Note: clustered locking is now enabled on all nodes.
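   For reference, lvmconf --enable-cluster does little more than rewrite that setting; a hand-rolled equivalent (just a sketch, not what the tool literally runs) would be:
[root@target ~]# ha ssh node$I "sed -i 's/^\([[:space:]]*\)locking_type = .*/\1locking_type = 3/' /etc/lvm/lvm.conf"; done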

4. Start the clvmd service
[root@target ~]# ha ssh node$I 'service clvmd start'; done  
Starting clvmd:    
Activating VG(s):  No volume groups found  
[  OK  ]  
Starting clvmd:    
Activating VG(s):  No volume groups found  
[  OK  ]  
Starting clvmd:    
Activating VG(s):  No volume groups found  
[  OK  ]

5. Enable cman, rgmanager, and clvmd at boot on every node
[root@target ~]# ha ssh node$I 'chkconfig clvmd on'; done  
[root@target ~]# ha ssh node$I 'chkconfig cman on'; done  
[root@target ~]# ha ssh node$I 'chkconfig rgmanager on'; done
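   Note that start order matters for these services: cman must be running before clvmd, and rgmanager comes last (stopping is the reverse). The init scripts' chkconfig priorities are meant to take care of this at boot; done by hand from the jump host it would look like this sketch:
[root@target ~]# ha ssh node$I 'for s in cman clvmd rgmanager; do service $s start; done'; done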

6. Create the LVM volumes on a cluster node
node1:
(1) Check the shared storage
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x000dfceb
  Device Boot      Start        End      Blocks  Id  System  
/dev/sda1  *          1          26      204800  83  Linux  
Partition 1 does not end on cylinder boundary.  
/dev/sda2              26        1301    10240000  83  Linux  
/dev/sda3            1301        1938    5120000  83  Linux  
/dev/sda4            1938        2611    5405696    5  Extended  
/dev/sda5            1939        2066    1024000  82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes  
255 heads, 63 sectors/track, 2610 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x5f3b697c
  Device Boot      Start        End      Blocks  Id  System

Disk /dev/sdd: 21.5 GB, 21474836480 bytes  
64 heads, 32 sectors/track, 20480 cylinders  
Units = cylinders of 2048 * 512 = 1048576 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disk identifier: 0x0c68b5e3
  Device Boot      Start        End      Blocks  Id  System

(2) Create the clustered logical volume
[root@node1 ~]# pvcreate /dev/sdd                         # create the physical volume  
  Physical volume "/dev/sdd" successfully created  
[root@node1 ~]# pvs  
  PV        VG  Fmt  Attr PSize  PFree    
  /dev/sdd        lvm2 a--  20.00g 20.00g  
[root@node1 ~]# vgcreate clustervg /dev/sdd               # create the volume group  
  Clustered volume group "clustervg" successfully created  
[root@node1 ~]# vgs  
  VG        #PV #LV #SN Attr  VSize  VFree    
  clustervg  1  0  0 wz--nc 20.00g 20.00g  
[root@node1 ~]# lvcreate -L 10G -n clusterlv clustervg    # create the logical volume  
  Logical volume "clusterlv" created  
[root@node1 ~]# lvs  
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert  
  clusterlv clustervg -wi-a---- 10.00g

7. Check the new logical volume from node2 and node3
node2:
[root@node2 ~]# lvs  
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert  
  clusterlv clustervg -wi-a---- 10.00g  
node3:
[root@node3 ~]# lvs  
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert  
  clusterlv clustervg -wi-a---- 10.00g
cLVM configuration is complete. Next we format the logical volume with the cluster file system, GFS2.

V. GFS2 installation and configuration
1. Install gfs2-utils
[root@target ~]# ha ssh node$I 'yum install -y gfs2-utils'; done

2. Skim the help output
[root@node1 ~]# mkfs.gfs2 -h  
Usage:
mkfs.gfs2 [options] <device> [ block-count ]
Options:
  -b <bytes>      Filesystem block size  
  -c <MB>         Size of quota change file  
  -D              Enable debugging code  
  -h              Print this help, then exit  
  -J <MB>         Size of journals  
  -j <num>        Number of journals  
  -K              Don't try to discard unused blocks  
  -O              Don't ask for confirmation  
  -p <name>       Name of the locking protocol  
  -q              Don't print anything  
  -r <MB>         Resource Group Size  
  -t <name>       Name of the lock table  
  -u <MB>         Size of unlinked file  
  -V              Print program version information, then exit
Notes on the options we will use:
-j #: number of journals; the file system can be mounted by as many nodes as it has journals
-J #: size of each journal, 128MB by default
-p {lock_dlm|lock_nolock}: the locking protocol; lock_dlm for a cluster, lock_nolock for single-node use
-t: the lock table name, in the form clustername:locktablename

3. Format the volume as a cluster file system
[root@node1 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t testcluster:sharedstorage /dev/clustervg/clusterlv  
This will destroy any data on /dev/clustervg/clusterlv.  
It appears to contain: symbolic link to `../dm-0'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/clustervg/clusterlv
Blocksize:                4096  
Device Size                10.00 GB (2621440 blocks)  
Filesystem Size:          10.00 GB (2621438 blocks)  
Journals:                  2  
Resource Groups:          40  
Locking Protocol:          "lock_dlm"  
Lock Table:                "testcluster:sharedstorage"  
UUID:                      60825032-b995-1970-2547-e95420bd1c7c
Note: testcluster is the cluster name and sharedstorage is the lock table name; the cluster name must match the one in cluster.conf.

4. Create the mount point and mount
[root@node1 ~]# mkdir /mydata  
[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata  
[root@node1 ~]# cd /mydata/  
[root@node1 mydata]# ll  
total 0

5. Mount node2 and node3
node2:
[root@node2 ~]# mkdir /mydata  
[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node2 ~]# cd /mydata/  
[root@node2 mydata]# ll  
total 0

node3:
[root@node3 ~]# mkdir /mydata  
[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata  
Too many nodes mounting filesystem, no free journals
       Note: node2 mounted successfully, but node3 failed with "Too many nodes mounting filesystem, no free journals": there is no spare journal for it. We formatted with only two journals, so node1 and node2 can mount but node3 cannot. We will fix this below; first, let's test the cluster file system.

VI. Testing
1. Check that files propagate quickly between nodes
node1:
[root@node1 mydata]# touch 123.txt  
[root@node1 mydata]# ll  
total 4  
-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt  
[root@node1 mydata]# ll  
total 8  
-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt  
-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt

node2:
[root@node2 mydata]# ll  
total 4  
-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt  
[root@node2 mydata]# touch 456.txt  
[root@node2 mydata]# ll  
total 8  
-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt  
-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt
Note: files created on one node show up on the other right away. Next, let's look at the mount's tunable attributes.

2. View the mount's tunable attributes
[root@node1 mydata]# gfs2_tool gettune /mydata  
incore_log_blocks = 8192  
log_flush_secs = 60  
quota_warn_period = 10  
quota_quantum = 60  
max_readahead = 262144  
complain_secs = 10  
statfs_slow = 0  
quota_simul_sync = 64  
statfs_quantum = 30  
quota_scale = 1.0000  (1, 1)  
new_files_jdata = 0        
# the most commonly tuned knob: whether data in newly created files is journaled (flushed to disk immediately); usually set to 1, so let's set it

[root@node1 mydata]# gfs2_tool settune /mydata new_files_jdata 1  

[root@node1 mydata]# gfs2_tool gettune /mydata  
incore_log_blocks = 8192  
log_flush_secs = 60  
quota_warn_period = 10  
quota_quantum = 60  
max_readahead = 262144  
complain_secs = 10  
statfs_slow = 0  
quota_simul_sync = 64  
statfs_quantum = 30  
quota_scale = 1.0000  (1, 1)  
new_files_jdata = 1

3. Check the journals
[root@node1 mydata]# gfs2_tool journals /mydata  
journal1 - 128MB  
journal0 - 128MB  
2 journal(s) found.
Note: there are only two journals, 128MB each by default. Let's add a third and then mount node3.

4. Add a journal and mount node3
[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv  
Filesystem:            /mydata  
Old Journals          2  
New Journals          3  

[root@node1 ~]#  gfs2_tool journals /mydata  

journal2 - 128MB  
journal1 - 128MB  
journal0 - 128MB  
3 journal(s) found.

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata  
[root@node3 ~]# cd /mydata/  
[root@node3 mydata]# ll  
total 8  
-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt  
-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt
Note: node3 now mounts successfully.
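   To make the mounts survive a reboot, a common approach on EL6 (a sketch; the mount options are a matter of taste) is an fstab entry plus the gfs2 init script, which mounts gfs2 entries from fstab at boot:
[root@node1 ~]# echo '/dev/clustervg/clusterlv /mydata gfs2 defaults,noatime 0 0' >> /etc/fstab  
[root@node1 ~]# chkconfig gfs2 on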

5. Finally, how to grow the clustered logical volume
(1) Check the current size
[root@node3 ~]# lvs  
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert  
  clusterlv clustervg -wi-ao--- 10.00g
Note: it is 10G now; let's grow it to 15G.

(2) Extend the volume (the physical boundary)
[root@node3 ~]# lvextend -L 15G /dev/clustervg/clusterlv  
  Extending logical volume clusterlv to 15.00 GiB  
  Logical volume clusterlv successfully resized  

[root@node3 ~]# lvs  
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert  
  clusterlv clustervg -wi-ao--- 15.00g

(3) Grow the file system (the logical boundary)
[root@node3 ~]# gfs2_grow /dev/clustervg/clusterlv  
FS: Mount Point: /mydata  
FS: Device:      /dev/dm-0  
FS: Size:        2621438 (0x27fffe)  
FS: RG size:    65533 (0xfffd)  
DEV: Size:      3932160 (0x3c0000)  
The file system grew by 5120MB.  
gfs2_grow complete.  
   
[root@node3 ~]# df -h  
Filesystem                      Size  Used Avail Use% Mounted on  
/dev/sda2                       9.7G  1.5G 7.7G 17%    /  
tmpfs                           116M  29M  88M  25%    /dev/shm  
/dev/sda1                       194M  26M  159M 14%    /boot  
/dev/sda3                       4.9G  138M 4.5G  3%    /data  
/dev/sdc1                       5.0G  138M 4.6G  3%    /mnt  
/dev/mapper/clustervg-clusterlv 15G  388M  15G   3%    /mydata
      As you can see, it is now 15G. That completes the whole demonstration of cheap, scalable cluster shared storage on CentOS 6.4 with cman + rgmanager + iSCSI + GFS2 + cLVM; I hope you got something out of it. ^_^

Source: ITPUB blog, http://blog.itpub.net/9034054/viewspace-1990201/ (please credit the source when reposting).
