Installing GPFS and Preparing the Configuration

Posted by beatony on 2010-06-19

In this exercise the installation packages are kept under /soft_ins; you could also set up an NFS server and share the installation files instead.

[NSD1][root][/home/scripts/gpfs]>rcp_file.sh /home/scripts/gpfs/install_gpfs.sh
[NSD1][root][/home/scripts/gpfs]> run_cmd.sh /home/scripts/gpfs/install_gpfs.sh

Confirm that every server reports ok for the installation.

[NSD1][root][/home/scripts/gpfs]>run_cmd.sh /home/scripts/gpfs/chang_profile.sh 

[NSD1][root][/home/scripts/gpfs]>run_cmd.sh mkdir /share  
[NSD1][root][/home/scripts/gpfs]>run_cmd.sh ln -s  /share /tmp/mmfs
[NSD1][root][/home/scripts/gpfs]>run_cmd.sh mkdir /tmp/gpfs

[NSD1][root][/]>vi /tmp/gpfs/nodefile
NSD1:quorum
NSD2:quorum
App1:client
App2:client
App3:client
App4:client
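Before creating the cluster it is worth sanity-checking this file. The helper below is a hypothetical sketch, not a GPFS command: it counts the nodes and quorum designations in a node file of the format above. Note that with only two quorum nodes, losing either one breaks quorum, so an odd number of quorum nodes (or a tiebreaker disk) is usually preferred.

```shell
# Hypothetical helper (not part of GPFS): sanity-check a node file
# in the "hostname:designation" format used above.
check_nodefile() {
    file="$1"
    total=$(grep -c . "$file")            # non-empty lines = nodes
    quorum=$(grep -c ':quorum' "$file")   # nodes marked as quorum
    echo "nodes=$total quorum=$quorum"
    # an even quorum count cannot break ties if half the quorum nodes fail
    if [ $((quorum % 2)) -eq 0 ]; then
        echo "warning: even number of quorum nodes"
    fi
}
# check_nodefile /tmp/gpfs/nodefile
```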

First make sure any previous GPFS cluster configuration has been cleaned up:

[NSD1][root][/tmp/gpfs]>mmdelnode -f

Create the cluster:

[NSD1][root][/tmp/gpfs]>mmcrcluster -C bgbcrun -U bgbc \
-N /tmp/gpfs/nodefile -p NSD1 -s NSD2
Thu Jun 28 15:42:57 BEIST 2007: 6027-1664 mmcrcluster: Processing node NSD1
Thu Jun 28 15:42:57 BEIST 2007: 6027-1664 mmcrcluster: Processing node NSD2
…..
mmcrcluster: Command successfully completed
mmcrcluster: 6027-1371 Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
The options mean:
-C bgbcrun            set the cluster name
-U bgbc               define the UID domain
-N /tmp/gpfs/nodefile specify the node file
-p NSD1               primary cluster configuration server is NSD1
-s NSD2               secondary cluster configuration server is NSD2

[NSD1][root][/tmp/gpfs]>mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         bgbcrun.NSD1
  GPFS cluster id:           739157013761844865
  GPFS UID domain:           bgbc
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    NSD1
  Secondary server:  NSD2

 Node  Daemon node name            IP address       Admin node name         Designation    
-----------------------------------------------------------------------------------------
   1   NSD1                      10.66.3.98       NSD1                       quorum
   2   NSD2                      10.66.3.99       NSD2                       quorum
   3   App1                      10.66.5.51       App1                     
   4   App2                      10.66.5.52       App2                     
   5   App3                      10.66.5.53       App3                     
   6   App4                      10.66.5.54       App4

[NSD1][root][/tmp/gpfs]>vi /tmp/gpfs/nsdfile
Add a disk descriptor line:
hdisk2:NSD1:NSD2: dataAndMetadata:4

[NSD1][root][/tmp/gpfs]>mmcrnsd -F /tmp/gpfs/nsdfile
mmcrnsd: Processing disk hdisk2
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.

At this point, mmcrnsd has rewritten the file:

[NSD1][root][/tmp/gpfs]>cat nsdfile
# hdisk2:NSD2:NSD1: dataAndMetadata:4
gpfs1nsd:::dataAndMetadata:4:

[NSD1][root][/tmp/gpfs]>lspv
hdisk3          00003e846ffa7a6e                    gpfs1nsd        
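The generated NSD names can also be pulled straight from the rewritten descriptor file. The helper below is an illustration only, assuming the format shown above: the original line is commented out and each generated line starts with the NSD name.

```shell
# Hypothetical helper: print the NSD names from a descriptor file that
# mmcrnsd has rewritten (comments stripped, first colon field kept).
list_nsds() {
    grep -v '^#' "$1" | cut -d: -f1 | grep -v '^$'
}
# list_nsds /tmp/gpfs/nsdfile   -> gpfs1nsd
```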

[NSD2][root][/tmp/gpfs]>mmstartup -a 
Thu Jun 28 15:52:12 BEIST 2007: 6027-1642 mmstartup: Starting GPFS ...
NSD2:  6027-2114 The GPFS subsystem is already active.
…
App4:  6027-2114 The GPFS subsystem is already active.

[NSD2][root][/]>mmcrfs /share sharelv -F /tmp/gpfs/nsdfile  -A yes -B 64K -n 30 -v no
GPFS: 6027-531 The following disks of sharelv will be formatted on node NSD1:
    gpfs1nsd: size 67108864 KB
GPFS: 6027-540 Formatting file system ...
GPFS: 6027-535 Disks up to size 140 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
GPFS: 6027-572 Completed creation of file system /dev/sharelv.
mmcrfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
The options mean:
/share   mount point of the file system
sharelv  device name of the file system
-F       file containing the NSD descriptors
-A yes   mount the file system automatically when GPFS starts
-B 64K   use a block size of 64 KB
-n 30    estimated number of nodes that will mount the file system
-v no    do not verify whether the disks already contain a file system

[NSD2][root][/home/scripts/gpfs]>run_cmd.sh mount /share

[App1][root][/]>mkdir /share/user1work
[App1][root][/]>chown user1:bea /share/user1work
[App1][root][/]>chmod 700 /share/user1work
[App1][root][/]>mkdir /share/user1temp
[App1][root][/]>chown user1:bea /share/user1temp
[App1][root][/]> chmod 750 /share/user1temp

In the same way, create the directories for the other users on the other three machines and set the corresponding owners and permissions, so that each work directory is usable only by its owner, while each temp directory can be read and entered by the group.
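The per-user directory setup can be scripted. The function below is a sketch of the scheme described above (work = 700, temp = 750, with the user names and group bea from the example); chown will only succeed when run as root with the users already defined.

```shell
# Sketch of the directory scheme above: <user>work is owner-only (700),
# <user>temp is readable/enterable by the group (750).
make_user_dirs() {
    base="$1"; shift
    for u in "$@"; do
        mkdir -p "$base/${u}work" "$base/${u}temp"
        chmod 700 "$base/${u}work"
        chmod 750 "$base/${u}temp"
        # needs root and existing users; ignore failures otherwise
        chown "$u:bea" "$base/${u}work" "$base/${u}temp" 2>/dev/null || true
    done
}
# make_user_dirs /share user1 user2 user3 user4
```

Because /share is a GPFS file system, running this on one node makes the directories visible cluster-wide.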






Start GPFS automatically when a node boots:

[NSD1][root][/]>mmchconfig autoload=yes
mmchconfig: Command successfully completed
mmchconfig: 6027-1371 Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.

Use multiple quorum disks:

[NSD1][root][/]> mmchconfig singleNodeQuorum=no
mmchconfig: 6027-1119 Obsolete option: singleNodeQuorum

[NSD1][root][/]>mmlsconfig
Configuration data for cluster bgbcrun.NSD2:
-----------------------------------------------
clusterName bgbcrun.NSD2
clusterId 739157013761844865
clusterType lc
autoload no
useDiskLease yes
uidDomain bgbc
maxFeatureLevelAllowed 912

This completes the GPFS installation and configuration.

[NSD1][root][/]>mmgetstate -a
 Node number  Node name        GPFS state 
------------------------------------------
       1      NSD1          active
       2      NSD2          active
       3      App1          active
       4      App2          active
       5      App3          active
      10      App4          active

Finally, because of the different user IDs, in the shared directory /share on App1, user1 can read and write only the files in its own directories; it can read the other users' temp directories but cannot read or write their work directories.

[App1][/share]>ls -latr
drwx------   2 user1    bea         2048 Jun 28 17:21 user1work
drwxr-x---   2 user1    bea         2048 Jun 28 17:21 user1temp
drwx------   2 502      bea         2048 Jun 28 17:22 user2work
drwxr-x---   2 502      bea         2048 Jun 28 17:22 user2temp
drwx------   2 503      bea         2048 Jun 28 17:22 user3work
drwxr-x---   2 503      bea         2048 Jun 28 17:22 user3temp
drwx------   2 504      bea         2048 Jun 28 17:22 user4work
drwxr-x---   2 504      bea         2048 Jun 28 17:23 user4temp
[App1][user1][/share]>cd user2work
ksh: user2work: Permission denied.
[App1][user1][/share]>cd user2temp 
[App1][user1][/share]>>a
The file access permissions do not allow the specified action.
ksh[2]: a: 0403-005 Cannot create the specified file.
[App1][user1][/share]>cd user1temp
[App1][user1][/share/user1temp]>>a
[App1][user1][/share/user1temp]>ls -l
-rw-r-----   1 user1   bea               0 Aug 13 18:47 a
[App1][user1][/share/user1temp]>rm a
[App1][user1][/share/user1temp]>ls -l

The other three machines show similar results, as intended.

1. Adding a node

Besides all the usual preparation and installation work, the following step is needed:

[NSD1][root][/home/scripts/gpfs]>mmaddnode -N bgbcw14:client 
Thu Jun 28 16:28:21 BEIST 2007: 6027-1664 mmaddnode: Processing node App3
mmaddnode: Command successfully completed
mmaddnode: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

2. Stopping nodes

Stop all nodes:

[NSD2][root][/home/scripts/gpfs]>mmshutdown -a
Mon Jul 30 09:56:02 BEIST 2007: 6027-1341 mmshutdown: Starting force unmount of 
GPFS file systems
NSD1:  forced unmount of /share
…
App4:  forced unmount of /share
Mon Jul 30 09:56:07 BEIST 2007: 6027-1344 mmshutdown: Shutting down GPFS daemons
NSD1:  Shutting down!
…
App3:  Shutting down!
NSD1:  'shutdown' command about to kill process 368890
….
App4:  'shutdown' command about to kill process 474040
Mon Jul 30 09:56:13 BEIST 2007: 6027-1345 mmshutdown: Finished

You can also stop a single node with mmshutdown -N.

Removing GPFS

1.	fuser -kcu /share
2.	umount /share	#on every node
3.	mmdelfs sharelv
4.	mmlsfs sharelv	#check the result
5.	mmdelnsd -F /tmp/gpfs/nsdfile
6.	mmshutdown -a
7.	mmdelnode -n /tmp/gpfs/nodefile
8.	mmdelnode -f	#finally remove the cluster
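The eight steps above can be collected into one function. This is a sketch, not a tested procedure; the GPFS_DRYRUN switch below is an assumption added here so the sequence can be printed without actually tearing anything down:

```shell
# Sketch of the teardown sequence above. With GPFS_DRYRUN set in the
# environment, every step is echoed instead of executed.
gpfs_teardown() {
    run="${GPFS_DRYRUN:+echo}"
    $run fuser -kcu /share
    $run umount /share                     # repeat on every node
    $run mmdelfs sharelv
    $run mmlsfs sharelv                    # check: the fs should be gone
    $run mmdelnsd -F /tmp/gpfs/nsdfile
    $run mmshutdown -a
    $run mmdelnode -n /tmp/gpfs/nodefile
    $run mmdelnode -f                      # finally remove the cluster
}
# GPFS_DRYRUN=1 gpfs_teardown
```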

From the ITPUB blog, link: http://blog.itpub.net/22578826/viewspace-665690/. If you repost, please credit the source; otherwise legal responsibility may be pursued.
