A Practical Cluster Example: LVS + GFS + iSCSI + Tomcat (repost)

Posted by tonykorn97 on 2007-10-24
Author: 張智濤; last updated 2005-11-29


I started with plain HA (high availability). The examples I could find all used VMware, which is fine for experiments but not for a real deployment, and I had no fibre-channel HBA for shared storage, so I chose iSCSI. Once that worked I found that iSCSI+ext3 cannot be used with LVS, and eventually discovered that GFS can. In the end I built an LVS cluster that is actually usable in production. It took about four months on and off, with plenty of detours, and I spent three days writing this article; I hope it is useful.
Thanks to linuxfans.org, linuxsir.com, chinaunix.com and many other sites; much of the material came from their forums.
Reference documents and download links

a.
b.

c.ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3ES/en/RHGFS/SRPMS
d.

LVS topology:
eth0   = 10.3.1.101
eth0:1 = 10.3.1.254 (VIP)
Load Balance Router
eth1   = 192.168.1.71
eth1:1 = 192.168.1.1
      |                      |
   Real1                  Real2
eth0=192.168.1.68    eth0=192.168.1.67
(eth0 gateway = 192.168.1.1 on both real servers)
eth1=192.168.0.1 --- eth1=192.168.0.2
(direct heartbeat cable between the two real servers)
      |
      |
GFS / iSCSI shared storage
eth0 = 192.168.1.124

1. Setup iSCSI Server
Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat 9, IP=192.168.1.124
Download the iSCSI target source code ().
I chose iscsitarget-0.3.8.tar.gz, which requires kernel 2.4.29. Download kernel 2.4.29 from kernel.org, unpack and build it, reboot into it, then build and install iscsitarget-0.3.8:
#make KERNELSRC=/usr/src/linux-2.4.29
#make KERNELSRC=/usr/src/linux-2.4.29 install
#cp ietd.conf /etc
#vi /etc/ietd.conf
# Example iscsi target configuration
#
# Everything until the first target definition belongs
# to the global configuration.
# Right now this is only the user configuration used
# during discovery sessions:
# Users, who can access this target
# (no users means anyone can access the target)
User iscsiuser 1234567890abc
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw1
User iscsiuser 1234567890abc
Lun 0 /dev/sda5 fileio
Alias iraw1
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw2
User iscsiuser 1234567890abc
Lun 1 /dev/sda6 fileio
Alias iraw2
Target iqn.2005-04.com.my:storage.disk2.sys2.idisk
User iscsiuser 1234567890abc
Lun 2 /dev/sda3 fileio
Alias idisk
Target iqn.2005-04.com.my:storage.disk2.sys2.icca
User iscsiuser 1234567890abc
Lun 3 /dev/sda7 fileio
Alias icca
Notes: the password must be at least 12 characters long; Alias is an alias for the target, though for some reason it never shows up on the client side.
Partitioning: I have only one SCSI disk, so:
/dev/sda3: shared storage, the bigger the better
/dev/sda5: raw1, a raw device needed for the Cluster; I gave it 900 MB
/dev/sda6: raw2, a raw device needed for the Cluster; I gave it 900 MB
/dev/sda7: cca, needed for GFS; I gave it 64 MB
(/dev/sda4 is an extended partition holding sda5, sda6 and sda7)
Reboot, then start the iSCSI server with service iscsi-target start (I find this better than the suggested method, since you can check it with service iscsi-target status).
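A quick sanity check that the target is really up and listening; the netstat check is my own addition, 3260 being the standard iSCSI port:
#service iscsi-target status
#netstat -tln | grep 3260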
2. Setup iSCSI Client (on the two real servers)
Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat AS3U4 (AS3U5 is even better), kernel 2.4.21-27.EL
#vi /etc/iscsi.conf
DiscoveryAddress=192.168.1.124
OutgoingUsername=iscsiuser
OutgoingPassword=1234567890abc
Username=iscsiuser
Password=1234567890abc
LoginTimeout=15
IncomingUsername=iscsiuser
IncomingPassword=1234567890abc
SendAsyncTest=yes
#service iscsi restart
#iscsi-ls -l
..., trimmed down to the relevant part:
/dev/sdb:iraw2
/dev/sdc:iraw1
/dev/sdd:idisk
/dev/sde:icca
Note: the order of the iSCSI devices on the real servers matters; it must be identical on both. If it differs, change the configuration on the iSCSI server and retry until it matches.
3. Install Red Hat Cluster Suite
First download the Cluster Suite ISO (for AS3 I found a download link on ChinaUnix.net) and install clumanager and redhat-config-cluster. If you do not have the Cluster Suite ISO, that is fine too: download clumanager-1.2.xx.src.rpm and redhat-config-cluster-1.0.x.src.rpm, build and install them, which is arguably better:
#rpm -Uvh clumanager-1.2.26.1-1.src.rpm
#rpmbuild -bs /usr/src/redhat/SPECS/clumanager.spec
#rpmbuild --rebuild --target i686 /usr/src/redhat/SRPMS/clumanager-1.2.26.1-1.src.rpm
Build and install redhat-config-cluster-1.0.x.src.rpm the same way.
4. Setup Cluster as HA module
I will not repeat the detailed steps: there are plenty of articles online, and that is how I learned it too. The difference is that those articles use VMware, while I use real machines plus iSCSI. The raw devices are /dev/sdb and /dev/sdc; then make an ext3 filesystem on /dev/sdd, mount it on /u01, and so on. Once everything is set up you will hit the iSCSI problem: only one client at a time can write to the disk. If two clients connect to the iSCSI shared storage simultaneously and one writes, the other cannot see the change, and by then the filesystem is already corrupted; when a client reconnects it finds the files damaged, and fsck cannot repair them.
So is iSCSI just a dead end?
No! From Google I finally learned that iSCSI can only provide true shared storage together with a cluster file system, and GFS, which Red Hat acquired, is exactly that!
5. Setup GFS on ISCSI
GFS ships only with Fedora Core 4, yet it depends on the /etc/cluster.xml file generated by Cluster Suite, and I have not seen Cluster Suite in FC4, so I do not know why Red Hat bundles GFS with FC4 at all. Anyway: download GFS-6.0.2.20-2.src.rpm from link c and build and install it following gfs.txt from link a. That document says nothing about configuring cluster.ccs, fence.ccs and nodes.ccs; after reading the document at link b I worked them out. I keep them under /root/cluster (any directory will do). I cannot promise they are entirely correct, since I have no fibre-channel HBA and the documents give no iSCSI example, but GFS does start.
#cat cluster.ccs
cluster {
    name = "Cluster_1"
    lock_gulm {
        servers = ["cluster1", "cluster2"]
        heartbeat_rate = 0.9
        allowed_misses = 10
    }
}
Note: name is the cluster name configured in Cluster Suite, and servers lists the hostnames of the cluster members; do not forget to add them to /etc/hosts. I originally set allowed_misses to 1 and GFS would die after about two days; since changing it to 10 it has never died.
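A minimal /etc/hosts sketch for both nodes, assuming (the article does not show its own hosts file) that the hostnames cluster1 and cluster2 resolve to the heartbeat addresses used in nodes.ccs:
#cat /etc/hosts
127.0.0.1    localhost.localdomain localhost
# heartbeat link (direct cable between the real servers)
192.168.0.1  cluster1
192.168.0.2  cluster2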
#cat fence.ccs
fence_devices {
    admin {
        agent = "fence_manual"
    }
}
#cat nodes.ccs
nodes {
    cluster1 {
        ip_interfaces {
            hsi0 = "192.168.0.1"
        }
        fence {
            human {
                admin {
                    ipaddr = "192.168.0.1"
                }
            }
        }
    }
    cluster2 {
        ip_interfaces {
            hsi0 = "192.168.0.2"
        }
        fence {
            human {
                admin {
                    ipaddr = "192.168.0.2"
                }
            }
        }
    }
}
Note: the IPs are those of the heartbeat link.
Create these three files under /root/cluster, then build the Cluster Configuration System archive:
a.#vi /etc/gfs/pool0.cfg
poolname pool0
minor 1 subpools 1
subpool 0 8 1 gfs_data
pooldevice 0 0 /dev/sde1
b.#pool_assemble -a pool0
c.#ccs_tool create /root/cluster /dev/pool/pool0
d.#vi /etc/sysconfig/gfs
CCS_ARCHIVE="/dev/pool/pool0"

Next, create the pool volume that will be our shared disk:
a.#vi /etc/gfs/pool1.cfg
poolname pool1
minor 2 subpools 1
subpool 0 128 1 gfs_data
pooldevice 0 0 /dev/sdd1
b.#pool_assemble -a pool1
c.#gfs_mkfs -p lock_gulm -t Cluster_1:gfs1 -j 8 /dev/pool/pool1
d.#mount -t gfs -o noatime /dev/pool/pool1 /u01
Below is a GFS startup script. Note that real1 and real2 must start lock_gulmd at the same time: the first lock_gulmd becomes the server and waits for the client's lock_gulmd; if there is no response within a few tens of seconds it fails and GFS will not start. Red Hat recommends not putting GFS volumes into /etc/fstab.
#cat gfstart.sh
#!/bin/sh
depmod -a
modprobe pool
modprobe lock_gulm
modprobe gfs
sleep 5
service iscsi start
sleep 20
service rawdevices restart
pool_assemble -a pool0
pool_assemble -a pool1
service ccsd start
service lock_gulmd start
mount -t gfs /dev/pool/pool1 /u01 -o noatime
service gfs status
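For a clean shutdown, roughly the reverse order works. A minimal sketch under the same assumptions (in GFS 6.0 pool_assemble -r deactivates a pool):
#cat gfstop.sh
#!/bin/sh
# unmount GFS first, then stop the lock manager and CCS,
# deactivate the pools and finally stop iSCSI
umount /u01
service lock_gulmd stop
service ccsd stop
pool_assemble -r pool1
pool_assemble -r pool0
service iscsi stop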
6. Setup Linux LVS
LVS is an excellent clustering solution initiated and led by Dr. Wensong Zhang; many commercial cluster products, such as Red Hat's Piranha and Turbolinux's Turbo Cluster, are based on the LVS core code. My system is Red Hat AS3U4, so I use Piranha. Install piranha-0.7.10-2.i386.rpm and ipvsadm-1.21-9.ipvs108.i386.rpm from rhel-3-u5-rhcs-i386.iso (). After installation run service httpd start and service piranha-gui start; you can then manage and configure LVS from the web GUI, or of course edit /etc/sysconfig/ha/lvs.cf by hand, which amounts to the same thing.


#cat /etc/sysconfig/ha/lvs.cf
serial_no = 80
primary = 10.3.1.101
service = lvs
rsh_command = ssh
backup_active = 0
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 1050
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.1.1 eth1:1
nat_nmask = 255.255.255.0
reservation_conflict_action = preempt
debug_level = NONE
virtual lvs1 {
active = 1
address = 10.3.1.254 eth0:1
vip_nmask = 255.255.255.0
fwmark = 100
port = 80
persistent = 60
pmask = 255.255.255.255
send = "GET / HTTP/1.0rnrn"
expect = "HTTP"
load_monitor = ruptime
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 1
server Real1 {
address = 192.168.1.68
active = 1
weight = 1
}
server Real2 {
address = 192.168.1.67
active = 1
weight = 1
}
}
virtual lvs2 {
active = 1
address = 10.3.1.254 eth0:1
vip_nmask = 255.255.255.0
port = 21
send = "n"
use_regex = 0
load_monitor = ruptime
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 0
server ftp1 {
address = 192.168.1.68
active = 1
weight = 1
}
server ftp2 {
address = 192.168.1.67
active = 1
weight = 1
}
}
After configuring, run service pulse start; do not forget to add the relevant clients to /etc/hosts.
#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 80 -j MARK --set-mark 100
#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 443 -j MARK --set-mark 100
#iptables -A POSTROUTING -t nat -p tcp -s 10.3.1.0/24 --sport 20 -j MASQUERADE
Run the three commands above and add them to /etc/rc.d/rc.local, then check the state with ipvsadm:
#ipvsadm
IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.3.1.254:ftp wlc
-> cluster2:ftp Masq 1 0 0
-> cluster1:ftp Masq 1 0 0
FWM 100 wlc persistent 60
-> cluster1:0 Masq 1 0 0
-> cluster2:0 Masq 1 0 0
Notes:
a. The firewall mark is optional, but I added one anyway; the documentation says to use it if you also serve https. I chose 100 as the value.
b. Do not add the virtual IP to /etc/hosts. I made that mistake and port 80 kept appearing and disappearing.
c. eth0:1 and eth1:1 are created by Piranha; do not configure them by hand. I once did exactly that superfluous step. Some forum posts are vague about this, and I only sorted it out from the Red Hat documentation.
d. "The LVS router can monitor the load on the various real servers by using either rup or ruptime. If you select rup from the drop-down menu, each real server must run the rstatd service. If you select ruptime, each real server must run the rwhod service." That is Red Hat's own wording: with rup monitoring every real server must run rstatd, with ruptime every real server must run rwhod.
e. On each real server, the gateway of the NIC connected to the router must be the VIP of the router's NIC on that side. In this example the router's eth1 connects to the real servers' eth0; since eth1:1 = 192.168.1.1, the real servers' eth0 gateway must be 192.168.1.1.
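On Red Hat AS3 that simply means setting GATEWAY in the interface config. A sketch for Real1, using the standard network-scripts layout (the original does not show this file):
#cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.68
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
Then apply it with service network restart.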
7. Setup Tomcat 5.5.9 + JDK 1.5 (using the Apache bundled with Red Hat)
a.#tar xzvf jakarta-tomcat-5.5.9.tar.gz
#mv jakarta-tomcat-5.5.9 /usr/local
#ln -s /usr/local/jakarta-tomcat-5.5.9 /usr/local/tomcat
b.#sh jdk-1_5_0_04-linux-i586.bin
#mv jdk1.5.0_04 /usr/java
#ln -s /usr/java/jdk1.5.0_04 /usr/java/jdk
c.#vi /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export TOMCAT_HOME=/usr/local/tomcat
d.#vi /etc/profile.d/jdk.sh
if ! echo ${PATH} | grep "/usr/java/jdk/bin" ; then
JAVA_HOME=/usr/java/jdk
export JAVA_HOME
export PATH=/usr/java/jdk/bin:${PATH}
export CLASSPATH=$JAVA_HOME/lib
fi
e.#chmod 755 /etc/profile.d/*.sh
f.Log in again as root so that tomcat.sh and jdk.sh take effect, then:
#tar xzvf jakarta-tomcat-connectors-jk2-src-current.tar.gz
#cd jakarta-tomcat-connectors-jk2-2.0.4-src/jk/native2/
#./configure --with-apxs2=/usr/sbin/apxs --with-jni --with-apr-lib=/usr/lib
#make
#libtool --finish /usr/lib/httpd/modules
#cp ../build/jk2/apache2/mod_jk2.so ../build/jk2/apache2/libjkjni.so /usr/lib/httpd/modules/
g.#vi /usr/local/tomcat/bin/catalina.sh
Add the following two lines after the comment "# Only set CATALINA_HOME if not already set":
serverRoot=/etc/httpd
export serverRoot
h.#vi /usr/local/tomcat/conf/jk2.properties
serverRoot=/etc/httpd
apr.NativeSo=/usr/lib/httpd/modules/libjkjni.so
apr.jniModeSo=/usr/lib/httpd/modules/mod_jk2.so
i.#vi /usr/local/tomcat/conf/server.xml
Add the following lines, which define two virtual paths (Contexts), myjsp and local: one points to the shared storage, the other to the real server's local directory.

prefix="cluster.log." suffix=".txt" timestamp="true" />
prefix="cluster_access.log." suffix=".txt" pattern="common" resolveHosts="false" />

j.#vi /etc/httpd/conf/workers2.properties
#[logger.apache2]
#level=DEBUG
[shm]
file=/var/log/httpd/shm.file
size=1048576
[channel.socket:localhost:8009]
tomcatId=localhost:8009
keepalive=1
info=Ajp13 forwarding over socket
[ajp13:localhost:8009]
channel=channel.socket:localhost:8009
[status:status]
info=Status worker, displays runtime informations
[uri:/*.jsp]
worker=ajp13:localhost:8009
context=/
k.#vi /etc/httpd/conf/httpd.conf
Change: DocumentRoot "/u01/www"
Add:
After the last LoadModule line, add:
LoadModule jk2_module modules/mod_jk2.so
JkSet config.file /etc/httpd/conf/workers2.properties
Before the commented-out section, add:

Order allow,deny
Deny from all
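The container tags around these two directives were stripped by the blog; presumably they formed a Location block denying direct access to WEB-INF, along the lines of:
<Location "/WEB-INF/">
    Order allow,deny
    Deny from all
</Location>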

l:#mkdir /u01/ftproot
#mkdir /u01/www
#mkdir /u01/www/myjsp
m: Create an index.jsp on each real server
#vi /var/www/html/index.jsp


On real server 2 the text reads "test page on real server 2".
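The page body itself was lost in extraction; a minimal stand-in that matches the text quoted above would be (on real server 1):
<html>
<body>
test page on real server 1
</body>
</html>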
n: Download the Oracle JDBC driver

Unfortunately only the JDK 1.4 build is available. On each of the two real servers:
#cp -R /usr/local/tomcat/webapps/webdav/WEB-INF /u01/www/myjsp
#cp ojdbc14.jar ojdbc14_g.jar ocrs12.zip /u01/www/myjsp/WEB-INF/lib
o: Suppose there is an Oracle server with ip=10.3.1.211, sid=MYID, username=my, password=1234, and read access to Oracle's sample employees table (or simply copy that table over; mine is from Oracle 9i).
#vi /u01/www/myjsp/testoracle.jsp
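The JSP source was lost in extraction. A minimal sketch against the connection details above; the listener port 1521, the ROWNUM limit and the column printed are my own choices, not the author's:
<%@ page contentType="text/html" import="java.sql.*" %>
<html>
<body>
<%
    try {
        // thin driver shipped in ojdbc14.jar
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@10.3.1.211:1521:MYID", "my", "1234");
        Statement stmt = conn.createStatement();
        // read a few rows from the sample employees table
        ResultSet rs = stmt.executeQuery(
                "SELECT * FROM employees WHERE ROWNUM <= 10");
        while (rs.next()) {
            out.println(rs.getString(1) + "<br>");
        }
        rs.close();
        stmt.close();
        conn.close();
        out.println("Oracle query OK");
    } catch (Exception e) {
        out.println("Oracle test failed: " + e);
    }
%>
</body>
</html>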

p:#vi /u01/www/index.html

WEB'> Local

Test'> Oracle WEB
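Only fragments of the two links survived; a plausible reconstruction, with the hrefs being my guesses based on the virtual paths defined earlier, is:
<html>
<body>
<a href="local/index.jsp">WEB Local</a>
<br>
<a href="myjsp/testoracle.jsp">Test Oracle WEB</a>
</body>
</html>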

q: On each of the two real servers:
#vi /usr/local/tomcat/conf/tomcat-users.xml
Add the following line to enable the web manager:
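The line itself was stripped; given the manager/tomcat login used in step s, it was presumably something like:
<user username="manager" password="tomcat" roles="manager"/>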

r: On each of the two real servers:
#service httpd restart
#/usr/local/tomcat/bin/startup.sh
s: Open http://192.168.1.68:8080 and http://192.168.1.67:8080, choose Tomcat Manager and log in as manager/tomcat; the virtual directories /myjsp and /local should show as started. Then open the page on each machine and choose WEB Local: one machine shows "test page on real server 1" and the other "test page on real server 2", while ipvsadm on the router shows the connection count for each real server.
8. Set up the FTP service
#vi /etc/vsftpd/vsftpd.conf and add the following lines on each of the two real servers:
anon_root=/u01/ftproot
local_root=/u01/ftproot
setproctitle_enable=YES
#service vsftpd start
The LVS+GFS+iSCSI+Tomcat setup is now complete. We can use Apache JMeter to test LVS performance: run jmeter on two machines, both pointed at 10.3.1.254/myjsp/testoracle.jsp, with 200 threads each running concurrently, and monitor on the router with ipvsadm. The Oracle server has to perform well, otherwise large numbers of http processes hang on the real servers and ipvsadm shows a real server being dropped. During the test the real servers' CPU idle drops to about 70%, while the router's CPU idle barely moves.

From the ITPUB blog: http://blog.itpub.net/312079/viewspace-245783/. Please credit the source when reposting.
