Installing and Configuring a ZooKeeper Cluster on CentOS
This article describes how to set up a ZooKeeper environment on a four-node cluster. ZooKeeper can run in three modes: standalone mode, pseudo-cluster mode, and cluster mode; this article uses cluster mode.
Installation Environment
- Virtual machine: VMware Workstation 12 Player
- Linux version: CentOS release 6.4 (Final)
- ZooKeeper version: zookeeper-3.4.5-cdh5.7.6.tar.gz
- Cluster nodes:
- master:192.168.137.11
- slave1:192.168.137.12
- slave2:192.168.137.13
- slave3:192.168.137.14
- Prerequisites: Java is installed, passwordless SSH login is configured, the firewall is stopped, and so on (a quick check is sketched right after this list).
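A minimal sketch, not part of the original steps, to double-check these prerequisites from the master node; the hostnames are the ones listed above and service iptables applies to CentOS 6:
for h in master slave1 slave2 slave3; do
  echo "== $h =="
  ssh "$h" 'java -version 2>&1 | head -1; service iptables status | head -1'
done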
Upload the Installation Package
Upload the downloaded zookeeper-3.4.5-cdh5.7.6.tar.gz package to a directory on the CentOS machine, for example /opt. There are many ways to upload it; here the rz command in SecureCRT is used.
Extract the package:
tar -zxf zookeeper-3.4.5-cdh5.7.6.tar.gz
Rename the directory:
mv zookeeper-3.4.5-cdh5.7.6 zookeeper
Modify the Configuration File
The sample configuration file zoo_sample.cfg is in the conf directory under the installation directory; first copy it and rename the copy:
[root@master conf]# pwd
/opt/zookeeper/conf
[root@master conf]# cp zoo_sample.cfg zoo.cfg
[root@master conf]# ll
total 16
-rw-rw-r--. 1 root root 535 Feb 22 2017 configuration.xsl
-rw-rw-r--. 1 root root 2693 Feb 22 2017 log4j.properties
-rw-r--r--. 1 root root 808 Jan 23 10:06 zoo.cfg
-rw-rw-r--. 1 root root 808 Feb 22 2017 zoo_sample.cfg
Edit the zoo.cfg configuration file:
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/tmp
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/opt/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
server.4=slave3:2888:3888
Parameter descriptions:
- tickTime: the basic time unit used by ZooKeeper, in milliseconds.
- dataDir: the data directory. It can be any directory.
- dataLogDir: the transaction log directory, which can also be any directory. If it is not set, the same value as dataDir is used.
- clientPort: the port on which client connections are accepted.
- initLimit: a ZooKeeper cluster contains multiple servers, one of which is the leader while the rest are followers. initLimit limits the time allowed for the initial connection (heartbeats) between a follower and the leader. It is set to 10 here, so the limit is 10 ticks, i.e. 10*2000 = 20000 ms = 20 s.
- syncLimit: the maximum time allowed for a request and its acknowledgement between the leader and a follower. It is set to 5 here, so the limit is 5 ticks, i.e. 5*2000 = 10000 ms = 10 s.
- server.X=A:B:C: X is a number identifying the server. A is the server's hostname or IP address. B is the port the server uses to exchange messages with the cluster leader. C is the port used for leader election. (See the quorum-size note below.)
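A side note, not from the original article: with majority quorums the ensemble needs floor(N/2)+1 votes, so a 4-server cluster still tolerates only one failure, exactly like a 3-server cluster; this is why the logs shown later warn about using an even number of servers. A quick way to read the quorum size off zoo.cfg:
# Count the server.N lines and print the majority quorum size.
n=$(grep -c '^server\.' /opt/zookeeper/conf/zoo.cfg)
echo "servers: $n, quorum: $((n / 2 + 1))"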
Since dataDir (and dataLogDir) were changed, create the corresponding directories under the zookeeper installation directory; the data directory will also hold the myid file created later:
mkdir /opt/zookeeper/tmp
mkdir /opt/zookeeper/logs
Copy the Installation to the Other Nodes
Copy the zookeeper directory to the other three servers:
scp -r /opt/zookeeper/ root@slave1:/opt
scp -r /opt/zookeeper/ root@slave2:/opt
scp -r /opt/zookeeper/ root@slave3:/opt
On the master node, use the following commands to create a myid file on every node; the id in each file must match the corresponding server.X entry in zoo.cfg (a quick verification loop follows the commands):
[root@master zookeeper]# echo 1 > /opt/zookeeper/tmp/myid
[root@master zookeeper]# ssh slave1 "echo 2 > /opt/zookeeper/tmp/myid"
[root@master zookeeper]# ssh slave2 "echo 3 > /opt/zookeeper/tmp/myid"
[root@master zookeeper]# ssh slave3 "echo 4 > /opt/zookeeper/tmp/myid"
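As an optional sanity check of my own, the myid value on every node can be listed in one loop and compared against zoo.cfg:
for h in master slave1 slave2 slave3; do
  echo -n "$h: "; ssh "$h" 'cat /opt/zookeeper/tmp/myid'
done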
Start ZooKeeper
Since no environment variables were configured, the full path is needed to run the command:
[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
The intent of setting dataLogDir in the configuration file was to have the startup log written to that directory, but that does not seem to happen: the zookeeper.out log file is still created in the ZooKeeper installation directory.
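This is because dataLogDir only controls where ZooKeeper writes its transaction logs; the location of zookeeper.out is determined by the ZOO_LOG_DIR environment variable read by zkEnv.sh, which defaults to the current directory. A minimal sketch, assuming we also want zookeeper.out under /opt/zookeeper/logs (exporting the variable in each node's shell profile would make it permanent):
export ZOO_LOG_DIR=/opt/zookeeper/logs   # picked up by zkServer.sh via zkEnv.sh
/opt/zookeeper/bin/zkServer.sh restart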
Looking at the zookeeper.out file reveals an error:
2018-01-23 10:48:35,470 [myid:] - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
2018-01-23 10:48:35,484 [myid:] - WARN [main:QuorumPeerConfig@290] - Non-optimial configuration, consider an odd number of servers.
2018-01-23 10:48:35,484 [myid:] - INFO [main:QuorumPeerConfig@334] - Defaulting to majority quorums
2018-01-23 10:48:35,512 [myid:4] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2018-01-23 10:48:35,513 [myid:4] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2018-01-23 10:48:35,513 [myid:4] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2018-01-23 10:48:35,536 [myid:4] - INFO [main:QuorumPeerMain@132] - Starting quorum peer
2018-01-23 10:48:35,587 [myid:4] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2018-01-23 10:48:35,611 [myid:4] - INFO [main:QuorumPeer@913] - tickTime set to 2000
2018-01-23 10:48:35,612 [myid:4] - INFO [main:QuorumPeer@933] - minSessionTimeout set to -1
2018-01-23 10:48:35,612 [myid:4] - INFO [main:QuorumPeer@944] - maxSessionTimeout set to -1
2018-01-23 10:48:35,612 [myid:4] - INFO [main:QuorumPeer@959] - initLimit set to 10
2018-01-23 10:48:35,639 [myid:4] - INFO [main:QuorumPeer@429] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2018-01-23 10:48:35,643 [myid:4] - INFO [main:QuorumPeer@444] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2018-01-23 10:48:35,652 [myid:4] - INFO [Thread-1:QuorumCnxManager$Listener@486] - My election bind port: 0.0.0.0/0.0.0.0:3888
2018-01-23 10:48:35,674 [myid:4] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:2181:QuorumPeer@670] - LOOKING
2018-01-23 10:48:35,679 [myid:4] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@740] - New election. My id = 4, proposed zxid=0x0
2018-01-23 10:48:35,692 [myid:4] - INFO [slave3/192.168.137.14:3888:QuorumCnxManager$Listener@493] - Received connection request /192.168.137.11:34491
2018-01-23 10:48:35,704 [myid:4] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@542] - Notification: 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2018-01-23 10:48:35,706 [myid:4] - WARN [WorkerSender[myid=4]:QuorumCnxManager@368] - Cannot open channel to 2 at election address slave1/192.168.137.12:3888
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:327)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
at java.lang.Thread.run(Thread.java:748)
The exception reports Connection refused. There is no need to search for this problem right away: ZooKeeper has to be started on all nodes before the state is meaningful. At this point the status check reports that the service is not running, and no corresponding process can be found:
[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
Start ZooKeeper on all the other nodes as well; after a short while, check the status on each server:
[root@master zookeeper]# /opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@master zookeeper]# jps
5488 QuorumPeerMain
5539 Jps
If the output shows a Mode and a QuorumPeerMain process, the service has started successfully (an extra cluster-wide check is sketched below).
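As an additional check, not in the original steps, each node's role can also be queried with ZooKeeper's built-in stat four-letter command, assuming nc is available:
for h in master slave1 slave2 slave3; do
  echo -n "$h: "; echo stat | nc "$h" 2181 | grep Mode
done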
To shut ZooKeeper down, run the following on every node:
/opt/zookeeper/bin/zkServer.sh stop
In addition, starting with the following command prints log messages to the console during startup:
/opt/zookeeper/bin/zkServer.sh start-foreground
Batch Start and Stop
Running the commands one server at a time is a bit tedious, so write a script to run them in batch:
#!/bin/bash
# Change the variable below to the ZooKeeper installation directory
zooHome=/opt/zookeeper
if [ -n "$1" ]
then
    confFile=$zooHome/conf/zoo.cfg
    # Extract the host part of every server.N=host:2888:3888 line in zoo.cfg
    slaves=$(sed '/^server/!d;s/^.*=//;s/:.*$//g;/^$/d' "$confFile")
    for slave in $slaves ; do
        ssh "$slave" "$zooHome/bin/zkServer.sh $1"
    done
else
    echo "parameter empty! parameter:start|stop"
fi
Save the script above as a file named zooManager and run it with:
sh zooManager start
sh zooManager stop
[root@master opt]# sh zooManager start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
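A possible refinement, sketched under the assumption that every node uses the same /opt/zookeeper layout: validate the sub-command and run the ssh calls in parallel so a slow node does not delay the rest:
#!/bin/bash
# Hypothetical parallel variant of the zooManager script above.
zooHome=/opt/zookeeper
case "$1" in
  start|stop|status|restart) ;;
  *) echo "usage: $0 start|stop|status|restart"; exit 1 ;;
esac
slaves=$(sed '/^server/!d;s/^.*=//;s/:.*$//g;/^$/d' "$zooHome/conf/zoo.cfg")
for slave in $slaves; do
  ssh "$slave" "$zooHome/bin/zkServer.sh $1" &   # one background ssh per node
done
wait   # wait for every node to finish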
Since every server node is operated as the root user, permissions were not a concern here; in a real deployment they need to be considered.