Kafka Cluster Installation (kafka_2.10-0.10.1.0)
This guide installs a Kafka 0.10.1.0 cluster from the kafka_2.10-0.10.1.0.tgz package (the "2.10" in the name is the Scala build version).
1. Installation Plan
The Kafka cluster will be installed on the following three machines:

IP | Hostname
---|---
10.43.159.237 | zdh-237
10.43.159.238 | zdh-238
10.43.159.239 | zdh-239
A ZooKeeper ensemble is already installed:
Hosts: zdh-237, zdh-238, zdh-239
Port: 2181
User: garrison/zdh1234
2. Installation User
Create the kafka user (password zdh1234) in the hadoop group:
useradd -g hadoop -s /bin/bash -md /home/kafka kafka
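The command above assumes the hadoop group already exists and that you are running as root. A hedged sketch that creates the group if needed, skips the user if it is already present, and verifies the result:

```shell
# Assumptions: run as root; group/user names as used in this guide.
getent group hadoop >/dev/null || groupadd hadoop
id kafka >/dev/null 2>&1 || useradd -g hadoop -s /bin/bash -md /home/kafka kafka
id kafka   # should report group hadoop and home /home/kafka
```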
3. Upload and Unpack the Package
Upload the package:
ftp kafka_2.10-0.10.1.0.tgz
Unpack it:
tar -zxvf kafka_2.10-0.10.1.0.tgz
Set environment variables and aliases for convenience:
export KAFKA_HOME=/home/kafka/kafka_2.10-0.10.1.0
export PATH=$PATH:$KAFKA_HOME/bin
alias conf='cd $KAFKA_HOME/config'
alias logs='cd $KAFKA_HOME/logs'
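To make these settings permanent for the kafka user, they can be appended to ~/.bashrc (a minimal sketch; paths are the install location used in this guide):

```shell
# Append the Kafka environment to ~/.bashrc so new shells pick it up.
cat >> ~/.bashrc <<'EOF'
export KAFKA_HOME=/home/kafka/kafka_2.10-0.10.1.0
export PATH=$PATH:$KAFKA_HOME/bin
alias conf='cd $KAFKA_HOME/config'
alias logs='cd $KAFKA_HOME/logs'
EOF
. ~/.bashrc   # reload for the current shell
```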
Create the local data directory:
mkdir /home/kafka/kafka_2.10-0.10.1.0/kafka-logs
4. Edit the server.properties Configuration File
On zdh-237, set:
broker.id=1
log.dirs=/home/kafka/kafka_2.10-0.10.1.0/kafka-logs
zookeeper.connect=zdh-237:2181,zdh-238:2181,zdh-239:2181
The following items can keep their default values:
port=9092
host.name=zdh-237
advertised.host.name=
num.partitions=2
log.retention.hours=168
Parameter notes (from the comments in server.properties):
broker.id: the id of the broker; must be a unique integer for each broker. Here it is set to match each host's ZooKeeper id.
port: the port the socket server listens on; the broker receives producer messages here.
host.name: the hostname the broker binds to; if not set, the server binds to all interfaces.
advertised.host.name: the hostname the broker advertises to producers and consumers; if not set, it falls back to host.name if configured, otherwise to the value returned by java.net.InetAddress.getCanonicalHostName().
log.dirs: a comma-separated list of directories under which Kafka stores its message (log) files.
num.partitions: the default number of log partitions per topic; more partitions allow greater consumption parallelism, but also mean more files across the brokers.
log.retention.hours: the minimum age of a log file before it is eligible for deletion; 168 hours keeps 7 days of data.
zookeeper.connect: the ZooKeeper connection string, a comma-separated list of host:port pairs, one per ZooKeeper server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". An optional chroot suffix can be appended to specify the root directory for all Kafka znodes.
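Putting the settings above together, the broker configuration for zdh-237 would look like this (a sketch containing only the values discussed in this guide; everything else stays at its default):

```properties
broker.id=1
port=9092
host.name=zdh-237
log.dirs=/home/kafka/kafka_2.10-0.10.1.0/kafka-logs
num.partitions=2
log.retention.hours=168
zookeeper.connect=zdh-237:2181,zdh-238:2181,zdh-239:2181
```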
5. Copy Kafka to the Other Machines
Copy the kafka directory from zdh-237 to each of the other machines, for example:
scp -r kafka_2.10-0.10.1.0 kafka@zdh-238:/home/kafka
Then edit server.properties on each target machine:
change broker.id to a different unique integer,
change host.name to that machine's hostname.
For example, on zdh-238:
broker.id=2
host.name=zdh-238
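The per-host edits can be scripted instead of done by hand. A minimal sketch; fix_props is a hypothetical helper name, and the host list and broker-id mapping are the ones used in this guide:

```shell
# fix_props FILE ID HOST: rewrite broker.id and host.name in place
# in a server.properties-style file.
fix_props() {
  sed -i "s/^broker\.id=.*/broker.id=$2/; s/^host\.name=.*/host.name=$3/" "$1"
}

# Usage against the remote copies (assumed reachable over ssh as user kafka):
#   i=2
#   for h in zdh-238 zdh-239; do
#     scp -r kafka_2.10-0.10.1.0 kafka@$h:/home/kafka
#     ssh kafka@$h "$(typeset -f fix_props); \
#       fix_props /home/kafka/kafka_2.10-0.10.1.0/config/server.properties $i $h"
#     i=$((i+1))
#   done
```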
6. Start Kafka
Start the broker on each machine, redirecting its output to a log file (nohup keeps it running after you log out):
nohup /home/kafka/kafka_2.10-0.10.1.0/bin/kafka-server-start.sh /home/kafka/kafka_2.10-0.10.1.0/config/server.properties &>/home/kafka/kafka_2.10-0.10.1.0/catalina.log &
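Before running the smoke tests below, it helps to confirm the broker port (9092 in this guide) is actually accepting connections. A small bash helper, a generic sketch not tied to Kafka itself:

```shell
# wait_for_port HOST PORT [TRIES]: poll a TCP port once per second until it
# accepts a connection or TRIES attempts are exhausted (uses bash's /dev/tcp).
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  while [ "$tries" -gt 0 ]; do
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null && return 0
    tries=$((tries - 1)); sleep 1
  done
  return 1
}
# e.g. wait_for_port zdh-237 9092 && echo "broker is up"
```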
7. Verify the Installation
7.1
Create a topic named testTopic from zdh-237.
There are three Kafka brokers, so the replication factor can be set to 3:
kafka-topics.sh --create --topic testTopic --replication-factor 3 --partitions 2 --zookeeper zdh-237:2181
7.2
List topics from zdh-237 to confirm testTopic was created:
kafka-topics.sh --list --zookeeper zdh-237:2181
7.3
On zdh-238, start a console producer and send a "hello world" message to Kafka:
kafka-console-producer.sh --broker-list zdh-237:9092 --sync --topic testTopic
Then type hello world.
7.4
On zdh-239, start a console consumer; the "hello world" message sent above should appear:
kafka-console-consumer.sh --zookeeper zdh-237:2181 --topic testTopic --from-beginning
7.5
Delete a topic. As a test, create a topic named idoall, then delete it:
kafka-topics.sh --create --topic idoall --replication-factor 3 --partitions 2 --zookeeper zdh-237:2181
kafka-topics.sh --delete --topic idoall --zookeeper zdh-237:2181
Note that delete.topic.enable must be true for the topic to actually be deleted; otherwise it is only marked for deletion:
#Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
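Flipping that flag can be scripted the same way on every broker. A minimal sketch; enable_topic_delete is a hypothetical helper name:

```shell
# enable_topic_delete FILE: force delete.topic.enable=true in a
# server.properties-style file, appending the line if it is absent.
enable_topic_delete() {
  if grep -q '^delete\.topic\.enable=' "$1"; then
    sed -i 's/^delete\.topic\.enable=.*/delete.topic.enable=true/' "$1"
  else
    echo 'delete.topic.enable=true' >> "$1"
  fi
}
# e.g. enable_topic_delete $KAFKA_HOME/config/server.properties
```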
8. Miscellaneous
8.1. Debugging Kafka
Set the following variables in kafka-run-class.sh (or export them before launching); with KAFKA_DEBUG set, the script enables JDWP remote debugging (listening on port 5005 by default), and DEBUG_SUSPEND_FLAG="y" makes the JVM wait for a debugger to attach:
KAFKA_DEBUG=true
DEBUG_SUSPEND_FLAG="y"
8.2. Clearing Kafka Data
Run the following in the install directory on each broker (this clears only the local message data; topic metadata stored in ZooKeeper is unaffected):
rm -rf kafka-logs
mkdir kafka-logs