Kafka 2.11 Cluster Setup
Producer: message producer, the client that sends messages to the Kafka broker
Consumer: message consumer, the client that fetches messages from the Kafka broker
Topic: a category under which messages are published to the Kafka cluster
Broker: a single Kafka server is a broker; a cluster consists of multiple brokers, and one broker can host multiple topics
1. Download and install ZooKeeper (ZooKeeper and the JDK must be installed before Kafka)
[root@node1 ~]# wget
[root@node1 ~]# tar xvf zookeeper-3.4.13.tar.gz -C /opt/
[root@node1 ~]# cd /opt/zookeeper-3.4.13/conf/
[root@node1 conf]# vim zoo.cfg
tickTime=2000
dataDir=/opt/zookeeper-3.4.13/data
clientPort=2181
initLimit=5
syncLimit=2
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
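For reference, the same zoo.cfg with each parameter's standard ZooKeeper meaning noted as comments (the commentary is added here, it is not in the original file):
tickTime=2000                        # base time unit in ms; heartbeats and timeouts are multiples of it
dataDir=/opt/zookeeper-3.4.13/data   # holds snapshots and the myid file
clientPort=2181                      # port clients (including Kafka) connect to
initLimit=5                          # ticks a follower may take to connect and sync with the leader
syncLimit=2                          # ticks a follower may fall behind the leader before being dropped
server.1=node1:2888:3888             # server.<myid>=<host>:<quorum port>:<leader-election port>
server.2=node2:2888:3888
server.3=node3:2888:3888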
[root@node1 conf]# mkdir /opt/zookeeper-3.4.13/data
[root@node1 conf]# cd /opt/zookeeper-3.4.13/data    --the myid file must live in the data directory, otherwise ZooKeeper fails to start
[root@node1 data]# echo 1 > myid
[root@node1 data]# cat myid
1
[root@node1 zookeeper-3.4.13]# cd ..
[root@node1 opt]# scp -r zookeeper-3.4.13 node2:/opt/
[root@node1 opt]# scp -r zookeeper-3.4.13 node3:/opt/
2. Modify the myid file on node2
[root@node2 opt]# echo 2 > /opt/zookeeper-3.4.13/data/myid
[root@node2 opt]# cat /opt/zookeeper-3.4.13/data/myid
2
[root@node2 opt]#
3. Modify the myid file on node3
[root@node3 ~]# echo 3 > /opt/zookeeper-3.4.13/data/myid
[root@node3 ~]# cat /opt/zookeeper-3.4.13/data/myid
3
[root@node3 ~]# zkServer.sh start    --the ZooKeeper service must be started on every node
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node3 opt]# zkCli.sh    --connect with the ZooKeeper command-line client
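Before installing Kafka it is worth confirming that the ensemble actually formed. Running zkServer.sh status on each of the three nodes should report Mode: leader on exactly one node and Mode: follower on the other two:
[root@node1 ~]# zkServer.sh status    --run on node1, node2 and node3; expect one leader and two followers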
4. Download and install Kafka (same steps on all three nodes)
[root@node1 ~]# wget
[root@node1 ~]# tar xvf kafka_2.11-2.2.0.tgz -C /opt/
[root@node1 ~]# cd /opt/kafka_2.11-2.2.0/
[root@node1 kafka_2.11-2.2.0]# cd config/
[root@node1 config]# vim server.properties
broker.id=0    --each broker must have a different id
zookeeper.connect=172.16.8.23:2181,172.16.8.24:2181,172.16.8.178:2181    --addresses of the ZooKeeper cluster
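Only broker.id and zookeeper.connect are changed above. On a multi-broker cluster the listener address and data directory are usually set as well; the lines below are an illustrative sketch for node1 (the listener host, log path and id assignment for node2/node3 are assumptions, not taken from the original configuration):
broker.id=0                              # must be unique: e.g. 0 on node1, 1 on node2, 2 on node3
listeners=PLAINTEXT://node1:9092         # address this broker listens on; use node2/node3 on the other hosts
log.dirs=/opt/kafka_2.11-2.2.0/logs      # where partition data is stored (Kafka's default is /tmp/kafka-logs)
zookeeper.connect=172.16.8.23:2181,172.16.8.24:2181,172.16.8.178:2181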
[root@node1 config]# cd /opt/
[root@node1 opt]# scp -r kafka_2.11-2.2.0/ node2:/opt/
[root@node1 opt]# scp -r kafka_2.11-2.2.0/ node3:/opt/
[root@node1 opt]# cd kafka_2.11-2.2.0/bin/
[root@node1 bin]# ./kafka-server-start.sh ../config/server.properties &    --the Kafka service must be started in the background on all three nodes
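Instead of backgrounding with &, the startup script also accepts a -daemon flag that starts the broker detached from the terminal; either form has to be run on all three nodes:
[root@node1 bin]# ./kafka-server-start.sh -daemon ../config/server.properties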
5. Check whether the Kafka service started correctly
[root@node1 bin]# jps
30851 Kafka
3605 HMaster
12728 QuorumPeerMain
12712 DFSZKFailoverController
31656 Jps
3929 DataNode
15707 JournalNode
32188 NameNode
14335 ResourceManager
[root@node1 bin]# netstat -antulp | grep 30851
tcp6 0 0 :::9092 :::* LISTEN 30851/java
tcp6 0 0 :::37161 :::* LISTEN 30851/java
tcp6 0 0 172.16.8.23:40754 172.16.8.178:9092 ESTABLISHED 30851/java
tcp6 0 0 172.16.8.23:9092 172.16.8.23:39704 ESTABLISHED 30851/java
tcp6 0 0 172.16.8.23:45480 172.16.8.24:9092 ESTABLISHED 30851/java
tcp6 0 0 172.16.8.23:45294 172.16.8.178:2181 ESTABLISHED 30851/java
tcp6 0 0 172.16.8.23:39704 172.16.8.23:9092 ESTABLISHED 30851/java
[root@node1 bin]#
6. Using the command-line tools
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper node1:2181 --topic tongcheng --replication-factor 3 --partitions 3    --create a topic
Created topic tongcheng.
[root@node1 bin]# ./kafka-topics.sh --list --zookeeper node1:2181    --list topics
tongcheng
[root@node1 bin]# ./kafka-topics.sh --delete --zookeeper node1:2181 --topic tongcheng    --delete a topic
Topic tongcheng is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@node1 bin]# ./kafka-topics.sh --list --zookeeper node1:2181
[root@node1 bin]#
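The note about delete.topic.enable refers to a broker-side setting. In Kafka 2.x it defaults to true, so the delete above really removes the topic; on clusters where it has been disabled, adding the following line to config/server.properties on every broker (and restarting) re-enables real deletion:
delete.topic.enable=true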
7. Sending and receiving messages
[root@node1 bin]# ./kafka-console-producer.sh --broker-list node2:9092 --topic ttt
>tongcheng is goods;
>tong is goods;
>cheng is goods!
>
--------consumer side-------------
[root@node2 bin]# ./kafka-console-consumer.sh --topic ttt --bootstrap-server node1:9092,node2:9092,node3:9092 --from-beginning
tongcheng is goods;
tong is goods;
cheng is goods!
[root@node2 bin]# ./kafka-topics.sh --describe --zookeeper node1:2181 --topic ttt    --view the partition count and replication factor
Topic:ttt PartitionCount:1 ReplicationFactor:1 Configs:
Topic: ttt Partition: 0 Leader: 0 Replicas: 0 Isr: 0
[root@node2 bin]#
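Topic ttt shows one partition and one replica because it was never created explicitly; it was presumably auto-created by the first produce with the broker defaults. To get the same layout as tongcheng, either create the topic before producing or raise the defaults in server.properties; both options below are optional adjustments, not part of the original walkthrough:
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper node1:2181 --topic ttt --replication-factor 3 --partitions 3    --explicit creation instead of auto-creation
num.partitions=3                  # server.properties: partition count for auto-created topics
default.replication.factor=3      # server.properties: replication factor for auto-created topics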
8. Viewing the data in ZooKeeper
[root@node1 bin]# ./zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller, brokers, zookeeper, hadoop-ha, admin, isr_change_notification, log_dir_event_notification, controller_epoch, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1]
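Kafka registers each live broker as an ephemeral node under /brokers/ids, so the same client can show which brokers are currently up; with all three brokers running, the first command should list one id per broker:
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[zk: localhost:2181(CONNECTED) 2] get /brokers/ids/0    --shows the host, port and endpoints the broker registered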
9. Consumer-group consumption (when the producer sends a message, only one consumer in the group receives it)
[root@node1 bin]# ./kafka-console-producer.sh --broker-list node1:9092 --topic tong    --send messages from node1
>
------start two consumers-----------
[root@node2 bin]# vim ../config/consumer.properties    --both consumers must make this change
group.id=wuhan
[root@node2 bin]# ./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --consumer.config ../config/consumer.properties
[2019-04-05 20:52:09,152] WARN [Consumer clientId=consumer-1, groupId=wuhan] Error while fetching metadata with correlation id 2 :
10. Send a message from the producer side; the consumer group receives it
[root@node1 bin]# ./kafka-console-producer.sh --broker-list node1:9092 --topic tong
>[2019-04-05 20:51:31,094] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-05 20:52:09,114] INFO Creating topic tong with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(2)) (kafka.zk.AdminZkClient)
[2019-04-05 20:52:09,124] INFO [KafkaApi-0] Auto creation of topic tong with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
>hello ttt;
>
-----------consumer side--------------
[root@node2 bin]# ./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --consumer.config ../config/consumer.properties    --the message is received on node2
[2019-04-05 20:52:09,152] WARN [Consumer clientId=consumer-1, groupId=wuhan] Error while fetching metadata with correlation id 2 : {tong=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
hello ttt;
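To inspect the wuhan group itself, kafka-consumer-groups.sh lists the groups known to the cluster and, per partition, each group's committed offset and lag; a quick check using the same bootstrap broker:
[root@node2 bin]# ./kafka-consumer-groups.sh --bootstrap-server node1:9092 --list    --list all consumer groups
[root@node2 bin]# ./kafka-consumer-groups.sh --bootstrap-server node1:9092 --describe --group wuhan    --members, offsets and lag for the wuhan group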