Setting Up a Kafka Cluster Quickly with Docker

Posted by icecho on 2020-06-23

Versions

  1. JDK 14
  2. Zookeeper (latest official Docker image)
  3. Kafka (latest wurstmeister/kafka image)

Installing Zookeeper and Kafka

Kafka depends on Zookeeper, so Zookeeper must be in place before Kafka is installed. Prepare the docker-compose.yaml file below, replacing the host address 192.168.1.100 with the address of your own machine. Because all three brokers run on the same host, each one is exposed on its own port (9092, 9093, 9094).

version: "3"

services:
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
    volumes:
      - ./data/zookeeper/data:/data
      - ./data/zookeeper/datalog:/datalog
      - ./data/zookeeper/logs:/logs
    restart: always

  kafka_node_0:
    depends_on:
      - zookeeper
    image: wurstmeister/kafka
    container_name: kafka-node-0
    environment:
      KAFKA_BROKER_ID: 0                                          # must be unique per broker
      KAFKA_ZOOKEEPER_CONNECT: 192.168.1.100:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.100:9092  # address published to clients
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092                   # bind address inside the container
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
    ports:
      - 9092:9092
    volumes:
      - ./data/kafka/node_0:/kafka
    restart: unless-stopped

  kafka_node_1:
    depends_on:
      - kafka_node_0
    image: wurstmeister/kafka
    container_name: kafka-node-1
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 192.168.1.100:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.100:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
    ports:
      - 9093:9093
    volumes:
      - ./data/kafka/node_1:/kafka
    restart: unless-stopped

  kafka_node_2:
    depends_on:
      - kafka_node_1
    image: wurstmeister/kafka
    container_name: kafka-node-2
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 192.168.1.100:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.100:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
    ports:
      - 9094:9094
    volumes:
      - ./data/kafka/node_2:/kafka
    restart: unless-stopped

Run docker-compose up -d to bring the cluster up. After a short wait, check the containers with docker-compose ps; if zookeeper and all three Kafka nodes are shown as Up, the cluster is running.
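Before moving on, you can optionally verify that all three brokers have joined the cluster. Below is a minimal sketch using Kafka's AdminClient from the kafka-clients library (which spring-kafka also pulls in transitively); the class name ClusterCheck is just an illustration:

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same advertised addresses as in docker-compose.yaml.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.1.100:9092,192.168.1.100:9093,192.168.1.100:9094");
        try (AdminClient admin = AdminClient.create(props)) {
            // A healthy cluster prints three nodes with broker ids 0, 1 and 2.
            admin.describeCluster().nodes().get(10, TimeUnit.SECONDS)
                    .forEach(node -> System.out.println("broker: " + node));
        }
    }
}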

Integrating Spring Boot with the Kafka Cluster

  1. Create a brand-new Spring Boot project and add the following dependencies to its build.gradle file.
dependencies {
    ...
    ...
    implementation 'org.springframework.kafka:spring-kafka:2.5.2.RELEASE'
    implementation 'com.alibaba:fastjson:1.2.71'
}
  2. Configure the Kafka-related parameters in application.properties.
spring.kafka.bootstrap-servers=192.168.1.100:9092,192.168.1.100:9093,192.168.1.100:9094

spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
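No consumer deserializers are set here; Spring Boot defaults both to StringDeserializer, which matches the producer's serializers above. The brokers will auto-create the test topic with 3 partitions and replication factor 2 (the values configured in docker-compose.yaml), but if you prefer to declare the topic explicitly, Spring Boot's auto-configured KafkaAdmin creates any NewTopic bean at startup. A sketch (the class name KafkaTopicConfig is our own):

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaTopicConfig {
    // Mirrors the broker defaults: 3 partitions, replication factor 2.
    @Bean
    public NewTopic testTopic() {
        return new NewTopic("test", 3, (short) 2);
    }
}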
  3. Create the message body class. It needs getters and setters (or Lombok's @Data) so that fastjson can serialize it.

import java.util.Date;

public class Message {
    private Long id;
    private String message;
    private Date sendAt;

    // getters and setters omitted for brevity
}
  4. Create the message sender.

import java.util.Date;
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import com.alibaba.fastjson.JSON;

@Component
public class Sender {

    private static final Logger log = LoggerFactory.getLogger(Sender.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send() {
        Message message = new Message();

        message.setId(System.currentTimeMillis());
        message.setMessage(UUID.randomUUID().toString());
        message.setSendAt(new Date());

        log.info("message = {}", JSON.toJSONString(message));
        kafkaTemplate.send("test", JSON.toJSONString(message));
    }
}
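The send() above is fire-and-forget. In Spring Kafka 2.5, KafkaTemplate.send() returns a ListenableFuture<SendResult<K, V>>, so delivery can be confirmed with a callback. A sketch of an optional variant that could be added to Sender (the method name sendWithCallback is our own):

    public void sendWithCallback(String payload) {
        kafkaTemplate.send("test", payload).addCallback(
                // Success: log which topic/partition/offset the record landed on.
                result -> log.info("delivered to {}-{} @ offset {}",
                        result.getRecordMetadata().topic(),
                        result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset()),
                // Failure: log the exception instead of silently dropping it.
                ex -> log.error("delivery failed", ex));
    }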
  5. Create the message receiver.

import java.util.Optional;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class Receiver {

    private static final Logger log = LoggerFactory.getLogger(Receiver.class);

    @KafkaListener(topics = {"test"}, groupId = "test")
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            log.info("receiver record = " + record);
            log.info("receiver message = " + message.get());
        }
    }
}
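The listener above logs the payload as a raw JSON string. If the consumer needs the original fields, fastjson can parse it back into a Message. A sketch of the lines to put inside the isPresent() branch (this assumes Message has its getters and setters, and that Receiver imports com.alibaba.fastjson.JSON):

            // Deserialize the JSON payload back into a Message instance.
            Message msg = JSON.parseObject(message.get().toString(), Message.class);
            log.info("message id = {}, sent at = {}", msg.getId(), msg.getSendAt());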
  6. Test the message queue with a simple REST controller (this assumes the spring-boot-starter-web dependency is on the classpath).

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QueueController {

    @Autowired
    private Sender sender;

    @PostMapping("/test")
    public void testQueue() {
        sender.send();
        sender.send();
        sender.send();
    }
}
  7. Send a POST request to the /test endpoint (for example, curl -X POST http://localhost:8080/test). If the sent and received messages appear in the application log, the integration is working.

At this point we have successfully built a Kafka pseudo-cluster (three brokers on a single host) and integrated it with Spring Boot.

Recommended Reading

  1. Apache Kafka
  2. 《淺入淺出 Kafka》
  3. 真的,Kafka 入門一篇文章就夠了 - 掘金

This work is licensed under the CC License; reposts must credit the author and link to this article.

