Open Platform Log Push --- Kafka

Published by 龍骨 on 2020-12-23

Why a message queue: decoupling

Asynchronous (vs. synchronous) processing


Kafka messages are piling up and consumer capacity is insufficient. What can you do?

1) If Kafka consumption capacity is the bottleneck, increase the number of partitions on the topic and, at the same time, increase the number of consumers in the consumer group so that the consumer count equals the partition count. (Both steps are required; either one alone is not enough.)
2) If downstream processing is too slow, increase the number of records pulled per batch. If each batch pulls too few records (records pulled / processing time < production rate), consumption falls behind production, which also causes a backlog.
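The condition in point 2 comes down to simple arithmetic. A minimal sketch of the check, using made-up numbers purely for illustration (none of these rates are measurements from this cluster):

```python
# Is the consumer keeping up? Backlog grows whenever
# records_per_batch / batch_processing_time < production_rate.
# All numbers below are hypothetical examples.

def consumption_keeps_up(records_per_batch, batch_seconds, production_rate):
    """True if consumption rate (records/s) >= production rate (records/s)."""
    return records_per_batch / batch_seconds >= production_rate

# Pulling 500 records per batch, 1 s to process a batch, producer at 800 rec/s:
print(consumption_keeps_up(500, 1.0, 800))   # backlog grows
# Raising the per-batch pull (e.g. via max.poll.records) to 1000 records:
print(consumption_keeps_up(1000, 1.0, 800))  # consumer keeps up
```

In practice the per-batch pull is tuned with consumer settings such as `max.poll.records`, but the break-even condition is exactly the inequality above.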

I. Kafka installation

1. Create the docker-net network

docker network create -d overlay docker-net

2. docker-compose-kafka.yml

version: '3.6'
services:
  zk:
    image: wurstmeister/zookeeper
    networks: 
      - docker-net
    ports:
      - "2181:2181"
  manager:
    image: kafkamanager/kafka-manager
    links:
      - zk:zk 
    networks: 
      - docker-net
    ports: 
      - "10080:9000"
    environment: 
      ZK_HOSTS: zk:2181
  broker_1:
    image: wurstmeister/kafka
    networks: 
      - docker-net
    links:
      - zk:zk
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_HOST_NAME: ip
      KAFKA_ADVERTISED_HOST_NAME: ip
      KAFKA_ZOOKEEPER_CONNECT: zk:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://ip:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_LOG_DIRS: /logs/kafka
      KAFKA_HEAP_OPTS: -Xmx16G -Xms16G
    volumes:
      - /data4/bin/kafka:/logs/kafka
networks:
  docker-net:
    external: true

II. Kafka directory structure

Before Kafka 0.9, consumers stored their offsets in ZooKeeper by default. Starting with 0.9, consumers store offsets in a built-in Kafka topic named __consumer_offsets.

Kafka producers write data by appending to the end of log files, i.e., sequential writes. Published figures show that on the same disk, sequential writes can reach about 600 MB/s while random writes reach only about 100 KB/s. This comes down to the mechanical structure of the disk: sequential writes are fast because they avoid most of the head-seek time.
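The gap between the two numbers follows from a rough back-of-envelope model: a random write pays a full seek per block, while a sequential write pays almost none. The 10 ms seek time and 4 KB block size below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: why sequential writes beat random writes on spinning disks.
# The seek time and block size are assumed values for illustration only.

SEEK_TIME_S = 0.010          # average seek + rotational latency per random I/O
BLOCK_SIZE_BYTES = 4 * 1024  # size of each random write
SEQ_THROUGHPUT_MBPS = 600    # sustained sequential throughput (from the text above)

# Random writes: every block pays a full seek, so throughput = block size / seek time.
random_mbps = BLOCK_SIZE_BYTES / SEEK_TIME_S / (1024 * 1024)

print(f"random-write throughput ~ {random_mbps:.2f} MB/s")
print(f"sequential is ~ {SEQ_THROUGHPUT_MBPS / random_mbps:.0f}x faster")
```

Even this crude model puts random writes in the hundreds of KB/s range, three orders of magnitude below sequential throughput, which is why Kafka's append-only log design works so well on plain disks.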


III. Day-to-day Kafka ops commands


List topics
kafka-topics.sh --list --zookeeper zk:2181
__consumer_offsets
openai_kong_log

Describe a topic
kafka-topics.sh --describe --zookeeper zk:2181 --topic openai_kong_log
Topic: openai_kong_log	PartitionCount: 8	ReplicationFactor: 1	Configs: 
	Topic: openai_kong_log	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 1	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 2	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 3	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 4	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 5	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 6	Leader: 1	Replicas: 1	Isr: 1
	Topic: openai_kong_log	Partition: 7	Leader: 1	Replicas: 1	Isr: 1

List consumer groups
kafka-consumer-groups.sh --bootstrap-server ip:9092 --list 
openai
KMOffsetCache-7e0676b0a501
group_openai
--consumer-property


Check message backlog (consumer lag)
kafka-consumer-groups.sh --bootstrap-server ip:9092 --group group_openai --describe

Consumer group 'group_openai' has no active members.

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID     HOST            CLIENT-ID
group_openai    openai_kong_log 3          1989534         2002875         13341           -               -               -
group_openai    openai_kong_log 4          1990547         2003945         13398           -               -               -
group_openai    openai_kong_log 1          1988941         2002588         13647           -               -               -
group_openai    openai_kong_log 2          1991347         2004919         13572           -               -               -
group_openai    openai_kong_log 7          1922215         1935656         13441           -               -               -
group_openai    openai_kong_log 5          1921993         1935469         13476           -               -               -
group_openai    openai_kong_log 6          1921179         1934700         13521           -               -               -
group_openai    openai_kong_log 0          30612637        33188446        2575809         -               -               -
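The LAG column above is simply LOG-END-OFFSET minus CURRENT-OFFSET for each partition. A small sketch that parses one row of the output (the sample line is copied from partition 0 above) and recomputes the lag:

```python
# Recompute consumer lag from one row of `kafka-consumer-groups.sh --describe` output.
# LAG = LOG-END-OFFSET - CURRENT-OFFSET per partition.

row = "group_openai    openai_kong_log 0          30612637        33188446        2575809"

group, topic, partition, current, log_end, lag = row.split()
computed_lag = int(log_end) - int(current)

assert computed_lag == int(lag)  # matches the LAG column
print(f"partition {partition}: lag = {computed_lag}")
```

Partition 0's lag (~2.5M) dwarfs the others (~13K each), which usually points at a skewed partition key or a stalled consumer on that partition.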

Common operations on a single topic

1. Create a topic
kafka-topics.sh --zookeeper ip:2181 \
--create --replication-factor 1 --partitions 1 --topic first


2. Produce messages
kafka-console-producer.sh \
--broker-list ip:9092 --topic first
>hello world
>atguigu  atguigu

Consume messages (--from-beginning reads all existing data in the topic from the beginning)
kafka-console-consumer.sh \
--bootstrap-server ip:9092 --from-beginning --topic first


3. Increase the partition count (Kafka only allows increasing partitions, never decreasing)
kafka-topics.sh --zookeeper ip:2181 --alter --topic first --partitions 6


4. Delete a topic
kafka-topics.sh --zookeeper ip:2181 \
--delete --topic first
