Installing Kafka with the Bitnami Helm Chart

Published by 東風微鳴 on 2023-01-01

Deploying the Kafka server on the server-side K3s cluster

Installing Kafka

Quote:

charts/bitnami/kafka at master · bitnami/charts (github.com)

Add the Helm repositories with the following commands:

> helm repo add tkemarket https://market-tke.tencentcloudcr.com/chartrepo/opensource-stable
"tkemarket" has been added to your repositories
> helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

Tip:

The tkemarket chart is not kept up to date, so the bitnami repository is recommended instead.

However, bitnami is hosted overseas, so there is a risk that it may be unreachable.
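
After adding the repositories, refresh the local chart index so that the search below returns up-to-date versions:

helm repo update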

Search the repositories for the kafka Helm chart:

> helm search repo kafka
NAME                      CHART VERSION   APP VERSION   DESCRIPTION
tkemarket/kafka           11.0.0          2.5.0         Apache Kafka is a distributed streaming platform.
bitnami/kafka             15.3.0          3.1.0         Apache Kafka is a distributed streaming platfor...
bitnami/dataplatform-bp1  9.0.8           1.0.1         OCTO Data platform Kafka-Spark-Solr Helm Chart
bitnami/dataplatform-bp2  10.0.8          1.0.1         OCTO Data platform Kafka-Spark-Elasticsearch He...
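
Optionally, the chart's full set of default values can be inspected before installing; pinning --version to the chart version found above keeps the output consistent:

helm show values bitnami/kafka --version 15.3.0 | less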

Install Kafka with the bitnami Helm chart:

helm install kafka bitnami/kafka \
  --namespace kafka --create-namespace \
  --set global.storageClass=<storageClass-name> \
  --set kubeVersion=<theKubeVersion> \
  --set image.tag=3.1.0-debian-10-r22 \
  --set replicaCount=3 \
  --set service.type=ClusterIP \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer \
  --set externalAccess.service.ports.external=9094 \
  --set externalAccess.autoDiscovery.enabled=true \
  --set serviceAccount.create=true \
  --set rbac.create=true \
  --set persistence.enabled=true \
  --set logPersistence.enabled=true \
  --set metrics.kafka.enabled=false \
  --set zookeeper.enabled=true \
  --set zookeeper.persistence.enabled=true \
  --wait

Tip:

The parameters are explained below (an equivalent values-file form is sketched after the list):

  1. --namespace kafka --create-namespace: install into the kafka namespace, creating it if it does not exist;
  2. global.storageClass=<storageClass-name>: use the specified StorageClass;
  3. kubeVersion=<theKubeVersion>: lets the bitnami/kafka chart check whether the Kubernetes version satisfies its requirements; if it does not, the release cannot be created;
  4. image.tag=3.1.0-debian-10-r22: the latest image as of 2022-02-19; using the full tag helps minimize pulling images from the Internet;
  5. replicaCount=3: run 3 Kafka replicas;
  6. service.type=ClusterIP: create the Kafka Service for access inside the K8s cluster, so ClusterIP is sufficient;
  7. --set externalAccess.enabled=true --set externalAccess.service.type=LoadBalancer --set externalAccess.service.ports.external=9094 --set externalAccess.autoDiscovery.enabled=true --set serviceAccount.create=true --set rbac.create=true: create the kafka-<0|1|2>-external Services for access from outside the K8s cluster (one per broker, since replicaCount is 3);
  8. persistence.enabled=true: persist Kafka data; the directory inside the container is /bitnami/kafka;
  9. logPersistence.enabled=true: persist Kafka logs; the directory inside the container is /opt/bitnami/kafka/logs;
  10. metrics.kafka.enabled=false: do not enable Kafka monitoring (Kafka metrics are collected via kafka-exporter);
  11. zookeeper.enabled=true: installing Kafka requires ZooKeeper to be installed first;
  12. zookeeper.persistence.enabled=true: persist ZooKeeper data; the directory inside the container is /bitnami/zookeeper;
  13. --wait: the helm command waits until the release has finished deploying.
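
For repeatable installs, the same settings can also be kept in a values file and passed with -f instead of a long chain of --set flags. A minimal sketch, assuming an illustrative file name values-kafka.yaml and the same placeholder values as above:

cat > values-kafka.yaml <<'EOF'
# Mirrors the --set flags used above (key names per the bitnami/kafka chart)
global:
  storageClass: <storageClass-name>
kubeVersion: <theKubeVersion>
image:
  tag: 3.1.0-debian-10-r22
replicaCount: 3
service:
  type: ClusterIP
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
  service:
    type: LoadBalancer
    ports:
      external: 9094
serviceAccount:
  create: true
rbac:
  create: true
persistence:
  enabled: true
logPersistence:
  enabled: true
metrics:
  kafka:
    enabled: false
zookeeper:
  enabled: true
  persistence:
    enabled: true
EOF

helm install kafka bitnami/kafka --namespace kafka --create-namespace -f values-kafka.yaml --wait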

The output is as follows:

creating 1 resource(s)
creating 12 resource(s)
beginning wait for 12 resources with timeout of 5m0s
Service does not have load balancer ingress IP address: kafka/kafka-0-external
...
StatefulSet is not ready: kafka/kafka-zookeeper. 0 out of 1 expected pods are ready
...
StatefulSet is not ready: kafka/kafka. 0 out of 1 expected pods are ready
NAME: kafka
LAST DEPLOYED: Sat Feb 19 05:04:53 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 15.3.0
APP VERSION: 3.1.0
---------------------------------------------------------------------------------------------
 WARNING

    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.

    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

To connect to your Kafka server from outside the cluster, follow the instructions below:

  NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
        Watch the status with: 'kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -w'

    Kafka Brokers domain: You will have a different external IP for each Kafka broker. You can get the list of external IPs using the command below:

        echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"

    Kafka Brokers port: 9094
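
Before testing, it is worth checking that the pods are Running and that the external Services have been assigned LoadBalancer IPs (standard kubectl checks; the output varies by environment):

kubectl get pods --namespace kafka
kubectl get svc --namespace kafka
kubectl get pvc --namespace kafka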

Testing Kafka

Test sending and consuming messages:

First, create a kafka-client pod with the following command:

kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity
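
Optionally wait for the pod to become Ready, then open a shell inside it (the kubectl wait step is a convenience, not part of the chart NOTES):

kubectl wait --namespace kafka --for=condition=Ready pod/kafka-client --timeout=120s
kubectl exec --tty -i kafka-client --namespace kafka -- bash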

Once inside the kafka-client pod, run the following commands to test:

# Inside the cluster, via the headless Service:
kafka-console-producer.sh --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test
kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test --from-beginning

# Via the externally exposed LoadBalancer address (10.109.205.245 in this environment):
kafka-console-producer.sh --broker-list 10.109.205.245:9094 --topic test
kafka-console-consumer.sh --bootstrap-server 10.109.205.245:9094 --topic test --from-beginning
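
Depending on the broker's auto.create.topics.enable setting, the test topic may be created automatically on first use; it can also be created and inspected explicitly with the standard Kafka CLI shipped in the image (a sketch, with partition and replica counts chosen to match the 3-broker setup):

kafka-topics.sh --bootstrap-server kafka.kafka.svc.cluster.local:9092 --create --topic test --partitions 3 --replication-factor 3
kafka-topics.sh --bootstrap-server kafka.kafka.svc.cluster.local:9092 --describe --topic test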

The producer/consumer test looks like this:

(Screenshot: external producer)

(Screenshot: external consumer)

At this point, the Kafka installation is complete.

Uninstalling Kafka

Danger

(Only if needed) the command to delete the entire Kafka release:

helm delete kafka --namespace kafka
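
Note that helm delete does not remove the PersistentVolumeClaims created by the StatefulSets. If the persisted data is no longer needed, they can be cleaned up separately (the label selector below assumes the chart's standard labels):

kubectl delete pvc --namespace kafka -l app.kubernetes.io/instance=kafka
kubectl delete namespace kafka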

Summary

Kafka

  1. Kafka was installed with the bitnami Helm chart, into the kafka namespace of the K8s cluster;

    1. Installation mode: three nodes
    2. Kafka version: 3.1.0
    3. Kafka instances: 3
    4. ZooKeeper instances: 1
    5. Kafka data, ZooKeeper data, and Kafka logs are all persisted, under /data/rancher/k3s/storage
    6. SASL and TLS are not configured
  2. Inside the K8s cluster, Kafka can be reached at:

    kafka.kafka.svc.cluster.local:9092

  3. From outside the K8s cluster, Kafka can be reached at:

    <loadbalancer-ip>:9094

A screenshot of the persisted Kafka data is shown below:

(Screenshot: persisted Kafka data under /data/rancher/k3s/storage)

Among any three people walking together, there is always one I can learn from; knowledge shared belongs to all. This article was written by the 東風微鳴 technical blog, EWhisper.cn.
