KubeSphere Best Practices: A Hands-On Guide to Building a Highly Available, High-Performance Redis Cluster on K8s

Published by kubesphere on 2024-11-20

First published at: 運維有術

This guide walks you through the following key tasks:

  1. Install Redis: deploy Redis with a StatefulSet.
  2. Configure the Redis cluster, automatically or manually: initialize the cluster with command-line tools.
  3. Redis performance testing: run performance tests with the bundled redis-benchmark tool.
  4. Graphical Redis management: install and configure RedisInsight.

By the end, you will have the essential skills for deploying and managing a Redis cluster on K8s. Let's begin this Redis cluster deployment journey.

Hands-on server configuration (the architecture is a 1:1 replica of a small production environment; the hardware specs differ)

| Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
|---|---|---|---|---|---|---|
| ksp-registry | 192.168.9.90 | 4 | 8 | 40 | 200 | Harbor image registry |
| ksp-control-1 | 192.168.9.91 | 4 | 8 | 40 | 100 | KubeSphere/k8s-control-plane |
| ksp-control-2 | 192.168.9.92 | 4 | 8 | 40 | 100 | KubeSphere/k8s-control-plane |
| ksp-control-3 | 192.168.9.93 | 4 | 8 | 40 | 100 | KubeSphere/k8s-control-plane |
| ksp-worker-1 | 192.168.9.94 | 8 | 16 | 40 | 100 | k8s-worker/CI |
| ksp-worker-2 | 192.168.9.95 | 8 | 16 | 40 | 100 | k8s-worker |
| ksp-worker-3 | 192.168.9.96 | 8 | 16 | 40 | 100 | k8s-worker |
| ksp-storage-1 | 192.168.9.97 | 4 | 8 | 40 | 400+ | ElasticSearch/Longhorn/Ceph/NFS |
| ksp-storage-2 | 192.168.9.98 | 4 | 8 | 40 | 300+ | ElasticSearch/Longhorn/Ceph |
| ksp-storage-3 | 192.168.9.99 | 4 | 8 | 40 | 300+ | ElasticSearch/Longhorn/Ceph |
| ksp-gpu-worker-1 | 192.168.9.101 | 4 | 16 | 40 | 100 | k8s-worker (GPU: NVIDIA Tesla M40 24G) |
| ksp-gpu-worker-2 | 192.168.9.102 | 4 | 16 | 40 | 100 | k8s-worker (GPU: NVIDIA Tesla P100 16G) |
| ksp-gateway-1 | 192.168.9.103 | 2 | 4 | 40 | | Self-hosted application proxy gateway / VIP: 192.168.9.100 |
| ksp-gateway-2 | 192.168.9.104 | 2 | 4 | 40 | | Self-hosted application proxy gateway / VIP: 192.168.9.100 |
| ksp-mid | 192.168.9.105 | 4 | 8 | 40 | 100 | Node outside the k8s cluster for services such as GitLab |
| Total: 15 hosts | | 68 | 152 | 600 | 2100+ | |

Software versions used in this environment

  • OS: openEuler 22.03 LTS SP3 x86_64
  • KubeSphere: v3.4.1
  • Kubernetes: v1.28.8
  • KubeKey: v3.1.1
  • Redis: 7.2.6
  • RedisInsight: 2.60

1. Deployment Planning

1.1 Deployment Architecture Diagram

1.2 Prepare Persistent Storage

This environment uses NFS as the K8s cluster's persistent storage. For a new cluster, you can follow the guide 探索 Kubernetes 持久化儲存之 NFS 終極實戰指南 (The Ultimate Hands-On Guide to NFS for Kubernetes Persistent Storage) to deploy NFS storage.

1.3 Prerequisites

All Redis cluster resources are deployed in the namespace opsxlab.
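If that namespace does not already exist, create it first. A minimal manifest is shown below (running `kubectl create namespace opsxlab` directly works just as well):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: opsxlab
```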

2. Deploy the Redis Service

2.1 Create the ConfigMap

  1. Create the Redis configuration file

Use the vi editor to create the manifest file redis-cluster-cm.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
data:
  redis-config: |
    appendonly yes
    protected-mode no
    dir /data
    port 6379
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 5000
    masterauth PleaseChangeMe2024
    requirepass PleaseChangeMe2024

Note: this configuration only enables password authentication and is otherwise untuned; adjust it as needed for production.

  2. Create the resource

Run the following command to create the ConfigMap resource.

kubectl apply -f redis-cluster-cm.yaml -n opsxlab

  3. Verify the resource

Run the following command to check the result of the ConfigMap creation.

$ kubectl get cm -n opsxlab
NAME                   DATA   AGE
kube-root-ca.crt       1      100d
redis-cluster-config   1      115s

2.2 Create Redis

This guide deploys the Redis service with a StatefulSet, which requires two resources: the StatefulSet itself and a headless Service.

  1. Create the manifest

Use the vi editor to create the manifest file redis-cluster-sts.yaml with the following content:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app.kubernetes.io/name: redis-cluster
spec:
  serviceName: redis-headless
  replicas: 6
  selector:
    matchLabels:
      app.kubernetes.io/name: redis-cluster
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis-cluster
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - redis-cluster
              topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: 'redis:7.2.6'
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
            - "--cluster-announce-ip"
            - "$(POD_IP)"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - name: redis-6379
              containerPort: 6379
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /etc/redis
            - name: redis-cluster-data
              mountPath: /data
          resources:
            limits:
              cpu: '2'
              memory: 4Gi
            requests:
              cpu: 50m
              memory: 500Mi
      volumes:
        - name: config
          configMap:
            name: redis-cluster-config
            items:
              - key: redis-config
                path: redis.conf
  volumeClaimTemplates:
    - metadata:
        name: redis-cluster-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-sc"
        resources:
          requests:
            storage: 5Gi

---
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
  labels:
    app.kubernetes.io/name: redis-cluster
spec:
  ports:
    - name: redis-6379
      protocol: TCP
      port: 6379
      targetPort: 6379
  selector:
    app.kubernetes.io/name: redis-cluster
  clusterIP: None
  type: ClusterIP

Note: POD_IP is the key setting. Without it, when a running Pod restarts and comes back with a new IP, the cluster state cannot resynchronize automatically.

  2. Create the resources

Run the following command to create the resources.

kubectl apply -f redis-cluster-sts.yaml -n opsxlab

  3. Verify the resources

Run the following command to check the StatefulSet, Pod, and Service results.

$ kubectl get sts,pod,svc -n opsxlab
NAME                             READY   AGE
statefulset.apps/redis-cluster   6/6     72s

NAME                  READY   STATUS    RESTARTS   AGE
pod/redis-cluster-0   1/1     Running   0          72s
pod/redis-cluster-1   1/1     Running   0          63s
pod/redis-cluster-2   1/1     Running   0          54s
pod/redis-cluster-3   1/1     Running   0          43s
pod/redis-cluster-4   1/1     Running   0          40s
pod/redis-cluster-5   1/1     Running   0          38s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/redis-headless   ClusterIP   None         <none>        6379/TCP   36s

2.3 Create a Service for Access from Outside the K8s Cluster

We expose the Redis service outside the Kubernetes cluster with a NodePort Service on port 31379.

Use the vi editor to create the manifest file redis-cluster-svc-external.yaml with the following content:

kind: Service
apiVersion: v1
metadata:
  name: redis-cluster-external
  labels:
    app: redis-cluster-external
spec:
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
      nodePort: 31379
  selector:
    app.kubernetes.io/name: redis-cluster
  type: NodePort
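The port 31379 chosen above works because Kubernetes only accepts NodePort values inside the API server's service-node-port-range, which defaults to 30000-32767. A tiny sanity-check helper (illustrative only, not part of the manifests):

```python
# Kubernetes only accepts NodePort values inside the API server's
# --service-node-port-range, which defaults to 30000-32767.
DEFAULT_NODE_PORT_RANGE = (30000, 32767)

def valid_node_port(port: int, port_range=DEFAULT_NODE_PORT_RANGE) -> bool:
    """Return True if `port` is usable as a NodePort for the given range."""
    low, high = port_range
    return low <= port <= high

print(valid_node_port(31379))  # → True
print(valid_node_port(6379))   # → False: 6379 stays cluster-internal
```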
  1. Create the resource

Run the following command to create the Service resource.

kubectl apply -f redis-cluster-svc-external.yaml -n opsxlab

  2. Verify the resource

Run the following command to check the result of the Service creation.

$ kubectl get svc -o wide -n opsxlab
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE     SELECTOR
redis-cluster-external   NodePort    10.233.22.96   <none>        6379:31379/TCP   14s     app.kubernetes.io/name=redis-cluster
redis-headless           ClusterIP   None           <none>        6379/TCP         2m57s   app.kubernetes.io/name=redis-cluster

3. Create the Redis Cluster

Creating the Redis Pods does not automatically form a Redis cluster; the cluster initialization command must be run by hand. There are two ways to do this, automatic and manual; pick one (automatic is recommended).

3.1 Create the Redis Cluster Automatically

Run the following command to automatically create a cluster of 3 masters and 3 slaves. You will be prompted to type yes once.

  • Run the command
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster create --cluster-replicas 1 $(kubectl get pods -n opsxlab -l app.kubernetes.io/name=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
  • On success, the output looks like:
$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster create --cluster-replicas 1 $(kubectl get pods -n opsxlab -l app.kubernetes.io/name=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.233.96.17:6379 to 10.233.94.214:6379
Adding replica 10.233.68.250:6379 to 10.233.96.22:6379
Adding replica 10.233.94.231:6379 to 10.233.68.251:6379
M: da376da9577b14e4100c87d3acc53aebf57358b7 10.233.94.214:6379
   slots:[0-5460] (5461 slots) master
M: a3094b24d44430920f9250d4a6d8ce2953852f13 10.233.96.22:6379
   slots:[5461-10922] (5462 slots) master
M: 185fd2d0bbb0cd9c01fa82fa496a1082f16b9ce0 10.233.68.251:6379
   slots:[10923-16383] (5461 slots) master
S: 9ce470afe3490662fb1670ba16fad2e87e02b191 10.233.94.231:6379
   replicates 185fd2d0bbb0cd9c01fa82fa496a1082f16b9ce0
S: b57fb0717160eab39ccd67f6705a592bd5482429 10.233.96.17:6379
   replicates da376da9577b14e4100c87d3acc53aebf57358b7
S: bed82c46554a0ebf638117437d884c01adf1003f 10.233.68.250:6379
   replicates a3094b24d44430920f9250d4a6d8ce2953852f13
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 10.233.94.214:6379)
M: da376da9577b14e4100c87d3acc53aebf57358b7 10.233.94.214:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 9ce470afe3490662fb1670ba16fad2e87e02b191 10.233.94.231:6379
   slots: (0 slots) slave
   replicates 185fd2d0bbb0cd9c01fa82fa496a1082f16b9ce0
S: b57fb0717160eab39ccd67f6705a592bd5482429 10.233.96.17:6379
   slots: (0 slots) slave
   replicates da376da9577b14e4100c87d3acc53aebf57358b7
M: a3094b24d44430920f9250d4a6d8ce2953852f13 10.233.96.22:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: bed82c46554a0ebf638117437d884c01adf1003f 10.233.68.250:6379
   slots: (0 slots) slave
   replicates a3094b24d44430920f9250d4a6d8ce2953852f13
M: 185fd2d0bbb0cd9c01fa82fa496a1082f16b9ce0 10.233.68.251:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
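The slot ranges above (0-5460, 5461-10922, 10923-16383) come from Redis Cluster's fixed key-to-slot mapping: slot = CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant, with `{...}` hash tags hashed instead of the full key. A minimal sketch of that mapping (an independent reimplementation for illustration, not code from this guide):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0x0000 -- the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_hash_slot("foo"))  # → 12182, owned by the master serving 10923-16383
```

Keys sharing a hash tag, e.g. `{user1}.following` and `{user1}.followers`, land in the same slot, which is what makes multi-key operations on them possible in cluster mode.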

3.2 Create the Redis Cluster Manually

Manually configure a cluster of 3 Masters and 3 Slaves (this section only documents the manual procedure; in real environments, use the automatic method).

Six Redis Pods were created; the master -> slave pairing rule is 0->3, 1->4, 2->5.

Because the resulting commands are long, this procedure does not fetch the IPs automatically; look up the Pod IPs manually and substitute them into the commands.

  • Look up the IPs assigned to the Redis Pods
$ kubectl get pods -n opsxlab -o wide | grep redis
redis-cluster-0   1/1     Running   0          18s   10.233.94.233   ksp-worker-1   <none>           <none>
redis-cluster-1   1/1     Running   0          16s   10.233.96.29    ksp-worker-3   <none>           <none>
redis-cluster-2   1/1     Running   0          13s   10.233.68.255   ksp-worker-2   <none>           <none>
redis-cluster-3   1/1     Running   0          11s   10.233.94.209   ksp-worker-1   <none>           <none>
redis-cluster-4   1/1     Running   0          8s    10.233.96.23    ksp-worker-3   <none>           <none>
redis-cluster-5   1/1     Running   0          5s    10.233.68.4     ksp-worker-2   <none>           <none>
  • Create the cluster with the 3 Master nodes
# In the command below, the three IPs belong to redis-cluster-0, redis-cluster-1, and redis-cluster-2 respectively. You will be prompted to type yes once.

$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster create 10.233.94.233:6379 10.233.96.29:6379 10.233.68.255:6379

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.233.94.233:6379)
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • Add a Slave node to each Master (three pairs in total)
# Pair 1: redis0 -> redis3
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.94.209:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 1f4df418ac310b6d14a7920a105e060cda58275a

# Parameter notes
# 10.233.94.233:6379  the IP of any existing master node; redis-cluster-0's IP is typically used
# 10.233.94.209:6379  the IP of the Slave node being added to a Master
# --cluster-master-id the ID of the Master this Slave attaches to; if omitted, the Slave is assigned to a random master
  • On success, the output looks like (pair 1, redis0 -> redis3, shown as an example):
$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.94.209:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 1f4df418ac310b6d14a7920a105e060cda58275a

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 10.233.94.209:6379 to cluster 10.233.94.233:6379
>>> Performing Cluster Check (using node 10.233.94.233:6379)
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.233.94.209:6379 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 10.233.94.233:6379.
[OK] New node added correctly.
  • Run the remaining two pairs the same way (output omitted)
# Pair 2: redis1 -> redis4
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.96.23:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1

# Pair 3: redis2 -> redis5
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster add-node 10.233.68.4:6379 10.233.94.233:6379 --cluster-slave --cluster-master-id 149ffd5df2cae9cfbc55e3aff69c9575134ce162
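Under the 0->3, 1->4, 2->5 pairing rule, the three add-node commands differ only in the Slave IP and the Master ID, so they can be generated rather than typed by hand. A sketch (hypothetical helper; the Pod IPs and Master IDs must come from `kubectl get pods -o wide` and the `--cluster create` output, as shown above):

```python
def add_node_commands(pod_ips, master_ids,
                      namespace="opsxlab", password="PleaseChangeMe2024"):
    """Build the three `--cluster add-node` commands for the 0->3, 1->4, 2->5 pairing.

    pod_ips    -- IPs of redis-cluster-0..5, in order
    master_ids -- node IDs of the three masters (redis-cluster-0..2), in order
    """
    anchor = f"{pod_ips[0]}:6379"  # any existing master works; the guide uses redis-cluster-0
    return [
        f"kubectl exec -it redis-cluster-0 -n {namespace} -- "
        f"redis-cli -a {password} --cluster add-node {pod_ips[i + 3]}:6379 {anchor} "
        f"--cluster-slave --cluster-master-id {master_ids[i]}"
        for i in range(3)
    ]

# Using the IPs and master IDs from the manual walkthrough above:
for cmd in add_node_commands(
        ["10.233.94.233", "10.233.96.29", "10.233.68.255",
         "10.233.94.209", "10.233.96.23", "10.233.68.4"],
        ["1f4df418ac310b6d14a7920a105e060cda58275a",
         "bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1",
         "149ffd5df2cae9cfbc55e3aff69c9575134ce162"]):
    print(cmd)
```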

3.3 Verify the Cluster State

  • Run the command
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster check $(kubectl get pods -n opsxlab -l app.kubernetes.io/name=redis-cluster -o jsonpath='{range.items[0]}{.status.podIP}:6379{end}')
  • On success, the output looks like:
$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-cli -a PleaseChangeMe2024 --cluster check $(kubectl get pods -n opsxlab -l app.kubernetes.io/name=redis-cluster -o jsonpath='{range.items[0]}{.status.podIP}:6379{end}')

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.233.94.233:6379 (1f4df418...) -> 0 keys | 5461 slots | 1 slaves.
10.233.68.255:6379 (149ffd5d...) -> 0 keys | 5461 slots | 1 slaves.
10.233.96.29:6379 (bd1a8e26...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.233.94.233:6379)
M: 1f4df418ac310b6d14a7920a105e060cda58275a 10.233.94.233:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 577675e83c2267d625bf7b408658bfa8b5690feb 10.233.96.23:6379
   slots: (0 slots) slave
   replicates bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1
M: 149ffd5df2cae9cfbc55e3aff69c9575134ce162 10.233.68.255:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 288bd84283237dcfaa7f27f1e1d0148488649d97 10.233.68.4:6379
   slots: (0 slots) slave
   replicates 149ffd5df2cae9cfbc55e3aff69c9575134ce162
M: bd1a8e265fa78e93b456b9e59cbefc893f0d2ab1 10.233.96.29:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: a5fc4eeebb4c345d783f7b9d2b8695442e4cdf07 10.233.94.209:6379
   slots: (0 slots) slave
   replicates 1f4df418ac310b6d14a7920a105e060cda58275a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4. Cluster Function Tests

4.1 Stress Test

Use the benchmark tool bundled with Redis to verify that the cluster works and to get a rough performance baseline.

Test the set scenario:

Send 100,000 set requests from 20 concurrent clients; each request writes one key-value pair with a randomly generated key and a 100-byte value.

$ kubectl exec -it redis-cluster-0 -n opsxlab -- redis-benchmark -h 192.168.9.91 -p 31379 -a PleaseChangeMe2024 -t set -n 100000 -c 20 -d 100 --cluster
Cluster has 3 master nodes:

Master 0: dd42f52834303001a9e4c3036164ab0a11d4f3e1 10.233.94.243:6379
Master 1: e263c18891f96b6af4a4a7d842d8099355ec4654 10.233.96.41:6379
Master 2: 59944b8a38ecf0da5c1940676c9f7ac7fd9a926c 10.233.68.3:6379

====== SET ======
  100000 requests completed in 1.50 seconds
  20 parallel clients
  100 bytes payload
  keep alive: 1
  cluster mode: yes (3 masters)
  node [0] configuration:
    save: 3600 1 300 100 60 10000
    appendonly: yes
  node [1] configuration:
    save: 3600 1 300 100 60 10000
    appendonly: yes
  node [2] configuration:
    save: 3600 1 300 100 60 10000
    appendonly: yes
  multi-thread: yes
  threads: 3

Latency by percentile distribution:
0.000% <= 0.039 milliseconds (cumulative count 32)
50.000% <= 0.183 milliseconds (cumulative count 50034)
75.000% <= 0.311 milliseconds (cumulative count 76214)
87.500% <= 0.399 milliseconds (cumulative count 87628)
93.750% <= 0.495 milliseconds (cumulative count 94027)
96.875% <= 0.591 milliseconds (cumulative count 96978)
98.438% <= 0.735 milliseconds (cumulative count 98440)
99.219% <= 1.071 milliseconds (cumulative count 99219)
99.609% <= 1.575 milliseconds (cumulative count 99610)
99.805% <= 2.375 milliseconds (cumulative count 99805)
99.902% <= 3.311 milliseconds (cumulative count 99903)
99.951% <= 5.527 milliseconds (cumulative count 99952)
99.976% <= 9.247 milliseconds (cumulative count 99976)
99.988% <= 11.071 milliseconds (cumulative count 99988)
99.994% <= 22.751 milliseconds (cumulative count 99994)
99.997% <= 23.039 milliseconds (cumulative count 99997)
99.998% <= 23.119 milliseconds (cumulative count 99999)
99.999% <= 23.231 milliseconds (cumulative count 100000)
100.000% <= 23.231 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
17.186% <= 0.103 milliseconds (cumulative count 17186)
55.606% <= 0.207 milliseconds (cumulative count 55606)
74.870% <= 0.303 milliseconds (cumulative count 74870)
88.358% <= 0.407 milliseconds (cumulative count 88358)
94.386% <= 0.503 milliseconds (cumulative count 94386)
97.230% <= 0.607 milliseconds (cumulative count 97230)
98.247% <= 0.703 milliseconds (cumulative count 98247)
98.745% <= 0.807 milliseconds (cumulative count 98745)
98.965% <= 0.903 milliseconds (cumulative count 98965)
99.148% <= 1.007 milliseconds (cumulative count 99148)
99.254% <= 1.103 milliseconds (cumulative count 99254)
99.358% <= 1.207 milliseconds (cumulative count 99358)
99.465% <= 1.303 milliseconds (cumulative count 99465)
99.532% <= 1.407 milliseconds (cumulative count 99532)
99.576% <= 1.503 milliseconds (cumulative count 99576)
99.619% <= 1.607 milliseconds (cumulative count 99619)
99.648% <= 1.703 milliseconds (cumulative count 99648)
99.673% <= 1.807 milliseconds (cumulative count 99673)
99.690% <= 1.903 milliseconds (cumulative count 99690)
99.734% <= 2.007 milliseconds (cumulative count 99734)
99.755% <= 2.103 milliseconds (cumulative count 99755)
99.883% <= 3.103 milliseconds (cumulative count 99883)
99.939% <= 4.103 milliseconds (cumulative count 99939)
99.945% <= 5.103 milliseconds (cumulative count 99945)
99.959% <= 6.103 milliseconds (cumulative count 99959)
99.966% <= 7.103 milliseconds (cumulative count 99966)
99.974% <= 9.103 milliseconds (cumulative count 99974)
99.986% <= 10.103 milliseconds (cumulative count 99986)
99.989% <= 11.103 milliseconds (cumulative count 99989)
99.993% <= 12.103 milliseconds (cumulative count 99993)
99.998% <= 23.103 milliseconds (cumulative count 99998)
100.000% <= 24.111 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 66533.60 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.243     0.032     0.183     0.519     0.927    23.231
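When the benchmark is run repeatedly (for example, before and after tuning), the summary block is easy to extract programmatically. A small parser sketch (the regexes assume the redis-benchmark 7.x output format shown above):

```python
import re

def parse_benchmark_summary(output: str) -> dict:
    """Pull throughput and latency figures out of redis-benchmark's summary block."""
    result = {}
    m = re.search(r"throughput summary:\s*([\d.]+) requests per second", output)
    if m:
        result["rps"] = float(m.group(1))
    m = re.search(r"latency summary \(msec\):\s*\n\s*avg.*\n\s*([\d.\s]+)", output)
    if m:
        keys = ("avg", "min", "p50", "p95", "p99", "max")
        result.update(zip(keys, (float(v) for v in m.group(1).split())))
    return result

sample = """Summary:
  throughput summary: 66533.60 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.243     0.032     0.183     0.519     0.927    23.231
"""
print(parse_benchmark_summary(sample)["rps"])  # → 66533.6
```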

Other scenarios (output omitted)

  • ping
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-benchmark -h 192.168.9.91 -p 31379 -a PleaseChangeMe2024 -t ping -n 100000 -c 20 -d 100 --cluster
  • get
kubectl exec -it redis-cluster-0 -n opsxlab -- redis-benchmark -h 192.168.9.91 -p 31379 -a PleaseChangeMe2024 -t get -n 100000 -c 20 -d 100 --cluster

4.2 Failover Test

Cluster state before the test (one Master/Slave pair shown):

......
M: e6b176bc1d53bac7da548e33d5c61853ecbe1890 10.233.96.51:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e7f5d965fc592373b01b0a0b599f00b8883cdf7d 10.233.68.1:6379
   slots: (0 slots) slave
   replicates e6b176bc1d53bac7da548e33d5c61853ecbe1890
  1. Test scenario 1: manually delete a Master's Slave Pod and observe whether the Slave Pod is automatically recreated and rejoins its original Master.

After deleting the Slave, check the cluster state.

......
M: e6b176bc1d53bac7da548e33d5c61853ecbe1890 10.233.96.51:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e7f5d965fc592373b01b0a0b599f00b8883cdf7d 10.233.68.8:6379
   slots: (0 slots) slave
   replicates e6b176bc1d53bac7da548e33d5c61853ecbe1890

Result: the Slave's original IP was 10.233.68.1; after deletion it was automatically recreated with IP 10.233.68.8 and rejoined its original Master.

  2. Test scenario 2: manually delete a Master Pod and observe whether the Master Pod is automatically recreated and becomes a Master again.

After deleting the Master, check the cluster state.

......
M: e6b176bc1d53bac7da548e33d5c61853ecbe1890 10.233.96.68:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e7f5d965fc592373b01b0a0b599f00b8883cdf7d 10.233.68.8:6379
   slots: (0 slots) slave
   replicates e6b176bc1d53bac7da548e33d5c61853ecbe1890

Result: the Master's original IP was 10.233.96.51; after deletion it was automatically recreated with IP 10.233.96.68 and became a Master again.

These are only basic failover tests; add more test scenarios for production environments.
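To script checks like the ones above instead of eyeballing `--cluster check` output, the `CLUSTER NODES` command is convenient: each line has the form `<id> <ip:port@cport> <flags> <master-id> ...`. A parsing sketch (the sample below is constructed from the IDs and IPs shown in this test, for illustration):

```python
def parse_cluster_nodes(text: str) -> dict:
    """Parse `redis-cli cluster nodes` output into {node_id: {addr, role, replicates}}."""
    nodes = {}
    for line in text.strip().splitlines():
        fields = line.split()
        node_id, addr, flags, master_id = (
            fields[0], fields[1].split("@")[0], fields[2], fields[3])
        nodes[node_id] = {
            "addr": addr,
            "role": "master" if "master" in flags.split(",") else "slave",
            "replicates": None if master_id == "-" else master_id,
        }
    return nodes

sample = (
    "e6b176bc1d53bac7da548e33d5c61853ecbe1890 10.233.96.51:6379@16379 "
    "myself,master - 0 0 2 connected 5461-10922\n"
    "e7f5d965fc592373b01b0a0b599f00b8883cdf7d 10.233.68.1:6379@16379 "
    "slave e6b176bc1d53bac7da548e33d5c61853ecbe1890 0 0 2 connected\n"
)
nodes = parse_cluster_nodes(sample)
```

Diffing two snapshots of this structure, taken before and after deleting a Pod, confirms which node took over which role without manual inspection.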

5. Install a Management Client

Most developers and operators prefer a graphical Redis management tool, so this section introduces RedisInsight, the official GUI provided by Redis.

RedisInsight ships with no login or authentication, so it poses a security risk in environments with strict security requirements; use it with caution. For production, the command-line tools are recommended.

5.1 Edit the Manifests

  1. Create the manifest

Use the vi editor to create the manifest file redisinsight-deploy.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight
  labels:
    app.kubernetes.io/name: redisinsight
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: redisinsight
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redisinsight
    spec:
      containers:
        - name: redisinsight
          image: registry.opsxlab.cn:8443/redis/redisinsight:2.60
          ports:
            - name: redisinsight
              containerPort: 5540
              protocol: TCP
          resources:
            limits:
              cpu: '2'
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 500Mi
  2. Create the external access Service

We expose RedisInsight outside the K8s cluster with a NodePort Service on port 31380.

Use the vi editor to create the manifest file redisinsight-svc-external.yaml with the following content:

kind: Service
apiVersion: v1
metadata:
  name: redisinsight-external
  labels:
    app: redisinsight-external
spec:
  ports:
    - name: redisinsight
      protocol: TCP
      port: 5540
      targetPort: 5540
      nodePort: 31380
  selector:
    app.kubernetes.io/name: redisinsight
  type: NodePort

5.2 Deploy RedisInsight

  1. Create the resources

Run the following commands to create the RedisInsight resources.

kubectl apply -f redisinsight-deploy.yaml -n opsxlab
kubectl apply -f redisinsight-svc-external.yaml -n opsxlab

  2. Verify the resources

Run the following command to check the Deployment, Pod, and Service results.

$ kubectl get deploy,pod,svc -n opsxlab

5.3 Console Initialization

Open the RedisInsight console at http://192.168.9.91:31380.

On the initial settings page, check only the last option and click Submit.

Add a Redis database: click "Add Redis database".

Choose "Add Database Manually" and fill in the fields as prompted:

  • Host: the IP of any K8s node; here we use the Control-1 node's IP

  • Port: the NodePort of the Redis Service

  • Database Alias: any label you like; it is just an identifier

  • Password: the password used to connect to Redis

Click "Test Connection" to verify that Redis is reachable. Once confirmed, click "Add Redis Database".

5.4 Console Overview

The screenshots below give a quick look at the RedisInsight v2.60 management console; overall it offers noticeably fewer management features than v1.

On the Redis Databases list page, click the newly added Redis database to open its management page.

  • Overview

  • Workbench (for running Redis management commands)

  • Analytics

  • Pub-Sub

Explore the remaining management features on your own.

Disclaimer:

  • The author's knowledge is limited; despite repeated verification and review to ensure accuracy, omissions may remain. Corrections from experts in the field are welcome.
  • The content of this article has been verified only in the lab environment described here. Readers may learn from and adapt it, but it must not be used directly in production; the author assumes no responsibility for any problems arising from doing so.
