kube-apiserver High Availability with keepalived + haproxy

Published 2024-12-08

Author: https://github.com/daemon365/p/18592136


  • Why high availability
  • Environment
  • Installation
  • Configure keepalived
    • Configuration file
    • Testing
    • Configure haproxy
  • Install the kubernetes cluster
  • Testing

Why high availability

In a production environment, a kubernetes cluster has multiple master nodes, each running a kube-apiserver, so the control plane itself is redundant. Clients, however, have to reach kube-apiserver through a single IP or domain name, and that becomes a single point of failure. The officially recommended approach is to put a load balancer in front of the kube-apiservers, but often no such load balancer is available. There are workarounds: forwarding through nginx, but then nginx itself is a single point of failure; or a DNS name, but DNS changes take too long to propagate for an emergency. So this post describes how to make kube-apiserver highly available with keepalived + haproxy. The idea is a shared virtual IP (VIP): when the node currently holding it goes down, the VIP automatically fails over to a backup node.

Environment

  • master1: 192.168.31.203
  • master2: 192.168.31.34
  • master3: 192.168.31.46
  • worker1: 192.168.31.25
  • VIP (virtual IP): 192.168.31.230

Installation



```
sudo apt install keepalived haproxy

systemctl enable haproxy
systemctl restart haproxy

systemctl enable keepalived
# keepalived will complain because it has no configuration yet; ignore that for now
systemctl restart keepalived
```
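To confirm that both services were installed and enabled correctly, a quick sanity check could look like this (a minimal sketch; the printed version strings will of course differ):

```
# Verify the binaries and the systemd units
haproxy -v
keepalived --version
systemctl is-enabled haproxy keepalived
```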


Configure keepalived

Configuration file

Edit the keepalived configuration file, /etc/keepalived/keepalived.conf.

master1:



```
# Health check: verify that the haproxy process is still alive
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2   # run the check every 2 seconds
    weight 3     # priority bonus while the check succeeds
}

vrrp_instance haproxy-vip {
    state MASTER                    # MASTER / BACKUP: one MASTER, the others BACKUP
    priority 100                    # higher on the stronger machine; the three masters use 100 / 99 / 98
    interface enp0s3                # network interface name
    virtual_router_id 51            # VRRP router id, the default is fine
    advert_int 1                    # advertisement interval between keepalived peers, in seconds
    authentication {
        auth_type PASS
        auth_pass test_k8s
    }
    unicast_src_ip 192.168.31.203   # this node's address for talking to the other keepalived peers
    unicast_peer {
        192.168.31.34               # master2's IP address
        192.168.31.46               # master3's IP address
    }

    virtual_ipaddress {
        192.168.31.230              # must be in the same LAN as all the other IPs
    }

    track_script {
        chk_haproxy
    }
}
```
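The health check works because `killall -0 haproxy` sends signal 0, which is never delivered: it only asks the kernel whether a process with that name exists, so the script exits 0 while haproxy runs and non-zero once it is gone. While the check succeeds, keepalived adds the `weight` bonus to the node's priority, so a backup whose haproxy is healthy (99 + 3 = 102) outranks a master whose haproxy has died (100) and takes over the VIP. A quick way to see the check's behaviour from a shell (just a demonstration, not part of the keepalived config):

```
killall -0 haproxy; echo $?   # 0 while haproxy is running

systemctl stop haproxy
killall -0 haproxy; echo $?   # non-zero: no such process
systemctl start haproxy
```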


master2:



```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 3
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 99
    interface enp0s3
    virtual_router_id 51
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test_k8s
    }
    unicast_src_ip 192.168.31.34
    unicast_peer {
        192.168.31.203
        192.168.31.46
    }

    virtual_ipaddress {
        192.168.31.230
    }

    track_script {
        chk_haproxy
    }
}
```


master3:



```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 3
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 98
    interface enp0s3
    virtual_router_id 51
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test_k8s
    }
    unicast_src_ip 192.168.31.46
    unicast_peer {
        192.168.31.203
        192.168.31.34
    }

    virtual_ipaddress {
        192.168.31.230
    }

    track_script {
        chk_haproxy
    }
}
```
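The three files differ only in `state`, `priority`, `unicast_src_ip`, and the peer list, so they can also be generated from a single template. A minimal sketch (the variable values shown are for master1 and are purely illustrative; adjust them per node):

```
STATE=MASTER PRIORITY=100 SRC_IP=192.168.31.203 PEERS=$'192.168.31.34\n        192.168.31.46'

cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 3
}

vrrp_instance haproxy-vip {
    state ${STATE}
    priority ${PRIORITY}
    interface enp0s3
    virtual_router_id 51
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test_k8s
    }
    unicast_src_ip ${SRC_IP}
    unicast_peer {
        ${PEERS}
    }
    virtual_ipaddress {
        192.168.31.230
    }
    track_script {
        chk_haproxy
    }
}
EOF
```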


Testing

Restart keepalived on all nodes. The virtual IP will land on master1, since it has the highest priority.



```
# master1
ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:ca:59:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.203/24 metric 100 brd 192.168.31.255 scope global dynamic enp0s3
       valid_lft 41983sec preferred_lft 41983sec
    inet 192.168.31.230/32 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feca:5986/64 scope link
       valid_lft forever preferred_lft forever
```


Now stop haproxy or keepalived on master1:



```
systemctl stop haproxy
# Check the interface again: the virtual IP is gone
ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:ca:59:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.203/24 metric 100 brd 192.168.31.255 scope global dynamic enp0s3
       valid_lft 41925sec preferred_lft 41925sec
    inet6 fe80::a00:27ff:feca:5986/64 scope link
       valid_lft forever preferred_lft forever

# On the master with the second-highest priority (master2), check the interface
ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:11:af:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.34/24 metric 100 brd 192.168.31.255 scope global dynamic enp0s3
       valid_lft 41857sec preferred_lft 41857sec
    inet 192.168.31.230/32 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe11:af4f/64 scope link
       valid_lft forever preferred_lft forever

# Start haproxy on master1 again and the VIP moves back
```
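During these tests it is handy to watch the VIP and the VRRP state transitions live; for example (same interface name as assumed above):

```
# Watch the VIP appear on / disappear from this node
watch -n1 'ip -4 addr show enp0s3 | grep 192.168.31.230'

# Follow keepalived's MASTER/BACKUP/FAULT transitions
journalctl -u keepalived -f
```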


Configure haproxy

Forward requests arriving on port 16443 to port 6443, the port the kube-apiserver on each of the three masters exposes. A separate front-end port is needed because haproxy runs on the same nodes as kube-apiserver, which is already listening on 6443.

/etc/haproxy/haproxy.cfg



```
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /var/lib/haproxy/stats

defaults
    log global
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend kube-apiserver
    bind *:16443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.31.203:6443 check
    server kube-apiserver-2 192.168.31.34:6443 check
    server kube-apiserver-3 192.168.31.46:6443 check
```
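Before restarting haproxy it is worth validating the file, and afterwards checking that the new frontend is actually listening:

```
# Check the configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg

systemctl restart haproxy

# Confirm haproxy listens on the frontend port
ss -lntp | grep 16443
```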


Install the kubernetes cluster

master1



```
kubeadm init --image-repository registry.aliyuncs.com/google_containers --control-plane-endpoint=192.168.31.230:16443 --v=10
```
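Note that the control-plane join below uses `--certificate-key`, which only works if the control-plane certificates have been uploaded to the cluster first. A hedged variant of the init for that case (exact flags depend on your kubeadm version):

```
# --upload-certs stores the control-plane certs in a Secret and prints the matching certificate key
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --control-plane-endpoint=192.168.31.230:16443 \
    --upload-certs --v=10

# If the certificate key has expired (it is valid for two hours), generate a new one later with:
kubeadm init phase upload-certs --upload-certs
```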


master2 and master3 join the cluster:



```
kubeadm join 192.168.31.230:16443 --token rxblci.ddh60vl370wjgtn7 \
    --discovery-token-ca-cert-hash sha256:d712016d5b8ba4ae5c4a1bda8b6ab1944c13a04757d2c488dd0aefcfd1af0157 \
    --certificate-key c398d693c6ce9b664634c9b670f013da3010580c00bd444caf7d0a5a81e803f5 \
    --control-plane --v=10
```


The worker joins the cluster:



```
kubeadm join 192.168.31.230:16443 --token rxblci.ddh60vl370wjgtn7 \
    --discovery-token-ca-cert-hash sha256:d712016d5b8ba4ae5c4a1bda8b6ab1944c13a04757d2c488dd0aefcfd1af0157
```
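If the bootstrap token has expired (tokens are valid for 24 hours by default), a fresh worker join command can be printed from any existing control-plane node:

```
kubeadm token create --print-join-command
```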


Check the cluster status:



```
kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master1   Ready      control-plane   21m     v1.28.2
master2   Ready      control-plane   3m46s   v1.28.12
master3   Ready      control-plane   2m12s   v1.28.12
worker1   Ready      <none>          5s      v1.28.2
```
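It is also worth confirming that kubectl really goes through the VIP rather than a single apiserver; the server address in the active kubeconfig should be the VIP on port 16443:

```
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Expected: https://192.168.31.230:16443
```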


Testing



```
# Stop kubelet and kube-apiserver on master1
systemctl stop kubelet
sudo kill -9 $(pgrep kube-apiserver)

kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   25m     v1.28.2
master2   Ready      control-plane   7m40s   v1.28.12
master3   Ready      control-plane   6m6s    v1.28.12
worker1   Ready      <none>          3m59s   v1.28.2


# Stop haproxy on master1
systemctl stop haproxy
root@master1:/home/zhy# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   26m     v1.28.2
master2   Ready      control-plane   9m12s   v1.28.12
master3   Ready      control-plane   7m38s   v1.28.12
worker1   Ready      <none>          5m31s   v1.28.2

# Stop keepalived on master2
kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   28m     v1.28.2
master2   Ready      control-plane   10m     v1.28.12
master3   Ready      control-plane   9m12s   v1.28.12
worker1   Ready      <none>          7m5s    v1.28.2

# The virtual IP has moved to master3
ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f1:b5:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.46/24 metric 100 brd 192.168.31.255 scope global dynamic enp0s3
       valid_lft 41021sec preferred_lft 41021sec
    inet 192.168.31.230/32 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef1:b5ae/64 scope link
       valid_lft forever preferred_lft forever
```
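Even with kube-apiserver and haproxy stopped on master1 and keepalived stopped on master2, the VIP endpoint should still answer, because haproxy on master3 now owns the VIP and forwards to the surviving apiservers. Assuming anonymous access to /version is allowed (the default RBAC permits it), a quick probe:

```
curl -k https://192.168.31.230:16443/version
```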

