A closer look at rolling updates of k8s deployments (Part 2)

Posted by wilson排球 on 2019-02-11

1. Background

● This article takes a detailed look at how a deployment behaves during a rolling update
● The parameters involved (a YAML sketch showing where these fields live follows this list):
  livenessProbe: liveness probe; decides whether a pod is dead and needs to be restarted
  readinessProbe: readiness probe; decides whether a pod is able to serve traffic normally
  maxSurge: the maximum number of pods that may exist above the desired replica count during a rolling update
  maxUnavailable: the maximum number of pods that may be unavailable during a rolling update
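
For orientation, the sketch below shows where these four fields live in a Deployment manifest (a minimal sketch with illustrative values; it is not one of the manifests used later in this article):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: probe-strategy-example
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 pod above the desired replica count during the update
      maxUnavailable: 0   # no pod may be unavailable during the update
  template:
    metadata:
      labels:
        app: probe-strategy-example
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        livenessProbe:        # if this fails, the kubelet restarts the container
          tcpSocket:
            port: 80
          periodSeconds: 10
        readinessProbe:       # traffic is only routed to the pod once this succeeds
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10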

2. Environment

Component    Version
OS           Ubuntu 18.04.1 LTS
docker       18.06.0-ce

3. Preparing the images and yaml file

First, prepare two images with different versions for testing (two nginx images have already been built and pushed to Aliyun):

docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
docker pull registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1

Both images provide the same service; the only difference is that nginx:delay_v1 waits 20 seconds before starting nginx.

root@k8s-master:~# docker run -d --rm -p 10080:80 nginx:v1
e88097841c5feef92e4285a2448b943934ade5d86412946bc8d86e262f80a050
root@k8s-master:~# curl http://127.0.0.1:10080
----------
version: v1
hostname: f5189a5d3ad3
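
The delayed image can be thought of as wrapping nginx in an entrypoint along these lines (a hypothetical sketch; the actual Dockerfile of nginx:delay_v1 is not shown in this article):

#!/bin/sh
# Hypothetical entrypoint for nginx:delay_v1:
# sleep 20 seconds before starting nginx in the foreground,
# simulating a service that is slow to become ready.
sleep 20
exec nginx -g 'daemon off;'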

The yaml file:

root@k8s-master:~# more roll_update.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: update-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: roll-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
    selector:
      app: roll-update
    ports:
    - protocol: TCP
      port: 10080
      targetPort: 80

4. livenessProbe and readinessProbe

livenessProbe: the liveness probe, used mainly to decide whether a pod needs to be restarted
readinessProbe: the readiness probe, used to decide whether a pod is ready to serve traffic

● During a rolling update, pods are dynamically deleted and then recreated. The liveness probe guarantees that enough live pods are always serving; as soon as the pod count drops, k8s immediately brings up new ones
● However, while a pod is starting up, the service inside it is still coming up and not yet usable. If traffic is routed to it at that moment, requests will fail

Let's simulate this scenario.

First, apply the configuration file above:

root@k8s-master:~# kubectl apply -f roll_update.yaml
deployment.extensions "update-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl get pod -owide
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
update-deployment-7db77f7cc6-c4s2v   1/1       Running   0          28s       10.10.235.232   k8s-master
update-deployment-7db77f7cc6-nfgtd   1/1       Running   0          28s       10.10.36.82     k8s-node1
update-deployment-7db77f7cc6-tflfl   1/1       Running   0          28s       10.10.169.158   k8s-node2
root@k8s-master:~# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
nginx-service   ClusterIP   10.254.254.199   <none>        10080/TCP   1m

Open another terminal and test the availability of the service (a loop that fetches the nginx page once per second):

root@k8s-master:~# while :; do curl http://10.254.254.199:10080; sleep 1; done
----------
version: v1
hostname: update-deployment-7db77f7cc6-nfgtd
----------
version: v1
hostname: update-deployment-7db77f7cc6-c4s2v
----------
version: v1
hostname: update-deployment-7db77f7cc6-tflfl
----------
version: v1
hostname: update-deployment-7db77f7cc6-nfgtd
...

Now patch the image to nginx:delay_v1. This image delays the start of nginx: it sleeps for 20s before launching the nginx service. This simulates the startup window in which the pod already exists but is not actually serving yet.

root@k8s-master:~# kubectl patch deployment update-deployment --patch '{"metadata":{"annotations":{"kubernetes.io/change-cause":"update version to v2"}} ,"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1"}]}}}}'
deployment.extensions "update-deployment" patched
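
As an aside, the same image switch could be done with kubectl set image, which is easier to type; the JSON patch form is used here because it also sets the change-cause annotation in the same call. The following command is an equivalent alternative, not part of the original log:

kubectl set image deployment/update-deployment nginx=registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1

Meanwhile, the curl loop in the other terminal shows: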
...
----------
version: v1
hostname: update-deployment-7db77f7cc6-h6hvt
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
curl: (7) Failed to connect to 10.254.254.199 port 10080: Connection refused
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-6th87
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-n22vz
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-njmpz
----------
version: delay_v1
hostname: update-deployment-d788c7dc6-6th87

As you can see, because of the delayed start nginx is not yet ready to serve, but traffic is already being routed to the new pods, so the service is unavailable for a while.

Adding a readinessProbe is therefore essential:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: update-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: roll-update
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1
        imagePullPolicy: Always
        readinessProbe:            # do not route traffic to the pod until this probe succeeds
          tcpSocket:
            port: 80
          initialDelaySeconds: 5   # first probe 5s after the container starts
          periodSeconds: 10        # then probe every 10s
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
    selector:
      app: roll-update
    ports:
    - protocol: TCP
      port: 10080
      targetPort: 80
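
A tcpSocket check only verifies that the port accepts connections. When the service should be checked at the HTTP level, an httpGet probe is another common choice (an alternative sketch, not the configuration used in this article):

        readinessProbe:
          httpGet:
            path: /                # ready once this URL returns a successful response
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10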

Repeat the steps above: first create nginx:v1, then patch the image to nginx:delay_v1.

root@k8s-master:~# kubectl apply -f roll_update.yaml
deployment.extensions "update-deployment" created
service "nginx-service" created
root@k8s-master:~# kubectl patch deployment update-deployment --patch '{"metadata":{"annotations":{"kubernetes.io/change-cause":"update version to v2"}} ,"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:delay_v1"}]}}}}'
deployment.extensions "update-deployment" patched
root@k8s-master:~# kubectl get pod -owide
NAME                                 READY     STATUS        RESTARTS   AGE       IP              NODE
busybox                              1/1       Running       0          45d       10.10.235.255   k8s-master
lifecycle-demo                       1/1       Running       0          32d       10.10.169.186   k8s-node2
private-reg                          1/1       Running       0          92d       10.10.235.209   k8s-master
update-deployment-54d497b7dc-4mlqc   0/1       Running       0          13s       10.10.169.178   k8s-node2
update-deployment-54d497b7dc-pk4tb   0/1       Running       0          13s       10.10.36.98     k8s-node1
update-deployment-6d5d7c9947-l7dkb   1/1       Terminating   0          1m        10.10.169.177   k8s-node2
update-deployment-6d5d7c9947-pbzmf   1/1       Running       0          1m        10.10.36.97     k8s-node1
update-deployment-6d5d7c9947-zwt4z   1/1       Running       0          1m        10.10.235.246   k8s-master

● Because a readinessProbe is set, the new pods, although started, are not put into service immediately, which is why they show READY: 0/1
● One pod also stays in the Terminating state for a while, because the rolling-update constraints require that a minimum number of pods remain available (the rollout can also be watched with the commands shown below)
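
The progress of a rolling update can also be followed with the standard rollout commands (not part of the original log):

kubectl rollout status deployment/update-deployment    # blocks until the rollout completes or fails
kubectl rollout history deployment/update-deployment   # lists revisions and their change-cause annotations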

Looking at the curl loop again, the image was updated smoothly to nginx:delay_v1 without a single error:

root@k8s-master:~# while :; do curl http://10.254.66.136:10080; sleep 1; done
...
version: v1
hostname: update-deployment-6d5d7c9947-pbzmf
----------
version: v1
hostname: update-deployment-6d5d7c9947-zwt4z
----------
version: v1
hostname: update-deployment-6d5d7c9947-pbzmf
----------
version: v1
hostname: update-deployment-6d5d7c9947-zwt4z
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-pk4tb
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-4mlqc
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-pk4tb
----------
version: delay_v1
hostname: update-deployment-54d497b7dc-4mlqc
...

5. maxSurge and maxUnavailable

● A rolling update can proceed in different orders: delete an old pod first and then add a new one, or add a new pod first and then delete an old one. Throughout the process the service must remain available (that is, the livenessProbe and readinessProbe must keep passing)
● Which order is used, and at what granularity, is controlled by maxSurge and maxUnavailable
● With a desired replica count of 3 (see the patch example after this list):
  maxSurge=1, maxUnavailable=0: at most 4 (3+1) pods may exist, and at least 3 (3-0) pods must be serving at all times. A new pod is created first; once it is ready, an old pod is deleted, and so on until all pods are updated
  maxSurge=0, maxUnavailable=1: at most 3 (3+0) pods may exist, and at least 2 (3-1) pods must be serving. An old pod is deleted first, then a new pod is created, and so on until the update is complete
● In the end, both the maxSurge and maxUnavailable constraints must be satisfied. If both are 0, no update is possible at all: pods may neither be deleted nor added, so the condition can never be met
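
For the first case above, the strategy could be set with a patch in the same style as the earlier image update (illustrative; the manifests used in this article do not set a strategy explicitly):

kubectl patch deployment update-deployment --patch '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'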

6. Summary

● This article covered how maxSurge, maxUnavailable, livenessProbe and readinessProbe are used during a deployment rolling update
● One open question remains. In a large system, a service may run many pods (say 100); a rolling update then inevitably leaves the pods on mixed versions for a while (some old, some new), so users may well see inconsistent results across requests until the update finishes. This problem will be discussed in a later article


That concludes this article.
My knowledge is limited; if I have missed or muddled anything, please do not hesitate to point it out…
