In the previous post we looked at the lifecycle of Pod resources on k8s, liveness and readiness probing, and resource limits; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/14143610.html. Today we'll look at Pod controllers.

On k8s, controllers are the cluster's "brain". As mentioned at the start of this k8s series, controllers are mainly responsible for creating and managing resources on k8s: whenever a resource no longer matches the state the user defined, the controller tries, by restarting or recreating it, to bring its state back in line with the user's definition. There are many kinds of controllers on k8s, for example pod controllers, the service controller, the endpoint controller, and so on; different kinds of controllers have different functions and roles. A pod controller, for instance, is a controller that manages pod resources. Pod controllers themselves come in many types. Classified by the application running in the pod's containers, there are controllers for stateful and for stateless applications; classified by whether the application runs as a daemon, there are daemon and non-daemon controllers. Among stateless-application controllers the most commonly used are ReplicaSet and Deployment; the common stateful-application controller is StatefulSet; the most common daemon controller is DaemonSet; the non-daemon controller is the Job controller, and for Job-type workloads that must run periodically there is the CronJob controller.
1. The ReplicaSet controller

The ReplicaSet controller's job is to ensure that the number of Pod replicas precisely matches the user's desired count at all times. When the controller starts, it first looks up the Pod objects in the cluster that match its label selector; whenever the number of active pods differs from the desired count, it deletes the surplus or creates the shortfall, building new pods from the pod template we define in the manifest.

Example: defining a ReplicaSet controller
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
Tip: when defining a ReplicaSet controller, the apiVersion field takes the value apps/v1 and kind is ReplicaSet; both are fixed. The metadata that follows mainly defines the name and namespace. spec mainly defines replicas, selector, and template. replicas takes an integer, the desired number of pod replicas. selector defines the label selector; its value is an object, in which the matchLabels field matches labels exactly and takes a map. Besides exact label matching there is also matchExpressions, which matches by expression and takes a list of objects. In short, there are two ways to define a label selector: the first, matchLabels, specifies one or more labels, each a key-value pair; the second, matchExpressions, specifies expressions, each an object defining a key field (the label key's name), an operator field (the operator), and a values field. key and operator are both strings, operator being one of In, NotIn, Exists and DoesNotExist; values is a list of strings. Then comes the pod template, defined with the template field, whose value is an object: its metadata field defines the template's metadata, which must include the labels attribute, usually the same labels as in the selector; its spec field defines the desired state of the pod, most importantly the names, images and so on of the pod's containers. An example of the expression form follows below.
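For comparison, here is a minimal sketch of the same selector written with matchExpressions instead of matchLabels; the key, operator and values simply mirror this post's example, and any of the operators listed above can be substituted:

selector:
  matchExpressions:
  # select pods whose app label value is in the given list
  - key: app
    operator: In
    values:
    - nginx-pod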
Applying the resource manifest
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo created
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   3         3         3       9s
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   3         3         3       17s   nginx        nginx:1.14-alpine   app=nginx-pod
[root@master01 ~]#
Tip: rs is the short name for ReplicaSet. The output above shows the controller has been created; the current pod count is 3, the desired count is 3, and 3 are ready.
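As an aside, if you forget a resource's short name, kubectl can list it; assuming any working cluster, the following prints the resource types in the apps API group, and the SHORTNAMES column shows rs next to replicasets:

# list resource types in the apps API group, including short names
kubectl api-resources --api-group=apps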
Checking the pods
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          2m57s   nginx-pod
[root@master01 ~]#
Tip: three pods were created in the default namespace, each carrying the label app=nginx-pod.

Test: change one pod's label to app=ngx and see whether the controller creates a new pod labeled app=nginx-pod.
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          5m48s   nginx-pod
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=ngx --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE    APP
replicaset-demo-qv8tp   1/1     Running   0          4s     nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          6m2s   ngx
[root@master01 ~]#
Tip: as soon as we overwrote one pod's label with app=ngx, the controller created a new pod from the pod template.

Test: change the pod's label back to app=nginx-pod and see whether the controller deletes a pod.
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-qv8tp   1/1     Running   0          2m35s   nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m33s   ngx
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=nginx-pod --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS        RESTARTS   AGE     APP
replicaset-demo-qv8tp   0/1     Terminating   0          2m50s   nginx-pod
replicaset-demo-rsl7q   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-twknl   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running       0          8m48s   nginx-pod
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m57s   nginx-pod
[root@master01 ~]#
Tip: when the cluster holds more pods with the matching label than the user desires, the controller deletes the surplus pods carrying that label. These tests show that the ReplicaSet controller relies on its label selector to judge whether the number of pods in the cluster matches the user-defined count; when it doesn't, the controller deletes or creates pods until the count precisely matches the desired number.

Viewing the rs controller's details
[root@master01 ~]# kubectl describe rs replicaset-demo
Name:         replicaset-demo
Namespace:    default
Selector:     app=nginx-pod
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-twknl
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-vzdbb
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-rsl7q
  Normal  SuccessfulCreate  15m   replicaset-controller  Created pod: replicaset-demo-qv8tp
  Normal  SuccessfulDelete  12m   replicaset-controller  Deleted pod: replicaset-demo-qv8tp
[root@master01 ~]#
Scaling the rs-managed pod replicas out and in
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=6
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   6         6         6       32m
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=4
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   4         4         4       32m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS        RESTARTS   AGE
replicaset-demo-5t9tt   0/1     Terminating   0          33s
replicaset-demo-j75hk   1/1     Running       0          33s
replicaset-demo-rsl7q   1/1     Running       0          33m
replicaset-demo-twknl   1/1     Running       0          33m
replicaset-demo-vvqfw   0/1     Terminating   0          33s
replicaset-demo-vzdbb   1/1     Running       0          33m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          41s
replicaset-demo-rsl7q   1/1     Running   0          33m
replicaset-demo-twknl   1/1     Running   0          33m
replicaset-demo-vzdbb   1/1     Running   0          33m
[root@master01 ~]#
Tip: kubectl scale can scale a controller's pod replicas out or in. Besides the command shown above, you can also edit the replicas field in the manifest directly, then re-apply the manifest with kubectl apply.

Scaling out the pod replicas by changing the replicas field in the manifest
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   7         7         7       35m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          3m33s
replicaset-demo-k2n9g   1/1     Running   0          9s
replicaset-demo-n7fmk   1/1     Running   0          9s
replicaset-demo-q4dc6   1/1     Running   0          9s
replicaset-demo-rsl7q   1/1     Running   0          36m
replicaset-demo-twknl   1/1     Running   0          36m
replicaset-demo-vzdbb   1/1     Running   0          36m
[root@master01 ~]#
Updating the pod version

Method 1: change the image version in the manifest's pod template, then re-apply the manifest with kubectl apply
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       55m   nginx        nginx:1.16-alpine   app=nginx-pod
[root@master01 ~]#
Tip: the command output above shows the image version as 1.16.

Verify: check the pods; has the container image inside them actually changed to 1.16?
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]#
Tip: judging by the pods' creation times, no pod was updated.

Test: delete a pod and see whether the replacement pod's container image is updated.
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]# kubectl delete pod/replicaset-demo-vzdbb
pod "replicaset-demo-vzdbb" deleted
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
replicaset-demo-9wqj9   0/1     ContainerCreating   0          10s
replicaset-demo-j75hk   1/1     Running             0          26m
replicaset-demo-k2n9g   1/1     Running             0          23m
replicaset-demo-n7fmk   1/1     Running             0          23m
replicaset-demo-q4dc6   1/1     Running             0          23m
replicaset-demo-rsl7q   1/1     Running             0          58m
replicaset-demo-twknl   1/1     Running             0          58m
[root@master01 ~]# kubectl describe pod/replicaset-demo-9wqj9 |grep Image
    Image:          nginx:1.16-alpine
    Image ID:       docker-pullable://nginx@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad
[root@master01 ~]#
Tip: after we deleted one pod, the controller created a new one, and that new pod runs the new image version. As the test shows, when the image version in an rs controller's pod template changes and the number of pods in the cluster already matches the desired count, the rs does not update the existing pods; only newly created pods pick up the new version. In other words, for an rs to roll pods to a new version, the old pods must be deleted first; a shortcut for that follows below.
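Building on that, one blunt way to force an rs onto the new version is to delete every pod matching its selector and let the controller rebuild them all from the current template; a sketch, assuming you can tolerate all replicas restarting at roughly the same time (this is not a rolling update):

# delete all pods carrying the rs's selector label; the rs recreates them with the new image
kubectl delete pod -l app=nginx-pod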
Method 2: updating the pod version with a command
[root@master01 ~]# kubectl set image rs replicaset-demo nginx=nginx:1.18-alpine
replicaset.apps/replicaset-demo image updated
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       72m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-9wqj9   1/1     Running   0          13m
replicaset-demo-j75hk   1/1     Running   0          40m
replicaset-demo-k2n9g   1/1     Running   0          36m
replicaset-demo-n7fmk   1/1     Running   0          36m
replicaset-demo-q4dc6   1/1     Running   0          36m
replicaset-demo-rsl7q   1/1     Running   0          72m
replicaset-demo-twknl   1/1     Running   0          72m
[root@master01 ~]#
Tip: with an rs controller, whether you change the image version with a command or in the manifest's pod template, it will not update pods on its own as long as the desired number of pods already exists; new-version pods are only created after the old ones are deleted by hand.

2. The Deployment controller

A Deployment controller is defined much like an rs controller, but deploy is more powerful: it supports rolling updates and lets the user define the update strategy by hand. Under the hood, a deploy controller manages pods through an rs controller; that is, creating a deploy controller automatically creates an rs controller. A pod created through a Deployment is named after the deploy controller plus "-" plus the pod template hash plus "-" plus a random string, while the rs controller's name is exactly the deploy controller's name plus "-" plus the pod template hash; so a pod's name is the rs controller's name plus "-" plus a random string, as the check below shows.
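The pod template hash is also attached to the pods and the rs as a label named pod-template-hash, so the naming claim can be verified directly; for instance, the -L flag prints that label as an extra column:

# show each pod's pod-template-hash label next to its name
kubectl get pods -L pod-template-hash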
Example: creating a Deployment controller
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
Applying the manifest
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           10s   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]#
Verify: was an rs controller created?
[root@master01 ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-6d795f958b   3         3         3       57s
replicaset-demo          7         7         7       84m
[root@master01 ~]#
Tip: an rs controller named deploy-demo-6d795f958b has indeed been created.

Verify: check the pods; is each pod's name the rs controller's name plus "-" plus a random string?
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-bppjr   1/1     Running   0          2m16s
deploy-demo-6d795f958b-mxwkn   1/1     Running   0          2m16s
deploy-demo-6d795f958b-sh76g   1/1     Running   0          2m16s
replicaset-demo-9wqj9          1/1     Running   0          26m
replicaset-demo-j75hk          1/1     Running   0          52m
replicaset-demo-k2n9g          1/1     Running   0          49m
replicaset-demo-n7fmk          1/1     Running   0          49m
replicaset-demo-q4dc6          1/1     Running   0          49m
replicaset-demo-rsl7q          1/1     Running   0          85m
replicaset-demo-twknl          1/1     Running   0          85m
[root@master01 ~]#
Tip: there are three pods named deploy-demo-6d795f958b- plus a random string.

Updating the pod version
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           5m45s   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-95cc58f4d-45l5c   1/1     Running   0          43s
deploy-demo-95cc58f4d-6bmb6   1/1     Running   0          45s
deploy-demo-95cc58f4d-7d5r5   1/1     Running   0          29s
replicaset-demo-9wqj9         1/1     Running   0          30m
replicaset-demo-j75hk         1/1     Running   0          56m
replicaset-demo-k2n9g         1/1     Running   0          53m
replicaset-demo-n7fmk         1/1     Running   0          53m
replicaset-demo-q4dc6         1/1     Running   0          53m
replicaset-demo-rsl7q         1/1     Running   0          89m
replicaset-demo-twknl         1/1     Running   0          89m
[root@master01 ~]#
Tip: with a deploy controller, merely changing the image version in the pod template makes the pods update automatically.

Updating the pod version with a command
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.18-alpine
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl get deploy
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   3/3     1            3           9m5s
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     1            3           9m11s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           9m38s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-567b54cd6-6h97c   1/1     Running   0          28s
deploy-demo-567b54cd6-j74t4   1/1     Running   0          27s
deploy-demo-567b54cd6-wcccx   1/1     Running   0          49s
replicaset-demo-9wqj9         1/1     Running   0          34m
replicaset-demo-j75hk         1/1     Running   0          60m
replicaset-demo-k2n9g         1/1     Running   0          56m
replicaset-demo-n7fmk         1/1     Running   0          56m
replicaset-demo-q4dc6         1/1     Running   0          56m
replicaset-demo-rsl7q         1/1     Running   0          92m
replicaset-demo-twknl         1/1     Running   0          92m
[root@master01 ~]#
Tip: with a deploy controller, as soon as the image version in the pod template is modified, the pods are rolled to the specified version.

Viewing the rs revision history
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       3m50s   nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       12m     nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       7m27s   nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       95m     nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: as a deploy controller updates the pod version, it keeps every historical rs: whenever the pod template's hash changes, a new rs is created. Unlike a standalone rs controller, the historical rs objects run no pods; only the current rs does.

Viewing the rollout history
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
[root@master01 ~]#
Tip: three revisions are listed here, with no recorded cause. That is because nothing was recorded when we updated the pod version; to record the reason for an update, just append the --record option to the corresponding command.

Example: recording the update command in the rollout history
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.16.yaml --record
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]#
Tip: with --record added to the update command, the rollout history now shows the command that produced each revision.
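For what it's worth, CHANGE-CAUSE is simply read from the kubernetes.io/change-cause annotation on the Deployment, so instead of --record you can also set a message of your own; a sketch, with an arbitrary message text:

# record a human-readable change cause for the current revision
kubectl annotate deploy deploy-demo kubernetes.io/change-cause="upgrade nginx to 1.16"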
Rolling back to the previous revision
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           33m   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       24m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       33m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    3         3         3       28m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       116m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           34m   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       26m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   3         3         3       35m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       29m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       118m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: after running kubectl rollout undo deploy/deploy-demo, the version rolled back from 1.16 to 1.14, and the 1.14 revision moved to the end of the rollout history as the newest entry.

Rolling back to a specific revision
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo --to-revision=3
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
7         <none>
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           42m   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       33m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       42m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       36m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       125m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: to roll back to a particular revision in the history, give its number with the --to-revision option; a way to inspect a revision first is sketched below.
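If you are unsure what a revision contains before rolling back to it, you can print that revision's pod template first with the --revision flag; for example:

# inspect the pod template recorded for revision 5
kubectl rollout history deploy/deploy-demo --revision=5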
Viewing the deploy controller's details
[root@master01 ~]# kubectl describe deploy deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 7
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.18-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-567b54cd6 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 3
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 0
  Normal  ScalingReplicaSet  55m                 deployment-controller  Scaled up replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  54m                 deployment-controller  Scaled down replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  29m (x3 over 64m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  22m (x14 over 54m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
[root@master01 ~]#
Tip: the deploy controller's details show the pod template, the rollback activity, the default update strategy, and so on.

Customizing the rolling update strategy
[root@master01 ~]# cat deploy-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  minReadySeconds: 5
[root@master01 ~]#
Tip: a rolling update strategy is defined with the strategy field, whose value is an object. Its type field selects the strategy, of which there are two. The first is Recreate, which terminates all existing old-version pods before any new-version pods are created. The second is RollingUpdate, the one whose parameters we tune by hand: under rollingUpdate, maxSurge is the maximum number of pods allowed above the desired count (i.e. how many extra new pods may be created during the update), and maxUnavailable is the maximum number allowed below the desired count (i.e. how many old pods may be deleted at a time). Finally, minReadySeconds does not belong to the update strategy; it is a field of spec itself, setting the minimum time a pod must stay ready before it counts as available. The strategy above therefore means: use the RollingUpdate type, allow at most 2 pods above the desired count and at most 1 below it, and require a pod to be ready for at least 5 seconds.
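Put as a formula, during a rolling update the total pod count always stays within [replicas - maxUnavailable, replicas + maxSurge]. With the strategy above and the manifest's 3 replicas, that is between 3 - 1 = 2 and 3 + 2 = 5 pods; once we scale to 10 replicas below, the window becomes 9 to 12 pods, which is exactly what the watch output later shows.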
Applying the manifest
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.14.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl describe deploy/deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 8
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  1 max unavailable, 2 max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-6d795f958b (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  31m (x14 over 64m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  41s (x4 over 73m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  34s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
[root@master01 ~]#
Tip: the deploy controller's update strategy has changed to the one we defined. To make the effect of the update easier to observe, let's first scale the pod count up to 10.

Scaling out the pod replicas
[root@master01 ~]# kubectl scale deploy/deploy-demo --replicas=10
deployment.apps/deploy-demo scaled
[root@master01 ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          3m33s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          8s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          8s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          3m33s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          8s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          3m33s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          8s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          8s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          8s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          8s
replicaset-demo-9wqj9          1/1     Running   0          100m
replicaset-demo-j75hk          1/1     Running   0          126m
replicaset-demo-k2n9g          1/1     Running   0          123m
replicaset-demo-n7fmk          1/1     Running   0          123m
replicaset-demo-q4dc6          1/1     Running   0          123m
replicaset-demo-rsl7q          1/1     Running   0          159m
replicaset-demo-twknl          1/1     Running   0          159m
[root@master01 ~]#
Watching the update process
[root@master01 ~]# kubectl get pod -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          5m18s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          113s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          113s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          5m18s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          113s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          5m18s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          113s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          113s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          113s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          113s
replicaset-demo-9wqj9          1/1     Running   0          102m
replicaset-demo-j75hk          1/1     Running   0          128m
replicaset-demo-k2n9g          1/1     Running   0          125m
replicaset-demo-n7fmk          1/1     Running   0          125m
replicaset-demo-q4dc6          1/1     Running   0          125m
replicaset-demo-rsl7q          1/1     Running   0          161m
replicaset-demo-twknl          1/1     Running   0          161m
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-6d795f958b-mbrlw   1/1     Terminating         0          4m16s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-95srs   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m17s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-578d6b6f94-qhc9j   1/1     Running             0          15s
deploy-demo-578d6b6f94-95srs   1/1     Running             0          16s
deploy-demo-578d6b6f94-bht84   1/1     Running             0          18s
deploy-demo-6d795f958b-ph99t   1/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   1/1     Terminating         0          4m38s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-5zr7r   1/1     Terminating         0          4m43s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m43s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-578d6b6f94-g9c8x   1/1     Running             0          12s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-578d6b6f94-lg6vk   1/1     Running             0          15s
deploy-demo-6d795f958b-9mc7k   1/1     Terminating         0          4m56s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-4rpx9   1/1     Running             0          13s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          4m57s
deploy-demo-578d6b6f94-4lbwg   1/1     Running             0          2s
deploy-demo-6d795f958b-wzscg   1/1     Terminating         0          4m58s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          4m59s
deploy-demo-578d6b6f94-fhkk9   1/1     Running             0          2s
deploy-demo-6d795f958b-z5mnf   1/1     Terminating         0          5m2s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-6d795f958b-czwdp   1/1     Terminating         0          8m28s
deploy-demo-578d6b6f94-sfpz4   0/1     ContainerCreating   0          1s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m28s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-sfpz4   1/1     Running             0          2s
deploy-demo-6d795f958b-5bdfw   1/1     Terminating         0          8m29s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-5bs6z   1/1     Running             0          1s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m30s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-jw9n8   1/1     Terminating         0          8m38s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m38s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
Tip: the -w option keeps watching and prints pod changes as they occur. From the watch above: as the update starts, new pods are marked Pending and begin creating while an old pod is deleted; then more new pods are created and more old pods deleted, batch by batch, in turn. However the deletions and creations interleave, the combined number of old and new pods never drops below 9 and never exceeds 12 (10 desired, minus maxUnavailable=1, plus maxSurge=2).

Using a paused rollout for a canary release
[root@master01 ~]# kubectl set image deploy/deploy-demo nginx=nginx:1.14-alpine && kubectl rollout pause deploy/deploy-demo
deployment.apps/deploy-demo image updated
deployment.apps/deploy-demo paused
[root@master01 ~]#
Tip: following our update strategy, the command above deletes one old pod and creates three new ones, and then the rollout pauses. At this point only one old pod has been replaced, with two further new pods created on top of the desired count, for 12 pods in total.

Checking the pods
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-df77k   1/1     Running   0          87s
deploy-demo-6d795f958b-tll8b   1/1     Running   0          87s
deploy-demo-6d795f958b-zbhwp   1/1     Running   0          87s
deploy-demo-fb957b9b-44l6g     1/1     Running   0          3m21s
deploy-demo-fb957b9b-7q6wh     1/1     Running   0          3m38s
deploy-demo-fb957b9b-d45rg     1/1     Running   0          3m27s
deploy-demo-fb957b9b-j7p2j     1/1     Running   0          3m38s
deploy-demo-fb957b9b-mkpz6     1/1     Running   0          3m38s
deploy-demo-fb957b9b-qctnv     1/1     Running   0          3m21s
deploy-demo-fb957b9b-rvrtf     1/1     Running   0          3m27s
deploy-demo-fb957b9b-wf254     1/1     Running   0          3m12s
deploy-demo-fb957b9b-xclhz     1/1     Running   0          3m22s
replicaset-demo-9wqj9          1/1     Running   0          135m
replicaset-demo-j75hk          1/1     Running   0          161m
replicaset-demo-k2n9g          1/1     Running   0          158m
replicaset-demo-n7fmk          1/1     Running   0          158m
replicaset-demo-q4dc6          1/1     Running   0          158m
replicaset-demo-rsl7q          1/1     Running   0          3h14m
replicaset-demo-twknl          1/1     Running   0          3h14m
[root@master01 ~]# kubectl get pod|grep "^deploy.*" |wc -l
12
[root@master01 ~]#
Tip: the two extra pods are there because our update strategy allows at most 2 pods above the desired count.

Resuming the rollout
[root@master01 ~]# kubectl rollout resume deploy/deploy-demo && kubectl rollout status deploy/deploy-demo
deployment.apps/deploy-demo resumed
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
deployment "deploy-demo" successfully rolled out
[root@master01 ~]#
Tip: resume continues the rollout we paused earlier; status watches the rollout's progress until it completes.
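Had the canary pods misbehaved instead, the paused rollout could be abandoned and the Deployment returned to the previous revision; a sketch, assuming the usual constraint that a paused Deployment must be resumed before it can be rolled back:

# resume the paused rollout, then immediately roll back to the previous revision
kubectl rollout resume deploy/deploy-demo
kubectl rollout undo deploy/deploy-demo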