In the previous post we covered two of the most commonly used pod controllers on k8s, the ReplicaSet and Deployment controllers; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/14149042.html. Today we look at the DaemonSet, Job, and CronJob controllers.

1. The DaemonSet controller

As the name suggests, this controller manages daemon-style pods. A DaemonSet is used when exactly one copy of a pod should run on every node, for example an agent on each node that collects logs and ships them to Elasticsearch. A DaemonSet is very similar to a Deployment, except that a ds (short for DaemonSet) takes no explicit replica count: the number of pods follows the number of nodes in the cluster. When a new node joins, the controller automatically creates a pod on it; when a node is removed, the pod that ran there is not rescheduled onto other nodes. In short, each node runs at most one pod of a given DaemonSet. Beyond that, a ds also supports selective placement through a node selector: a pod is created only on nodes that carry the matching label, and nowhere else. Update operations otherwise work much like those of a Deployment.

Example: creating a DaemonSet controller
[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]#
Note: the essential parts of a ds spec are the selector and the pod template, defined exactly as for a Deployment. The manifest above uses a ds controller to run one nginx pod per node, labelled app=ngx-ds.

Apply the manifest
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo created
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   3         3         3       3            3           <none>          14s   nginx        nginx:1.14-alpine   app=ngx-ds
Note: we never specified a pod count; the controller derives it from the number of nodes and creates one pod per node.

Verify: list the pods. Has one pod been scheduled onto each node?
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-fm9cb   1/1     Running   0          27s   10.244.1.57   node01.k8s.org   <none>           <none>
ds-demo-pspbk   1/1     Running   0          27s   10.244.3.57   node03.k8s.org   <none>           <none>
ds-demo-zvpbb   1/1     Running   0          27s   10.244.2.69   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Note: as expected, each worker node is running exactly one of these pods.

Defining a node selector
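Note also that DESIRED is 3 rather than 4: the master node carries the node-role.kubernetes.io/master:NoSchedule taint that kubeadm applies, so the ds pod is not placed there. If you wanted the daemon on the control-plane node as well, you could add a toleration to the pod template; a minimal sketch (only the tolerations block is new relative to the manifest above):

    # fragment of the pod template's spec in the manifest above;
    # tolerating the master taint lets the ds also place a pod on the control-plane node
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # the taint kubeadm puts on master nodes
        operator: Exists
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx:1.14-alpine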
[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
      nodeSelector:
        app: nginx-1.14-alpine
  minReadySeconds: 5
[root@master01 ~]#
Note: a node selector is defined with the nodeSelector field under the spec of the pod template; its value is a map. The configuration above says pods are created only on nodes labelled app=nginx-1.14-alpine; on any other node nothing is created.

Apply the manifest
[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-demo   3         3         3       3            3           <none>          14m
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
ds-demo   0         0         0       0            0           app=nginx-1.14-alpine   14m
[root@master01 ~]# kubectl get pod
NAME            READY   STATUS        RESTARTS   AGE
ds-demo-pspbk   0/1     Terminating   0          14m
[root@master01 ~]# kubectl get pod
No resources found in default namespace.
[root@master01 ~]#
Note: once the node selector is added, all the existing pods are deleted. No node in the cluster carries the label named in the selector, so no node satisfies the scheduling constraint, and the controller removes the pods; you can confirm this from the node labels themselves, as shown below.

Test: add the label app=nginx-1.14-alpine to node01.k8s.org and see whether a pod gets created on that node.
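Two standard ways to check the labels (commands only; output omitted here):

# list all nodes with their labels; none should show app=nginx-1.14-alpine yet
kubectl get nodes --show-labels
# or query by the selector directly; an empty result means no node matches
kubectl get nodes -l app=nginx-1.14-alpine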
[root@master01 ~]# kubectl label node node01.k8s.org app=nginx-1.14-alpine
node/node01.k8s.org labeled
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   1         1         1       1            1           app=nginx-1.14-alpine   20m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8hfnq   1/1     Running   0          18s   10.244.1.58   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Note: as soon as a k8s node carries a label matching the node selector, a pod is scheduled precisely onto that node.
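The reverse also holds: removing the label makes the node fail the selector again, and the controller deletes the pod from it. The trailing dash is kubectl's standard syntax for deleting a label:

# remove the app label from node01.k8s.org; the ds pod there is then removed as well
kubectl label node node01.k8s.org app-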
Next, remove the node selector, re-apply the manifest, then add a brand-new node and see whether a pod is created on it automatically.

Remove the node selector and apply the manifest
[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   3         3         3       3            3           <none>          26m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]#
Prepare a node with hostname node04.k8s.org; for the preparation steps see https://www.cnblogs.com/qiuhom-1874/p/14126750.html.

Generate the cluster join command on the master node
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f     --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67
[root@master01 ~]#
Copy the command and run it on node04
[root@node04 ~]# kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node04 ~]#
Note: if swap is enabled on the node, append the --ignore-preflight-errors=Swap option to the join command.

Check the node status on the master to see whether node04 has joined the k8s cluster
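Alternatively, instead of ignoring the preflight error each time, you can disable swap on the node for good (run as root on the node; the fstab edit below is one common way and may need adjusting to your layout):

# turn off swap immediately
swapoff -a
# comment out the swap line in /etc/fstab so it stays off after a reboot
sed -i '/\sswap\s/s/^/#/' /etc/fstab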
[root@master01 ~]# kubectl get node
NAME               STATUS     ROLES                  AGE    VERSION
master01.k8s.org   Ready      control-plane,master   10d    v1.20.0
node01.k8s.org     Ready      <none>                 10d    v1.20.0
node02.k8s.org     Ready      <none>                 10d    v1.20.0
node03.k8s.org     Ready      <none>                 10d    v1.20.0
node04.k8s.org     NotReady   <none>                 117s   v1.20.0
[root@master01 ~]#
Note: node04 has joined but is not yet Ready. Once it becomes Ready, check whether the ds pod count has grown and whether an nginx pod has been started on node04 automatically.

Check the ds controller to see how many pods are now running
[root@master01 ~]# kubectl get node
NAME               STATUS   ROLES                  AGE     VERSION
master01.k8s.org   Ready    control-plane,master   10d     v1.20.0
node01.k8s.org     Ready    <none>                 10d     v1.20.0
node02.k8s.org     Ready    <none>                 10d     v1.20.0
node03.k8s.org     Ready    <none>                 10d     v1.20.0
node04.k8s.org     Ready    <none>                 8m10s   v1.20.0
[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-demo   4         4         4       4            4           <none>          53m
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-g74s8   1/1     Running   0          72s   10.244.4.2    node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running   0          27m   10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-hpmrg   1/1     Running   0          27m   10.244.3.58   node03.k8s.org   <none>           <none>
ds-demo-kjf6f   1/1     Running   0          27m   10.244.1.59   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Note: once the newly added node is Ready, the controller automatically creates a pod on it.

Updating the pod image
[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         4       4            4           <none>          55m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-g74s8   1/1     Running   0          3m31s   10.244.4.2    node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running   0          30m     10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-hpmrg   1/1     Running   0          30m     10.244.3.58   node03.k8s.org   <none>           <none>
ds-demo-kjf6f   1/1     Running   0          30m     10.244.1.59   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         3       0            3           <none>          56m   nginx        nginx:1.16-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS              RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-47gtq   0/1     ContainerCreating   0          7s    <none>        node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running             0          31m   10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-jp9dz   1/1     Running             0          38s   10.244.1.60   node01.k8s.org   <none>           <none>
ds-demo-t4njt   1/1     Running             0          21s   10.244.3.59   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-47gtq   1/1     Running   0          37s   10.244.4.3    node04.k8s.org   <none>           <none>
ds-demo-8txr9   1/1     Running   0          14s   10.244.2.71   node02.k8s.org   <none>           <none>
ds-demo-jp9dz   1/1     Running   0          68s   10.244.1.60   node01.k8s.org   <none>           <none>
ds-demo-t4njt   1/1     Running   0          51s   10.244.3.59   node03.k8s.org   <none>           <none>
[root@master01 ~]#
Note: after changing the image version in the pod template and applying the manifest, the pods are updated one by one.

View the DaemonSet details
[root@master01 ~]# kubectl describe ds ds-demo
Name:           ds-demo
Selector:       app=ngx-ds
Node-Selector:  <none>
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 4
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=ngx-ds
  Containers:
   nginx:
    Image:        nginx:1.16-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age                From                  Message
  ----    ------            ----               ----                  -------
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-fm9cb
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-zvpbb
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-pspbk
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-fm9cb
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-pspbk
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-zvpbb
  Normal  SuccessfulCreate  38m                daemonset-controller  Created pod: ds-demo-8hfnq
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-h4b77
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-hpmrg
  Normal  SuccessfulDelete  33m                daemonset-controller  Deleted pod: ds-demo-8hfnq
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-kjf6f
  Normal  SuccessfulCreate  6m57s              daemonset-controller  Created pod: ds-demo-g74s8
  Normal  SuccessfulDelete  3m8s               daemonset-controller  Deleted pod: ds-demo-kjf6f
  Normal  SuccessfulCreate  2m58s              daemonset-controller  Created pod: ds-demo-jp9dz
  Normal  SuccessfulDelete  2m52s              daemonset-controller  Deleted pod: ds-demo-hpmrg
  Normal  SuccessfulCreate  2m41s              daemonset-controller  Created pod: ds-demo-t4njt
  Normal  SuccessfulDelete  2m35s              daemonset-controller  Deleted pod: ds-demo-g74s8
  Normal  SuccessfulCreate  2m27s              daemonset-controller  Created pod: ds-demo-47gtq
  Normal  SuccessfulDelete  2m13s              daemonset-controller  Deleted pod: ds-demo-h4b77
  Normal  SuccessfulCreate  2m4s               daemonset-controller  Created pod: ds-demo-8txr9
[root@master01 ~]#
Updating the image from the command line
[root@master01 ~]# kubectl set image ds ds-demo nginx=nginx:1.18-alpine --record
daemonset.apps/ds-demo image updated
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         3       0            3           <none>          84m   nginx        nginx:1.18-alpine   app=ngx-ds
[root@master01 ~]# kubectl rollout status ds/ds-demo
Waiting for daemon set "ds-demo" rollout to finish: 1 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
daemon set "ds-demo" successfully rolled out
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-6qr6g   1/1     Running   0          70s   10.244.2.77   node02.k8s.org   <none>           <none>
ds-demo-7gnxd   1/1     Running   0          57s   10.244.3.66   node03.k8s.org   <none>           <none>
ds-demo-g44bd   1/1     Running   0          24s   10.244.1.66   node01.k8s.org   <none>           <none>
ds-demo-hb8vl   1/1     Running   0          43s   10.244.4.10   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod/ds-demo-6qr6g |grep Image
    Image:          nginx:1.18-alpine
    Image ID:       docker-pullable://nginx@sha256:a7bdf9e789a40bf112c87672a2495fc49de7c89f184a252d59061c1ae800ee52
[root@master01 ~]#
Note: the default update strategy deletes one pod, then creates a new one in its place.

Defining an update strategy
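You can see that default on the live object; this is roughly what kubectl get ds ds-demo -o yaml reports under spec in this release, since a DaemonSet defaults to a rolling update that takes down one pod at a time:

  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1   # at most one pod of the set is down at any moment
    type: RollingUpdate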
[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
[root@master01 ~]#
Note: the update strategy of a ds is defined with the updateStrategy field under spec; its value is an object. Its type sub-field selects the update type and takes one of two values, OnDelete or RollingUpdate. The rollingUpdate sub-field tunes the rolling update and is only meaningful when type is RollingUpdate; its maxUnavailable field sets how many pods may be deleted at once (the maximum number of pods allowed to be unavailable). A ds can only update delete-first, create-second, never create-first, because each node may run only one pod of the set; a replacement can therefore only be created after the old pod is gone. The default is to delete one and create one; the configuration above deletes two at a time.

Apply the manifest and watch the update
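For comparison, a minimal sketch of the OnDelete variant: with this type the controller records the new template but only replaces a pod after you delete the old one yourself, which gives you node-by-node control over the rollout.

  updateStrategy:
    type: OnDelete   # no automatic rollout; pods keep the old template until deleted

# after applying a template change, trigger the update one node at a time,
# e.g. (the pod name here is illustrative):
# kubectl delete pod ds-demo-xxxxx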
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml && kubectl get pod -w
daemonset.apps/ds-demo configured
NAME            READY   STATUS        RESTARTS   AGE
ds-demo-4k2x7   1/1     Terminating   0          15m
ds-demo-b9djn   1/1     Running       0          16m
ds-demo-bxkj7   1/1     Running       0          15m
ds-demo-cg49r   1/1     Terminating   0          16m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-4k2x7   0/1     Terminating   0          15m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-dtsgc   0/1     Pending       0          0s
ds-demo-dtsgc   0/1     Pending       0          0s
ds-demo-dtsgc   0/1     ContainerCreating   0    0s
ds-demo-dtsgc   1/1     Running       0          2s
ds-demo-4k2x7   0/1     Terminating   0          15m
ds-demo-4k2x7   0/1     Terminating   0          15m
ds-demo-8d7g9   0/1     Pending       0          0s
ds-demo-8d7g9   0/1     Pending       0          0s
ds-demo-8d7g9   0/1     ContainerCreating   0    0s
ds-demo-8d7g9   1/1     Running       0          1s
ds-demo-b9djn   1/1     Terminating   0          16m
ds-demo-b9djn   0/1     Terminating   0          16m
ds-demo-bxkj7   1/1     Terminating   0          16m
ds-demo-bxkj7   0/1     Terminating   0          16m
ds-demo-b9djn   0/1     Terminating   0          16m
ds-demo-b9djn   0/1     Terminating   0          16m
ds-demo-dkxfs   0/1     Pending       0          0s
ds-demo-dkxfs   0/1     Pending       0          0s
ds-demo-dkxfs   0/1     ContainerCreating   0    0s
ds-demo-dkxfs   1/1     Running       0          2s
ds-demo-bxkj7   0/1     Terminating   0          16m
ds-demo-bxkj7   0/1     Terminating   0          16m
ds-demo-q6b5f   0/1     Pending       0          0s
ds-demo-q6b5f   0/1     Pending       0          0s
ds-demo-q6b5f   0/1     ContainerCreating   0    0s
ds-demo-q6b5f   1/1     Running       0          1s
Note: the update now proceeds two pods at a time: two are deleted, then two new ones are created.

2. The Job controller

The Job controller runs one or more pods to carry out a task; once the task completes, the pods exit on their own. If a pod fails while the task is running, the Job controller handles it according to the restart policy until the task completes and the pod exits normally. If the restart policy is Never, a failed pod is not restarted; instead, the controller creates a new pod to run the task again, until the task finally completes and a pod exits normally.
Pod states under a Job controller

Note: the diagram above describes the states of pods created by a Job controller. Normally a pod finishes its task and exits with status Completed. If a pod exits abnormally (exit code non-zero) and the restart policy is Never, the pod is not restarted and its status becomes Failed; the task itself is still outstanding, so with restartPolicy Never the controller creates a new pod to run it again. If the pod exits abnormally and the restart policy is OnFailure, the pod itself is restarted and runs the task again, until the task finally completes and the pod exits normally with status Completed.
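Two spec fields bound this retry behaviour so a broken task cannot loop forever; a hedged sketch (backoffLimit and activeDeadlineSeconds are standard batch/v1 Job fields, while the name and values here are arbitrary examples):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-limits-demo            # hypothetical name, for illustration only
spec:
  backoffLimit: 4                  # give up and mark the Job Failed after 4 failed retries
  activeDeadlineSeconds: 120       # terminate the Job once it has been active for 120s
  template:
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never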
Task execution modes

Serial execution

Note: with serial execution only one pod exists at a time; the second pod is created only after the first has finished its task.

Parallel execution

Note: with parallel execution several pods are started and work on the task at the same time.

Example: defining a Job controller
[root@master01 ~]# cat job-demo.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]#
Note: the essential part of a Job is the pod template, defined the same way as for the other pod controllers, with the template field under spec.

Apply the manifest
[root@master01 ~]# kubectl apply -f job-demo.yaml
job.batch/job-demo created
[root@master01 ~]# kubectl get jobs -o wide
NAME       COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
job-demo   0/1           7s         7s    myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
[root@master01 ~]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
ds-demo-8d7g9    1/1     Running   0          91m
ds-demo-dkxfs    1/1     Running   0          91m
ds-demo-dtsgc    1/1     Running   0          91m
ds-demo-q6b5f    1/1     Running   0          91m
job-demo-4h9gb   1/1     Running   0          16s
[root@master01 ~]# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
ds-demo-8d7g9    1/1     Running     0          91m
ds-demo-dkxfs    1/1     Running     0          91m
ds-demo-dtsgc    1/1     Running     0          91m
ds-demo-q6b5f    1/1     Running     0          91m
job-demo-4h9gb   0/1     Completed   0          30s
[root@master01 ~]#
Note: after the Job is created, the pod runs its task and then exits normally with status Completed.

Defining a Job that runs multiple pods
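Note that a finished Job and its Completed pod are kept around so the result stays inspectable; they are not cleaned up automatically. Once the record is no longer needed, deleting the Job also removes its pods:

kubectl delete job job-demo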
[root@master01 ~]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo
spec:
  completions: 6
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]#
Note: the completions field under spec sets how many pods must run to complete the task; the configuration above says the job-multi-demo Job needs 6 pods in total.

Apply the manifest
[root@master01 ~]# kubectl apply -f job-multi.yaml
job.batch/job-multi-demo created
[root@master01 ~]# kubectl get jobs -o wide
NAME             COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
job-demo         1/1           18s        9m49s   myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo   0/6           6s         6s      myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
[root@master01 ~]# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f          1/1     Running     0          100m   10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-rbw7d   1/1     Running     0          21s    10.244.1.70   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f          1/1     Running     0          101m   10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4   1/1     Running     0          21s    10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-rbw7d   0/1     Completed   0          43s    10.244.1.70   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Note: when no parallelism is specified it defaults to 1, i.e. the pods run their tasks serially, one after another.

Defining the parallelism
[root@master01 ~]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo2
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh", "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]#
Note: parallelism is set with the parallelism field under spec; it is the number of pods that run at the same time. The configuration above runs 2 pods at a time, i.e. 2 pods working in parallel.

Apply the manifest
[root@master01 ~]# kubectl apply -f job-multi.yaml
job.batch/job-multi-demo2 created
[root@master01 ~]# kubectl get jobs -o wide
NAME              COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
job-demo          1/1           18s        18m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo    6/6           116s       8m49s   myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
job-multi-demo2   0/6           8s         8s      myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
[root@master01 ~]# kubectl get pod -o wide
NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9           1/1     Running     0          109m    10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb          0/1     Completed   0          18m     10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4    0/1     Completed   0          8m44s   10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-hhcrm    0/1     Completed   0          7m23s   10.244.3.72   node03.k8s.org   <none>           <none>
job-multi-demo-kjmld    0/1     Completed   0          8m20s   10.244.2.81   node02.k8s.org   <none>           <none>
job-multi-demo-lfzrj    0/1     Completed   0          8m1s    10.244.2.82   node02.k8s.org   <none>           <none>
job-multi-demo-rbw7d    0/1     Completed   0          9m6s    10.244.1.70   node01.k8s.org   <none>           <none>
job-multi-demo-vdkrm    0/1     Completed   0          7m41s   10.244.2.83   node02.k8s.org   <none>           <none>
job-multi-demo2-66tdd   0/1     Completed   0          25s     10.244.2.84   node02.k8s.org   <none>           <none>
job-multi-demo2-fsl9r   0/1     Completed   0          25s     10.244.3.73   node03.k8s.org   <none>           <none>
job-multi-demo2-js7qs   1/1     Running     0          9s      10.244.2.85   node02.k8s.org   <none>           <none>
job-multi-demo2-nqmps   1/1     Running     0          12s     10.244.1.71   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9           1/1     Running     0          110m    10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb          0/1     Completed   0          19m     10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4    0/1     Completed   0          8m57s   10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-hhcrm    0/1     Completed   0          7m36s   10.244.3.72   node03.k8s.org   <none>           <none>
job-multi-demo-kjmld    0/1     Completed   0          8m33s   10.244.2.81   node02.k8s.org   <none>           <none>
job-multi-demo-lfzrj    0/1     Completed   0          8m14s   10.244.2.82   node02.k8s.org   <none>           <none>
job-multi-demo-rbw7d    0/1     Completed   0          9m19s   10.244.1.70   node01.k8s.org   <none>           <none>
job-multi-demo-vdkrm    0/1     Completed   0          7m54s   10.244.2.83   node02.k8s.org   <none>           <none>
job-multi-demo2-5f5tn   1/1     Running     0          9s      10.244.1.72   node01.k8s.org   <none>           <none>
job-multi-demo2-66tdd   0/1     Completed   0          38s     10.244.2.84   node02.k8s.org   <none>           <none>
job-multi-demo2-fsl9r   0/1     Completed   0          38s     10.244.3.73   node03.k8s.org   <none>           <none>
job-multi-demo2-js7qs   0/1     Completed   0          22s     10.244.2.85   node02.k8s.org   <none>           <none>
job-multi-demo2-md84p   1/1     Running     0          9s      10.244.3.74   node03.k8s.org   <none>           <none>
job-multi-demo2-nqmps   0/1     Completed   0          25s     10.244.1.71   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Note: the pods now run two at a time.

3. The CronJob controller

This controller is used to run pods as periodic, scheduled tasks.

Example: defining a CronJob controller
[root@master01 ~]# cat cronjob-demo.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
  labels:
    app: mycronjob
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    metadata:
      labels:
        app: mycronjob-jobs
    spec:
      parallelism: 2
      template:
        spec:
          containers:
          - name: myjob
            image: alpine
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; sleep 10
          restartPolicy: OnFailure
[root@master01 ~]#
Note: the essential part of a CronJob is the job template. Under the hood a CronJob manages pods through Job controllers, much like a Deployment manages pods through a ReplicaSet. The schedule field sets the recurrence, in the same format as a crontab entry on Linux; the job template is written exactly like a standalone Job. The configuration above runs the Job defined in the template every 2 minutes, with 2 pods in parallel on each run.

Apply the manifest
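Beyond schedule and jobTemplate, a few other CronJob spec fields commonly matter; a hedged fragment showing them next to schedule (standard batch/v1beta1 fields; the values are examples, and the last two shown are the defaults):

spec:
  schedule: "*/2 * * * *"
  concurrencyPolicy: Forbid        # skip a run while the previous Job is still active
                                   # (default is Allow; Replace cancels the old run instead)
  startingDeadlineSeconds: 60      # count a run as missed if it cannot start within 60s of its slot
  successfulJobsHistoryLimit: 3    # keep at most 3 finished Jobs (the default)
  failedJobsHistoryLimit: 1        # keep at most 1 failed Job (the default)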
[root@master01 ~]# kubectl apply -f cronjob-demo.yaml
cronjob.batch/cronjob-demo created
[root@master01 ~]# kubectl get cronjob -o wide
NAME           SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES   SELECTOR
cronjob-demo   */2 * * * *   False     0        <none>          12s   myjob        alpine   <none>
[root@master01 ~]# kubectl get pod
NAME                            READY   STATUS      RESTARTS   AGE
cronjob-demo-1608307560-5hwmb   1/1     Running     0          9s
cronjob-demo-1608307560-rgkkr   1/1     Running     0          9s
ds-demo-8d7g9                   1/1     Running     0          125m
ds-demo-dkxfs                   1/1     Running     0          125m
ds-demo-dtsgc                   1/1     Running     0          125m
ds-demo-q6b5f                   1/1     Running     0          125m
job-demo-4h9gb                  0/1     Completed   0          34m
job-multi-demo-f7rz4            0/1     Completed   0          24m
job-multi-demo-hhcrm            0/1     Completed   0          23m
job-multi-demo-kjmld            0/1     Completed   0          24m
job-multi-demo-lfzrj            0/1     Completed   0          23m
job-multi-demo-rbw7d            0/1     Completed   0          25m
job-multi-demo-vdkrm            0/1     Completed   0          23m
job-multi-demo2-5f5tn           0/1     Completed   0          15m
job-multi-demo2-66tdd           0/1     Completed   0          16m
job-multi-demo2-fsl9r           0/1     Completed   0          16m
job-multi-demo2-js7qs           0/1     Completed   0          16m
job-multi-demo2-md84p           0/1     Completed   0          15m
job-multi-demo2-nqmps           0/1     Completed   0          16m
[root@master01 ~]#
Note: two pods of the scheduled run are indeed running in parallel.

Check whether corresponding Job controllers were created
[root@master01 ~]# kubectl get job -o wide
NAME                      COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
cronjob-demo-1608307560   2/1 of 2      15s        3m18s   myjob        alpine   controller-uid=4a84b474-b890-4dd2-80d4-a6115130785a
cronjob-demo-1608307680   2/1 of 2      17s        77s     myjob        alpine   controller-uid=affecad9-03e6-430c-8c58-c845773c8ff7
job-demo                  1/1           18s        37m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo            6/6           116s       28m     myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
job-multi-demo2           6/6           46s        19m     myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
[root@master01 ~]#
Note: there are two Job controllers, each named after the CronJob plus a schedule timestamp. From this output it is easy to see that every time the CronJob fires, it creates a new Job, which in turn creates the pods.
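Finally, if you want to pause the schedule without deleting the CronJob, you can flip its suspend field; patching it is one straightforward way (suspend is a standard CronJob spec field):

# stop the CronJob from creating new Jobs; Jobs already running are not touched
kubectl patch cronjob cronjob-demo -p '{"spec":{"suspend":true}}'
# set it back to false to resume the schedule
kubectl patch cronjob cronjob-demo -p '{"spec":{"suspend":false}}'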