In the previous article we looked at how the kube-scheduler works on Kubernetes and how pod scheduling policies are defined; for a refresher, see https://www.cnblogs.com/qiuhom-1874/p/14243312.html. Today we will talk about node taints and pod tolerations on Kubernetes.
What is a node taint?
A node taint is somewhat like a node label or annotation: all three are metadata describing the node, and a taint is defined in a similar key/value form. Unlike a label, however, a taint also carries an effect, which describes what the taint does on that node. Kubernetes defines three effects. The first, NoSchedule, refuses to schedule pods that do not tolerate the taint onto the node. The second, PreferNoSchedule, tells the scheduler to try to avoid placing such pods on the node, but it may still do so. The third, NoExecute, is stricter than NoSchedule: besides refusing new pods, it also evicts pods already running on the node that do not tolerate it. In short, a taint is a node attribute that repels pods from running on that node.
Pod tolerations for node taints
As the name suggests, for a pod to run on a node that carries taints, the pod must tolerate those taints; the declaration of this is called the pod's toleration. A toleration defines, inside the pod, how to match node taints. There are two matching modes: equality matching and existence matching. Equality matching means the toleration must equal the taint's attributes, i.e. its key, value, and effect must all be the same; its operator is Equal. Existence matching means the toleration only needs to match the taint's key and effect; the value is not part of the comparison. Its operator is Exists.
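The two matching modes can be shown with a minimal Python sketch. This is illustrative only, not the actual scheduler code; the dictionary shapes simply mirror the toleration and taint fields discussed above.

```python
# Sketch of how one toleration is matched against one taint,
# following the Equal / Exists rules described above.

def toleration_matches(toleration: dict, taint: dict) -> bool:
    """Return True if the toleration tolerates the taint."""
    # Effects must be identical (a toleration with no effect would
    # match any effect, but the examples here always set one).
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # Exists: only the key (and effect) matter; value is ignored.
        return toleration["key"] == taint["key"]
    # Equal: key and value must both match.
    return (toleration["key"] == taint["key"]
            and toleration.get("value") == taint.get("value"))

taint = {"key": "test", "value": "test", "effect": "NoSchedule"}
print(toleration_matches(
    {"key": "test", "operator": "Exists", "effect": "NoSchedule"}, taint))  # True
print(toleration_matches(
    {"key": "test", "operator": "Equal", "value": "test",
     "effect": "NoSchedule"}, taint))                                       # True
print(toleration_matches(
    {"key": "test", "operator": "Equal", "effect": "NoSchedule"}, taint))   # False: value differs
```

Note how the last call fails: with operator Equal, omitting the value makes the comparison fail against a taint that has one, which is exactly the Pending situation demonstrated later in this article.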
The relationship between node taints and pod tolerations
Tip: as shown in the figure above, only pods that tolerate a node's taints can be scheduled onto that node; a pod that does not tolerate them will never be scheduled there (except when the taint's effect is PreferNoSchedule, which is only a preference).
Managing node taints
Syntax for adding a taint to a node
Usage: kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]
Tip: use the kubectl taint node command to add taints to a node; just specify the node name and the taint. Multiple taints may be given, separated by spaces.
Example: add the taint test=test:NoSchedule to node01
[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]#
View a node's taints
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]#
Delete a taint
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             <none>
[root@master01 ~]#
Tip: to delete a taint you can specify the taint's key together with its effect, or append "-" directly to the key alone, which removes every taint with that key.
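The two deletion forms can be mimicked with a short Python sketch. This is a hypothetical helper for illustration only; remove_taints is not a real kubectl or client-go function.

```python
# Mimics `kubectl taint node <node> key:effect-` (remove one specific taint)
# versus `kubectl taint node <node> key-` (remove all taints with that key).

def remove_taints(taints, key, effect=None):
    """Return the taint list with the requested taint(s) removed."""
    return [t for t in taints
            if not (t["key"] == key and (effect is None or t["effect"] == effect))]

taints = [{"key": "test", "effect": "NoSchedule"},
          {"key": "test", "effect": "PreferNoSchedule"}]

# key:effect- form: only the NoSchedule taint goes away
print(remove_taints(taints, "test", "NoSchedule"))
# key- form: every taint keyed "test" goes away
print(remove_taints(taints, "test"))  # []
```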
Defining pod tolerations
Example: create a pod that tolerates the taint node-role.kubernetes.io/master:NoSchedule on a node
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
[root@master01 ~]#
Tip: pod tolerations are defined with the tolerations field, a list of objects. key names the taint key to match and must equal the taint's key on the node. operator specifies how the toleration matches the taint and takes only two values, Equal and Exists. effect names the taint effect and takes one of NoSchedule, PreferNoSchedule, or NoExecute; it must match the taint's effect. The manifest above says the redis-demo pod can tolerate the taint node-role.kubernetes.io/master:NoSchedule on a node.
Apply the manifest
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7s    10.244.4.35   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: the pod is running on node04. Note that a toleration only allows the pod to run on a node carrying the matching taint; it does not force it there. The pod may just as well land on a node with no taints at all.
Verification: delete the pod, add the taint test:NoSchedule to node01 through node04, apply the manifest again, and see whether the pod still runs.
[root@master01 ~]# kubectl delete -f pod-demo-taints.yaml
pod "redis-demo" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node04.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          18s   10.244.0.14   master01.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: the pod was scheduled onto the master node. It tolerates the master's taint, but it cannot tolerate the test:NoSchedule taint on the other nodes, so the master is the only node left that can run it.
Remove the toleration from the pod definition, apply the manifest again, and see whether the pod runs normally.
[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]#
Tip: the pod is stuck in Pending because it cannot tolerate any node's taint; every node repels it.
Example: define a toleration with equality matching
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Equal
    value: test
    effect: NoSchedule
[root@master01 ~]#
Tip: an equality-matching toleration must also specify the taint's value attribute.
Delete the old pod and apply the manifest
[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          4s    <none>   <none>   <none>           <none>
[root@master01 ~]#
Tip: after applying the manifest the pod is Pending, because no node carries a taint that satisfies the pod's toleration (every node's taint is test:NoSchedule with no value, while the toleration requires test=test:NoSchedule), so the pod cannot be scheduled anywhere.
Verification: change node01's taint to test=test:NoSchedule
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule --overwrite
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          4m46s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: once node01's taint is changed to test=test:NoSchedule, the pod is scheduled onto node01.
Verification: change node01's taint back to test:NoSchedule and see whether the pod gets evicted.
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule --overwrite
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7m27s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: after the taint is changed back to test:NoSchedule the pod is not evicted. A taint with the NoSchedule effect only acts at scheduling time; it has no effect on pods that are already scheduled.
Example: define a pod toleration for test:PreferNoSchedule
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: PreferNoSchedule
[root@master01 ~]#
Apply the manifest
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          11m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6s    <none>        <none>           <none>           <none>
[root@master01 ~]#
Tip: the pod is Pending because every node still carries the hard taint test:NoSchedule, which the pod's test:PreferNoSchedule toleration does not cover, so the pod cannot be scheduled anywhere.
Add the taint test:PreferNoSchedule to node02
[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node02.k8s.org test:PreferNoSchedule
node/node02.k8s.org tainted
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          18m     10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6m21s   <none>        <none>           <none>           <none>
[root@master01 ~]#
Tip: node02 now carries two taints, and the pod still does not run, because node02 also carries the test:NoSchedule taint, which the pod's toleration cannot tolerate.
Verification: change the taint on node01, node03, and node04 to test:PreferNoSchedule, change the pod's toleration to test:NoSchedule, apply the manifest again, and see how the pod gets scheduled.
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule-
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:PreferNoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:PreferNoSchedule
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:PreferNoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          31m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   1/1     Running   0          19m   10.244.1.45   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo" deleted
pod "redis-demo1" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          5s    10.244.4.36   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: once the test:NoSchedule taint was removed from node01, node03, and node04, the earlier redis-demo1 pod was scheduled onto node01, simply because node01's taint was the first to be removed. After changing the pod's toleration to test:NoSchedule and reapplying the manifest, the pod was scheduled onto node04. This works because PreferNoSchedule is only a soft preference: the scheduler may still place a pod on such a node even when the pod's toleration does not name that effect.
Example: define a pod toleration for test:NoExecute
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]#
Apply the manifest
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          35m   10.244.4.36   node04.k8s.org   <none>           <none>
redis-demo2   1/1     Running   0          5s    10.244.4.38   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: the pod was scheduled onto node04. Again, this is because test:PreferNoSchedule is only a soft taint: the pod can land on the node even though its toleration names a different effect.
Verification: change every node's taint to test:NoSchedule, delete the old pods, apply the manifest again, and see whether the pod still runs.
[root@master01 ~]# kubectl taint node node01.k8s.org test-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test-
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test-
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo1" deleted
pod "redis-demo2" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]#
Tip: the pod is Pending, which shows that a toleration with effect NoExecute does not tolerate a taint with effect NoSchedule.
Delete the pod, change every node's taint to test:NoExecute, change the pod's toleration to test:NoSchedule, apply the manifest, and see how the pod gets scheduled.
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test-
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test-
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoExecute
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoExecute
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoExecute
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@master01 ~]#
Tip: as the demonstration shows, a toleration with effect NoSchedule likewise does not tolerate a taint with effect NoExecute.
Delete the pod and change its toleration to test:NoExecute
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          5m5s   <none>   <none>   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          6s    10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Change node04's taint to test:NoSchedule and see whether the pod keeps running.
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          4m38s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m2s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m25s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: changing the taint from NoExecute to NoSchedule does not evict the existing pod.
Change the pods' toleration to test:NoSchedule and apply the manifest again
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo3
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo4
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo3 created
pod/redis-demo4 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          14m   10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          4s    10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          4s    10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: both new pods were scheduled onto node04, because their test:NoSchedule toleration matches only the test:NoSchedule taint on node04; the other nodes carry test:NoExecute.
Change node04's taint to NoExecute and see whether the pods get evicted.
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          17m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          2m32s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          2m32s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute
node/node04.k8s.org tainted
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS        RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running       0          18m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   0/1     Terminating   0          3m43s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   0/1     Terminating   0          3m43s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          18m   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: after node04's taint is changed to test:NoExecute, the pods whose tolerations do not cover the NoExecute effect are evicted, while redis-demo2, which tolerates NoExecute, keeps running. A taint with the NoExecute effect evicts every pod that cannot tolerate it.
Create a Deployment whose pod template tolerates test:NoExecute with an eviction grace period of 10 seconds
[root@master01 ~]# cat deploy-demo-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4-alpine
        ports:
        - name: redis
          containerPort: 6379
      tolerations:
      - key: test
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 10
[root@master01 ~]#
Tip: the tolerationSeconds field specifies the eviction grace period, i.e. how long the pod may stay on the node once the taint applies. It may only be used in tolerations whose effect is NoExecute; tolerations for the other effects cannot set this field.
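The tolerationSeconds semantics can be condensed into a tiny sketch (illustrative only; the function name is made up for this example):

```python
# For a NoExecute taint: a pod with no matching toleration is evicted at once
# (delay 0); a matching toleration without tolerationSeconds means the pod is
# never evicted (None); with tolerationSeconds set, eviction comes after that
# many seconds -- 10 in the Deployment above.

def noexecute_eviction_delay(tolerates: bool, toleration_seconds=None):
    """Seconds until eviction; None means the pod stays indefinitely."""
    if not tolerates:
        return 0
    return toleration_seconds

print(noexecute_eviction_delay(False))      # 0
print(noexecute_eviction_delay(True))       # None
print(noexecute_eviction_delay(True, 10))   # 10
```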
Apply the manifest
[root@master01 ~]# kubectl apply -f deploy-demo-taint.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get pods -o wide -w
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
deploy-demo-79b89f9847-9zk8j   1/1     Running   0          7s    10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Running   0          7s    10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Running   0          7s    10.244.1.62   node01.k8s.org   <none>           <none>
redis-demo2                    1/1     Running   0          54m   10.244.4.43   node04.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating         0     10s   10.244.3.61   node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating         0     10s   10.244.1.62   node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending             0     0s    <none>        <none>           <none>   <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending             0     0s    <none>        node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending             0     0s    <none>        <none>           <none>   <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating         0     10s   10.244.2.71   node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0     0s    <none>        node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending             0     0s    <none>        node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0     0s    <none>        node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0     0s    <none>        <none>           <none>   <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0     0s    <none>        node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0     0s    <none>        node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating         0     10s   10.244.1.62   node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating         0     10s   10.244.3.61   node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating         0     10s   10.244.2.71   node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-shscr   0/1     Terminating         0     11s   10.244.1.62   node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0     1s    <none>        node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0     1s    <none>        node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0     1s    <none>        node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0     11s   10.244.3.61   node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-2x8w6   1/1     Running             0     1s    10.244.3.62   node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-9zk8j   0/1     Terminating         0     11s   10.244.2.71   node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-lhltv   1/1     Running             0     1s    10.244.2.72   node02.k8s.org   <none>   <none>
deploy-demo-79b89f9847-w8xjw   1/1     Running             0     2s    10.244.1.63   node01.k8s.org   <none>   <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0     15s   10.244.3.61   node03.k8s.org   <none>   <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0     15s   10.244.3.61   node03.k8s.org   <none>   <none>
^C[root@master01 ~]#
Tip: each pod runs on its node for only 10 seconds before being evicted. Because this is a Deployment, the controller immediately recreates each evicted pod, which then repeats the cycle.
Summary: a taint with the NoSchedule effect only rejects new pods and never evicts existing ones; a pod that tolerates the taint may be scheduled onto the node, while a pod that does not will never be scheduled there. A taint with the PreferNoSchedule effect likewise evicts nothing; it merely tells the scheduler to avoid the node, so a pod whose tolerations satisfy no node at all can still end up running on such a node. A taint with the NoExecute effect both rejects and evicts: the moment a node's taint becomes NoExecute, every pod on it that cannot tolerate the taint is evicted immediately, and even a tolerating pod is evicted once its tolerationSeconds grace period expires; when tolerationSeconds is not set, a tolerating pod stays indefinitely.
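The summary above can be condensed into one decision sketch. This is illustrative only, not scheduler source; `matches` repeats the Equal/Exists rules from earlier in the article.

```python
# Per-node decision implied by the summary: untolerated NoSchedule/NoExecute
# taints block scheduling (PreferNoSchedule only lowers preference), and
# untolerated NoExecute taints additionally evict already-running pods.

def matches(tol, taint):
    if tol.get("effect") and tol["effect"] != taint["effect"]:
        return False
    if tol.get("operator", "Equal") == "Exists":
        return tol["key"] == taint["key"]
    return tol["key"] == taint["key"] and tol.get("value") == taint.get("value")

def untolerated(taints, tolerations):
    return [t for t in taints if not any(matches(tol, t) for tol in tolerations)]

def can_schedule(taints, tolerations):
    # Only the hard effects block scheduling outright.
    return not any(t["effect"] in ("NoSchedule", "NoExecute")
                   for t in untolerated(taints, tolerations))

def must_evict(taints, tolerations):
    # Only untolerated NoExecute taints evict a running pod.
    return any(t["effect"] == "NoExecute"
               for t in untolerated(taints, tolerations))

node = [{"key": "test", "effect": "NoExecute"}]
pod  = [{"key": "test", "operator": "Exists", "effect": "NoExecute"}]
print(can_schedule(node, pod))   # True
print(must_evict(node, []))      # True: a pod with no toleration is evicted
```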