k8s (pods, controllers, services) in detail

Posted by q_7 on 2024-05-22

I: Pod Introduction

Various configurations and principles of the pod resource.

Much of the yaml writing that follows is derived from these configuration options.

1: Pod structure and definition

Each Pod can contain one or more containers, which fall into 2 broad categories:

  1: User containers, of which there can be any number (user containers)

  2: The pause container, a root container present in every pod, which serves 2 purposes:

    1. It is the basis for evaluating the health of the whole pod

    2. An IP address is set on this root container, and the other containers all use this IP for network communication inside the Pod

      Communication inside the Pod works this way; communication between pods is implemented with a (virtual) layer-2 network.

      The other containers all share the root container's IP address, so the outside world reaches them via this root container's IP address plus a port.
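To make the shared network concrete, here is a minimal sketch (the pod name and the polling command are made up for illustration, not taken from the examples below): two containers in one pod, where the busybox sidecar reaches nginx over 127.0.0.1 because both share the root (pause) container's network namespace.

apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo        # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  - name: sidecar
    image: busybox:1.30
    # polls nginx over the pod-local loopback address every 5 seconds
    command: ["/bin/sh","-c","while true; do wget -q -O /dev/null http://127.0.0.1:80; sleep 5; done"]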

2: Pod definition

The pod resource manifest:

Attributes can be looked up level by level with kubectl explain:

[root@master /]# kubectl  explain pod
#view second-level attributes
[root@master /]# kubectl  explain pod.metadata

Overview

apiVersion: the API version
#list all available versions
[root@master /]# kubectl  api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1

kind: the resource type
#list all resource types
[root@master /]# kubectl  api-resources 

metadata: metadata such as the resource's name, labels, and so on
[root@master /]# kubectl  explain  pod.metadata 

status: status information, generated automatically; you do not define it yourself
[root@master /]# kubectl  get pods -o yaml

spec: defines the detailed information of the resource.
Its main sub-attributes:
containers    <[]Object>  list of containers, used to define the containers' details
nodeName      <string>    schedules the pod onto the node with this name, i.e. which node the pod runs on
nodeSelector  <map>       pod label selector; schedules the pod onto a Node carrying these labels
hostNetwork   <boolean>   defaults to false, and k8s assigns the pod its own IP; if set to true, the pod uses the host's network
volumes       <[]Object>  storage volumes, used to define the storage mounted into the pod
restartPolicy <string>    restart policy, i.e. what the pod does when a failure occurs
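As a rough illustration of how these spec-level fields fit together, here is a sketch (the node label, volume, and names are invented for illustration; nodeName is shown commented out because it would bypass the scheduler entirely):

apiVersion: v1
kind: Pod
metadata:
  name: spec-fields-demo            # hypothetical name
  namespace: dev
spec:
  # nodeName: node1                 # would pin the pod directly to node1
  nodeSelector:
    nodeenv: pro                    # only schedule onto nodes carrying this label
  hostNetwork: false                # keep the pod on its own IP rather than the host's
  restartPolicy: Always             # restart containers whenever they fail
  volumes:
  - name: tmp-volume
    emptyDir: {}                    # a throwaway volume that lives as long as the pod
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts:
    - name: tmp-volume
      mountPath: /tmp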

3: Pod configuration

This is mainly about the pod.spec.containers attribute.

Some of its fields are arrays, meaning several values can be given, while others take a single value; distinguish them case by case.

[root@master /]# kubectl  explain pod.spec.containers
KIND:       Pod
VERSION:    v1

name: the container's name
image: the image the container needs
imagePullPolicy: image pull policy, local or remote
command: the container's startup command list; if not specified, the command baked into the image is used   string
args: the argument list for the startup command above   string
env: the container's environment variable configuration   object
ports: the list of ports the container exposes   object
resources: resource limits and resource requests   object

1. Basic configuration

[root@master ~]# cat pod-base.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-base
  namespace: dev
  labels:
    user: qqqq
spec:
   containers:
     - name: nginx
       image: nginx:1.17.1
     - name: busybox
       image: busybox:1.30

A simple Pod configuration with 2 containers:
nginx: a lightweight web server
busybox: a small collection of Linux command-line tools

[root@master ~]# kubectl create  -f pod-base.yaml 
pod/pod-base created

#check the Pod status
READY: the pod has 2 containers but only one is ready; the other has not started
RESTARTS: the restart count; one container keeps failing, so the Pod keeps restarting it trying to recover
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS             RESTARTS      AGE
pod-base   1/2     CrashLoopBackOff   4 (29s ago)   2m36s

#view the pod details
[root@master ~]# kubectl describe  pods pod-base -n dev
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m51s                 default-scheduler  Successfully assigned dev/pod-base to node2
  Normal   Pulling    4m51s                 kubelet            Pulling image "nginx:1.17.1"
  Normal   Pulled     4m17s                 kubelet            Successfully pulled image "nginx:1.17.1" in 33.75s (33.75s including waiting)
  Normal   Created    4m17s                 kubelet            Created container nginx
  Normal   Started    4m17s                 kubelet            Started container nginx
  Normal   Pulling    4m17s                 kubelet            Pulling image "busybox:1.30"
  Normal   Pulled     4m9s                  kubelet            Successfully pulled image "busybox:1.30" in 8.356s (8.356s including waiting)
  Normal   Created    3m27s (x4 over 4m9s)  kubelet            Created container busybox
  Normal   Started    3m27s (x4 over 4m9s)  kubelet            Started container busybox
  Warning  BackOff    2m59s (x7 over 4m7s)  kubelet            Back-off restarting failed container busybox in pod pod-base_dev(2e9aeb3f-2bec-4af5-853e-2d8473e115a7)
  Normal   Pulled     2m44s (x4 over 4m8s)  kubelet            Container image "busybox:1.30" already present on machine  

We will fix this problem shortly.

2. Image pull

imagePullPolicy

If one container's image exists locally and another's does not, this parameter controls whether the local or the remote image is used.

Possible values of imagePullPolicy:

  Always: always pull the image from the remote registry

  IfNotPresent: use the local image if it exists, otherwise pull it from the remote registry

  Never: only ever use the local image, never pull from the remote registry

If the image tag is a specific version number, the default is IfNotPresent;

if the tag is latest, the default policy is Always.

[root@master ~]# cat pod-policy.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-imagepullpolicy
  namespace: dev
  labels:
    user: qqqq
spec:
   containers:
     - name: nginx
       image: nginx:1.17.2
       imagePullPolicy: Never
     - name: busybox
       image: busybox:1.30

[root@master ~]# kubectl create -f pod-policy.yaml 
pod/pod-imagepullpolicy created

#check the pod status
[root@master ~]# kubectl get pods -n dev
NAME                  READY   STATUS             RESTARTS        AGE
pod-base              1/2     CrashLoopBackOff   9 (3m59s ago)   25m
pod-imagepullpolicy   0/2     CrashLoopBackOff   1 (9s ago)      19s

#view the details
[root@master ~]# kubectl describe  pods pod-imagepullpolicy -n dev 
Events:
  Type     Reason             Age                From               Message
  ----     ------             ----               ----               -------
  Normal   Scheduled          64s                default-scheduler  Successfully assigned dev/pod-imagepullpolicy to node1
  Normal   Pulling            64s                kubelet            Pulling image "busybox:1.30"
  Normal   Pulled             56s                kubelet            Successfully pulled image "busybox:1.30" in 8.097s (8.097s including waiting)
  Normal   Created            39s (x3 over 56s)  kubelet            Created container busybox
  Normal   Started            39s (x3 over 56s)  kubelet            Started container busybox
  Normal   Pulled             39s (x2 over 55s)  kubelet            Container image "busybox:1.30" already present on machine
  Warning  ErrImageNeverPull  38s (x6 over 64s)  kubelet            Container image "nginx:1.17.2" is not present with pull policy of Never
  Warning  Failed             38s (x6 over 64s)  kubelet            Error: ErrImageNeverPull
  Warning  BackOff            38s (x3 over 54s)  kubelet            Back-off restarting failed container busybox in pod pod-imagepullpolicy_dev(38d5d2ff-6155-4ff3-ad7c-8b7f4a370107)

#an error is reported right away: the image pull failed

#fix: change the policy to IfNotPresent
[root@master ~]# kubectl  delete  -f pod-policy.yaml 
[root@master ~]# kubectl  apply  -f pod-policy.yaml 
[root@master ~]# kubectl  get pods -n dev
[root@master ~]# kubectl  get pods -n dev
NAME                  READY   STATUS             RESTARTS         AGE
pod-base              1/2     CrashLoopBackOff   11 (2m34s ago)   34m
pod-imagepullpolicy   1/2     CrashLoopBackOff   4 (63s ago)      2m55s
Now the image is pulled successfully.

3. Startup commands

command: the container's startup command list; if not specified, the command used when the image was built is used

args: the list of arguments for the startup command

Why is busybox not running? busybox is not a long-running program but a collection of tools, so it exits on its own; the fix is to keep it running, and that is what the command field is for.

[root@master ~]# cat command.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-command
  namespace: dev
spec:
   containers:
     - name: nginx
       image: nginx:1.17.1
     - name: busybox
       image: busybox:1.30
       command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hell0.txt;sleep 3;done;"]


#/bin/sh runs a shell
-c: the string that follows is executed as a command
It writes the current time into the file, sleeps for 3 seconds, and repeats, so there is always a process running.

[root@master ~]# kubectl  create -f command.yaml 
pod/pod-command created

#now both containers are up
[root@master ~]# kubectl get pods -n dev
NAME          READY   STATUS    RESTARTS   AGE
pod-command   2/2     Running   0          6s

#exec into the busybox container
[root@master ~]# kubectl  exec pod-command -n dev -it -c busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # 

We are now inside the container.
/ # cat /tmp/hell0.txt    # because this process keeps running, the container is not shut down

  

Note: since command already covers both the startup command and its arguments, why does k8s still provide a separate args option? This has to do with Docker: together these two fields override the ENTRYPOINT (and CMD) defined in the image's Dockerfile.

When k8s runs an image built from a Dockerfile, command and args replace the Dockerfile configuration as follows (a sketch of case 4 follows this list).

Cases:

  1. If neither command nor args is set, the Dockerfile's configuration is used

  2. If command is set but args is not, the Dockerfile's defaults are ignored and the given command is executed

  3. If command is not set but args is, the ENTRYPOINT configured in the Dockerfile is executed with the given args as its arguments

  4. If both are set, the Dockerfile configuration is ignored and command is executed with the args appended
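A small sketch of case 4 (both command and args set), assuming we want to override whatever the busybox image defines; the echo text is illustrative:

spec:
   containers:
     - name: busybox
       image: busybox:1.30
       command: ["/bin/sh", "-c"]            # replaces the image's ENTRYPOINT
       args: ["echo started; sleep 3600"]    # appended after command, replaces the image's CMD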

4. Environment variables (just for awareness)

env passes environment variables into the container; it is an array of objects.

Each entry is simply a key-value pair: a name plus a value.

[root@master ~]# cat pod-env.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-command
  namespace: dev
spec:
   containers:
     - name: nginx
       image: nginx:1.17.1
     - name: busybox
       image: busybox:1.30
       command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hell0.txt;sleep 3;done;"]
       env:
       - name: "username"
         value: "admin"
       - name: "password"
         value: "123456"


#create the Pod
[root@master ~]# kubectl create -f pod-env.yaml 
pod/pod-command created
[root@master ~]# kubectl get pods -n dev
NAME          READY   STATUS    RESTARTS   AGE
pod-command   2/2     Running   0          47s


#exec into the container
#the -c option can be omitted when the pod has only one container
[root@master ~]# kubectl  exec -ti pod-command -n dev -c busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # echo $username
admin
/ # echo password
password

5. Port settings (ports)

View the available port options:

[root@master ~]# kubectl  explain  pod.spec.containers.ports
ports 
   name: the port's name, which must be unique within the Pod
   containerPort: the port the container listens on
   hostPort: the port to expose on the host; if set, only one replica of the container can run on a given host, otherwise multiple Pods would conflict over the same port
   hostIP: the host IP to bind the external port to (usually omitted)
   protocol: the port protocol, one of TCP (default), UDP, SCTP
   

Example:

[root@master ~]# cat pod-port.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-ports
  namespace: dev
spec:
   containers:
   - name: nginx
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
       protocol: TCP

kubectl create -f pod-port.yaml 
[root@master ~]# kubectl get pod -n dev -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
pod-command   2/2     Running   0          27m     10.244.1.2   node2   <none>           <none>
pod-ports     1/1     Running   0          2m58s   10.244.2.2   node1   <none>           <none>

#to access the program inside the container, use the Pod's IP plus the container port
[root@master ~]# curl 10.244.2.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

6. Resource limits (resources)

Running containers consumes resources, so we can put resource limits on particular containers; if one container suddenly consumes a huge amount of memory, the other containers can no longer work properly and problems appear.

For example, we can state that container A may use only 600M of memory; if it uses more, that counts as a problem and the container is restarted.

There are 2 sub-options:

limits: caps the maximum resources a running container may use; when the container exceeds its limits it is terminated and restarted (upper bound)

requests: sets the minimum resources the container needs; if the environment cannot provide them, the container will not start (lower bound)

  Notes:

    1. These apply only to cpu and memory

Example:

[root@master ~]# cat pod-r.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
spec:
   containers:
   - name: nginx
     image: nginx:1.17.1
     resources:
        limits:
           cpu: "2"
           memory: "10Gi"
        requests:
            cpu: "1"
            memory: "10Mi"

kubectl create -f pod-r.yaml 
[root@master ~]# kubectl get pods -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-command     2/2     Running   0          41m
pod-ports       1/1     Running   0          16m
pod-resources   1/1     Running   0          113s


#requesting at least 10G of memory to start the container; it will not start
[root@master ~]# cat pod-r.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
spec:
   containers:
   - name: nginx
     image: nginx:1.17.1
     resources:
        limits:
           cpu: "2"
           memory: "10Gi"
        requests:
            cpu: "1"
            memory: "10G"
[root@master ~]# kubectl create -f pod-r.yaml 
pod/pod-resources created

#check the status
[root@master ~]# kubectl get pods -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-command     2/2     Running   0          44m
pod-ports       1/1     Running   0          19m
pod-resources   0/1     Pending   0          89s

#view the details
[root@master ~]# kubectl  describe  pods pod-resources -n dev

Units for cpu and memory:
cpu is given as a number of cores (or in millicores, e.g. 100m)
memory uses forms such as Gi, Mi, G, M

II: Pod Lifecycle

1: Concepts

The pod lifecycle generally refers to the span of time from when the Pod object is created until it terminates. It mainly includes the following stages:

  1. Pod creation

  2. Running the init containers: these are a kind of container, there can be any number of them, and they always run before the main containers

  3. Running the main containers

    post-start hook and pre-stop hook: commands run right after the container starts and right before it terminates, 2 special points

    liveness probes and readiness probes

  4. Pod termination

Over the whole lifecycle, a pod can be in 5 states (phases):

Pending: the apiserver has created the pod resource object, but it has not been scheduled yet, or its images are still being pulled

Running: the pod has been scheduled onto a node and all of its containers have been created by the kubelet

Succeeded: all containers in the Pod terminated successfully and will not be restarted; for example, a container runs for 30 seconds, prints something, and exits

Failed: all containers have terminated, but at least one terminated in failure, i.e. returned a non-zero exit status

Unknown: the apiserver cannot obtain the pod object's state, usually because of a network communication failure
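To check just the phase from the command line, a jsonpath query can be used (a sketch; substitute whichever pod you are inspecting):

[root@master ~]# kubectl get pod pod-base -n dev -o jsonpath='{.status.phase}'
#prints one of Pending / Running / Succeeded / Failed / Unknown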

2: Pod creation and termination

Pod creation process:

All the components watch the apiserver for changes.

As soon as the creation request is made, the apiserver responds and the pod information is stored in etcd.

scheduler: assigns a node to the pod and reports the result back to the apiserver.

The kubelet on that node sees a pod has been scheduled to it, calls docker to start the containers, and reports the result back to the apiserver.

The apiserver stores the received pod status information in etcd.

Pod termination process:

A service is the proxy for Pods; pods are accessed through the service.

A delete request is sent to the apiserver; the apiserver updates the pod's status and marks the pod as terminating. When the kubelet sees the terminating state, it starts the pod shutdown process.

3: Init containers

Init containers mainly do preparatory work for the main containers (environment preparation) and have 2 characteristics:

  1. An init container must run to completion; if it fails, k8s keeps restarting it until it completes successfully

  2. Init containers run in the order they are defined; the next one runs if and only if the previous one succeeded, otherwise it does not run

Use cases for init containers:

  Provide tools or custom code that the main container image does not include

  Init containers start serially before the application containers and must succeed, so they can be used to hold back the application container's startup until the conditions it depends on are satisfied

Example: nginx, mysql, redis. First try to connect to mysql; while it fails, keep retrying. Once it succeeds, try to connect to redis. Only when both conditions are met does the nginx main container start.

Test:

Assume mysql is at 192.168.109.201 and redis at 192.168.109.202.

[root@master ~]# cat pod-init.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-init
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
   initContainers:
   - name: test-mysql
     image: busybox:1.30
     command: ['sh','-c','until ping 192.168.109.201 -c 1; do echo waiting for mysql; sleep 2; done;']
   - name: test-redis
     image: busybox:1.30
     command: ['sh','-c','until ping 192.168.109.202 -c 1; do echo waiting for redis; sleep 2; done;']
#because the addresses do not exist yet, initialization fails
[root@master ~]# kubectl get pods -n dev
NAME       READY   STATUS                  RESTARTS      AGE
pod-init   0/1     Init:CrashLoopBackOff   3 (27s ago)   83s

#add the first address; the first init container can now succeed
[root@master ~]# ifconfig  ens33:1 192.168.109.201 netmask 255.255.255.0 up

#add the second address; the second init container can succeed too
[root@master ~]# ifconfig  ens33:2 192.168.109.202 netmask 255.255.255.0 up
[root@master ~]# kubectl get pods -n dev -w
NAME       READY   STATUS     RESTARTS   AGE
pod-init   0/1     Init:0/2   0          6s
pod-init   0/1     Init:1/2   0          13s
pod-init   0/1     Init:1/2   0          14s
pod-init   0/1     PodInitializing   0          27s
pod-init   1/1     Running           0          28s

The main container then runs successfully.

4: Main container hook functions

These are points in the main container's lifecycle where users are allowed to run their own code.

There are 2 such points:

postStart: the post-start hook, executed immediately after the container starts; if it succeeds the container keeps running, otherwise the container is restarted

preStop: the pre-stop hook, executed before the container is deleted (while it is in the terminating state); it blocks deletion of the container until it finishes successfully, after which the container is deleted

1. Hook handlers (three ways to define the action)

exec: run a command inside the container once

This is the most commonly used method.

lifecycle:
   postStart:
     exec:
       command:
        - cat
        - /tmp/healthy

tcpSocket: try to connect to the specified socket inside the current container, e.g. port 8080 of the container

lifecycle:
   postStart:
      tcpSocket:
         port: 8080   #tries to connect to port 8080

httpGet: issue an HTTP request to a URL from the current container

lifecycle:
   postStart:
    httpGet:
     path: URL path
     port: 80
     host: host address
     scheme: HTTP   # the protocol to use

Example:

apiVersion: v1
kind: Pod
metadata:
   name: pod-exec
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80  #the container's internal port; normally a service exposes the pod port and maps it outwards
     lifecycle:
       postStart:
         exec:   ###on startup, run a command that changes the default page content
            command: ["/bin/sh","-c","echo poststart > /usr/share/nginx/html/index.html"]
       preStop:
          exec:    ###when the container stops, pass -s quit to shut nginx down gracefully
             command: ["/usr/sbin/nginx","-s","quit"]

[root@master ~]# kubectl create -f pod-exec.yaml 
pod/pod-exec created
[root@master ~]# kubectl get pods -n dev -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod-exec   1/1     Running   0          53s   10.244.1.7   node1   <none>           <none>
pod-init   1/1     Running   0          27m   10.244.1.6   node1   <none>           <none>

Now just access the service of the container inside the pod:
the format is pod IP + container port.
[root@master ~]# curl 10.244.1.7:80
poststart

5: Container probes

Container probes check whether the application instance inside a container is working properly; they are a traditional mechanism for keeping the service available. If, after probing, an instance's state does not match expectations, k8s removes the problematic instance from service so that it carries no traffic. k8s provides 2 kinds of probes for this,

namely:

  liveness probes: check whether the application instance is running normally; if not, k8s restarts the container. They decide whether to restart the container.

  readiness probes: check whether the application instance can accept requests; if not, k8s does not forward traffic to it. For example, nginx may need to load many web files; while it is still loading, the service would otherwise consider nginx ready, and a request forwarded to it could not be served, so traffic is not forwarded there until the probe passes.

In other words, one service proxies many pods; without probes, a pod that has developed a problem would still receive requests.

Purpose:

  1. Find the pods that have problems

  2. Tell whether the service is ready

Three probe methods:

exec: an exit code of 0 means healthy

livenessProbe
   exec:
     command:
       - cat
       - /tmp/healthy

tcpSocket:

livenessProbe:
    tcpSocket:
       port: 8080

httpGet:

If the returned status code is between 200 and 399, the program is considered healthy, otherwise it is not.

livenessProbe:
    httpGet:
      path: /          # URL path
      port: 80         # port
      host: host address
      scheme: HTTP

Examples:

exec example:

[root@master ~]# cat pod-live-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-exec
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        exec:
          command: ["/bin/cat","/tmp/hello.txt"]   #由於沒有這個檔案,所以就會一直進行重啟

#because of the problem, the pod keeps restarting
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-exec            1/1     Running   0             38m
pod-init            1/1     Running   0             65m
pod-liveness-exec   1/1     Running   2 (27s ago)   97s

#view the pod's details
[root@master ~]# kubectl describe  pod -n dev pod-liveness-exec 
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m13s               default-scheduler  Successfully assigned dev/pod-liveness-exec to node2
  Normal   Pulling    2m12s               kubelet            Pulling image "nginx:1.17.1"
  Normal   Pulled     2m                  kubelet            Successfully pulled image "nginx:1.17.1" in 12.606s (12.606s including waiting)
  Normal   Created    33s (x4 over 2m)    kubelet            Created container main-container
  Normal   Started    33s (x4 over 2m)    kubelet            Started container main-container
  Warning  Unhealthy  33s (x9 over 113s)  kubelet            Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
  Normal   Killing    33s (x3 over 93s)   kubelet            Container main-container failed liveness probe, will be restarted
  Normal   Pulled     33s (x3 over 93s)   kubelet            Container image "nginx:1.17.1" already present on machine

#still restarting
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS             RESTARTS      AGE
pod-exec            1/1     Running            0             39m
pod-init            1/1     Running            0             66m
pod-liveness-exec   0/1     CrashLoopBackOff   4 (17s ago)   2m57s


#a working example
[root@master ~]# cat pod-live-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-exec
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        exec:
          command: ["/bin/ls","/tmp/"]
[root@master ~]# kubectl create -f pod-live-exec.yaml 
pod/pod-liveness-exec created

#it no longer keeps restarting
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS   AGE
pod-exec            1/1     Running   0          42m
pod-init            1/1     Running   0          69m
pod-liveness-exec   1/1     Running   0          56s

#view the details: no errors this time

  

tcpSocket:

[root@master ~]# cat tcp.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-tcp
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        tcpSocket:
           port: 8080    # probe port 8080 of the container


kubectl create -f tcp.yaml
#it keeps restarting because port 8080 cannot be reached
[root@master ~]# kubectl get pods -n dev
NAME               READY   STATUS    RESTARTS      AGE
pod-liveness-tcp   1/1     Running   5 (72s ago)   3m43s

#view the details
[root@master ~]# kubectl describe  pod -n dev pod-liveness-tcp  
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m22s                  default-scheduler  Successfully assigned dev/pod-liveness-tcp to node2
  Normal   Pulled     112s (x4 over 3m22s)   kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    112s (x4 over 3m22s)   kubelet            Created container main-container
  Normal   Started    112s (x4 over 3m22s)   kubelet            Started container main-container
  Normal   Killing    112s (x3 over 2m52s)   kubelet            Container main-container failed liveness probe, will be restarted
  Warning  Unhealthy  102s (x10 over 3m12s)  kubelet            Liveness probe failed: dial tcp 1

A working example:

[root@master ~]# cat tcp.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-tcp
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        tcpSocket:
           port: 80 

#check the result: no problems at all
[root@master ~]# kubectl describe  pods -n dev  pod-liveness-tcp 
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27s   default-scheduler  Successfully assigned dev/pod-liveness-tcp to node2
  Normal  Pulled     28s   kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    28s   kubelet            Created container main-container
  Normal  Started    28s   kubelet            Started container main-container

httpGet

[root@master ~]# cat tcp.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-http
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        httpGet:
           scheme: HTTP
           port: 80
           path: /hello   # http://127.0.0.1:80/hello


#it keeps restarting
[root@master ~]# kubectl describe pod  -n dev  pod-liveness-http 
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-liveness-http   1/1     Running   1 (17s ago)   48s
pod-liveness-tcp    1/1     Running   0             4m21s

#the working case
[root@master ~]# cat tcp.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-http
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        httpGet:
           scheme: HTTP
           port: 80
           path: /
[root@master ~]# kubectl describe  pods -n dev pod-liveness-http 
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  21s   default-scheduler  Successfully assigned dev/pod-liveness-http to node1
  Normal  Pulled     22s   kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    22s   kubelet            Created container main-container
  Normal  Started    22s   kubelet            Started container main-container

Additional probe options

[root@master ~]# kubectl explain pod.spec.containers.livenessProbe
initialDelaySeconds	<integer>  how many seconds to wait after the container starts before the first probe
timeoutSeconds	<integer>   probe timeout, default 1 second, minimum 1 second
periodSeconds	<integer>   how often to probe, default 10 seconds, minimum 1 second
failureThreshold	<integer>    how many consecutive failed probes count as failure, default 3, minimum 1
successThreshold	<integer>  how many consecutive successful probes count as success, default 1

Example:
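The original leaves this example blank; here is a minimal sketch combining these knobs with the httpGet probe used earlier (the pod name and the numbers are illustrative):

apiVersion: v1
kind: Pod
metadata:
   name: pod-liveness-tuned      # hypothetical name
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     livenessProbe:
        httpGet:
           scheme: HTTP
           port: 80
           path: /
        initialDelaySeconds: 30   # wait 30s after start before the first probe
        periodSeconds: 5          # probe every 5s
        timeoutSeconds: 2         # each probe times out after 2s
        failureThreshold: 3       # restart after 3 consecutive failures
        successThreshold: 1       # one success marks the container healthy again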

6: Restart policy

When a container probe finds a problem, k8s restarts the container in that Pod; this is governed by the pod's restart policy, which has three values:

  Always: automatically restart the container when it fails (the default)

  OnFailure: restart only when the container terminates with a non-zero exit code (abnormal termination)

  Never: never restart the container regardless of its state

The restart policy applies to all containers in the Pod. The first restart, when needed, happens immediately; subsequent restarts are delayed by the kubelet for increasing periods of 10s, 20s, ..., with 300s being the maximum delay.

Example:

apiVersion: v1
kind: Pod
metadata:
   name: restart-pod
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
     - name: nginx-port
       containerPort: 80
     livenessProbe:
        httpGet:
           scheme: HTTP
           port: 80
           path: /hello   # http://127.0.0.1:80/hello
   restartPolicy: Always

#it keeps restarting

#change the policy to Never
when the probe fails, the container is not restarted; it simply stops
the status then shows Completed
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS      RESTARTS      AGE
pod-liveness-http   1/1     Running     1 (16h ago)   16h
pod-liveness-tcp    1/1     Running     1 (22m ago)   16h
restart-pod         0/1     Completed   0             41s

[root@master ~]# kubectl describe  pod -n dev  restart-pod 

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  84s                default-scheduler  Successfully assigned dev/restart-pod to node1
  Normal   Pulled     84s                kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    84s                kubelet            Created container main-container
  Normal   Started    84s                kubelet            Started container main-container
  Warning  Unhealthy  55s (x3 over 75s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    55s                kubelet            Stopping container main-container

III: Pod Scheduling

By default, which node a Pod runs on is computed by the scheduler component using its algorithms, and this process is not under manual control. In practice, however, we often need to control which node a pod runs on, and that requires scheduling rules. There are four broad categories of scheduling:

Automatic scheduling: computed entirely by the scheduler's algorithm

Directed scheduling: via the nodeName attribute (the node's name) or nodeSelector (labels)

Affinity scheduling: nodeAffinity (node affinity), podAffinity (pod affinity), podAntiAffinity (the opposite of pod affinity, so the pod goes to the other side)

Taints and tolerations: taints are set on the node and repel pods; tolerations are set on the pod and allow it to tolerate a node's taint and be scheduled there anyway

1: Directed scheduling

The pod declares nodeName or nodeSelector and is scheduled onto the specified node accordingly. This is mandatory: even if the node does not exist, the pod is still "scheduled" there, it just fails to run.

1. nodeName

Forced scheduling: it skips the scheduler's logic entirely and places the pod directly on the named node.

[root@master ~]# cat pod-nodename.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-nodename
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
     ports:
   nodeName: node1

[root@master ~]# kubectl create  -f pod-nodename.yaml 
pod/pod-nodename created
#it runs on node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS      AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-liveness-http   1/1     Running   1 (16h ago)   17h   10.244.2.8    node1   <none>           <none>
pod-liveness-tcp    1/1     Running   1 (42m ago)   17h   10.244.1.7    node2   <none>           <none>
pod-nodename        1/1     Running   0             41s   10.244.2.10   node1   <none>           <none>

#change the node to one that does not exist; the pod simply stays Pending
[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS      AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod-liveness-http   1/1     Running   1 (16h ago)   17h   10.244.2.8   node1   <none>           <none>
pod-liveness-tcp    1/1     Running   1 (43m ago)   17h   10.244.1.7   node2   <none>           <none>
pod-nodename        0/1     Pending   0             9s    <none>       node3   <none>           <none>  

2. nodeSelector

This looks at the labels on nodes; it is a label selector, and it is mandatory.

[root@master ~]# kubectl label  nodes node1 nodeenv=pro
node/node1 labeled
[root@master ~]# kubectl label  nodes node2 nodeenv=test
node/node2 labeled
[root@master ~]# cat pod-selector.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-select
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
   nodeSelector:
       nodeenv: pro

[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS      AGE     IP            NODE    NOMINATED NODE   READINESS GATES
pod-liveness-http   1/1     Running   1 (17h ago)   17h     10.244.2.8    node1   <none>           <none>
pod-liveness-tcp    1/1     Running   1 (51m ago)   17h     10.244.1.7    node2   <none>           <none>
pod-select          1/1     Running   0             2m16s   10.244.2.11   node1   <none>           <none>

#a label value that does not exist
change the value to pr1 and scheduling fails
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-liveness-http   1/1     Running   1 (17h ago)   17h
pod-liveness-tcp    1/1     Running   1 (51m ago)   17h
pod-select          0/1     Pending   0             5s

2: Affinity scheduling

The problem above is that directed scheduling is mandatory: if no suitable node exists, the Pod fails to be scheduled.

Affinity instead declares a preferred target: if a matching node is found, schedule there; otherwise, pick another. That is affinity.

  nodeAffinity: node affinity, targets nodes, mainly via their labels

  podAffinity: pod affinity, targets running pods; e.g. a web pod needs to sit next to a mysql pod, so one of them is labeled and the other seeks it out

  podAntiAffinity: pod anti-affinity, targets pods; declares which pods it does not want to be near, so it goes elsewhere

Scenario notes:

If 2 applications interact frequently, it is worth using affinity to keep them as close as possible, reducing the performance cost of network communication; once both land on the same node, that overhead shrinks.

Anti-affinity use case:

When an application is deployed with multiple replicas, anti-affinity should be used to spread the instances across different nodes, which improves availability.

The replicas have identical functionality, so with anti-affinity they end up on different nodes; if one node fails, the others keep serving.

Parameters:

[root@master ~]# kubectl explain pod.spec.affinity.nodeAffinity

requiredDuringSchedulingIgnoredDuringExecution  the node must satisfy all of the specified rules; a hard constraint
   nodeSelectorTerms: list of node selector terms
       matchFields: node selector requirements listed by node field
       matchExpressions: node selector requirements listed by node label
         key:
         values:
         operator: relational operator; supports In, NotIn, Exists, etc.

If some node satisfies the conditions, the pod is scheduled there; if none does, scheduling fails.

preferredDuringSchedulingIgnoredDuringExecution 	<NodeSelector>  a soft constraint; nodes that satisfy these rules are preferred
    preference    a node selector term, associated with the corresponding weight
            matchFields: node selector requirements listed by node field
            matchExpressions: node selector requirements listed by node label
                 key: the key
                 values:
                 operator:
    weight: preference weight, 1~100  ##i.e. preferred scheduling
 

If no node matches, the pod is scheduled onto some other node.

Operators:
- key: nodeenv    # matches nodes that have a label whose key is nodeenv
  operator: Exists
- key: nodeenv    # matches nodes whose label nodeenv has value xxx or yyy
  operator: In
  values: ['xxx','yyy']

  

1. nodeAffinity

Node affinity comes in 2 forms, hard constraints and soft constraints, selecting by the labels on nodes.

[root@master ~]# cat pod-aff-re.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-aff
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
   affinity:
       nodeAffinity:   ##affinity settings
          requiredDuringSchedulingIgnoredDuringExecution:  #node affinity, hard constraint
             nodeSelectorTerms:
             - matchExpressions:   #match nodes whose nodeenv label value is in [xxx,yyy]
               - key: nodeenv
                 operator: In
                 values: ["xxx","yyy"]
[root@master ~]# kubectl create -f pod-aff-re.yaml 
pod/pod-aff created
[root@master ~]# kubectl get pod -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-aff             0/1     Pending   0             23s
pod-liveness-http   1/1     Running   1 (17h ago)   18h
pod-liveness-tcp    1/1     Running   1 (94m ago)   18h
pod-select          0/1     Pending   0             43m

#scheduling fails

#change the value to pro and it can be scheduled onto node1
[root@master ~]# kubectl create -f pod-aff-re.yaml 
pod/pod-aff created
[root@master ~]# kubectl get pods -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-aff             1/1     Running   0             5s
pod-liveness-http   1/1     Running   1 (17h ago)   18h
pod-liveness-tcp    1/1     Running   1 (96m ago)   18h
pod-select          0/1     Pending   0             45m

  

Soft constraint

#soft constraint
[root@master ~]# cat pod-aff-re.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-aff
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
   affinity:
     nodeAffinity:
       preferredDuringSchedulingIgnoredDuringExecution:   #soft constraint
       - weight: 1    
         preference:
            matchExpressions:
            - key: nodeenv
              operator: In
              values: ["xxx","yyy"] 

#it is scheduled onto node2 anyway
[root@master ~]# kubectl get pods -n dev -o wide
NAME                READY   STATUS    RESTARTS       AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-aff             1/1     Running   0              41s   10.244.1.9   node2    <none>           <none>
pod-liveness-http   1/1     Running   1 (17h ago)    18h   10.244.2.8   node1    <none>           <none>
pod-liveness-tcp    1/1     Running   1 (102m ago)   18h   10.244.1.7   node2    <none>           <none>
pod-select          0/1     Pending   0              50m   <none>       <none>   <none>           <none>

Note:

If both nodeSelector and nodeAffinity are defined, both conditions must be satisfied for the pod to run on a node
If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is enough
If one nodeSelectorTerm contains multiple matchExpressions, a node must satisfy all of them to match
If the labels on a Pod's node change while the pod is running such that they no longer satisfy the pod's node affinity, the system ignores the change

Affinity only takes effect at scheduling time, so once the pod has been scheduled successfully, later label changes do nothing to that pod.
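A small sketch of the OR/AND semantics described above (the disktype label is invented for illustration): the two nodeSelectorTerms are ORed, while the two matchExpressions inside the first term are ANDed.

   affinity:
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
         - matchExpressions:            # term 1: nodeenv in [pro] AND disktype exists
           - key: nodeenv
             operator: In
             values: ["pro"]
           - key: disktype
             operator: Exists
         - matchExpressions:            # term 2, ORed with term 1: nodeenv in [test]
           - key: nodeenv
             operator: In
             values: ["test"]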

2. podAffinity

Uses running pods as the reference, again with hard and soft constraints.

kubectl explain pod.spec.affinity.podAffinity

requiredDuringSchedulingIgnoredDuringExecution   hard constraint
    namespaces: the namespace of the reference pod; if not specified, it defaults to the same namespace as this pod
    topologyKey: the scheduling scope, e.g. co-locate on the same node, the same network segment, or the same operating system
                        ###with kubernetes.io/hostname, nodes are the unit of distinction, so the pod is scheduled onto the same node as the reference pod
                                with an OS label, the operating system is the unit, so the pod is scheduled onto a node with the same OS as the reference pod
     labelSelector: label selector
          matchExpressions: list of selector requirements by label
               key:
               values:
               operator:
          matchLabels: equivalent to multiple matchExpressions, written as a mapping
preferredDuringSchedulingIgnoredDuringExecution  soft constraint
    namespaces: the namespace of the reference pod; if not specified, it defaults to the same namespace as this pod
    topologyKey: the scheduling scope, as above
                        ###with kubernetes.io/hostname, nodes are the unit of distinction
                                with an OS label, the operating system is the unit
     labelSelector: label selector
          matchExpressions: list of selector requirements by label
               key:
               values:
               operator:
          matchLabels: equivalent to multiple matchExpressions, written as a mapping
    weight: preference weight 1~100

Examples:

Soft affinity:

apiVersion: v1
kind: Pod
metadata:   #metadata
   name: pods-1   #the pod's name
   namespace: dev   #namespace
spec:   
  containers:   #containers
    - name: my-tomcat   #the container's name
      image: tomcat    #the image to pull
      imagePullPolicy: IfNotPresent   #use the local image if present, otherwise pull from remote
  affinity:
     podAffinity:   #pod affinity
       preferredDuringSchedulingIgnoredDuringExecution:   #soft constraint
       - weight: 1    #weight of 1
         podAffinityTerm:    #the concrete pod affinity condition
          labelSelector:    #label selector
             matchExpressions:   #one or more label match expressions
                 - key: user   #the label's key
                   operator: In   
                   values:    #the label's values
                      - "qqqq"
          topologyKey: kubernetes.io/hostname   #distinguish by host


This pod will preferably be scheduled onto a node that is already running a pod labeled user=qqqq.

Hard affinity:

apiVersion: v1
kind: Pod
metadata:
   name: pod-5
   namespace: dev
spec:
  containers:
    - name: my-tomcat
      image: tomcat
      imagePullPolicy: IfNotPresent
  affinity:
     podAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:  #hard constraint
         - labelSelector:   #label selector
             matchExpressions:    #match list
                 - key: user   
                   operator: In
                   values: ["qqqq"]   
            topologyKey: kubernetes.io/hostname    #distinguish by host

  

  

3. Anti-affinity

The pod avoids nodes that run the referenced pods and is scheduled somewhere else instead.

Example:

[root@master mnt]# cat podaff.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: podaff
   namespace: dev
spec:
   containers:
   - name: main-container
     image: nginx:1.17.1
   affinity:
     podAntiAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
       - labelSelector:
           matchExpressions:
           - key: podenv
             operator: In
             values: ["pro"]
         topologyKey: kubernetes.io/hostname

We find it is created on node2:
[root@master mnt]# kubectl get pods -n dev -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
pod-podaff   1/1     Running   0          61m     10.244.2.14   node1   <none>           <none>
podaff       1/1     Running   0          2m57s   10.244.1.12   node2   <none>           <none>

3: Taints

Everything so far was configured from the pod's point of view. We can also work from the node's point of view and decide whether pods may be scheduled onto it; this information placed on the node is called a taint.

It is a rejection policy.

What taints do:

  They can refuse to let Pods be scheduled onto the node

  They can even evict pods that already exist on the node

Taint format:

key=value:effect

key and value are the taint's label; effect describes what the taint does.

The three options for effect are:

  PreferNoSchedule: k8s tries to avoid scheduling Pods onto nodes with this taint, unless no other node is available

  NoSchedule: k8s will not schedule new pods onto the tainted node, but pods already on the node are not affected

  NoExecute: k8s will not schedule Pods onto the tainted node and will also evict the Pods already on it, leaving no pods at all

Setting taints:

#set a taint
[root@master mnt]# kubectl taint  nodes node1 key=value:effect

#remove a taint (by key and effect)
[root@master mnt]# kubectl taint  nodes node1 key:effect-

#remove all taints with this key
[root@master mnt]# kubectl taint  nodes node1 key-

  

Example:

Prepare node1 and temporarily shut down node2
Give node1 the taint tag=heima:PreferNoSchedule, then create pod1
Change node1's taint to tag=heima:NoSchedule, then create pod2: new pods are no longer accepted, but existing ones do not leave
Change node1's taint to tag=heima:NoExecute, then create pod3: pod3 cannot be created either, and no pods remain on the node

#just shut down node2

#taint node1
[root@master mnt]# kubectl taint  nodes node1 tag=heima:PreferNoSchedule
node/node1 tainted
#view the taint
[root@master mnt]# kubectl describe  nodes -n dev node1| grep heima
Taints:             tag=heima:PreferNoSchedule

#the first pod can run
[root@master mnt]# kubectl run taint1 --image=nginx:1.17.1 -n dev
pod/taint1 created
[root@master mnt]# kubectl  get pods -n dev 
NAME         READY   STATUS        RESTARTS   AGE
pod-podaff   1/1     Running       0          90m
podaff       1/1     Terminating   0          31m
taint1       1/1     Running       0          6s

#change node1's taint
[root@master mnt]# kubectl taint  nodes node1 tag=heima:PreferNoSchedule-
node/node1 untainted

[root@master mnt]# kubectl taint  nodes node1 tag=heima:NoSchedule
node/node1 tainted

#the first pod keeps running normally, the second cannot run
[root@master mnt]# kubectl run taint2 --image=nginx:1.17.1 -n dev
pod/taint2 created
[root@master mnt]# kubectl get pods -n dev
NAME         READY   STATUS        RESTARTS   AGE
pod-podaff   1/1     Running       0          94m
podaff       1/1     Terminating   0          35m
taint1       1/1     Running       0          3m35s
taint2       0/1     Pending       0          3s

#the third taint level
[root@master mnt]# kubectl taint  nodes node1 tag=heima:NoSchedule-
node/node1 untainted
#set the new level
[root@master mnt]# kubectl taint  nodes node1 tag=heima:NoExecute
node/node1 tainted
#new pods cannot be created either, and existing pods are evicted
[root@master mnt]# kubectl run taint3 --image=nginx:1.17.1 -n dev
pod/taint3 created
[root@master mnt]# kubectl get pods -n dev
NAME     READY   STATUS        RESTARTS   AGE
podaff   1/1     Terminating   0          39m
taint3   0/1     Pending       0          4s

  

This is why newly created pods are never scheduled onto the master node: it carries a taint.
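You can confirm this on your own cluster (a sketch; the exact taint key depends on the k8s version — kubeadm clusters typically use node-role.kubernetes.io/control-plane or node-role.kubernetes.io/master with the NoSchedule effect):

[root@master mnt]# kubectl describe node master | grep Taints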

4. Tolerations

A toleration means ignoring a taint: the node has a taint, but the pod has a toleration for it, so the taint is ignored and the pod can still be scheduled there.

Example:

apiVersion: v1
kind: Pod
metadata:
   name: pod-aff
   namespace: dev
spec:
   containers:    
   - name: main-container
     image: nginx:1.17.1
   tolerations:     #add a toleration
   - key: "tag"    #the key of the taint to tolerate
     operator: "Equal"     #the operator
     value: "heima"            #the value of the taint to tolerate
     effect: "NoExecute"    #the toleration's effect, which must match the taint's effect

#first create a pod without a toleration and see whether it can be scheduled
#it cannot be scheduled
[root@master mnt]# kubectl get pods -n dev
NAME      READY   STATUS        RESTARTS   AGE
pod-aff   0/1     Pending       0          6s
podaff    1/1     Terminating   0          55m

#create the pod with the toleration
[root@master mnt]# kubectl create -f to.yaml 
pod/pod-aff created
[root@master mnt]# kubectl get pods -n dev
NAME      READY   STATUS        RESTARTS   AGE
pod-aff   1/1     Running       0          3s
podaff    1/1     Terminating   0          57m

  

Toleration details

key: the key of the taint to tolerate; empty means match all keys
value: the value of the taint to tolerate
operator: the key-value operator, supports Equal (the default) and Exists; with Exists only the key matters and the value is ignored
effect: the effect of the taint to tolerate; empty means match all effects
tolerationSeconds    how long to tolerate; only meaningful when effect is NoExecute, it is how long the pod may remain on the node
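A sketch combining these fields, assuming the same tag=heima:NoExecute taint as above: with operator Exists the value is omitted, and tolerationSeconds lets the pod stay on the tainted node for 60 seconds before being evicted.

   tolerations:
   - key: "tag"                 #tolerate any taint whose key is "tag"
     operator: "Exists"         #with Exists, the value is ignored
     effect: "NoExecute"
     tolerationSeconds: 60      #stay at most 60s on the tainted node before eviction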

  

IV: Pod Controllers

1. Introduction to pod controllers

1: Pod categories:

  Standalone pods: pods created directly in k8s; once deleted they are gone and are not recreated

  Controller-managed pods: Pods created through a controller; if such a pod is deleted, it is automatically recreated

Purpose

A pod controller is an intermediate layer that manages pods. With a controller, we only need to tell it how many pods we want; it creates pods that satisfy the spec and keeps them in the user's desired state. If a running pod fails, the controller restarts or recreates it according to the configured policy.

2: Controller types

ReplicaSet: keeps the specified number of pods running; supports changing that number

Deployment: controls pods by controlling ReplicaSets; supports rolling upgrades and version rollback

Horizontal Pod Autoscaler: automatically adjusts the number of pods based on cluster load

2: Controllers in detail

ReplicaSet (rs)

Ensures that the specified number of Pods keep running and continuously watches the pods' status.

Supports scaling the number of pods up and down.

Example: replica count

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pc-replicaset   #name of the pod controller
  namespace: dev  
spec:
   replicas: 3   #number of pods to create
   selector:   #pod label selector rule; pods labeled app=nginx-pod are managed, so the pod template must carry the same label
     matchLabels:    #label selector rule
      app: nginx-pod  
   template:   #the pod template used to create the replicas
     metadata:    #pod metadata
       labels:    #labels on the pod
         app: nginx-pod    
     spec:    
       containers:   #containers
         - name: nginx   
           image: nginx:1.17.1  


#view the controller
[root@master ~]# kubectl get rs -n dev
NAME            DESIRED   CURRENT   READY   AGE
pc-replicaset   3         3         3       70s
DESIRED: the desired number of pods
CURRENT: how many currently exist
READY: how many are ready to serve

#view the pods
[root@master ~]# kubectl get rs,pods -n dev
NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-replicaset   3         3         3       2m31s

NAME                      READY   STATUS    RESTARTS      AGE
pod/pc-replicaset-448tq   1/1     Running   0             2m31s
pod/pc-replicaset-9tdhd   1/1     Running   0             2m31s
pod/pc-replicaset-9z64w   1/1     Running   0             2m31s
pod/pod-pod-affinity      1/1     Running   1 (47m ago)   12h  

Example 2: scaling pods up and down

#edit the yaml with kubectl edit
[root@master ~]# kubectl edit rs -n dev pc-replicaset 
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get pods -n dev
NAME                  READY   STATUS    RESTARTS      AGE
pc-replicaset-448tq   1/1     Running   0             10m
pc-replicaset-9tdhd   1/1     Running   0             10m
pc-replicaset-9z64w   1/1     Running   0             10m
pc-replicaset-q6ps9   1/1     Running   0             94s
pc-replicaset-w5krn   1/1     Running   0             94s
pc-replicaset-zx8gw   1/1     Running   0             94s
pod-pod-affinity      1/1     Running   1 (55m ago)   12h
[root@master ~]# kubectl get rs -n dev
NAME            DESIRED   CURRENT   READY   AGE
pc-replicaset   6         6         6       10m


#second method: kubectl scale
[root@master ~]# kubectl scale  rs -n dev pc-replicaset --replicas=2 -n dev
replicaset.apps/pc-replicaset scaled
[root@master ~]# kubectl get rs,pod -n dev 
NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-replicaset   2         2         2       12m

NAME                      READY   STATUS    RESTARTS      AGE
pod/pc-replicaset-448tq   1/1     Running   0             12m
pod/pc-replicaset-9tdhd   1/1     Running   0             12m
pod/pod-pod-affinity      1/1     Running   1 (57m ago)   12h

Example 3: upgrading the image version

#edit the image version
[root@master ~]# kubectl edit rs -n dev pc-replicaset 
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get rs -n dev pc-replicaset -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       15m   nginx        nginx:1.17.2   app=nginx-pod

#this can also be done with a command, but kubectl edit is usually enough
[root@master ~]# kubectl get rs -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       17m   nginx        nginx:1.17.1   app=nginx-pod

  

Example 4: deleting a ReplicaSet

This deletes the pods first and then the controller.

#delete using the file
root@master ~]# kubectl delete -f replicas.yaml 
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get rs -n dev
No resources found in dev namespace.

#delete using the command
[root@master ~]# kubectl delete rs -n dev pc-replicaset 
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get rs -n dev
No resources found in dev namespace.  

Deployment (deploy)

Supports all the features of an RS

Keeps historical versions, so versions can be rolled back

Supports rolling update strategies

The update strategies are covered below.

Example: creating a Deployment

[root@master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: pc-deployment
   namespace: dev
spec:
   replicas: 3
   selector:
      matchLabels:
       app: nginx-pod
   template:
      metadata:
         labels:
           app: nginx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.17.1
[root@master ~]# kubectl get deploy -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   3/3     3            3           53s

UP-TO-DATE: the number of pods already at the latest version
AVAILABLE: the number of currently available pods

#a ReplicaSet is also created behind the scenes
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-6cb555c765   3         3         3       2m9s  
Scaling:

Basically the same operations as before.

#scale with a command
[root@master ~]# kubectl scale deployment -n dev pc-deployment --replicas=5 
deployment.apps/pc-deployment scaled
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS      AGE
pc-deployment-6cb555c765-8qc9g   1/1     Running   0             4m52s
pc-deployment-6cb555c765-8xss6   1/1     Running   0             4m52s
pc-deployment-6cb555c765-m7wdf   1/1     Running   0             4s
pc-deployment-6cb555c765-plkbf   1/1     Running   0             4m52s
pc-deployment-6cb555c765-qh6gk   1/1     Running   0             4s
pod-pod-affinity                 1/1     Running   1 (81m ago)   13h

#or edit the resource
[root@master ~]# kubectl edit deployments.apps -n dev pc-deployment 
deployment.apps/pc-deployment edited
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS      AGE
pc-deployment-6cb555c765-8qc9g   1/1     Running   0             5m41s
pc-deployment-6cb555c765-8xss6   1/1     Running   0             5m41s
pc-deployment-6cb555c765-plkbf   1/1     Running   0             5m41s
pod-pod-affinity                 1/1     Running   1 (82m ago)   13h

 

Image updates

There are two strategies: recreate updates and rolling updates.

Recreate update:

Delete all old-version pods at once, then create the new-version pods.

Rolling update (the default):

Delete and replace a portion at a time; pods of the old version gradually decrease while pods of the new version gradually increase.

#Recreate strategy
#create the pods first and watch them in real time
[root@master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: pc-deployment
   namespace: dev
spec:
   strategy:
     type: Recreate
   replicas: 3
   selector:
      matchLabels:
       app: nginx-pod
   template:
      metadata:
         labels:
           app: nginx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.17.1

[root@master ~]# kubectl get pods -n dev -w

#then update the image version
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev

#watch
pc-deployment-6cb555c765-m92t8   0/1     Terminating   0             60s
pc-deployment-6cb555c765-m92t8   0/1     Terminating   0             60s
pc-deployment-6cb555c765-m92t8   0/1     Terminating   0             60s
pc-deployment-5967bb44bb-bbkzz   0/1     Pending       0             0s
pc-deployment-5967bb44bb-bbkzz   0/1     Pending       0             0s
pc-deployment-5967bb44bb-kxrn5   0/1     Pending       0             0s
pc-deployment-5967bb44bb-zxfwl   0/1     Pending       0             0s
pc-deployment-5967bb44bb-kxrn5   0/1     Pending       0             0s
pc-deployment-5967bb44bb-zxfwl   0/1     Pending       0             0s
pc-deployment-5967bb44bb-bbkzz   0/1     ContainerCreating   0             0s
pc-deployment-5967bb44bb-kxrn5   0/1     ContainerCreating   0             0s
pc-deployment-5967bb44bb-zxfwl   0/1     ContainerCreating   0             0s
pc-deployment-5967bb44bb-kxrn5   1/1     Running             0             1s

  

Rolling update:

[root@master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: pc-deployment
   namespace: dev
spec:
   strategy:
     type: RollingUpdate
     rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%
   replicas: 3
   selector:
      matchLabels:
       app: nginx-pod
   template:
      metadata:
         labels:
           app: nginx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.17.1

#update the image
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.3 -n dev
deployment.apps/pc-deployment image updated


Summary:

When the image version is updated, a new RS is created while the old RS is kept. New pods are created in the new RS and, one by one, pods are removed from the old RS, until the old RS has no pods left and all pods live in the new RS.

The old RS is kept so that the version can be rolled back.

Version rollback:

kubectl rollout undo rolls back to the previous version (or to a specified revision).

#record the deployment update history
[root@master ~]# kubectl create -f deploy.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment created
#after updating the version, the history shows the revisions
[root@master ~]# kubectl edit deployments.apps  -n dev pc-deployment 
deployment.apps/pc-deployment edited

[root@master ~]# kubectl rollout history deployment -n dev pc-deployment 
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deploy.yaml --record=true
2         kubectl create --filename=deploy.yaml --record=true
3         kubectl create --filename=deploy.yaml --record=true

#roll back directly to a specified revision; if none is given, the default is the previous revision

[root@master ~]# kubectl rollout undo deployment  -n dev  pc-deployment --to-revision=1
deployment.apps/pc-deployment rolled back   
#the rs changes too: the pods are back in the old rs
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5967bb44bb   0         0         0       4m11s
pc-deployment-6478867647   0         0         0       3m38s
pc-deployment-6cb555c765   3         3         3       5m28s
[root@master ~]# kubectl rollout  history deployment -n dev 
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
2         kubectl create --filename=deploy.yaml --record=true
3         kubectl create --filename=deploy.yaml --record=true
4         kubectl create --filename=deploy.yaml --record=true   #this one is now equivalent to revision 1

   

Canary release:

Deployments support controlling the update process: the update can be paused and resumed.

During the update, only a small portion of the application is updated while the majority stays on the old version. Some requests are sent to the new version: if it cannot serve them, roll back immediately; if it can, continue the update. This is called a canary release.

#update and immediately pause
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev && kubectl rollout pause deployment  -n dev pc-deployment 
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused

#the change in the rs
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5967bb44bb   1         1         1       21m
pc-deployment-6478867647   0         0         0       20m
pc-deployment-6cb555c765   3         3         3       22m

#one replica has already been updated
[root@master ~]# kubectl rollout  status  deployment  -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...

#send a request to test the new version

#resume the update
[root@master ~]# kubectl rollout  resume deployment  -n dev pc-deployment 
deployment.apps/pc-deployment resumed

#check the status
[root@master ~]# kubectl rollout  status  deployment  -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "pc-deployment" successfully rolled out

#view the rs
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5967bb44bb   3         3         3       24m
pc-deployment-6478867647   0         0         0       24m
pc-deployment-6cb555c765   0         0         0       26m  

HPA controller

In short, the HPA obtains each pod's utilisation, compares it to the target defined on the HPA, and if it is higher, pods are added automatically; when traffic drops, the added pods are removed again.

It scales the number of pods by monitoring pod load.

Install a component to obtain pod load metrics:

metrics-server collects resource usage in the cluster; both pods and nodes can be monitored.

# download the latest manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml

#on every server, pull the corresponding image from the Aliyun mirror
ctr image pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3

#modify the manifest
containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls  #add this to skip certificate verification
  image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3 #change the image to the Aliyun one pulled above

#apply the manifest
kubectl apply -f   components.yaml

#check the result
[root@master ~]# kubectl get pod -n kube-system 
NAME                              READY   STATUS    RESTARTS       AGE
coredns-66f779496c-88c5b          1/1     Running   33 (55m ago)   10d
coredns-66f779496c-hcpp5          1/1     Running   33 (55m ago)   10d
etcd-master                       1/1     Running   14 (55m ago)   10d
kube-apiserver-master             1/1     Running   14 (55m ago)   10d
kube-controller-manager-master    1/1     Running   14 (55m ago)   10d
kube-proxy-95x52                  1/1     Running   14 (55m ago)   10d
kube-proxy-h2qrf                  1/1     Running   14 (55m ago)   10d
kube-proxy-lh446                  1/1     Running   15 (55m ago)   10d
kube-scheduler-master             1/1     Running   14 (55m ago)   10d
metrics-server-6779c94dff-dflh2   1/1     Running   0              2m6s

View resource usage:

#view node usage
[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   104m         5%     1099Mi          58%       
node1    21m          1%     335Mi           17%       
node2    22m          1%     305Mi           16%       
#view pod usage
[root@master ~]# kubectl top pods -n dev
NAME        CPU(cores)   MEMORY(bytes)   
pod-aff     3m           83Mi            
pod-label   0m           1Mi     

For the HPA to work, the pod must have resource requests set;

then create the HPA with the manifest below.

Test:

[root@master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: nginx
   namespace: dev
spec:
   replicas: 1   #a single replica
   selector:
      matchLabels:
       app: nginx-pod    #label selector
   template:
      metadata:
         labels:
           app: nginx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.17.1
          resources:
             requests:
               cpu: 100m   #requests at least 100 millicores to start


#create the deployment
kubectl create  -f deploy.yaml 
#create the service
kubectl expose deployment  nginx --type=NodePort --port=80 -n dev

#create an hpa
[root@master ~]# cat hpa.yaml 
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
   name: pc-hpa
   namespace: dev
spec:
   minReplicas: 1
   maxReplicas: 10
   targetCPUUtilizationPercentage: 3   #target cpu utilisation of 3%, deliberately low for testing
   scaleTargetRef:  #the controller to scale
      apiVersion: apps/v1
      kind: Deployment   #a Deployment controller
      name: nginx


#view the hpa controller
[root@master ~]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   <unknown>/3%   1         10        0          5s
[root@master ~]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          114s


#run a load test to push cpu usage above 3%
[root@master ~]# cat f.sh 
while true
do
	curl 192.168.109.100:30843 &> /dev/null
done

[root@master ~]# kubectl get hpa -n dev -w
pc-hpa   Deployment/nginx   1%/3%     1         10        1          22m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          22m
pc-hpa   Deployment/nginx   42%/3%    1         10        1          25m
pc-hpa   Deployment/nginx   92%/3%    1         10        4          25m
pc-hpa   Deployment/nginx   23%/3%    1         10        8          25m
pc-hpa   Deployment/nginx   0%/3%     1         10        10         26m

[root@master ~]# kubectl get deployment -n dev -w
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           39m
nginx   1/4     1            1           60m
nginx   1/4     1            1           60m
nginx   1/4     1            1           60m
nginx   1/4     4            1           60m
nginx   2/4     4            2           60m
nginx   3/4     4            3           60m
nginx   4/4     4            4           60m
nginx   4/8     4            4           60m
nginx   4/8     4            4           60m
nginx   4/8     4            4           60m
nginx   4/8     8            4           60m
nginx   5/8     8            5           60m
nginx   6/8     8            6           60m
nginx   7/8     8            7           60m
nginx   8/8     8            8           60m
nginx   8/10    8            8           61m
nginx   8/10    8            8           61m
nginx   8/10    8            8           61m
nginx   8/10    10           8           61m
nginx   9/10    10           9           61m
nginx   10/10   10           10          61m

[root@master ~]# kubectl get pod -n dev -w
nginx-7f89875f58-gt67w   0/1     Pending             0          0s
nginx-7f89875f58-gt67w   0/1     Pending             0          0s
nginx-7f89875f58-545rj   0/1     Pending             0          0s
nginx-7f89875f58-gt67w   0/1     ContainerCreating   0          0s
nginx-7f89875f58-545rj   0/1     Pending             0          0s
nginx-7f89875f58-545rj   0/1     ContainerCreating   0          0s
nginx-7f89875f58-545rj   1/1     Running             0          1s
nginx-7f89875f58-gt67w   1/1     Running             0          1s

#When traffic drops again, the pods are scaled back down automatically; it just takes a while, since scale-down is deliberately slow
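If scale-down speed matters, the newer autoscaling/v2 API exposes a behavior section for tuning it. A sketch of the same autoscaler in v2 form; the behavior values are illustrative assumptions, not part of the original hpa.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 3
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60   #default is 300s; a shorter window means faster scale-down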

DaemonSet (DS) controller

A DaemonSet runs exactly one replica of a pod on every node, so it is node-level; typical uses are log collection, node monitoring, and so on.

When a node is removed from the cluster, its DaemonSet pod naturally disappears with it.

Example:

[root@master ~]# cat daemonset.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
   name: daemon
   namespace: dev
spec:
   selector:
      matchLabels:
        app: nginx-pod
   template:
        metadata:
          labels:
             app: nginx-pod
        spec:
          containers:
          - name: nginx
            image: nginx:1.17.1

[root@master ~]# kubectl get pod -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
daemon-g8b4v             1/1     Running   0          2m30s   10.244.1.102   node2   <none>           <none>
daemon-t5tmd             1/1     Running   0          2m30s   10.244.2.89    node1   <none>           <none>
nginx-7f89875f58-prf9c   1/1     Running   0          79m     10.244.2.84    node1   <none>           <none>

#One pod of the DaemonSet runs on each worker node
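No pod was scheduled on the master above because of the control-plane taint. If the DaemonSet should run there too, a toleration can be added to the pod template; a sketch (verify the actual taint key with kubectl describe node master, since older clusters use node-role.kubernetes.io/master instead):

apiVersion: apps/v1
kind: DaemonSet
metadata:
   name: daemon
   namespace: dev
spec:
   selector:
      matchLabels:
        app: nginx-pod
   template:
        metadata:
          labels:
             app: nginx-pod
        spec:
          tolerations:
          - key: node-role.kubernetes.io/control-plane   #assumed taint key; check your cluster
            operator: Exists
            effect: NoSchedule
          containers:
          - name: nginx
            image: nginx:1.17.1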

  

Job controller

A Job handles batch processing (a specified number of tasks processed in sequence) and one-off tasks (each task runs exactly once and then finishes).

When pods created by the Job finish successfully, the Job records the number of successfully completed pods.

When the number of successfully completed pods reaches the specified count, the Job is complete.

In short, Jobs are for one-off workloads.

Restart policy: it cannot be set to Always here, because these are one-off tasks; with Always the container would be restarted even after finishing.

Only OnFailure and Never are allowed:

OnFailure: when the pod fails, the container is restarted in place (no new pod is created), so the failed count does not increase.

Never: on failure, the failed pod neither disappears nor is restarted, and the failed count increases by 1.

Example:

[root@master ~]# cat jod.yaml 
apiVersion: batch/v1
kind: Job
metadata:
   name: pc-job
   namespace: dev
spec:
   manualSelector: true
   completions: 6  #6 pods must complete successfully in total
   parallelism: 3   #run 3 at a time, so it finishes in 2 rounds
   selector:
      matchLabels:
        app: counter-pod
   template:
        metadata:
          labels:
             app: counter-pod
        spec:
          restartPolicy: Never
          containers:
          - name: busybox
            image: busybox:1.30
            command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo $i;sleep 3;done"]

[root@master ~]# kubectl get job -n dev -w
NAME     COMPLETIONS   DURATION   AGE
pc-job   0/6                      0s
pc-job   0/6           0s         0s
pc-job   0/6           2s         2s
pc-job   0/6           29s        29s
pc-job   0/6           30s        30s
pc-job   3/6           30s        30s
pc-job   3/6           31s        31s
pc-job   3/6           32s        32s
pc-job   3/6           59s        59s
pc-job   3/6           60s        60s
pc-job   6/6           60s        60s
[root@master ~]# kubectl get pod -n dev -w
NAME                     READY   STATUS    RESTARTS   AGE
daemon-g8b4v             1/1     Running   0          20m
daemon-t5tmd             1/1     Running   0          20m
nginx-7f89875f58-prf9c   1/1     Running   0          97m
pc-job-z2gmb             0/1     Pending   0          0s
pc-job-z2gmb             0/1     Pending   0          0s
pc-job-z2gmb             0/1     ContainerCreating   0          0s
pc-job-z2gmb             1/1     Running             0          1s
pc-job-z2gmb             0/1     Completed           0          28s
pc-job-z2gmb             0/1     Completed           0          29s
pc-job-z2gmb             0/1     Completed           0          30s
pc-job-z2gmb             0/1     Completed           0          30s
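When a Job's pods can fail, two more spec fields are useful: backoffLimit (how many retries before the Job is marked failed) and activeDeadlineSeconds (an overall time limit). A sketch with illustrative values and a hypothetical name:

apiVersion: batch/v1
kind: Job
metadata:
   name: pc-job-limits          #hypothetical name
   namespace: dev
spec:
   backoffLimit: 4              #give up after 4 failed retries (default is 6)
   activeDeadlineSeconds: 120   #fail the whole Job if it runs longer than 2 minutes
   template:
        spec:
          restartPolicy: Never
          containers:
          - name: busybox
            image: busybox:1.30
            command: ["/bin/sh","-c","exit 1"]   #always fails, to exercise backoffLimit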

CronJob controller (cj)

A CronJob runs Job tasks periodically, at the times given by a cron schedule.

Example:

[root@master ~]# cat cronjob.yaml 
apiVersion: batch/v1
kind: CronJob
metadata:
   name: pc-cronjob
   namespace: dev
   labels:
       controller: cronjob
spec:
    schedule: "*/1 * * * *"
    jobTemplate:
        metadata:
          name: pc-cronjob
          labels:
             controller: cronjob
        spec:
          template:
              spec:
                restartPolicy: Never
                containers:
                - name: counter
                  image: busybox:1.30
                  command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo $i;sleep 3;done"]

[root@master ~]# kubectl get job -n dev -w
NAME                  COMPLETIONS   DURATION   AGE
pc-cronjob-28604363   0/1           21s        21s
pc-job                6/6           60s        33m
pc-cronjob-28604363   0/1           28s        28s
pc-cronjob-28604363   0/1           29s        29s
pc-cronjob-28604363   1/1           29s        29s
pc-cronjob-28604364   0/1                      0s
pc-cronjob-28604364   0/1           0s         0s
pc-cronjob-28604364   0/1           1s         1s
pc-cronjob-28604364   0/1           29s        29s
pc-cronjob-28604364   0/1           30s        30s
pc-cronjob-28604364   1/1           30s        30s
^C[root@master ~]# 

[root@master ~]# kubectl get pod -n dev -w
NAME                     READY   STATUS      RESTARTS   AGE
daemon-g8b4v             1/1     Running     0          57m
daemon-t5tmd             1/1     Running     0          57m
nginx-7f89875f58-prf9c   1/1     Running     0          134m
pc-job-2p6p6             0/1     Completed   0          32m
pc-job-62z2d             0/1     Completed   0          32m
pc-job-6sm97             0/1     Completed   0          32m
pc-job-97j4j             0/1     Completed   0          31m
pc-job-lsjz5             0/1     Completed   0          31m
pc-job-pt28s             0/1     Completed   0          31m


[root@master ~]# kubectl get pod -n dev -w
pc-cronjob-28604363-fcnvr   0/1     Pending     0          0s
pc-cronjob-28604363-fcnvr   0/1     Pending     0          0s
pc-cronjob-28604363-fcnvr   0/1     ContainerCreating   0          0s
pc-cronjob-28604363-fcnvr   1/1     Running             0          0s
pc-cronjob-28604363-fcnvr   0/1     Completed           0          27s
pc-cronjob-28604363-fcnvr   0/1     Completed           0          29s
pc-cronjob-28604363-fcnvr   0/1     Completed           0          29s

#After each job finishes, a new one is started every minute, according to the schedule
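A few optional CronJob spec fields control overlapping runs and job history; a sketch with illustrative values and a hypothetical name (all field names are from the batch/v1 CronJob API):

apiVersion: batch/v1
kind: CronJob
metadata:
   name: pc-cronjob-tuned        #hypothetical name
   namespace: dev
spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: Forbid           #Allow (default) / Forbid / Replace overlapping runs
    startingDeadlineSeconds: 30         #skip a run if it cannot start within 30s of its scheduled time
    successfulJobsHistoryLimit: 3       #keep the last 3 successful jobs (default 3)
    failedJobsHistoryLimit: 1           #keep the last failed job (default 1)
    jobTemplate:
        spec:
          template:
              spec:
                restartPolicy: Never
                containers:
                - name: counter
                  image: busybox:1.30
                  command: ["/bin/sh","-c","date"]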

4: Service in detail

The traffic load-balancing components are Service and Ingress.

Service handles layer-4 load balancing; Ingress handles layer-7.

1. Service introduction

Each pod has an IP address, but it is not fixed. A Service therefore acts as a proxy for a group of pods: it has its own stable IP address, and the pods can be reached through it.

In essence, a Service is a label-selector mechanism.

The kube-proxy agent

The real work is done by kube-proxy. When a Service is created, the api-server stores the Service information in etcd; kube-proxy watches for these changes and converts the Service information into forwarding rules.

View the rules

kube-proxy supports three modes

userspace mode

kube-proxy opens a listening port for every Service. Requests sent to the Service IP are redirected by iptables rules to that kube-proxy port; kube-proxy then picks a backend pod according to its algorithm, establishes a connection, and forwards the request to the pod.

In this mode kube-proxy itself acts as the load balancer.

Drawback: relatively low efficiency, because every forwarded request has to cross between kernel space and user space.

iptables mode

When a request arrives it no longer passes through kube-proxy itself: the iptables rules for the ClusterIP forward it directly to a pod, round-robin/randomly.

Drawback: no real load-balancing algorithms and no retry on failure; if a backend pod is broken, the user simply gets an error page.

ipvs mode:

First enable the ipvs kernel module.

#Edit the kube-proxy ConfigMap and set mode to "ipvs"
[root@master /]# kubectl edit cm kube-proxy -n kube-system 
#Delete the existing kube-proxy pods (selected by label) so they restart with the new mode
[root@master /]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
[root@master /]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30203 rr   #rr = round-robin: requests to this address are forwarded to the backends listed below
  -> 10.244.2.103:80              Masq    1      0          0         
TCP  192.168.109.100:30203 rr
  -> 10.244.2.103:80              Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.109.100:6443         Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.44:53               Masq    1      0          0         
  -> 10.244.0.45:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.44:9153             Masq    1      0          0         
  -> 10.244.0.45:9153             Masq    1      0          0         
TCP  10.100.248.78:80 rr
  -> 10.244.2.103:80              Masq    1      0          0         
TCP  10.110.118.76:443 rr
  -> 10.244.1.108:10250           Masq    1      0          0         
  -> 10.244.2.102:10250           Masq    1      0          0         
TCP  10.244.0.0:30203 rr
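For reference, in a kubeadm cluster the edited part of the kube-proxy ConfigMap looks roughly like this (a sketch of the config.conf data key; all other fields omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"          #an empty string means the default mode (iptables)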

2: Service types

The label selector is only the surface; underneath it is all forwarding rules. The labels determine the IPs of the pods behind the Service.

Session affinity: if it is not configured, requests are distributed across all the pods in turn; in special cases where multiple requests should go to the same pod, session affinity is needed.

type: the Service type

  ClusterIP: the default. Kubernetes assigns a virtual IP automatically, reachable only from inside the cluster.

  NodePort: exposes the Service on a specified port of each node, so the service can be reached from outside the cluster through that node port.

  LoadBalancer: uses an external load balancer to distribute traffic to the service; note that this mode requires an external cloud environment.

  ExternalName: brings a service from outside the cluster into the cluster so it can be used directly.

1. Environment preparation

Three pods, created by a Deployment controller.

[root@master ~]# cat service-example.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: pc-deployment
   namespace: dev
spec:
   replicas: 3
   selector:
       matchLabels:
          app: nginx-pod
   template:
       metadata:
         labels:
           app: nginx-pod
       spec:
          containers:
          - name: nginx
            image: nginx:1.17.1
            ports:
            - containerPort: 80
[root@master ~]# kubectl get pod -n dev -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pc-deployment-5cb65f68db-959hm   1/1     Running   0          62s   10.244.2.104   node1   <none>           <none>
pc-deployment-5cb65f68db-h6v8r   1/1     Running   0          62s   10.244.1.110   node2   <none>           <none>
pc-deployment-5cb65f68db-z4k2f   1/1     Running   0          62s   10.244.2.105   node1   <none>           <none>
#Access the pod IP and the container port
[root@master ~]# curl 10.244.2.104:80

To see which pod a request lands on, overwrite the default page in each pod in turn:
[root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-h6v8r  /bin/bash
root@pc-deployment-5cb65f68db-z4k2f:/# echo 10.244.2.10 > /usr/share/nginx/html/index.html  

2. ClusterIP Service

The Service port itself can be any value you like.

[root@master ~]# cat ClusterIP.yaml 
apiVersion: v1
kind: Service
metadata:
   name: service-clusterip
   namespace: dev
spec:
   selector:   #the Service's label selector
     app: nginx-pod
   clusterIP: 10.96.0.100   #if omitted, an IP address is generated automatically
   type: ClusterIP
   ports:
   - port: 80  #Service port
     targetPort: 80  #pod port

[root@master ~]# kubectl create -f ClusterIP.yaml 
service/service-clusterip created

[root@master ~]# kubectl get svc -n dev
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service-clusterip   ClusterIP   10.96.0.100   <none>        80/TCP    2m7s
#View the detailed information of the service
[root@master ~]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.100
IPs:               10.96.0.100
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.110:80,10.244.2.104:80,10.244.2.105:80   #the pod-to-service association, built from the label selector; it lists the pods' access addresses, i.e. the actual set of serving endpoints
Session Affinity:  None
Events:            <none>
[root@master ~]# kubectl get pod -n dev -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pc-deployment-5cb65f68db-959hm   1/1     Running   0          25m   10.244.2.104   node1   <none>           <none>
pc-deployment-5cb65f68db-h6v8r   1/1     Running   0          25m   10.244.1.110   node2   <none>           <none>
pc-deployment-5cb65f68db-z4k2f   1/1     Running   0          25m   10.244.2.105   node1   <none>           <none>

[root@master ~]# kubectl get endpoints -n dev
NAME                ENDPOINTS                                         AGE
service-clusterip   10.244.1.110:80,10.244.2.104:80,10.244.2.105:80   4m48s

What really does the work is kube-proxy: when the Service is created, the corresponding forwarding rules are created as well.
[root@master ~]# ipvsadm -Ln
TCP  10.96.0.100:80 rr
  -> 10.244.1.110:80              Masq    1      0          0         
  -> 10.244.2.104:80              Masq    1      0          0         
  -> 10.244.2.105:80              Masq    1      0          0     

#Send requests in a loop to see which pod answers; the responses show round-robin behaviour
[root@master ~]# while true;do curl 10.96.0.100:80; sleep 5;done;
10.244.2.105
10.244.2.104
 10.244.1.110
10.244.2.105
10.244.2.104
 10.244.1.110 

This is access via the Service IP and the Service port.

Load-distribution policy (session affinity):

By default, requests are distributed round-robin or randomly.

If session affinity is configured, requests from the same client go to the same pod instead of being distributed round-robin or randomly.

#設定session親和性
[root@master ~]# cat ClusterIP.yaml 
apiVersion: v1
kind: Service
metadata:
   name: service-clusterip
   namespace: dev
spec:
   sessionAffinity: ClientIP   #requests from the same client IP are routed to the same pod
   selector:
     app: nginx-pod
   clusterIP: 10.96.0.100
   type: ClusterIP
   ports:
   - port: 80
     targetPort: 80

[root@master ~]# kubectl get svc -n dev
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service-clusterip   ClusterIP   10.96.0.100   <none>        80/TCP    78s
[root@master ~]# ipvsadm -Ln
TCP  10.96.0.100:80 rr persistent 10800   #persistent = session affinity, with a 10800-second timeout
  -> 10.244.1.112:80              Masq    1      0          0         
  -> 10.244.2.107:80              Masq    1      0          0         
  -> 10.244.2.108:80              Masq    1      0          0         

A Service of this type can only be accessed from within the cluster (from its nodes); your own machine cannot reach this IP.
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
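The default stickiness period is 10800 seconds (visible in the ipvsadm output above); it can be tuned with sessionAffinityConfig. A sketch of the same Service with only the timeout added (the 600-second value is illustrative):

apiVersion: v1
kind: Service
metadata:
   name: service-clusterip
   namespace: dev
spec:
   sessionAffinity: ClientIP
   sessionAffinityConfig:
     clientIP:
       timeoutSeconds: 600   #pin a client to the same pod for 10 minutes instead of the default 10800s
   selector:
     app: nginx-pod
   clusterIP: 10.96.0.100
   type: ClusterIP
   ports:
   - port: 80
     targetPort: 80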

3. Headless Service

A ClusterIP Service uses a round-robin/random distribution policy by default. If you want to control that policy yourself (or do your own service discovery), use a headless Service: no ClusterIP is allocated, and the Service can only be reached through its DNS name.

[root@master ~]# cat headliness.yaml 
apiVersion: v1
kind: Service
metadata:
   name: service-headliness
   namespace: dev
spec:
   selector:
     app: nginx-pod
   clusterIP: None   #set to None to create a headless service
   type: ClusterIP
   ports:
   - port: 80
     targetPort: 80
[root@master ~]# kubectl get svc -n dev
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service-headliness   ClusterIP   None         <none>        80/TCP    4s

#Check the DNS configuration inside a pod
[root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-959hm /bin/bash
root@pc-deployment-5cb65f68db-959hm:/# cat /etc/resolv.conf 
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5


#Query the headless service
#Format: dig @<dns-server> <service-name>.<namespace>.svc.cluster.local; the pod A records appear in the ANSWER SECTION
[root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local 
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.2.108
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.1.112
service-headliness.dev.svc.cluster.local. 30 IN	A 10.244.2.107
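The same name also resolves from inside any pod, because /etc/resolv.conf already points at the cluster DNS. A quick check using a throwaway pod (the pod name dns-test is arbitrary; busybox 1.30 ships nslookup):

[root@master ~]# kubectl run dns-test -n dev --rm -ti --image=busybox:1.30 --restart=Never -- nslookup service-headliness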

4. NodePort Service

This maps the Service's port onto the nodes, so the Service can be reached via node IP + node port.

When a request arrives at the node port, it is forwarded to the Service port and from there to the pod's port.

This exposes the Service to the outside world.

Test:

[root@master ~]# cat nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
   name: service-clusterip
   namespace: dev
spec:
   selector:
     app: nginx-pod
   type: NodePort   #NodePort-type service
   ports:
   - port: 80    #Service port
     targetPort: 80   #pod port
     nodePort: 30002   #if omitted, a port is allocated from the default 30000-32767 range
[root@master ~]# kubectl create -f nodeport.yaml 
service/service-clusterip created
[root@master ~]# kubectl get svc -n dev
NAME                TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service-clusterip   NodePort   10.106.183.217   <none>        80:30002/TCP   4s

#Accessing any node IP + node port maps through to the ClusterIP + port
[root@master ~]# curl 192.168.109.100:30002
10.244.2.108
[root@master ~]# curl 192.168.109.101:30002
10.244.2.108
[root@master ~]# curl 192.168.109.102:30002
10.244.2.108

This gives access to the Service, and through it to the pods inside.

5. LoadBalancer Service

Builds on NodePort by putting an external load-balancing device in front of the nodes; that device (usually provided by a cloud environment) decides which node each request is forwarded to.
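The manifest differs from NodePort only in the type field; a minimal sketch with a hypothetical name, only meaningful where a cloud provider can provision the load balancer and fill in EXTERNAL-IP:

apiVersion: v1
kind: Service
metadata:
   name: service-loadbalancer   #hypothetical name
   namespace: dev
spec:
   selector:
     app: nginx-pod
   type: LoadBalancer
   ports:
   - port: 80
     targetPort: 80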

6. ExternalName Service

This imports an external service, here www.baidu.com, into the cluster so it can be addressed by a cluster-internal name.

[root@master ~]# cat service-external.yaml 
apiVersion: v1
kind: Service
metadata:
   name: service-externalname
   namespace: dev
spec:
   type: ExternalName
   externalName: www.baidu.com
[root@master ~]# kubectl create -f service-external.yaml 
service/service-externalname created
[root@master ~]# kubectl get svc -n dev
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service-clusterip      NodePort       10.106.183.217   <none>          80:30002/TCP   17m
service-externalname   ExternalName   <none>           www.baidu.com   <none>         7s

#Resolve the service
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME	www.baidu.com.
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	180.101.50.188
www.a.shifen.com.	30	IN	A	180.101.50.242

#The name resolves through to the external service as expected

3: Ingress introduction

A Service exposes applications externally mainly through two types: NodePort and LoadBalancer.

Drawbacks:

  NodePort exposes host ports, so when the cluster runs many services, more and more ports are used up.

  With LoadBalancer, every Service needs its own LB, which is wasteful.

With Ingress, the user defines rules mapping requests to Services; the ingress controller picks up these rules, converts them into an nginx configuration, and dynamically updates the nginx proxy. The whole process is dynamic.

1. Environment preparation

#Download and apply the ingress-nginx manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml

[root@master ingress-example]# kubectl get pod,svc -n ingress-nginx 
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-jv5n5       0/1     Completed   0          77s
pod/ingress-nginx-admission-patch-tpfv6        0/1     Completed   0          77s
pod/ingress-nginx-controller-597dc6d68-rww45   1/1     Running     0          77s

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.97.10.122   <none>        80:30395/TCP,443:32541/TCP   78s
service/ingress-nginx-controller-admission   ClusterIP   10.96.17.67    <none>        443/TCP

  

Service and Deployment manifests: create 2 Services and 6 pods.

[root@master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
   name: nginx-deployment
   namespace: dev
spec:
   replicas: 3
   selector:
      matchLabels:
       app: nginx-pod
   template:
      metadata:
         labels:
           app: nginx-pod
      spec:
        containers:
        - name: nginx
          image: nginx:1.17.1
          ports:
          - containerPort: 80
---

apiVersion: apps/v1
kind: Deployment
metadata:
   name: tomcat-deployment
   namespace: dev
spec:
   replicas: 3
   selector:
      matchLabels:
       app: tomcat-pod
   template:
      metadata:
         labels:
           app: tomcat-pod
      spec:
        containers:
        - name: tomcat
          image: tomcat:8.5-jre10-slim
          ports:
          - containerPort: 8080
---

apiVersion: v1
kind: Service
metadata:
   name: nginx-service
   namespace: dev
spec:
   selector:
     app: nginx-pod
   clusterIP: None
   type: ClusterIP
   ports:
   - port: 80
     targetPort: 80
---


apiVersion: v1
kind: Service
metadata:
   name: tomcat-service
   namespace: dev
spec:
   selector:
     app: tomcat-pod
   type: ClusterIP
   clusterIP: None
   ports:
   - port: 8080
     targetPort: 8080

[root@master ~]# kubectl get deployments.apps,pod -n dev
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment    3/3     3            3           86s
deployment.apps/tomcat-deployment   3/3     3            3           86s

NAME                                     READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-5cb65f68db-5lzpb    1/1     Running   0          86s
pod/nginx-deployment-5cb65f68db-75h4m    1/1     Running   0          86s
pod/nginx-deployment-5cb65f68db-nc8pj    1/1     Running   0          86s
pod/tomcat-deployment-5dbff496f4-6msb2   1/1     Running   0          86s
pod/tomcat-deployment-5dbff496f4-7wjc9   1/1     Running   0          86s
pod/tomcat-deployment-5dbff496f4-wlgmm   1/1     Running   0          86s

2. HTTP proxy

Create a yaml file containing the Ingress rules below.

Access is by host name + path; if the path is /xxx, you request host/xxx.

When accessed, the request is forwarded to the corresponding Service and port.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
  annotations:
     nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx  #this Ingress is handled by the nginx ingress controller
  rules:  #define a set of rules
  - host: nginx.com
    http:
      paths:
      - pathType: Prefix   #path matching is prefix-based
        path: /  #matches all paths beginning with /
        backend:  #the backend service requests are forwarded to
          service:
            name: nginx-service
            port:
              number: 80   #the port the backend service listens on
  - host: tomcat.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
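To test this from the master node, one approach (a sketch: the hosts-file entries are lab-only assumptions, and 30395 is the HTTP NodePort of ingress-nginx-controller shown earlier) is:

#Point the test host names at a node running the ingress controller
[root@master ~]# echo "192.168.109.100 nginx.com tomcat.com" >> /etc/hosts

#Access each virtual host through the ingress controller's HTTP NodePort
[root@master ~]# curl http://nginx.com:30395
[root@master ~]# curl http://tomcat.com:30395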

3. HTTPS proxy

The certificate and key must be generated in advance.
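A sketch of that preparation, using a self-signed certificate and a hypothetical secret name tls-secret:

#Generate a self-signed certificate and key
[root@master ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginx.com"
#Store them in a TLS secret
[root@master ~]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n dev

The Ingress then references the secret in its tls section; a sketch reusing the hosts and services from the HTTP example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.com
    - tomcat.com
    secretName: tls-secret   #the TLS secret created above
  rules:
  - host: nginx.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080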

 

  
