What is a resource object?
A resource object is an instance of a resource created on k8s, i.e. the result of instantiating one of the resource APIs exposed by the apiserver (think of them as resource templates) by passing parameters to that API, either with a YAML file or on the command line. For example, to create a pod on k8s we interact with the apiserver and pass it the parameters for the pod; the apiserver uses those parameters to instantiate the pod's definition and stores it in etcd, the scheduler then schedules it, and the kubelet on the chosen node actually creates the pod. In short, a resource object is the result of instantiating one of the API interfaces that k8s provides.
The k8s logical runtime environment
Note: the k8s runtime environment is shown above. k8s logically aggregates the underlying resources provided by multiple nodes (memory, CPU, storage, network, and so on) into one large resource pool that k8s schedules and orchestrates as a whole. Users only need to create resources on k8s; once created, the resources are scheduled by k8s itself, so users do not need to care which node a resource ends up running on, nor about the resource situation of individual nodes.
k8s design philosophy — layered architecture
k8s design philosophy — API design principles
1. All APIs should be declarative.
2. API objects should be complementary and composable, i.e. "high cohesion, loose coupling".
3. High-level APIs should be designed around operational intent.
4. Low-level APIs should be designed according to the control needs of the high-level APIs.
5. Avoid thin wrappers: there should be no hidden internal mechanisms that cannot be observed through the external API.
6. The complexity of API operations should be proportional to the number of objects.
7. The state of an API object must not depend on the state of network connections.
8. Avoid making operational mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.
Introduction to the Kubernetes API
Note: k8s APIs are divided into built-in APIs and custom APIs. Built-in APIs are the API interfaces that come with the cluster once k8s is deployed; custom APIs, also known as custom resources (CRD, Custom Resource Definition), are APIs extended after the cluster is deployed, for example by installing additional components.
How the apiserver organizes resources
Note: the apiserver organizes the different resources logically by category, group, and version, as shown in the figure above.
Overview of the k8s built-in resource objects
Commands for operating on k8s resource objects
Required fields in a resource manifest
1. apiVersion - the version of the Kubernetes API used to create the object.
2. kind - the type of object to create.
3. metadata - data that uniquely identifies the object, including a name and an optional namespace; if no namespace is given, the default namespace is used.
4. spec - the detailed specification of the resource object (common labels, container name, image, port mappings, and so on), i.e. the state the user expects the resource to be in.
5. status - generated automatically by k8s once the object (for example a Pod) has been created; this field is maintained by k8s itself, the user does not define it, and it reflects the actual state of the resource.
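To see how these fields fit together, here is a minimal, hypothetical manifest skeleton (the names and image are placeholders, not taken from the environment used below):

apiVersion: v1            # API version used to create the object
kind: Pod                 # object type
metadata:                 # identifying data
  name: example-pod
  namespace: default
spec:                     # desired state defined by the user
  containers:
  - name: app
    image: nginx          # placeholder image
# status is generated and maintained by k8s after the object is created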
The Pod resource object
Note: the pod is the smallest unit of control in k8s. A pod can run one or more containers, and the containers in a pod are scheduled together, i.e. the pod is the smallest schedulable unit. A pod's lifecycle is short: pods do not heal themselves and are disposable entities. We normally create and manage pods through a Controller; pods created by a controller recover automatically, i.e. if a pod no longer matches the user's desired state, the controller restarts or recreates it so that the state and number of pods always match what the user defined.
Example: manifest for a standalone (unmanaged) pod
apiVersion: v1
kind: Pod
metadata:
  name: "pod-demo"
  namespace: default
  labels:
    app: "pod-demo"
spec:
  containers:
  - name: pod-demo
    image: "harbor.ik8s.cc/baseimages/nginx:v1"
    ports:
    - containerPort: 80
      name: http
    volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
  volumes:
  - name: localtime
    hostPath:
      path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE net-test1 1/1 Running 2 (4m35s ago) 7d7h test 1/1 Running 4 (4m34s ago) 13d test1 1/1 Running 4 (4m35s ago) 13d test2 1/1 Running 4 (4m35s ago) 13d root@k8s-deploy:/yaml# kubectl apply -f pod-demo.yaml pod/pod-demo created root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE net-test1 1/1 Running 2 (4m47s ago) 7d7h pod-demo 0/1 ContainerCreating 0 4s test 1/1 Running 4 (4m46s ago) 13d test1 1/1 Running 4 (4m47s ago) 13d test2 1/1 Running 4 (4m47s ago) 13d root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE net-test1 1/1 Running 2 (4m57s ago) 7d7h pod-demo 1/1 Running 0 14s test 1/1 Running 4 (4m56s ago) 13d test1 1/1 Running 4 (4m57s ago) 13d test2 1/1 Running 4 (4m57s ago) 13d root@k8s-deploy:/yaml#
Note: this pod merely runs on k8s; no controller watches it, so if it is deleted or fails it will not be recovered automatically.
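To confirm this, you could delete the pod and watch that nothing recreates it (a sketch, assuming the pod-demo created above):

kubectl delete pod pod-demo    # delete the standalone pod
kubectl get pods               # pod-demo is gone and is never recreated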
The Job controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html
Example manifest for a job controller
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
  namespace: default
  labels:
    app: job-demo
spec:
  template:
    metadata:
      name: job-demo
      labels:
        app: job-demo
    spec:
      containers:
      - name: job-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: a Job resource must define restartPolicy (for a Job it may only be Never or OnFailure).
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE net-test1 1/1 Running 3 (48m ago) 7d10h pod-demo 1/1 Running 1 (48m ago) 3h32m test 1/1 Running 5 (48m ago) 14d test1 1/1 Running 5 (48m ago) 14d test2 1/1 Running 5 (48m ago) 14d root@k8s-deploy:/yaml# kubectl apply -f job-demo.yaml job.batch/job-demo created root@k8s-deploy:/yaml# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES job-demo-z8gmb 0/1 Completed 0 26s 10.200.211.130 192.168.0.34 <none> <none> net-test1 1/1 Running 3 (49m ago) 7d10h 10.200.211.191 192.168.0.34 <none> <none> pod-demo 1/1 Running 1 (49m ago) 3h32m 10.200.155.138 192.168.0.36 <none> <none> test 1/1 Running 5 (49m ago) 14d 10.200.209.6 192.168.0.35 <none> <none> test1 1/1 Running 5 (49m ago) 14d 10.200.209.8 192.168.0.35 <none> <none> test2 1/1 Running 5 (49m ago) 14d 10.200.211.177 192.168.0.34 <none> <none> root@k8s-deploy:/yaml#
Verify: does the /tmp/jobdata directory on 192.168.0.34 contain the data produced by the job?
root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
root@k8s-deploy:/yaml#
Note: the /tmp/jobdata/ directory on the host where the job ran now contains the data written by the job, which shows that the job we defined completed successfully.
Defining a job with multiple completions
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo
  namespace: default
  labels:
    app: job-multi-demo
spec:
  completions: 5
  template:
    metadata:
      name: job-multi-demo
      labels:
        app: job-multi-demo
    spec:
      containers:
      - name: job-multi-demo-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: under spec, the completions field specifies how many pods (task runs) the job needs in total.
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE job-demo-z8gmb 0/1 Completed 0 24m net-test1 1/1 Running 3 (73m ago) 7d11h pod-demo 1/1 Running 1 (73m ago) 3h56m test 1/1 Running 5 (73m ago) 14d test1 1/1 Running 5 (73m ago) 14d test2 1/1 Running 5 (73m ago) 14d root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo.yaml job.batch/job-multi-demo created root@k8s-deploy:/yaml# kubectl get job NAME COMPLETIONS DURATION AGE job-demo 1/1 5s 24m job-multi-demo 1/5 10s 10s root@k8s-deploy:/yaml# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES job-demo-z8gmb 0/1 Completed 0 24m 10.200.211.130 192.168.0.34 <none> <none> job-multi-demo-5vp9w 0/1 Completed 0 12s 10.200.211.144 192.168.0.34 <none> <none> job-multi-demo-frstg 0/1 Completed 0 22s 10.200.211.186 192.168.0.34 <none> <none> job-multi-demo-gd44s 0/1 Completed 0 17s 10.200.211.184 192.168.0.34 <none> <none> job-multi-demo-kfm79 0/1 ContainerCreating 0 2s <none> 192.168.0.34 <none> <none> job-multi-demo-nsmpg 0/1 Completed 0 7s 10.200.211.135 192.168.0.34 <none> <none> net-test1 1/1 Running 3 (73m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none> pod-demo 1/1 Running 1 (73m ago) 3h56m 10.200.155.138 192.168.0.36 <none> <none> test 1/1 Running 5 (73m ago) 14d 10.200.209.6 192.168.0.35 <none> <none> test1 1/1 Running 5 (73m ago) 14d 10.200.209.8 192.168.0.35 <none> <none> test2 1/1 Running 5 (73m ago) 14d 10.200.211.177 192.168.0.34 <none> <none> root@k8s-deploy:/yaml# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES job-demo-z8gmb 0/1 Completed 0 24m 10.200.211.130 192.168.0.34 <none> <none> job-multi-demo-5vp9w 0/1 Completed 0 33s 10.200.211.144 192.168.0.34 <none> <none> job-multi-demo-frstg 0/1 Completed 0 43s 10.200.211.186 192.168.0.34 <none> <none> job-multi-demo-gd44s 0/1 Completed 0 38s 10.200.211.184 192.168.0.34 <none> <none> job-multi-demo-kfm79 0/1 Completed 0 23s 10.200.211.140 192.168.0.34 <none> <none> job-multi-demo-nsmpg 0/1 Completed 0 28s 10.200.211.135 192.168.0.34 <none> <none> net-test1 1/1 Running 3 (73m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none> pod-demo 1/1 Running 1 (73m ago) 3h57m 10.200.155.138 192.168.0.36 <none> <none> test 1/1 Running 5 (73m ago) 14d 10.200.209.6 192.168.0.35 <none> <none> test1 1/1 Running 5 (73m ago) 14d 10.200.209.8 192.168.0.35 <none> <none> test2 1/1 Running 5 (73m ago) 14d 10.200.211.177 192.168.0.34 <none> <none> root@k8s-deploy:/yaml#
Verify: does the /tmp/jobdata/ directory on 192.168.0.34 contain the job data?
root@k8s-deploy:/yaml# ssh 192.168.0.34 "ls /tmp/jobdata"
data.log
root@k8s-deploy:/yaml# ssh 192.168.0.34 "cat /tmp/jobdata/data.log"
data init job at 2023-05-06_23-31-32
data init job at 2023-05-06_23-55-44
data init job at 2023-05-06_23-55-49
data init job at 2023-05-06_23-55-54
data init job at 2023-05-06_23-55-59
data init job at 2023-05-06_23-56-04
root@k8s-deploy:/yaml#
Defining the degree of parallelism
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo2
  namespace: default
  labels:
    app: job-multi-demo2
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      name: job-multi-demo2
      labels:
        app: job-multi-demo2
    spec:
      containers:
      - name: job-multi-demo2-container
        image: harbor.ik8s.cc/baseimages/centos7:2023
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      restartPolicy: Never
Note: under spec, the parallelism field specifies the degree of parallelism, i.e. how many pods run at the same time. The manifest above says that 2 pods run at a time and 6 pods are needed in total.
Apply the manifest
root@k8s-deploy:/yaml# kubectl get jobs NAME COMPLETIONS DURATION AGE job-demo 1/1 5s 34m job-multi-demo 5/5 25s 9m56s root@k8s-deploy:/yaml# kubectl apply -f job-multi-demo2.yaml job.batch/job-multi-demo2 created root@k8s-deploy:/yaml# kubectl get jobs NAME COMPLETIONS DURATION AGE job-demo 1/1 5s 34m job-multi-demo 5/5 25s 10m job-multi-demo2 0/6 2s 3s root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE job-demo-z8gmb 0/1 Completed 0 34m job-multi-demo-5vp9w 0/1 Completed 0 10m job-multi-demo-frstg 0/1 Completed 0 10m job-multi-demo-gd44s 0/1 Completed 0 10m job-multi-demo-kfm79 0/1 Completed 0 9m59s job-multi-demo-nsmpg 0/1 Completed 0 10m job-multi-demo2-7ppxc 0/1 Completed 0 10s job-multi-demo2-mxbtq 0/1 Completed 0 5s job-multi-demo2-rhgh7 0/1 Completed 0 4s job-multi-demo2-th6ff 0/1 Completed 0 11s net-test1 1/1 Running 3 (83m ago) 7d11h pod-demo 1/1 Running 1 (83m ago) 4h6m test 1/1 Running 5 (83m ago) 14d test1 1/1 Running 5 (83m ago) 14d test2 1/1 Running 5 (83m ago) 14d root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE job-demo-z8gmb 0/1 Completed 0 34m job-multi-demo-5vp9w 0/1 Completed 0 10m job-multi-demo-frstg 0/1 Completed 0 10m job-multi-demo-gd44s 0/1 Completed 0 10m job-multi-demo-kfm79 0/1 Completed 0 10m job-multi-demo-nsmpg 0/1 Completed 0 10m job-multi-demo2-7ppxc 0/1 Completed 0 16s job-multi-demo2-8bh22 0/1 Completed 0 6s job-multi-demo2-dbjqw 0/1 Completed 0 6s job-multi-demo2-mxbtq 0/1 Completed 0 11s job-multi-demo2-rhgh7 0/1 Completed 0 10s job-multi-demo2-th6ff 0/1 Completed 0 17s net-test1 1/1 Running 3 (83m ago) 7d11h pod-demo 1/1 Running 1 (83m ago) 4h6m test 1/1 Running 5 (83m ago) 14d test1 1/1 Running 5 (83m ago) 14d test2 1/1 Running 5 (83m ago) 14d root@k8s-deploy:/yaml# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES job-demo-z8gmb 0/1 Completed 0 35m 10.200.211.130 192.168.0.34 <none> <none> job-multi-demo-5vp9w 0/1 Completed 0 10m 10.200.211.144 192.168.0.34 <none> <none> job-multi-demo-frstg 0/1 Completed 0 11m 10.200.211.186 192.168.0.34 <none> <none> job-multi-demo-gd44s 0/1 Completed 0 11m 10.200.211.184 192.168.0.34 <none> <none> job-multi-demo-kfm79 0/1 Completed 0 10m 10.200.211.140 192.168.0.34 <none> <none> job-multi-demo-nsmpg 0/1 Completed 0 10m 10.200.211.135 192.168.0.34 <none> <none> job-multi-demo2-7ppxc 0/1 Completed 0 57s 10.200.211.145 192.168.0.34 <none> <none> job-multi-demo2-8bh22 0/1 Completed 0 47s 10.200.211.148 192.168.0.34 <none> <none> job-multi-demo2-dbjqw 0/1 Completed 0 47s 10.200.211.141 192.168.0.34 <none> <none> job-multi-demo2-mxbtq 0/1 Completed 0 52s 10.200.211.152 192.168.0.34 <none> <none> job-multi-demo2-rhgh7 0/1 Completed 0 51s 10.200.211.143 192.168.0.34 <none> <none> job-multi-demo2-th6ff 0/1 Completed 0 58s 10.200.211.136 192.168.0.34 <none> <none> net-test1 1/1 Running 3 (84m ago) 7d11h 10.200.211.191 192.168.0.34 <none> <none> pod-demo 1/1 Running 1 (84m ago) 4h7m 10.200.155.138 192.168.0.36 <none> <none> test 1/1 Running 5 (84m ago) 14d 10.200.209.6 192.168.0.35 <none> <none> test1 1/1 Running 5 (84m ago) 14d 10.200.209.8 192.168.0.35 <none> <none> test2 1/1 Running 5 (84m ago) 14d 10.200.211.177 192.168.0.34 <none> <none> root@k8s-deploy:/yaml#
Verify the job data
Note: the timestamps appended later come in pairs, which shows that two pods were executing the job's task at the same time.
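The capture of that check is not reproduced here; it can be repeated with the same commands used earlier (a sketch, assuming the same node and log path):

ssh 192.168.0.34 "tail /tmp/jobdata/data.log"    # the entries written by job-multi-demo2 appear in pairs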
The CronJob controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14157306.html
Example: defining a cronjob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-cronjob
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      parallelism: 2
      template:
        spec:
          containers:
          - name: job-cronjob-container
            image: harbor.ik8s.cc/baseimages/centos7:2023
            command: ["/bin/sh"]
            args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/cronjob-data.log"]
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            - name: localtime
              mountPath: /etc/localtime
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/jobdata
          - name: localtime
            hostPath:
              path: /usr/share/zoneinfo/Asia/Shanghai
          restartPolicy: OnFailure
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f cronjob-demo.yaml cronjob.batch/job-cronjob created root@k8s-deploy:/yaml# kubectl get cronjob NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE job-cronjob */1 * * * * False 0 <none> 6s root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE job-cronjob-28056516-njddz 0/1 Completed 0 12s job-cronjob-28056516-wgbns 0/1 Completed 0 12s job-demo-z8gmb 0/1 Completed 0 64m job-multi-demo-5vp9w 0/1 Completed 0 40m job-multi-demo-frstg 0/1 Completed 0 40m job-multi-demo-gd44s 0/1 Completed 0 40m job-multi-demo-kfm79 0/1 Completed 0 40m job-multi-demo-nsmpg 0/1 Completed 0 40m job-multi-demo2-7ppxc 0/1 Completed 0 30m job-multi-demo2-8bh22 0/1 Completed 0 30m job-multi-demo2-dbjqw 0/1 Completed 0 30m job-multi-demo2-mxbtq 0/1 Completed 0 30m job-multi-demo2-rhgh7 0/1 Completed 0 30m job-multi-demo2-th6ff 0/1 Completed 0 30m net-test1 1/1 Running 3 (113m ago) 7d11h pod-demo 1/1 Running 1 (113m ago) 4h36m test 1/1 Running 5 (113m ago) 14d test1 1/1 Running 5 (113m ago) 14d test2 1/1 Running 5 (113m ago) 14d root@k8s-deploy:/yaml# kubectl get cronjob NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE job-cronjob */1 * * * * False 0 12s 108s root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE job-cronjob-28056516-njddz 0/1 Completed 0 77s job-cronjob-28056516-wgbns 0/1 Completed 0 77s job-cronjob-28056517-d6n9h 0/1 Completed 0 17s job-cronjob-28056517-krsvb 0/1 Completed 0 17s job-demo-z8gmb 0/1 Completed 0 65m job-multi-demo-5vp9w 0/1 Completed 0 41m job-multi-demo-frstg 0/1 Completed 0 41m job-multi-demo-gd44s 0/1 Completed 0 41m job-multi-demo-kfm79 0/1 Completed 0 41m job-multi-demo-nsmpg 0/1 Completed 0 41m job-multi-demo2-7ppxc 0/1 Completed 0 31m job-multi-demo2-8bh22 0/1 Completed 0 31m job-multi-demo2-dbjqw 0/1 Completed 0 31m job-multi-demo2-mxbtq 0/1 Completed 0 31m job-multi-demo2-rhgh7 0/1 Completed 0 31m job-multi-demo2-th6ff 0/1 Completed 0 31m net-test1 1/1 Running 3 (114m ago) 7d11h pod-demo 1/1 Running 1 (114m ago) 4h38m test 1/1 Running 5 (114m ago) 14d test1 1/1 Running 5 (114m ago) 14d test2 1/1 Running 5 (114m ago) 14d root@k8s-deploy:/yaml#
Note: a cronjob keeps the 3 most recent runs in its history by default.
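The retention can be tuned with the successfulJobsHistoryLimit and failedJobsHistoryLimit fields; a hedged fragment to merge into the CronJob spec above (the values shown are the documented defaults):

spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # completed jobs to keep (default 3)
  failedJobsHistoryLimit: 1       # failed jobs to keep (default 1)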
Verify: check the data written by the periodically executed task
Note: from the timestamps you can see that two pods run the task once every minute.
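That capture is not shown here either; it can be checked the same way (a sketch, assuming the node and the cronjob-data.log path defined above):

ssh 192.168.0.34 "tail /tmp/jobdata/cronjob-data.log"   # two new entries should appear every minute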
RC/RS replica controllers
RC (Replication Controller): a replica controller whose job is to keep the number of pod replicas equal to the number the user expects at all times. It is the first generation of pod replica controller and only supports equality-based selectors (selector =, !=).
Example rc controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: ng-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
  template:
    metadata:
      labels:
        app: ng-rc-80
    spec:
      containers:
      - name: pod-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
root@k8s-deploy:/yaml# kubectl get pods
NAME    READY   STATUS    RESTARTS      AGE
test    1/1     Running   6 (11m ago)   16d
test1   1/1     Running   6 (11m ago)   16d
test2   1/1     Running   6 (11m ago)   16d
root@k8s-deploy:/yaml# kubectl apply -f rc-demo.yaml
replicationcontroller/ng-rc created
root@k8s-deploy:/yaml# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
ng-rc-l7xmp   1/1     Running   0             10s   10.200.211.136   192.168.0.34   <none>           <none>
ng-rc-wl5d6   1/1     Running   0             9s    10.200.155.185   192.168.0.36   <none>           <none>
test          1/1     Running   6 (11m ago)   16d   10.200.209.24    192.168.0.35   <none>           <none>
test1         1/1     Running   6 (11m ago)   16d   10.200.209.31    192.168.0.35   <none>           <none>
test2         1/1     Running   6 (11m ago)   16d   10.200.211.186   192.168.0.34   <none>           <none>
root@k8s-deploy:/yaml# kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
ng-rc   2         2         2       25s
root@k8s-deploy:/yaml#
Verify: change a pod's label and see whether a new pod is created.
root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 2m32s app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 2m31s app=ng-rc-80 test 1/1 Running 6 (13m ago) 16d run=test test1 1/1 Running 6 (13m ago) 16d run=test1 test2 1/1 Running 6 (13m ago) 16d run=test2 root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=nginx-demo --overwrite pod/ng-rc-l7xmp labeled root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 4m42s app=nginx-demo ng-rc-rxvd4 0/1 ContainerCreating 0 3s app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 4m41s app=ng-rc-80 test 1/1 Running 6 (15m ago) 16d run=test test1 1/1 Running 6 (15m ago) 16d run=test1 test2 1/1 Running 6 (15m ago) 16d run=test2 root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 4m52s app=nginx-demo ng-rc-rxvd4 1/1 Running 0 13s app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 4m51s app=ng-rc-80 test 1/1 Running 6 (16m ago) 16d run=test test1 1/1 Running 6 (16m ago) 16d run=test1 test2 1/1 Running 6 (16m ago) 16d run=test2 root@k8s-deploy:/yaml# kubectl label pod/ng-rc-l7xmp app=ng-rc-80 --overwrite pod/ng-rc-l7xmp labeled root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 5m27s app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 5m26s app=ng-rc-80 test 1/1 Running 6 (16m ago) 16d run=test test1 1/1 Running 6 (16m ago) 16d run=test1 test2 1/1 Running 6 (16m ago) 16d run=test2 root@k8s-deploy:/yaml#
Note: the rc controller uses its label selector to decide whether a pod belongs to it. If it finds that a pod's labels have changed, the rc controller creates or deletes pods so that the number of pods always stays equal to the number the user defined.
RS (ReplicaSet): a replica controller similar to rc; it also matches the pods it manages through a label selector, and if the labels change or the number of matching pods is lower or higher than the user expects, it creates or deletes pods so that the number always matches the user's expectation. The only difference from rc is that, besides exact matching with selector = and !=, rs also supports set-based matching with in and notin; it is the second generation of pod replica controller in k8s.
Example rs controller
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-demo
  labels:
    app: rs-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs-demo
  template:
    metadata:
      labels:
        app: rs-demo
    spec:
      containers:
      - name: rs-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - name: web
          containerPort: 80
          protocol: TCP
        env:
        - name: NGX_VERSION
          value: 1.16.1
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
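The capture of this step is not included; applying it looks like the earlier examples (a sketch, assuming the manifest above is saved as rs-demo.yaml, a hypothetical file name):

kubectl apply -f rs-demo.yaml
kubectl get rs rs-demo            # DESIRED/CURRENT/READY should all reach 3
kubectl get pods --show-labels    # three pods labeled app=rs-demo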
Verify: change a pod's label and see whether the pods change.
root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 18m app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 18m app=ng-rc-80 rs-demo-nzmqs 1/1 Running 0 71s app=rs-demo rs-demo-v2vb6 1/1 Running 0 71s app=rs-demo rs-demo-x27fv 1/1 Running 0 71s app=rs-demo test 1/1 Running 6 (29m ago) 16d run=test test1 1/1 Running 6 (29m ago) 16d run=test1 test2 1/1 Running 6 (29m ago) 16d run=test2 root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=nginx --overwrite pod/rs-demo-nzmqs labeled root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 19m app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 19m app=ng-rc-80 rs-demo-bdfdd 1/1 Running 0 4s app=rs-demo rs-demo-nzmqs 1/1 Running 0 103s app=nginx rs-demo-v2vb6 1/1 Running 0 103s app=rs-demo rs-demo-x27fv 1/1 Running 0 103s app=rs-demo test 1/1 Running 6 (30m ago) 16d run=test test1 1/1 Running 6 (30m ago) 16d run=test1 test2 1/1 Running 6 (30m ago) 16d run=test2 root@k8s-deploy:/yaml# kubectl label pod/rs-demo-nzmqs app=rs-demo --overwrite pod/rs-demo-nzmqs labeled root@k8s-deploy:/yaml# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS ng-rc-l7xmp 1/1 Running 0 19m app=ng-rc-80 ng-rc-wl5d6 1/1 Running 0 19m app=ng-rc-80 rs-demo-nzmqs 1/1 Running 0 119s app=rs-demo rs-demo-v2vb6 1/1 Running 0 119s app=rs-demo rs-demo-x27fv 1/1 Running 0 119s app=rs-demo test 1/1 Running 6 (30m ago) 16d run=test test1 1/1 Running 6 (30m ago) 16d run=test1 test2 1/1 Running 6 (30m ago) 16d run=test2 root@k8s-deploy:/yaml#
Note: after we change a pod's label to something else, the rs controller creates a new pod labeled app=rs-demo, because once the label is changed the rs controller sees that its label selector matches fewer pods than the user defined, so it creates a new pod with the label app=rs-demo. When we change the pod's label back to rs-demo, the rs controller sees that its label selector now matches more pods than the user expects, so it deletes a pod to bring the number of app=rs-demo pods back in line with the expectation.
The Deployment controller; for details see https://www.cnblogs.com/qiuhom-1874/p/14149042.html
The Deployment controller is the third generation of pod replica controller in k8s. It is more advanced than rs: besides everything rs can do, it adds many higher-level features, most importantly rolling updates and rollbacks.
Example deploy controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
  labels:
    app: deploy-demo
spec:
  selector:
    matchLabels:
      app: deploy-demo
  replicas: 2
  template:
    metadata:
      labels:
        app: deploy-demo
    spec:
      containers:
      - name: deploy-demo
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Apply the manifest
Note: the deploy controller manages the number of pods by creating an rs controller.
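That relationship can be seen by applying the manifest and listing the ReplicaSet the deployment owns (a sketch, assuming the manifest above is saved as deploy-demo.yaml, a hypothetical file name):

kubectl apply -f deploy-demo.yaml
kubectl get deploy deploy-demo
kubectl get rs -l app=deploy-demo     # an rs named deploy-demo-<hash> is created by the deployment
kubectl get pods -l app=deploy-demo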
Updating the pod version by changing the image
Apply the manifest
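A sketch of this workflow, assuming deploy-demo.yaml from above and a hypothetical new tag nginx:v2 in the same harbor project:

# edit deploy-demo.yaml and change the image line, e.g.
#   image: "harbor.ik8s.cc/baseimages/nginx:v2"
kubectl apply -f deploy-demo.yaml
kubectl rollout status deployment deploy-demo   # watch the rolling update complete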
Updating the pod version with a command
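This is usually done with kubectl set image (a sketch; the container name deploy-demo matches the manifest above, while the tag nginx:v2 is hypothetical):

kubectl set image deployment/deploy-demo deploy-demo=harbor.ik8s.cc/baseimages/nginx:v2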
Viewing the rs revision history
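Each update leaves the previous ReplicaSet behind, scaled down to 0, which is how the revision history is kept; a sketch of how to list them:

kubectl get rs -l app=deploy-demo -o wide   # old ReplicaSets keep the image of the revision they belong to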
Viewing the update history
Note: the history records no change-cause for the revisions because it is not recorded by default; to record it, add the --record option manually, as shown below.
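A sketch of both commands (--record has been deprecated in newer kubectl releases but is still accepted):

kubectl rollout history deployment deploy-demo   # CHANGE-CAUSE shows <none> by default
kubectl apply -f deploy-demo.yaml --record       # record the command as the change-cause
kubectl rollout history deployment deploy-demo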
Viewing the details of a specific revision
Note: to view the details of a specific revision, add --revision= followed by the revision number.
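For example (a sketch; revision 2 is hypothetical):

kubectl rollout history deployment deploy-demo --revision=2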
Rolling back to the previous revision
Note: kubectl rollout undo rolls the deploy back to the previous revision.
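A sketch of the rollback:

kubectl rollout undo deployment deploy-demo
kubectl rollout status deployment deploy-demo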
Rolling back to a specific revision
Note: use the --to-revision option to specify the revision number to roll back to.
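A sketch (revision 1 is hypothetical):

kubectl rollout undo deployment deploy-demo --to-revision=1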
Service resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14161950.html
Traffic flow for a nodeport-type service
A nodeport-type service mainly solves the problem of clients outside the k8s cluster reaching pods. The flow is: an external client accesses the exposed port on any node of the cluster, and the node that receives the traffic forwards it to the corresponding pod through its local iptables or ipvs rules, so the external client ends up reaching a pod inside the cluster. In practice, to make external access easier, a load balancer is usually deployed outside the cluster: external clients access a port on the load balancer, and the load balancer forwards their traffic into the k8s cluster and thus to the pods.
Example ClusterIP-type svc
apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
Apply the manifest
Note: after a clusterip-type service is created, the svc is given a cluster IP, and its endpoints are associated with the backend pods through the label selector, i.e. when we access the svc's cluster IP the traffic is forwarded to a backend endpoint pod for a response. A clusterip-type svc can only be reached by clients inside the k8s cluster; clients outside the cluster cannot reach it, because the cluster IP is an address on the internal k8s network.
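The capture of this step is not reproduced; applying and inspecting the svc would look like this (a sketch, assuming the manifest above is saved as ngx-svc.yaml, a hypothetical file name):

kubectl apply -f ngx-svc.yaml
kubectl get svc ngx-svc           # shows the allocated CLUSTER-IP
kubectl get endpoints ngx-svc     # lists the pod IPs matched by app=deploy-demo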
Verify: access port 80 on 10.100.100.23 and see whether the backend nginx pod responds normally.
root@k8s-node01:~# curl 10.100.100.23 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> root@k8s-node01:~#
Example nodeport-type service
apiVersion: v1
kind: Service
metadata:
  name: ngx-nodeport-svc
  namespace: default
spec:
  selector:
    app: deploy-demo
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30012
Note: for a nodeport-type service you only need to change type to NodePort on top of the clusterip-type svc and then specify the node port with nodePort under the ports field.
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f nodeport-svc-demo.yaml
service/ngx-nodeport-svc created
root@k8s-deploy:/yaml# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP        16d
ngx-nodeport-svc   NodePort    10.100.209.225   <none>        80:30012/TCP   11s
root@k8s-deploy:/yaml# kubectl describe svc ngx-nodeport-svc
Name:                     ngx-nodeport-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=deploy-demo
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.209.225
IPs:                      10.100.209.225
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30012/TCP
Endpoints:                10.200.155.178:80,10.200.211.138:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@k8s-deploy:/yaml#
Verify: access port 30012 on any node of the k8s cluster and see whether the nginx pod can be reached.
root@k8s-deploy:/yaml# curl 192.168.0.34:30012 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> root@k8s-deploy:/yaml#
Note: a client outside k8s can reach the nginx pod normally through port 30012 on a k8s node; clients inside the cluster can of course still access it through the generated cluster IP.
root@k8s-node01:~# curl 10.100.209.225:30012 curl: (7) Failed to connect to 10.100.209.225 port 30012 after 0 ms: Connection refused root@k8s-node01:~# curl 127.0.0.1:30012 curl: (7) Failed to connect to 127.0.0.1 port 30012 after 0 ms: Connection refused root@k8s-node01:~# curl 192.168.0.34:30012 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> root@k8s-node01:~#
Note: clients inside the cluster can only access port 80 of the cluster IP, or port 30012 on a node's externally reachable IP.
Volume resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14180752.html
Mounting NFS in a pod
Prepare the data directory on the NFS server
root@harbor:~# cat /etc/exports # /etc/exports: the access control list for filesystems which may be exported # to NFS clients. See exports(5). # # Example for NFSv2 and NFSv3: # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check) # # Example for NFSv4: # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check) # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check) # /data/k8sdata/kuboard *(rw,no_root_squash) /data/volumes *(rw,no_root_squash) /pod-vol *(rw,no_root_squash) root@harbor:~# mkdir -p /pod-vol root@harbor:~# ls /pod-vol -d /pod-vol root@harbor:~# exportfs -av exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exporting *:/pod-vol exporting *:/data/volumes exporting *:/data/k8sdata/kuboard root@harbor:~#
Mount the NFS directory in a pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-80
  namespace: default
  labels:
    app: ngx-nfs-80
spec:
  selector:
    matchLabels:
      app: ngx-nfs-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-nfs-80
    spec:
      containers:
      - name: ngx-nfs-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-nfs-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nfs-vol
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nfs-vol
        nfs:
          server: 192.168.0.42
          path: /pod-vol
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-nfs-svc
  namespace: default
spec:
  selector:
    app: ngx-nfs-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30013
Apply the manifest
root@k8s-deploy:/yaml# kubectl apply -f nfs-vol.yaml deployment.apps/ngx-nfs-80 created service/ngx-nfs-svc created root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE deploy-demo-6849bdf444-pvsc9 1/1 Running 1 (57m ago) 46h deploy-demo-6849bdf444-sg8fz 1/1 Running 1 (57m ago) 46h ng-rc-l7xmp 1/1 Running 1 (57m ago) 47h ng-rc-wl5d6 1/1 Running 1 (57m ago) 47h ngx-nfs-80-66c9697cf4-8pm9k 1/1 Running 0 7s rs-demo-nzmqs 1/1 Running 1 (57m ago) 47h rs-demo-v2vb6 1/1 Running 1 (57m ago) 47h rs-demo-x27fv 1/1 Running 1 (57m ago) 47h test 1/1 Running 7 (57m ago) 17d test1 1/1 Running 7 (57m ago) 17d test2 1/1 Running 7 (57m ago) 17d root@k8s-deploy:/yaml# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 18d ngx-nfs-svc NodePort 10.100.16.14 <none> 80:30013/TCP 15s ngx-nodeport-svc NodePort 10.100.209.225 <none> 80:30012/TCP 45h root@k8s-deploy:/yaml#
Provide an index.html file under /pod-vol on the NFS server
root@harbor:~# echo "this page from nfs server.." >> /pod-vol/index.html
root@harbor:~# cat /pod-vol/index.html
this page from nfs server..
root@harbor:~#
Access the pod and see whether the index.html on the NFS server can be reached.
root@k8s-deploy:/yaml# curl 192.168.0.35:30013
this page from nfs server..
root@k8s-deploy:/yaml#
Note: the page returned by the pod is exactly the page we just created on the NFS server, which shows that the pod mounted the directory exported by NFS correctly.
PV and PVC resources; for details see https://www.cnblogs.com/qiuhom-1874/p/14188621.html
Using a static PVC backed by NFS
Prepare the directory on the NFS server
root@harbor:~# cat /etc/exports # /etc/exports: the access control list for filesystems which may be exported # to NFS clients. See exports(5). # # Example for NFSv2 and NFSv3: # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check) # # Example for NFSv4: # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check) # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check) # /data/k8sdata/kuboard *(rw,no_root_squash) /data/volumes *(rw,no_root_squash) /pod-vol *(rw,no_root_squash) /data/k8sdata/myserver/myappdata *(rw,no_root_squash) root@harbor:~# mkdir -p /data/k8sdata/myserver/myappdata root@harbor:~# exportfs -av exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver/myappdata". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x exporting *:/data/k8sdata/myserver/myappdata exporting *:/pod-vol exporting *:/data/volumes exporting *:/data/k8sdata/kuboard root@harbor:~#
Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-static-pv
  namespace: default
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 192.168.0.42
Create a PVC bound to the PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-static-pvc
  namespace: default
spec:
  volumeName: myapp-static-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Create a pod that uses the PVC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-nfs-pvc-80
  namespace: default
  labels:
    app: ngx-pvc-80
spec:
  selector:
    matchLabels:
      app: ngx-pvc-80
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-pvc-80
    spec:
      containers:
      - name: ngx-pvc-80
        image: "harbor.ik8s.cc/baseimages/nginx:v1"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: ngx-pvc-80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: data-pvc
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: data-pvc
        persistentVolumeClaim:
          claimName: myapp-static-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-pvc-svc
  namespace: default
spec:
  selector:
    app: ngx-pvc-80
  type: NodePort
  ports:
  - name: ngx-nfs-svc
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30014
Apply the manifests above
root@k8s-deploy:/yaml# kubectl apply -f nfs-static-pvc-demo.yaml persistentvolume/myapp-static-pv created persistentvolumeclaim/myapp-static-pvc created deployment.apps/ngx-nfs-pvc-80 created service/ngx-pvc-svc created root@k8s-deploy:/yaml# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE myapp-static-pv 2Gi RWO Retain Bound default/myapp-static-pvc 4s root@k8s-deploy:/yaml# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE myapp-static-pvc Pending myapp-static-pv 0 7s root@k8s-deploy:/yaml# kubectl get pods NAME READY STATUS RESTARTS AGE deploy-demo-6849bdf444-pvsc9 1/1 Running 1 (151m ago) 47h deploy-demo-6849bdf444-sg8fz 1/1 Running 1 (151m ago) 47h ng-rc-l7xmp 1/1 Running 1 (151m ago) 2d1h ng-rc-wl5d6 1/1 Running 1 (151m ago) 2d1h ngx-nfs-pvc-80-f776bb6d-nwwwq 0/1 Pending 0 10s rs-demo-nzmqs 1/1 Running 1 (151m ago) 2d rs-demo-v2vb6 1/1 Running 1 (151m ago) 2d rs-demo-x27fv 1/1 Running 1 (151m ago) 2d test 1/1 Running 7 (151m ago) 18d test1 1/1 Running 7 (151m ago) 18d test2 1/1 Running 7 (151m ago) 18d root@k8s-deploy:/yaml#
Create an index.html under /data/k8sdata/myserver/myappdata on the NFS server and check whether the page can be reached.
root@harbor:~# echo "this page from nfs-server /data/k8sdata/myserver/myappdata/index.html" >> /data/k8sdata/myserver/myappdata/index.html
root@harbor:~# cat /data/k8sdata/myserver/myappdata/index.html
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#
Access the pod
root@harbor:~# curl 192.168.0.36:30014
this page from nfs-server /data/k8sdata/myserver/myappdata/index.html
root@harbor:~#
Using dynamic PVCs backed by NFS
Create the namespace, service account, clusterrole, clusterrolebinding, role, and rolebinding
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Create the sc (StorageClass)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
reclaimPolicy: Retain # PV reclaim policy; the default is Delete, which removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1    # some options misbehave with containerd
  #- noresvport  # tell the NFS client to use a new TCP source port when re-establishing the network connection
  - noatime      # do not update the inode access time when files are read; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true" # keep (archive) the data when the pod/PVC is deleted; the default "false" does not keep it
Create the provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.0.42
        - name: NFS_PATH
          value: /data/volumes
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.0.42
          path: /data/volumes
Create a PVC through the sc
apiVersion: v1
kind: Namespace
metadata:
  name: myserver
---
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the storageclass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size
Create an app that uses the PVC
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-container
        image: nginx:1.20.0
        #imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/usr/share/nginx/html/statics"
          name: statics-datadir
      volumes:
      - name: statics-datadir
        persistentVolumeClaim:
          claimName: myserver-myapp-dynamic-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30015
  selector:
    app: myserver-myapp-frontend
Apply the manifests above
root@k8s-deploy:/yaml/myapp# kubectl apply -f .
namespace/nfs created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
namespace/myserver created
persistentvolumeclaim/myserver-myapp-dynamic-pvc created
deployment.apps/myserver-myapp-deployment-name created
service/myserver-myapp-service-name created
root@k8s-deploy:
Verify: were the sc, pv, and pvc created? Is the pod running normally?
root@k8s-deploy:/yaml/myapp# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  105s
root@k8s-deploy:/yaml/myapp# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS          REASON   AGE
pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            Retain           Bound    myserver/myserver-myapp-dynamic-pvc   managed-nfs-storage            107s
root@k8s-deploy:/yaml/myapp# kubectl get pvc -n myserver
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myserver-myapp-dynamic-pvc   Bound    pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c   500Mi      RWX            managed-nfs-storage   117s
root@k8s-deploy:/yaml/myapp# kubectl get pods -n myserver
NAME                                              READY   STATUS    RESTARTS   AGE
myserver-myapp-deployment-name-65ff65446f-xpd5p   1/1     Running   0          2m8s
root@k8s-deploy:/yaml/myapp#
Note: the pv is created automatically by the sc, and the pvc is bound to the pv automatically.
Verify: create an index.html file under /data/volumes/ on the NFS server, access the pod's service, and see whether the file can be reached.
root@harbor:/data/volumes# ls
myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c
root@harbor:/data/volumes# cd myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c/
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# ls
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# echo "this page from nfs-server /data/volumes" >> index.html
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c# cat index.html
this page from nfs-server /data/volumes
root@harbor:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-01709c7f-0cf9-4554-9ae9-72db89e7308c#
Note: under /data/volumes on the NFS server a directory is generated automatically, named after the namespace of the pod that uses the pvc + the pvc name + the pv name; this directory is created by the provisioner.
Access the pod
root@harbor:~# curl 192.168.0.36:30015/statics/index.html
this page from nfs-server /data/volumes
root@harbor:~#
Note: the file we just created can be reached, which shows that the pod correctly mounted the corresponding directory on the NFS server.
PV/PVC summary
A PV is an abstraction of the underlying network storage: it defines network storage as a storage resource and splits one storage backend into multiple pieces for different workloads to use.
A PVC is a request for (a claim on) PV resources; a pod writes its data through the PVC into the PV, and the PV in turn stores it on the real underlying storage.
PersistentVolume parameters
capacity: # the size of the PV, kubectl explain PersistentVolume.spec.capacity
accessModes: access modes, # kubectl explain PersistentVolume.spec.accessModes
ReadWriteOnce – the PV can be mounted read-write by a single node, RWO
ReadOnlyMany – the PV can be mounted by many nodes, but read-only, ROX
ReadWriteMany – the PV can be mounted read-write by many nodes, RWX
persistentVolumeReclaimPolicy # reclaim policy, i.e. what happens to an already-created volume when it is deleted/released:
Retain – keep the PV as it is after release; an administrator has to delete it manually in the end
Recycle – recycle the space, i.e. delete all data on the volume (including directories and hidden files); currently only NFS and hostPath support this
Delete – delete the volume automatically
volumeMode # volume mode, kubectl explain PersistentVolume.spec.volumeMode; defines whether the volume is used as a raw block device or with a filesystem; the default is filesystem
mountOptions # a list of additional mount options, for finer-grained control
Official documentation: Persistent Volumes | Kubernetes
PersistentVolumeClaim parameters
accessModes: PVC access modes, # kubectl explain PersistentVolumeClaim.spec.accessModes
ReadWriteOnce – the PVC can be mounted read-write by a single node, RWO
ReadOnlyMany – the PVC can be mounted by many nodes, but read-only, ROX
ReadWriteMany – the PVC can be mounted read-write by many nodes, RWX
resources: # the amount of storage the PVC requests
selector: # label selector used to choose the PV to bind
matchLabels # match by label name
matchExpressions # match by set-based expressions (In, NotIn, Exists, DoesNotExist)
volumeName # the name of the PV to bind
volumeMode # volume mode; defines whether the PVC uses a raw block device or a filesystem; the default is filesystem
Volume — storage volume types
static: static volumes; the PV has to be created manually before use, then a PVC is created, bound to the PV, and mounted into the pod. Suitable for scenarios where the PVs and PVCs are relatively fixed.
dynamic: dynamic volumes; a storage class (storageclass) is created first, and later, when a pod uses a PVC, the PVC can be provisioned dynamically through the storage class. Suitable for stateful service clusters such as a MySQL primary with replicas, a zookeeper cluster, and so on.
StorageClass official documentation: Storage Classes | Kubernetes