This article references: http://www.kubeasy.com/
The previous post covered the basic concepts and usage of the master, nodes, and Pods. This post focuses on Pod resource scheduling in Kubernetes.
I. RC and ReplicaSet
Creating and deleting a Replication Controller or ReplicaSet is not much different from creating and deleting a Pod. Replication Controller is almost never used in production anymore, and ReplicaSet is rarely used on its own; Pods are instead managed through the higher-level resources Deployment, DaemonSet, and StatefulSet.
1. Replication Controller and ReplicaSet
Replication Controller (RC) and ReplicaSet (RS) are two simple ways to deploy Pods. Since production environments mainly use higher-level mechanisms such as Deployment to manage and deploy Pods, this section only briefly introduces Replication Controller and ReplicaSet.
1.1 Replication Controller
A Replication Controller (RC) ensures that the number of Pod replicas matches the desired value defined in the RC. In other words, a Replication Controller ensures that a Pod, or a homogeneous set of Pods, is always available.
If more Pods exist than the configured value, the Replication Controller terminates the extra Pods; if there are too few, it starts additional Pods to reach the desired count. Unlike manually created Pods, Pods maintained by a Replication Controller are automatically replaced when they fail, are deleted, or are terminated. For this reason, even an application that needs only a single Pod should be managed by a Replication Controller or a similar mechanism. A Replication Controller is similar to a process supervisor, except that instead of watching individual processes on a single node, it watches multiple Pods across multiple nodes.
An example Replication Controller definition:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
1.2 ReplicaSet
ReplicaSet is the next-generation Replication Controller and supports set-based label selectors; it is mainly used by Deployment to coordinate Pod creation, deletion, and updates. The only difference from Replication Controller is that ReplicaSet supports set-based selectors. In practice, although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, unless the Pods never need updates or other orchestration.
An example ReplicaSet definition:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
II. Stateless Applications: Deployment
2.1 The Deployment Concept
Deployments are used to deploy stateless services and are the most commonly used controller, typically managing an organization's stateless microservices such as configserver, zuul, or Spring Boot applications. A Deployment manages multiple Pod replicas and provides seamless migration, automatic scale-up and scale-down, automatic disaster recovery, one-click rollback, and more.
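As a quick alternative to writing the manifest below by hand, kubectl can also generate a starting point; a minimal sketch, assuming kubectl v1.19+ (for the --replicas flag):
[root@master yaml]# kubectl create deployment nginx --image=nginx:1.15.2 --replicas=2 --dry-run=client -o yaml > nginx-deploy.yaml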
2.2 Manually Creating a Deployment
# The YAML file
# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-19T02:41:11Z"
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2 # number of replicas
  revisionHistoryLimit: 10 # number of revisions to keep in history
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
# Create the Deployment
[root@master yaml]# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx created
[root@master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 63s
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 63s
# Status breakdown
[root@master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 108s 172.171.205.134 master <none> <none>
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 108s 172.165.11.3 node2 <none> <none>
NAME: the Deployment's name
READY: Pod status; how many Pods are Ready
UP-TO-DATE: the number of replicas that have been updated to reach the desired state
AVAILABLE: the number of replicas currently available
AGE: how long the application has been running
CONTAINERS: the container name
IMAGES: the container image
SELECTOR: the label selector for the managed Pods
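Note that the UP-TO-DATE, AVAILABLE, CONTAINERS, IMAGES, and SELECTOR columns come from querying the Deployment itself rather than the Pods; the output should look roughly like this:
[root@master yaml]# kubectl get deploy nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
nginx   2/2     2            2           108s   nginx        nginx:1.15.2   app=nginx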
2.3 Updating a Deployment
2.3.1 Change the Deployment's image and record the change
# Change it from the command line
[root@master yaml]# kubectl set image deploy nginx nginx=nginx:1.15.4 --record
deployment.apps/nginx image updated
2.3.2 Watch the update progress
[root@master yaml]# kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 2 new replicas have been updated...
2.3.3 View the update history
[root@master log]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2         kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
3         kubectl set image deploy nginx nginx=nginx:1.15.4 --record=true
4         kubectl set image deploy nginx nginx=nginx:1.15.2 --record=true
2.4 Rollbacks
2.4.1 Roll back to the previous revision
[root@master log]# kubectl rollout undo deploy nginx
deployment.apps/nginx rolled back
2.4.2 Roll back to a specific revision
1. First, view the update history
[root@master log]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2         kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
3         kubectl set image deploy nginx nginx=nginx:1.15.4 --record=true
4         kubectl set image deploy nginx nginx=nginx:1.15.2 --record=true
2. Roll back to revision 2
[root@master log]# kubectl rollout undo deploy nginx --to-revision=2
deployment.apps/nginx rolled back
2.4.3 View the details of a specific revision
[root@master log]# kubectl rollout history deploy nginx --revision=1
deployment.apps/nginx with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=66bbc9fdc5
  Containers:
   nginx:
    Image:       nginx:1.15.2
    Port:        <none>
    Host Port:   <none>
    Environment: <none>
    Mounts:      <none>
  Volumes:       <none>
2.4.4 Pausing and Resuming a Deployment
# Pause the rollout
[root@master log]# kubectl rollout pause deploy nginx
deployment.apps/nginx paused
[root@master log]# kubectl set image deploy nginx nginx=nginx:1.19.1 --record
deployment.apps/nginx image updated
# Make a second configuration change: add memory and CPU settings
[root@master log]# kubectl set resources deploy nginx -c nginx --limits=cpu=200m,memory=128Mi --requests=cpu=10m,memory=16Mi
deployment.apps/nginx resource requirements updated
[root@master log]# kubectl get deploy nginx -oyaml
# The memory and CPU values now match what we set above
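Specifically, the container's resources section in the output should now read like this (matching the values passed to kubectl set resources above):
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 16Mi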
# Check whether the Pods have been updated (they have not, because the rollout is paused)
[root@master log]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-jnh5c 1/1 Running 0 33m
nginx-66bbc9fdc5-v5wq7 1/1 Running 0 33m
# Resume the rollout
[root@master log]# kubectl rollout resume deploy nginx
deployment.apps/nginx resumed
[root@master log]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5b9975cbd8 0 0 0 21m
nginx-66945c45ff 0 0 0 17m
nginx-66bbc9fdc5 2 2 2 34m
nginx-7d596c7796 1 1 0 19s
nginx-c58645c45 0 0 0 19m
2.5 Deployment Notes
[root@master log]# kubectl get deploy nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "7"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2020-09-19T02:41:11Z","generation":1,"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"progressDeadlineSeconds":600,"replicas":2,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"nginx"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.15.2","imagePullPolicy":"IfNotPresent","name":"nginx","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}}}
    kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.19.1 --record=true
  creationTimestamp: "2021-08-28T13:52:35Z"
  generation: 10
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources:
                  .: {}
                  f:limits:
                    f:cpu: {}
                    f:memory: {}
                  f:requests:
                    f:cpu: {}
                    f:memory: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-08-28T13:52:35Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubernetes.io/change-cause: {}
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:image: {}
                f:resources:
                  f:limits:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                  f:requests:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
    manager: kubectl-set
    operation: Update
    time: "2021-08-28T14:22:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:unavailableReplicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-08-28T14:26:34Z"
  name: nginx
  namespace: default
  resourceVersion: "13598"
  uid: aea1a028-b412-41fe-b8ea-a955a4c6245d
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.19.1
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 16Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2021-08-28T13:52:37Z"
    lastUpdateTime: "2021-08-28T13:52:37Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-08-28T14:26:34Z"
    lastUpdateTime: "2021-08-28T14:26:34Z"
    message: ReplicaSet "nginx-7d596c7796" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 10
  readyReplicas: 2
  replicas: 3
  unavailableReplicas: 1
  updatedReplicas: 1
- .spec.revisionHistoryLimit: how many old ReplicaSet revisions to keep; if set to 0, no history is kept.
- .spec.minReadySeconds: optional; the minimum number of seconds a newly created Pod must be Ready, without any container crashing, before it is considered available. Defaults to 0, i.e. a Pod is considered available as soon as it is created.
- Rolling update strategy (see the sketch after this list):
  - .spec.strategy.type: how the Deployment is updated; the default is RollingUpdate.
    - RollingUpdate: rolling update; maxSurge and maxUnavailable can be specified.
      - maxUnavailable: the maximum number of Pods that may be unavailable during a rollback or update. Optional; defaults to 25%; may be a number or a percentage. If it is 0, maxSurge cannot also be 0.
      - maxSurge: the maximum number of Pods that may exist above the desired count. Optional; defaults to 25%; may be a number or a percentage. If it is 0, maxUnavailable cannot be 0.
    - Recreate: all old Pods are deleted first, then the new Pods are created.
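For example, a sketch of a near-zero-downtime strategy (the exact values depend on your capacity and availability requirements):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count during the update
      maxUnavailable: 0  # never drop below the desired count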
III. Stateful Applications: StatefulSet
3.1 The StatefulSet Concept
StatefulSet (abbreviated sts) is commonly used to deploy stateful applications that must start in order. For example, when containerizing a Spring Cloud project, Eureka is well suited to StatefulSet deployment: each Eureka instance gets a unique, stable identifier, no extra Service is needed per instance, and the other Spring Boot applications can register directly through Eureka's Headless Service.
- If the Eureka StatefulSet's resource name is eureka, its Pods are eureka-0, eureka-1, eureka-2
- Service: a headless Service without a ClusterIP, e.g. eureka-svc
- eureka-0.eureka-svc.NAMESPACE_NAME, eureka-1.eureka-svc, ...
StatefulSet is the workload API object for managing stateful applications. In production it can be used to deploy, for example, ElasticSearch clusters, MongoDB clusters, or clusters that need persistence such as RabbitMQ, Redis, Kafka, and ZooKeeper.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. The difference is that a StatefulSet maintains a sticky identity for each Pod: the Pods are created from the same spec but are not interchangeable, and each keeps a persistent identifier across rescheduling, generally of the form StatefulSetName-Number. For example, a StatefulSet named redis-sentinel that creates three Pods produces redis-sentinel-0, redis-sentinel-1, and redis-sentinel-2. Pods created by a StatefulSet usually communicate through a Headless Service, which, unlike a normal Service, has no ClusterIP and relies on Endpoints for communication. The Headless address format is generally: statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local.
Where:
- serviceName is the name of the Headless Service, which must be specified when creating a StatefulSet;
- 0..N-1 is the Pod's ordinal, running from 0 to N-1;
- statefulSetName is the name of the StatefulSet;
- namespace is the namespace the Service lives in;
- .cluster.local is the Cluster Domain.
Suppose a project needs a master-replica Redis deployment in Kubernetes. A StatefulSet is an excellent fit, because a StatefulSet schedules the next container only after the previous one has fully started, and each container's identifier is fixed, so the identifier can be used to determine a Pod's role.
For example, with a StatefulSet named redis-ms deploying a master-replica Redis, the first container starts with the identifier redis-ms-0, and the hostname inside the Pod is also redis-ms-0. The hostname can therefore decide the role: the container whose hostname is redis-ms-0 acts as the Redis master, and the rest act as replicas. A replica (Slave) can then reach the master through the master's unchanging Headless Service address, with a configuration file like this:
port 6379
slaveof redis-ms-0.redis-ms.public-service.svc.cluster.local 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
....
Here redis-ms-0.redis-ms.public-service.svc.cluster.local is the Redis master's Headless Service address. Within the same namespace, redis-ms-0.redis-ms is enough; the public-service.svc.cluster.local suffix can be omitted.
3.2 StatefulSet Notes
A StatefulSet is generally used for applications with one or more of the following needs:
- A stable, unique network identifier.
- Persistent data.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
If an application needs no stable identifiers and no ordered deployment, deletion, or scaling, it should be deployed with a stateless controller such as Deployment or ReplicaSet.
StatefulSet was a beta resource before Kubernetes 1.9 and is unavailable in any Kubernetes version before 1.5.
The storage a Pod uses must either be provisioned by a PersistentVolume Provisioner according to the requested StorageClass or pre-provisioned by an administrator; storage can also be omitted entirely.
To keep data safe, deleting or scaling down a StatefulSet does not delete the volumes associated with it; PVCs and PVs can be deleted manually and selectively.
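Per-replica storage is declared through volumeClaimTemplates, from which the controller creates one PVC per Pod (e.g. data-web-0, data-web-1 for the web StatefulSet below). A minimal sketch, assuming a StorageClass named standard exists in the cluster:
spec:
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard   # assumption: this StorageClass exists
      resources:
        requests:
          storage: 1Gi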
StatefulSet currently relies on a Headless Service for Pod network identity and communication, and this Service must be created in advance.
Deleting a StatefulSet makes no guarantees about Pod termination. To terminate Pods in an ordered, graceful way, scale the StatefulSet down to 0 replicas before deleting it.
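A sketch of that teardown sequence for the web StatefulSet used below (the PVC cleanup assumes the claims carry the app=nginx label):
[root@master yaml]# kubectl scale sts web --replicas=0
[root@master yaml]# kubectl delete sts web
# PVCs survive the deletion; remove them manually once the data is no longer needed
[root@master yaml]# kubectl delete pvc -l app=nginx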
3.3 Manually Creating a StatefulSet
The configuration file:
[root@master yaml]# cat state-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
# Create the resources
[root@master yaml]# kubectl apply -f state-nginx.yaml
service/nginx created
statefulset.apps/web created
# Check Pod startup status
[root@master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 2m56s
web-1 1/1 Running 0 2m31s
# Inspect the headless Service we defined; the nginx Service has no CLUSTER-IP
[root@master yaml]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
nginx ClusterIP None <none> 80/TCP 8m34s
Key points:
- kind: Service defines a headless Service named nginx. The Pod DNS entries it produces take the form web-0.nginx.default.svc.cluster.local (and so on for the other replicas); since no namespace is specified, everything lands in default.
- kind: StatefulSet defines a StatefulSet named web; replicas is the number of Pod replicas to deploy, 2 in this example.
A StatefulSet must set a Pod selector (.spec.selector) that matches its template labels (.spec.template.metadata.labels). Before version 1.8, .spec.selector was given a default value when omitted; from 1.8 on, omitting a matching Pod selector causes an error during StatefulSet creation.
When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-name, whose value is the Pod's name; this label can be used to match a Service against one specific Pod.
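For example, that label makes it possible to point a regular Service at a single replica; a sketch (the Service name web-0-svc is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: web-0-svc   # hypothetical name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0   # matches only the Pod named web-0
  ports:
  - port: 80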
3.4 Verifying DNS Resolution
# Create a busybox container for the verification
[root@master yaml]# cat<<EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
>   name: busybox
>   namespace: default
> spec:
>   containers:
>   - name: busybox
>     image: busybox:1.28
>     command:
>     - sleep
>     - "3600"
>     imagePullPolicy: IfNotPresent
>   restartPolicy: Always
> EOF
# Exec into the busybox container; note there is no bash, only sh
[root@master yaml]# kubectl exec -it busybox -- sh
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.244.32.152 web-0.nginx.default.svc.cluster.local
# The Pods created by the StatefulSet controller resolve correctly, so the verification passes
3.5 StatefulSet Update Strategies
[root@k8s-master01 ~]# kubectl get sts web -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2020-09-19T07:46:49Z"
  generation: 5
  name: web
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
- partition: only Pods whose ordinal is greater than or equal to partition are updated when the template changes; Pods with lower ordinals keep the old version. This allows a staged (canary) rollout that touches a small number of replicas first, limiting the impact on the business (see the sketch after this list).
- type: as with Deployment, the default is a rolling update (RollingUpdate); StatefulSet also supports OnDelete, where Pods are replaced only when deleted manually.
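A sketch of a staged rollout using partition on the two-replica web StatefulSet: with partition: 1, an image change only updates web-1, and web-0 follows once partition is set back to 0:
[root@k8s-master01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
[root@k8s-master01 ~]# kubectl set image sts web nginx=nginx:1.15.3   # only web-1 is updated
[root@k8s-master01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'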
IV. Daemon Processes: DaemonSet
DaemonSet (abbreviated ds) runs one Pod on every node, or on every node that matches a selector.
Typical DaemonSet use cases:
- Running a cluster storage daemon, such as ceph or glusterd
- A node-level CNI network plugin, such as calico
- Node log collection: fluentd or filebeat
- Node monitoring: node exporter
- Service exposure: deploying an ingress nginx
4.1 Creating a DaemonSet
# The DaemonSet configuration file
[root@master yaml]# cat nginx-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
# Create it
[root@master yaml]# kubectl create -f nginx-ds.yaml
daemonset.apps/nginx created
# Each node now runs one nginx Pod
[root@master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 52m 172.171.205.151 master <none> <none>
nginx-2bvmk 1/1 Running 0 71s 172.171.205.153 master <none> <none>
nginx-4rffn 1/1 Running 0 71s 172.165.149.25 node1 <none> <none>
nginx-f8qhk 1/1 Running 0 71s 172.165.11.13 node2 <none> <none>
# Label the nodes so we can observe node selection
[root@master yaml]# kubectl label node node1 node2 ds=true
node/node1 labeled
node/node2 labeled
[root@master yaml]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane,master 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1 Ready <none> 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2 Ready <none> 18h v1.20.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
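With the ds=true label in place, the DaemonSet can be restricted to the labeled nodes; a sketch of the addition to nginx-ds.yaml (after applying it, the Pod on master should be removed, since master does not carry the label):
spec:
  template:
    spec:
      nodeSelector:
        ds: "true"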
# After changing the nginx image version, view the update history
[root@master yaml]# kubectl rollout history ds nginx
daemonset.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
4.2 DaemonSet Updates and Rollbacks
StatefulSet and DaemonSet updates and rollbacks work the same way as for Deployment and are not repeated here; refer to the sections above. A sketch of the equivalent commands follows.
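# The same rollout subcommands work against a DaemonSet, for example:
[root@master yaml]# kubectl rollout status ds nginx
[root@master yaml]# kubectl rollout undo ds nginx
[root@master yaml]# kubectl rollout undo ds nginx --to-revision=1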
V. The HPA Controller
The Pod replica count managed by a Deployment, ReplicaSet, Replication Controller, or StatefulSet can be adjusted manually at runtime to better match the actual business scale. However, manual adjustment depends on operators closely monitoring the resource pressure of the containerized applications and working out reasonable values, so it always lags somewhat. For this reason, Kubernetes provides several auto-scaling tools.
HPA (Horizontal Pod Autoscaler) elastically scales the number of Pods under a controller object. There are currently two implementations, HPA and HPA(v2): the former supports only CPU as the evaluation metric, while the newer version can pull metric data from the resource metrics API and custom metrics APIs.
- HPA v1 is the stable horizontal autoscaler and supports only the CPU metric
- v2 is the beta version: v2beta1 supports CPU, memory, and custom metrics
- v2beta2 supports CPU, memory, custom metrics (Custom), and external metrics (ExternalMetrics)
(Figure omitted: the HPA control-loop workflow.)
5.1 Creating an HPA
[root@master yaml]# cat hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: mynginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mynginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi
# Create it
[root@master yaml]# kubectl apply -f hpa.yaml
horizontalpodautoscaler.autoscaling/mynginx created
# Check the status
[root@master yaml]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
mynginx Deployment/mynginx <unknown>/50Mi, <unknown>/50% 2 10 0 89s
# Or simply run a container directly
[root@master yaml]# kubectl run hpa-nginx --requests=cpu=10m --image=registry.cn-beijing.aliyuncs.com/dotbalo/nginx --port=80
# Expose port 80
[root@master yaml]# kubectl expose deployment hpa-nginx --port=80
# Set the CPU utilization target and the minimum/maximum number of Pods
[root@master yaml]# kubectl autoscale deployment hpa-nginx --cpu-percent=10 --min=1 --max=10
Notes:
- A metrics-server (or another custom metrics server) must be installed
- The requests parameter must be configured on the target Pods
- Objects that cannot be scaled, such as a DaemonSet, cannot be targeted
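To actually watch the HPA scale out, you can generate CPU load against the exposed Service; a minimal sketch (the load-generator name is hypothetical, and the hpa-nginx Service comes from the expose command above):
[root@master yaml]# kubectl run load-generator --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://hpa-nginx; done"
# Watch REPLICAS climb toward the maximum, then fall back after the load Pod is deleted
[root@master yaml]# kubectl get hpa -w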