Table of Contents
- StatefulSet scaling
- StatefulSet update strategies
- StatefulSet canary releases
- StatefulSet cascading and non-cascading deletion
- Daemon service: DaemonSet
- Using a DaemonSet
- DaemonSet updates and rollbacks
- Label & Selector
- What is HPA?
- HPA autoscaling in practice
StatefulSet Scaling
View the nginx replicas:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 1 (7h1m ago) 22h
web-1 1/1 Running 1 (7h1m ago) 22h
web-2 1/1 Running 1 (7h1m ago) 22h
StatefulSet replicas start in ordinal order (0, 1, 2): web-1 is only started after web-0 is fully up, and web-2 only after web-1 is fully up.
Deletion happens in the reverse order, starting from the highest ordinal (2, 1, 0). If web-0 goes down while web-2 is being deleted, web-1 will not be deleted; the controller waits until web-0 is back in the Ready state before deleting web-1.
Open another window to watch the StatefulSet:
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 1 (7h14m ago) 22h
web-1 1/1 Running 1 (7h14m ago) 22h
web-2 1/1 Running 1 (7h14m ago) 22h
Scale up to 5 replicas:
[root@k8s-master01 ~]# kubectl scale --replicas=5 sts web
statefulset.apps/web scaled
Watch output (the Pods start in order):
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-3 0/1 Pending 0 0s
web-3 0/1 Pending 0 0s
web-3 0/1 ContainerCreating 0 0s
web-3 1/1 Running 0 1s
web-4 0/1 Pending 0 0s
web-4 0/1 Pending 0 0s
web-4 0/1 ContainerCreating 0 0s
web-4 1/1 Running 0 1s
Scale down to 2 replicas:
[root@k8s-master01 ~]# kubectl scale --replicas=2 sts web
statefulset.apps/web scaled
Watch output (deletion happens in the reverse of the startup order):
web-4 1/1 Terminating 0 14m
web-4 0/1 Terminating 0 14m
web-4 0/1 Terminating 0 14m
web-4 0/1 Terminating 0 14m
web-3 1/1 Terminating 0 14m
web-3 0/1 Terminating 0 14m
web-3 0/1 Terminating 0 14m
web-3 0/1 Terminating 0 14m
web-2 1/1 Terminating 1 (7h29m ago) 22h
web-2 0/1 Terminating 1 (7h29m ago) 22h
web-2 0/1 Terminating 1 (7h29m ago) 22h
web-2 0/1 Terminating 1 (7h29m ago) 22h
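The ordinal rules described above can be summarized in a small sketch (illustrative only, not Kubernetes controller code): scale-out creates Pods in ascending ordinal order, scale-in deletes them in descending order.

```python
def scale_plan(name, current, desired):
    """Return the ordered list of pod operations for a StatefulSet scale."""
    if desired > current:
        # scale out: create ascending (web-3, then web-4, ...)
        return [("create", f"{name}-{i}") for i in range(current, desired)]
    # scale in: delete descending (web-4, then web-3, ...)
    return [("delete", f"{name}-{i}") for i in range(current - 1, desired - 1, -1)]

print(scale_plan("web", 3, 5))  # scale out 3 -> 5: create web-3, web-4
print(scale_plan("web", 5, 2))  # scale in 5 -> 2: delete web-4, web-3, web-2
```

This mirrors the watch output above: web-3 and web-4 appear in order on scale-out, and web-4, web-3, web-2 terminate in reverse order on scale-in.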
During a rolling update a StatefulSet deletes the old replica first and then creates the new one, so with only one replica the service becomes unavailable during the update. Choose between StatefulSet and Deployment based on your actual needs; if you must have stable hostnames or Pod names, use a StatefulSet.
Check the hostname:
[root@k8s-master01 ~]# kubectl exec -ti web-0 -- sh
# hostname
web-0
# exit
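The manifest behind the `web` StatefulSet (referenced later as nginx-sts.yaml) is not shown in this section. A minimal manifest consistent with the output above might look like the following; the headless Service name, labels, ports, and image are inferred from the kubectl output and are assumptions, not the author's exact file.

```yaml
# Assumed minimal nginx-sts.yaml; names inferred from the kubectl output above.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None        # headless Service, required for stable Pod DNS names
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx     # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
```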
StatefulSet Update Strategies
- RollingUpdate
- OnDelete
Like a Deployment, a StatefulSet supports several update strategies.
RollingUpdate
Check the current update strategy:
[root@k8s-master01 ~]# kubectl get sts -o yaml
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate # default: rolling update, from the highest ordinal down
Scale up to 3 replicas:
[root@k8s-master01 ~]# kubectl scale --replicas=3 sts web
statefulset.apps/web scaled
View the Pods:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 53m
web-1 1/1 Running 1 (8h ago) 23h
web-2 1/1 Running 0 15s
The rolling update order is web-2, web-1, web-0, from the highest ordinal down. If web-0 goes down during the update, the update pauses until web-0 is Ready again, and then continues from the highest ordinal down.
Open another window to watch the StatefulSet:
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 13s
web-1 1/1 Running 0 23s
web-2 1/1 Running 0 33s
Change the image to trigger an update:
[root@k8s-master01 ~]# kubectl edit sts web
# in the editor, type /image and press Enter, then change the image line:
- image: nginx:1.15.3
Watch the update progress:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 58m
web-1 0/1 Terminating 1 (8h ago) 23h
web-2 1/1 Running 0 4s
Watch output:
web-2 1/1 Terminating 0 101s
web-2 0/1 Terminating 0 101s
web-2 0/1 Terminating 0 110s
web-2 0/1 Terminating 0 110s
web-2 0/1 Pending 0 0s
web-2 0/1 Pending 0 0s
web-2 0/1 ContainerCreating 0 0s
web-2 1/1 Running 0 2s
web-1 1/1 Terminating 0 102s
web-1 0/1 Terminating 0 103s
web-1 0/1 Terminating 0 110s
web-1 0/1 Terminating 0 110s
web-1 0/1 Pending 0 0s
web-1 0/1 Pending 0 0s
web-1 0/1 ContainerCreating 0 0s
web-1 1/1 Running 0 1s
web-0 1/1 Terminating 0 101s
web-0 0/1 Terminating 0 102s
web-0 0/1 Terminating 0 110s
web-0 0/1 Terminating 0 110s
web-0 0/1 Pending 0 0s
web-0 0/1 Pending 0 0s
web-0 0/1 ContainerCreating 0 0s
web-0 1/1 Running 0 1s
OnDelete
Change the update strategy to OnDelete:
[root@k8s-master01 ~]# kubectl edit sts web
# change the following
  updateStrategy:
    type: OnDelete
Change the image:
[root@k8s-master01 ~]# kubectl edit sts web
# in the editor, type /image and press Enter, then change the image line:
- image: nginx:1.15.2
View the Pods; nothing has been updated:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m26s
web-1 1/1 Running 0 3m36s
web-2 1/1 Running 0 3m49s
Delete a Pod manually to trigger the update:
[root@k8s-master01 ~]# kubectl delete po web-2
pod "web-2" deleted
View the Pods:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 5m6s
web-1 1/1 Running 0 5m16s
web-2 1/1 Running 0 9s
Check the web-2 image; the update succeeded:
[root@k8s-master01 ~]# kubectl get po web-2 -oyaml | grep image
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
Check the web-1 image; it has not been updated, confirming that with OnDelete the image is only updated when the Pod is deleted:
[root@k8s-master01 ~]# kubectl get po web-1 -oyaml | grep image
- image: nginx:1.15.3
imagePullPolicy: IfNotPresent
image: nginx:1.15.3
imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
Delete the other two Pods:
[root@k8s-master01 ~]# kubectl delete po web-0 web-1
pod "web-0" deleted
pod "web-1" deleted
Watch output; the Pods are recreated in the order they were deleted:
web-0 0/1 Pending 0 0s
web-0 0/1 ContainerCreating 0 0s
web-0 1/1 Running 0 1s
web-1 0/1 Pending 0 0s
web-1 0/1 Pending 0 0s
web-1 0/1 ContainerCreating 0 0s
web-1 1/1 Running 0 1s
Check the images of all Pods; all three have now been updated:
[root@k8s-master01 ~]# kubectl get po -oyaml | grep image
imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
StatefulSet Canary Releases
Edit the configuration:
[root@k8s-master01 ~]# kubectl edit sts web
# change the following
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2 # ordinals below 2 will not be updated
Open another window to watch:
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 44h
web-1 1/1 Running 0 44h
web-2 1/1 Running 0 44h
Change the image (nginx:1.15.2 -> nginx:1.15.3):
[root@k8s-master01 ~]# kubectl edit sts web
# change the following
    spec:
      containers:
      - image: nginx:1.15.3
Watch output; only Pods with ordinal >= 2 are updated:
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 44h
web-1 1/1 Running 0 44h
web-2 1/1 Running 0 44h
web-2 1/1 Terminating 0 44h
web-2 0/1 Terminating 0 44h
web-2 0/1 Terminating 0 44h
web-2 0/1 Terminating 0 44h
web-2 0/1 Pending 0 0s
web-2 0/1 Pending 0 0s
web-2 0/1 ContainerCreating 0 0s
web-2 1/1 Running 0 3s
Check the images; web-2 is on nginx:1.15.3 while the other two are still on nginx:1.15.2:
[root@k8s-master01 ~]# kubectl get po -oyaml | grep image
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
image: nginx:1.15.2
imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- image: nginx:1.15.3
imagePullPolicy: IfNotPresent
image: nginx:1.15.3
imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
This mechanism can be used for canary releases: update one or two instances first, confirm they are healthy, then roll out to all instances. This is the StatefulSet's partitioned (staged) update, which effectively provides a canary release mechanism. Other approaches, such as a service mesh, can achieve the same goal.
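The partition semantics above can be sketched in a few lines (illustrative only, not controller code): with partition p, only Pods with ordinal >= p are replaced, from the highest ordinal down, and lowering p step by step widens the canary.

```python
def pods_to_update(replicas, partition):
    """Ordinals replaced by a partitioned rolling update, highest first.

    With partition=p, only pods with ordinal >= p are updated; lowering
    the partition gradually rolls the change out to the rest.
    """
    return [i for i in range(replicas - 1, partition - 1, -1)]

# partition=2 on a 3-replica StatefulSet: only web-2 is updated
print(pods_to_update(3, 2))   # [2]
# lowering the partition to 0 updates everything: web-2, web-1, web-0
print(pods_to_update(3, 0))   # [2, 1, 0]
```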
StatefulSet Cascading and Non-Cascading Deletion
- Cascading deletion: deleting the sts also deletes its Pods
- Non-cascading deletion: deleting the sts leaves its Pods running
Get the sts:
[root@k8s-master01 ~]# kubectl get sts
NAME READY AGE
web 3/3 2d20h
Cascading deletion:
[root@k8s-master01 ~]# kubectl delete sts web
statefulset.apps "web" deleted
View the Pods:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 0/1 Terminating 0 45h
web-1 0/1 Terminating 0 45h
web-2 0/1 Terminating 0 11m
Recreate the StatefulSet from the manifest:
[root@k8s-master01 ~]# kubectl create -f nginx-sts.yaml
statefulset.apps/web created
Error from server (AlreadyExists): error when creating "nginx-sts.yaml": services "nginx" already exists
View the Pods:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 7s
web-1 1/1 Running 0 5s
Non-cascading deletion:
[root@k8s-master01 ~]# kubectl delete sts web --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
statefulset.apps "web" deleted
View the sts; it has been deleted:
[root@k8s-master01 ~]# kubectl get sts
No resources found in default namespace.
View the Pods; they still exist, just no longer managed by an sts. If they are deleted now, they will not be recreated:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m37s
web-1 1/1 Running 0 3m35s
Delete web-1 and web-0:
[root@k8s-master01 ~]# kubectl delete po web-1 web-0
pod "web-1" deleted
pod "web-0" deleted
View the Pods; without an sts managing them, the deleted Pods are not recreated:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
Daemon Service: DaemonSet
DaemonSet (abbreviated ds): a daemon set that deploys one Pod on every node, or on every node matching a selector.
Typical DaemonSet use cases:
- Running a cluster storage daemon, such as ceph or glusterd
- The node CNI network plugin, e.g. calico
- Node log collection: fluentd or filebeat
- Node monitoring: node exporter
- Service exposure: deploying an ingress nginx
Using a DaemonSet
Create a DaemonSet:
[root@k8s-master01 ~]# cp nginx-deploy.yaml nginx-ds.yaml
[root@k8s-master01 ~]# vim nginx-ds.yaml
# change the content as follows
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Create the ds; since no nodeSelector is configured, it starts one Pod on every node:
[root@k8s-master01 ~]# kubectl create -f nginx-ds.yaml
daemonset.apps/nginx created
View the Pods:
[root@k8s-master01 ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-2xtms 1/1 Running 0 90s 172.25.244.196 k8s-master01 <none> <none>
nginx-66bbc9fdc5-4xqcw 1/1 Running 0 5m43s 172.25.244.195 k8s-master01 <none> <none>
nginx-ct4xh 1/1 Running 0 90s 172.17.125.2 k8s-node01 <none> <none>
nginx-hx9ws 1/1 Running 0 90s 172.27.14.195 k8s-node02 <none> <none>
nginx-mjph9 1/1 Running 0 90s 172.18.195.2 k8s-master03 <none> <none>
nginx-p64rf 1/1 Running 0 90s 172.25.92.67 k8s-master02 <none> <none>
Label the nodes that should run the Pod:
[root@k8s-master01 ~]# kubectl label node k8s-node01 k8s-node02 ds=true
node/k8s-node01 labeled
node/k8s-node02 labeled
View the node labels:
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master01 Ready <none> 3d v1.20.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master02 Ready <none> 3d v1.20.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master03 Ready <none> 3d v1.20.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node01 Ready <none> 3d v1.20.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node02 Ready <none> 3d v1.20.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,node.kubernetes.io/node=
Edit nginx-ds.yaml:
[root@k8s-master01 ~]# vim nginx-ds.yaml
# change the following (under template.spec)
    spec:
      nodeSelector:
        ds: "true"
Apply the change:
[root@k8s-master01 ~]# kubectl replace -f nginx-ds.yaml
View the Pods; the Pods on nodes without the matching label have been removed:
[root@k8s-master01 ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-66bbc9fdc5-4xqcw 1/1 Running 0 15m 172.25.244.195 k8s-master01 <none> <none>
nginx-gd6sp 1/1 Running 0 44s 172.27.14.196 k8s-node02 <none> <none>
nginx-pl4dz 1/1 Running 0 47s 172.17.125.3 k8s-node01 <none> <none>
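The scheduling effect of nodeSelector is plain key/value matching: a node runs the DaemonSet Pod only if its labels contain every selector entry. A small sketch (illustrative only) using the node labels from the output above:

```python
def matches(node_labels, node_selector):
    """True if the node's labels contain every nodeSelector entry."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# node labels from the `kubectl get node --show-labels` output (abridged)
nodes = {
    "k8s-master01": {},
    "k8s-node01": {"ds": "true"},
    "k8s-node02": {"ds": "true"},
}
selector = {"ds": "true"}
scheduled = [name for name, labels in nodes.items() if matches(labels, selector)]
print(scheduled)  # ['k8s-node01', 'k8s-node02']
```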
DaemonSet Updates and Rollbacks
StatefulSet and DaemonSet updates and rollbacks work the same way as for a Deployment.
The recommended update strategy is OnDelete:
  updateStrategy:
    type: OnDelete
Because a DaemonSet may run on many nodes of the cluster, you can first test the new version on a few nodes: with OnDelete, deleting a Pod triggers the update only on that node and does not affect the others.
View the update history:
kubectl rollout history ds nginx
Label & Selector
Label: classifies and groups the various resources in k8s by attaching tags with specific attributes
Selector: a filter syntax for querying the resources that carry a given label
When Kubernetes "groups" any API objects in the system, such as Pods and nodes, it attaches Labels (key=value pairs) to them so that the corresponding objects can be selected precisely. A Selector (label selector) is the query mechanism for matching those objects.
For example, the commonly used label tier can distinguish a container's role, such as frontend or backend, and a release_track label can distinguish its environment, such as canary or production.
Label
Define a Label:
[root@k8s-master01 ~]# kubectl label node k8s-node02 region=subnet7
node/k8s-node02 labeled
Filter with a Selector:
[root@k8s-master01 ~]# kubectl get no -l region=subnet7
NAME STATUS ROLES AGE VERSION
k8s-node02 Ready <none> 3d17h v1.17.3
Specify in a Deployment or another controller that its Pods should be deployed to that node:
      containers:
      ......
      dnsPolicy: ClusterFirst
      nodeSelector:
        region: subnet7
      restartPolicy: Always
      ......
Label a Service:
[root@k8s-master01 ~]# kubectl label svc canary-v1 -n canary-production env=canary version=v1
service/canary-v1 labeled
View the Labels:
[root@k8s-master01 ~]# kubectl get svc -n canary-production --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
canary-v1 ClusterIP 10.110.253.62 <none> 8080/TCP 24h env=canary,version=v1
View all svc with version v1:
[root@k8s-master01 canary]# kubectl get svc --all-namespaces -l version=v1
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
canary-production canary-v1 ClusterIP 10.110.253.62 <none> 8080/TCP 25h
Selector
A Selector is used for resource matching: only resources that satisfy the conditions are selected or used, and this is how the various resources in the cluster are assigned.
Suppose we want to match with a Selector; the existing Labels are as follows:
[root@k8s-master01 ~]# kubectl get svc --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
details ClusterIP 10.99.9.178 <none> 9080/TCP 45h app=details
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h component=apiserver,provider=kubernetes
nginx ClusterIP 10.106.194.137 <none> 80/TCP 2d21h app=productpage,version=v1
nginx-v2 ClusterIP 10.108.176.132 <none> 80/TCP 2d20h <none>
productpage ClusterIP 10.105.229.52 <none> 9080/TCP 45h app=productpage,tier=frontend
ratings ClusterIP 10.96.104.95 <none> 9080/TCP 45h app=ratings
reviews ClusterIP 10.102.188.143 <none> 9080/TCP 45h app=reviews
Select svc whose app is details or productpage:
[root@k8s-master01 ~]# kubectl get svc -l 'app in (details, productpage)' --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
details ClusterIP 10.99.9.178 <none> 9080/TCP 45h app=details
nginx ClusterIP 10.106.194.137 <none> 80/TCP 2d21h app=productpage,version=v1
productpage ClusterIP 10.105.229.52 <none> 9080/TCP 45h app=productpage,tier=frontend
Select svc whose app is details or productpage, excluding version=v1:
[root@k8s-master01 ~]# kubectl get svc -l version!=v1,'app in (details, productpage)' --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
details ClusterIP 10.99.9.178 <none> 9080/TCP 45h app=details
productpage ClusterIP 10.105.229.52 <none> 9080/TCP 45h app=productpage,tier=frontend
Select svc that have a label with key app:
[root@k8s-master01 ~]# kubectl get svc -l app --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
details ClusterIP 10.99.9.178 <none> 9080/TCP 45h app=details
nginx ClusterIP 10.106.194.137 <none> 80/TCP 2d21h app=productpage,version=v1
productpage ClusterIP 10.105.229.52 <none> 9080/TCP 45h app=productpage,tier=frontend
ratings ClusterIP 10.96.104.95 <none> 9080/TCP 45h app=ratings
reviews ClusterIP 10.102.188.143 <none> 9080/TCP 45h app=reviews
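The equality-based and set-based selectors used above can be modeled as a list of requirements that a resource's labels must all satisfy. A simplified sketch (not kubectl's actual parser) using the svc labels listed above:

```python
# svc labels from the `kubectl get svc --show-labels` output above
svcs = {
    "details":     {"app": "details"},
    "nginx":       {"app": "productpage", "version": "v1"},
    "productpage": {"app": "productpage", "tier": "frontend"},
    "ratings":     {"app": "ratings"},
    "reviews":     {"app": "reviews"},
}

def select(resources, requirements):
    """requirements: list of (key, op, values); op is 'in', 'notin', or 'exists'."""
    def ok(labels):
        for key, op, values in requirements:
            if op == "in" and labels.get(key) not in values:
                return False
            if op == "notin" and labels.get(key) in values:
                return False
            if op == "exists" and key not in labels:
                return False
        return True
    return sorted(name for name, labels in resources.items() if ok(labels))

# -l 'app in (details, productpage)'
print(select(svcs, [("app", "in", {"details", "productpage"})]))
# -l version!=v1,'app in (details, productpage)'
print(select(svcs, [("version", "notin", {"v1"}),
                    ("app", "in", {"details", "productpage"})]))
# -l app  (key exists)
print(select(svcs, [("app", "exists", None)]))
```

Note that, as in the kubectl output, `version!=v1` also matches resources that have no version label at all.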
In practice Labels change frequently; use the --overwrite flag to modify a label.
Change a label, e.g. version=v1 to version=v2:
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
canary-v1 ClusterIP 10.110.253.62 <none> 8080/TCP 26h env=canary,version=v1
[root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version=v2 --overwrite
service/canary-v1 labeled
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
canary-v1 ClusterIP 10.110.253.62 <none> 8080/TCP 26h env=canary,version=v2
Delete a label, e.g. version:
[root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version-
service/canary-v1 labeled
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
canary-v1 ClusterIP 10.110.253.62 <none> 8080/TCP 26h env=canary
What is HPA?
HPA: Horizontal Pod Autoscaler.
k8s does not recommend VPA (vertical scaling): a cluster has many nodes, so it is better to spread traffic across different nodes than to concentrate it on a single one.
- HPA v1 is the stable horizontal autoscaler and supports only the CPU metric
- v2 is in beta: v2beta1 supports CPU, memory, and custom metrics
- v2beta2 supports CPU, memory, custom metrics (Custom), and external metrics (ExternalMetrics)
HPA Autoscaling in Practice
- metrics-server (or another custom metrics server) must be installed
- The requests parameter must be configured
- Objects that cannot be scaled, such as a DaemonSet, cannot be autoscaled
Dry-run to export a yaml file for further editing:
kubectl create deployment hpa-nginx --image=registry.cn-beijing.aliyuncs.com/dotbalo/nginx --dry-run=client -oyaml > hpa-nginx.yaml
Edit hpa-nginx.yaml and add the resources parameters under containers:
      containers:
      - image: registry.cn-beijing.aliyuncs.com/dotbalo/nginx
        name: nginx
        resources:
          requests:
            cpu: 10m
Create it:
kubectl create -f hpa-nginx.yaml
Expose a service:
kubectl expose deployment hpa-nginx --port=80
Configure the autoscaler:
kubectl autoscale deployment hpa-nginx --cpu-percent=10 --min=1 --max=10
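The autoscaler's decision follows the standard HPA formula: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the --min/--max bounds. A simplified sketch (the real controller also applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current, current_metric, target_metric, lo=1, hi=10):
    """Core HPA scaling formula, clamped to the --min/--max bounds."""
    want = math.ceil(current * current_metric / target_metric)
    return max(lo, min(hi, want))

# target is 10% CPU (of the 10m request); load pushes actual usage to 50%
print(desired_replicas(1, 50, 10))   # 5 -> scale out
# load stops; usage falls to 2%, replicas shrink back toward --min
print(desired_replicas(5, 2, 10))    # 1 -> scale in
```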
Run a loop to drive CPU usage up; after stopping it, CPU usage drops:
while true; do wget -q -O- http://192.168.42.44 > /dev/null; done
Course link
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
You are welcome to repost, use, and republish it, but you must keep the author attribution 鄭子銘 (including the link: http://www.cnblogs.com/MingsonZheng/), must not use it for commercial purposes, and must release works based on this article under the same license.