In the previous post we looked at using Prometheus to monitor node and pod resources on Kubernetes; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/14287942.html. Today we'll look at how to use the HPA resource on Kubernetes.
HPA stands for Horizontal Pod Autoscaler, which is exactly what it does: it scales pods horizontally. In short, an HPA watches a given metric for the pods managed by a specified controller; once that metric crosses the threshold we define, the HPA is triggered and grows or shrinks the number of pod replicas according to the metric's value. Scaling is bounded in both directions: once the pod count reaches the upper limit, no further scale-up happens even if the metric still exceeds the threshold; the lower limit defaults to 1 when not specified, so even with no traffic at all, at least that many pods keep running. Note that an HPA can only target scalable pod controllers; DaemonSet-type controllers are not supported.
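For context, the scaling decision the HPA controller makes is, per the upstream documentation, roughly the following calculation (a sketch that ignores tolerances and stabilization windows):

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )

# e.g. 2 replicas running at 90% CPU utilization with a 50% target:
# ceil( 2 * 90 / 50 ) = 4 replicas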
Kubernetes offers two HPA API versions, v1 and v2. v1 can only scale pods up and down based on the CPU metric; v2 can also scale based on custom metrics. HPA is a standard Kubernetes resource, so we can create it either from the command line or from a manifest.
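Which autoscaling API versions your cluster actually serves can be checked directly against the apiserver; the exact output depends on the Kubernetes release, but it will look something like this:

kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2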
Syntax for creating an HPA resource from the command line
Usage: kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]
Note: if you don't give the HPA a name, it takes the same name as the target pod controller. --min sets the minimum number of pod replicas (default 1 if not specified), --max sets the maximum, and --cpu-percent sets the target CPU utilization percentage, which is measured against the pods' requested CPU (the describe output later confirms this: "as a percentage of request").
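For example, to give the HPA a name of its own rather than inheriting the controller's name, something like the following should work on recent kubectl releases (myapp-hpa is just an illustrative name):

kubectl autoscale deployment myapp --name=myapp-hpa --min=2 --max=10 --cpu-percent=50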
Example: create a v1 HPA with a command
Create pods with a Deployment controller
[root@master01 ~]# cat myapp.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 50m
            memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp
  namespace: default
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: NodePort
[root@master01 ~]#
Apply the manifest
[root@master01 ~]# kubectl apply -f myapp.yaml
deployment.apps/myapp created
service/myapp-svc created
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-657qr   1/1     Running   0          6s
myapp-779867bcfc-txvj8   1/1     Running   0          6s
[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        8d
myapp-svc    NodePort    10.111.14.219   <none>        80:31154/TCP   13s
[root@master01 ~]#
Check the pods' resource usage
[root@master01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-779867bcfc-657qr   0m           3Mi
myapp-779867bcfc-txvj8   0m           3Mi
[root@master01 ~]#
Note: nothing is hitting the pods yet, so their CPU metric is 0.
Create an HPA with a command that watches the myapp Deployment and triggers once the pods' CPU utilization reaches 50%
[root@master01 ~]# kubectl autoscale deploy myapp --min=2 --max=10 --cpu-percent=50
horizontalpodautoscaler.autoscaling/myapp autoscaled
[root@master01 ~]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   <unknown>/50%   2         10        0          10s
[root@master01 ~]# kubectl describe hpa/myapp
Name:                                                  myapp
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 18 Jan 2021 15:26:49 +0800
Reference:                                             Deployment/myapp
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (0) / 50%
Min replicas:                                          2
Max replicas:                                          10
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange   the desired count is within the acceptable range
Events:           <none>
[root@master01 ~]#
Verification: use the ab benchmarking tool to put load on the pods. Once the pods' CPU utilization goes above 50%, do they get scaled out?
Install the ab tool
yum install httpd-tools -y
Run the load from an external host against the pods' Service (a sample command is sketched below)
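The exact ab invocation isn't captured above; something along these lines, run from the external host against any node IP and the Service's NodePort (31154 in this example; <node-ip> is a placeholder), is enough to drive CPU usage up:

ab -c 100 -n 500000 http://<node-ip>:31154/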
Check the pods' resource usage
[root@master01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-779867bcfc-657qr   51m          4Mi
myapp-779867bcfc-txvj8   34m          4Mi
[root@master01 ~]#
Note: each pod's CPU usage is now above 50% of its requested CPU. Keep in mind the HPA scales out with some delay; it does not react instantly.
Check the HPA details
Note: the HPA details show the pods have been scaled out to 7.
Has the pod count actually grown to 7?
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-657qr   1/1     Running   0          16m
myapp-779867bcfc-7c4dt   1/1     Running   0          3m27s
myapp-779867bcfc-b2jmn   1/1     Running   0          3m27s
myapp-779867bcfc-fmw7v   1/1     Running   0          2m25s
myapp-779867bcfc-hxhj2   1/1     Running   0          2m25s
myapp-779867bcfc-txvj8   1/1     Running   0          16m
myapp-779867bcfc-xvh58   1/1     Running   0          2m25s
[root@master01 ~]#
Note: there are indeed 7 pods now.
Is the pods' CPU usage still above the 50% threshold?
[root@master01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-779867bcfc-657qr   49m          4Mi
myapp-779867bcfc-7c4dt   25m          4Mi
myapp-779867bcfc-b2jmn   36m          4Mi
myapp-779867bcfc-fmw7v   42m          4Mi
myapp-779867bcfc-hxhj2   46m          3Mi
myapp-779867bcfc-txvj8   21m          4Mi
myapp-779867bcfc-xvh58   49m          4Mi
[root@master01 ~]#
Note: the pods' CPU utilization is still above the 50% threshold, which means the current number of pods cannot keep up with the requests, so the HPA will scale out again.
Check the HPA details again to see whether another scale-out has happened
Note: the pods have been scaled out to 10 while CPU utilization sits at 94%. The pod count has now hit the upper limit, so even though the metric still exceeds the threshold, no further scale-out happens.
Is the pod count really 10?
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-57zw7   1/1     Running   0          5m39s
myapp-779867bcfc-657qr   1/1     Running   0          23m
myapp-779867bcfc-7c4dt   1/1     Running   0          10m
myapp-779867bcfc-b2jmn   1/1     Running   0          10m
myapp-779867bcfc-dvq6k   1/1     Running   0          5m39s
myapp-779867bcfc-fmw7v   1/1     Running   0          9m45s
myapp-779867bcfc-hxhj2   1/1     Running   0          9m45s
myapp-779867bcfc-n8lmf   1/1     Running   0          5m39s
myapp-779867bcfc-txvj8   1/1     Running   0          23m
myapp-779867bcfc-xvh58   1/1     Running   0          9m45s
[root@master01 ~]#
Stop the load; does the pod count shrink back to the minimum of 2?
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-57zw7   1/1     Running   0          8m8s
myapp-779867bcfc-657qr   1/1     Running   0          26m
myapp-779867bcfc-7c4dt   1/1     Running   0          13m
myapp-779867bcfc-b2jmn   1/1     Running   0          13m
myapp-779867bcfc-dvq6k   1/1     Running   0          8m8s
myapp-779867bcfc-fmw7v   1/1     Running   0          12m
myapp-779867bcfc-hxhj2   1/1     Running   0          12m
myapp-779867bcfc-n8lmf   1/1     Running   0          8m8s
myapp-779867bcfc-txvj8   1/1     Running   0          26m
myapp-779867bcfc-xvh58   1/1     Running   0          12m
[root@master01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-779867bcfc-57zw7   0m           4Mi
myapp-779867bcfc-657qr   0m           4Mi
myapp-779867bcfc-7c4dt   7m           4Mi
myapp-779867bcfc-b2jmn   0m           4Mi
myapp-779867bcfc-dvq6k   0m           4Mi
myapp-779867bcfc-fmw7v   0m           4Mi
myapp-779867bcfc-hxhj2   3m           3Mi
myapp-779867bcfc-n8lmf   10m          4Mi
myapp-779867bcfc-txvj8   0m           4Mi
myapp-779867bcfc-xvh58   0m           4Mi
[root@master01 ~]#
Note: scale-in doesn't happen immediately either. The output above shows that once the load stops, the pods' CPU usage drops right away.
Check the pod count again a little later
[root@master01 ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-779867bcfc-57zw7   0m           4Mi
myapp-779867bcfc-657qr   0m           4Mi
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-57zw7   1/1     Running   0          13m
myapp-779867bcfc-657qr   1/1     Running   0          31m
[root@master01 ~]#
Note: the pods have been scaled back down to the minimum replica count.
Check the HPA details
Note: with the pods' CPU utilization below 50%, the HPA gradually scales the pods back down over time (scale-in is dampened by a stabilization window, roughly five minutes by default).
Example: define an HPA with a manifest
[root@master01 ~]# cat hpa-demo.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
[root@master01 ~]#
Note: the above is an HPA v1 manifest. targetCPUUtilizationPercentage sets the CPU utilization threshold; 50 means 50%, i.e. the HPA triggers once the pods' average CPU usage reaches 50% of their requested CPU.
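As a quick sanity check against the Deployment defined earlier (requests.cpu: 50m per pod):

trigger point = requested CPU * target utilization
              = 50m * 50%
              = 25m average CPU per pod before the HPA starts scaling out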
Apply the manifest
[root@master01 ~]# kubectl apply -f hpa-demo.yaml
horizontalpodautoscaler.autoscaling/hpa-demo created
[root@master01 ~]# kubectl get hpa
NAME       REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hpa-demo   Deployment/myapp   <unknown>/50%   2         10        0          8s
myapp      Deployment/myapp   0%/50%          2         10        2          35m
[root@master01 ~]# kubectl describe hpa/hpa-demo
Name:                                                  hpa-demo
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 18 Jan 2021 16:02:25 +0800
Reference:                                             Deployment/myapp
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (0) / 50%
Min replicas:                                          2
Max replicas:                                          10
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange   the desired count is within the acceptable range
Events:           <none>
[root@master01 ~]#
Note: the HPA created from the command and the one created from the manifest come out exactly the same. That covers HPA v1 and its usage. The kubectl autoscale command can only create v1 HPAs; for v2 you must use a manifest and explicitly set the corresponding API group/version.
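For comparison, here is a sketch of the same CPU policy expressed against the autoscaling/v2beta2 API (the name hpa-demo-v2 is just illustrative); this is the manifest form that v2 requires:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50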
Defining an HPA on custom metrics
Deploy a custom metrics server
Download the deployment manifests
[root@master01 ~]# mkdir custom-metrics-server
[root@master01 ~]# cd custom-metrics-server
[root@master01 custom-metrics-server]# git clone https://github.com/stefanprodan/k8s-prom-hpa
Cloning into 'k8s-prom-hpa'...
remote: Enumerating objects: 223, done.
remote: Total 223 (delta 0), reused 0 (delta 0), pack-reused 223
Receiving objects: 100% (223/223), 102.23 KiB | 14.00 KiB/s, done.
Resolving deltas: 100% (117/117), done.
[root@master01 custom-metrics-server]# ls
k8s-prom-hpa
[root@master01 custom-metrics-server]#
Look at the custom-metrics-server deployment manifests
[root@master01 custom-metrics-server]# cd k8s-prom-hpa/
[root@master01 k8s-prom-hpa]# ls
custom-metrics-api  diagrams  ingress  LICENSE  Makefile  metrics-server  namespaces.yaml  podinfo  prometheus  README.md
[root@master01 k8s-prom-hpa]# cd custom-metrics-api/
[root@master01 custom-metrics-api]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml   custom-metrics-apiservice.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml              custom-metrics-cluster-role.yaml
custom-metrics-apiserver-deployment.yaml                            custom-metrics-config-map.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml  custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                       hpa-custom-metrics-cluster-role-binding.yaml
custom-metrics-apiserver-service.yaml
[root@master01 custom-metrics-api]# cat custom-metrics-apiserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
      name: custom-metrics-apiserver
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-apiserver
        image: quay.io/coreos/k8s-prometheus-adapter-amd64:v0.4.1
        args:
        - /adapter
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/serving.crt
        - --tls-private-key-file=/var/run/serving-cert/serving.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus.monitoring.svc:9090/
        - --metrics-relist-interval=30s
        - --v=10
        - --config=/etc/adapter/config.yaml
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
        - mountPath: /etc/adapter/
          name: config
          readOnly: true
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs
      - name: config
        configMap:
          name: adapter-config
[root@master01 custom-metrics-api]#
Note: this manifest explicitly deploys the custom metrics apiserver into the monitoring namespace, and at startup the server mounts a TLS certificate from a Secret. So before applying it we must first create the namespace and the Secret, and before creating the Secret we need a certificate and private key. Also note that custom-metrics-server connects to the Prometheus server and registers the custom metric data with the native apiserver through an APIService, so other Kubernetes components can consume it; make sure --prometheus-url points at your Prometheus.
Create the monitoring namespace
[root@master01 custom-metrics-api]# cd ..
[root@master01 k8s-prom-hpa]# ls
custom-metrics-api  diagrams  ingress  LICENSE  Makefile  metrics-server  namespaces.yaml  podinfo  prometheus  README.md
[root@master01 k8s-prom-hpa]# cat namespaces.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
[root@master01 k8s-prom-hpa]# kubectl apply -f namespaces.yaml
namespace/monitoring created
[root@master01 k8s-prom-hpa]# kubectl get ns
NAME                   STATUS   AGE
default                Active   41d
ingress-nginx          Active   27d
kube-node-lease        Active   41d
kube-public            Active   41d
kube-system            Active   41d
kubernetes-dashboard   Active   16d
mongodb                Active   4d20h
monitoring             Active   4s
[root@master01 k8s-prom-hpa]#
Generate serving.key and serving.csr
[root@master01 k8s-prom-hpa]# cd /etc/kubernetes/pki/
[root@master01 pki]# ls
apiserver.crt              apiserver.key                 ca.crt  etcd                front-proxy-client.crt  sa.pub   tom.key
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.crt  front-proxy-client.key  tom.crt
apiserver-etcd-client.key  apiserver-kubelet-client.key  ca.srl  front-proxy-ca.key  sa.key                  tom.csr
[root@master01 pki]# openssl genrsa -out serving.key 2048
Generating RSA private key, 2048 bit long modulus
.............................................................................................................................................................+++
..............................+++
e is 65537 (0x10001)
[root@master01 pki]# openssl req -new -key ./serving.key -out ./serving.csr -subj "/CN=serving"
[root@master01 pki]# ll
total 80
-rw-r--r-- 1 root root 1277 Dec  8 14:38 apiserver.crt
-rw-r--r-- 1 root root 1135 Dec  8 14:38 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Dec  8 14:38 apiserver-etcd-client.key
-rw------- 1 root root 1679 Dec  8 14:38 apiserver.key
-rw-r--r-- 1 root root 1143 Dec  8 14:38 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Dec  8 14:38 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1066 Dec  8 14:38 ca.crt
-rw------- 1 root root 1675 Dec  8 14:38 ca.key
-rw-r--r-- 1 root root   17 Jan 17 13:03 ca.srl
drwxr-xr-x 2 root root  162 Dec  8 14:38 etcd
-rw-r--r-- 1 root root 1078 Dec  8 14:38 front-proxy-ca.crt
-rw------- 1 root root 1675 Dec  8 14:38 front-proxy-ca.key
-rw-r--r-- 1 root root 1103 Dec  8 14:38 front-proxy-client.crt
-rw------- 1 root root 1679 Dec  8 14:38 front-proxy-client.key
-rw------- 1 root root 1679 Dec  8 14:38 sa.key
-rw------- 1 root root  451 Dec  8 14:38 sa.pub
-rw-r--r-- 1 root root  887 Jan 18 16:54 serving.csr
-rw-r--r-- 1 root root 1679 Jan 18 16:54 serving.key
-rw-r--r-- 1 root root  993 Dec 30 00:29 tom.crt
-rw-r--r-- 1 root root  907 Dec 30 00:27 tom.csr
-rw-r--r-- 1 root root 1675 Dec 30 00:21 tom.key
[root@master01 pki]#
Sign the custom-metrics-server certificate with the Kubernetes CA key and certificate
[root@master01 pki]# openssl x509 -req -in serving.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
[root@master01 pki]# ll serving.crt
-rw-r--r-- 1 root root 977 Jan 18 16:55 serving.crt
[root@master01 pki]#
Create the Secret in the monitoring namespace
[root@master01 pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=./serving.key --from-file=./serving.crt -n monitoring
secret/cm-adapter-serving-certs created
[root@master01 pki]# kubectl get secret -n monitoring
NAME                       TYPE                                  DATA   AGE
cm-adapter-serving-certs   Opaque                                2      10s
default-token-k64tz        kubernetes.io/service-account-token   3      2m27s
[root@master01 pki]# kubectl describe secret/cm-adapter-serving-certs -n monitoring
Name:         cm-adapter-serving-certs
Namespace:    monitoring
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
serving.crt:  977 bytes
serving.key:  1679 bytes
[root@master01 pki]#
Note: the Secret must be created with the generic type, keeping the key names serving.key and serving.crt; the Secret's name must match the name referenced in the custom-metrics deployment manifest (cm-adapter-serving-certs).
Deploy Prometheus
[root@master01 pki]# cd /root/custom-metrics-server/k8s-prom-hpa/prometheus/
[root@master01 prometheus]# ls
prometheus-cfg.yaml  prometheus-dep.yaml  prometheus-rbac.yaml  prometheus-svc.yaml
[root@master01 prometheus]# cat prometheus-dep.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: prom/prometheus:v2.1.0
        imagePullPolicy: Always
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.retention=1h
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            memory: 2Gi
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          items:
          - key: prometheus.yml
            path: prometheus.yml
            mode: 0644
[root@master01 prometheus]#
Change the API group/version in the RBAC manifest to rbac.authorization.k8s.io/v1
[root@master01 prometheus]# cat prometheus-rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
[root@master01 prometheus]#
Apply all the manifests in the prometheus directory
[root@master01 prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
[root@master01 prometheus]# kubectl get pods -n monitoring
NAME                          READY   STATUS    RESTARTS   AGE
prometheus-5c5dc6d6d4-drrht   1/1     Running   0          26s
[root@master01 prometheus]# kubectl get svc -n monitoring
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
prometheus   NodePort   10.99.1.110   <none>        9090:31190/TCP   35s
[root@master01 prometheus]#
Deploy the custom metrics service by applying all the manifests in the custom-metrics-api directory
[root@master01 prometheus]# cd ../custom-metrics-api/
[root@master01 custom-metrics-api]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml   custom-metrics-apiservice.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml              custom-metrics-cluster-role.yaml
custom-metrics-apiserver-deployment.yaml                            custom-metrics-config-map.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml  custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                       hpa-custom-metrics-cluster-role-binding.yaml
custom-metrics-apiserver-service.yaml
[root@master01 custom-metrics-api]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
configmap/adapter-config created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master01 custom-metrics-api]# kubectl get pods -n monitoring
NAME                                        READY   STATUS    RESTARTS   AGE
custom-metrics-apiserver-754dfc87c7-cdhqj   1/1     Running   0          18s
prometheus-5c5dc6d6d4-drrht                 1/1     Running   0          6m9s
[root@master01 custom-metrics-api]# kubectl get svc -n monitoring
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
custom-metrics-apiserver   ClusterIP   10.99.245.190   <none>        443/TCP          31s
prometheus                 NodePort    10.99.1.110     <none>        9090:31190/TCP   6m21s
[root@master01 custom-metrics-api]#
Note: before applying the manifests above, change every rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1, and set the APIService manifest's apiVersion to apiregistration.k8s.io/v1; on Kubernetes releases older than 1.17 no changes are needed.
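One way to make those substitutions in bulk is an in-place sed over the manifests (paths follow the repository layout cloned above; adjust to your environment):

cd /root/custom-metrics-server/k8s-prom-hpa/custom-metrics-api
# bump RBAC manifests from v1beta1 to v1
sed -i 's@rbac.authorization.k8s.io/v1beta1@rbac.authorization.k8s.io/v1@g' *.yaml
# bump the APIService manifest to apiregistration.k8s.io/v1
sed -i 's@apiregistration.k8s.io/v1beta1@apiregistration.k8s.io/v1@g' custom-metrics-apiservice.yaml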
Verification: is the custom.metrics.k8s.io group now registered with the native apiserver?
[root@master01 custom-metrics-api]# kubectl api-versions|grep custom
custom.metrics.k8s.io/v1beta1
[root@master01 custom-metrics-api]#
Verification: query the metrics API group and see whether metric data can be requested
[root@master01 custom-metrics-api]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}
[root@master01 custom-metrics-api]#
Note: if the query returns data, the metrics API side is working. (The request above hits the resource metrics group, metrics.k8s.io; the custom metrics group registered by the adapter can be checked the same way, as shown below.)
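If the adapter is healthy, probing its group directly should return a long APIResourceList of the metrics it has discovered from Prometheus:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .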
Example: create the podinfo pods, which expose an http_requests metric
[root@master01 custom-metrics-api]# cd ..
[root@master01 k8s-prom-hpa]# ls
custom-metrics-api  diagrams  ingress  LICENSE  Makefile  metrics-server  namespaces.yaml  podinfo  prometheus  README.md
[root@master01 k8s-prom-hpa]# cd podinfo/
[root@master01 podinfo]# ls
podinfo-dep.yaml  podinfo-hpa-custom.yaml  podinfo-hpa.yaml  podinfo-ingress.yaml  podinfo-svc.yaml
[root@master01 podinfo]# cat podinfo-dep.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
        - ./podinfo
        - -port=9898
        - -logtostderr=true
        - -v=2
        volumeMounts:
        - name: metadata
          mountPath: /etc/podinfod/metadata
          readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
      - name: metadata
        downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
[root@master01 podinfo]# cat podinfo-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
  - port: 9898
    targetPort: 9898
    nodePort: 31198
    protocol: TCP
  selector:
    app: podinfo
[root@master01 podinfo]#
Apply the manifests
[root@master01 podinfo]# kubectl apply -f podinfo-dep.yaml,./podinfo-svc.yaml
deployment.apps/podinfo created
service/podinfo created
[root@master01 podinfo]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          8d
myapp-svc    NodePort    10.111.14.219   <none>        80:31154/TCP     4h35m
podinfo      NodePort    10.111.10.211   <none>        9898:31198/TCP   17s
[root@master01 podinfo]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
myapp-779867bcfc-57zw7     1/1     Running   0          4h18m
myapp-779867bcfc-657qr     1/1     Running   0          4h36m
podinfo-56874dc7f8-5rb9q   1/1     Running   0          40s
podinfo-56874dc7f8-t6jgn   1/1     Running   0          40s
[root@master01 podinfo]#
Verification: access the podinfo Service and check that the pods respond normally
[root@master01 podinfo]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          8d
myapp-svc    NodePort    10.111.14.219   <none>        80:31154/TCP     4h37m
podinfo      NodePort    10.111.10.211   <none>        9898:31198/TCP   116s
[root@master01 podinfo]# curl 10.111.10.211:9898
runtime:
  arch: amd64
  external_ip: ""
  max_procs: "4"
  num_cpu: "4"
  num_goroutine: "9"
  os: linux
  version: go1.9.2
labels:
  app: podinfo
  pod-template-hash: 56874dc7f8
annotations:
  cni.projectcalico.org/podIP: 10.244.3.133/32
  cni.projectcalico.org/podIPs: 10.244.3.133/32
  kubernetes.io/config.seen: 2021-01-18T19:57:31.325293640+08:00
  kubernetes.io/config.source: api
  prometheus.io/scrape: "true"
environment:
  HOME: /root
  HOSTNAME: podinfo-56874dc7f8-5rb9q
  KUBERNETES_PORT: tcp://10.96.0.1:443
  KUBERNETES_PORT_443_TCP: tcp://10.96.0.1:443
  KUBERNETES_PORT_443_TCP_ADDR: 10.96.0.1
  KUBERNETES_PORT_443_TCP_PORT: "443"
  KUBERNETES_PORT_443_TCP_PROTO: tcp
  KUBERNETES_SERVICE_HOST: 10.96.0.1
  KUBERNETES_SERVICE_PORT: "443"
  KUBERNETES_SERVICE_PORT_HTTPS: "443"
  MYAPP_SVC_PORT: tcp://10.111.14.219:80
  MYAPP_SVC_PORT_80_TCP: tcp://10.111.14.219:80
  MYAPP_SVC_PORT_80_TCP_ADDR: 10.111.14.219
  MYAPP_SVC_PORT_80_TCP_PORT: "80"
  MYAPP_SVC_PORT_80_TCP_PROTO: tcp
  MYAPP_SVC_SERVICE_HOST: 10.111.14.219
  MYAPP_SVC_SERVICE_PORT: "80"
  MYAPP_SVC_SERVICE_PORT_HTTP: "80"
  PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  PODINFO_PORT: tcp://10.111.10.211:9898
  PODINFO_PORT_9898_TCP: tcp://10.111.10.211:9898
  PODINFO_PORT_9898_TCP_ADDR: 10.111.10.211
  PODINFO_PORT_9898_TCP_PORT: "9898"
  PODINFO_PORT_9898_TCP_PROTO: tcp
  PODINFO_SERVICE_HOST: 10.111.10.211
  PODINFO_SERVICE_PORT: "9898"
[root@master01 podinfo]#
Verification: query the apiserver and see whether the http_requests metric exposed by the pods can be read
[root@master01 podinfo]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-56874dc7f8-5rb9q",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2021-01-18T12:01:41Z",
      "value": "911m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-56874dc7f8-t6jgn",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2021-01-18T12:01:41Z",
      "value": "888m"
    }
  ]
}
[root@master01 podinfo]#
Note: kubectl can now read the pods' custom metric data from the apiserver (values such as 911m are milli-quantities, i.e. roughly 0.911 requests per second here, depending on how the adapter's rate query is configured).
Example: define an HPA based on the custom metric
[root@master01 podinfo]# cat podinfo-hpa-custom.yaml
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: 10
[root@master01 podinfo]#
Note: to use custom metrics the HPA must use the autoscaling/v2beta2 API version, and the custom metrics are declared in the metrics field. type describes where the metric comes from; Pods means the metric is exposed by the pods themselves. The manifest above references the pods' own http_requests metric and watches its average value across the pods: if the average rises above 10 the HPA scales out, and once it drops back below 10 the HPA scales in.
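For a Pods-type metric with an AverageValue target, the controller's recommendation works out to roughly the following (a sketch that again ignores tolerances):

desiredReplicas = ceil( sum(http_requests across the pods) / averageValue )

# e.g. 2 pods each reporting ~15 with averageValue: 10
# ceil( (15 + 15) / 10 ) = 3 replicas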
Apply the manifest
[root@master01 podinfo]# kubectl apply -f podinfo-hpa-custom.yaml
horizontalpodautoscaler.autoscaling/podinfo created
[root@master01 podinfo]# kubectl get hpa
NAME       REFERENCE            TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
hpa-demo   Deployment/myapp     0%/50%         2         10        2          4h1m
myapp      Deployment/myapp     0%/50%         2         10        2          4h37m
podinfo    Deployment/podinfo   <unknown>/10   2         10        0          6s
[root@master01 podinfo]# kubectl describe hpa/podinfo
Name:                       podinfo
Namespace:                  default
Labels:                     <none>
Annotations:                <none>
CreationTimestamp:          Mon, 18 Jan 2021 20:04:14 +0800
Reference:                  Deployment/podinfo
Metrics:                    ( current / target )
  "http_requests" on pods:  899m / 10
Min replicas:               2
Max replicas:               10
Deployment pods:            2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from pods metric http_requests
  ScalingLimited  False   DesiredWithinRange   the desired count is within the acceptable range
Events:           <none>
[root@master01 podinfo]#
Put load on podinfo and see whether the HPA scales it out automatically (a sample command is sketched below)
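The stress command isn't shown above; reusing ab from earlier against the podinfo NodePort (31198 here; <node-ip> is again a placeholder) would look something like:

ab -c 50 -n 200000 http://<node-ip>:31198/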
Note: the HPA does scale out the pod count based on the custom metric.
Stop the load; does the pod count automatically shrink back to the minimum?
Note: once the load stops, the metric drops and the pods are scaled back down to the minimum replica count. That's a quick tour of HPA v2; for more examples and details see the official documentation: https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale/.