docker notes 35 - the resource metrics API and the custom metrics API
Previously, resource metrics could only be viewed by having heapster collect them, but heapster is now being deprecated.
Starting with k8s v1.8, a new capability was introduced: resource metrics are exposed through the API.
Resource metrics: metrics-server
Custom metrics: prometheus, k8s-prometheus-adapter
Hence the new-generation architecture:
1) Core metrics pipeline: made up of the kubelet, metrics-server, and the API exposed by the API server; it provides cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.
2) Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA. It contains the core metrics plus many non-core metrics; the non-core metrics cannot be parsed by k8s itself.
metrics-server is itself an API server, and it collects only CPU usage, memory usage, and the like.
[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
Resource metrics (metrics-server)
Download the metrics-server addon manifests to a local directory. Note: be sure to download from the directory that matches your own k8s cluster version (mine is k8s v1.11.2), otherwise the metrics pod will not run after installation.
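For instance, one way to fetch a version-matched copy is the GitHub source archive for the release tag (a sketch, assuming the archive download is used; the tag must match your cluster version):

# download and unpack the kubernetes source tree for the matching release
wget https://github.com/kubernetes/kubernetes/archive/v1.11.2.tar.gz
tar xf v1.11.2.tar.gz
# the addon manifests then live under kubernetes-1.11.2/cluster/addons/metrics-server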
[root@master metrics-server]# cd kubernetes-1.11.2/cluster/addons/metrics-server
[root@master metrics-server]# ls
auth-delegator.yaml  metrics-apiservice.yaml         metrics-server-service.yaml
auth-reader.yaml     metrics-server-deployment.yaml  resource-reader.yaml
Note the places that need to be modified:
metrics-server-deployment.yaml:
        # - --source=kubernetes.summary_api:''
        - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true

resource-reader.yaml:
      resources:
      - pods
      - nodes
      - namespaces
      - nodes/stats    # newly added
[root@master metrics-server]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.extensions/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master metrics-server]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE
metrics-server-v0.2.1-fd596d746-c7x6q   2/2     Running   0          1m    10.244.2.49   node2
[root@master metrics-server]# kubectl api-versions
metrics.k8s.io/v1beta1
Now metrics.k8s.io shows up in the api-versions list.
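A quick way to double-check the registration is to inspect the APIService object created by metrics-apiservice.yaml; its status conditions should show Available=True once metrics-server is actually serving:

# inspect the aggregated API registration
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml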
[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
[root@master ~]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": ["get", "list"]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": ["get", "list"]
    }
  ]
}
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
    {
      "metadata": {
        "name": "pod1",
        "namespace": "dev",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/dev/pods/pod1",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "myapp",
          "usage": {
            "cpu": "0",
            "memory": "2940Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "namespace": "rook-ceph",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/rook-ceph/pods/rook-ceph-osd-0-b9b94dc6c-ffs8z",
        "creationTimestamp": "2018-10-15T09:26:57Z"
      },
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [
        {
...
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "90m",
        "memory": "1172044Ki"
      }
    },
    {
      "metadata": {
        "name": "master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "186m",
        "memory": "1582972Ki"
      }
    },
    {
      "metadata": {
        "name": "node1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1",
        "creationTimestamp": "2018-10-15T09:27:26Z"
      },
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "68m",
        "memory": "1079332Ki"
      }
    }
  ]
}
Seeing data in items means the resource usage of each node and pod is now being collected. Note: if you don't see anything yet, wait a while; if items is still empty after a long time, check the metrics container's logs for errors. To view the logs:
[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
metrics-server-v0.2.1-84678c956-jdtr5   2/2     Running   0          14m
[root@master metrics-server]# kubectl logs metrics-server-v0.2.1-84678c956-jdtr5 -c metrics-server -n kube-system
I1015 09:26:57.117323  1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node1-8r6lz
I1015 09:26:57.117336  1 reststorage.go:140] No metrics for container rook-ceph-osd in pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
I1015 09:26:57.117347  1 reststorage.go:93] No metrics for pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
With that in place, the kubectl top command now works:
[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   131m         3%     1716Mi          46%
node1    68m          1%     1169Mi          31%
node2    96m          2%     1236Mi          33%
[root@master manifests]# kubectl top pods
NAME                            CPU(cores)   MEMORY(bytes)
myapp-deploy-69b47bc96d-dfpvp   0m           2Mi
myapp-deploy-69b47bc96d-g9kkz   0m           2Mi
[root@master manifests]# kubectl top pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
canal-4h2ww                             11m          49Mi
canal-6tdxn                             11m          49Mi
canal-z2tp4                             11m          43Mi
coredns-78fcdf6894-2l2cf                1m           9Mi
coredns-78fcdf6894-dkkfq                1m           10Mi
etcd-master                             14m          242Mi
kube-apiserver-master                   26m          527Mi
kube-controller-manager-master          20m          68Mi
kube-flannel-ds-amd64-6zqzr             2m           15Mi
kube-flannel-ds-amd64-7qtcl             2m           17Mi
kube-flannel-ds-amd64-kpctn             2m           18Mi
kube-proxy-9snbs                        2m           16Mi
kube-proxy-psmxj                        2m           18Mi
kube-proxy-tc8g6                        2m           17Mi
kube-scheduler-master                   6m           16Mi
kubernetes-dashboard-767dc7d4d-4mq9z    0m           12Mi
metrics-server-v0.2.1-84678c956-jdtr5   0m           29Mi
Custom metrics (prometheus)
As you can see, our metrics pipeline is working. However, metrics-server only monitors CPU and memory; other metrics, such as user-defined ones, are beyond it. That is where another component, prometheus, comes in.
Deploying prometheus is considerably more involved. The pieces are:
node_exporter is the agent that runs on each node;
PromQL is the query language, roughly what SQL is to a database, used to pull data out of prometheus (see the example after this list);
k8s-prometheus-adapter: prometheus metrics cannot be parsed by k8s directly, so k8s-prometheus-adapter converts them into an API;
kube-state-metrics aggregates cluster state data.
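To give a flavor of PromQL, here are two illustrative queries (a sketch; the metric names assume the standard node_exporter v0.16+ and kube-state-metrics exporters are being scraped):

# per-node CPU busy percentage, averaged over the last 5 minutes
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100

# number of pods per namespace, as reported by kube-state-metrics
count(kube_pod_info) by (namespace)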
Now let's deploy all of this.
Clone the manifest repository to a local directory:
[root@master pro]# git clone
First create a namespace called prom:
[root@master k8s-prom]# kubectl apply -f namespace.yaml
namespace/prom created
Deploy node_exporter:
[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# ls
node-exporter-ds.yaml  node-exporter-svc.yaml
[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created
[root@master node_exporter]# kubectl get pods -n prom
NAME                             READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-dmmjj   1/1     Running   0          7m
prometheus-node-exporter-ghz2l   1/1     Running   0          7m
prometheus-node-exporter-zt2lw   1/1     Running   0          7m
Deploy prometheus:
[root@master k8s-prom]# cd prometheus/
[root@master prometheus]# ls
prometheus-cfg.yaml  prometheus-deploy.yaml  prometheus-rbac.yaml  prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
Check all the resources in the prom namespace:
[root@master prometheus]# kubectl get all -n prom
NAME                                     READY   STATUS    RESTARTS   AGE
pod/prometheus-node-exporter-dmmjj       1/1     Running   0          10m
pod/prometheus-node-exporter-ghz2l       1/1     Running   0          10m
pod/prometheus-node-exporter-zt2lw       1/1     Running   0          10m
pod/prometheus-server-65f5d59585-6l8m8   1/1     Running   0          55s

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   56s
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         10m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          10m

NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-server   1         1         1            1           56s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-server-65f5d59585   1         1         1       56s
As shown above, because the service type is NodePort, the prometheus application inside the container can be reached through port 30090 on the host.
It is best to mount a PVC for storage, otherwise the monitoring data disappears after a while (for example, whenever the pod is rescheduled).
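A minimal sketch of what that could look like: a PVC in the prom namespace plus matching volume entries in prometheus-deploy.yaml (the PVC name, size, and the /prometheus mount path are assumptions, not taken from the repo):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data          # hypothetical name
  namespace: prom
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# then, in prometheus-deploy.yaml, under the prometheus container:
#   volumeMounts:
#   - name: data
#     mountPath: /prometheus     # assumes the default --storage.tsdb.path
# and under the pod spec:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: prometheus-data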
Deploy kube-state-metrics, which aggregates the data:
[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml  kube-state-metrics-rbac.yaml  kube-state-metrics-svc.yaml
[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
[root@master kube-state-metrics]# kubectl get all -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
pod/kube-state-metrics-58dffdf67d-v9klh   1/1     Running   0          14m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kube-state-metrics   ClusterIP   10.111.41.139   <none>        8080/TCP   14m
Deploy k8s-prometheus-adapter. This one needs a self-signed serving certificate:
[root@master k8s-prometheus-adapter]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
...........................................................................................+++
...............+++
e is 65537 (0x10001)
Generate a certificate signing request:
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
Sign the certificate with the cluster CA:
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
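Optionally, the signed certificate can be verified before use (standard openssl inspection flags):

# print the subject and validity window of the freshly signed certificate
openssl x509 -in serving.crt -noout -subject -dates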
Create a secret from the certificate and key:
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created
Note: cm-adapter-serving-certs is the secret name referenced inside custom-metrics-apiserver-deployment.yaml.
[root@master pki]# kubectl get secrets -n prom
NAME                             TYPE                                  DATA   AGE
cm-adapter-serving-certs         Opaque                                2      51s
default-token-knsbg              kubernetes.io/service-account-token   3      4h
kube-state-metrics-token-sccdf   kubernetes.io/service-account-token   3      3h
prometheus-token-nqzbz           kubernetes.io/service-account-token   3      3h
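For reference, the deployment consumes that secret roughly like this (a sketch of the relevant fragment; the exact mount path in the shipped custom-metrics-apiserver-deployment.yaml may differ):

# fragment, under the custom-metrics-apiserver container:
volumeMounts:
- name: volume-serving-cert
  mountPath: /var/run/serving-cert   # path is an assumption
  readOnly: true
# and under the pod spec:
volumes:
- name: volume-serving-cert
  secret:
    secretName: cm-adapter-serving-certs   # must match the secret created above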
Apply the k8s-prometheus-adapter manifests:
[root@master k8s-prom]# cd k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml   custom-metrics-apiserver-service.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml              custom-metrics-apiservice.yaml
custom-metrics-apiserver-deployment.yaml                            custom-metrics-cluster-role.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml  custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                       hpa-custom-metrics-cluster-role-binding.yaml
Because k8s v1.11.2 is incompatible with the latest k8s-prometheus-adapter release, the workaround is to download the latest custom-metrics-apiserver-deployment.yaml from the upstream repo and change its namespace to prom, and likewise to download custom-metrics-config-map.yaml locally and change its namespace to prom.
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME                                           READY   STATUS    RESTARTS   AGE
pod/custom-metrics-apiserver-65f545496-64lsz   1/1     Running   0          6m
pod/kube-state-metrics-58dffdf67d-v9klh        1/1     Running   0          4h
pod/prometheus-node-exporter-dmmjj             1/1     Running   0          4h
pod/prometheus-node-exporter-ghz2l             1/1     Running   0          4h
pod/prometheus-node-exporter-zt2lw             1/1     Running   0          4h
pod/prometheus-server-65f5d59585-6l8m8         1/1     Running   0          4h

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/custom-metrics-apiserver   ClusterIP   10.103.87.246   <none>        443/TCP          36m
service/kube-state-metrics         ClusterIP   10.111.41.139   <none>        8080/TCP         4h
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   4h
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         4h

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          4h

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/custom-metrics-apiserver   1         1         1            1           36m
deployment.apps/kube-state-metrics         1         1         1            1           4h
deployment.apps/prometheus-server          1         1         1            1           4h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/custom-metrics-apiserver-5f6b4d857d   0         0         0       36m
replicaset.apps/custom-metrics-apiserver-65f545496    1         1         1       6m
replicaset.apps/custom-metrics-apiserver-86ccf774d5   0         0         0       17m
replicaset.apps/kube-state-metrics-58dffdf67d         1         1         1       4h
replicaset.apps/prometheus-server-65f5d59585          1         1         1       4h
Finally, all the resources in the prom namespace are in the Running state.
[root@master k8s-prometheus-adapter]# kubectl api-versions
custom.metrics.k8s.io/v1beta1
The custom.metrics.k8s.io/v1beta1 API is now listed.
Open a proxy:
[root@master k8s-prometheus-adapter]# kubectl proxy --port=8080
Now the metric data can be queried:
[root@master pki]# curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/
...
    {
      "name": "pods/ceph_rocksdb_submit_transaction_sync",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
    {
      "name": "jobs.batch/kube_deployment_created",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
    {
      "name": "jobs.batch/kube_pod_owner",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
...
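To read one specific metric rather than listing them all, the custom metrics API takes paths of the form /namespaces/&lt;ns&gt;/pods/&lt;pod-or-*&gt;/&lt;metric&gt;. For example (the namespace and metric name here are only placeholders):

# query one metric across all pods in a namespace (quoted so the shell keeps the *)
curl 'http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/namespaces/prom/pods/*/kube_pod_info'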
Now we can happily create an HPA (horizontal pod autoscaler).
In addition, prometheus can be integrated with grafana, as follows.
First download the grafana.yaml file:
[root@master pro]# wget
Modify the grafana.yaml file as follows:
Change namespace: kube-system to prom (there are two occurrences).
Comment out the following two entries under env:
  - name: INFLUXDB_HOST
    value: monitoring-influxdb
Add type: NodePort as the last line of the service, so it ends up as:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master pro]# kubectl apply -f grafana.yaml
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
[root@master pro]# kubectl get pods -n prom
NAME                                 READY   STATUS    RESTARTS   AGE
monitoring-grafana-ffb4d59bd-gdbsk   1/1     Running   0          5s
The grafana pod is now running.
[root@master pro]# kubectl get svc -n prom
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
monitoring-grafana   NodePort   10.106.164.205   <none>        80:32659/TCP   19m
Now grafana can be reached at the host's IP on NodePort 32659, and the corresponding data shows up in the UI.
Next, download a grafana dashboard template for monitoring k8s with prometheus from the grafana dashboards site:
Then import the downloaded template in the grafana UI. Once the template is imported, the monitoring data is visible.
HPA (horizontal pod autoscaling)
When pods come under pressure, the HPA automatically scales the number of pods according to the load, spreading the pressure evenly.
Currently, HPA exists in two versions. The v1 version supports only core metric definitions, meaning it can scale pods based only on CPU utilization.
[root@master pro]# kubectl explain hpa.spec.scaleTargetRef
scaleTargetRef: specifies what the pod-scaling decision is computed against, i.e. the object whose pods are scaled.
[root@master pro]# kubectl api-versions | grep auto
autoscaling/v1
autoscaling/v2beta1
As shown above, both hpa v1 and hpa v2 are supported.
Next, from the command line, let's create a pod named myapp with resource limits:
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6985749785-fcvwn   1/1     Running   0          58s
Now let's give the myapp pod automatic horizontal scaling with kubectl autoscale, which is really just a way of declaring an HPA controller (an equivalent manifest is sketched after the flag notes below).
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
horizontalpodautoscaler.autoscaling/myapp autoscaled
--min: the minimum number of pods to keep
--max: the maximum number of pods to scale out to
--cpu-percent: the target CPU utilization percentage
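For reference, the kubectl autoscale command above is roughly equivalent to this autoscaling/v1 manifest (a sketch, not a file from the repo):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:                      # the object whose pods are scaled
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                       # --min
  maxReplicas: 8                       # --max
  targetCPUUtilizationPercentage: 60   # --cpu-percent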
[root@master ~]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   0%/60%    1         8         1          4m
[root@master ~]# kubectl get svc
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
myapp   ClusterIP   10.105.235.197   <none>        80/TCP    19m
Now change the service to the NodePort type:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type": "NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
myapp   NodePort   10.105.235.197   <none>        80:31990/TCP   22m
[root@master ~]# yum install httpd-tools    # mainly to get the ab load-testing tool
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-6985749785-fcvwn   1/1     Running   0          25m   10.244.2.84   node2
Start the load test with ab:
[root@master ~]# ab -c 1000 -n 5000000
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,
Benchmarking 172.16.1.100 (be patient)
Wait a while and you will see the pod's CPU utilization reach 98%, so it needs to scale out to 2 pods:
[root@master ~]# kubectl describe hpa
...
resource cpu on pods (as a percentage of request): 98% (49m) / 60%
Deployment pods: 1 current / 2 desired
[root@master ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-6985749785-fcvwn   49m          3Mi
(49m is almost the 50m CPU limit we set.)
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-fcvwn   1/1     Running   0          32m   10.244.2.84    node2
myapp-6985749785-sr4qv   1/1     Running   0          2m    10.244.1.105   node1
As you can see, it has automatically scaled out to 2 pods. Wait a little longer and, as the CPU pressure keeps climbing, it will scale out to 4 or more pods:
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-2mjrd   1/1     Running   0          1m    10.244.1.107   node1
myapp-6985749785-bgz6p   1/1     Running   0          1m    10.244.1.108   node1
myapp-6985749785-fcvwn   1/1     Running   0          35m   10.244.2.84    node2
myapp-6985749785-sr4qv   1/1     Running   0          5m    10.244.1.105   node1
As soon as the load test stops, the pod count shrinks back to the normal number.
Above we used hpa v1 to do horizontal pod autoscaling; as noted earlier, hpa v1 can scale pods only on CPU utilization.
Next, let's look at hpa v2, which can scale pods horizontally based on custom metric utilization as well.
Before trying hpa v2, delete the hpa v1 object created earlier so it does not conflict with the hpa v2 test:
[root@master hpa]# kubectl delete hpa myapp
horizontalpodautoscaler.autoscaling "myapp" deleted
Good. Now let's create an hpa v2:
[root@master hpa]# cat hpa-v2-demo.yaml
apiVersion: autoscaling/v2beta1      # this marks it as hpa v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                    # the object to autoscale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                     # minimum number of replicas
  maxReplicas: 10
  metrics:                           # which metrics drive the evaluation
  - type: Resource                   # evaluate a resource metric
    resource:
      name: cpu
      targetAverageUtilization: 55   # scale out when average pod CPU utilization exceeds 55%
  - type: Resource
    resource:
      name: memory                   # hpa v1 could only evaluate cpu; hpa v2 can also evaluate memory
      targetAverageValue: 50Mi       # scale out when average pod memory usage exceeds 50Mi
[root@master hpa]# kubectl apply -f hpa-v2-demo.yaml
horizontalpodautoscaler.autoscaling/myapp-hpa-v2 created
[root@master hpa]# kubectl get hpa
NAME           REFERENCE          TARGETS                MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp   3723264/50Mi, 0%/55%   1         10        1          37s
At the moment there is only one pod:
[root@master hpa]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-6985749785-fcvwn   1/1     Running   0          57m   10.244.2.84   node2
Start the load test:
[root@master ~]# ab -c 100 -n 5000000
Check what hpa v2 observes:
[root@master hpa]# kubectl describe hpa
...
Metrics: ( current / target )
  resource memory on pods: 3756032 / 50Mi
  resource cpu on pods (as a percentage of request): 82% (41m) / 55%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 2 desired
[root@master hpa]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-8frq4   1/1     Running   0          1m    10.244.1.109   node1
myapp-6985749785-fcvwn   1/1     Running   0          1h    10.244.2.84    node2
It has automatically scaled out to 2 pods; once the load test stops, the pod count shrinks back to normal.
Going forward, hpa v2 will let us scale the pod count not only on CPU and memory utilization but also on things like HTTP request concurrency.
For example:
[root@master hpa]# cat hpa-v2-custom.yaml
apiVersion: autoscaling/v2beta1   # this marks it as hpa v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                 # the object to autoscale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                  # minimum number of replicas
  maxReplicas: 10
  metrics:                        # which metrics drive the evaluation
  - type: Pods                    # evaluate a per-pod custom metric
    pods:
      metricName: http_requests   # the custom metric name
      targetAverageValue: 800m    # a quantity: 800m means 800 milli-requests, i.e. 0.8 requests per pod on average
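For a pods metric like http_requests to exist at all, the application has to expose a matching prometheus series, and the adapter has to translate it. In the adapter's custom-metrics-config-map.yaml, that translation is expressed with a rule roughly like the following (a sketch; the series name http_requests_total and its labels are assumptions about what the app exports):

rules:
- seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
  resources:
    overrides:
      kubernetes_namespace: {resource: "namespace"}
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"                 # exposes http_requests_total as http_requests
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'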
For the concurrency-based hpa and the specific image to use, see the reference; I won't demonstrate it here. Since I'm still a beginner, this much is basically enough for me, and I'll study further when production work requires it.
From the "ITPUB Blog"; link: http://blog.itpub.net/28916011/viewspace-2216340/. If you reprint this, please cite the source; otherwise legal responsibility will be pursued.