k8s -- Resource Controller Limit Configuration Explained
Resource Types
In K8S, two kinds of resources can be limited: CPU and memory.
CPU units:
- A positive real number allocates that many CPUs; fractions are allowed. For example, 0.5 means half of one CPU (half of one CPU's time), and 2 means two CPUs.
- A positive integer with the m (milli) suffix, where 1000m = 1, so 500m is equivalent to 0.5.
Memory units:
- A plain positive integer is a number of bytes.
- Decimal suffixes: k (kilobyte), M (megabyte), G (gigabyte), T (terabyte), P (petabyte).
- Binary suffixes: Ki (kibibyte), Mi (mebibyte), Gi (gibibyte), Ti (tebibyte), Pi (pebibyte).
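A minimal illustration of equivalent notations inside a container's resources stanza (the values are arbitrary examples, not tied to the walkthrough below):
resources:
  requests:
    cpu: "0.5"         '// identical to "500m"'
    memory: "128Mi"    '// 128 x 2^20 bytes; "128M" would instead mean 128 x 10^6 bytes'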
1: Resource Requests and Limits for Pods and Containers
In K8S, resource settings are applied to the Containers inside a Pod. There are two kinds: limits, which control the upper bound, and requests, which control the lower bound. The fields are located at:
spec.containers[].resources.limits.cpu '// CPU upper limit'
spec.containers[].resources.limits.memory '// memory upper limit'
spec.containers[].resources.requests.cpu '// baseline CPU allocated at creation'
spec.containers[].resources.requests.memory '// baseline memory allocated at creation'
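The documentation for each of these fields can be pulled straight from the API with kubectl explain:
[root@master shuai]# kubectl explain pod.spec.containers.resources.limits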
1.1: Write the YAML file and create the Pod resource
Here is an example. The Pod below has two containers. Each container has a request of 0.25 cpu and 64MiB (2^26 bytes) of memory, and each container has a limit of 0.5 cpu and 128MiB of memory. You can therefore say the Pod as a whole requests 0.5 cpu and 128MiB of memory, and is limited to 1 cpu and 256MiB of memory.
[root@master shuai]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db                '// first container'
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:              '// resource settings'
      requests:
        memory: "64Mi"      '// baseline memory: 64MiB'
        cpu: "250m"         '// baseline CPU: 25% of one core'
      limits:
        memory: "128Mi"     '// memory upper limit: 128MiB'
        cpu: "500m"         '// CPU upper limit: 50% of one core'
  - name: wp                '// second container'
    image: wordpress
    resources:
      requests:
        memory: "64Mi"      '// baseline memory: 64MiB'
        cpu: "250m"         '// baseline CPU: 25% of one core'
      limits:
        memory: "128Mi"
        cpu: "500m"
- Create the Pod resource from the YAML
[root@master shuai]# kubectl apply -f pod.yaml
'// view the Pod's detailed information'
[root@master shuai]# kubectl describe pod frontend
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 38s default-scheduler Successfully assigned default/frontend to 20.0.0.43
Normal Pulling 37s kubelet, 20.0.0.43 pulling image "mysql"
Normal Pulled 2s kubelet, 20.0.0.43 Successfully pulled image "mysql"
Normal Created 2s kubelet, 20.0.0.43 Created container
Normal Started 2s kubelet, 20.0.0.43 Started container
Normal Pulling 2s kubelet, 20.0.0.43 pulling image "wordpress"
'// the containers were created on node2 (20.0.0.43)'
'// view the pod resources'
[root@master shuai]# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend 0/2 ContainerCreating 0 1s
- View the node's resource allocation
[root@master shuai]# kubectl describe nodes 20.0.0.43
....information omitted....
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default frontend 500m (12%) 1 (25%) 128Mi (3%) 256Mi (6%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 500m (12%) 1 (25%)
memory 128Mi (3%) 256Mi (6%)
Events: <none>
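The percentages are relative to the node's allocatable capacity (500m showing as 12% implies roughly 4 allocatable CPUs, and 128Mi as 3% roughly 4Gi of memory). A quick way to print that capacity directly, as a sketch:
[root@master shuai]# kubectl get node 20.0.0.43 -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}'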
- View the containers on the node
[root@node2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6de02857802f mysql "docker-entrypoint.s…" 29 seconds ago Exited (137) 24 seconds ago k8s_db_frontend_default_d63eaeb1-0d46-11eb-a2d8-000c2984c1e3_2
f87aad7c178d wordpress "docker-entrypoint.s…" About a minute ago Up About a minute k8s_wp_frontend_default_d63eaeb1-0d46-11eb-a2d8-000c2984c1e3_0
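Note the mysql container's Exited (137): exit code 137 is 128 + 9, meaning the process was killed with SIGKILL, which for a memory-limited container usually indicates an OOM kill for exceeding the 128Mi limit (MySQL generally needs more than that). One way to confirm the termination reason:
[root@master shuai]# kubectl get pod frontend -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
'// prints OOMKilled for a container that was killed for exceeding its memory limit'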
2: Pod Restart Policy
A Pod's restart policy, restartPolicy, defines the action taken when the Pod runs into a failure.
1. Always: whenever a container terminates, always restart it; this is the default policy
2. OnFailure: restart the container only when it exits abnormally (non-zero exit code)
3. Never: never restart the container when it terminates
Note: K8S does not support restarting a Pod resource in place; the restart described here means delete-and-recreate. A sketch of where the field sits follows.
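restartPolicy lives at the Pod spec level rather than on an individual container, and it applies to every container in the Pod:
spec:
  restartPolicy: OnFailure   '// one of Always (the default) | OnFailure | Never'
  containers:
  - name: app
    image: busybox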
2.1: View a Pod's restart policy
[root@master shuai]# kubectl edit pod frontend
restartPolicy: Always '// the default restart policy is Always'
2.11: Write a YAML file
[root@master shuai]# vim shuai2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox        '// minimal image'
    image: busybox
    args:                '// container arguments'
    - /bin/sh            '// shell'
    - -c                 '// run the next string as a command'
    - sleep 30; exit 3   '// sleep 30 seconds, then exit abnormally'
2.12: Create the pod resource
[root@master shuai]# kubectl apply -f shuai2.yaml
'// watch the status with -w'
[root@master shuai]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
foo 0/1 ContainerCreating 0 10s
foo 1/1 Running 0 17s
foo 0/1 Error 0 46s
foo 1/1 Running 1 50s
foo 0/1 Error 1 80s '// after the restart it sleeps another 30 seconds'
'// the restart count reaches 2'
^C[root@master shuai]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 2 104s
2.13: Change the restart policy in shuai2.yaml to Never (never restart)
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 3   '// the sleep is shortened to 10s here'
  restartPolicy: Never   '// add the restart policy: never restart'
'// delete the old resource and recreate it from the YAML file'
[root@master shuai]# kubectl delete -f shuai2.yaml
[root@master shuai]# kubectl apply -f shuai2.yaml
[root@master shuai]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 0 17s
foo 0/1 Error 0 26s '// no restart occurs anymore; the setting took effect'
^C[root@master shuai]#
'// because exit code 3 is returned, the status shows Error; if the abnormal exit code were removed, the status would show Completed'
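To actually see Completed, a variant of the same Pod can exit cleanly (a sketch; only the final argument changes):
args:
- /bin/sh
- -c
- sleep 10   '// exits 0, so with restartPolicy: Never the pod ends up Completed instead of Error'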
3: Configuring Health Checks in K8S with Readiness and Liveness Probes
Pod health checks are known as probes; they inspect the Pod's containers, and multiple probe rules can be defined at the same time.
There are two probe types:
1. Liveness probe (livenessProbe)
Determines whether the container is alive (running). If it is unhealthy, the kubelet kills the container and acts according to the Pod's restartPolicy.
- If the container does not define this probe, the kubelet considers the liveness probe to always return Success.
2. Readiness probe (readinessProbe)
- Determines whether the container's service is ready (Ready). If it is unhealthy, Kubernetes removes the Pod from the Service's endpoints, and adds a Pod that has recovered to the Ready state back to the backend Endpoint list. This guarantees that clients accessing the Service are never forwarded to a Pod instance whose service is unavailable.
- The endpoint list is the Service's load-balancing backend list, holding the addresses of the Pod resources.
3.1: The three probe check methods
1. exec (most common): runs a shell command inside the container; an exit status of 0 counts as success.
2. httpGet: sends an HTTP GET request; a status code in the 200-399 range counts as success.
3. tcpSocket: attempts to open a TCP socket; success means the connection could be established.
(Note:) the rules can be defined at the same time.
livenessProbe: if the check fails, the container is killed and handled according to the Pod's restartPolicy.
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service endpoints.
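All three methods share the same tuning fields; here is a sketch of the common ones with their upstream defaults (the exec command is only a placeholder):
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 5   '// wait before the first probe (default 0)'
  periodSeconds: 5         '// probe interval (default 10)'
  timeoutSeconds: 1        '// per-probe timeout (default 1)'
  failureThreshold: 3      '// consecutive failures before the action fires (default 3)'
  successThreshold: 1      '// consecutive successes to count as healthy again (default 1)'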
3.2: Checking with exec
- Create a Pod; below is the YAML file
[root@master shuai]# vim shuai3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
'// In this configuration file you can see the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet to wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds it returns 0, and the kubelet considers the container alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.'
- When the container starts, it runs the following command:
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
For the first 30 seconds of the container's life, a /tmp/healthy file exists, so during those 30 seconds the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
- Create the pod resource
[root@master shuai]# kubectl apply -f shuai3.yaml
'// watch the status'
[root@master shuai]# kubectl get pods -w
liveness-exec 0/1 ContainerCreating 0 9s
liveness-exec 1/1 Running 0 17s
liveness-exec 1/1 Running 1 47s '// the container has been restarted'
'// view the pod resources'
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 2 2m48s
View the Pod's detailed information
[root@master shuai]# kubectl describe pod liveness-exec
'// below are the events for resource creation and startup'
Normal Scheduled 5m29s default-scheduler Successfully assigned default/liveness-exec to 20.0.0.43
Normal Pulled 2m42s (x3 over 5m13s) kubelet, 20.0.0.43 Successfully pulled image "busybox"
Normal Created 2m42s (x3 over 5m13s) kubelet, 20.0.0.43 Created container
Normal Started 2m42s (x3 over 5m13s) kubelet, 20.0.0.43 Started container
Warning Unhealthy 117s (x9 over 4m42s) kubelet, 20.0.0.43 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Pulling 102s (x4 over 5m29s) kubelet, 20.0.0.43 pulling image "busybox"
Normal Killing 26s (x4 over 4m13s) kubelet, 20.0.0.43 Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.
3.3: Checking with httpGet
- Another kind of liveness probe uses an HTTP GET request.
- Create the Pod resource; now write the YAML file.
'// first delete the previous resource'
[root@master shuai]# kubectl delete -f shuai3.yaml
'// write the YAML file'
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    # args:
    # - /server
    livenessProbe:
      httpGet:               '// probe method'
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
'// create the pod resource'
[root@master shuai]# kubectl apply -f shuai3.yaml
pod/liveness-http created
'// watch the status'
[root@master shuai]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
liveness-http 0/1 ContainerCreating 0 3s
liveness-http 1/1 Running 0 17s
^C
[root@master shuai]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 0 28s
- After 10 seconds, view the Pod events to verify that the liveness probe failed and the container was restarted:
Normal Scheduled 3m default-scheduler Successfully assigned default/liveness-http to 20.0.0.43
Normal Pulled 2m1s (x3 over 2m44s) kubelet, 20.0.0.43 Successfully pulled image "nginx"
Normal Created 2m1s (x3 over 2m44s) kubelet, 20.0.0.43 Created container
Normal Started 2m1s (x3 over 2m44s) kubelet, 20.0.0.43 Started container
Normal Pulling 112s (x4 over 3m) kubelet, 20.0.0.43 pulling image "nginx"
Warning Unhealthy 112s (x9 over 2m40s) kubelet, 20.0.0.43 Liveness probe failed: Get http://172.17.5.2:8080/healthz: dial tcp 172.17.5.2:8080: connect: connection refused
Normal Killing 112s (x3 over 2m34s) kubelet, 20.0.0.43 Killing container with id docker://nginx:Container failed liveness probe.. Container will be killed and recreated.
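The failures above are expected: the stock nginx image listens on port 80 and serves no /healthz path, so this probe can never succeed. A variant that would pass, as a sketch against the default nginx image:
livenessProbe:
  httpGet:
    path: /      '// the default nginx index page; /healthz would return 404 unless configured'
    port: 80     '// the port nginx actually listens on'
  initialDelaySeconds: 3
  periodSeconds: 3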
3.4: Define a TCP liveness probe
The third type of liveness probe uses a TCP socket. With this configuration, the kubelet attempts to open a socket to the container on the specified port. If a connection can be established, the container is considered healthy; if it cannot, the container is considered failed.
Create the pod resource and write the YAML file
[root@master shuai]# vim shuai4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness-tcp
spec:
  containers:
  - name: liveness-tcp
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
The TCP check configuration is very similar to the HTTP check. This example uses both readiness and liveness probes. Five seconds after the container starts, the kubelet sends the first readiness probe, which attempts to connect to the container on port 80. If the probe succeeds, the Pod is marked ready, and the kubelet keeps running this check every 10 seconds.
In addition to the readiness probe, this configuration includes a liveness probe: the kubelet runs the first liveness probe 15 seconds after the container starts. Just like the readiness probe, it attempts to connect to the container on port 80. If the liveness probe fails, the container is restarted.
'// create the resource and view its status'
[root@master shuai]# kubectl apply -f shuai4.yaml
pod/liveness-tcp created
[root@master shuai]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
liveness-http 0/1 CrashLoopBackOff 6 6m43s
liveness-tcp 0/1 ContainerCreating 0 5s
liveness-tcp 0/1 Running 0 17s
[root@master shuai]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-tcp 0/1 Running 0 18s
- After 15 seconds, view the pod events to verify the liveness probe
[root@master shuai]# kubectl describe pod liveness-tcp
....information omitted....
Normal Scheduled 103s default-scheduler Successfully assigned default/liveness-tcp to 20.0.0.43
Warning Unhealthy 20s (x7 over 80s) kubelet, 20.0.0.43 Readiness probe failed: dial tcp 172.17.5.2:8080: connect: connection refused
Normal Pulling 19s (x2 over 102s) kubelet, 20.0.0.43 pulling image "nginx"
Warning Unhealthy 19s (x3 over 59s) kubelet, 20.0.0.43 Liveness probe failed: dial tcp 172.17.5.2:8080: connect: connection refused
Normal Killing 19s kubelet, 20.0.0.43 Killing container with id docker://liveness-tcp:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 3s (x2 over 87s) kubelet, 20.0.0.43 Successfully pulled image "nginx"
Normal Created 3s (x2 over 87s) kubelet, 20.0.0.43 Created container
Normal Started 3s (x2 over 87s) kubelet, 20.0.0.43 Started container
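One practical consequence of the readiness probe above: if this Pod sat behind a Service, every readiness failure would pull its address out of the Service's endpoint list. A hypothetical check, assuming a Service named liveness-tcp selecting app: liveness-tcp (no such Service is created in this walkthrough):
[root@master shuai]# kubectl get endpoints liveness-tcp -w
'// the Pod IP disappears from ENDPOINTS while the readiness probe fails and returns once it passes'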