Kubernetes Full-Stack Architect (Binary HA k8s Cluster Installation, Extended Edition) - Study Notes

Published by MingsonZheng on 2021-07-20

Contents

  • Binary Metrics & Dashboard installation
  • Binary HA cluster availability verification
  • Key configuration for a production k8s cluster
  • Bootstrapping: the kubelet startup process
  • Bootstrapping: how CSR requests and certificate issuance work
  • Bootstrapping: how automatic certificate renewal works

Binary Metrics & Dashboard installation

  • Install CoreDNS
  • Install Metrics Server
  • Install the dashboard

Install CoreDNS

Install the matching version (recommended)

cd /root/k8s-ha-install/

If you changed the k8s Service CIDR, update the CoreDNS service IP to the 10th IP of the new CIDR (as written, the command below is a no-op for the default 10.96.0.0/12 CIDR; substitute your own 10th IP for the second 10.96.0.10):

sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml
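The "10th IP" can be computed from the Service CIDR directly. A minimal sketch (this helper is illustrative, not part of the install scripts; it assumes the network address's last octet is 0, as with the default 10.96.0.0/12):

```shell
# Derive the cluster DNS IP (the 10th address) from a Service CIDR.
SERVICE_CIDR="10.96.0.0/12"
network="${SERVICE_CIDR%/*}"   # strip the prefix length -> 10.96.0.0
dns_ip="$(echo "$network" | awk -F. '{printf "%d.%d.%d.%d", $1, $2, $3, $4 + 10}')"
echo "$dns_ip"
```

With the default CIDR this prints 10.96.0.10, exactly the value the sed command above keeps in place.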

Install coredns

kubectl create -f CoreDNS/coredns.yaml

Install the latest CoreDNS (not recommended)

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the status

kubectl get po -n kube-system -l k8s-app=kube-dns

Status

NAME                      READY   STATUS    RESTARTS   AGE
coredns-fb4874468-nr5nx   1/1     Running   0          49s

Force-delete a pod stuck in Terminating

[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                      READY   STATUS        RESTARTS   AGE
coredns-fb4874468-fgs2h   1/1     Terminating   0          6d20h

[root@k8s-master01 ~]# kubectl delete pods coredns-fb4874468-fgs2h --grace-period=0 --force -n kube-system
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "coredns-fb4874468-fgs2h" force deleted

[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
No resources found in kube-system namespace.

Install Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which gathers CPU and memory usage for nodes and Pods.

Install metrics server

cd /root/k8s-ha-install/metrics-server-0.4.x/

kubectl  create -f . 

Wait for metrics server to start, then check the status

kubectl  top node

Node status

NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   263m         13%    1239Mi          66%       
k8s-master02   213m         10%    1065Mi          57%       
k8s-master03   207m         10%    1050Mi          56%       
k8s-node01     89m          4%     514Mi           27%       
k8s-node02     158m         7%     493Mi           26% 

Check pod status

kubectl  top po -A

Pod status

NAMESPACE     NAME                                      CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-cdd5755b9-4fzg9   3m           18Mi            
kube-system   calico-node-8xg62                         26m          60Mi            
kube-system   calico-node-dczxz                         24m          60Mi            
kube-system   calico-node-gn8ws                         23m          62Mi            
kube-system   calico-node-qmwkd                         26m          60Mi            
kube-system   calico-node-zfw8n                         25m          59Mi            
kube-system   coredns-fb4874468-nr5nx                   3m           10Mi            
kube-system   metrics-server-64c6c494dc-9x727           2m           18Mi  

Install the dashboard

  • Install a specific dashboard version
  • Install the latest dashboard
  • Log in to the dashboard

The dashboard displays the cluster's resources; it can also tail Pod logs in real time and run commands inside containers.

Install a specific dashboard version

cd /root/k8s-ha-install/dashboard/

kubectl  create -f .

Install the latest dashboard

Official GitHub repo: https://github.com/kubernetes/dashboard

The latest dashboard release is listed on the official GitHub page

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Create an admin user

vim admin.yaml
# add the following content
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it

kubectl apply -f admin.yaml -n kube-system

Log in to the dashboard

Because the dashboard uses a self-signed certificate, add the following flags to the Chrome launcher to work around the access warning (Properties -> Shortcut -> Target, append at the end)

 --test-type --ignore-certificate-errors

Change the dashboard svc to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

After the change a port number is exposed; look it up:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Port number

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.217.183   <none>        443:31874/TCP   9m37s

Using your own instance's port, the dashboard is reachable through any host running kube-proxy, or through the VIP, at IP + port. Access the dashboard at https://192.168.232.236:31874 (replace 31874 with your own port) and choose token as the login method

Get the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Token

Name:         admin-user-token-9c4tz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d1f2e528-0ef8-4c6b-a384-a18fbca6bc54

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1411 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlNCbEdFa1RQZElhbTBRb29aTTNCTUE1dTJ2enBCeGZxMWJwbmpfZHBXdkEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTljNHR6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMWYyZTUyOC0wZWY4LTRjNmItYTM4NC1hMThmYmNhNmJjNTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.KFH5ed0kJEaU1HSpxkitJxqKJGnSNAWogNSGjGn1wEh7R9zKYkAfNLES6Vl3GU9jvxBCEZW415ZFILr96kpgl_88mD-K-AMgQxKLdpghYDx_CnsLtI6e8rLTNkaPS2Uo3sYAy9U280Niop14Yzuar5FQ3AfSbeXGcF_9Jrgyeh5XWPA0h69Au8pUEOkVdpADmuIaFSqfTnmkOSdGqCgFb_QsUqvjo4ifIxKnN6uW8wfR1s4esWkPq569xhCINaUY6g3rnT1jfVTU2XmrURrKOVok0OfSmtXTKCSs2jliEdmx7qEFTrw2KCPnTfORUtTnmdZ2ZnGGx9Fvf_hGaKk1FQ

Binary HA cluster availability verification

Install busybox

[root@k8s-master01 ~]# cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Check the status

[root@k8s-master01 ~]# kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          29s

  • Pods must be able to resolve Services
  • Pods must be able to resolve Services across namespaces
  • Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53
  • Pods must be able to reach each other (within a namespace, across namespaces, and across nodes)

The default kubernetes service after a successful installation

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d22h

Pods must be able to resolve Services

[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Pods must be able to resolve Services across namespaces

[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53 (send key input to all sessions)

[root@k8s-master01 ~]# yum install telnet -y

[root@k8s-master01 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   53m
metrics-server   ClusterIP   10.107.95.145   <none>        443/TCP                  4h33m

[root@k8s-master01 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
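The interactive telnet checks above can be scripted and looped over all nodes. A sketch using bash's /dev/tcp in place of telnet (check_tcp is a hypothetical helper; the commented IPs are this cluster's service IPs):

```shell
# Non-interactive TCP reachability check, usable in a loop over all nodes.
check_tcp() {
  host=$1; port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK   ${host}:${port}"
  else
    echo "FAIL ${host}:${port}"
  fi
}
# On a cluster node you would run, e.g.:
# check_tcp 10.96.0.1 443    # kubernetes svc
# check_tcp 10.96.0.10 53    # kube-dns svc
```

Unlike telnet, this returns immediately and prints a machine-readable result, so it can run under "send key input to all sessions" without manual escapes.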

Pods must be able to reach each other: within a namespace, across namespaces, and across nodes (stop sending key input to all sessions)

[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME                                      READY   STATUS    RESTARTS        AGE     IP                NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-cdd5755b9-4fzg9   1/1     Running   0               4h59m   192.168.232.131   k8s-node01     <none>           <none>
calico-node-8xg62                         1/1     Running   0               4h59m   192.168.232.129   k8s-master02   <none>           <none>
calico-node-dczxz                         1/1     Running   0               4h59m   192.168.232.131   k8s-node01     <none>           <none>
calico-node-gn8ws                         1/1     Running   0               4h59m   192.168.232.128   k8s-master01   <none>           <none>
calico-node-qmwkd                         1/1     Running   0               4h59m   192.168.232.130   k8s-master03   <none>           <none>
calico-node-zfw8n                         1/1     Running   2 (4h59m ago)   4h59m   192.168.232.132   k8s-node02     <none>           <none>
coredns-fb4874468-fgs2h                   1/1     Running   0               56m     172.25.92.66      k8s-master02   <none>           <none>
metrics-server-64c6c494dc-9x727           1/1     Running   0               4h35m   172.27.14.193     k8s-node02     <none>           <none>

# exec into the calico pod running on k8s-master02
[root@k8s-master01 ~]# kubectl exec -ti calico-node-8xg62  -n kube-system -- bash
Defaulted container "calico-node" out of: calico-node, install-cni (init), flexvol-driver (init)

# ping k8s-master03
[root@k8s-master02 /]# ping 192.168.232.130
PING 192.168.232.130 (192.168.232.130) 56(84) bytes of data.
64 bytes from 192.168.232.130: icmp_seq=1 ttl=64 time=0.416 ms
64 bytes from 192.168.232.130: icmp_seq=2 ttl=64 time=0.240 ms
64 bytes from 192.168.232.130: icmp_seq=3 ttl=64 time=0.191 ms

# exit k8s-master02
[root@k8s-master02 /]# exit
exit

[root@k8s-master01 ~]# kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          22m   172.25.92.69   k8s-master02   <none>           <none>

# exec into the busybox container (on k8s-master02) and ping k8s-master03
[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ # ping 192.168.232.130
PING 192.168.232.130 (192.168.232.130): 56 data bytes
64 bytes from 192.168.232.130: seq=0 ttl=63 time=0.329 ms
64 bytes from 192.168.232.130: seq=1 ttl=63 time=0.452 ms
64 bytes from 192.168.232.130: seq=2 ttl=63 time=0.675 ms

Create a deployment with three replicas

[root@k8s-master01 ~]# kubectl create deploy nginx --image=nginx --replicas=3
deployment.apps/nginx created

[root@k8s-master01 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/3     3            2           35s

# check which nodes the pods were scheduled on
[root@k8s-master01 ~]# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   0          28m     172.25.92.69     k8s-master02   <none>           <none>
nginx                    1/1     Running   0          2m37s   172.18.195.4     k8s-master03   <none>           <none>
nginx-6799fc88d8-lbhgm   1/1     Running   0          54s     172.25.244.197   k8s-master01   <none>           <none>
nginx-6799fc88d8-nq2gz   1/1     Running   0          54s     172.17.125.1     k8s-node01     <none>           <none>
nginx-6799fc88d8-tzgz8   1/1     Running   0          54s     172.27.14.194    k8s-node02     <none>           <none>

# clean up
[root@k8s-master01 ~]# kubectl delete deploy nginx
deployment.apps "nginx" deleted

[root@k8s-master01 ~]# kubectl delete po busybox

Key configuration for a production k8s cluster

  • docker configuration
  • controller-manager configuration
  • kubelet configuration
  • kubelet-conf.yml
  • Installation summary

docker configuration

(send key input to all sessions)

vim /etc/docker/daemon.json

Note: JSON does not allow comments, so the # annotations below are explanatory only; remove them from the actual file.

{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10, # concurrent image-download threads
  "max-concurrent-uploads": 5, # concurrent image-upload threads
  "log-opts": {
    "max-size": "300m", # max log file size before rotation
    "max-file": "2" # max number of rotated log files kept
  },
  "live-restore": true # lets docker restart without stopping running containers
}
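Since an invalid daemon.json prevents docker from starting, it is worth validating the comment-free file before reloading. A sketch (the /tmp/daemon.json path is just for demonstration):

```shell
# Write the comment-free config and validate it as JSON before restarting docker.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": { "max-size": "300m", "max-file": "2" },
  "live-restore": true
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```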

Reload the configuration

systemctl daemon-reload

controller-manager configuration

Comment out the feature gates (they are enabled by default in newer versions) and set the certificate signing duration

(send key input to all sessions, excluding the node nodes)

vim /usr/lib/systemd/system/kube-controller-manager.service

# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
--cluster-signing-duration=876000h0m0s \ 

Reload and restart

systemctl daemon-reload

systemctl restart kube-controller-manager

kubelet configuration

(send key input to all sessions)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service] 
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig" 
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" 
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1" 
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m" 
ExecStart= 
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS 

The default k8s cipher suites are flagged as a vulnerability by security scanners, so change them

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

A deadline for kubelet image pulls, to avoid endlessly looping pull attempts

--image-pull-progress-deadline=30m

kubelet-conf.yml

vim /etc/kubernetes/kubelet-conf.yml

# add the following configuration
rotateServerCertificates: true
allowedUnsafeSysctls: # allow tuning kernel parameters (e.g. raising connection limits); this has security implications, so enable only as needed
  - "net.core*"
  - "net.ipv4.*"
kubeReserved: # resources reserved for k8s components; set these higher in production
  cpu: "10m"
  memory: 10Mi
  ephemeral-storage: 10Mi
systemReserved: # resources reserved for system daemons
  cpu: "10m"
  memory: 20Mi
  ephemeral-storage: 1Gi

Reload the configuration

systemctl daemon-reload

systemctl restart kubelet

Check the logs

tail -f /var/log/messages

Verify

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-master02   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-master03   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-node01     Ready    <none>   7h3m   v1.22.0-beta.1
k8s-node02     Ready    <none>   7h3m   v1.22.0-beta.1

Installation summary

  • kubeadm
  • Binary
  • Automated installation
  • Details to watch during installation

Automated installation (Ansible)

  • Master node installation does not need to be automated.
  • Adding Node nodes: use a playbook.

Details to watch during installation

  • The configuration details covered above
  • In production, etcd must live on a disk separate from the system disk, and it must be an SSD.
  • Docker's data disk should also be separate from the system disk, on an SSD if possible

Bootstrapping: the kubelet startup process

Bootstrapping: automatically issues certificates to node nodes

During the binary HA install we generated a kubeconfig file for each k8s component when creating certificates, e.g. controller-manager.kubeconfig; it stores apiserver details plus the certificate the component uses to connect to the apiserver

kube-controller-manager.service points at that kubeconfig; on startup the component uses it to authenticate to and communicate with the apiserver

[root@k8s-master01 kubernetes]# vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \ # the kubeconfig is specified here
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

For kube-controller-manager.service and kube-scheduler.service we issue the certificates ourselves, but that approach is discouraged for kubelet; use TLS bootstrapping instead, which issues certificates to each node's kubelet automatically. How does that work?

Using node02 as an example

[root@k8s-node02 ~]# cd /etc/kubernetes
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.kubeconfig  kubelet-conf.yml  kubelet.kubeconfig  kube-proxy.conf  kube-proxy.kubeconfig  manifests  pki

When a node is first configured it has no kubelet.kubeconfig; the kubelet uses bootstrap-kubelet.kubeconfig to talk to the apiserver and automatically requests a kubelet.kubeconfig. Whenever kubelet starts without a kubelet.kubeconfig, it requests one this way

Delete kubelet.kubeconfig

[root@k8s-node02 kubernetes]# rm -rf kubelet.kubeconfig 

Check the kubelet config file

[root@k8s-node02 kubernetes]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf 
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

The config specifies the bootstrap kubeconfig file

--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

It also specifies a kubeconfig file that does not exist yet; at startup kubelet requests a new kubelet.kubeconfig as described above

--kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Restart kubelet; the kubelet.kubeconfig certificate is generated, after which kubelet can talk to the apiserver

[root@k8s-node02 kubernetes]# systemctl restart kubelet
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.kubeconfig  kubelet-conf.yml  kubelet.kubeconfig  kube-proxy.conf  kube-proxy.kubeconfig  manifests  pki

TLS bootstrapping official docs: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#initialization-process

In a k8s cluster, the worker-node components kubelet and kube-proxy must connect to the master-node components, above all the apiserver. To keep those connections private, it is strongly recommended to configure a TLS client certificate for every client on every node

kubelet startup process

  • Look for the kubeconfig file, normally /etc/kubernetes/kubelet.kubeconfig
  • Retrieve the APIServer URL and certificate from the kubeconfig
  • Interact with the APIServer

View the certificate

[root@k8s-node02 kubernetes]# more kubelet.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.232.236:8443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
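Both client-certificate and client-key point at kubelet-client-current.pem. Under certificate rotation this is a symlink in /var/lib/kubelet/pki that always tracks the newest issued certificate, so the kubeconfig never needs rewriting. A sketch of that layout (paths and timestamps are illustrative):

```shell
# Emulate kubelet's rotated-certificate layout: each issued cert is kept,
# and a "current" symlink tracks the newest one.
mkdir -p /tmp/kubelet-pki
touch /tmp/kubelet-pki/kubelet-client-2021-07-09-15-35-00.pem
ln -sf kubelet-client-2021-07-09-15-35-00.pem /tmp/kubelet-pki/kubelet-client-current.pem
readlink /tmp/kubelet-pki/kubelet-client-current.pem
```

When a new certificate is issued, only the symlink target changes.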

Check the kubelet.kubeconfig certificate's validity period:

[root@k8s-node02 kubernetes]# echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
" | base64 --decode >/tmp/1

Then use OpenSSL to view the expiry (100-year validity):

[root@k8s-node02 kubernetes]# openssl x509 -in /tmp/1 -noout -dates
notBefore=Jul  9 07:35:00 2021 GMT
notAfter=Jun 15 07:35:00 2121 GMT
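openssl can also answer "will this certificate still be valid in N seconds" directly with -checkend, which is handy in monitoring scripts. A sketch using a freshly generated self-signed certificate as a stand-in for /tmp/1:

```shell
# Generate a throwaway self-signed cert and query its validity window.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 36500 -subj "/CN=cert-expiry-demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -dates
# exit code 0 means the cert is still valid 86400 seconds (1 day) from now
openssl x509 -in /tmp/demo.crt -noout -checkend 86400 && echo "still valid tomorrow"
```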

Bootstrapping: how CSR requests and certificate issuance work

Official docs: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#bootstrap-initialization

1. Kubelet starts

2. Kubelet looks for the kubelet.kubeconfig file; assume it does not exist

3. Kubelet looks for the local bootstrap-kubelet.kubeconfig

4. Kubelet reads the bootstrap kubeconfig, retrieving the apiserver URL and a token

5. Kubelet connects to the apiserver and authenticates with that token

a) The apiserver recognizes the token-id and looks up the bootstrap secret that corresponds to it

That secret is the bootstrap.secret.yaml created during the binary HA cluster install

If you change the token-id and token-secret in bootstrap.secret.yaml, keep each part the same length as before, and make sure the token in bootstrap-kubelet.kubeconfig (c8ad9c.2e4d610cf3e7426e below) matches the new values
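If you do regenerate the token, the format is fixed: a 6-character token-id and a 16-character token-secret, each lowercase alphanumeric, joined by a dot. A sketch for generating a well-formed replacement (hypothetical; the cluster above keeps c8ad9c.2e4d610cf3e7426e):

```shell
# Generate a bootstrap token in the required <id>.<secret> format
# (token-id: 6 chars, token-secret: 16 chars, both [a-z0-9]).
token_id="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)"
token_secret="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"
echo "${token_id}.${token_secret}"
```

The same values must then be written into both bootstrap.secret.yaml and bootstrap-kubelet.kubeconfig.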

[root@k8s-master01 kubernetes]# more bootstrap-kubelet.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.232.236:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: c8ad9c.2e4d610cf3e7426e

Find the token by its prefix c8ad9c

[root@k8s-master01 kubernetes]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA   AGE
bootstrap-token-c8ad9c                           bootstrap.kubernetes.io/token         6      3d18h

Read the secret's contents to get token-id and token-secret

[root@k8s-master01 kubernetes]# kubectl get secret -n kube-system bootstrap-token-c8ad9c -n kube-system -oyaml
apiVersion: v1
data:
  auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz
  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWxldCAnLg==
  token-id: YzhhZDlj
  token-secret: MmU0ZDYxMGNmM2U3NDI2ZQ==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: bootstrap-token-c8ad9c
  namespace: kube-system
  resourceVersion: "1408"
  uid: 9b77f873-d449-4ab2-aed7-fce9c32bdb21
type: bootstrap.kubernetes.io/token

Decode token-id; it matches the prefix of token=c8ad9c.2e4d610cf3e7426e in the kubeconfig

[root@k8s-master01 kubernetes]# echo "YzhhZDlj" | base64 -d
c8ad9c

Decode token-secret; it matches the suffix of token=c8ad9c.2e4d610cf3e7426e in the kubeconfig

[root@k8s-master01 kubernetes]# echo "MmU0ZDYxMGNmM2U3NDI2ZQ==" | base64 -d
2e4d610cf3e7426e
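Putting the two decoded fields together reproduces the full token from the kubeconfig:

```shell
# Reassemble the bootstrap token from the secret's base64-encoded fields.
token_id="$(echo 'YzhhZDlj' | base64 -d)"
token_secret="$(echo 'MmU0ZDYxMGNmM2U3NDI2ZQ==' | base64 -d)"
echo "${token_id}.${token_secret}"   # -> c8ad9c.2e4d610cf3e7426e
```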

Only after this validation succeeds does the flow continue

b) The secret contains a field auth-extra-groups. The apiserver maps the token to a username of the form system:bootstrap:<token-id>, which belongs to the system:bootstrappers group. That group may request CSRs: its permissions are bound to a ClusterRole named system:node-bootstrapper

Decode auth-extra-groups to see the group names

[root@k8s-master01 kubernetes]# echo "c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz" | base64 -d
system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

  • A ClusterRole is cluster-level RBAC; it applies across the whole k8s cluster
  • A ClusterRoleBinding binds a ClusterRole to a user, group, or ServiceAccount

View the clusterrole

[root@k8s-master01 kubernetes]# kubectl get clusterrole
NAME                                                                   CREATED AT
system:node-bootstrapper                                               2021-07-09T09:34:55Z

View system:node-bootstrapper

[root@k8s-master01 kubernetes]# kubectl get clusterrole system:node-bootstrapper -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-07-09T09:34:55Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node-bootstrapper
  resourceVersion: "96"
  uid: b44bed52-fce4-4cf3-b3b1-38f887749d70
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests # grants CSR permissions
  verbs:
  - create
  - get
  - list
  - watch

View the clusterrolebinding

[root@k8s-master01 kubernetes]# kubectl get clusterrolebinding kubelet-bootstrap -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: kubelet-bootstrap
  resourceVersion: "1409"
  uid: 40de9025-a8f6-41c2-be85-763daca80bb2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token

This clusterrolebinding binds the ClusterRole named system:node-bootstrapper to the Group named system:bootstrappers:default-node-token

Confirm the Group matches by decoding auth-extra-groups again

[root@k8s-master01 kubernetes]# echo "c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz" | base64 -d
system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

c) A CSR is essentially an application form used to request a certificate.

6. With the authentication above, kubelet now has permission to create and retrieve CSRs

7. Kubelet creates a CSR for itself with the signer name kubernetes.io/kube-apiserver-client-kubelet

8. The CSR can be approved in two ways:

a) A k8s administrator issues the certificate manually with kubectl

b) If the relevant permissions are configured, kube-controller-manager approves it automatically.

i. Controller-manager has a CSRApprovingController. It checks whether the username and group on the kubelet's CSR have permission to create CSRs, and verifies that the signer is kubernetes.io/kube-apiserver-client-kubelet

ii. Controller-manager approves the CSR request

Bootstrapping: how automatic certificate renewal works

9. Once the CSR is approved, controller-manager creates the kubelet's certificate file

10. Controller-manager writes the certificate into the CSR's status field

11. Kubelet fetches the certificate from the apiserver

12. Kubelet builds kubelet.kubeconfig from the fetched key and certificate file

13. Kubelet finishes starting and works normally

14. Optional: if automatic renewal is configured, kubelet uses the existing kubeconfig to request a new certificate as the old one approaches expiry, effectively renewing the lease.

15. The new certificate is approved or signed, depending on the configuration

View node-autoapprove-certificate-rotation

[root@k8s-master01 kubernetes]# kubectl get clusterrolebinding node-autoapprove-certificate-rotation -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: node-autoapprove-certificate-rotation
  resourceVersion: "1411"
  uid: d7c618c8-0860-4d03-949f-4e2bbd33659a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes

View the ClusterRole named system:certificates.k8s.io:certificatesigningrequests:selfnodeclient

[root@k8s-master01 kubernetes]# kubectl get clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeclient -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-07-09T09:34:55Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  resourceVersion: "103"
  uid: d5215b54-8dd4-47ba-9e6e-bf664eb56dce
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/selfnodeclient # permission to renew its own client certificate
  verbs:
  - create

a) CSRs created by the kubelet belong to the organization system:nodes

b) CN (comparable to a domain name): system:nodes:<hostname>
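A CSR with that subject can be reproduced with openssl to see what the kubelet actually submits (node name k8s-node02 is taken from this cluster; this sketch only shows the X.509 subject, while the real CSR is submitted through the Kubernetes API):

```shell
# Build a CSR with the kubelet-style subject: O=system:nodes, CN=system:nodes:<hostname>.
NODE="k8s-node02"
openssl req -new -newkey rsa:2048 -nodes -keyout /tmp/kubelet-demo.key \
  -out /tmp/kubelet-demo.csr -subj "/O=system:nodes/CN=system:nodes:${NODE}" 2>/dev/null
openssl req -in /tmp/kubelet-demo.csr -noout -subject
```

The organization (O) maps to the group and the CN to the username, which is exactly what the selfnodeclient rule above matches on.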

Course link (DM the author for extras)

http://www.kubeasy.com/

Creative Commons license

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Reposting and reuse are welcome, but keep the author attribution 鄭子銘 (including the link: http://www.cnblogs.com/MingsonZheng/), do not use the work commercially, and release derivative works under the same license.
