In the previous post we discussed the use of CRD resources on k8s; for a refresher, see: https://www.cnblogs.com/qiuhom-1874/p/14267400.html. Today we look at the second extension mechanism of k8s, the custom apiserver, along with the related APIService resource.
Before we get into custom apiservers, let's first look at the native k8s apiserver. The apiserver is essentially an HTTPS server: with the kubectl tool we issue HTTPS requests to the apiserver to create, delete, view resources, and so on. Each request maps to a RESTful request method, and each resource maps to a URL path in the HTTP protocol. For example, to create a pod, kubectl sends a POST request submitting the resource definition to the apiserver; the pod resource corresponds to a specific pod, in a specific namespace, under a specific version of a specific API group.
How the apiserver organizes resources
Tip: when a client accesses the apiserver, resources are organized as shown above. For example, a pod in the default namespace is addressed by the path /api/v1/namespaces/default/pods/mypod (the core group is served under the legacy /api prefix, while named groups live under /apis/<group>/<version>); the resource part of the path encodes the namespace and the resource type.
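The path convention above can be sketched with a small helper. This is a hypothetical illustration only (the real apiserver builds these URLs internally): the core group (the empty group name) uses the legacy /api prefix, and every named group lives under /apis/<group>.

```python
def resource_path(group, version, namespace, resource, name):
    """Build the REST path the apiserver serves for a namespaced resource.

    The core group ("") is special-cased to the legacy /api prefix;
    every other group is served under /apis/<group>.
    """
    prefix = "/api" if group == "" else f"/apis/{group}"
    return f"{prefix}/{version}/namespaces/{namespace}/{resource}/{name}"

# A pod in the core group:
print(resource_path("", "v1", "default", "pods", "mypod"))
# -> /api/v1/namespaces/default/pods/mypod

# A deployment in the apps group:
print(resource_path("apps", "v1", "default", "deployments", "mydeploy"))
# -> /apis/apps/v1/namespaces/default/deployments/mydeploy
```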
Composition of the native k8s apiserver
The native k8s apiserver consists of two components. The first, the aggregator, works much like a web proxy server; the second is the apiserver proper. The workflow: a user request first reaches the aggregator, which routes it to the appropriate apiserver based on the requested resource. In short, the aggregator's job is to route user requests. By default, the aggregator routes all requests to the built-in apiserver. If we want a custom apiserver, we must register it on the aggregator using an APIService resource, so that requests for its resources get routed to the custom apiserver for a response, as shown below.
Tip: the apiserver is the single entry point into k8s; by default, every client operation is sent to the apiserver for a response. For a custom apiserver to be reachable by clients, the routing information in the aggregator component of the built-in apiserver must route accesses for the matching paths to it. That routing information is defined by the built-in APIService resource. Put simply, an APIService resource defines a route on the aggregator of the native apiserver: it routes access to a given endpoint to the corresponding apiserver.
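The aggregator's routing decision described above can be sketched as a lookup table. This is a hypothetical sketch, not the real implementation: requests whose group/version matches a registered APIService are proxied to that APIService's backing Service, and everything else falls through to the built-in apiserver.

```python
# Routes registered via APIService objects: (group, version) -> backing Service.
routes = {
    ("metrics.k8s.io", "v1beta1"): "kube-system/metrics-server",
}

def route(group, version):
    """Return the backend that should serve a request for group/version."""
    return routes.get((group, version), "built-in apiserver")

print(route("metrics.k8s.io", "v1beta1"))  # -> kube-system/metrics-server
print(route("apps", "v1"))                 # -> built-in apiserver
```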
Viewing the group/version information of the native apiserver
[root@master01 ~]# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
crd.projectcalico.org/v1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
extensions/v1beta1
flowcontrol.apiserver.k8s.io/v1beta1
mongodb.com/v1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
stable.example.com/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@master01 ~]#
Tip: only the group/versions listed above can be reached by clients; that is, a client can only access resources in these group/versions, and any group/version not listed is definitely unreachable.
Using the APIService resource
Example: creating an APIService resource
[root@master01 ~]# cat apiservice-demo.yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v2beta1.auth.ilinux.io
spec:
  insecureSkipTLSVerify: true
  group: auth.ilinux.io
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: auth-api
    namespace: default
  version: v2beta1
[root@master01 ~]#
Tip: the APIService resource belongs to the apiregistration.k8s.io/v1 group, and its kind is APIService. The spec.insecureSkipTLSVerify field controls whether to skip TLS verification, i.e. not verify the backing server's HTTPS certificate: true skips verification, false enforces it. The group field names the API group of the custom apiserver; groupPriorityMinimum sets the priority of that group; versionPriority sets the priority of this version within the group. The service field routes requests for this group/version to a Service, namely the Service fronting the custom apiserver; the version field names which version of the group this apiserver serves. The manifest above registers the endpoint auth.ilinux.io/v2beta1 on the aggregator, with the auth-api Service in the default namespace as the backing apiserver: any client access to resources under auth.ilinux.io/v2beta1 is routed to that Service for a response.
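The two priority fields can be confusing, so here is a hypothetical sketch (illustration only, using the names from the manifest above) of how they order API discovery: groups sort by groupPriorityMinimum, and versions within one group sort by versionPriority, higher values first in both cases.

```python
# Two hypothetical APIService registrations in the same group.
services = [
    {"group": "auth.ilinux.io", "version": "v2beta1",
     "groupPriorityMinimum": 1000, "versionPriority": 15},
    {"group": "auth.ilinux.io", "version": "v1",
     "groupPriorityMinimum": 1000, "versionPriority": 20},
]

# Within a group, higher versionPriority sorts first.
versions = sorted((s for s in services if s["group"] == "auth.ilinux.io"),
                  key=lambda s: s["versionPriority"], reverse=True)
print([v["version"] for v in versions])  # -> ['v1', 'v2beta1']
```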
Applying the manifest
[root@master01 ~]# kubectl apply -f apiservice-demo.yaml
apiservice.apiregistration.k8s.io/v2beta1.auth.ilinux.io created
[root@master01 ~]# kubectl get apiservice |grep auth.ilinux.io
v2beta1.auth.ilinux.io    default/auth-api   False (ServiceNotFound)   16s
[root@master01 ~]# kubectl api-versions |grep auth.ilinux.io
auth.ilinux.io/v2beta1
[root@master01 ~]#
Tip: after applying the manifest, the corresponding endpoint shows up in api-versions.
The manifest above only demonstrates how an APIService resource is used; applying it has no practical effect, because there is no matching Service in that namespace and no custom apiserver behind it (hence the ServiceNotFound status). In practice, we use the APIService resource to integrate a real custom apiserver into the native apiserver. Next, let's deploy a genuine custom apiserver.
Deploying metrics-server
metrics-server is a third-party apiserver that extends k8s. Its main job is to collect core metrics such as CPU and memory usage from pods and nodes, and to expose an API endpoint that the kubectl top command consumes. By default, kubectl top does not work, because the stock apiserver has no endpoint that serves pod/node CPU and memory metrics. kubectl top displays the CPU and memory consumption of pods and nodes; for the command to work, it depends on the Metrics API.
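As context for reading the kubectl top output later, here is a hypothetical helper (illustration only, not part of metrics-server) that parses the quantities kubectl top prints: CPU in millicores such as "235m", and memory in Mi such as "1216Mi".

```python
def parse_cpu_millicores(s):
    """Parse a CPU quantity: '235m' -> 235 millicores, '2' -> 2000 millicores."""
    return int(s[:-1]) if s.endswith("m") else int(float(s) * 1000)

def parse_mem_mib(s):
    """Parse a memory quantity in Mi: '1216Mi' -> 1216."""
    assert s.endswith("Mi")
    return int(s[:-2])

print(parse_cpu_millicores("235m"))  # -> 235
print(parse_cpu_millicores("2"))     # -> 2000
print(parse_mem_mib("1216Mi"))       # -> 1216
```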
Without metrics-server deployed, using kubectl top pod/node to view pod or node CPU and memory usage:
[root@master01 ~]# kubectl top
Display Resource (CPU/Memory/Storage) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@master01 ~]# kubectl top pod
error: Metrics API not available
[root@master01 ~]# kubectl top node
error: Metrics API not available
[root@master01 ~]#
Tip: with no metrics-server deployed, the kubectl top pod/node commands report that the Metrics API is not available.
Deploying metrics-server
Downloading the deployment manifest
[root@master01 ~]# mkdir metrics-server
[root@master01 ~]# cd metrics-server
[root@master01 metrics-server]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
--2021-01-14 23:54:30--  https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
Resolving github.com (github.com)... 52.74.223.119
Connecting to github.com (github.com)|52.74.223.119|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream [following]
--2021-01-14 23:54:32--  https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/c700f080-1f7e-11eb-9e30-864a63f442f4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210114%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210114T155432Z&X-Amz-Expires=300&X-Amz-Signature=fc5a6f41ca50ec22e87074a778d2cb35e716ae6c3231afad17dfaf8a02203e35&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.39.44
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.39.44|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3962 (3.9K) [application/octet-stream]
Saving to: ‘components.yaml’

100%[===========================================================================================>] 3,962       11.0KB/s   in 0.4s

2021-01-14 23:54:35 (11.0 KB/s) - ‘components.yaml’ saved [3962/3962]

[root@master01 metrics-server]# ls
components.yaml
[root@master01 metrics-server]#
Modifying the deployment manifest
[root@master01 metrics-server]# cat components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
[root@master01 metrics-server]#
Tip: in the Deployment, the --kubelet-insecure-tls option is added to spec.template.spec.containers.args, which tells metrics-server not to verify the kubelet's serving certificate. The manifest above runs metrics-server as a pod via a Deployment controller, grants the metrics-server ServiceAccount read-only access to pod/node resources, and registers metrics.k8s.io/v1beta1 on the native apiserver, so that client access to resources under metrics.k8s.io gets routed to the metrics-server Service for a response.
Applying the resource manifest
[root@master01 metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master01 metrics-server]#
Verification: does the native apiserver now expose metrics.k8s.io/v1beta1?
[root@master01 metrics-server]# kubectl api-versions|grep metrics
metrics.k8s.io/v1beta1
[root@master01 metrics-server]#
Tip: the metrics.k8s.io/v1beta1 group is now registered on the native apiserver.
Checking whether the metrics-server pod is running properly
[root@master01 metrics-server]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-744cfdf676-kh6rm   1/1     Running   4          5d7h
canal-5bt88                                2/2     Running   20         11d
canal-9ldhl                                2/2     Running   22         11d
canal-fvts7                                2/2     Running   20         11d
canal-mwtg4                                2/2     Running   23         11d
canal-rt8nn                                2/2     Running   21         11d
coredns-7f89b7bc75-k9gdt                   1/1     Running   32         37d
coredns-7f89b7bc75-kp855                   1/1     Running   31         37d
etcd-master01.k8s.org                      1/1     Running   36         37d
kube-apiserver-master01.k8s.org            1/1     Running   14         13d
kube-controller-manager-master01.k8s.org   1/1     Running   43         37d
kube-flannel-ds-fnd2w                      1/1     Running   5          5d5h
kube-flannel-ds-k9l4k                      1/1     Running   7          5d5h
kube-flannel-ds-s7w2j                      1/1     Running   4          5d5h
kube-flannel-ds-vm4mr                      1/1     Running   6          5d5h
kube-flannel-ds-zgq92                      1/1     Running   37         37d
kube-proxy-74fxn                           1/1     Running   10         10d
kube-proxy-fbl6c                           1/1     Running   8          10d
kube-proxy-n82sf                           1/1     Running   10         10d
kube-proxy-ndww5                           1/1     Running   11         10d
kube-proxy-v8dhk                           1/1     Running   11         10d
kube-scheduler-master01.k8s.org            1/1     Running   39         37d
metrics-server-58fcfcc9d-drbw2             1/1     Running   0          32s
[root@master01 metrics-server]#
Tip: the corresponding pod is running normally.
Checking whether the pod's logs look healthy
[root@master01 metrics-server]# kubectl logs metrics-server-58fcfcc9d-drbw2 -n kube-system
I0114 17:52:03.601493       1 serving.go:325] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
E0114 17:52:04.140587       1 pathrecorder.go:107] registered "/metrics" from goroutine 1 [running]:
runtime/debug.Stack(0x1942e80, 0xc00069aed0, 0x1bb58b5)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).trackCallers(0xc00028afc0, 0x1bb58b5, 0x8)
        /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/mux/pathrecorder.go:109 +0x86
k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).Handle(0xc00028afc0, 0x1bb58b5, 0x8, 0x1e96f00, 0xc0006d88d0)
        /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/mux/pathrecorder.go:173 +0x84
k8s.io/apiserver/pkg/server/routes.MetricsWithReset.Install(0xc00028afc0)
        /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/routes/metrics.go:43 +0x5d
k8s.io/apiserver/pkg/server.installAPI(0xc00000a1e0, 0xc000589b00)
        /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/config.go:711 +0x6c
k8s.io/apiserver/pkg/server.completedConfig.New(0xc000589b00, 0x1f099c0, 0xc0001449b0, 0x1bbdb5a, 0xe, 0x1ef29e0, 0x2cef248, 0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/apiserver@v0.19.2/pkg/server/config.go:657 +0xb45
sigs.k8s.io/metrics-server/pkg/server.Config.Complete(0xc000589b00, 0xc000599440, 0xc000599b00, 0xdf8475800, 0xc92a69c00, 0x0, 0x0, 0xdf8475800)
        /go/src/sigs.k8s.io/metrics-server/pkg/server/config.go:52 +0x312
sigs.k8s.io/metrics-server/cmd/metrics-server/app.runCommand(0xc00001c6e0, 0xc000114600, 0x0, 0x0)
        /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/app/start.go:66 +0x157
sigs.k8s.io/metrics-server/cmd/metrics-server/app.NewMetricsServerCommand.func1(0xc0000d9340, 0xc0005a4cd0, 0x0, 0x5, 0x0, 0x0)
        /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/app/start.go:37 +0x33
github.com/spf13/cobra.(*Command).execute(0xc0000d9340, 0xc00013a130, 0x5, 0x5, 0xc0000d9340, 0xc00013a130)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000d9340, 0xc00013a180, 0x0, 0x0)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main()
        /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/metrics-server.go:38 +0xae
I0114 17:52:04.266492       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0114 17:52:04.267021       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0114 17:52:04.266641       1 secure_serving.go:197] Serving securely on [::]:4443
I0114 17:52:04.266670       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0114 17:52:04.266682       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0114 17:52:04.266688       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 17:52:04.267120       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 17:52:04.266692       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0114 17:52:04.267301       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0114 17:52:04.367448       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0114 17:52:04.367472       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 17:52:04.367462       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
[root@master01 metrics-server]#
Tip: as long as the metrics-server pod logs show no errors or failed-registration messages, the container in the pod is running fine.
Verification: can kubectl top now report pod CPU and memory usage, i.e. does the command work?
[root@master01 metrics-server]# kubectl top node
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01.k8s.org   235m         11%    1216Mi          70%
node01.k8s.org     140m         3%     747Mi           20%
node02.k8s.org     120m         3%     625Mi           17%
node03.k8s.org     133m         3%     594Mi           16%
node04.k8s.org     125m         3%     700Mi           19%
[root@master01 metrics-server]# kubectl top pods -n kube-system
NAME                                       CPU(cores)   MEMORY(bytes)
calico-kube-controllers-744cfdf676-kh6rm   2m           23Mi
canal-5bt88                                50m          118Mi
canal-9ldhl                                22m          86Mi
canal-fvts7                                49m          106Mi
canal-mwtg4                                57m          113Mi
canal-rt8nn                                56m          113Mi
coredns-7f89b7bc75-k9gdt                   3m           12Mi
coredns-7f89b7bc75-kp855                   3m           15Mi
etcd-master01.k8s.org                      25m          72Mi
kube-apiserver-master01.k8s.org            99m          410Mi
kube-controller-manager-master01.k8s.org   14m          88Mi
kube-flannel-ds-fnd2w                      3m           45Mi
kube-flannel-ds-k9l4k                      3m           27Mi
kube-flannel-ds-s7w2j                      4m           46Mi
kube-flannel-ds-vm4mr                      3m           45Mi
kube-flannel-ds-zgq92                      2m           19Mi
kube-proxy-74fxn                           1m           27Mi
kube-proxy-fbl6c                           1m           23Mi
kube-proxy-n82sf                           1m           25Mi
kube-proxy-ndww5                           1m           25Mi
kube-proxy-v8dhk                           2m           23Mi
kube-scheduler-master01.k8s.org            3m           33Mi
metrics-server-58fcfcc9d-drbw2             6m           23Mi
[root@master01 metrics-server]#
Tip: kubectl top now works, which means metrics-server was deployed successfully.
That concludes this example of extending k8s with an APIService resource plus a custom apiserver. To sum up briefly, the main job of the APIService resource is to create routing entries on the aggregator; each entry routes access to a given endpoint to the Service backing the corresponding custom apiserver for a response.