K8S Useful Tools Part 4 - Handy kubectl Plugins

Posted by 東風微鳴 on 2023-03-06

Opening

Preface

  • Sharpening the axe won't delay the woodcutting
  • To do a good job, one must first sharpen one's tools

In "K8S Useful Tools Part 1 - How to Merge Multiple kubeconfigs?", we introduced krew, kubectl's plugin manager. Following on from that, this post introduces several handy kubectl plugins.

Handy kubectl Plugins

access-matrix

Shows an RBAC access matrix for server resources.

Have you ever wondered what access permissions you have on a given Kubernetes cluster? For a single resource you can use `kubectl auth can-i list deployments`, but maybe you are looking for a complete overview? That is exactly what this plugin does: it lists the access rights of the current user across all server resources, similar to `kubectl auth can-i --list`.

Installation

kubectl krew install access-matrix

Usage

  Review access to cluster-scoped resources
   $ kubectl access-matrix

  Review access to namespaced resources in 'default'
   $ kubectl access-matrix --namespace default

  Review access as a different user
   $ kubectl access-matrix --as other-user

  Review access as a service-account
   $ kubectl access-matrix --sa kube-system:namespace-controller

  Review access for different verbs
   $ kubectl access-matrix --verbs get,watch,patch

  Review access rights diff with another service account
   $ kubectl access-matrix --diff-with sa=kube-system:namespace-controller

The output looks like this:

 access-matrix

ca-cert

Prints the current cluster's CA certificate in PEM format.

Installation

kubectl krew install ca-cert

Usage

kubectl ca-cert

kubectl ca-cert
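For context, this is roughly the `certificate-authority-data` field of your kubeconfig, base64-decoded; a minimal sketch of that decode step is below (the stand-in value keeps it runnable without a cluster, and the commented command is the real-world equivalent):

```shell
# Roughly equivalent, without the plugin (assumes the CA is embedded in your
# kubeconfig as certificate-authority-data):
#   kubectl config view --raw \
#     -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d
# Self-contained demo of the decode step, using a stand-in value:
ca_data=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
printf '%s\n' "$ca_data" | base64 -d
```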

cert-manager

This one needs no introduction: the famous cert-manager, which manages certificate resources inside the cluster.

cert-manager

It requires cert-manager to be installed in the K8S cluster. I'll cover it in detail another time.

kubectl cert-manager help

cost

View cluster cost information.

kubectl-cost is a kubectl plugin that provides easy CLI access to Kubernetes cost allocation metrics via the Kubecost API. It lets developers, DevOps engineers, and others quickly determine the cost and efficiency of Kubernetes workloads.

Installation

  1. Install Kubecost (the Helm options are documented in cost-analyzer-helm-chart):

    helm repo add kubecost https://kubecost.github.io/cost-analyzer/
    helm upgrade -i --create-namespace kubecost kubecost/cost-analyzer --namespace kubecost --set kubecostToken="a3ViZWN0bEBrdWJlY29zdC5jb20=xm343yadf98"
    

    Once the deployment completes, you should see:

    NAME: kubecost
    LAST DEPLOYED: Sat Nov 27 13:44:30 2021
    NAMESPACE: kubecost
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    --------------------------------------------------Kubecost has been successfully installed. When pods are Ready, you can enable port-forwarding with the following command:
    
        kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
    
    Next, navigate to http://localhost:9090 in a web browser.
    
    Having installation issues? View our Troubleshooting Guide at http://docs.kubecost.com/troubleshoot-install
    
  2. Install kubectl cost

    kubectl krew install cost
    

Usage

The simplest way to use it is through a browser:

kubecost UI
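Incidentally, the `kubecostToken` passed to Helm above appears to be a base64-encoded e-mail address followed by an opaque suffix (this is an observation about the demo value, not an official Kubecost contract). Decoding the part up to the `=` padding shows what was registered:

```shell
token='a3ViZWN0bEBrdWJlY29zdC5jb20=xm343yadf98'
# Keep everything up to the first '=', restore the padding, and decode:
printf '%s\n' "${token%%=*}=" | base64 -d
```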

ctx

Switch between contexts in your kubeconfig.

Installation

kubectl krew install ctx

Usage

Usage is simple: run kubectl ctx and choose the context to switch to.

$ kubectl ctx
Switched to context "multicloud-k3s".

deprecations

Checks the cluster for deprecated objects. Typically used as a pre-flight check before upgrading K8S. Also known as KubePug.

KubePug

Installation

kubectl krew install deprecations

Usage

Usage is simple: run kubectl deprecations and, as shown below, it reports which APIs are deprecated, which helps with planning the K8S upgrade.

$ kubectl deprecations
W1127 16:04:58.641429   28561 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1127 16:04:58.664058   28561 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W1127 16:04:59.622247   28561 warnings.go:70] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1127 16:05:00.777598   28561 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1127 16:05:00.808486   28561 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
RESULTS:
Deprecated APIs:

PodSecurityPolicy found in policy/v1beta1
         ├─ PodSecurityPolicy governs the ability to make requests that affect the Security Context that will be applied to a pod and container. Deprecated in 1.21.
                -> GLOBAL: kube-prometheus-stack-admission
                -> GLOBAL: loki-grafana-test
                -> GLOBAL: loki-promtail
                -> GLOBAL: loki
                -> GLOBAL: loki-grafana
                -> GLOBAL: prometheus-operator-grafana-test
                -> GLOBAL: prometheus-operator-alertmanager
                -> GLOBAL: prometheus-operator-grafana
                -> GLOBAL: prometheus-operator-prometheus
                -> GLOBAL: prometheus-operator-prometheus-node-exporter
                -> GLOBAL: prometheus-operator-kube-state-metrics
                -> GLOBAL: prometheus-operator-operator
                -> GLOBAL: kubecost-grafana
                -> GLOBAL: kubecost-cost-analyzer-psp

ComponentStatus found in /v1
         ├─ ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+
                -> GLOBAL: controller-manager
                -> GLOBAL: scheduler


Deleted APIs:

It can also be integrated into a CI pipeline:

$ kubectl deprecations --input-file=./deployment/ --error-on-deleted --error-on-deprecated

df-pv

Installation

kubectl krew install df-pv

Usage

Run kubectl df-pv to get a df-style view of persistent volume usage:

kubectl df-pv

get-all

Truly get all of Kubernetes' resources.

Installation

kubectl krew install get-all

Usage

Simply run kubectl get-all; sample output:

kubectl get-all
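For context, plain `kubectl get all` only covers a curated subset of resource types; get-all effectively iterates over everything the API server can list. A rough manual equivalent is sketched below, with `echo` standing in for real execution (and a hard-coded list standing in for `kubectl api-resources --verbs=list -o name`) so it runs without a cluster:

```shell
# Stand-in for: kubectl api-resources --verbs=list -o name
resources='pods
services
configmaps'
# On a live cluster, drop the `echo` and use the real api-resources list:
printf '%s\n' "$resources" | xargs -n1 echo kubectl get --ignore-not-found
```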

images

Shows the container images used in the cluster.

Installation

kubectl krew install images

Usage

Run kubectl images -A; the result looks like this:

kubectl images

kubesec-scan

Scans Kubernetes resources using kubesec.io.

Installation

kubectl krew install kubesec-scan

Usage

Example:

$ kubectl kubesec-scan statefulset loki -n loki-stack
scanning statefulset loki in namespace loki-stack
kubesec.io score: 4
-----------------
Advise
1. .spec .volumeClaimTemplates[] .spec .accessModes | index("ReadWriteOnce")
2. containers[] .securityContext .runAsNonRoot == true
Force the running image to run as a non-root user to ensure least privilege
3. containers[] .securityContext .capabilities .drop
Reducing kernel capabilities available to a container limits its attack surface
4. containers[] .securityContext .runAsUser > 10000
Run as a high-UID user to avoid conflicts with the host's user table
5. containers[] .securityContext .capabilities .drop | index("ALL")
Drop all capabilities and add only those required to reduce syscall attack surface

neat

Removes clutter from Kubernetes output to make it more readable.

Installation

kubectl krew install neat

Usage

In the example below, fields we usually don't care about, such as creationTimestamp and managedFields, are stripped away. Much cleaner:

$ kubectl neat get -- pod loki-0 -oyaml -n loki-stack
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: b9ab988df734dccd44833416670e70085a2a31cfc108e68605f22d3a758f50b5
    prometheus.io/port: http-metrics
    prometheus.io/scrape: "true"
  labels:
    app: loki
    controller-revision-hash: loki-79684c849
    name: loki
    release: loki
    statefulset.kubernetes.io/pod-name: loki-0
  name: loki-0
  namespace: loki-stack
spec:
  containers:
  - args:
    - -config.file=/etc/loki/loki.yaml
    image: grafana/loki:2.3.0
    livenessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    name: loki
    ports:
    - containerPort: 3100
      name: http-metrics
    readinessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - mountPath: /etc/loki
      name: config
    - mountPath: /data
      name: storage
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jhsvm
      readOnly: true
  hostname: loki-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  securityContext:
    fsGroup: 10001
    runAsGroup: 10001
    runAsNonRoot: true
    runAsUser: 10001
  serviceAccountName: loki
  subdomain: loki-headless
  terminationGracePeriodSeconds: 4800
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: config
    secret:
      secretName: loki
  - name: storage
  - name: kube-api-access-jhsvm
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.namespace
            path: namespace

node-shell

Spawns a root shell on a node via kubectl.

Installation

kubectl krew install node-shell

Usage

Example:

$ kubectl node-shell instance-ykx0ofns
spawning "nsenter-fr393w" on "instance-ykx0ofns"
If you don't see a command prompt, try pressing enter.
root@instance-ykx0ofns:/# hostname
instance-ykx0ofns
root@instance-ykx0ofns:/# ifconfig
...
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.64.4  netmask 255.255.240.0  broadcast 192.168.79.255
        inet6 fe80::f820:20ff:fe16:3084  prefixlen 64  scopeid 0x20<link>
        ether fa:20:20:16:30:84  txqueuelen 1000  (Ethernet)
        RX packets 24386113  bytes 26390915146 (26.3 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18840452  bytes 3264860766 (3.2 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
...
root@instance-ykx0ofns:/# exit
logout
pod default/nsenter-fr393w terminated (Error)
pod "nsenter-fr393w" deleted
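Behind the scenes, node-shell works roughly like the pod spec below (a sketch; the pod name and image are illustrative, the plugin generates and ships its own): a privileged pod pinned to the target node that uses nsenter to join the host's namespaces as PID 1 sees them.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nsenter-demo           # node-shell generates a name like nsenter-fr393w
spec:
  nodeName: instance-ykx0ofns  # the node you asked for
  hostPID: true
  restartPolicy: Never
  containers:
  - name: shell
    image: docker.io/library/alpine  # illustrative; the plugin has its own default
    command: ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid", "--", "sh", "-l"]
    securityContext:
      privileged: true
    stdin: true
    tty: true
```

This also explains the requirement implied by the transcript above: the shell is root on the node, so RBAC permission to create privileged pods is needed.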

ns

Switches the active Kubernetes namespace.

Installation

kubectl krew install ns

Usage

$ kubectl ns loki-stack
Context "multicloud-k3s" modified.
Active namespace is "loki-stack".

$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
loki-promtail-fbbjj            1/1     Running   0          12d
loki-promtail-sx5gj            1/1     Running   0          12d
loki-0                         1/1     Running   0          12d
loki-grafana-8bffbb679-szdpj   1/1     Running   0          12d
loki-promtail-hmc26            1/1     Running   0          12d
loki-promtail-xvnbc            1/1     Running   0          12d
loki-promtail-5d5h8            1/1     Running   0          12d

outdated

Finds outdated container images running in the cluster.

Installation

kubectl krew install outdated

Usage

$ kubectl outdated
Image                                                  Current             Latest             Behind
index.docker.io/rancher/klipper-helm                   v0.6.6-build20211022   0.6.8-build20211123   2
docker.io/rancher/klipper-helm                         v0.6.4-build20210813   0.6.8-build20211123   4
docker.io/alekcander/k3s-flannel-fixer                 0.0.2               0.0.2              0
docker.io/rancher/metrics-server                       v0.3.6              0.4.1              1
docker.io/rancher/coredns-coredns                      1.8.3               1.8.3              0
docker.io/rancher/library-traefik                      2.4.8               2.4.9              1
docker.io/rancher/local-path-provisioner               v0.0.19             0.0.20             1
docker.io/grafana/promtail                             2.1.0               2.4.1              5
docker.io/grafana/loki                                 2.3.0               2.4.1              2
quay.io/kiwigrid/k8s-sidecar                           1.12.3              1.14.2             5
docker.io/grafana/grafana                              8.1.6               8.3.0-beta1        8
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v0.18.1             1.3.0              9
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v0.20.0             0.23.0             5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v0.0.1              0.0.1              0
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v1.9.4              2.0.0-beta         5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v2.15.2             2.31.1             38
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v0.35.0             0.42.1             11
docker.io/kiwigrid/k8s-sidecar                         0.1.20              1.14.2             46
docker.io/grafana/grafana                              6.5.2               8.3.0-beta1        75
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...     v0.35.0             0.42.1             12
docker.io/squareup/ghostunnel                          v1.5.2              1.5.2              0
docker.io/grafana/grafana                              8.1.2               8.3.0-beta1        12
docker.io/kiwigrid/k8s-sidecar                         1.12.3              1.14.2             5
docker.io/prom/prometheus                              v2.22.2             2.31.1             21

popeye

Scans the cluster for potential resource issues. This is the same Popeye that K9s uses.

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what is actually deployed, not what sits on disk. By scanning the cluster, it detects misconfigurations and helps you ensure best practices are in place, avoiding future headaches. It aims to reduce the cognitive overload of operating a Kubernetes cluster in the wild. Furthermore, if your cluster uses a metrics server, it reports over- and under-allocated resources and tries to warn you when the cluster runs out of capacity.

Popeye is a read-only tool: it does not alter any of your Kubernetes resources in any way!

Installation

kubectl krew install popeye

Usage

For example:

❯ kubectl popeye

 ___     ___ _____   _____                                                      K          .-'-.
| _ \___| _ \ __\ \ / / __|                                                      8     __|      `\
|  _/ _ \  _/ _| \ V /| _|                                                        s   `-,-`--._   `\
|_| \___/_| |___| |_| |___|                                                      []  .->'  a     `|-'
  Biffs`em and Buffs`em!                                                          `=/ (__/_       /
                                                                                    \_,    `    _)
                                                                                       `----;  |




DAEMONSETS (1 SCANNED)                                                         ? 0 ? 1 ? 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · loki-stack/loki-promtail.......................................................................?
    ? [POP-404] Deprecation check failed. Unable to assert resource version.
    ? promtail
      ? [POP-106] No resources requests/limits defined.


DEPLOYMENTS (1 SCANNED)                                                        ? 0 ? 1 ? 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · loki-stack/loki-grafana........................................................................?
    ? [POP-404] Deprecation check failed. Unable to assert resource version.
    ? grafana
      ? [POP-106] No resources requests/limits defined.
    ? grafana-sc-datasources
      ? [POP-106] No resources requests/limits defined.



PODS (7 SCANNED)                                                               ? 0 ? 7 ? 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · loki-stack/loki-0..............................................................................?
    ? [POP-206] No PodDisruptionBudget defined.
    ? [POP-301] Connects to API Server? ServiceAccount token is mounted.
    ? loki
      ? [POP-106] No resources requests/limits defined.
  · loki-stack/loki-grafana-8bffbb679-szdpj........................................................?
    ? [POP-206] No PodDisruptionBudget defined.
    ? [POP-301] Connects to API Server? ServiceAccount token is mounted.
    ? grafana
      ? [POP-106] No resources requests/limits defined.
      ? [POP-105] Liveness probe uses a port#, prefer a named port.
      ? [POP-105] Readiness probe uses a port#, prefer a named port.
    ? grafana-sc-datasources
      ? [POP-106] No resources requests/limits defined.
  · loki-stack/loki-promtail-5d5h8.................................................................?
    ? [POP-206] No PodDisruptionBudget defined.
    ? [POP-301] Connects to API Server? ServiceAccount token is mounted.
    ? [POP-302] Pod could be running as root user. Check SecurityContext/image.
    ? promtail
      ? [POP-106] No resources requests/limits defined.
      ? [POP-103] No liveness probe.
      ? [POP-306] Container could be running as root user. Check SecurityContext/Image.

SUMMARY
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
Your cluster score: 80 -- B
                                                                                o          .-'-.
                                                                                 o     __| B    `\
                                                                                  o   `-,-`--._   `\
                                                                                 []  .->'  a     `|-'
                                                                                  `=/ (__/_       /
                                                                                    \_,    `    _)
                                                                                       `----;  |

resource-capacity

Provides an overview of resource requests, limits, and utilization.

This is a simple CLI that provides an overview of resource requests, limits, and utilization in a Kubernetes cluster. It attempts to combine the best parts of the output of kubectl top and kubectl describe into an easy-to-use CLI focused on cluster resources.

Installation

kubectl krew install resource-capacity

Usage

The example below looks at nodes; it can also show pods, filter by label, sort the output, and more.

$ kubectl resource-capacity
NODE                       CPU REQUESTS   CPU LIMITS   MEMORY REQUESTS   MEMORY LIMITS
*                          710m (14%)     300m (6%)    535Mi (6%)        257Mi (3%)
09b2brd7robnn5zi-1106883   0Mi (0%)       0Mi (0%)     0Mi (0%)          0Mi (0%)
hecs-348550                100m (10%)     100m (10%)   236Mi (11%)       27Mi (1%)
instance-wy7ksibk          310m (31%)     0Mi (0%)     174Mi (16%)       0Mi (0%)
instance-ykx0ofns          200m (20%)     200m (20%)   53Mi (5%)         53Mi (5%)
izuf656om146vu1n6pd6lpz    100m (10%)     0Mi (0%)     74Mi (3%)         179Mi (8%)
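A few of the other views mentioned above (flag names taken from the plugin's README, so treat this as a sketch); `echo` is used so the lines run without a cluster — drop it to execute for real:

```shell
echo kubectl resource-capacity --pods                  # break numbers down per pod
echo kubectl resource-capacity --util                  # add live utilization (needs metrics-server)
echo kubectl resource-capacity --sort cpu.util         # sort rows by CPU utilization
echo kubectl resource-capacity --pod-labels app=loki   # filter pods by label
```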

score

Static code analysis for Kubernetes manifests.

Installation

kubectl krew install score

Usage

It can also be integrated into CI. Example:

kubectl score

sniff

Highly recommended. It once helped me analyze a pod networking problem. It starts a remote packet capture on a pod using tcpdump and Wireshark.

Installation

kubectl krew install sniff

Usage

kubectl < 1.12:
kubectl plugin sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

kubectl >= 1.12:
kubectl sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

For example:

kubectl sniff

starboard

Another security scanning tool.

Installation

kubectl krew install starboard

Usage

kubectl starboard report deployment/nginx > nginx.deploy.html

This generates a security report:

starboard

tail - kubernetes tail

A Kubernetes tail: streams the logs of every container of every matching pod. Pods can be matched by service, replicaset, deployment, and so on. It adapts to a changing cluster: as pods fall in or out of the selection, they are added to or removed from the log stream.

Installation

kubectl krew install tail

Usage

  # Match all pods
  $ kubectl tail

  # Match all pods in the staging namespace
  $ kubectl tail --ns staging

  # Match pods of the replicaset named 'workers' in any namespace
  $ kubectl tail --rs workers

  # Match pods of the replicaset named 'workers' in the staging namespace
  $ kubectl tail --rs staging/workers

  # Match pods that belong to deployment 'webapp' AND service 'frontend'
  $ kubectl tail --svc frontend --deploy webapp

Sample output is shown below; each line is prefixed with the pod it came from:

tail output

trace

Traces Kubernetes pods and nodes with system tools.

kubectl trace is a kubectl plugin that lets you schedule the execution of bpftrace programs in your Kubernetes cluster.

Installation

kubectl krew install trace

Usage

I'm not very familiar with this one, so I won't comment much on it.

kubectl trace run ip-180-12-0-152.ec2.internal -f read.bt

tree

A kubectl plugin for exploring ownership relationships between Kubernetes objects via their ownerReferences.

Installation

Install with the krew plugin manager:

kubectl krew install tree
kubectl tree --help

使用

DaemonSet example:

DaemonSet  Tree

Knative Service example:

Knative Service
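As a reminder of what tree is actually walking: every Kubernetes object carries a metadata.ownerReferences list pointing at its owner(s). The fragment below is an illustrative example of what a StatefulSet-owned pod looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: loki-0
  ownerReferences:
  - apiVersion: apps/v1
    kind: StatefulSet
    name: loki
    controller: true
    blockOwnerDeletion: true
```

tree follows these references in reverse, from owner down to owned objects, to draw the hierarchy.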

tunnel

A reverse tunnel between the cluster and your own machine.

It lets you expose your machine as a service in the cluster, or expose it to a specific deployment. The goal of the project is to provide a complete solution to this particular problem (accessing a local machine from a Kubernetes pod).

Installation

kubectl krew install tunnel

Usage

The following command lets pods in the cluster reach your local web application (listening on port 8000) over HTTP (i.e., a Kubernetes app can send requests to myapp:8000):

ktunnel expose myapp 80:8000
ktunnel expose myapp 80:8000 -r # the deployment & service are reused if they exist, otherwise created

warp

Syncs local files into a pod and executes them there.

A kubectl plugin that works like kubectl run combined with rsync.

It creates a temporary pod, syncs your local files into the target container, and executes any command.

This can be used, for example, to build and run your local project inside Kubernetes, where more resources or the required architecture are available, while editing with your preferred editor locally.

Installation

kubectl krew install warp

Usage

# Start bash in an ubuntu image, syncing the files in the current directory into the container
kubectl warp -i -t --image ubuntu testing -- /bin/bash

# Start a Node.js project in a node container
cd examples/nodejs
kubectl warp -i -t --image node testing-node -- npm run watch

who-can

Shows who has RBAC permissions to access Kubernetes resources.

Installation

kubectl krew install who-can

Usage

$ kubectl who-can create ns all-namespaces
No subjects found with permissions to create ns assigned through RoleBindings

CLUSTERROLEBINDING            SUBJECT           TYPE            SA-NAMESPACE
cluster-admin                 system:masters    Group
helm-kube-system-traefik-crd  helm-traefik-crd  ServiceAccount  kube-system
helm-kube-system-traefik      helm-traefik      ServiceAccount  kube-system

EOF

Among three people walking together, one is bound to be my teacher; knowledge shared belongs to all. Written by the EWhisper.cn tech blog (東風微鳴).
