Viewing the DaemonSets
As shown below, Kubernetes' own DaemonSets, kube-flannel-ds and kube-proxy, are responsible for running the flannel and kube-proxy components on every node, respectively.
A DaemonSet runs at most one replica on each node.
[machangwei@mcwk8s-master ~]$ kubectl get daemonset
No resources found in default namespace.
[machangwei@mcwk8s-master ~]$ kubectl get daemonset --namespace=kube-system
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel-ds   3         3         1       3            1           <none>                   5d
kube-proxy        3         3         3       3            3           kubernetes.io/os=linux   5d
[machangwei@mcwk8s-master ~]$ kubectl get pod --namespace=kube-system -o wide
NAME                                    READY   STATUS             RESTARTS          AGE   IP           NODE            NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-cnj2t                 1/1     Running            0                 5d    10.244.0.2   mcwk8s-master   <none>           <none>
coredns-6d8c4cb4d-ngfm4                 1/1     Running            0                 5d    10.244.0.3   mcwk8s-master   <none>           <none>
etcd-mcwk8s-master                      1/1     Running            0                 5d    10.0.0.4     mcwk8s-master   <none>           <none>
kube-apiserver-mcwk8s-master            1/1     Running            1 (13h ago)       5d    10.0.0.4     mcwk8s-master   <none>           <none>
kube-controller-manager-mcwk8s-master   1/1     Running            5 (80m ago)       5d    10.0.0.4     mcwk8s-master   <none>           <none>
kube-flannel-ds-cn4m9                   0/1     CrashLoopBackOff   147 (3m51s ago)   12h   10.0.0.6     mcwk8s-node2    <none>           <none>
kube-flannel-ds-hpgkz                   1/1     Running            0                 5d    10.0.0.4     mcwk8s-master   <none>           <none>
kube-flannel-ds-nnjvj                   0/1     CrashLoopBackOff   185 (3m38s ago)   5d    10.0.0.5     mcwk8s-node1    <none>           <none>
kube-proxy-92g5c                        1/1     Running            0                 12h   10.0.0.6     mcwk8s-node2    <none>           <none>
kube-proxy-kk22j                        1/1     Running            0                 5d    172.16.0.5   mcwk8s-node1    <none>           <none>
kube-proxy-xjjgf                        1/1     Running            0                 5d    10.0.0.4     mcwk8s-master   <none>           <none>
kube-scheduler-mcwk8s-master            1/1     Running            5 (82m ago)       5d    10.0.0.4     mcwk8s-master   <none>           <none>
[machangwei@mcwk8s-master ~]$
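The two kube-flannel-ds pods on the worker nodes are stuck in CrashLoopBackOff. The usual first step for finding out why a DaemonSet pod keeps restarting is to read its logs and events; for example, for the pod on mcwk8s-node1 (pod name taken from the output above):

kubectl logs kube-flannel-ds-nnjvj --namespace=kube-system              # logs of the current container
kubectl logs kube-flannel-ds-nnjvj --namespace=kube-system --previous   # logs of the last crashed instance
kubectl describe pod kube-flannel-ds-nnjvj --namespace=kube-system      # events and restart reasons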
Studying kube-flannel-ds
Excerpt; the full manifest is at: https://www.cnblogs.com/machangwei-8/p/15759077.html#_label7
apiVersion: apps/v1
kind: DaemonSet               # The structure is almost identical to a Deployment; only kind is set to DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true       # The Pod uses the Node's network directly, equivalent to docker run --network=host.
                              # Since flannel has to provide network connectivity for the cluster, this requirement makes sense
      initContainers:         # Defines the two containers that run the flannel service
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
      containers:             # Defines the two containers that run the flannel service
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
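The excerpt above is trimmed. To study the complete manifest as it actually runs in the cluster, it can be dumped read-only instead of opening it in an editor, for example:

kubectl get daemonset kube-flannel-ds --namespace=kube-system -o yaml                         # print the full live definition
kubectl get daemonset kube-flannel-ds --namespace=kube-system -o yaml > kube-flannel-ds.yml   # or save it locally for study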
Studying kube-proxy: viewing the DaemonSet configuration
[machangwei@mcwk8s-master ~]$ kubectl edit daemonset kube-proxy --namespace=kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: DaemonSet                   # Specifies the resource type
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2022-01-12T15:15:18Z"
  generation: 1
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "18274"
  uid: 04cfea8d-94b3-4963-b8d2-b10a7b6a46b0
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:                 # Defines the kube-proxy container
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:                              # The current running status of the DaemonSet; this section is shown by kubectl edit.
  currentNumberScheduled: 3          # In fact, any resource currently running in the cluster can have its configuration
  desiredNumberScheduled: 3          # and status viewed with kubectl edit,
  numberAvailable: 3                 # for example: kubectl edit deployment mcwnginx-deployment
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
[machangwei@mcwk8s-master ~]$ kubectl edit daemonset kube-proxy --namespace=kube-system    # Works like vim: q to quit, and presumably wq if you made changes
Edit cancelled, no changes made.
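One detail worth calling out in the spec above: --hostname-override=$(NODE_NAME) gets its value from the Downward API (fieldRef: spec.nodeName), so each copy of kube-proxy learns which node it is running on. A minimal sketch of the same mechanism, using a hypothetical busybox pod that is not part of this cluster:

apiVersion: v1
kind: Pod
metadata:
  name: show-node-name                  # hypothetical example, for illustration only
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo running on $(NODE_NAME); sleep 3600"]
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName      # the kubelet injects this node's name at runtime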
Running my own DaemonSet fails
I'll dig into this failure when I have time; if anyone knows the cause, a pointer would be appreciated.
[machangwei@mcwk8s-master ~]$ vim mcwJiankong.yml
[machangwei@mcwk8s-master ~]$ cat mcw
cat: mcw: No such file or directory
[machangwei@mcwk8s-master ~]$ cat mcw
mcwJiankong.yml  mcwNginx.yml
[machangwei@mcwk8s-master ~]$ cat mcwJiankong.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mcw-prometheus-node-daemonset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcw_prometheus
  template:
    metadata:
      labels:
        app: mcw_prometheus
    spec:
      hostNetwork: true
      containers:
      - name: mcw-pr-node
        image: prom/node-exporter
        imagePullPolicy: IfNotPresent
        command:
        - /bin/node_exporter
        - --pathprocfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc)($|/)
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwJiankong.yml    # A DaemonSet cannot specify a replica count; it seems to run on every node, though the master apparently isn't included here
error: error validating "mcwJiankong.yml": error validating data: ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ vim mcwJiankong.yml
[machangwei@mcwk8s-master ~]$ cat mcwJiankong.yml    # With the replica count removed
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mcw-prometheus-node-daemonset
spec:
  selector:
    matchLabels:
      app: mcw_prometheus
  template:
    metadata:
      labels:
        app: mcw_prometheus
    spec:
      hostNetwork: true
      containers:
      - name: mcw-pr-node
        image: prom/node-exporter
        imagePullPolicy: IfNotPresent
        command:
        - /bin/node_exporter
        - --pathprocfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc)($|/)
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$
[machangwei@mcwk8s-master ~]$ kubectl apply -f mcwJiankong.yml
daemonset.apps/mcw-prometheus-node-daemonset created
[machangwei@mcwk8s-master ~]$ kubectl get daemonset    # View the daemonset
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
mcw-prometheus-node-daemonset   2         2         0       2            0           <none>          19s
[machangwei@mcwk8s-master ~]$ kubectl get pod -o wide    # View the pods; they failed to run. Cause unclear, will revisit when I have time
NAME                                  READY   STATUS             RESTARTS      AGE   IP         NODE           NOMINATED NODE   READINESS GATES
mcw-prometheus-node-daemonset-cv8k2   0/1     CrashLoopBackOff   2 (17s ago)   60s   10.0.0.5   mcwk8s-node1   <none>           <none>
mcw-prometheus-node-daemonset-z2vvc   0/1     Error              2 (30s ago)   60s   10.0.0.6   mcwk8s-node2   <none>           <none>
[machangwei@mcwk8s-master ~]$ kubectl describe pod mcw-prometheus-node-daemonset-cv8k2
Name:                 mcw-prometheus-node-daemonset-cv8k2
Namespace:            default
Priority:             0
Node:                 mcwk8s-node1/10.0.0.5
Start Time:           Tue, 18 Jan 2022 00:34:32 +0800
Labels:               app=mcw_prometheus
                      controller-revision-hash=7b99d77578
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   10.0.0.5
IPs:
  IP:           10.0.0.5
Controlled By:  DaemonSet/mcw-prometheus-node-daemonset
Containers:
  mcw-pr-node:
    Container ID:  docker://7ff049b0c303ebb997c4794c9ffadbe7520f8cfdba71caec3b6b32c193ea1369
    Image:         prom/node-exporter
    Image ID:      docker-pullable://prom/node-exporter@sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/node_exporter
      --pathprocfs
      /host/proc
      --path.sysfs
      /host/sys
      --collector.filesystem.ignored-mount-points
      ^/(sys|proc|dev|host|etc)($|/)
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 18 Jan 2022 00:36:24 +0800
      Finished:     Tue, 18 Jan 2022 00:36:24 +0800
    Ready:          False
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /host/proc from proc (rw)
      /host/sys from sys (rw)
      /rootfs from root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9bkl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  root:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
  kube-api-access-w9bkl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m36s                default-scheduler  Successfully assigned default/mcw-prometheus-node-daemonset-cv8k2 to mcwk8s-node1
  Normal   Pulling    2m36s                kubelet            Pulling image "prom/node-exporter"
  Normal   Pulled     2m9s                 kubelet            Successfully pulled image "prom/node-exporter" in 27.526626918s
  Normal   Created    45s (x5 over 2m9s)   kubelet            Created container mcw-pr-node
  Normal   Started    45s (x5 over 2m8s)   kubelet            Started container mcw-pr-node
  Normal   Pulled     45s (x4 over 2m8s)   kubelet            Container image "prom/node-exporter" already present on machine
  Warning  BackOff    31s (x9 over 2m7s)   kubelet            Back-off restarting failed container
[machangwei@mcwk8s-master ~]$
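A hedged guess at the cause, not yet confirmed against the container logs: the Exit Code 1 immediately after start is consistent with node_exporter refusing an unknown flag, and the command above passes --pathprocfs where the flag node_exporter actually accepts is --path.procfs (the dot is missing). Running kubectl logs mcw-prometheus-node-daemonset-cv8k2 should confirm or rule this out. Under that assumption, the command section would be corrected roughly like this:

        command:
        - /bin/node_exporter
        - --path.procfs                # was --pathprocfs; without the dot it is an unknown flag
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc)($|/)

Separately, DESIRED being 2 rather than 3 is most likely because the kubeadm master carries the node-role.kubernetes.io/master:NoSchedule taint and this DaemonSet declares no matching toleration, so no pod is scheduled there.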