kubeadm + containerd deployment of k8s v1.23.3 (with certificate renewal)

Posted by chen2ha on 2022-02-19

 

 

Preface

The difference between kubeadm and binary deployment

  • kubeadm
    • Pros:
      • Deployment is very convenient; two commands are enough to deploy the cluster and join nodes
        1. kubeadm init initializes the node
        2. kubeadm join joins a node to the cluster
    • Cons:
      1. The cluster certificates are only valid for one year; you either work around that or keep upgrading the k8s version
  • Binary deployment
    • Pros:
      1. You can choose the certificate validity yourself (usually ten years)
      2. Every component's details can be customized before deployment
      3. Deploying by hand gives a better understanding of how the k8s components fit together
    • Cons:
      1. Deployment is considerably more complex than with kubeadm

Life is short; I would normally go for a binary deployment, but this article walks through kubeadm and fixes its certificate problem along the way.

Environment preparation

IP             Role     OS / Kernel
192.168.91.8   master   centos7.6 / 3.10.0-957.el7.x86_64
192.168.91.9   work     centos7.6 / 3.10.0-957.el7.x86_64

Promise me: disable the firewall on all nodes

systemctl disable firewalld
systemctl stop firewalld

Promise me: disable SELinux on all nodes

setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config

Promise me: turn off swap on all nodes

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
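
A quick sanity check (not part of the original steps) that swap is really off:

free -h | grep -i swap    # the Swap line should show 0B total and 0B used
cat /proc/swaps           # should list no swap devices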

Promise me: load the required kernel modules on all nodes

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe nf_conntrack_ipv4
modprobe br_netfilter
modprobe overlay

Promise me: enable the module auto-load service on all nodes

cat > /etc/modules-load.d/k8s-modules.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
EOF

Promise me: restart the service and enable it at boot

systemctl enable systemd-modules-load
systemctl restart systemd-modules-load
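
To confirm the modules are actually loaded (a quick check; the exact lsmod output will vary by kernel):

lsmod | egrep 'ip_vs|nf_conntrack|br_netfilter|overlay'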

Promise me: apply the kernel tuning on all nodes

cat <<EOF > /etc/sysctl.d/kubernetes.conf
# Enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# Let iptables see bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# Disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# Do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# Upper limit of the socket listen() backlog
net.core.somaxconn=32768
# Maximum number of tracked connections; the default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# Avoid swapping; only fall back to swap when the system is close to OOM
vm.swappiness=0
# Maximum number of memory map areas a process may have
vm.max_map_count=655360
# Maximum number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive tuning for long-lived connections
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
EOF

Promise me: make the settings take effect

sysctl -p /etc/sysctl.d/kubernetes.conf
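
You can spot-check a few of the values; each should echo back the number configured above:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.netfilter.nf_conntrack_max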

Promise me: flush the iptables rules on all nodes

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

Install containerd

Install on all nodes

Configure the Docker repo (the Docker repo also contains containerd)

wget -O /etc/yum.repos.d/docker.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Find the exact containerd package name

yum search containerd

Install containerd

yum install -y containerd.io

Modify the containerd configuration file

root — container storage path; point it at a path with enough disk space

sandbox_image — name and tag of the pause image (it must actually be pullable, otherwise kubelet keeps restarting during cluster initialization)

bin_dir — where the CNI plugins live; a yum-installed containerd uses /opt/cni/bin by default

cat <<EOF > /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF

Start the containerd service and enable it at boot

systemctl enable containerd
systemctl restart containerd
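
A minimal sanity check (my own habit, not part of the original steps) that containerd is up and the CRI plugin is loaded, assuming the default socket path from the config above:

systemctl is-active containerd
ctr --address /run/containerd/containerd.sock plugins ls | grep cri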

Configure the Kubernetes repo

Configure on all nodes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum list shows the current stable version available in this repo; at the time of writing it is 1.23.3-0

yum list kubeadm kubelet

Install kubeadm and kubelet

Install on all nodes

Running yum install without a version installs the current stable release; to keep this document reproducible, the version is pinned here

yum install -y kubelet-1.23.3-0 kubeadm-1.23.3-0
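
Confirm the installed versions (kubectl is normally pulled in as a dependency of kubeadm from this repo):

kubeadm version -o short
kubelet --version
kubectl version --client --short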

Configure command-line auto-completion

Install on all nodes

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> $HOME/.bashrc
echo 'source <(kubeadm completion bash)' >> $HOME/.bashrc
source $HOME/.bashrc

Start the kubelet service

Run on all nodes

systemctl enable kubelet
systemctl restart kubelet

Deploy the master node with kubeadm

Note: run these steps on the master node

Dump kubeadm init's default configuration

kubeadm config print init-defaults > kubeadm.yaml
vim kubeadm.yaml

kubeadm configuration (v1beta3)

advertiseAddress must be changed to the current master node's IP

bindPort is the port the apiserver listens on; it can be customized

criSocket sets the container runtime socket; the default is dockershim, so change it to the containerd socket, which can be found in config.toml

imagePullPolicy sets the image pull policy: IfNotPresent pulls only when the image is missing locally; Always always pulls; Never never pulls (if the image is missing locally, kubelet fails to start the pod). Mind the camelCase here; do not lowercase these values

certificatesDir sets where certificates are stored; there is usually no need to change it

controlPlaneEndpoint sets a stable access endpoint; for an HA setup this can be a VIP

dataDir sets the etcd data directory, /var/lib/etcd by default; before deploying, make sure the disk it lives on has enough space

imageRepository sets the image registry, k8s.gcr.io by default; if you change it, make sure every image can actually be pulled from it, because all images come from this registry

kubernetesVersion sets the image version and must match the image tag

podSubnet sets the pod CIDR; it must not overlap with serviceSubnet or with the host network

serviceSubnet sets the k8s service CIDR; make sure it does not overlap with the host network

cgroupDriver sets the cgroup driver; the default is cgroupfs, but it is set to systemd here to match containerd's SystemdCgroup = true

mode sets the kube-proxy forwarding mode, either iptables or ipvs

name sets the node name; if you use a hostname, it must be resolvable (this is the name shown by kubectl get nodes)

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.91.8
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: 192.168.91.8
  taints: null

---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.91.8:6443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.22.0.0/16
scheduler: {}

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
cgroupsPerQOS: true

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
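
Optionally, pre-pull the images referenced by this configuration before running init; the init output below mentions the same option:

kubeadm config images pull --config kubeadm.yaml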

Cluster initialization

kubeadm init --config kubeadm.yaml

The kubeadm init output looks like this:

[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.91.8 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.91.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.504586 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964

Pick one of the following two options

Without --kubeconfig, kubectl looks for $HOME/.kube/config by default. If you neither create that directory and copy the admin credentials into it, nor export the KUBECONFIG variable, you will have to pass --kubeconfig with the credentials file on every kubectl invocation, otherwise kubectl cannot find the cluster.

# Option 1: local kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Option 2: environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
source ~/.bashrc

Check the status of the k8s components

kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               0/1     Pending   0          12s
coredns-65c54cc984-qwd5b               0/1     Pending   0          12s
etcd-192.168.91.8                      1/1     Running   0          27s
kube-apiserver-192.168.91.8            1/1     Running   0          21s
kube-controller-manager-192.168.91.8   1/1     Running   0          21s
kube-proxy-zwdlm                       1/1     Running   0          12s
kube-scheduler-192.168.91.8            1/1     Running   0          27s

coredns stays Pending because no network plugin has been installed yet

Install flannel

Running this on the master node is enough

The Network field must match the podSubnet in the kubeadm configuration file above

cat <<EOF > flannel.yaml && kubectl apply -f flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.22.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
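
To watch the rollout instead of re-running kubectl get pods by hand (the DaemonSet above carries the app: flannel label), you can use:

kubectl -n kube-system get pods -l app=flannel -w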

Wait 2-3 minutes for the flannel pod to become Running (the exact time depends on how fast the images download), then check again:

NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               1/1     Running   0          2m7s
coredns-65c54cc984-qwd5b               1/1     Running   0          2m7s
etcd-192.168.91.8                      1/1     Running   0          2m22s
kube-apiserver-192.168.91.8            1/1     Running   0          2m16s
kube-controller-manager-192.168.91.8   1/1     Running   0          2m16s
kube-flannel-ds-26drg                  1/1     Running   0          100s
kube-proxy-zwdlm                       1/1     Running   0          2m7s
kube-scheduler-192.168.91.8            1/1     Running   0          2m22s

Joining a work node to the cluster

When master initialization finished, the join command was already printed

Just copy it and run it on the work node

--node-name sets the node name; if you use a hostname, it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--node-name 192.168.91.9

What if you forgot to save it, or need to add more nodes later?

Just run the following command

kubeadm token create --print-join-command --ttl=0

The join output is short; once it finishes, simply run kubectl get nodes on the master node to check the node status

[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

How long the node takes to become Ready depends on how long the work node needs to pull the flannel image

You can check whether flannel is Running with kubectl get pod -n kube-system

NAME           STATUS   ROLES                  AGE     VERSION
192.168.91.8   Ready    control-plane,master   9m34s   v1.23.3
192.168.91.9   Ready    <none>                 6m11s   v1.23.3

Joining another master node to the cluster

First obtain the CA key hash from one of the existing master nodes

This value was also printed to the terminal when kubeadm init finished

If you changed certificatesDir during kubeadm init, check and adjust the /etc/kubernetes/pki/ca.crt path below accordingly

Use the resulting hash in the form sha256:<hash>

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

You can also simply create a new token; the command prints a ready-made join command, including the hash, to which you only need to append the --certificate-key and --control-plane flags

kubeadm token create --print-join-command --ttl=0

kubeadm join 192.168.91.8:6443 --token 352obx.dw7rqphzxo6cvz9r --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964

Upload the control-plane certificates as an encrypted Secret and obtain the certificate key used to decrypt them when joining

The corresponding kubeadm join flag is --certificate-key

kubeadm init phase upload-certs --upload-certs
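
If you prefer a single command, the token creation and certificate upload can be combined; this is just one convenient pattern, assuming the certificate key is the last line of the upload-certs output:

kubeadm token create --print-join-command --certificate-key $(kubeadm init phase upload-certs --upload-certs | tail -1)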

On the master node being added, run kubeadm join to join the cluster

--node-name sets the node name; if you use a hostname, it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--certificate-key a7a12fb565bf94c768f0097898926e4d0805eb7ecc1477b48fdaaf4d27eb26b0 \
--control-plane \
--node-name 192.168.91.10

Check the nodes

kubectl get nodes

NAME            STATUS   ROLES                  AGE    VERSION
192.168.91.10   Ready    control-plane,master   96m    v1.23.3
192.168.91.8    Ready    control-plane,master   161m   v1.23.3
192.168.91.9    Ready    <none>                 158m   v1.23.3

Check the master components

kubectl get pod -n kube-system | egrep -v 'flannel|dns'

NAME                                    READY   STATUS    RESTARTS      AGE
etcd-192.168.91.10                      1/1     Running   0             97m
etcd-192.168.91.8                       1/1     Running   0             162m
kube-apiserver-192.168.91.10            1/1     Running   0             97m
kube-apiserver-192.168.91.8             1/1     Running   0             162m
kube-controller-manager-192.168.91.10   1/1     Running   0             97m
kube-controller-manager-192.168.91.8    1/1     Running   0             162m
kube-proxy-6cczc                        1/1     Running   0             158m
kube-proxy-bfmzz                        1/1     Running   0             97m
kube-proxy-zwdlm                        1/1     Running   0             162m
kube-scheduler-192.168.91.10            1/1     Running   0             97m
kube-scheduler-192.168.91.8             1/1     Running   0             162m

Renewing the k8s component certificates

Check the current expiry dates

kubeadm certs check-expiration

The root CAs are actually valid for 10 years; only the component certificates are limited to 1 year

[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver                  Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2023 02:45 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2023 02:45 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2023 02:45 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2023 02:45 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no

Renew for one year with kubeadm

The scenario here is that the certificates have already expired

The system time is changed with date -s 2023-2-18 to simulate certificate expiry

kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf

Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2023-02-18T00:00:15+08:00 is after 2023-02-17T05:34:40Z

Because the certificates have expired, you get the error above. Renew them for another year with the commands below, then restart kubelet and also restart the etcd, kube-apiserver, kube-controller-manager and kube-scheduler components (one way to bounce those static Pods is sketched after the commands).

Do this on every master node; alternatively, once one master is done, distribute the /etc/kubernetes/admin.conf file to the other master nodes to replace the old one.

cp -r /etc/kubernetes/pki{,.old}
kubeadm certs renew all
systemctl restart kubelet
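
Restarting kubelet alone does not always recycle the control-plane static Pods, so they can keep serving with the old certificates loaded in memory. One common way to bounce them (a sketch, not from the original article; adjust the path if you changed the manifests directory) is to move the static Pod manifests away briefly and then back:

mkdir -p /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
sleep 60    # give kubelet time to stop etcd, kube-apiserver, kube-controller-manager and kube-scheduler
mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/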

Run kubeadm certs check-expiration again and you can see the expiry dates are now in 2024

[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver                  Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2024 16:01 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2024 16:01 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2024 16:01 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2024 16:01 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   8y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   8y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   8y              no

Compiling kubeadm for a ten-year contract

Compiling kubeadm requires a Go toolchain, so set up Go first

Official Go download page: https://go.dev/dl/ (the author also mirrored the tarball on CSDN)

wget https://go.dev/dl/go1.17.7.linux-amd64.tar.gz
tar xvf go1.17.7.linux-amd64.tar.gz -C /usr/local/
echo 'PATH=$PATH:/usr/local/go/bin' >> $HOME/.bashrc
source $HOME/.bashrc
go version

Download the k8s source tarball from GitHub; the version must match the running cluster (the author also mirrored it on CSDN)

wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.3.tar.gz
tar xvf v1.23.3.tar.gz
cd kubernetes-1.23.3/
vim staging/src/k8s.io/client-go/util/cert/cert.go

Change duration365d * 10 to duration365d * 100, so the line becomes:

now.Add(duration365d * 100).UTC(),

vim cmd/kubeadm/app/constants/constants.go

Change CertificateValidity = time.Hour * 24 * 365 to CertificateValidity = time.Hour * 24 * 3650, so the line becomes:

CertificateValidity = time.Hour * 24 * 3650

Compile kubeadm

make WHAT=cmd/kubeadm GOFLAGS=-v
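
The patched binary is written to _output/bin/; a quick check that the build succeeded:

_output/bin/kubeadm version -o short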

Renew the certificates

cp -r /etc/kubernetes/pki{,.old}
_output/bin/kubeadm certs renew all
systemctl restart kubelet

Check the certificate expiry dates

_output/bin/kubeadm certs check-expiration

Ten years now

[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver                  Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver-etcd-client      Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Feb 15, 2032 07:08 UTC   9y              ca                      no
controller-manager.conf    Feb 15, 2032 07:08 UTC   9y              ca                      no
etcd-healthcheck-client    Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-peer                  Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-server                Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
front-proxy-client         Feb 15, 2032 07:08 UTC   9y              front-proxy-ca          no
scheduler.conf             Feb 15, 2032 07:08 UTC   9y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no

Replace the kubeadm binary; if there are multiple master nodes, distribute the new binary to them and replace it there too

mv /usr/bin/kubeadm{,-oneyear}
cp _output/bin/kubeadm /usr/bin/

If you access the cluster through the $HOME/.kube/config file, it needs to be replaced with the new admin.conf

If you set the KUBECONFIG environment variable via export instead, no replacement is needed

mv $HOME/.kube/config{,-oneyear}
cp /etc/kubernetes/admin.conf $HOME/.kube/config
