Kubernetes — Deploying a Highly Available Cluster with kubeadm on OpenStack

Posted by 範桂颶 on 2020-12-21


Highly Available Cluster Deployment Topology

Official documentation: https://kubernetes.io/zh/docs/setup/production-environment/

  • Infrastructure: OpenStack
  • Virtual machine cluster: 3 Masters, 2 Nodes, 2 Load Balancers
  • Compute resources: 2 vCPU / 2 GB RAM / 20 GB disk
  • Operating system: CentOS 7
  • Version: Kubernetes 1.18.14
  • Container Runtime: Docker


Highly Available Cluster Network Topology

Network Proxy Configuration

Because access to upstream package and image repositories may be blocked, the HTTP/S Proxy and No Proxy settings must be configured carefully; otherwise downloads will fail or you will hit network connectivity errors.

export https_proxy=http://{proxy_ip}:7890 \
       http_proxy=http://{proxy_ip}:7890 \
       all_proxy=socks5://{proxy_ip}:7890 \
       no_proxy=localhost,127.0.0.1,{apiserver_endpoint_ip},{master1_ip},{master2_ip},{master3_ip},{node1_ip},{node2_ip},{pod_ip_pool},{service_ip_pool}

Load Balancer Preparation

Use OpenStack Octavia LBaaS to provide the HA load balancer; alternatively, configure keepalived and haproxy by hand (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing).

  • VIP: choose the kube-mgmt-subnet

  • Listener: choose the TCP :6443 socket (the port kube-apiserver listens on)

  • Members: choose the 3 k8s-master nodes

  • Monitor: likewise choose the TCP :6443 socket

Note: After creating the load balancer, first verify that the TCP reverse proxy works. Since the apiserver is not running yet, a connection-refused error is expected. Remember to test again after the first control plane node has been initialized.

# nc -v LOAD_BALANCER_IP PORT
nc -v 192.168.0.100 6443
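If nc is not installed on the host, a similar probe can be sketched with bash's /dev/tcp pseudo-device (a bash-only feature; this is an illustrative helper, not part of the original article):

```shell
#!/bin/bash
# Minimal TCP port probe using bash's /dev/tcp (no nc required).
# Prints "open" if something accepts the connection, "refused" otherwise.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo "open" || echo "refused"
}

# Before kube-apiserver is up, "refused" is the expected result.
# Substitute the load balancer IP (e.g. 192.168.0.100) on a real node.
check_port 127.0.0.1 6443
```

After the first control plane node is initialized, the same call against the load balancer VIP should print "open".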

Kubernetes Cluster Preparation

Note: Perform all of the following steps on every node.

  • Configure proxy access (as above).
  • Make every node's hostname resolvable on all nodes.
# vi /etc/hosts

192.168.0.100 kube-apiserver-endpoint
192.168.0.148 k8s-master-1
192.168.0.112 k8s-master-2
192.168.0.193 k8s-master-3
192.168.0.208 k8s-node-1
192.168.0.174 k8s-node-2
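A quick loop (a sketch; the hostnames are taken from the /etc/hosts entries above) can confirm that every node name resolves before continuing:

```shell
# Verify that all cluster hostnames resolve (e.g. via /etc/hosts).
hosts="kube-apiserver-endpoint k8s-master-1 k8s-master-2 k8s-master-3 k8s-node-1 k8s-node-2"
missing=""
for h in $hosts; do
  getent hosts "$h" >/dev/null || missing="$missing $h"
done
if [ -z "$missing" ]; then
  echo "all hostnames resolve"
else
  echo "unresolved:$missing"
fi
```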
  • Enable passwordless SSH login between all nodes.
  • Disable the swap partition so that the kubelet works correctly.
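Disabling swap means running swapoff -a and commenting out the swap line in /etc/fstab so it stays off after a reboot. The sed rule can be sanity-checked against a throwaway copy first (a sketch; the sample fstab content is illustrative):

```shell
# On a real node:
#   swapoff -a
#   sed -i.bak '/ swap / s/^/#/' /etc/fstab
# Demonstration of the sed rule on a temporary sample fstab:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/vda1 / xfs defaults 0 0
/dev/vda2 swap swap defaults 0 0
EOF
sed -i '/ swap / s/^/#/' "$tmp"
result=$(cat "$tmp")   # the swap line is now commented out; the root line is untouched
rm -f "$tmp"
echo "$result"
```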
  • Make sure the iptables tool does not use the nftables backend: the nftables backend is incompatible with the current kubeadm packages, causing duplicated firewall rules and breaking kube-proxy.
  • Make sure the nodes can reach each other over the network.
  • Disable SELinux so that containers are allowed to access the host filesystem.
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  • On RHEL/CentOS 7, traffic handled by kube-proxy must pass through iptables for local routing, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.
# Make sure the br_netfilter module is loaded.
modprobe br_netfilter
lsmod | grep br_netfilter

# Apply the sysctl settings.
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
  • Install the basic dependencies:
yum install ebtables ethtool ipvsadm -y

Install the Container Runtime

Note: On a Linux system that uses systemd, systemd acts as the cgroup manager. The container runtime, the kubelet and systemd must then all use the same cgroup driver, otherwise unpredictable problems can occur. We therefore configure the container runtime and the kubelet to use systemd as their cgroup driver, which makes the system more stable.

For Docker, this is done by setting the native.cgroupdriver=systemd option.

  • Install
# Install prerequisite packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repository
sudo yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE
sudo yum update -y && sudo yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
  • Configure
# Create the /etc/docker directory
sudo mkdir /etc/docker

# Configure the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
  • Restart
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker

sudo systemctl status docker

Install kubeadm, kubelet and kubectl

Note: kubeadm is the deployment tool for the Kubernetes cluster, but it cannot install or manage kubelet or kubectl, so we need to install them manually and make sure all three versions are consistent.

  • Add the Kubernetes YUM repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Install
# List the available versions
$ yum list kubelet kubeadm kubectl --showduplicates | grep 1.18.14 | sort -r
kubelet.x86_64                       1.18.14-0                       kubernetes
kubectl.x86_64                       1.18.14-0                       kubernetes
kubeadm.x86_64                       1.18.14-0                       kubernetes

# Install the specified version
yum install -y kubelet-1.18.14 kubeadm-1.18.14 kubectl-1.18.14 --disableexcludes=kubernetes

# Confirm that the versions match
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:08:45Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:11:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

$ kubelet --version
Kubernetes v1.18.14
  • Configure: as mentioned above, the container runtime and the kubelet must both be configured to use systemd as the cgroup driver to keep the system stable.
# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
  • Start
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl enable --now kubelet
$ systemctl status kubelet

Note: kubelet.service now restarts every few seconds, crash-looping while it waits for instructions from kubeadm.

Initialize the Primary Control Plane Node

The kubeadm init workflow

The kubeadm init command brings up a Kubernetes master by running the following steps:

  1. Run system preflight checks: kubeadm exits on any ERROR unless the problem is fixed or --ignore-preflight-errors=<list-of-errors> is explicitly passed; WARNINGs may also be reported.

  2. Generate a self-signed CA to give each system component its own identity. The CA directory can be set explicitly with --cert-dir (default /etc/kubernetes/pki); the CA certificate, keys and related files are placed there. The API server certificate gains an additional SAN entry for every value passed via --apiserver-cert-extra-sans, lowercased where necessary.

  3. Write kubeconfig files into /etc/kubernetes/ so that the kubelet, the Controller Manager and the Scheduler can connect to the API server, each with its own identity; a standalone kubeconfig file named admin.conf is also generated for administrative use.

  4. Generate static Pod manifests for the API Server, Controller Manager and Scheduler under /etc/kubernetes/manifests; the kubelet watches this directory and creates the Pods from it at startup. If no external etcd service was supplied, an extra static Pod manifest is generated for etcd as well.

The kubeadm init workflow only continues once all of the master's static Pods are running normally:

  5. Apply labels and taints to the master so that production workloads are not scheduled onto it.

  6. Generate a token that other nodes can later use to register themselves with the master; a token string can also be supplied explicitly via --token.

  7. Perform all of the configuration required so that nodes can join the cluster using the mechanisms described in the Bootstrap Tokens and TLS bootstrapping documents:

    1. Create a ConfigMap with the information needed to add a node to the cluster, and set up the associated RBAC access rules.
    2. Allow bootstrap tokens to access the CSR signing API.
    3. Configure automatic approval of new CSR requests.
  8. Install a DNS server (CoreDNS) and kube-proxy through the API server. Note that although the DNS server is deployed at this point, it is not scheduled until a CNI plugin is installed.
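As an aside on the token-generation step, bootstrap tokens follow a documented format of [a-z0-9]{6}.[a-z0-9]{16}, so a token supplied via --token can be sanity-checked before use (a sketch using the example token from this article):

```shell
# Validate the bootstrap token format: 6 chars, a dot, then 16 chars (a-z, 0-9).
token="abcdef.0123456789abcdef"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "invalid token format: $token" >&2
fi
```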

Run the Initialization

Note 1: Because we are deploying a highly available cluster, the --control-plane-endpoint option must be used to specify the HA endpoint of the API server.
Note 2: kubeadm pulls its images from k8s.gcr.io by default; --image-repository can point it at the Aliyun mirror instead.
Note 3: Unless you pass --upload-certs, you must manually copy the CA certificates from the primary control plane node to every redundant control plane node that joins; passing it is recommended.

  • Initialize
kubeadm init \
  --control-plane-endpoint "192.168.0.100" \
  --kubernetes-version "1.18.14" \
  --pod-network-cidr "10.0.0.0/8" \
  --service-cidr "172.16.0.0/16" \
  --token "abcdef.0123456789abcdef" \
  --token-ttl "0" \
  --image-repository registry.aliyuncs.com/google_containers \
  --upload-certs

W1221 00:02:43.240309   10942 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.148 192.168.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:02:47.773223   10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:02:47.774303   10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.117265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1
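The --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA's DER-encoded public key. The sketch below demonstrates the computation against a throwaway self-signed certificate; on a real control plane node the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate purely for demonstration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# Hash the DER-encoded public key, as kubeadm does for the discovery hash.
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$dir"
```

Running the same pipeline against /etc/kubernetes/pki/ca.crt reproduces the hash shown in the join command.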
  • Check the Pods: verify that all of the master components are present (coredns remains Pending until a CNI plugin is installed).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-fh9vb               0/1     Pending   0          23m
coredns-7ff77c879f-qmk7z               0/1     Pending   0          23m
etcd-k8s-master-1                      1/1     Running   0          24m
kube-apiserver-k8s-master-1            1/1     Running   0          24m
kube-controller-manager-k8s-master-1   1/1     Running   0          24m
kube-proxy-7hx55                       1/1     Running   0          23m
kube-scheduler-k8s-master-1            1/1     Running   0          24m
  • Check the images
$ docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.14            8e6bca1d4e68        2 days ago          117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.14            f17e261f4c8a        2 days ago          173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.14            b734a959c6fb        2 days ago          162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.14            95660d582e82        2 days ago          95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        10 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        10 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        14 months ago       288MB
  • Check the containers
$ docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
f9a068b890d7        8e6bca1d4e68                                        "/usr/local/bin/kube…"   2 minutes ago       Up 2 minutes                            k8s_kube-proxy_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
3b6adfa0b1a5        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
dcc47de63e50        f17e261f4c8a                                        "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
53afb7fbe8c0        b734a959c6fb                                        "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
a4101a231c1b        303ce5db0e90                                        "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
197f510ff6c5        95660d582e82                                        "kube-scheduler --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0
3a4590590093        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
4bbdc99a7a68        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
19488127c269        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
e67d2f7a27b0        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0
  • Verify that the API server load balancer now works
$ nc -v 192.168.0.100 6443
Connection to 192.168.0.100 port 6443 [tcp/sun-sr-https] succeeded!

Re-running the Initialization

To run kubeadm init again, you must first tear down the cluster. A best-effort cleanup can be triggered on the master:

kubeadm reset

The reset process does not flush iptables rules or IPVS tables. If you want to reset them, you must do so manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C

Adjust the arguments as needed, then re-run the initialization:

kubeadm init <args>

Add Redundant Control Plane Nodes

Once the first master has been initialized, we can go on to add the redundant master nodes.

  • Add k8s-master-2
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.112 192.168.0.100]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1221 00:30:18.978564   27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:30:18.986650   27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:30:18.987613   27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-12-21T00:30:34.018+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.0.112:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

	Run 'kubectl get nodes' to see this node join the cluster.
  • Add k8s-master-3
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
  • Check the control plane nodes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master-1   NotReady   master   35m     v1.18.14
k8s-master-2   NotReady   master   8m14s   v1.18.14
k8s-master-3   NotReady   master   2m30s   v1.18.14

Add Worker Nodes

With the highly available control plane deployed, we can register any number of worker nodes.

  • Add a node
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1

W1221 00:39:36.256784   29495 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Check the nodes (all nodes remain NotReady until a Pod network add-on is installed):
$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master-1   NotReady   master   37m     v1.18.14
k8s-master-2   NotReady   master   10m     v1.18.14
k8s-master-3   NotReady   master   4m24s   v1.18.14
k8s-node-1     NotReady   <none>   51s     v1.18.14
k8s-node-2     NotReady   <none>   48s     v1.18.14
