Chapter 1, Section 1.1.1: Installing a Highly Available K8s Cluster with kubeadm

Published by 持之以道 on 2023-03-18

1.1 Read Before Installing

Do not use servers whose hostnames or locales contain Chinese characters, and do not use cloned virtual machines.

For production environments, a binary installation is recommended.

Remember to replace the IP addresses in this document with your own!!!

1.2 Basic Environment Configuration

The kubeadm installation procedure has remained essentially unchanged since version 1.14, so this document can be used to install even the latest K8s cluster. CentOS 7.x is used here.

Kubernetes official site:

https://kubernetes.io/docs/setup/

Latest high-availability installation guide:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

1.2.1 High-Availability Kubernetes Cluster Plan

 Hostname        IP address    Description
 k8s-master01    10.3.50.11    master node 01
 k8s-master02    10.3.50.12    master node 02
 k8s-master03    10.3.50.13    master node 03
 k8s-master-lb   10.3.50.100   keepalived virtual IP
 k8s-node01      10.3.50.14    worker node 01
 k8s-node02      10.3.50.15    worker node 02

 

Configuration     Value
OS version        CentOS 7.9
Docker version    20.10.x
Pod CIDR          10.16.0.0/12
Service CIDR      10.244.0.0/16

Note: the host network, the K8s Service CIDR, and the Pod CIDR must not overlap!!!

The VIP (virtual IP) must not collide with any IP on your company's internal network; ping it first, and use it only if there is no response. The VIP must be on the same LAN as your hosts (do not use the addresses above verbatim; pick one in your own host subnet)!

On a public cloud, the VIP is the address of the cloud's load balancer, e.g. an internal SLB on Alibaba Cloud or an internal ELB on Tencent Cloud; in that case there is no need to set up keepalived and haproxy!

1.2.2 Basic Environment Configuration

Configure hosts on all nodes by editing /etc/hosts as follows:

Remember to use your own machines' IP addresses!!!

[root@k8s-master01 ~]# vim /etc/hosts
10.3.50.11 k8s-master01
10.3.50.12 k8s-master02
10.3.50.13 k8s-master03
10.3.50.100 k8s-master-lb      # if this is not a highly available cluster, use master01's IP here!
10.3.50.14 k8s-node01
10.3.50.15 k8s-node02

Configure the yum repositories on CentOS 7 as follows:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the essential tools:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

On all nodes, disable the firewall, SELinux, dnsmasq, and swap. Server configuration:

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager      # do not disable on public clouds

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition:

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
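
To confirm swap is fully off, you can check the memory summary (the Swap row should show all zeros):

free -h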

Install ntpdate:

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

Synchronize time on all nodes. Time synchronization is configured as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
crontab -e      # add the following entry to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

Configure limits on all nodes:

ulimit -SHn 65535

 

vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Set up passwordless SSH from master01 to the other nodes. The configuration files and certificates generated during installation are all created on master01, and cluster management is also done from master01; on Alibaba Cloud or AWS a separate kubectl server is required. Key configuration:

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
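
As a quick sanity check (not part of the original steps), you can verify that passwordless login works to every node:

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done      # should print each hostname without prompting for a password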

Download all the installation source files:

cd /root/
git clone https://github.com/dotbalo/k8s-ha-install.git    
git clone https://gitee.com/dukuan/k8s-ha-install.git      # use this mirror if the repo above cannot be reached

Upgrade the system on all nodes and reboot; the kernel is not upgraded at this step.

yum update -y --exclude=kernel* && reboot      # CentOS 7 needs this upgrade; on CentOS 8, upgrade as needed

1.3 Kernel Configuration

CentOS 7 needs its kernel upgraded to 4.18+; the version upgraded to here is 4.19.

Download the kernel packages on the master01 node:

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from master01 to the other nodes:

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes:

cd /root && yum localinstall -y kernel-ml*

Change the default boot kernel on all nodes:

grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19:

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then verify the running kernel is 4.19:

[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the IPVS modules on all nodes. In kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf 
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Then run systemctl enable --now systemd-modules-load.service.

Enable the kernel parameters required by a K8s cluster. Configure the K8s sysctl settings on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

After configuring the kernel parameters on all nodes, reboot the servers and verify that the modules are still loaded after the reboot:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

1.4 Installing the K8s Components and the Runtime

If the version being installed is below 1.24, either Docker or Containerd may be chosen as the runtime; above 1.24, choose Containerd.

Note: follow only one of the two runtime subsections below.

1.4.1 Containerd as the Runtime

Install docker-ce 20.10 on all nodes:

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Docker itself does not need to be started; only Containerd needs to be configured and started.

First configure the kernel modules Containerd requires (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes:

modprobe -- overlay
modprobe -- br_netfilter

Configure the kernel parameters Containerd requires (all nodes):

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the kernel parameters on all nodes:

sysctl --system

Generate Containerd's configuration file on all nodes:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Change Containerd's cgroup driver to systemd on all nodes:

vim /etc/containerd/config.toml

Find containerd.runtimes.runc.options and add SystemdCgroup = true (if the key already exists, edit it in place; a duplicate key will cause an error).

On all nodes, also change sandbox_image to a pause image address matching your version, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6. Both edits are sketched below.
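
If you prefer to make both edits non-interactively, the following sed commands are a minimal sketch that assumes the default config.toml layout generated above; verify the result before starting Containerd:

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml      # confirm both changes took effect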

Start Containerd on all nodes and enable it at boot:

systemctl daemon-reload
systemctl enable --now containerd

On all nodes, point the crictl client at the Containerd runtime endpoint:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
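
Once Containerd is running, a quick way to confirm that crictl can reach the runtime (standard crictl command; it should print runtime status JSON rather than a connection error):

crictl info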

1.4.2 Docker as the Runtime (for versions below 1.24)

If Docker is chosen as the runtime, the installation is simpler than with Containerd: just install and start it.

Install docker-ce 20.10 on all nodes:

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Since newer kubelet versions recommend systemd, change Docker's CgroupDriver to systemd as well:

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Enable Docker at boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker
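
To confirm Docker picked up the systemd cgroup driver:

docker info | grep -i cgroup      # should show: Cgroup Driver: systemd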

1.5 Installing the Kubernetes Components

First, on the master01 node, check what the latest Kubernetes version is:

yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest 1.23 versions of kubeadm, kubelet, and kubectl on all nodes:

yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y

If Containerd was chosen as the runtime, change kubelet's configuration to use Containerd as its runtime:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Note: do NOT run the command above if Containerd is not your runtime!!!

Enable kubelet at boot on all nodes (the cluster has not been initialized yet, so kubelet has no configuration file and cannot start; this can be ignored):

systemctl daemon-reload
systemctl enable --now kubelet

kubelet will not start at this point; the errors in its log do not matter!

1.6 Installing the High-Availability Components

(Note: if this is not a highly available cluster, haproxy and keepalived do not need to be installed.)

On public clouds, use the provider's load balancer (e.g. Alibaba Cloud SLB or Tencent Cloud ELB) in place of haproxy and keepalived, because most public clouds do not support keepalived. Also, on Alibaba Cloud the kubectl control host cannot be placed on a master node: Alibaba's SLB has a loopback problem, meaning servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this issue, so it is the recommended choice.

Install haproxy and keepalived via yum on all master nodes:

yum install keepalived haproxy -y

Configure haproxy on all master nodes (see the haproxy documentation for details; the haproxy configuration is identical on every master node):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    10.3.50.11:6443  check
  server k8s-master02    10.3.50.12:6443  check
  server k8s-master03    10.3.50.13:6443  check
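
Before starting haproxy, the configuration syntax can be validated with haproxy's standard check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg      # should report that the configuration is valid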

Configure keepalived on all master nodes. The configuration differs per node; take care to distinguish them:

[root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf      # note each node's IP and network interface (the interface parameter)

Configuration on the master01 node:

[root@k8s-master01 etc]# mkdir /etc/keepalived

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 10.3.50.11
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.3.50.100
    }
    track_script {
        chk_apiserver
    }
}

Configuration on the master02 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.3.50.12
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.3.50.100
    }
    track_script {
        chk_apiserver
    }
}

Configuration on the master03 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.3.50.13
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.3.50.100
    }
    track_script {
        chk_apiserver
    }
}

Configure the keepalived health-check script on all master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Make the script executable:

chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
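
Besides the ping and telnet tests below, you can confirm which master currently holds the VIP (it should appear on ens33 of the MASTER node, master01 here):

ip addr show ens33 | grep 10.3.50.100      # run on each master; exactly one should show the VIP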

Important: if keepalived and haproxy are installed, test that keepalived is working properly.

Test the VIP:
[root@k8s-master01 ~]# ping 10.3.50.100 -c 4
PING 10.3.50.100 (10.3.50.100) 56(84) bytes of data.
64 bytes from 10.3.50.100: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 10.3.50.100: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 10.3.50.100: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 10.3.50.100: icmp_seq=4 ttl=64 time=0.063 ms

--- 10.3.50.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
[root@k8s-master01 ~]# telnet 10.3.50.100 16443
Trying 10.3.50.100...
Connected to 10.3.50.100.
Escape character is '^]'.
Connection closed by foreign host.

If the VIP does not answer ping, or telnet does not show the ']' character, consider the VIP unusable and do not continue. Troubleshoot keepalived: check the firewall and SELinux, the status of haproxy and keepalived, the listening ports, and so on.

On all nodes, the firewall must be disabled and inactive: systemctl status firewalld

On all nodes, SELinux must be disabled: getenforce

On the master nodes, check the haproxy and keepalived status: systemctl status keepalived haproxy

On the master nodes, check the listening ports: netstat -lntp

1.7 Cluster Initialization

Official initialization documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Perform the following steps on the master01 node only.

Create the kubeadm-config.yaml file on master01 as follows:

master01: (Note: if this is not a highly available cluster, change 10.3.50.100:16443 to master01's address and change 16443 to the apiserver port, 6443 by default. Also change kubernetesVersion to match the kubeadm version on your servers: kubeadm version)

In the file below, the host network, the podSubnet, and the serviceSubnet must not overlap.

On master01:

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.3.50.11
  bindPort: 6443
nodeRegistration:
  # criSocket: /var/run/dockershim.sock      # use this if Docker is the runtime
  criSocket: /run/containerd/containerd.sock      # use this if Containerd is the runtime
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.3.50.100
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.3.50.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.0      # change this version to match kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.16.0.0/12
  serviceSubnet: 10.244.0.0/16
scheduler: {}

Update the kubeadm config file:

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Copy the new.yaml file to the other master nodes:

for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done

Pre-pull the images on all master nodes in advance to save initialization time (the other nodes do not need any configuration changes, not even the IP addresses):

kubeadm config images pull --config /root/new.yaml 

Enable kubelet at boot on all nodes:

systemctl enable --now kubelet      # (ignore startup failures; it will start once initialization succeeds)

Initialize on the master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards, the other master nodes simply join master01:

kubeadm init --config /root/new.yaml  --upload-certs

If initialization fails, reset and initialize again with the following commands (do not run them unless initialization failed):

kubeadm reset -f ; ipvsadm --clear  ; rm -rf ~/.kube

On success, initialization prints a Token used when other nodes join the cluster, so record the Token from the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

Configure the environment variable on the master01 node for accessing the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

View the node status:
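
A typical check (the nodes will remain NotReady until the Calico CNI is installed in section 2.0):

kubectl get node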

With this initialization-based installation, all system components run as containers in the kube-system namespace. The Pod status can now be checked:
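
For example:

kubectl get po -n kube-system      # etcd, kube-apiserver, coredns, kube-proxy, etc. run here as Pods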

1.8 Highly Available Masters

Note: the steps below are needed only if the Token produced by the init command above has expired; if it has not expired, skip them and run the join command directly.

Generate a new token after the old one expires:
kubeadm token create --print-join-command

Masters also need a new --certificate-key:
kubeadm init phase upload-certs  --upload-certs

If the Token has not expired, just run the join command.

Join the other masters to the cluster; run the following on master02 and master03 respectively:

kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
    --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

View the current status:

1.9 Configuring the Worker Nodes

Worker nodes mainly run the company's business applications. In production, master nodes should not run Pods other than the system components; in test environments, masters may be allowed to run Pods to save resources.

kubeadm join 10.3.50.100:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

After all nodes have joined, check the cluster state:
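
A typical check:

kubectl get node      # all three masters and both workers should be listed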

2.0 Installing the Calico Component

Run the following steps on master01 only:

cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/

Set the Pod CIDR:

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml

Check the container and node status:
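
A typical check while Calico rolls out:

kubectl get po -n kube-system -o wide      # wait for the calico-* pods to reach Running
kubectl get node      # the nodes should turn Ready once Calico is up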

 

2.1 Deploying Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which can gather CPU, memory, disk, and network usage for nodes and Pods.

Copy front-proxy-ca.crt from the master01 node to all worker nodes:

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt      # repeat for any other worker nodes yourself

Install metrics server:

cd /root/k8s-ha-install/kubeadm-metrics-server

# kubectl  create -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status:

kubectl get po -n kube-system -l k8s-app=metrics-server

After the pod shows 1/1 Running:

# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   153m         3%     1701Mi          44%       
k8s-master02   125m         3%     1693Mi          44%       
k8s-master03   129m         3%     1590Mi          41%       
k8s-node01     73m          1%     989Mi           25%       
k8s-node02     64m          1%     950Mi           24%       
# kubectl top po -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi            
kube-system   calico-node-6gqpb                          21m          85Mi            
kube-system   calico-node-bmvjt                          29m          76Mi            
kube-system   calico-node-hdp9c                          15m          82Mi            
kube-system   calico-node-wwrfv                          23m          86Mi            
kube-system   calico-node-zzv88                          22m          84Mi            
kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi            
kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi            
kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi            
kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi            
kube-system   etcd-k8s-master01                          24m          96Mi            
kube-system   etcd-k8s-master02                          20m          91Mi            
kube-system   etcd-k8s-master03                          21m          92Mi            
kube-system   kube-apiserver-k8s-master01                41m          502Mi           
kube-system   kube-apiserver-k8s-master02                35m          476Mi           
kube-system   kube-apiserver-k8s-master03                71m          480Mi           
kube-system   kube-controller-manager-k8s-master01       15m          65Mi            
kube-system   kube-controller-manager-k8s-master02       1m           26Mi            
kube-system   kube-controller-manager-k8s-master03       2m           27Mi            
kube-system   kube-proxy-8lt45                           1m           18Mi            
kube-system   kube-proxy-d6jfh                           1m           18Mi            
kube-system   kube-proxy-hfnvz                           1m           19Mi            
kube-system   kube-proxy-nsms8                           1m           18Mi            
kube-system   kube-proxy-xmlhq                           3m           21Mi            
kube-system   kube-scheduler-k8s-master01                2m           26Mi            
kube-system   kube-scheduler-k8s-master02                2m           24Mi            
kube-system   kube-scheduler-k8s-master03                2m           24Mi            
kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi

2.2 Deploying the Dashboard

The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and run commands inside containers.

2.2.1 Installing the Dashboard Version Bundled with the Repo

cd /root/k8s-ha-install/dashboard/

[root@k8s-master01 dashboard]# kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

2.2.2 Installing the Latest Version

Official GitHub repository:

https://github.com/kubernetes/dashboard

The latest dashboard release can be found on the official dashboard repo:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

Replace 2.0.3 with the actual latest version number.

vim admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

kubectl apply -f admin.yaml -n kube-system

2.2.3 Logging in to the Dashboard

Add the following startup flags to Google Chrome's launcher to work around the Dashboard being inaccessible (see Figure 1-1):

--test-type --ignore-certificate-errors

Change the dashboard Service to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change ClusterIP to NodePort (skip this step if it is already NodePort):

Check the port number:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Using your own instance's port, the Dashboard can be reached via the IP of any host running kube-proxy plus that port:

Access the Dashboard:

https://10.3.50.11:18282 (replace 18282 with your own port) and choose Token as the sign-in method (see Figure 1-2).

Retrieve the Token:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

Enter the Token into the token field and click Sign in to access the Dashboard (see Figure 1-3).

2.2.4 [Must Read] Required Configuration Changes

Change kube-proxy to ipvs mode. The ipvs configuration was commented out during cluster initialization, so it must be changed manually:

Run on the master01 node:

kubectl edit cm kube-proxy -n kube-system
mode: ipvs      # set the mode field to ipvs

Roll the kube-proxy Pods to pick up the change:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

Verify the kube-proxy mode:

[root@k8s-master01 1.1.1]# curl 127.0.0.1:10249/proxyMode
ipvs

2.3 [Must Read] Notes

Note: in a cluster installed by kubeadm, the certificates are valid for one year by default. kube-apiserver, kube-scheduler, kube-controller-manager, and etcd on the master nodes all run as containers, which can be seen with kubectl get po -n kube-system.

Unlike a binary installation, kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; restart the kubelet process after changing them.

The other components' configuration files live in the /etc/kubernetes/manifests directory, e.g. kube-apiserver.yaml. When such a yaml file is changed, kubelet automatically reloads the configuration, i.e. it restarts the Pod; do not recreate the file. kube-proxy's configuration is stored in a configmap in the kube-system namespace and can be edited with:

kubectl edit cm kube-proxy -n kube-system

After the change is made, restart kube-proxy with a patch:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

After a kubeadm installation, master nodes do not schedule Pods by default. This can be changed as follows:

View the taints:

[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

Remove the taints:

[root@k8s-master01 ~]# kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
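
To restore the default behavior later, the taint can be re-added; a sketch mirroring the label selector used above:

kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule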
