Deploying Kubernetes v1.26.3 with the containerd Container Engine and kubeadm

1874 · Published 2023-04-13

  In a previous post we covered deploying the containerd container engine on Ubuntu 22.04, along with installing and using the containerd client tools; for a refresher, see https://www.cnblogs.com/qiuhom-1874/p/17301455.html. Today we will use the kubeadm tool to deploy Kubernetes on top of containerd.

  Overview of the core components on the Kubernetes master and worker nodes

  Master node components

  The master node is the Kubernetes control node. Three components on it are indispensable: the apiserver, the scheduler, and the controller-manager. etcd is the cluster's database and a core component in its own right, usually run as a separate cluster; it stores the configuration and state of all the cluster's components.

  apiserver: the apiserver accepts requests, authenticates users, and performs admission control on submitted resource requests. It is the single entry point for managing the cluster and the only component that reads and writes etcd. The other components hold TCP sessions to the apiserver and watch the resources they care about; when a relevant resource changes, the component fetches the corresponding configuration from etcd via the apiserver, acts on it, and writes the resulting state back to etcd, again through the apiserver. The authentication and admission flow works as follows: a user sends a management request to the apiserver over HTTPS; the apiserver first verifies the user's identity and legitimacy, mainly against the certificate the user presents, and considers the user legitimate if that information fully matches what is on record. In addition, the apiserver applies admission control to submitted resource requests, i.e. it checks their syntax and format; a request that does not satisfy the format or schema of the relevant API is rejected. Only requests that pass both authentication and admission control are persisted to or read from etcd, after which the other components discover the related events through the watch mechanism and carry out the request. In short, the apiserver accepts requests (from administrators and from the components connecting to it), authenticates them, and applies admission control; no other component reads or writes etcd directly, all access is proxied through the apiserver.

  scheduler: the scheduler selects, for each Pod in the pending list, the most suitable node from the list of available nodes and writes the binding into etcd. The kubelet on the chosen node learns of the binding produced by the scheduler through the apiserver, fetches the Pod manifest, pulls the image, and starts the container. Scheduling goes through two phases. The first is the predicate (filtering) phase: it checks whether the volumes the Pod needs conflict with volumes already on a node, whether a candidate node has enough resources for the candidate Pod, whether the candidate node carries the labels required by the Pod's label selector, and whether node affinity, taints, and tolerations are satisfied; nodes that fail these checks are filtered out, and the rest enter the second phase. The second is the priority (scoring) phase: among the nodes left by the predicates, a scoring policy picks the most suitable one, for example preferring the node with the lowest resource consumption (CPU and memory are scored, with less consumption meaning a higher score, and the Pod goes to the highest scorer), preferring nodes with a given label, preferring the node with the most balanced resource utilization, and matching the Pod's toleration list against node taints. In short, the scheduler finds a suitable node for each Pod through the predicate and priority phases, then writes the scheduling decision into etcd.
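Most of the predicate inputs described above live in the Pod spec itself. As a rough sketch (the Pod name and the disktype label are made up for illustration, not part of this deployment), a manifest combining a label selector, a resource request, and a toleration could look like this:

```shell
# Write a demo Pod manifest whose fields feed the scheduler's predicates:
# nodeSelector -> label predicate, resources.requests -> resource-fit check,
# tolerations -> matched against node taints.
cat <<EOF > sched-demo-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  nodeSelector:
    disktype: ssd                  # only nodes with this label survive filtering
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"           # tolerate the control-plane taint
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "100m"                # node must have this much allocatable CPU
        memory: "128Mi"
EOF
cat sched-demo-pod.yaml
```

The manifest could then be submitted with kubectl apply -f sched-demo-pod.yaml; without the toleration, the control-plane taint applied by kubeadm would keep this Pod off the master.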

  controller-manager: the controller-manager runs the cluster's controllers, such as the replication controller, node controller, namespace controller, and service-account controller. As the cluster's internal management and control center, it is responsible for nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a node goes down unexpectedly, the controller-manager detects it promptly and runs an automated repair flow to keep the cluster's Pod replicas in their expected working state. The controller-manager checks node status every 5 seconds; if it receives no heartbeat from a node, it marks the node unreachable, waiting 40 seconds before applying that mark. If the node has not recovered 5 minutes after being marked unreachable, the controller-manager evicts all Pods on it and recreates them on other available nodes. In short, the controller-manager keeps Pod replicas in the user's desired state; when a Pod deviates from that state, the relevant controller restarts or recreates it until the actual state matches the desired one.
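The 5-second / 40-second / 5-minute timeline above corresponds to tunable kube-controller-manager flags; in a kubeadm cluster they would be added to the static Pod manifest /etc/kubernetes/manifests/kube-controller-manager.yaml. A sketch of just the relevant flags, written to a scratch file here (the values shown are the defaults; kubeadm generates the full manifest with many more flags):

```shell
# The defaults behind the node-health timeline described above, expressed as
# kube-controller-manager command-line flags.
cat <<EOF > controller-manager-flags.txt
--node-monitor-period=5s           # how often node status is checked
--node-monitor-grace-period=40s    # grace before marking a node unreachable
--pod-eviction-timeout=5m0s        # wait after that before evicting its pods
EOF
cat controller-manager-flags.txt
```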

  Node components

  kube-proxy: kube-proxy is the Kubernetes network proxy. It runs on every node, reflects the Services defined in the Kubernetes API on that node, and can do simple or round-robin TCP, UDP, and SCTP forwarding across a set of backends. A Service must be created through the apiserver API to configure the proxy; in effect, kube-proxy implements Kubernetes Service access by maintaining network rules on the host and forwarding connections. kube-proxy watches the apiserver for Service object changes and then programs iptables or IPVS rules to implement the forwarding. IPVS is more efficient than iptables; using IPVS mode requires installing the ipvsadm and ipset packages and loading the ip_vs kernel module on every node running kube-proxy. When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS modules are available on the node and falls back to iptables mode if they are not. In IPVS mode, kube-proxy watches Kubernetes Service and Endpoints objects, calls the host kernel's netlink interface to create IPVS rules accordingly, and periodically reconciles the IPVS rules with the Service and Endpoints objects so that IPVS state matches the desired state. When a Service is accessed, traffic is redirected to one of the backend Pods. IPVS uses a hash table as its underlying data structure and works in kernel space, which means it can redirect traffic faster and sync proxy rules with better performance; it also offers more load-balancing algorithms, for example rr (round-robin), lc (least connections), dh (destination hashing), sh (source hashing), sed (shortest expected delay), and nq (never queue). IPVS mode has been generally available since Kubernetes v1.11; its default scheduling algorithm is rr.
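In a kubeadm cluster, the proxy mode and IPVS scheduler described above are set in the kube-proxy ConfigMap in the kube-system namespace (editable with kubectl -n kube-system edit configmap kube-proxy). A sketch of the relevant excerpt, written to a scratch file here, assuming you want IPVS with round-robin:

```shell
# Excerpt of the kube-proxy configuration that switches it to IPVS mode;
# kube-proxy falls back to iptables if the ip_vs modules are missing.
cat <<EOF > kube-proxy-excerpt.yaml
mode: "ipvs"
ipvs:
  scheduler: "rr"     # rr/lc/dh/sh/sed/nq, as listed above
EOF
cat kube-proxy-excerpt.yaml
```

After changing the ConfigMap, the kube-proxy Pods must be restarted for the new mode to take effect.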

  kubelet: the kubelet is the agent that runs on every worker node and watches the Pods assigned to its node. Its main duties are: reporting the node's status to the master; accepting instructions and creating the Pod's containers via the container runtime; preparing the volumes the Pod needs; reporting the Pod's running status; and performing container health checks on the node.

  Client tools

  1. The command-line tool kubectl: a client for managing a Kubernetes cluster from the command line. By default it looks for a file named config in the .kube directory under the user's home directory; this file stores the credentials used to connect to the cluster. You can also point kubectl at a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
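The lookup order just described can be summarized as: the --kubeconfig flag wins, then the KUBECONFIG environment variable, then the default $HOME/.kube/config. A small sketch of the two non-default forms (the path shown is the default one; substitute whatever kubeconfig you actually have):

```shell
# Point kubectl at a kubeconfig via the environment; this affects every
# kubectl invocation in the current shell.
export KUBECONFIG=$HOME/.kube/config
echo "kubectl will read: ${KUBECONFIG}"

# Or per-invocation, without touching the environment:
#   kubectl --kubeconfig $HOME/.kube/config get nodes
```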

  2. Dashboard: a web-based Kubernetes client. From its pages you can view an overview of the applications in the cluster, create or modify Kubernetes resources, scale Deployments, trigger rolling updates, delete Pods, or use the wizard to create new applications.

  Deploying Kubernetes on Ubuntu 22.04 with containerd as the container engine

  1. System tuning

   Add kernel parameters

root@k8s-master01:~# cat <<EOF >>/etc/sysctl.conf
> net.ipv4.ip_forward=1
> vm.max_map_count=262144
> kernel.pid_max=4194303
> fs.file-max=1000000
> net.ipv4.tcp_max_tw_buckets=6000
> net.netfilter.nf_conntrack_max=2097152
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> vm.swappiness=0
> EOF
root@k8s-master01:~#

  Apply the configuration above so the kernel parameters take effect

root@k8s-master01:~# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
kernel.pid_max = 4194303
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
vm.swappiness = 0
root@k8s-master01:~# 

  Note: sysctl reports that bridge-nf-call-ip6tables and bridge-nf-call-iptables do not exist; this is because the br_netfilter module is not loaded.

  Load the br_netfilter kernel module

root@k8s-master01:~# lsmod |grep br_netfilter
root@k8s-master01:~# modprobe br_netfilter
root@k8s-master01:~# lsmod |grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter
root@k8s-master01:~# 

  Run sysctl -p again to see whether bridge-nf-call-ip6tables and bridge-nf-call-iptables are still reported as missing

root@k8s-master01:~# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
kernel.pid_max = 4194303
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
root@k8s-master01:~# 

  Load the kernel modules at boot

root@k8s-master01:~# cat <<EOF >>/etc/modules-load.d/modules.conf
> ip_vs
> ip_vs_lc
> ip_vs_lblc
> ip_vs_lblcr
> ip_vs_rr
> ip_vs_wrr
> ip_vs_sh
> ip_vs_dh
> ip_vs_fo
> ip_vs_nq
> ip_vs_sed
> ip_vs_ftp
> ip_vs_sh
> ip_tables
> ip_set
> ipt_set
> ipt_rpfilter
> ipt_REJECT
> ipip
> xt_set
> br_netfilter
> nf_conntrack
> overlay
> EOF
root@k8s-master01:~# 

  Note: the modules above are loaded at the next reboot; to load them immediately, run modprobe on each one, e.g. tail -23 /etc/modules-load.d/modules.conf | xargs -L1 modprobe;

  Install the IPVS management tool and some dependencies

root@k8s-master01:~# apt update && apt install bash-completion conntrack ipset ipvsadm jq libseccomp2 nfs-common psmisc rsync socat -y

  Configure resource limits

root@k8s-master01:~# tail -10 /etc/security/limits.conf
root    soft    core            unlimited
root    hard    core            unlimited
root    soft    nproc           1000000
root    hard    nproc           1000000
root    soft    nofile          1000000
root    hard    nofile          1000000
root    soft    memlock         32000
root    hard    memlock         32000
root    soft    msgqueue        8192000
root    hard    msgqueue        8192000
root@k8s-master01:~# 

  Note: the resource limits above take effect after the server is rebooted;
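After the reboot you can confirm the new limits are active for the root session, for example:

```shell
# Show the current soft limit on open files for this shell; once the
# limits.conf above is in effect, this should report 1000000 for root.
ulimit -Sn
```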

  Disable the swap device

root@k8s-master01:~# free -mh
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       330Mi       2.2Gi       1.0Mi       1.3Gi       3.2Gi
Swap:          3.8Gi          0B       3.8Gi
root@k8s-master01:~# swapoff -a
root@k8s-master01:~# free -mh  
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       329Mi       2.2Gi       1.0Mi       1.3Gi       3.2Gi
Swap:             0B          0B          0B
root@k8s-master01:~# cat /etc/fstab 
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-yecQxSAXrKdCNj1XNrQeaacvLAmKdL5SVadOXV0zHSlfkdpBEsaVZ9erw8Ac9gpm / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/80fe59b8-eb79-4ce9-a87d-134bc160e976 /boot ext4 defaults 0 1
#/swap.img      none    swap    sw      0       0
root@k8s-master01:~# 
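Note that swapoff -a only disables swap until the next boot; the swap entry in /etc/fstab must also be commented out, as it already is above. On a host where it is still active, a sed one-liner can do it. A sketch against a sample fstab (operating on a copy so nothing real is touched; on a real host you would target /etc/fstab):

```shell
# Comment out any swap entry so swap is not re-enabled at boot.
cat <<'EOF' > fstab.sample
/dev/mapper/ubuntu--vg-ubuntu--lv / ext4 defaults 0 1
/swap.img       none    swap    sw      0       0
EOF
sed -i '/\sswap\s/s/^/#/' fstab.sample
grep swap fstab.sample    # the swap line is now commented
```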

  2. Installing containerd with an automated script

root@k8s-master01:/usr/local/src# cat runtime-install.sh
#!/bin/bash
DIR=`pwd`
PACKAGE_NAME="docker-20.10.19.tgz"
DOCKER_FILE=${DIR}/${PACKAGE_NAME}
#read -p "Enter the regular user name for the docker server (default: docker): " USERNAME
if test -z ${USERNAME};then
  USERNAME=docker
fi
centos_install_docker(){
  grep "Kernel" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    /bin/echo  "Current system is `cat /etc/redhat-release`; starting system init, docker-compose setup, and docker install" && sleep 1
    systemctl stop firewalld && systemctl disable firewalld && echo "firewalld disabled" && sleep 1
    systemctl stop NetworkManager && systemctl disable NetworkManager && echo "NetworkManager disabled" && sleep 1
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && setenforce  0 && echo "selinux disabled" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf 
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf
    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/*  /usr/local/bin
    mkdir -p /etc/docker && \cp daemon.json /etc/docker

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service  /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.28.6 /usr/bin/docker-compose
    
    groupadd docker && useradd docker -s /sbin/nologin -g docker
    id -u  ${USERNAME} &> /dev/null
    if [ $? -ne 0 ];then
      useradd ${USERNAME}
      usermod ${USERNAME} -G docker
    else 
      usermod ${USERNAME} -G docker
    fi
    docker_install_success_info
  fi
}

ubuntu_install_docker(){
  grep "Ubuntu" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    /bin/echo  "Current system is `cat /etc/issue`; starting system init, docker-compose setup, and docker install" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf
    
    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/*  /usr/local/bin 
    mkdir -p /etc/docker && \cp daemon.json /etc/docker

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service  /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.28.6 /usr/bin/docker-compose

    groupadd docker && useradd docker -r -m -s /sbin/nologin -g docker
    id -u  ${USERNAME} &> /dev/null
    if [ $? -ne 0 ];then
      groupadd  -r  ${USERNAME}
      useradd -r -m -s /bin/bash -g ${USERNAME} ${USERNAME}
      usermod ${USERNAME} -G docker
    else
      usermod ${USERNAME} -G docker
    fi  
    docker_install_success_info
  fi
}

ubuntu_install_containerd(){
  DIR=`pwd`
  PACKAGE_NAME="containerd-1.6.20-linux-amd64.tar.gz"
  CONTAINERD_FILE=${DIR}/${PACKAGE_NAME}
  NERDCTL="nerdctl-1.3.0-linux-amd64.tar.gz"
  CNI="cni-plugins-linux-amd64-v1.2.0.tgz"
  RUNC="runc.amd64"
  
  mkdir -p /etc/containerd /etc/nerdctl
  tar xvf ${CONTAINERD_FILE} &&  cp bin/* /usr/local/bin/
  \cp runc.amd64   /usr/bin/runc && chmod  a+x /usr/bin/runc
  \cp config.toml  /etc/containerd/config.toml
  \cp containerd.service /lib/systemd/system/containerd.service

  #CNI 
  mkdir  /opt/cni/bin -p 
  tar xvf ${CNI}  -C  /opt/cni/bin/

  #nerdctl
  tar xvf ${NERDCTL}  -C /usr/local/bin/
  \cp nerdctl.toml /etc/nerdctl/nerdctl.toml

  containerd_install_success_info
}

containerd_install_success_info(){
    /bin/echo "Starting the containerd service and enabling it at boot!" 
    #start containerd  service
    systemctl daemon-reload && systemctl  restart  containerd && systemctl  enable containerd
    /bin/echo "containerd is:" `systemctl  is-active  containerd`
    sleep 0.5 && /bin/echo "containerd installation complete; welcome to the containerd container world!" && sleep 1
}

docker_install_success_info(){
    /bin/echo "Starting the docker service and enabling it at boot!" 
    systemctl  enable containerd.service && systemctl  restart containerd.service
    systemctl  enable docker.service && systemctl  restart docker.service
    systemctl  enable docker.socket && systemctl  restart docker.socket
    sleep 0.5 && /bin/echo "docker installation complete; welcome to the docker world!" && sleep 1
}

usage(){
    echo "Usage: [script containerd|docker]"
}

main(){
  RUNTIME=$1
  case ${RUNTIME}  in 
    docker)
      centos_install_docker  
      ubuntu_install_docker
      ;;
    containerd)
      ubuntu_install_containerd
      ;;
    *)
      usage;
    esac;
}

main $1
root@k8s-master01:/usr/local/src# 

  Note: the script above uses a few functions to describe the steps for installing docker, docker-compose, and containerd on CentOS and Ubuntu systems; to use it, place the required binary packages and configuration files in the same directory as the script.

  Run the script to install containerd

root@k8s-master01:/usr/local/src# sh runtime-install.sh containerd
bin/
bin/containerd-shim
bin/containerd-shim-runc-v1
bin/containerd-stress
bin/containerd
bin/ctr
bin/containerd-shim-runc-v2
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local
nerdctl
containerd-rootless-setuptool.sh
containerd-rootless.sh
Starting the containerd service and enabling it at boot!
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
containerd is: active
containerd installation complete; welcome to the containerd container world!
root@k8s-master01:/usr/local/src# 

  Verify: did containerd install successfully, and does nerdctl work?

root@k8s-master01:/usr/local/src# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-04-12 08:13:02 UTC; 59s ago
       Docs: https://containerd.io
   Main PID: 1136 (containerd)
      Tasks: 10
     Memory: 13.7M
        CPU: 609ms
     CGroup: /system.slice/containerd.service
             └─1136 /usr/local/bin/containerd

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.134059545Z" level=info msg="containerd successfully booted in 0.032924s"
Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: Started containerd container runtime.
Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.133927633Z" level=info msg="Start recovering state"
Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135031130Z" level=info msg="Start event monitor"
Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135070328Z" level=info msg="Start snapshots syncer"
Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135083377Z" level=info msg="Start cni network conf syncer for default"
Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135089191Z" level=info msg="Start streaming server"
Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.
Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.
Apr 12 08:14:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.
root@k8s-master01:/usr/local/src# nerdctl ps -a
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
root@k8s-master01:/usr/local/src# 

  Run an nginx container to see whether it starts properly

root@k8s-master01:/usr/local/src# nerdctl ps -a
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
root@k8s-master01:/usr/local/src# nerdctl run -d -p 80:80 nginx
docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:dbf632af6963e56f6b3fc4196578b75742482490c236f5009b3e68cf93a62997:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:bfb112db4075460ec042ce13e0b9c3ebd982f93ae0be155496d050bb70006750: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9862f2ee2e8cd9dab487d7dc2152a3f76cb503772dfb8e830973264340d6233e:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f1f26f5702560b7e591bef5c4d840f76a232bf13fd5aefc4e22077a1ae4440c7:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:86b2457cc2b0d68200061e3420623c010de5e6fb184e18328a46ef22dbba490a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7f7f30930c6b1fa9e421ba5d234c3030a838740a22a42899d3df5f87e00ea94f:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:e1eeb0f1c06b25695a5b9df587edf4bf12a5af9432696811dd8d5fcfd01d7949:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:2836b727df80c28853d6c505a2c3a5959316e48b1cff42d98e70cb905b166c82:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 18.5s                                                                    total:  54.4 M (2.9 MiB/s)                                       
83f7da147436d7f621fd85e9546f44beed9511525f17bf1dc2c01230108d31a9
root@k8s-master01:/usr/local/src# nerdctl ps 
CONTAINER ID    IMAGE                             COMMAND                   CREATED           STATUS    PORTS                 NAMES
83f7da147436    docker.io/library/nginx:latest    "/docker-entrypoint.…"    12 seconds ago    Up        0.0.0.0:80->80/tcp    nginx-83f7d
root@k8s-master01:/usr/local/src# 

  Browse to master01's IP address on port 80 to see whether nginx responds

  Note: the nginx container we just started is reachable, which means the containerd environment deployed by the automated script is ready;

  Delete the test container

root@k8s-master01:/usr/local/src# nerdctl ps 
CONTAINER ID    IMAGE                             COMMAND                   CREATED          STATUS    PORTS                 NAMES
83f7da147436    docker.io/library/nginx:latest    "/docker-entrypoint.…"    4 minutes ago    Up        0.0.0.0:80->80/tcp    nginx-83f7d
root@k8s-master01:/usr/local/src# nerdctl rm -f 83f7da147436
83f7da147436
root@k8s-master01:/usr/local/src# nerdctl ps -a
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
root@k8s-master01:/usr/local/src# 

  3. Configure the apt repository and install kubeadm, kubelet, and kubectl

root@k8s-master01:~# apt-get update && apt-get install -y apt-transport-https
root@k8s-master01:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg |apt-key add -
root@k8s-master01:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@k8s-master01:~# 

  Install kubeadm, kubelet, and kubectl

root@k8s-master01:~# apt update && apt-get install -y kubeadm=1.26.3-00 kubectl=1.26.3-00 kubelet=1.26.3-00

  Note: after adding a repository you must run apt update to refresh the package index; you can then list every kubeadm version in the repository with apt-cache madison kubeadm. Specify a version as above to install that exact release, or omit it to install the newest one in the repository; to keep apt from upgrading these packages later, you can pin them with apt-mark hold kubelet kubeadm kubectl;

  Verify the kubeadm version

root@k8s-master01:~# kubeadm  version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:38:47Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master01:~# 

  List the images required to deploy Kubernetes

root@k8s-master01:~# kubeadm config images list  --kubernetes-version v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
root@k8s-master01:~# 

  Note: these are the images needed to deploy Kubernetes v1.26.3. In mainland China they cannot be downloaded without a proxy; there are two workarounds: first, use the mirror images in Alibaba Cloud's registry; second, use a proxy;

  Option one: use the images from the Alibaba Cloud registry

root@k8s-master01:~# kubeadm config images list  --kubernetes-version v1.26.3 --image-repository="registry.aliyuncs.com/google_containers"
registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
registry.aliyuncs.com/google_containers/kube-proxy:v1.26.3
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3
root@k8s-master01:~# 

  Note: the --image-repository option specifies the image registry; use the same option when pulling the images and when initializing the cluster's master node;
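The same settings can also be kept in a kubeadm configuration file and consumed with kubeadm init --config instead of a long flag list. A sketch matching the options used in this article (the file name kubeadm-config.yaml is just a convention):

```shell
# ClusterConfiguration equivalent of the kubeadm init flags used later;
# usable as: kubeadm init --config kubeadm-config.yaml
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.100.0.0/16
  serviceSubnet: 10.200.0.0/16
  dnsDomain: cluster.local
EOF
cat kubeadm-config.yaml
```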

  Option two: export proxy settings into the current terminal's environment variables

root@k8s-master01:~# export https_proxy=http://192.168.0.80:8123
root@k8s-master01:~# export http_proxy=http://192.168.0.80:8123 
root@k8s-master01:~# 

  Note: substitute your own proxy server's IP address and port;

  Verify that the current terminal now egresses through the proxy server's IP
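A quick way to confirm the variables are set in the current shell (checking the actual egress IP would additionally require something like curl against an external IP-echo service):

```shell
# Set and inspect the proxy variables; the address is an example --
# substitute your own proxy server's IP and port.
export https_proxy=http://192.168.0.80:8123
export http_proxy=http://192.168.0.80:8123
env | grep -i _proxy
```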

  Test: use the nerdctl command-line tool to pull the registry.k8s.io/etcd:3.5.6-0 image and see whether it downloads normally

  Note: images can now be pulled directly from Google's registry without problems; since this route is fairly slow, the domestic mirror registry is recommended;

  Download the images required by Kubernetes, pulling from the Alibaba Cloud registry

root@k8s-master01:~# kubeadm config images pull  --kubernetes-version v1.26.3 --image-repository="registry.aliyuncs.com/google_containers"
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3
root@k8s-master01:~# nerdctl images
REPOSITORY                                                         TAG        IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
registry.aliyuncs.com/google_containers/coredns                    v1.9.3     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB
registry.aliyuncs.com/google_containers/coredns                    <none>     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB
registry.aliyuncs.com/google_containers/etcd                       3.5.6-0    dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB
registry.aliyuncs.com/google_containers/etcd                       <none>     dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.26.3    b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB
registry.aliyuncs.com/google_containers/kube-apiserver             <none>     b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.26.3    28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB
registry.aliyuncs.com/google_containers/kube-controller-manager    <none>     28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.26.3    d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB
registry.aliyuncs.com/google_containers/kube-proxy                 <none>     d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.26.3    ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB
registry.aliyuncs.com/google_containers/kube-scheduler             <none>     ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB
registry.aliyuncs.com/google_containers/pause                      3.9        7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB
registry.aliyuncs.com/google_containers/pause                      <none>     7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB
<none>                                                             <none>     b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB
<none>                                                             <none>     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB
<none>                                                             <none>     ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB
<none>                                                             <none>     d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB
<none>                                                             <none>     28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB
<none>                                                             <none>     7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB
<none>                                                             <none>     dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB
root@k8s-master01:~# 

  4. Cluster initialization

  Initialize the master node

root@k8s-master01:~# kubeadm init --apiserver-advertise-address=192.168.0.71 \
>              --apiserver-bind-port=6443 \
>              --kubernetes-version=v1.26.3 \
>              --pod-network-cidr=10.100.0.0/16 \
>              --service-cidr=10.200.0.0/16 \
>              --service-dns-domain=cluster.local \
>              --image-repository="registry.aliyuncs.com/google_containers" \
>              --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01.ik8s.cc kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.200.0.1 192.168.0.71]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01.ik8s.cc localhost] and IPs [192.168.0.71 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.ik8s.cc localhost] and IPs [192.168.0.71 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.504393 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.ik8s.cc as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01.ik8s.cc as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xc1xea.briuce4ykh8qulcn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.71:6443 --token xc1xea.briuce4ykh8qulcn \
        --discovery-token-ca-cert-hash sha256:ef06b68c35354849f25b985efb36eefc91dbc6cc1a7591537dd563cfd13e7504 
root@k8s-master01:~# 

  Note: output like the above means the Kubernetes master node initialized successfully;

  Create the .kube directory under the user's home directory and copy in the config file

root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-master01:~# 

  Verify that kubectl works

root@k8s-master01:~# kubectl get nodes 
NAME                   STATUS     ROLES           AGE   VERSION
k8s-master01.ik8s.cc   NotReady   control-plane   79s   v1.26.3
root@k8s-master01:~# kubectl get pods -n kube-system 
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-5bbd96d687-822lh                       0/1     Pending   0          71s
coredns-5bbd96d687-mxvth                       0/1     Pending   0          71s
etcd-k8s-master01.ik8s.cc                      1/1     Running   0          85s
kube-apiserver-k8s-master01.ik8s.cc            1/1     Running   0          85s
kube-controller-manager-k8s-master01.ik8s.cc   1/1     Running   0          87s
kube-proxy-bt79n                               1/1     Running   0          71s
kube-scheduler-k8s-master01.ik8s.cc            1/1     Running   0          85s
root@k8s-master01:~# 

  Note: output like the above means kubectl is working;

  Join the worker nodes

root@k8s-node01:~# kubeadm join 192.168.0.71:6443 --token xc1xea.briuce4ykh8qulcn \
>         --discovery-token-ca-cert-hash sha256:ef06b68c35354849f25b985efb36eefc91dbc6cc1a7591537dd563cfd13e7504 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node01:~# 

  Note: output like the above means the node joined successfully;

  Verify: run kubectl get nodes on the master node to confirm the nodes have joined the cluster

root@k8s-master01:~# kubectl get nodes
NAME                   STATUS     ROLES           AGE     VERSION
k8s-master01.ik8s.cc   NotReady   control-plane   4m31s   v1.26.3
k8s-node01.ik8s.cc     NotReady   <none>          68s     v1.26.3
k8s-node02.ik8s.cc     NotReady   <none>          69s     v1.26.3
root@k8s-master01:~# 

  Tip: both worker nodes have now joined the cluster, but their status is NotReady because no network plugin has been deployed yet.

  5、Deploy the Calico network plugin

   Download the Calico deployment manifest

root@k8s-master01:~# wget https://docs.projectcalico.org/v3.25/manifests/calico.yaml --no-check-certificate
--2023-04-13 12:56:42--  https://docs.projectcalico.org/v3.25/manifests/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 34.142.149.67, 52.74.166.77, 2406:da18:880:3800::c8, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|34.142.149.67|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://docs.tigera.io/archive/v3.25/manifests/calico.yaml [following]
--2023-04-13 12:56:43--  https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
Resolving docs.tigera.io (docs.tigera.io)... 34.126.184.144, 18.139.194.139, 2406:da18:880:3800::c8, ...
Connecting to docs.tigera.io (docs.tigera.io)|34.126.184.144|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 238089 (233K) [text/yaml]
Saving to: ‘calico.yaml’

calico.yaml                                100%[=======================================================================================>] 232.51K  28.6KB/s    in 8.1s    

2023-04-13 12:56:52 (28.6 KB/s) - ‘calico.yaml’ saved [238089/238089]

root@k8s-master01:~# 

  Modify calico.yaml

  Tip: the pod CIDR configured here must match the pod network specified during cluster initialization.

  Specify the network interface in calico.yaml

  Tip: set this according to the NIC names of the servers in your own environment; all of my servers use ens33. Note that when editing a YAML file you must be careful to keep the indentation correct.
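  The two screenshots for these edits are not reproduced here. Both changes can be sketched with sed; the demo below runs against a minimal excerpt of calico.yaml rather than the full manifest, and the CIDR value 10.100.0.0/16 is a stand-in for whatever pod network your `kubeadm init` actually used (only ens33 is taken from this environment):

```shell
# Minimal excerpt of the relevant env entries (in the real calico.yaml
# these sit under the calico-node container's env list):
cat > /tmp/calico-excerpt.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# 1) Uncomment CALICO_IPV4POOL_CIDR and set it to the pod CIDR used at
#    "kubeadm init" (10.100.0.0/16 is a placeholder, not the real value):
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.100.0.0/16"|' \
       /tmp/calico-excerpt.yaml
# 2) Pin interface autodetection to a specific NIC (ens33 here) by
#    appending an IP_AUTODETECTION_METHOD env entry at the same level:
cat >> /tmp/calico-excerpt.yaml <<'EOF'
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"
EOF
cat /tmp/calico-excerpt.yaml
```

  `IP_AUTODETECTION_METHOD: interface=ens33` is Calico's documented way to bind node address detection to one NIC, which avoids Calico picking the wrong interface on multi-homed hosts.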

  Check the images Calico requires

root@k8s-master01:~# cat calico.yaml |grep image:
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/kube-controllers:v3.25.0
root@k8s-master01:~#

  Pull the images in advance

root@k8s-master01:~# nerdctl pull docker.io/calico/cni:v3.25.0
WARN[0000] skipping verifying HTTPS certs for "docker.io" 
docker.io/calico/cni:v3.25.0:                                                     resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:ec0fa42b4d03398995800b44131b200aee2c76354d405d5d91689ec99cc70c56: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8833c0c1f858ee768478151ab71b3b0a3eeae160963b7d006c05ec9b493e8940:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8729f736e48f5c644d03591958297cd4c1942b5aaf451074f8cd80bac898149a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:79eb57bec78a8d14d1085acffe5577fe88b470d38dde295a1aba66d17e663d61:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bc84ed7b6a651f36d1486db36f1c2c1181b6c14463ea310823e6c2f69d0af100:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:84d025afc533dc367b05ee95125697adff781f24ef1366c522d8d7f65df0319b:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:ae5822c70daca619af58b197f6a3ea6f7cac1b785f6fbea673fb37be4853f6d5:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:df79b6dbf625547000b30f4c62ac5c5133fcb22b7d85d16b6f4bbb3c7733fc27:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:5e4c3414e9caf71bcc544ba2d483d0f05818e82c6a1b900e55ccea1c635fbd7b:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 31.5s                                                                    total:  83.9 M (2.7 MiB/s)                                       
root@k8s-master01:~# nerdctl pull docker.io/calico/node:v3.25.0
WARN[0000] skipping verifying HTTPS certs for "docker.io" 
docker.io/calico/node:v3.25.0:                                                    resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:56db28c3632192f56a1ff1360b83ef640fc8f41fa21a83126194811713e2f022: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4044e890e577f930490efd4f931549733818f28b5d5f8c63f47617e19a48a177:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:2db093bd3f50f881bd382b13a13d661699ca335fea1a83d167528f85db2e74cd:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 39.9s                                                                    total:  83.1 M (2.1 MiB/s)                                       
root@k8s-master01:~# nerdctl pull docker.io/calico/kube-controllers:v3.25.0
WARN[0000] skipping verifying HTTPS certs for "docker.io" 
docker.io/calico/kube-controllers:v3.25.0:                                        resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:095484425365e4eac24abfdd28ba14d133ffcc782c36b5d4f08533ef75ee91e4: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:a26da0f61ecbf48440a5921ea6bc8bafbebc76f139cb43387e6e6a3987505fda:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0a74accf7d2383a32999addc064ca91f45ad9c94fda36ffe19b58110cf6c43eb:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:cbb0c534f93adbeeb07299bec2596c609cc5828830830fd3b8897580f6d9ff50:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:107bccf16111c3f6c222bd3979f1fbd9102b2cf8b6d205a3fbaf5dabe7ecfc71:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:45e5e2183223e3fb71640062e7bf8e22d72906dc71118746f3b83ba69c550d14:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9f79d0f3e4006841b078828fb7c68d1c974236839d25c45aaaf06445033a79dc:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:c5c5f58ee63d0b207f01342606edd6f09c09e49e46bb4d990edbc44f5f4beac5:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4b89205dc639f7d14e32893b3a4051fe6f7f6e8ed4848a74c2c7531038fce120:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:afd04547218a7eed341a7ef3b72bbac362970a4c429b348e7b71b47995cf48ed:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7bc3468f50bc15474a6d5c338778467c7680043a3bac13b0789f6bda3c9c8c50:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:1724dd4f31db707f61068ee56a270ece4fa8f604b64935cf02fc5a9dac7da88d:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:b57460b249bbbb3e002c07b80d4311a42a8c2ce6ca25fc7c68a3d240afc290be:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:ec7526ad34c20113824da392a3ff4646f38cd5c884ad2871817b1271cf272afe:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 78.1s                                                                    total:  29.8 M (391.0 KiB/s)                                     
root@k8s-master01:~#
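  Rather than typing each pull by hand (and repeating it on every node), the image list can be driven directly from the manifest. A sketch, shown here as a dry run against a captured copy of the grep output above so it is safe to inspect; on the real host, point it at calico.yaml and drop the echo to actually pull:

```shell
# Captured copy of the "grep image:" output from calico.yaml:
cat > /tmp/calico-images.txt <<'EOF'
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/kube-controllers:v3.25.0
EOF
# De-duplicate and print one pull command per image (dry run; replace
# /tmp/calico-images.txt with calico.yaml and remove the echo to run):
awk '/image:/{print $2}' /tmp/calico-images.txt | sort -u \
  | while read -r img; do echo nerdctl pull "$img"; done
```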

  Apply the Calico manifest on the k8s master

root@k8s-master01:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
root@k8s-master01:~# 

  Verification: are all pods Running and all nodes Ready?

root@k8s-master01:~# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
k8s-master01.ik8s.cc   Ready    control-plane   44m   v1.26.3
k8s-node01.ik8s.cc     Ready    <none>          40m   v1.26.3
k8s-node02.ik8s.cc     Ready    <none>          40m   v1.26.3
root@k8s-master01:~# kubectl get pods -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-nmr7d        1/1     Running   0          39s
kube-system   calico-node-42rcw                              1/1     Running   0          39s
kube-system   calico-node-lwn5w                              1/1     Running   0          39s
kube-system   calico-node-zcnzb                              1/1     Running   0          39s
kube-system   coredns-5bbd96d687-822lh                       1/1     Running   0          44m
kube-system   coredns-5bbd96d687-mxvth                       1/1     Running   0          44m
kube-system   etcd-k8s-master01.ik8s.cc                      1/1     Running   0          44m
kube-system   kube-apiserver-k8s-master01.ik8s.cc            1/1     Running   0          44m
kube-system   kube-controller-manager-k8s-master01.ik8s.cc   1/1     Running   0          44m
kube-system   kube-proxy-67kjq                               1/1     Running   0          41m
kube-system   kube-proxy-bt79n                               1/1     Running   0          44m
kube-system   kube-proxy-l2zz8                               1/1     Running   0          41m
kube-system   kube-scheduler-k8s-master01.ik8s.cc            1/1     Running   0          44m
root@k8s-master01:~# 

  Tip: seeing all of the nodes and pods above in the Ready/Running state indicates the network plugin is deployed. One caveat: if calico-kube-controllers is scheduled onto a non-master node, copy the master's ~/.kube/config file to ~/.kube/config on that node, because calico-kube-controllers needs to connect to the k8s cluster during initialization; without that file, authentication fails and calico-kube-controllers does not initialize successfully.

  Copy the .kube directory from the master's home directory to the worker nodes, so calico-kube-controllers can still initialize if it is scheduled onto a worker

root@k8s-node02:~# scp 192.168.0.71:/root/.kube/config ./.kube/
config                                                                                                                                   100% 5636     5.2MB/s   00:00    
root@k8s-node02:~# 

  Tip: perform the same operation on k8s-node01.
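  With more workers this is easier to loop from the master. A dry-run sketch (the hostnames are the ones assumed in this environment; drop the echo prefixes to execute, and note it relies on ssh access from master to workers):

```shell
# Copy the admin kubeconfig from the master to each worker (dry run):
for n in k8s-node01 k8s-node02; do
  echo ssh root@"$n" "mkdir -p /root/.kube"
  echo scp /root/.kube/config root@"$n":/root/.kube/config
done
```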

  6、Deploy the official dashboard

  Download the official dashboard manifest

root@k8s-master01:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

  Check the images the manifest requires

root@k8s-master01:~# mv recommended.yaml dashboard-v2.7.0.yaml
root@k8s-master01:~# cat dashboard-v2.7.0.yaml|grep image:
          image: kubernetesui/dashboard:v2.7.0
          image: kubernetesui/metrics-scraper:v1.0.8
root@k8s-master01:~# 

  Pull the required images in advance

root@k8s-node01:~# nerdctl pull kubernetesui/dashboard:v2.7.0
WARN[0000] skipping verifying HTTPS certs for "docker.io" 
docker.io/kubernetesui/dashboard:v2.7.0:                                          resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8e052fd7e2d0aec4ef51e4505d006158414775ad5f0ea3e479ac0ba92f90dfff:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:ee3247c7e545df975ba3826979c7a8d73f1373cbb3ac47def3b734631cef2965:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 61.8s                                                                    total:  72.3 M (1.2 MiB/s)                                       
root@k8s-node01:~# nerdctl pull kubernetesui/metrics-scraper:v1.0.8
WARN[0000] skipping verifying HTTPS certs for "docker.io" 
docker.io/kubernetesui/metrics-scraper:v1.0.8:                                    resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:5866d2c04d960790300cbd8b18d67be6b930870d044dd75849c8c96191fe7580:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:978be80e3ee3098e11be2b18322822513d692988440ec1e74620e8539b07704d:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 52.1s                                                                    total:  18.8 M (370.1 KiB/s)                                     
root@k8s-node01:~# 

  Apply the manifest

root@k8s-master01:~# kubectl apply -f dashboard-v2.7.0.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@k8s-master01:~# 

  Verify that the dashboard pods are Running

  Tip: both dashboard pods being in the Running state indicates the dashboard was deployed successfully.

  Create a user and secret

root@k8s-master01:~# cat admin-user.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
root@k8s-master01:~# cat admin-secret.yaml 
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user
  namespace: kubernetes-dashboard 
  annotations:
    kubernetes.io/service-account.name: "admin-user"
root@k8s-master01:~# 

  Apply the manifests above

root@k8s-master01:~# kubectl apply -f admin-user.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@k8s-master01:~# kubectl apply -f admin-secret.yaml          
secret/dashboard-admin-user created
root@k8s-master01:~# 

  Verification: view the admin-user token
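  The screenshot for this step is omitted here. The token lives in the dashboard-admin-user Secret created above and is stored base64-encoded; a sketch (the kubectl line requires the cluster from this walkthrough, while the runnable part below only demonstrates the decode step on a stand-in value):

```shell
# On the control plane (names taken from the manifests above):
#   kubectl -n kubernetes-dashboard get secret dashboard-admin-user \
#     -o jsonpath='{.data.token}' | base64 -d
#
# The jsonpath output is base64; demonstrate the decode on a stand-in:
token_b64=$(printf 'eyJhbGciOiJSUzI1NiJ9.demo-token' | base64)
printf '%s' "$token_b64" | base64 -d
```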

  Tip: being able to query the user and token information on the cluster shows that the user and Secret were created successfully.

  Check the dashboard Service

  Tip: as the screenshot shows, the dashboard Service uses a NodePort that maps host port 30000 to the pod's port 443, which means the dashboard can be reached on port 30000 of any cluster node.
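  The stock recommended.yaml ships the kubernetes-dashboard Service as ClusterIP, so exposing it on port 30000 as described takes a small edit to that Service. A sketch on a minimal excerpt (in practice the edit goes into dashboard-v2.7.0.yaml before `kubectl apply`; nodePort 30000 sits inside the default 30000-32767 NodePort range):

```shell
# Minimal excerpt of the kubernetes-dashboard Service spec:
cat > /tmp/dashboard-svc.yaml <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
# Switch the Service to NodePort and pin nodePort 30000 (GNU sed):
sed -i -e 's|^spec:|spec:\n  type: NodePort|' \
       -e 's|targetPort: 8443|targetPort: 8443\n      nodePort: 30000|' \
       /tmp/dashboard-svc.yaml
cat /tmp/dashboard-svc.yaml
```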

  Verification: access port 30000 on any cluster node and confirm the dashboard is reachable.

  Log in to the dashboard with the token

  Tip: being able to log in to the dashboard with the token and see cluster information shows that the dashboard deployment is fine, and that the k8s cluster built on the containerd engine with kubeadm is working as well.
