1. Prerequisites
- 8 Linux hosts: 2 for haproxy, 3 master nodes, and 3 worker nodes (the text says Ubuntu 20.04, but the yum/firewalld/SELinux commands below assume a RHEL-family system such as CentOS; substitute the apt equivalents on Ubuntu)
- Each host has at least 2 GB of memory and at least 2 CPU cores
- All hosts in the cluster can reach each other over the network
- No two nodes may share a hostname, MAC address, or product_uuid
- Swap. By default, kubelet refuses to start when it detects swap enabled on the node. kubelet has supported swap since 1.22, and in 1.28 that support is limited to cgroup v2. Unless kubelet is explicitly configured to tolerate swap, it must be disabled, e.g. with
swapoff -a
, which turns swap off until the next reboot.
1.1. Verify that each node's MAC address and product_uuid are unique
# Check on every node
root@K8SMS0001:~# ip link #check the MAC address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:b6:97:cd brd ff:ff:ff:ff:ff:ff
root@K8SMS0001:~# cat /sys/class/dmi/id/product_uuid #check that the product_uuid is unique
42364795-73ca-edde-a606-9ab6b26b6516
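Once the values from all nodes have been collected into one "node value" list, duplicates are easy to spot by sorting. A small sketch (not from the original write-up; the sample data below is made up):

```shell
# Collected "node product_uuid" pairs; in practice gather these over ssh.
collected=$(mktemp)
cat > "$collected" << 'EOF'
K8SMS0001 42364795-73ca-edde-a606-9ab6b26b6516
K8SMS0002 11111111-2222-3333-4444-555555555555
K8SMS0003 11111111-2222-3333-4444-555555555555
EOF
# Every value printed here exists on more than one node and must be fixed.
dups=$(awk '{print $2}' "$collected" | sort | uniq -d)
echo "$dups"
```

The same pipeline works for MAC addresses by collecting the `link/ether` values instead.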
2. Host system configuration
2.1. Hostname plan
Hostname | Role | IP address |
---|---|---|
K8SHA0001 | HA primary | 10.64.143.31 |
K8SHA0002 | HA backup | 10.64.143.32 |
K8SMS0001 | K8S-master01 | 10.64.143.33 |
K8SMS0002 | K8S-master02 | 10.64.143.34 |
K8SMS0003 | K8S-master03 | 10.64.143.35 |
K8SWK0001 | K8S-worker01 | 10.64.143.36 |
K8SWK0002 | K8S-worker02 | 10.64.143.37 |
K8SWK0003 | K8S-worker03 | 10.64.143.38 |
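The table above can be turned into hosts entries with a short loop; note that /etc/hosts expects "IP hostname" order. This sketch writes to a temp file instead of /etc/hosts so it is safe to run anywhere:

```shell
# Build the hosts entries from the plan table; on a real node append the
# result to /etc/hosts instead of a temp file.
hosts_out=$(mktemp)
for entry in \
    "10.64.143.31 K8SHA0001" "10.64.143.32 K8SHA0002" \
    "10.64.143.33 K8SMS0001" "10.64.143.34 K8SMS0002" "10.64.143.35 K8SMS0003" \
    "10.64.143.36 K8SWK0001" "10.64.143.37 K8SWK0002" "10.64.143.38 K8SWK0003"
do
    echo "$entry" >> "$hosts_out"
done
```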
2.2. Configure /etc/hosts
# Configure on every node
$ cat >> /etc/hosts << EOF
10.64.143.31 K8SHA0001
10.64.143.32 K8SHA0002
10.64.143.33 K8SMS0001
10.64.143.34 K8SMS0002
10.64.143.35 K8SMS0003
10.64.143.36 K8SWK0001
10.64.143.37 K8SWK0002
10.64.143.38 K8SWK0003
EOF
2.3. Disable SELinux and the firewall
$ systemctl stop firewalld
$ firewall-cmd --state
$ setenforce 0
$ getenforce
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
2.4. Time synchronization
$ crontab -e
*/5 * * * * ntpdate time1.aliyun.com
$ ntpdate time1.aliyun.com
2.5. Enable kernel IP forwarding and bridge filtering
# Create the bridge-filter and forwarding configuration file
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Load the br_netfilter module
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
# Verify the modules are loaded
# lsmod |grep br_netfilter
br_netfilter 28672 0
bridge 176128 1 br_netfilter
# lsmod | grep overlay
overlay 118784 0
# sysctl -a |grep 'bridge-nf-call-ip\|ip_forward = 1'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
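As a quick sanity check before moving on (a sketch, not one of the original steps), you can confirm that the configuration file defines every key kubeadm's preflight checks expect. The temp file below stands in for /etc/sysctl.d/k8s.conf:

```shell
# Stand-in for /etc/sysctl.d/k8s.conf; on a real node point $conf at that path.
conf=$(mktemp)
cat > "$conf" << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
missing=0
for key in net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward; do
    # Each required key must be present and set to 1.
    grep -q "^$key = 1" "$conf" || { echo "missing: $key"; missing=1; }
done
```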
2.6. Install ipset and ipvsadm
# yum install -y ipset ipvsadm
Configure the IPVS kernel modules to load
# cat >> /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Make the script executable and load the modules
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack
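/etc/sysconfig/modules is a RHEL-family convention. On any systemd distribution (including Ubuntu 20.04, if that is what you actually run) the same modules can instead be loaded at boot via modules-load.d; a sketch using a temp file as a stand-in for /etc/modules-load.d/ipvs.conf:

```shell
# Stand-in for /etc/modules-load.d/ipvs.conf; one module name per line.
modconf=$(mktemp)
cat > "$modconf" << 'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
```

On a real node, write the file to /etc/modules-load.d/ipvs.conf and run `systemctl restart systemd-modules-load.service` to load the modules immediately.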
2.7. Disable swap
# swapoff -a
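swapoff -a only lasts until the next reboot; to make the change persistent, comment out the swap entries in /etc/fstab. A sketch operating on a made-up fstab copy (on a real node, back up /etc/fstab and run the sed against it):

```shell
# Stand-in for /etc/fstab with an example swap entry.
fstab=$(mktemp)
cat > "$fstab" << 'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
# Prefix every uncommented line whose filesystem type is swap with '#'.
sed -ri 's/^[^#].*\sswap\s.*/#&/' "$fstab"
```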
3. Install and configure HAProxy and keepalived
The configuration can be taken from the 1.22.8 write-up.
# yum install haproxy keepalived -y
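The original defers to the 1.22.8 write-up for the actual configuration. For reference, a minimal, hypothetical haproxy fragment exposing the VIP 10.64.143.30:6443 (held by keepalived) and balancing across the three masters could look like the following; treat it as a sketch and verify the directives against your haproxy version:

```
frontend k8s-apiserver
    bind 10.64.143.30:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server K8SMS0001 10.64.143.33:6443 check
    server K8SMS0002 10.64.143.34:6443 check
    server K8SMS0003 10.64.143.35:6443 check
```

keepalived's job is only to float 10.64.143.30 between the two HA nodes; haproxy on whichever node holds the VIP forwards the API traffic.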
4. Install the container runtime
4.1. Download cri-containerd
containerd releases on GitHub
cri-containerd 1.7.14 download
# Download cri-containerd 1.7.14
# wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-1.7.14-linux-amd64.tar.gz
4.2. Install cri-containerd from the binary tarball
# mkdir containerd && tar -zxf cri-containerd-1.7.14-linux-amd64.tar.gz -C containerd/ && cp containerd/usr/local/bin/* /usr/local/bin/ && cp containerd/etc/systemd/system/containerd.service /usr/lib/systemd/system/ && systemctl daemon-reload && containerd --version
4.3. Generate and adjust the default containerd configuration
# Generate the containerd configuration file, then change the following settings in /etc/containerd/config.toml:
# mkdir /etc/containerd && containerd config default > /etc/containerd/config.toml
# vim /etc/containerd/config.toml
......
[plugins."io.containerd.grpc.v1.cri"]
  ......
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"  # change the pause image
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    ......
    runtime_type = "io.containerd.runc.v2"  # set the runtime_type, otherwise crictl images errors out later
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      ......
      SystemdCgroup = true  # use the systemd cgroup driver
# Restart and enable the containerd service
# systemctl restart containerd
# systemctl enable containerd
4.4. Download and install runc
runc releases on GitHub
runc 1.1.12 download
# wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
# mv runc.amd64 /usr/sbin/runc && chmod +x /usr/sbin/runc && runc -v
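Before installing a downloaded binary it is worth checking it against the sha256 checksum published on the release page. A sketch using a stand-in file and a locally computed checksum (substitute the real value from GitHub):

```shell
# Stand-in for the downloaded runc.amd64 binary.
f=$(mktemp)
printf 'stand-in for runc.amd64\n' > "$f"
# In practice, $expected comes from the release page, not from the file itself.
expected=$(sha256sum "$f" | awk '{print $1}')
# sha256sum -c exits non-zero (and complains) on a mismatch.
echo "$expected  $f" | sha256sum -c --quiet -
```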
5. Install kubeadm, kubelet, and kubectl
The following packages are needed on every machine:
- kubeadm: the command to bootstrap the cluster.
- kubelet: runs on every node in the cluster and starts Pods and containers.
- kubectl: the command-line tool for talking to the cluster.
kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions match the control plane that kubeadm installs. Otherwise you risk version skew, which can lead to unexpected errors and problems.
5.1. Add the Alibaba Cloud Kubernetes repository
# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
# yum clean all
# yum makecache
5.2. Install kubeadm, kubectl, and kubelet pinned to 1.28.8
# yum --showduplicates list kubeadm kubectl kubelet
# yum install -y kubelet-1.28.8 kubeadm-1.28.8 kubectl-1.28.8
5.3. Set the kubelet cgroup driver to systemd
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
# systemctl enable kubelet --now
6. Initialize the cluster
6.1. List the required images and pull them
Pull the images on every node first.
# kubeadm config images list
# crictl pull registry.aliyuncs.com/google_containers/pause:3.9
# crictl pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8
# crictl pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.8
# crictl pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.8
# crictl pull registry.aliyuncs.com/google_containers/kube-proxy:v1.28.8
# crictl pull registry.aliyuncs.com/google_containers/etcd:3.5.12-0
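The per-image pulls above can be scripted. This sketch only builds the command list (so it runs anywhere); drop the echo to actually pull. Note that `kubeadm config images list` for 1.28 also reports a coredns image (v1.10.1), which the list above omits:

```shell
repo=registry.aliyuncs.com/google_containers
# Build one "crictl pull" command per image the control plane needs.
cmds=$(for img in pause:3.9 kube-apiserver:v1.28.8 \
                  kube-controller-manager:v1.28.8 kube-scheduler:v1.28.8 \
                  kube-proxy:v1.28.8 etcd:3.5.12-0 coredns:v1.10.1; do
           echo "crictl pull $repo/$img"
       done)
echo "$cmds"
```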
6.2. Cluster initialization
root@K8SMS0001:~# kubeadm init --control-plane-endpoint 10.64.143.30:6443 --kubernetes-version=v1.28.8 --pod-network-cidr 172.16.0.0/16 --service-cidr 10.96.0.0/16 --service-dns-domain cluster.local --image-repository registry.aliyuncs.com/google_containers --upload-certs
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local sphqjdk8sms0001] and IPs [10.96.0.1 10.64.143.33 10.64.143.30]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost sphqjdk8sms0001] and IPs [10.64.143.33 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost sphqjdk8sms0001] and IPs [10.64.143.33 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.037389 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node sphqjdk8sms0001 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node sphqjdk8sms0001 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xy67nn.3yagma81us8dzatp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.64.143.30:6443 --token xy67nn.3yagma81us8dzatp \
--discovery-token-ca-cert-hash sha256:e4c4f8a7495c9f3a07e208200f355f1f6c97cfb7c5cd562f6758dbf172acef96 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.64.143.30:6443 --token xy67nn.3yagma81us8dzatp \
--discovery-token-ca-cert-hash sha256:e4c4f8a7495c9f3a07e208200f355f1f6c97cfb7c5cd562f6758dbf172acef96
6.3. Copy the kubeconfig
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get node
6.4. Join the remaining master and worker nodes
Join command for master nodes:
kubeadm join 10.64.143.30:6443 --token xy67nn.3yagma81us8dzatp \
--discovery-token-ca-cert-hash sha256:e4c4f8a7495c9f3a07e208200f355f1f6c97cfb7c5cd562f6758dbf172acef96 \
--control-plane
Join command for worker nodes:
kubeadm join 10.64.143.30:6443 --token xy67nn.3yagma81us8dzatp \
--discovery-token-ca-cert-hash sha256:e4c4f8a7495c9f3a07e208200f355f1f6c97cfb7c5cd562f6758dbf172acef96
7. Deploy the Calico network plugin
Official installation guide
Install the Tigera operator (the custom resources come in the next step):
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
Download the Calico custom-resources manifest
# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
# vim custom-resources.yaml
Change the pod CIDR to match the value passed to kubeadm init:
cidr: 172.16.0.0/16
# watch kubectl get pods -n calico-system
Once all the Calico pods are Running, the cluster is up.