Setting Up a Highly Available Kubernetes Cluster
High availability here applies only to the api-server, using nginx + keepalived: nginx provides layer-4 load balancing and keepalived provides the VIP (virtual IP).
The OS is openEuler 22.03 LTS.
1. Preparation
Since the machine has only 16 GB of RAM, I use 3 masters + 1 node.
Hostname | IP | VIP |
---|---|---|
master01 | 192.168.200.163 | 192.168.200.200 |
master02 | 192.168.200.164 | 192.168.200.200 |
master03 | 192.168.200.165 | 192.168.200.200 |
node | 192.168.200.166 | |
1.1 Modify the host configuration (all nodes)
- Set the hostname
- Disable the firewall and SELinux
- Disable swap
- Configure time synchronization
Since there are several hosts, only the commands for master01 are shown.
# Set the hostname
[root@localhost ~]# hostnamectl set-hostname master01
[root@localhost ~]# bash
# Disable the firewall and SELinux
[root@master01 ~]# systemctl disable --now firewalld
[root@master01 ~]# setenforce 0
[root@master01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
# Disable swap (commenting out the swap entry in /etc/fstab keeps it off after a reboot)
[root@master01 ~]# swapoff -a
[root@master01 ~]# sed -i '/swap/s/^/#/' /etc/fstab
# Configure time synchronization
[root@master01 ~]# yum install chrony -y
[root@master01 ~]# systemctl enable --now chronyd
[root@master01 ~]# chronyc sources
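Optionally, if the hostnames are not resolvable through DNS, you can map them in /etc/hosts on every node (a sketch using the addresses from the table above):
[root@master01 ~]# cat >> /etc/hosts << 'EOF'
192.168.200.163 master01
192.168.200.164 master02
192.168.200.165 master03
192.168.200.166 node
EOF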
1.2 Enable ipvs (all nodes)
[root@master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules << 'END'
> #!/bin/bash
> ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
> for kernel_module in ${ipvs_modules}; do
>     # only load modules that actually exist for this kernel
>     /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
>     if [ $? -eq 0 ]; then
>         /sbin/modprobe ${kernel_module}
>     fi
> done
> END
Note the quoted 'END': it keeps the shell from expanding ${ipvs_modules} and $? while the file is being written.
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]# bash /etc/sysconfig/modules/ipvs.modules
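To confirm that the modules actually loaded, you can list them:
[root@master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack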
1.3 Configure the k8s yum repository (all nodes)
# Search for kubernetes on the Huawei mirror site to find this repo
[root@master01 ~]# cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note the quoted 'EOF': with an unquoted delimiter the shell would expand $basearch to an empty string while writing the file. On openEuler, if $basearch is not expanded by yum, replace it with your architecture, e.g. x86_64.
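To verify that the repository resolves, you can list the available kubeadm versions (the exact versions shown depend on the mirror):
[root@master01 ~]# yum list kubeadm --showduplicates | tail -n 3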
2. Install docker (all nodes)
The highest Kubernetes version openEuler currently supports is 1.23, which still uses the dockershim runtime, so docker needs to be installed.
2.1 Install
[root@master01 ~]# yum install docker -y
2.2 Modify the docker configuration
[root@master01 ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
This switches docker's cgroup driver to systemd, the driver kubelet expects.
2.3 Restart docker
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
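To confirm the change took effect, check the cgroup driver reported by docker; it should now say systemd:
[root@master01 ~]# docker info | grep -i 'cgroup driver'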
3. Configure high availability (all master nodes)
3.1 Install the packages
[root@master01 ~]# yum install nginx nginx-all-modules keepalived -y
3.2 Configure nginx load balancing
Add the following to the nginx configuration file (the stream module used below comes from the nginx-all-modules package installed above):
[root@master01 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Add this block, outside the http block: we are doing layer-4 load balancing, not layer 7
stream {
    upstream k8s-apiserver {
        server 192.168.200.163:6443;
        server 192.168.200.164:6443;
        server 192.168.200.165:6443;
    }
    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}
# End of the added block
# Check the syntax
[root@master01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# Restart
[root@master01 ~]# systemctl restart nginx
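To confirm the layer-4 proxy is up, check that nginx is listening on port 16443 (the apiservers themselves are not running yet at this point):
[root@master01 ~]# ss -lntp | grep 16443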
3.3 Configure keepalived
# Back up the original configuration
[root@master01 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Replace the entire original contents of the file with the following
[root@master01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id master1
}
vrrp_instance Nginx {
    state MASTER # only master01 is MASTER; the other masters use BACKUP
    interface ens33 # the name of the network interface
    virtual_router_id 51
    priority 200 # the other nodes must use values lower than this, and not the same as each other
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.200 # the VIP
    }
}
# Restart
[root@master01 ~]# systemctl restart keepalived.service
Note: only master01's state is MASTER; the other two nodes must be set to BACKUP, and their priority must be lower than master01's.
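For reference, master02's vrrp_instance would differ from master01's only in these two settings (the priority values are just example choices, as long as they stay below 200 and are unique per node):
vrrp_instance Nginx {
    state BACKUP # on master02 and master03
    priority 150 # e.g. 150 on master02 and 100 on master03
    ...          # everything else identical to master01
}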
3.4 Verify keepalived
# Check ens33 on master01
[root@master01 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:8d:ce:8a brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 192.168.200.163/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.200.200/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::ce91:fe4e:625d:6e32/64 scope link noprefixroute
valid_lft forever preferred_lft forever
It now holds both its own IP and the VIP.
# Stop keepalived
[root@master01 ~]# systemctl stop keepalived.service
[root@master01 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:8d:ce:8a brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 192.168.200.163/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::ce91:fe4e:625d:6e32/64 scope link noprefixroute
valid_lft forever preferred_lft forever
After stopping keepalived the VIP is gone; switch to master02 and check:
[root@master02 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:2d:b0:8a brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 192.168.200.164/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.200.200/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::f409:2f97:f02e:a8d4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
The VIP has moved to master02. Start keepalived on master01 again and the VIP will move back, because master01's priority is higher.
[root@master01 ~]# systemctl restart keepalived.service
[root@master01 ~]# systemctl enable --now nginx keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
4. Deploy k8s
openEuler currently supports only version 1.23, so the container runtime is docker. Commands that do not name a node are run on master01.
4.1 Install the packages (all nodes; the worker also needs kubeadm and kubelet in order to join)
[root@master01 ~]# yum install kubeadm kubelet kubectl -y
[root@master01 ~]# systemctl enable kubelet
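You can confirm the installed versions match on every node (kubeadm, kubelet, and kubectl should all be the same 1.23.x release):
[root@master01 ~]# kubeadm version -o short
[root@master01 ~]# kubelet --version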
4.2 Generate the deployment file
[root@master01 ~]# kubeadm config print init-defaults > init.yaml
4.2.1 Modify the deployment file
[root@master01 ~]# vim init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.163 # change this to the node's own IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01 # set this to your hostname or IP; it becomes the node's name inside the cluster once deployed
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs: # add this whole list so the certificate is valid on all of these hosts and addresses
  - master01
  - master02
  - master03
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.200.163 # these three are the master IP addresses
  - 192.168.200.164
  - 192.168.200.165
  - 192.168.200.200 # the VIP must be listed as well
controlPlaneEndpoint: 192.168.200.200:16443 # add this line; the IP is the VIP
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: swr.cn-east-3.myhuaweicloud.com/hcie_openeuler # switch the image repository to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.1 # must match the installed kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # add this line
  serviceSubnet: 10.96.0.0/12
scheduler: {}
--- # add this whole section to run kube-proxy in ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
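Before pulling anything, you can sanity-check the file by listing the images kubeadm would use; a YAML mistake or a wrong imageRepository shows up here immediately:
[root@master01 ~]# kubeadm config images list --config ./init.yaml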
4.3 Pull the images in advance
# Pull the images ahead of the actual deployment
[root@master01 ~]# kubeadm config images pull --config ./init.yaml
4.4 Run the deployment
[root@master01 ~]# kubeadm init --upload-certs --config ./init.yaml
# If the installation fails, run kubeadm reset -f to reset the environment before running init again; re-running init directly will produce errors
After it succeeds, some information is printed:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of the control-plane node running the following command on each as root:
# Use this command to join additional master nodes
kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
--control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
# Use this command to join worker nodes
kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3
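Optionally, you can verify the whole HA path (VIP → nginx → apiserver) by querying the version endpoint through the VIP; -k skips certificate verification for this quick check:
[root@master01 ~]# curl -k https://192.168.200.200:16443/version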
4.5 Join the other master nodes to the cluster
The generated token is valid for only 24 hours. If it has expired and a node still needs to join the cluster, run:
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.200.200:16443 --token gb00dz.tevdizf7mxqx1egj --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3
The printed command joins a worker node directly; to join a master node, append --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
[root@master02 ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
--control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master03 ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3 \
--control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e
[root@master03 ~]# mkdir -p $HOME/.kube
[root@master03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can use --node-name to set the name a node gets in the cluster when joining.
4.6 Join the worker node to the cluster
[root@node ~]# kubeadm join 192.168.200.200:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3
4.7 View the cluster nodes
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 45m v1.23.1
master02 NotReady control-plane,master 27m v1.23.1
master03 NotReady control-plane,master 27m v1.23.1
node NotReady <none> 10s v1.23.1
5. Install the calico network plugin
Before a network plugin is installed the nodes stay NotReady; once it is installed they become Ready.
The latest version can be found in the official Calico installation documentation.
[root@master01 ~]# wget https://docs.projectcalico.org/archive/v3.23/manifests/calico.yaml
[root@master01 ~]# kubectl apply -f calico.yaml
After waiting a moment, check the node status again:
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 5h38m v1.23.1
master02 Ready control-plane,master 5h21m v1.23.1
master03 Ready control-plane,master 5h21m v1.23.1
node Ready <none> 4h53m v1.23.1
If the nodes are still not Ready after a long wait, you can list all pods
[root@master01 ~]# kubectl get pods -A
to see which ones have not come up; once the cause is found and fixed, the nodes will turn Ready.
6. Verify the cluster works
[root@master01 ~]# kubectl run web01 --image nginx:1.24
pod/web01 created
[root@master01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web01 1/1 Running 0 27s
The pod starts normally.
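As a final optional check, since kube-proxy was set to ipvs mode, the IPVS virtual servers can be inspected (this assumes the ipvsadm tool, which is not installed by default), and the test pod can then be removed:
[root@master01 ~]# yum install ipvsadm -y
[root@master01 ~]# ipvsadm -Ln
[root@master01 ~]# kubectl delete pod web01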