1. Environment Preparation
K8s cluster role | IP | Hostname | Installed components |
master | 10.1.16.160 | hqiotmaster07l | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico |
master | 10.1.16.161 | hqiotmaster08l | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico |
master | 10.1.16.162 | hqiotmaster09l | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico |
worker | 10.1.16.163 | hqiotnode12l | kubelet, kube-proxy, docker, calico, coredns, ingress-nginx |
worker | 10.1.16.164 | hqiotnode13l | kubelet, kube-proxy, docker, calico, coredns, ingress-nginx |
worker | 10.1.16.165 | hqiotnode14l | kubelet, kube-proxy, docker, calico, coredns, ingress-nginx |
vip | 10.1.16.202 | - | nginx, keepalived |
1.1 Server environment initialization
# Required on both control-plane and worker nodes
# 1. Set the hostname (use the matching hostname on each node)
hostnamectl set-hostname master && bash
# 2. Add hosts entries
cat << EOF > /etc/hosts
10.1.16.160 hqiotmaster07l
10.1.16.161 hqiotmaster08l
10.1.16.162 hqiotmaster09l
10.1.16.163 hqiotnode12l
10.1.16.164 hqiotnode13l
10.1.16.165 hqiotnode14l
EOF
# 3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# 4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
# 5. Disable swap
swapoff -a   # temporary; to disable it permanently:
vi /etc/fstab
# Comment out this line: /mnt/swap swap swap defaults 0 0
free -m
Check that swap shows all zeros.
# 6. Configure time synchronization on every machine
yum install chrony -y
systemctl start chronyd && systemctl enable chronyd
chronyc sources
# 7. Create the /etc/modules-load.d/containerd.conf configuration file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Run the following commands to apply the configuration:
modprobe overlay
modprobe br_netfilter
# 8. Pass bridged IPv4 traffic to iptables chains
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
# 9. Prerequisites for enabling ipvs (do not enable IPVS mode if you plan to use Istio)
Make sure the ipset package is installed on every node; to make it easier to inspect the ipvs proxy rules, also install the management tool ipvsadm.
yum install -y ipset ipvsadm
Since ipvs has been merged into the kernel mainline, enabling ipvs for kube-proxy requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on every server node:
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Set permissions and load the modules:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The /etc/sysconfig/modules/ipvs.modules file created above ensures the required modules are loaded automatically after a node reboot.
If you get the error modprobe: FATAL: Module nf_conntrack_ipv4 not found.
this is because you are running a newer kernel (for example 5.x, while most tutorials assume a 3.x kernel); newer kernels have replaced nf_conntrack_ipv4 with nf_conntrack, so the correct configuration is as follows.
Run the following script on every server node:
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Set permissions and load the modules:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# 10. Apply the sysctl settings
sysctl --system
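To confirm the settings took effect (a quick check, not part of the original steps; the key names match the k8s.conf file above):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# each key should print "= 1"; if the bridge keys are missing, the br_netfilter module is not loaded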
2. Basic Package Installation
yum install -y gcc gcc-c++ make
yum install wget net-tools vim* nc telnet-server telnet curl openssl-devel libnl3-devel net-snmp-devel zlib zlib-devel pcre-devel openssl openssl-devel
# Adjust the shell command history size and the SSH idle timeout
vi /etc/profile
HISTSIZE=3000
TMOUT=3600
Save and exit, then run:
source /etc/profile
3. Docker Installation
Install the yum utility collection
yum install -y yum-utils
Add the Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Remove any previously installed Docker packages
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum list installed | grep docker
yum remove -y docker-ce.x86_64
rm -rf /var/lib/docker
rm -rf /etc/docker/
List the installable versions
yum list docker-ce --showduplicates | sort -r
Install the latest version
yum -y install docker-ce
Or install a specific version of docker-ce:
yum -y install docker-ce-23.0.3-1.el7
Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
Upload daemon.json to /etc/docker, then reload and restart:
systemctl daemon-reload
systemctl restart docker.service
docker info
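The daemon.json referenced above is not included in this document; a minimal sketch of what it might contain is shown below (the mirror URL and the insecure-registries entry for the 10.1.1.167 Harbor host are assumptions, adjust them for your environment):
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.1.1.167"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}
EOF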
Docker-related commands:
systemctl stop docker
systemctl start docker
systemctl enable docker
systemctl status docker
systemctl restart docker
docker info
docker --version
containerd --version
4. containerd Installation
Download the containerd binary package:
You can download it in advance on a machine with network access and then upload it to the servers.
wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz
The archive is already laid out in the directory structure recommended for the official binary deployment; it contains the systemd unit file, containerd, and the CNI deployment files.
Extract it into the system root directory:
tar -zvxf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /
Note: testing showed that the runc bundled in cri-containerd-cni-1.7.14-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7,
so download runc separately from the runc GitHub releases and replace the runc installed by the containerd package above:
wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64
install -m 755 runc.amd64 /usr/sbin/runc
runc -v
runc version 1.1.10
commit: v1.1.10-0-g18a0cb0f
spec: 1.0.2-dev
go: go1.20.10
libseccomp: 2.5.4
Next, generate the containerd configuration file:
rm -rf /etc/containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver makes nodes more stable under resource pressure, so configure containerd on every node to use the systemd cgroup driver.
Modify the configuration file /etc/containerd/config.toml generated above:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
# Point the pause image at the Aliyun mirror; without this it cannot be pulled
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
...
# sandbox_image = "k8s.gcr.io/pause:3.6"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# Configure the Harbor private registry
vi /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".auth]
username = "admin"
password = "Harbor12345"
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
endpoint = ["https://registry.aliyuncs.com/google_containers"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.1.1.167"]
endpoint = ["https://10.1.1.167"]
# Enable containerd at boot and start it
systemctl daemon-reload
systemctl enable --now containerd && systemctl restart containerd
# Test with crictl; make sure the version information is printed without any error output:
crictl version
Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.14
RuntimeApiVersion: v1
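Optionally, verify that the settings above were picked up; the dump command shows the effective configuration, and the test pull uses a hypothetical image path that you should replace with one that actually exists in your Harbor project:
containerd config dump | grep sandbox_image
# should show registry.aliyuncs.com/google_containers/pause:3.9
crictl pull 10.1.1.167/library/busybox:latest   # hypothetical image; exercises the Harbor credentials and TLS settings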
5. Installing and Configuring Kubernetes
5.1 Kubernetes high-availability design
To explain the high-availability configuration of the Kubernetes cluster clearly, the following design is used.
In this design, keepalived + nginx provide high availability for the k8s apiserver.
With the old approach of picking one master node as the primary and simply joining the other two masters to it, the cluster is not highly available: once the primary master goes down, the whole cluster becomes unavailable.
5.2 apiserver high availability with keepalived + nginx
Install and configure Nginx on all three master nodes
yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel
tar -zvxf nginx-1.27.0.tar.gz
cd nginx-1.27.0
Full build and installation
./configure --prefix=/usr/local/nginx --with-stream --with-http_stub_status_module --with-http_ssl_module
make && make install
ln -s /usr/local/nginx/sbin/nginx /usr/sbin/
nginx -v
cd /usr/local/nginx/sbin/
# Start the service
./nginx
# Stop the service
./nginx -s stop
# Check port 80
netstat -ntulp |grep 80
Create a systemd service to manage nginx
vi /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx - high performance web server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
[Install]
WantedBy=multi-user.target
Place nginx.service under /usr/lib/systemd/system, then:
systemctl daemon-reload
systemctl start nginx.service && systemctl enable nginx.service
systemctl status nginx.service
Edit the nginx configuration file (/usr/local/nginx/conf/nginx.conf):
#user nobody;
worker_processes 1;
#error_log logs/error.log;
error_log /var/log/nginx/error.log;
#error_log logs/error.log info;
pid /var/log/nginx/nginx.pid;
events {
worker_connections 1024;
}
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.1.16.160:6443 weight=5 max_fails=3 fail_timeout=30s;
server 10.1.16.161:6443 weight=5 max_fails=3 fail_timeout=30s;
server 10.1.16.162:6443 weight=5 max_fails=3 fail_timeout=30s;
}
server {
listen 16443; # nginx runs on the master nodes themselves, so this listen port must not be 6443 or it will conflict with the apiserver
proxy_pass k8s-apiserver;
}
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
#gzip on;
server {
listen 8080;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Restart nginx.service
systemctl restart nginx.service
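The apiservers are not running yet, but you can already confirm that the stream proxy is listening on the new port (a quick check, not part of the original steps):
ss -ntlp | grep 16443
# should show nginx listening on 0.0.0.0:16443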
Install and configure Keepalived on all three master nodes
yum install -y curl gcc openssl-devel libnl3-devel net-snmp-devel
yum install -y keepalived
cd /etc/keepalived/
mv keepalived.conf keepalived.conf.bak
vi /etc/keepalived/keepalived.conf
# master node 1 configuration
! Configuration File for keepalived
global_defs {
router_id NGINX_MASTER
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens192 # NIC name
mcast_src_ip 10.1.16.160 # this server's IP
virtual_router_id 51 # VRRP router ID, unique per VRRP instance
priority 100 # priority
nopreempt
advert_int 2 # VRRP advertisement interval, default 1s
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.1.16.202/24 # virtual IP (VIP)
}
track_script {
chk_apiserver # health check script
}
}
# master node 2 configuration
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens192
mcast_src_ip 10.1.16.161
virtual_router_id 51
priority 99
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.1.16.202/24
}
track_script {
chk_apiserver
}
}
# master node 3 configuration
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens192
mcast_src_ip 10.1.16.162
virtual_router_id 51
priority 98
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.1.16.202/24
}
track_script {
chk_apiserver
}
}
# Health check script
vi /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep nginx)   # this setup load-balances with nginx, so check the nginx process
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
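The script can be exercised by hand before wiring it into keepalived (a quick sanity check; with nginx running it should exit 0, with nginx stopped it stops keepalived and exits 1):
bash /etc/keepalived/check_apiserver.sh; echo $?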
Set permissions:
chmod 755 /etc/keepalived/check_apiserver.sh
chmod 644 /etc/keepalived/keepalived.conf
Start:
systemctl daemon-reload
systemctl start keepalived && systemctl enable keepalived
systemctl restart keepalived
systemctl status keepalived
# Check the VIP on the master node
[root@master nginx]# ip addr
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:9d:e5:7a brd ff:ff:ff:ff:ff:ff
altname enp11s0
inet 10.1.16.160/24 brd 10.1.16.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet 10.1.16.202/24 scope global secondary ens192
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe9d:e57a/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Test: stop nginx on master and the VIP 10.1.16.202 fails over to master2; after restarting nginx and keepalived on master, the VIP floats back to master.
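A concrete way to run this failover test (a sketch; the interface and IPs follow the configuration above):
# on master (10.1.16.160)
systemctl stop nginx.service
# on master2 (10.1.16.161) the VIP should now be present
ip addr show ens192 | grep 10.1.16.202
# restore master and let the VIP float back
systemctl start nginx.service && systemctl restart keepalived
ip addr show ens192 | grep 10.1.16.202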
5.3 Deploying Kubernetes with kubeadm
# Install kubeadm and kubelet on every node; first create the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
# If these components have been installed before, it is recommended to remove them completely first
# Reset the kubernetes services and the network; delete the network configuration and links
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig docker0 down
ip link delete cni0
systemctl start docker
systemctl start kubelet
# Remove the Kubernetes-related packages
yum -y remove kubelet kubeadm kubectl
rm -rvf $HOME/.kube
rm -rvf ~/.kube/
rm -rvf /etc/kubernetes/
rm -rvf /etc/systemd/system/kubelet.service.d
rm -rvf /etc/systemd/system/kubelet.service
rm -rvf /usr/bin/kube*
rm -rvf /etc/cni
rm -rvf /opt/cni
rm -rvf /var/lib/etcd
rm -rvf /var/etcd
# List available kubelet/kubeadm/kubectl versions
yum list kubelet kubeadm kubectl --showduplicates | sort -r
# Install the k8s packages; required on both master and worker nodes
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
Kubernetes-related commands:
systemctl enable kubelet
systemctl restart kubelet
systemctl stop kubelet
systemctl start kubelet
systemctl status kubelet
kubelet --version
Note: what each package does
kubeadm: the tool used to bootstrap a k8s cluster
kubelet: installed on every node of the cluster to start Pods; in a kubeadm-installed cluster both the control-plane and worker components run as pods, and starting any pod requires the kubelet
kubectl: used to deploy and manage applications, view resources, and create, delete and update components
5.4 kubeadm initialization
Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization.
From the defaults you can see that imageRepository controls where the images required by the cluster are pulled from during initialization.
Based on the defaults, create the configuration file kubeadm.yaml used to initialize the cluster with kubeadm.
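For reference, the defaults can be dumped to a file and compared with the customized kubeadm.yaml below (the output filename is only an example):
kubeadm config print init-defaults --component-configs KubeletConfiguration > init-defaults.yaml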
# Create kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
# localAPIEndpoint:
# advertiseAddress: 10.1.16.160
# bindPort: 6443
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
taints: null
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
controlPlaneEndpoint: 10.1.16.202:16443 # the control plane endpoint is the virtual IP and the nginx listen port
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
podSubnet: 20.244.0.0/16 # pod network CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
Here imageRepository is set to the Aliyun registry so that images can be pulled even where gcr is blocked, criSocket sets the container runtime to containerd, the kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.
Before initializing the cluster, you can run kubeadm config images pull --config kubeadm.yaml on each server node to pre-pull the container images k8s needs.
# Pre-pull the container images required by k8s
kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
# If the images cannot be downloaded, export them on a connected machine and import them offline
ctr -n k8s.io image export kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
ctr -n k8s.io image import kube-proxy.tar
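If several images are affected, the full list can be generated from the same config and exported in one archive (a sketch; run the export on a machine that has already pulled the images, then import on the isolated node):
kubeadm config images list --config kubeadm.yaml
ctr -n k8s.io image export k8s-images.tar $(kubeadm config images list --config kubeadm.yaml)
ctr -n k8s.io image import k8s-images.tar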
# Initialize the cluster with kubeadm
kubeadm init --config kubeadm.yaml
# Initialization output
[root@HQIOTMASTER10L yum.repos.d]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hqiotmaster10l kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.16.169 10.1.16.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0715 16:18:15.468503 67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0715 16:18:15.544132 67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0715 16:18:15.617290 67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0715 16:18:15.825899 67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.523308 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0715 16:18:51.448813 67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c
The complete initialization output is recorded above; from it you can see the key steps needed to manually initialize a Kubernetes cluster.
# The key items are:
• [certs] generates the various certificates
• [kubeconfig] generates the kubeconfig files
• [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
• [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the yaml files in the /etc/kubernetes/manifests directory
• [bootstraptoken] generates the token; record it, as it is used later with kubeadm join to add nodes to the cluster
• [addons] installs the essential addons: CoreDNS and kube-proxy
# Configure kubectl access to the cluster:
rm -rvf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Check the cluster status and confirm every component is Healthy
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
# Verify kubectl
[root@k8s-master-0 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hqiotmaster07l NotReady control-plane 2m12s v1.28.2
5.5 Scaling the cluster: adding master nodes
# 1. Pull the images on the other master nodes
# Copy kubeadm.yaml to master2 and master3 and pre-pull the required images
kubeadm config images pull --config=kubeadm.yaml
# 2. Copy the certificates from the first master to the other master nodes
mkdir -p /etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.* master3:/etc/kubernetes/pki/etcd/
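The same copies can be done in a small loop (a sketch; it assumes root SSH access to the hostnames master2 and master3 used above):
for host in master2 master3; do
  ssh $host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/{ca,sa,front-proxy-ca}.* $host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* $host:/etc/kubernetes/pki/etcd/
done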
# 3. Generate a join token on the primary master node
[root@master etcd]# kubeadm token create --print-join-command
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918
# 4. Join master2 and master3 to the cluster as control-plane nodes
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918 --control-plane
On success you will see: Run 'kubectl get nodes' to see this node join the cluster.
# 5. Configure kubectl access on master2/master3
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# 6. Check the nodes
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 97m v1.28.2
master2 NotReady control-plane 85m v1.28.2
master3 NotReady control-plane 84m v1.28.2
5.6 Adding worker nodes to the cluster
# 1. Join node1 to the cluster as a worker node
[root@node1 containerd]# kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918
On success you will see: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Check from any master node
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 109m v1.28.2
master2 NotReady control-plane 97m v1.28.2
master3 NotReady control-plane 96m v1.28.2
node1 NotReady <none> 67s v1.28.2
# 2. Set the worker node's ROLES label
[root@master k8s]# kubectl label node node1 node-role.kubernetes.io/worker=worker
node/node1 labeled
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 110m v1.28.2
master2 NotReady control-plane 98m v1.28.2
master3 NotReady control-plane 97m v1.28.2
node1 NotReady worker 2m48s v1.28.2
6. Configuring etcd for High Availability
# Edit the etcd.yaml configuration file on master, master2 and master3
vi /etc/kubernetes/manifests/etcd.yaml
Change
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380
to
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380,hqiotmaster11l=https://10.1.16.161:2380,hqiotmaster12l=https://10.1.16.162:2380
6.1 Verify that the etcd cluster is configured correctly
# etcdctl download: https://github.com/etcd-io/etcd/releases
cd etcd-v3.5.9-linux-amd64
cp etcd* /usr/local/bin
[root@HQIOTMASTER07L ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
42cd16c4205e7bee, started, hqiotmaster07l, https://10.1.16.160:2380, https://10.1.16.160:2379, false
bb9be9499c3a8464, started, hqiotmaster09l, https://10.1.16.162:2380, https://10.1.16.162:2379, false
c8761c7050ca479a, started, hqiotmaster08l, https://10.1.16.161:2380, https://10.1.16.161:2379, false
[root@HQIOTMASTER07L ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.16.160:2379,https://10.1.16.161:2379,https://10.1.16.162:2379 endpoint status --cluster
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.1.16.160:2379 | 42cd16c4205e7bee | 3.5.9 | 15 MB | false | false | 11 | 2905632 | 2905632 | |
| https://10.1.16.162:2379 | bb9be9499c3a8464 | 3.5.9 | 15 MB | false | false | 11 | 2905632 | 2905632 | |
| https://10.1.16.161:2379 | c8761c7050ca479a | 3.5.9 | 16 MB | true | false | 11 | 2905632 | 2905632 | |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
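As an additional check (not from the original output), endpoint health can be queried with the same certificates:
etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.16.160:2379,https://10.1.16.161:2379,https://10.1.16.162:2379 endpoint health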