使用kubeadm部署高可用IPV4/IPV6叢集

小陈运维發表於2024-05-05

https://github.com/cby-chen/Kubernetes 開源不易,幫忙點個star,謝謝了

介紹

kubernetes(k8s)二進位制高可用安裝部署,支援IPv4+IPv6雙棧。

我使用IPV6的目的是在公網進行訪問,所以我配置了IPV6靜態地址。

若您沒有IPV6環境,或者不想使用IPv6,不對主機進行配置IPv6地址即可。

不配置IPV6,不影響後續,不過叢集依舊是支援IPv6的。為後期留有擴充套件可能性。

若不要IPv6 ,不給網路卡配置IPv6即可,不要對IPv6相關配置刪除或操作,否則會出問題。

強烈建議在Github上檢視文件 !!!

Github出問題會更新文件,並且後續儘可能第一時間更新新版本文件 !!!

k8s基礎系統環境配置

配置IP

# 注意!
# 若虛擬機器是進行克隆的那麼網路卡的UUID會重複
# 若UUID重複需要重新生成新的UUID
# UUID重複無法獲取到IPV6地址
# 
# 檢視當前的網路卡列表和 UUID:
# nmcli con show
# 刪除要更改 UUID 的網路連線:
# nmcli con delete uuid <原 UUID>
# 重新生成 UUID:
# nmcli con add type ethernet ifname <介面名稱> con-name <新名稱>
# 重新啟用網路連線:
# nmcli con up <新名稱>

# 更改網路卡的UUID
ssh root@192.168.1.31 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.1.32 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.1.33 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.1.34 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.1.35 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"

# 引數解釋
# 
# ssh root@192.168.1.31
# 使用SSH登入到IP為192.168.1.31的主機,使用root使用者身份。
# 
# nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44
# 刪除 UUID 為 708a1497-2192-43a5-9f03-2ab936fb3c44 的網路連線,這是 NetworkManager 中一種特定網路配置的唯一識別符號。
# 
# nmcli con add type ethernet ifname eth0 con-name eth0
# 新增一種乙太網連線型別,並指定介面名為 eth0,連線名稱也為 eth0。
# 
# nmcli con up eth0
# 開啟 eth0 這個網路連線。
# 
# 簡單來說,這個命令的作用是刪除一個特定的網路連線配置,並新增一個名為 eth0 的乙太網連線,然後啟用這個新的連線。
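
# 補充一個簡單的驗證示例(假設連線名與介面名均為 eth0),用於確認新 UUID 已生成、且能正常取到 IPv4/IPv6 地址
nmcli -g connection.uuid con show eth0
nmcli device show eth0 | grep -E 'IP4.ADDRESS|IP6.ADDRESS'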

# 修改靜態的IPv4地址
ssh root@192.168.1.104 "nmcli con mod eth0 ipv4.addresses 192.168.1.31/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 8.8.8.8; nmcli con up eth0"
ssh root@192.168.1.106 "nmcli con mod eth0 ipv4.addresses 192.168.1.32/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 8.8.8.8; nmcli con up eth0"
ssh root@192.168.1.107 "nmcli con mod eth0 ipv4.addresses 192.168.1.33/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 8.8.8.8; nmcli con up eth0"
ssh root@192.168.1.109 "nmcli con mod eth0 ipv4.addresses 192.168.1.34/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 8.8.8.8; nmcli con up eth0"
ssh root@192.168.1.110 "nmcli con mod eth0 ipv4.addresses 192.168.1.35/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 8.8.8.8; nmcli con up eth0"

# 引數解釋
# 
# ssh root@192.168.1.104
# 使用SSH登入到IP為192.168.1.104的主機,使用root使用者身份。
# 
# "nmcli con mod eth0 ipv4.addresses 192.168.1.31/24"
# 修改eth0網路連線的IPv4地址為192.168.1.31,子網掩碼為 24。
# 
# "nmcli con mod eth0 ipv4.gateway 192.168.1.1"
# 修改eth0網路連線的IPv4閘道器為192.168.1.1。
# 
# "nmcli con mod eth0 ipv4.method manual"
# 將eth0網路連線的IPv4配置方法設定為手動。
# 
# "nmcli con mod eth0 ipv4.dns 8.8.8.8"
# 將eth0網路連線的IPv4 DNS伺服器設定為 8.8.8.8。
# 
# "nmcli con up eth0"
# 啟動eth0網路連線。
# 
# 總體來說,這條命令是透過SSH遠端登入到指定的主機,並使用網路管理命令 (nmcli) 修改eth0網路連線的配置,包括IP地址、閘道器、配置方法和DNS伺服器,並啟動該網路連線。

# 沒有IPv6選擇不配置即可
ssh root@192.168.1.31 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns 2400:3200::1; nmcli con up eth0"
ssh root@192.168.1.32 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns 2400:3200::1; nmcli con up eth0"
ssh root@192.168.1.33 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns 2400:3200::1; nmcli con up eth0"
ssh root@192.168.1.34 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns 2400:3200::1; nmcli con up eth0"
ssh root@192.168.1.35 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns 2400:3200::1; nmcli con up eth0"

# 引數解釋
# 
# ssh root@192.168.1.31
# 透過SSH連線到IP地址為192.168.1.31的遠端主機,使用root使用者進行登入。
# 
# "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10"
# 使用nmcli命令修改eth0介面的IPv6地址為fc00:43f4:1eea:1::10。
# 
# "nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1"
# 使用nmcli命令修改eth0介面的IPv6閘道器為fc00:43f4:1eea:1::1。
# 
# "nmcli con mod eth0 ipv6.method manual"
# 使用nmcli命令將eth0介面的IPv6配置方法修改為手動配置。
# 
# "nmcli con mod eth0 ipv6.dns 2400:3200::1"
# 使用nmcli命令設定eth0介面的IPv6 DNS伺服器為2400:3200::1。
# 
# "nmcli con up eth0"
# 使用nmcli命令啟動eth0介面。
# 
# 這個命令的目的是在遠端主機上配置eth0介面的IPv6地址、閘道器、配置方法和DNS伺服器,並啟動eth0介面。
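
# 補充一個批次驗證示例(IP 列表沿用本文示例),確認各節點 eth0 已帶上全域性 IPv6 地址,且外網 IPv6 可達
for HOST in 192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35; do
  ssh root@$HOST "hostname; ip -6 addr show dev eth0 scope global; ping6 -c 1 -W 1 2400:3200::1 > /dev/null && echo 'IPv6 OK'"
done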

# 檢視網路卡配置
# nmcli device show eth0
# nmcli con show eth0
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=2aaddf95-3f36-4a48-8626-b55ebf7f53e7
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.1.31
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
[root@localhost ~]# 

# 引數解釋
# 
# TYPE=Ethernet
# 指定連線型別為乙太網。
# 
# PROXY_METHOD=none
# 指定不使用代理方法。
# 
# BROWSER_ONLY=no
# 指定不僅僅在瀏覽器中使用代理。
# 
# BOOTPROTO=none
# 指定自動分配地址的方式為無(即手動配置IP地址)。
# 
# DEFROUTE=yes
# 指定預設路由開啟。
# 
# IPV4_FAILURE_FATAL=no
# 指定IPv4連線失敗時不宣告嚴重錯誤。
# 
# IPV6INIT=yes
# 指定啟用IPv6。
# 
# IPV6_AUTOCONF=yes
# 指定自動配置IPv6地址。
# 
# IPV6_DEFROUTE=yes
# 指定預設IPv6路由開啟。
# 
# IPV6_FAILURE_FATAL=no
# 指定IPv6連線失敗時不宣告嚴重錯誤。
# 
# IPV6_ADDR_GEN_MODE=stable-privacy
# 指定IPv6地址生成模式為穩定隱私模式。
# 
# NAME=eth0
# 指定裝置名稱為eth0。
# 
# UUID=2aaddf95-3f36-4a48-8626-b55ebf7f53e7
# 指定裝置的唯一識別符號。
# 
# DEVICE=eth0
# 指定裝置名稱為eth0。
# 
# ONBOOT=yes
# 指定開機自動啟用這個連線。
# 
# IPADDR=192.168.1.31
# 指定IPv4地址為192.168.1.31。
# 
# PREFIX=24
# 指定IPv4地址的子網掩碼為24。
# 
# GATEWAY=192.168.1.1
# 指定IPv4的閘道器地址為192.168.1.1。
# 
# DNS1=8.8.8.8
# 指定首選DNS伺服器為8.8.8.8。
# 
# IPV6ADDR=fc00:43f4:1eea:1::10/128
# 指定IPv6地址為fc00:43f4:1eea:1::10,子網掩碼為128。
# 
# IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
# 指定IPv6的預設閘道器地址為fc00:43f4:1eea:1::1。
# 
# DNS2=2400:3200::1
# 指定備用DNS伺服器為2400:3200::1。

設定主機名

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

# 引數解釋
# 
# 引數: set-hostname
# 解釋: 這是hostnamectl命令的一個引數,用於設定系統的主機名。
# 
# 引數: k8s-master01
# 解釋: 這是要設定的主機名,將系統的主機名設定為"k8s-master01"。

配置yum源

# 其他系統的源地址
# https://mirrors.tuna.tsinghua.edu.cn/help/

# 對於 Ubuntu
sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

# 對於 CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# 對於 CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# 對於私有倉庫
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

# 引數解釋
# 
# 以上命令是用於更改系統軟體源的配置,以便從國內映象站點下載軟體包和更新。
# 
# 對於 Ubuntu 系統,將 /etc/apt/sources.list 檔案中的軟體源地址 cn.archive.ubuntu.com 替換為 mirrors.ustc.edu.cn。
# 
# 對於 CentOS 7 系統,將 /etc/yum.repos.d/CentOS-*.repo 檔案中的 mirrorlist 註釋掉,並將 baseurl 的值替換為 https://mirrors.tuna.tsinghua.edu.cn/centos。
# 
# 對於 CentOS 8 系統,同樣將 /etc/yum.repos.d/CentOS-*.repo 檔案中的 mirrorlist 註釋掉,並將 baseurl 的值替換為 https://mirrors.tuna.tsinghua.edu.cn/centos。
# 
# 對於私有倉庫,將 /etc/yum.repos.d/CentOS-*.repo 檔案中的 mirrorlist 註釋掉,並將 baseurl 的值替換為私有倉庫地址 http://192.168.1.123/centos。
# 
# 這些命令透過使用 sed 工具和正規表示式,對相應的配置檔案進行批次的替換操作,從而更改系統軟體源配置。

安裝一些必備工具

# 對於 Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl

# 對於 CentOS 7
yum update -y && yum -y install  wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl

# 對於 CentOS 8
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

關閉防火牆

# Ubuntu忽略,CentOS執行
systemctl disable --now firewalld

關閉SELinux

# Ubuntu忽略,CentOS執行
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# 引數解釋
# 
# setenforce 0
# 此命令用於設定 SELinux 的執行模式。0 表示關閉 SELinux。
# 
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
# 該命令使用 sed 工具來編輯 /etc/selinux/config 檔案。其中 '-i' 參數列示直接修改原檔案,而不是輸出到終端或另一個檔案。's#SELINUX=enforcing#SELINUX=disabled#g' 是 sed 的替換命令,它將檔案中所有的 "SELINUX=enforcing" 替換為 "SELINUX=disabled"。這裡的 '#' 是分隔符,用於替代傳統的 '/' 分隔符,以避免與路徑中的 '/' 衝突。
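
# 可用以下命令做個簡單驗證:當前應輸出 Permissive,重啟後按配置檔案生效為 disabled
getenforce
grep '^SELINUX=' /etc/selinux/config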

關閉交換分割槽

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0


# 引數解釋:
# 
# -ri: 這個引數用於在原檔案中替換匹配的模式。-r表示擴充套件正規表示式,-i允許直接修改檔案。
# 's/.*swap.*/#&/': 這是一個sed命令,用於在檔案/etc/fstab中找到包含swap的行,並在行首新增#來註釋掉該行。
# /etc/fstab: 這是一個檔案路徑,即/etc/fstab檔案,用於儲存檔案系統表。
# swapoff -a: 這個命令用於關閉所有啟用的交換分割槽。
# sysctl -w vm.swappiness=0: 這個命令用於修改vm.swappiness引數的值為0,表示系統在實體記憶體充足時更傾向於使用實體記憶體而非交換分割槽。
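
# 可用以下命令做個簡單驗證:Swap 一行應全部為 0,swapon 應無任何輸出
free -m
swapon --show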

網路配置(兩種方式二選一)

# Ubuntu忽略,CentOS執行

# 方式一
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# 方式二
cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

# 引數解釋
#
# 這個引數用於指定不由 NetworkManager 管理的裝置。它由以下兩個部分組成
# 
# interface-name:cali*
# 表示以 "cali" 開頭的介面名稱被排除在 NetworkManager 管理之外。例如,"cali0", "cali1" 等介面不受 NetworkManager 管理。
# 
# interface-name:tunl*
# 表示以 "tunl" 開頭的介面名稱被排除在 NetworkManager 管理之外。例如,"tunl0", "tunl1" 等介面不受 NetworkManager 管理。
# 
# 透過使用這個引數,可以將特定的介面排除在 NetworkManager 的管理範圍之外,以便其他工具或程序可以獨立地管理和配置這些介面。

進行時間同步

# 服務端
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# 客戶端
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool 192.168.1.31 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

#使用客戶端進行驗證
chronyc sources -v

# 引數解釋
#
# pool ntp.aliyun.com iburst
# 指定使用ntp.aliyun.com作為時間伺服器池,iburst選項表示在初始同步時會傳送多個請求以加快同步速度。
# 
# driftfile /var/lib/chrony/drift
# 指定用於儲存時鐘漂移資訊的檔案路徑。
# 
# makestep 1.0 3
# 設定當系統時間與伺服器時間偏差大於1秒時,會以1秒的步長進行調整。如果偏差超過3秒,則立即進行時間調整。
# 
# rtcsync
# 啟用硬體時鐘同步功能,可以提高時鐘的準確性。
# 
# allow 192.168.1.0/24
# 允許192.168.1.0/24網段範圍內的主機與chrony進行時間同步。
# 
# local stratum 10
# 將本地時鐘設為stratum 10,stratum值表示時鐘的準確度,值越小表示準確度越高。
# 
# keyfile /etc/chrony.keys
# 指定使用的金鑰檔案路徑,用於對時間同步進行身份驗證。
# 
# leapsectz right/UTC
# 指定時區為UTC。
# 
# logdir /var/log/chrony
# 指定日誌檔案存放目錄。

配置ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

# 引數解釋
#
# soft nofile 655360
# soft表示軟限制,nofile表示一個程序可開啟的最大檔案數,預設值為1024。這裡的軟限制設定為655360,即一個程序可開啟的最大檔案數為655360。
#
# hard nofile 655360
# hard表示硬限制,即系統允許的最大值(硬限制不能低於軟限制)。nofile表示一個程序可開啟的最大檔案數,預設值為4096。這裡的硬限制設定為655360,與軟限制保持一致。
#
# soft nproc 655350
# soft表示軟限制,nproc表示一個使用者可建立的最大程序數,預設值為30720。這裡的軟限制設定為655350,即一個使用者可建立的最大程序數為655350。
#
# hard nproc 655350
# hard表示硬限制,即系統設定的最大值。nproc表示一個使用者可建立的最大程序數,預設值為4096。這裡的硬限制設定為655350,即系統設定的最大程序數為655350。
#
# soft memlock unlimited
# soft表示軟限制,memlock表示一個程序可鎖定在RAM中的最大記憶體,預設值為64 KB。這裡的軟限制設定為unlimited,即一個程序可鎖定的最大記憶體為無限制。
#
# hard memlock unlimited
# hard表示硬限制,即系統設定的最大值。memlock表示一個程序可鎖定在RAM中的最大記憶體,預設值為64 KB。這裡的硬限制設定為unlimited,即系統設定的最大記憶體鎖定為無限制。
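
# 補充一個驗證示例:limits.conf 需重新登入會話後才生效,可用以下命令確認
ulimit -n    # 開啟檔案數
ulimit -u    # 程序數
ulimit -l    # 可鎖定記憶體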

配置免密登入

# apt install -y sshpass
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

# 這段指令碼的作用是在一臺機器上安裝sshpass工具,並透過sshpass自動將本機的SSH公鑰複製到多個遠端主機上,以實現無需手動輸入密碼的SSH登入。
# 
# 具體解釋如下:
# 
# 1. `apt install -y sshpass` 或 `yum install -y sshpass`:透過包管理器(apt或yum)安裝sshpass工具,使得後續可以使用sshpass命令。
# 
# 2. `ssh-keygen -f /root/.ssh/id_rsa -P ''`:生成SSH金鑰對。該命令會在/root/.ssh目錄下生成私鑰檔案id_rsa和公鑰檔案id_rsa.pub,同時不設定密碼(即-P引數後面為空),方便後續透過ssh-copy-id命令自動複製公鑰。
# 
# 3. `export IP="192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35"`:設定一個包含多個遠端主機IP地址的環境變數IP,用空格分隔開,表示要將SSH公鑰複製到這些遠端主機上。
# 
# 4. `export SSHPASS=123123`:設定環境變數SSHPASS,將sshpass所需的SSH密碼(在這裡是"123123")賦值給它,這樣sshpass命令可以自動使用這個密碼進行登入。
# 
# 5. `for HOST in $IP;do`:遍歷環境變數IP中的每個IP地址,並將當前IP地址賦值給變數HOST。
# 
# 6. `sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST`:使用sshpass工具複製本機的SSH公鑰到遠端主機。其中,-e選項表示使用環境變數中的密碼(即SSHPASS)進行登入,-o StrictHostKeyChecking=no選項表示連線時不檢查遠端主機的公鑰,以避免互動式確認。
# 
# 透過這段指令碼,可以方便地將本機的SSH公鑰複製到多個遠端主機上,實現無需手動輸入密碼的SSH登入。
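
# 補充一個驗證示例:免密配置成功後,以下命令應直接輸出各主機的主機名,而不再提示輸入密碼
for HOST in $IP; do ssh -o BatchMode=yes root@$HOST hostname; done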

新增啟用源

# Ubuntu忽略,CentOS執行

# 為 RHEL-8或 CentOS-8配置源
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# 為 RHEL-7 SL-7 或 CentOS-7 安裝 ELRepo 
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# 檢視可用安裝包
yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

升級核心至4.18版本以上

# Ubuntu忽略,CentOS執行

# 安裝最新的核心
# 我這裡選擇的是穩定版kernel-ml   如需更新長期維護版本kernel-lt  
yum -y --enablerepo=elrepo-kernel  install  kernel-ml

# 檢視已安裝那些核心
rpm -qa | grep kernel

# 檢視預設核心
grubby --default-kernel

# 若不是最新的使用命令設定
grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)

# 重啟生效
reboot

# v8 整合命令為:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install kernel-lt -y ; grubby --default-kernel ; reboot 

# v7 整合命令為:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-lt -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 

# 離線版本 
yum install -y /root/cby/kernel-lt-*-1.el7.elrepo.x86_64.rpm ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 
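
# 重啟完成後,可用以下命令確認當前執行的核心已是 elrepo 安裝的新核心
uname -r
grubby --default-kernel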

安裝ipvsadm

# 對於CentOS7離線安裝
# yum install /root/centos7/ipset-*.el7.x86_64.rpm /root/centos7/lm_sensors-libs-*.el7.x86_64.rpm  /root/centos7/ipset-libs-*.el7.x86_64.rpm /root/centos7/sysstat-*.el7_9.x86_64.rpm  /root/centos7/ipvsadm-*.el7.x86_64.rpm  -y

# 對於 Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y

# 對於 CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

# 引數解釋
#
# ip_vs
# IPVS 是 Linux 核心中的一個模組,用於實現負載均衡和高可用性。它透過在前端代理伺服器上分發傳入請求到後端實際伺服器上,提供了高效能和可擴充套件的網路服務。
#
# ip_vs_rr
# IPVS 的一種排程演算法之一,使用輪詢方式分發請求到後端伺服器,每個請求按順序依次分發。
#
# ip_vs_wrr
# IPVS 的一種排程演算法之一,使用加權輪詢方式分發請求到後端伺服器,每個請求按照指定的權重比例分發。
#
# ip_vs_sh
# IPVS 的一種排程演算法之一,使用雜湊方式根據源 IP 地址和目標 IP 地址來分發請求。
#
# nf_conntrack
# 這是一個核心模組,用於跟蹤和管理網路連線,包括 TCP、UDP 和 ICMP 等協議。它是實現防火牆狀態跟蹤的基礎。
#
# ip_tables
# 這是一個核心模組,提供了對 Linux 系統 IP 資料包過濾和網路地址轉換(NAT)功能的支援。
#
# ip_set
# 這是一個核心模組,擴充套件了 iptables 的功能,支援更高效的 IP 地址集合操作。
#
# xt_set
# 這是一個核心模組,擴充套件了 iptables 的功能,支援更高效的資料包匹配和操作。
#
# ipt_set
# 這是一個使用者空間工具,用於配置和管理 xt_set 核心模組。
#
# ipt_rpfilter
# 這是一個核心模組,用於實現反向路徑過濾,用於防止 IP 欺騙和 DDoS 攻擊。
#
# ipt_REJECT
# 這是一個 iptables 目標,用於拒絕 IP 資料包,並向傳送方傳送響應,指示資料包被拒絕。
#
# ipip
# 這是一個核心模組,用於實現 IP 封裝在 IP(IP-over-IP)的隧道功能。它可以在不同網路之間建立虛擬隧道來傳輸 IP 資料包。

修改核心引數

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system

# 這些是Linux系統的一些引數設定,用於配置和最佳化網路、檔案系統和虛擬記憶體等方面的功能。以下是每個引數的詳細解釋:
# 
# 1. net.ipv4.ip_forward = 1
#    - 這個引數啟用了IPv4的IP轉發功能,允許伺服器作為網路路由器轉發資料包。
# 
# 2. net.bridge.bridge-nf-call-iptables = 1
#    - 當使用網路橋接技術時,將資料包傳遞到iptables進行處理。
#   
# 3. fs.may_detach_mounts = 1
#    - 允許在掛載檔案系統時,允許被其他程序使用。
#   
# 4. vm.overcommit_memory=1
#    - 該設定允許原始的記憶體過量分配策略,當系統的記憶體已經被完全使用時,系統仍然會分配額外的記憶體。
# 
# 5. vm.panic_on_oom=0
#    - 當系統記憶體不足(OOM)時,禁用系統崩潰和重啟。
# 
# 6. fs.inotify.max_user_watches=89100
#    - 設定系統允許一個使用者的inotify例項可以監控的檔案數目的上限。
# 
# 7. fs.file-max=52706963
#    - 設定系統同時開啟的檔案數的上限。
# 
# 8. fs.nr_open=52706963
#    - 設定系統同時開啟的檔案描述符數的上限。
# 
# 9. net.netfilter.nf_conntrack_max=2310720
#    - 設定系統可以建立的網路連線跟蹤表項的最大數量。
# 
# 10. net.ipv4.tcp_keepalive_time = 600
#     - 設定TCP套接字的空閒超時時間(秒),超過該時間沒有活動資料時,核心會傳送心跳包。
# 
# 11. net.ipv4.tcp_keepalive_probes = 3
#     - 設定未收到響應的TCP心跳探測次數。
# 
# 12. net.ipv4.tcp_keepalive_intvl = 15
#     - 設定TCP心跳探測的時間間隔(秒)。
# 
# 13. net.ipv4.tcp_max_tw_buckets = 36000
#     - 設定系統可以使用的TIME_WAIT套接字的最大數量。
# 
# 14. net.ipv4.tcp_tw_reuse = 1
#     - 啟用TIME_WAIT套接字的重新利用,允許新的套接字使用舊的TIME_WAIT套接字。
# 
# 15. net.ipv4.tcp_max_orphans = 327680
#     - 設定系統可以同時存在的TCP套接字垃圾回收包裹數的最大數量。
# 
# 16. net.ipv4.tcp_orphan_retries = 3
#     - 設定系統對於孤立的TCP套接字的重試次數。
# 
# 17. net.ipv4.tcp_syncookies = 1
#     - 啟用TCP SYN cookies保護,用於防止SYN洪泛攻擊。
# 
# 18. net.ipv4.tcp_max_syn_backlog = 16384
#     - 設定新的TCP連線的半連線數(半連線佇列)的最大長度。
# 
# 19. net.ipv4.ip_conntrack_max = 65536
#     - 設定系統可以建立的網路連線跟蹤表項的最大數量。
# 
# 20. net.ipv4.tcp_timestamps = 0
#     - 關閉TCP時間戳功能,用於提供更好的安全性。
# 
# 21. net.core.somaxconn = 16384
#     - 設定系統核心層的連線佇列的最大值。
# 
# 22. net.ipv6.conf.all.disable_ipv6 = 0
#     - 啟用IPv6協議。
# 
# 23. net.ipv6.conf.default.disable_ipv6 = 0
#     - 啟用IPv6協議。
# 
# 24. net.ipv6.conf.lo.disable_ipv6 = 0
#     - 啟用IPv6協議。
# 
# 25. net.ipv6.conf.all.forwarding = 1
#     - 允許IPv6資料包轉發。

所有節點配置hosts本地解析

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.31 k8s-master01
192.168.1.32 k8s-master02
192.168.1.33 k8s-master03
192.168.1.34 k8s-node01
192.168.1.35 k8s-node02
192.168.1.36 lb-vip
EOF
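
# 補充一個驗證示例:確認各主機名均可解析並連通(lb-vip 需等到後文 keepalived 啟動後才會通,這裡暫不驗證)
for h in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ping -c 1 -W 1 $h > /dev/null && echo "$h OK" || echo "$h FAILED"; done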

配置安裝源

簡介

Kubernetes是一個開源系統,用於容器化應用的自動部署、擴縮和管理。它將構成應用的容器按邏輯單位進行分組以便於管理和發現。

由於 Kubernetes 官方變更了倉庫的儲存路徑以及使用方式,如果需要使用 1.28 及以上版本,請使用 新版配置方法 進行配置。

下載地址:https://mirrors.aliyun.com/kubernetes/

新版下載地址:https://mirrors.aliyun.com/kubernetes-new/

配置方法

新版配置方法

新版 kubernetes 源使用方法和之前有一定區別,請按照如下配置方法配置使用。

其中新版 kubernetes 源按照安裝版本區分不同倉庫,該文件示例為配置 1.30 版本,如需其他版本請在對應位置字串替換即可。

Debian / Ubuntu
  1. 在配置中新增映象(注意修改為自己需要的版本號):
apt-get update && apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list
  2. 安裝必要應用:
apt-get update
apt-get install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

# 如安裝指定版本
# apt install kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
CentOS / RHEL / Fedora
  1. 執行如下命令(注意修改為自己需要的版本號):
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF
  2. 安裝必要應用:
yum update
yum install -y kubelet kubeadm kubectl

# 如安裝指定版本
# yum install kubelet-1.28.2-0 kubeadm-1.28.2-0 kubectl-1.28.2-0

systemctl enable kubelet && systemctl start kubelet

# 將 SELinux 設定為 禁用
sudo setenforce 0
sudo sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

ps: 由於官網未開放同步方式, 可能會有索引gpg檢查失敗的情況, 這時請用 yum install -y --nogpgcheck kubelet kubeadm kubectl 安裝

舊版配置方法

目前由於kubernetes官方變更了倉庫的儲存路徑以及使用方式,舊版 kubernetes 源只更新到 1.28 部分版本,後續更新版本請使用 新源配置方法 進行配置。

Debian / Ubuntu
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
CentOS / RHEL / Fedora
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

ps: 由於官網未開放同步方式, 可能會有索引gpg檢查失敗的情況, 這時請用 yum install -y --nogpgcheck kubelet kubeadm kubectl 安裝

配置containerd

# 下載所需應用包
wget https://mirrors.chenby.cn/https://github.com/containerd/containerd/releases/download/v1.7.16/cri-containerd-cni-1.7.16-linux-amd64.tar.gz
wget https://mirrors.chenby.cn/https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz

# centos7 要升級libseccomp
yum -y install https://mirrors.tuna.tsinghua.edu.cn/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm


#建立cni外掛所需目錄
mkdir -p /etc/cni/net.d /opt/cni/bin 
#解壓cni二進位制包
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

#解壓
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

#建立服務啟動檔案
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# 配置Containerd所需的模組
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# 載入模組
systemctl restart systemd-modules-load.service

# 配置Containerd所需的核心
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# 載入核心
sysctl --system

# 建立Containerd的配置檔案
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# 修改Containerd的配置檔案
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d

# 配置加速器
mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
EOF


# 啟動並設定為開機啟動
systemctl daemon-reload
systemctl enable --now containerd.service
systemctl stop containerd.service
systemctl start containerd.service
systemctl restart containerd.service
systemctl status containerd.service
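
# 補充一個驗證示例:確認 containerd 執行正常、CRI 配置已生效(crictl 隨 cri-containerd-cni 包一併安裝)
ctr version
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info | grep -iE 'systemdcgroup|sandboxImage'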

高可用keepalived、haproxy

安裝keepalived和haproxy服務

yum -y install keepalived haproxy

修改haproxy配置檔案(三臺master的配置檔案相同)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s


frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:9443
 bind 127.0.0.1:9443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master


backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.1.31:6443 check
 server  k8s-master02  192.168.1.32:6443 check
 server  k8s-master03  192.168.1.33:6443 check
EOF

引數

這段配置程式碼是指定了一個HAProxy負載均衡器的配置。下面對各部分進行詳細解釋:
1. global:
   - maxconn 2000: 設定每個程序的最大連線數為2000。
   - ulimit-n 16384: 設定每個程序的最大檔案描述符數為16384。
   - log 127.0.0.1 local0 err: 指定日誌的輸出地址為本地主機的127.0.0.1,並且只記錄錯誤級別的日誌。
   - stats timeout 30s: 設定檢視負載均衡器統計資訊的超時時間為30秒。

2. defaults:
   - log global: 使預設日誌與global部分相同。
   - mode http: 設定負載均衡器的工作模式為HTTP模式。
   - option httplog: 使負載均衡器記錄HTTP協議的日誌。
   - timeout connect 5000: 設定與後端伺服器建立連線的超時時間為5秒。
   - timeout client 50000: 設定與客戶端的連線超時時間為50秒。
   - timeout server 50000: 設定與後端伺服器連線的超時時間為50秒。
   - timeout http-request 15s: 設定處理HTTP請求的超時時間為15秒。
   - timeout http-keep-alive 15s: 設定保持HTTP連線的超時時間為15秒。

3. frontend monitor-in:
   - bind *:33305: 監聽所有IP地址的33305埠。
   - mode http: 設定frontend的工作模式為HTTP模式。
   - option httplog: 記錄HTTP協議的日誌。
   - monitor-uri /monitor: 設定監控URI為/monitor。

4. frontend k8s-master:
   - bind 0.0.0.0:9443: 監聽所有IP地址的9443埠。
   - bind 127.0.0.1:9443: 監聽本地主機的9443埠。
   - mode tcp: 設定frontend的工作模式為TCP模式。
   - option tcplog: 記錄TCP協議的日誌。
   - tcp-request inspect-delay 5s: 設定在接收到請求後延遲5秒進行檢查。
   - default_backend k8s-master: 設定預設的後端伺服器組為k8s-master。

5. backend k8s-master:
   - mode tcp: 設定backend的工作模式為TCP模式。
   - option tcplog: 記錄TCP協議的日誌。
   - option tcp-check: 啟用TCP檢查功能。
   - balance roundrobin: 使用輪詢演算法進行負載均衡。
   - default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100: 設定預設的伺服器引數。
   - server k8s-master01 192.168.1.31:6443 check: 增加一個名為k8s-master01的伺服器,IP地址為192.168.1.31,埠號為6443,並對其進行健康檢查。
   - server k8s-master02 192.168.1.32:6443 check: 增加一個名為k8s-master02的伺服器,IP地址為192.168.1.32,埠號為6443,並對其進行健康檢查。
   - server k8s-master03 192.168.1.33:6443 check: 增加一個名為k8s-master03的伺服器,IP地址為192.168.1.33,埠號為6443,並對其進行健康檢查。

以上就是這段配置程式碼的詳細解釋。它主要定義了全域性配置、預設配置、前端監聽和後端伺服器組的相關引數和設定。透過這些配置,可以實現負載均衡和監控功能。
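
# 寫入配置後,可先做一次語法檢查再啟動服務(示例)
haproxy -c -f /etc/haproxy/haproxy.cfg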

Master01配置keepalived master節點

#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    # 注意網路卡名
    interface eth0 
    mcast_src_ip 192.168.1.31
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.36
    }
    track_script {
        chk_apiserver
    }
}

EOF

Master02配置keepalived backup節點

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1

}
vrrp_instance VI_1 {
    state BACKUP
    # 注意網路卡名
    interface eth0
    mcast_src_ip 192.168.1.32
    virtual_router_id 51
    priority 80
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.36
    }
    track_script {
        chk_apiserver
    }
}

EOF

Master03配置keepalived backup節點

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1

}
vrrp_instance VI_1 {
    state BACKUP
    # 注意網路卡名
    interface eth0
    mcast_src_ip 192.168.1.33
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.36
    }
    track_script {
        chk_apiserver
    }
}

EOF

引數

這是一個用於配置keepalived的配置檔案。下面是對每個部分的詳細解釋:

- `global_defs`部分定義了全域性引數。
- `router_id`引數指定了當前路由器的標識,這裡設定為"LVS_DEVEL"。

- `vrrp_script`部分定義了一個VRRP指令碼。`chk_apiserver`是指令碼的名稱,
    - `script`引數指定了指令碼的路徑。該指令碼每5秒執行一次,返回值為0表示服務正常,返回值為1表示服務異常。
    - `weight`引數指定了根據指令碼返回的值來調整優先順序,這裡設定為-5。
    - `fall`引數指定了失敗閾值,當連續2次指令碼返回值為1時認為服務異常。
    - `rise`引數指定了恢復閾值,當連續1次指令碼返回值為0時認為服務恢復正常。

- `vrrp_instance`部分定義了一個VRRP例項。`VI_1`是例項的名稱。
    - `state`引數指定了當前例項的狀態,這裡設定為MASTER表示當前例項是主節點。
    - `interface`引數指定了要監聽的網路卡,這裡設定為eth0。
    - `mcast_src_ip`引數指定了VRRP報文的源IP地址,這裡設定為192.168.1.31。
    - `virtual_router_id`引數指定了虛擬路由器的ID,這裡設定為51。
    - `priority`引數指定了例項的優先順序,優先順序越高(數值越大)越有可能被選為主節點。
    - `nopreempt`引數指定了當主節點失效後不要搶佔身份,即不要自動切換為主節點。
    - `advert_int`引數指定了傳送廣播的間隔時間,這裡設定為2秒。
    - `authentication`部分指定了認證引數
    	- `auth_type`引數指定了認證型別,這裡設定為PASS表示使用密碼認證,
    	- `auth_pass`引數指定了認證密碼,這裡設定為K8SHA_KA_AUTH。
    - `virtual_ipaddress`部分指定了虛擬IP地址,這裡設定為192.168.1.36。
    - `track_script`部分指定了要跟蹤的指令碼,這裡跟蹤了chk_apiserver指令碼。

健康檢查指令碼配置(lb主機)

cat >  /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# 給指令碼授權

chmod +x /etc/keepalived/check_apiserver.sh

# 這段指令碼是一個簡單的bash指令碼,主要用來檢查是否有名為haproxy的程序正在執行。
# 
# 指令碼的主要邏輯如下:
# 1. 首先設定一個變數err為0,用來記錄錯誤次數。
# 2. 使用一個迴圈,在迴圈內部執行以下操作:
#    a. 使用pgrep命令檢查是否有名為haproxy的程序在執行。如果不存在該程序,將err加1,並暫停1秒鐘,然後繼續下一次迴圈。
#    b. 如果存在haproxy程序,將err重置為0,並跳出迴圈。
# 3. 檢查err的值,如果不為0,表示檢查失敗,輸出一條錯誤資訊並執行“systemctl stop keepalived”命令停止keepalived程序,並退出指令碼返回1。
# 4. 如果err的值為0,表示檢查成功,退出指令碼返回0。
# 
# 該指令碼的主要作用是檢查是否存在執行中的haproxy程序,如果無法檢測到haproxy程序,將停止keepalived程序並返回錯誤狀態。如果haproxy程序存在,則返回成功狀態。這個指令碼可能是作為一個健康檢查指令碼的一部分,在確保haproxy服務可用的情況下,才繼續執行其他操作。

啟動服務

systemctl daemon-reload
# 用於重新載入systemd管理的單位檔案。當你新增或修改了某個單位檔案(如.service檔案、.socket檔案等),需要執行該命令來重新整理systemd對該檔案的配置。
systemctl enable --now haproxy.service
# 啟用並立即啟動haproxy.service單元。haproxy.service是haproxy守護程序的systemd服務單元。
systemctl enable --now keepalived.service
# 啟用並立即啟動keepalived.service單元。keepalived.service是keepalived守護程序的systemd服務單元。
systemctl status haproxy.service
# haproxy.service單元的當前狀態,包括執行狀態、是否啟用等資訊。
systemctl status keepalived.service
# keepalived.service單元的當前狀態,包括執行狀態、是否啟用等資訊。
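
# 服務啟動後,可手動執行一次健康檢查指令碼做個驗證:haproxy 正常執行時應輸出 0(若 haproxy 未執行,指令碼會直接停掉 keepalived,請注意)
bash /etc/keepalived/check_apiserver.sh; echo $?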

測試高可用

# 能ping通
[root@k8s-node02 ~]# ping 192.168.1.36

# 能telnet訪問
[root@k8s-node02 ~]# telnet 192.168.1.36 9443

# 關閉主節點,看vip是否漂移到備節點
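
# 示例:先確認 VIP 當前所在節點,再驗證漂移(驗證完成後記得重新啟動該節點的 keepalived)
ip addr show eth0 | grep 192.168.1.36     # 在三臺master上分別執行,VIP 應只出現在其中一臺
systemctl stop keepalived.service         # 在持有 VIP 的節點上執行,稍等數秒後到其他master上再次檢視
systemctl start keepalived.service        # 驗證完畢後恢復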

初始化安裝

# 檢視最新版本有那些映象
[root@k8s-master01 ~]# kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.30.0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.12-0
[root@k8s-master01 ~]# 

# 建立預設配置
kubeadm config print init-defaults > kubeadm-init.yaml
# 這是我使用的配置檔案
cat > kubeadm.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 72h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.31
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  kubeletExtraArgs:
    # 這裡使用master01的IP
    node-ip: 192.168.1.31,2408:822a:730:af01::7d8
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
    - x.oiox.cn
    - k8s-master01
    - k8s-master02
    - k8s-master03
    - 192.168.1.31
    - 192.168.1.32
    - 192.168.1.33
    - 192.168.1.34
    - 192.168.1.35
    - 192.168.1.36
    - 192.168.1.60
    - 127.0.0.1
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16,2408:822a:730:af01::/64
  serviceSubnet: 10.96.0.0/16,2408:822a:730:af01::/112
scheduler: {}
# 這裡使用的是負載地址
controlPlaneEndpoint: "192.168.1.36:9443"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
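
# 正式初始化前,可先校驗配置檔案並預拉取所需映象(kubeadm 1.26 及以上版本提供 validate 子命令,此步驟為可選)
kubeadm config validate --config kubeadm.yaml
kubeadm config images pull --config kubeadm.yaml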







[root@k8s-master01 ~]# kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0505 03:06:30.873603   10998 checks.go:844] detected that the sandbox image "m.daocloud.io/registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local x.oiox.cn] and IPs [10.96.0.1 192.168.1.31 192.168.1.36 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35 192.168.1.60 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0505 03:06:33.121345   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0505 03:06:33.297328   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "super-admin.conf" kubeconfig file
W0505 03:06:33.403541   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0505 03:06:33.552221   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0505 03:06:33.625848   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.155946ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 16.665034989s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0505 03:06:54.233183   10998 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.1.36:9443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.36:9443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be 
[root@k8s-master01 ~]# 



# 如果初始化失敗需要重新來過,可先執行 reset 清理環境後再重新初始化
[root@k8s-master01 ~]# kubeadm reset



[root@k8s-master01 ~]# 
[root@k8s-master01 ~]#   mkdir -p $HOME/.kube
[root@k8s-master01 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# 

# 使用以下指令碼將證書檔案複製到其他master節點
USER=root
CONTROL_PLANE_IPS="192.168.1.32 192.168.1.33"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # 如果你正使用外部 etcd,忽略下一行
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

# 在其他的master上面執行,將證書檔案放入所需目錄
USER=root
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# 如果你正使用外部 etcd,忽略下一行
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
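
# 可用以下命令確認證書檔案已放置到位
ls -l /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/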


# 在master02上執行操作,將加入控制節點
cat > kubeadm-join-master-02.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
controlPlane:
  localAPIEndpoint:
    advertiseAddress: "192.168.1.32"
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.36:9443
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be"
    # 請更改上面的認證資訊,使之與你的叢集中實際使用的令牌和 CA 證書匹配
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 192.168.1.32,2408:822a:730:af01::fab
EOF

kubeadm join --config=kubeadm-join-master-02.yaml

# 在master03上執行操作,將加入控制節點
cat > kubeadm-join-master-03.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
controlPlane:
  localAPIEndpoint:
    advertiseAddress: "192.168.1.33"
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.36:9443
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be"
    # 請更改上面的認證資訊,使之與你的叢集中實際使用的令牌和 CA 證書匹配
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 192.168.1.33,2408:822a:730:af01::bea
EOF

kubeadm join --config=kubeadm-join-master-03.yaml


# 在node01上執行操作,將加入工作節點
cat > kubeadm-join-node-01.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.36:9443
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be"
    # 請更改上面的認證資訊,使之與你的叢集中實際使用的令牌和 CA 證書匹配
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 192.168.1.34,2408:822a:730:af01::bcf
EOF

kubeadm join --config=kubeadm-join-node-01.yaml

# 在node02上執行操作,將加入工作節點
cat > kubeadm-join-node-02.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.36:9443
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:583ddadd1318dae447c3890aa3a2469c5b00c6775e87102458db07e691c724be"
    # 請更改上面的認證資訊,使之與你的叢集中實際使用的令牌和 CA 證書匹配
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 192.168.1.35,2408:822a:730:af01::443
EOF

kubeadm join --config=kubeadm-join-node-02.yaml

檢視叢集狀態

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   2m14s   v1.30.0
k8s-master02   NotReady   control-plane   48s     v1.30.0
k8s-master03   NotReady   control-plane   30s     v1.30.0
k8s-node01     NotReady   <none>          19s     v1.30.0
k8s-node02     NotReady   <none>          9s      v1.30.0
[root@k8s-master01 ~]# 

安裝Calico

更改calico網段

# 下載所需yaml檔案
wget https://mirrors.chenby.cn/https://github.com/projectcalico/calico/blob/master/manifests/calico-typha.yaml

# 備份指令碼檔案
cp calico-typha.yaml calico.yaml
cp calico-typha.yaml calico-ipv6.yaml

# 修改指令碼檔案中配置項

# vim calico.yaml
# calico-config ConfigMap處
    "ipam": {
        "type": "calico-ipam",
    },
    - name: IP
      value: "autodetect"

    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/12"

vim calico-ipv6.yaml
# calico-config ConfigMap處
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
    - name: IP
      value: "autodetect"

    - name: IP6
      value: "autodetect"

    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"

    - name: CALICO_IPV6POOL_CIDR
      value: "2408:822a:730:af01::/64"

    - name: FELIX_IPV6SUPPORT
      value: "true"
      
     # 設定IPv6 vxLAN的模式為CrossSubnet
     # 如果節點跨了子網,pod通訊用vxlan封裝,注意該功能3.23版本後才支援
    - name: CALICO_IPV6POOL_VXLAN
      value: "CrossSubnet"
     # 增加環境變數,開啟IPv6 pool nat outgoing功能
    - name: CALICO_IPV6POOL_NAT_OUTGOING
      value: "true"



# 若docker映象拉不下來,可以使用國內的倉庫
# sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico.yaml 
# sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico-ipv6.yaml
# sed -i "s#m.daocloud.io/docker.io/calico/#docker.io/calico/#g" calico.yaml 
# sed -i "s#m.daocloud.io/docker.io/calico/#docker.io/calico/#g" calico-ipv6.yaml

# 本地沒有公網 IPv6 使用 calico.yaml
# kubectl apply -f calico.yaml

# 本地有公網 IPv6 使用 calico-ipv6.yaml 
kubectl apply -f calico-ipv6.yaml 

檢視容器狀態

# calico 初始化會很慢 需要耐心等待一下,大約十分鐘左右
[root@k8s-master01 ~]# kubectl get pod -A| grep calico
kube-system   calico-kube-controllers-57cf4498-rqhhz   1/1     Running   0          4m1s
kube-system   calico-node-4mbth                        1/1     Running   0          4m1s
kube-system   calico-node-624z2                        1/1     Running   0          4m1s
kube-system   calico-node-646qq                        1/1     Running   0          4m1s
kube-system   calico-node-7m4z8                        1/1     Running   0          4m1s
kube-system   calico-node-889qb                        1/1     Running   0          4m1s
kube-system   calico-typha-7746b44b78-kcgkx            1/1     Running   0          4m1s
[root@k8s-master01 ~]# 

檢視叢集

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   10m     v1.30.0
k8s-master02   Ready    control-plane   9m3s    v1.30.0
k8s-master03   Ready    control-plane   8m45s   v1.30.0
k8s-node01     Ready    <none>          8m34s   v1.30.0
k8s-node02     Ready    <none>          8m24s   v1.30.0
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57cf4498-rqhhz   1/1     Running   0          93s
kube-system   calico-node-4mbth                        1/1     Running   0          93s
kube-system   calico-node-624z2                        1/1     Running   0          93s
kube-system   calico-node-646qq                        1/1     Running   0          93s
kube-system   calico-node-7m4z8                        1/1     Running   0          93s
kube-system   calico-node-889qb                        1/1     Running   0          93s
kube-system   calico-typha-7746b44b78-kcgkx            1/1     Running   0          93s
kube-system   coredns-7c445c467-kmjd7                  1/1     Running   0          10m
kube-system   coredns-7c445c467-xzhn6                  1/1     Running   0          10m
kube-system   etcd-k8s-master01                        1/1     Running   5          10m
kube-system   etcd-k8s-master02                        1/1     Running   70         9m8s
kube-system   etcd-k8s-master03                        1/1     Running   0          8m50s
kube-system   kube-apiserver-k8s-master01              1/1     Running   5          10m
kube-system   kube-apiserver-k8s-master02              1/1     Running   70         9m8s
kube-system   kube-apiserver-k8s-master03              1/1     Running   0          8m50s
kube-system   kube-controller-manager-k8s-master01     1/1     Running   5          10m
kube-system   kube-controller-manager-k8s-master02     1/1     Running   2          9m8s
kube-system   kube-controller-manager-k8s-master03     1/1     Running   2          8m50s
kube-system   kube-proxy-74c8q                         1/1     Running   0          8m52s
kube-system   kube-proxy-g6mcf                         1/1     Running   0          8m31s
kube-system   kube-proxy-lcrv7                         1/1     Running   0          10m
kube-system   kube-proxy-qbvc8                         1/1     Running   0          8m41s
kube-system   kube-proxy-vxhh9                         1/1     Running   0          9m10s
kube-system   kube-scheduler-k8s-master01              1/1     Running   5          10m
kube-system   kube-scheduler-k8s-master02              1/1     Running   2          9m8s
kube-system   kube-scheduler-k8s-master03              1/1     Running   2          8m50s
[root@k8s-master01 ~]# 

叢集驗證

部署pod資源

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# 檢視
kubectl  get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

用pod解析預設名稱空間中的kubernetes

# 檢視name
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

# 進行解析
kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

測試跨名稱空間是否可以解析

# 檢視有那些name
kubectl  get svc -A
NAMESPACE     NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP         76m
kube-system   calico-typha      ClusterIP   10.105.100.82   <none>        5473/TCP        35m
kube-system   coredns-coredns   ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   8m14s
kube-system   metrics-server    ClusterIP   10.105.60.31    <none>        443/TCP         109s

# 進行解析
kubectl exec  busybox -n default -- nslookup coredns-coredns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local

Name:      coredns-coredns.kube-system
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local
[root@k8s-master01 metrics-server]# 

每個節點都必須要能訪問Kubernetes的kubernetes svc 443和kube-dns的service 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

 telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

Pod和Pod之前要能通

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-76754ff848-pw4xg   1/1     Running   0          38m     172.25.244.193   k8s-master01   <none>           <none>
calico-node-97m55                          1/1     Running   0          38m     192.168.1.34     k8s-node01     <none>           <none>
calico-node-hlz7j                          1/1     Running   0          38m     192.168.1.32     k8s-master02   <none>           <none>
calico-node-jtlck                          1/1     Running   0          38m     192.168.1.33     k8s-master03   <none>           <none>
calico-node-lxfkf                          1/1     Running   0          38m     192.168.1.35     k8s-node02     <none>           <none>
calico-node-t667x                          1/1     Running   0          38m     192.168.1.31     k8s-master01   <none>           <none>
calico-typha-59d75c5dd4-gbhfp              1/1     Running   0          38m     192.168.1.35     k8s-node02     <none>           <none>
coredns-coredns-c5c6d4d9b-bd829            1/1     Running   0          10m     172.25.92.65     k8s-master02   <none>           <none>
metrics-server-7c8b55c754-w7q8v            1/1     Running   0          3m56s   172.17.125.3     k8s-node01     <none>           <none>

# 進入busybox ping其他節點上的pod

kubectl exec -ti busybox -- sh
/ # ping 192.168.1.34
PING 192.168.1.34 (192.168.1.34): 56 data bytes
64 bytes from 192.168.1.34: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.34: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.34: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.34: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.34: seq=4 ttl=63 time=0.907 ms

# Being reachable shows that this pod can communicate across namespaces and across hosts

Create three replicas; they should be scheduled onto different nodes (a quick check with -o wide is shown after the output below; delete the deployment when you are done)

cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl  get pod 
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s
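
As mentioned above, a quick check that the three replicas really were scheduled onto different nodes (the label app=nginx comes from the Deployment template above):

kubectl get pod -l app=nginx -o wide
# The NODE column should show a different node for each nginx-deployment pod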

# Delete the nginx deployment
[root@k8s-master01 ~]# kubectl delete deployments nginx-deployment 

Test IPv6

# Create a test workload and Service
[root@k8s-master01 ~]# cat > cby.yaml << EOF 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80
EOF

[root@k8s-master01 ~]# kubectl  apply -f cby.yaml 

# Check pod status
[root@k8s-master01 ~]# kubectl  get pod
NAME                      READY   STATUS    RESTARTS   AGE
chenby-868fd8f687-727hd   1/1     Running   0          23s
chenby-868fd8f687-lrxsr   1/1     Running   0          23s
chenby-868fd8f687-n7f2k   1/1     Running   0          23s
[root@k8s-master01 ~]#

# Check Service status
[root@k8s-master01 ~]# kubectl get svc 
NAME         TYPE        CLUSTER-IP                 EXTERNAL-IP   PORT(S)        AGE
chenby       NodePort    2408:822a:730:af01::4466   <none>        80:30921/TCP   2m40s
kubernetes   ClusterIP   10.96.0.1                  <none>        443/TCP        58m
[root@k8s-master01 ~]# 
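
Because the Service requests RequireDualStack with IPv6 listed first, kubectl get svc only shows the primary (IPv6) ClusterIP. A minimal check that both address families were actually assigned (the second address will be an IPv4 ClusterIP from the service CIDR):

kubectl get svc chenby -o jsonpath='{.spec.clusterIPs}{"\n"}'
# Expected output similar to: ["2408:822a:730:af01::4466","10.x.x.x"]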

# Access from inside the cluster; the test needs to run on a node that hosts one of the pods
[root@k8s-node01 ~]# curl -g -6 [2408:822a:730:af01::4466]
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node01 ~]# 

# Access the node address: inside the cluster, run the test on a node that hosts one of the pods; from outside the cluster, any node address works
[root@k8s-node01 ~]# curl -g -6 [2408:822a:730:af01::bcf]:30921
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node01 ~]#

# Test the IPv4 address
[root@k8s-master01 ~]# curl  http://192.168.1.31:30921/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master01 ~]# 

Install Metrics-Server

# Download
wget https://mirrors.chenby.cn/https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Edit the configuration
vim components.yaml

# Modify this section: add   - --kubelet-insecure-tls
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls


# Change the image registry
sed -i "s#registry.k8s.io/metrics-server#registry.aliyuncs.com/google_containers#g" components.yaml
cat components.yaml | grep image


[root@k8s-master01 ~]# kubectl apply -f components.yaml

# It takes a short while before metrics become visible
[root@k8s-master01 ~]# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   196m         4%     2270Mi          58%       
k8s-master02   165m         4%     1823Mi          47%       
k8s-master03   162m         4%     1784Mi          46%       
k8s-node01     72m          1%     1492Mi          38%       
k8s-node02     62m          1%     1355Mi          35%       
[root@k8s-master01 ~]# 
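
Pod-level metrics should be available from the same deployment once the API is ready, for example:

kubectl top pod -n kube-system
# Prints per-pod CPU and memory usage for the kube-system namespace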

Install Helm

wget https://mirrors.huaweicloud.com/helm/v3.14.4/helm-v3.14.4-linux-amd64.tar.gz
tar xvf helm-*-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
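
A quick sanity check that the binary is on the PATH and runs:

helm version
# Expected output similar to: version.BuildInfo{Version:"v3.14.4", ...}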

Install the dashboard

# Add the chart repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# Install with default values
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kube-system

# In my cluster, installing with default values caused kubernetes-dashboard-kong to fail because port 8444 was already in use
# Install with the command below instead, which disables kong's TLS during installation
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kube-system --set kong.admin.tls.enabled=false
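
After the install completes, it is worth confirming the dashboard pods come up (the release was installed into kube-system above):

kubectl get pods -n kube-system | grep kubernetes-dashboard
# All kubernetes-dashboard-* pods should eventually reach Running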

Change the dashboard Service to NodePort; skip this step if it already is

kubectl edit svc  -n kube-system kubernetes-dashboard-kong-proxy
  type: NodePort
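
If you prefer a non-interactive change over kubectl edit, an equivalent patch (a sketch using the same Service name as above) is:

kubectl patch svc kubernetes-dashboard-kong-proxy -n kube-system \
  -p '{"spec": {"type": "NodePort"}}'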

Check the port number

[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard-kong-proxy -n kube-system
NAME                              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard-kong-proxy   NodePort   10.96.247.74   <none>        443:32457/TCP   2m29s
[root@k8s-master01 ~]# 

Create a token

cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl  apply -f dashboard-user.yaml

# Create the token
kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikk0dXVHN05BZ0k3VXQ1ekR3NkMzTThad2tzVkpEbFp0bjAyR1lRYlpObmMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzE0ODg1NDYzLCJpYXQiOjE3MTQ4ODE4NjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNWYzYzkxYjctZDMzYy00ZjcwLTg0OTEtMmEwNTVmYzI1ZThhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZjdjYmFmMGItOGVkMC00ZmU4LThlNGUtZGUwZDEzZDk5ZDJhIn19LCJuYmYiOjE3MTQ4ODE4NjMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.JELSXYQM7fRt4ccaBhBe1O_rMvvVGtv_NzN3Hr8TIzxGTc0yvv3lwSP8SygFQVI3a60Y3ZU45khjqYJ5MbmJfO_t3BtjjMXE-WXmqTK4_lSS0urkmZ_7yxwJNwq4keAQYRIXcOJzzEwbhKhKblRoY5GgssW93nAOfcHZZNy2hKXzmlnzBoMbg46P2TmcSeYitYq4yLL877KALvQVUg7OWcUnX68NGWM3kW78Uakurjcx7WGSOZRm-vS2VWn3iyf--3Jz2v-oUHmtPUEj82SE0rXnBMC_VlrSlWBR34gk0p7NLeblAlmuqiY7FEOkWyHbtQmGZuCVm0DUtGnMsqAfew

Create a long-lived token

cat > dashboard-user-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "admin-user"   
type: kubernetes.io/service-account-token  
EOF

kubectl  apply -f dashboard-user-token.yaml

# Retrieve the token
kubectl get secret admin-user -n kube-system -o jsonpath="{.data.token}" | base64 -d

eyJhbGciOiJSUzI1NiIsImtpZCI6Ikk0dXVHN05BZ0k3VXQ1ekR3NkMzTThad2tzVkpEbFp0bjAyR1lRYlpObmMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmN2NiYWYwYi04ZWQwLTRmZTgtOGU0ZS1kZTBkMTNkOTlkMmEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.B5UxbBooSeV5M9PfOhSp5bCwBs5434u3y1tjCmfEuKKfUYbwYMq2jsjm4n9M816kKWG30NoQ8aqVxfJK2EKThSURLMhhr4idq2E_ndftXel-fE4dqDfHj8jfDcuvfXMXJhsNFkD6jcQW25aMl_W1u8_5A5xNAE9EkspkQWYAiBFJHZO6jd5Evt134Q0i9mPGqw-kqK7QOaBoVlYPlJd4jPdrPUoIyx0VLj9rjNcYTFWhe_qkBndcu28nM33NfG9D-Qj6Z29_-rT3BrpCfe54S3ihdsn5YNxu3UQrKM6Vaquwgq0Z4SnMHUfSvV1OwsYGLeLC6gb8dgtVhwF5tJIuAQ

Log in to the dashboard

https://192.168.1.31:32457/

Install ingress

Run the deployment

wget https://mirrors.chenby.cn/https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# Switch to a mirror registry accessible from mainland China (optional if you can pull from the original source)
sed -i "s#registry.k8s.io#k8s.dockerproxy.com#g" *.yaml
cat deploy.yaml | grep image

cat > backend.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5 
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
EOF

kubectl  apply -f deploy.yaml 
kubectl  apply -f backend.yaml 
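
Before creating the demo resources below, confirm the controller is up (deploy.yaml installs into the ingress-nginx namespace by default):

kubectl get pods -n ingress-nginx
# The admission-create/patch jobs should be Completed and the controller pod Running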


cat > ingress-demo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
EOF

# Run this after the previous resources have finished deploying:
kubectl  apply -f ingress-demo-app.yaml 

kubectl  get ingress
NAME               CLASS   HOSTS                            ADDRESS     PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.32   80      7s

Filter for the ingress ports

# Change the Service type to NodePort
kubectl edit svc -n ingress-nginx   ingress-nginx-controller
type: NodePort

[root@hello ~/yaml]# kubectl  get svc -A | grep ingress
ingress-nginx          ingress-nginx-controller             NodePort    10.104.231.36    <none>        80:32636/TCP,443:30579/TCP   104s
ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.101.85.88     <none>        443/TCP                      105s
[root@hello ~/yaml]#

Test the ingress

cat >> /etc/hosts <<EOF
192.168.1.31 hello.chenby.cn
192.168.1.31 demo.chenby.cn
EOF

# 32636 is the HTTP NodePort shown above; demo.chenby.cn only routes the /nginx prefix
[root@k8s-master01 ~]# curl hello.chenby.cn:32636
[root@k8s-master01 ~]# curl demo.chenby.cn:32636/nginx

Install the Grafana / Prometheus / Alertmanager stack

Download the offline package

# Add the official prometheus-community Helm chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Pull the chart as an offline package
helm pull  prometheus-community/kube-prometheus-stack

# Extract the downloaded package
tar xvf kube-prometheus-stack-*.tgz 

Change the image registries

# Enter the chart directory and update the image registries
cd kube-prometheus-stack/
sed -i "s#registry.k8s.io#k8s.dockerproxy.com#g" charts/kube-state-metrics/values.yaml
sed -i "s#quay.io#quay.dockerproxy.com#g" charts/kube-state-metrics/values.yaml

sed -i "s#registry.k8s.io#k8s.dockerproxy.com#g" values.yaml
sed -i "s#quay.io#quay.dockerproxy.com#g" values.yaml
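
A quick check that the substitutions took effect; no output means every registry reference was rewritten:

grep -nE "registry\.k8s\.io|quay\.io" values.yaml charts/kube-state-metrics/values.yaml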

Install

# Run the install
helm install  op  .  --create-namespace --namespace op
NAME: op
LAST DEPLOYED: Sun May  5 12:43:26 2024
NAMESPACE: op
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace op get pods -l "release=op"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

Modify the Services

# Edit the Services and set their type to NodePort
kubectl  edit svc -n op op-grafana
kubectl  edit svc -n op op-kube-prometheus-stack-prometheus 
        type: NodePort

Verify

[root@hello ~/yaml]# kubectl --namespace op get pods -l "release=op"
NAME                                                 READY   STATUS    RESTARTS   AGE
op-kube-prometheus-stack-operator-5c586dfc7f-hmqdf   1/1     Running   0          96s
op-kube-state-metrics-57d49c9db4-r2mvn               1/1     Running   0          96s
op-prometheus-node-exporter-7lrks                    1/1     Running   0          96s
op-prometheus-node-exporter-7q2ns                    1/1     Running   0          96s
op-prometheus-node-exporter-9xblm                    1/1     Running   0          96s
op-prometheus-node-exporter-gf6gf                    1/1     Running   0          96s
op-prometheus-node-exporter-h976s                    1/1     Running   0          96s
[root@hello ~/yaml]# 

# Check the Services
[root@hello ~/yaml]# kubectl --namespace op get svc
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
alertmanager-operated                   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP      2m8s
op-grafana                              NodePort    10.96.28.3      <none>        80:30833/TCP                    2m15s
op-kube-prometheus-stack-alertmanager   ClusterIP   10.96.134.225   <none>        9093/TCP,8080/TCP               2m15s
op-kube-prometheus-stack-operator       ClusterIP   10.96.106.106   <none>        443/TCP                         2m15s
op-kube-prometheus-stack-prometheus     NodePort    10.96.181.73    <none>        9090:31474/TCP,8080:31012/TCP   2m15s
op-kube-state-metrics                   ClusterIP   10.96.168.6     <none>        8080/TCP                        2m15s
op-prometheus-node-exporter             ClusterIP   10.96.43.139    <none>        9100/TCP                        2m15s
prometheus-operated                     ClusterIP   None            <none>        9090/TCP                        2m7s
[root@hello ~/yaml]# 

# Check the pods
root@hello:~# kubectl --namespace op get pod
NAME                                                    READY   STATUS    RESTARTS   AGE
alertmanager-op-kube-prometheus-stack-alertmanager-0   2/2     Running   0          2m32s
op-grafana-6489698854-bhgc5                            3/3     Running   0          2m39s
op-kube-prometheus-stack-operator-5c586dfc7f-hmqdf     1/1     Running   0          2m39s
op-kube-state-metrics-57d49c9db4-r2mvn                 1/1     Running   0          2m39s
op-prometheus-node-exporter-7lrks                      1/1     Running   0          2m39s
op-prometheus-node-exporter-7q2ns                      1/1     Running   0          2m39s
op-prometheus-node-exporter-9xblm                      1/1     Running   0          2m39s
op-prometheus-node-exporter-gf6gf                      1/1     Running   0          2m39s
op-prometheus-node-exporter-h976s                      1/1     Running   0          2m39s
prometheus-op-kube-prometheus-stack-prometheus-0       2/2     Running   0          2m31s
root@hello:~# 

Access

# Access URLs
http://192.168.1.31:30833
http://192.168.1.31:31474

user: admin
password: prom-operator
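
If the default password does not work, the chart stores the generated admin credentials in a Secret. The name op-grafana follows the release name used above; adjust it if your release name differs:

kubectl get secret -n op op-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo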

Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
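
Optionally, a shorter alias with the same completion (the standard snippet from the kubectl completion setup; purely a convenience):

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc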

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, 知乎, 開源中國, 思否, 掘金, 簡書, 華為雲, 阿里雲, 騰訊雲, 嗶哩嗶哩, 今日頭條, 新浪微博, and my personal blog

Search for 《小陳運維》 on any of these platforms

Articles are mainly published on the WeChat official account 《Linux運維交流社群》
