Table of Contents
- Binary HA: Basic Configuration
- Binary: System and Kernel Upgrade
- Binary: Basic Component Installation
- Binary: Certificate Generation Explained
- Binary: High Availability and etcd Configuration
- Binary: K8s Component Configuration
- Binary: Automatic Certificate Issuance with Bootstrapping
- Binary: Node and Calico Configuration
Binary HA: Basic Configuration
For the k8s HA architecture analysis, HA Kubernetes cluster planning, and static IP setup, see the previous article.
Configure the hosts file on all nodes (send input to all sessions)
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.232.128 k8s-master01
192.168.232.129 k8s-master02
192.168.232.130 k8s-master03
192.168.232.236 k8s-master-lb # if this is not an HA cluster, this IP is Master01's IP
192.168.232.131 k8s-node01
192.168.232.132 k8s-node02
The hosts entries are mainly for the control node: it downloads files and then pushes them to the other nodes over SSH (using the SSH key configured below).
Configure the CentOS 7 yum repos as follows:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
Disable firewalld, dnsmasq, and SELinux on all nodes (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not)
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Check the status (must be Disabled)
getenforce
Disable swap on all nodes and comment out the swap entry in fstab
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
Synchronize time on all nodes
Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
Time synchronization is configured as follows:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
Check the time
date
Add it to crontab
crontab -e
# add the following line
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
Configure limits on all nodes:
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end (the hard nofile limit must be at least the soft limit)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
On the Master01 node (turn off send input to all sessions), set up passwordless SSH key login to the other nodes. All configuration files and certificates generated during installation are created on Master01, and cluster management also happens on Master01. On Alibaba Cloud or AWS, a separate kubectl server is needed. Key configuration:
ssh-keygen -t rsa
Configure passwordless login from Master01 to the other nodes
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Install basic tools on all nodes (send input to all sessions)
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y
Download the installation files on Master01 (turn off send input to all sessions)
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
Upgrade the system on all nodes and reboot (send input to all sessions); this step does not upgrade the kernel, which is handled separately in the next section:
yum update -y --exclude=kernel* && reboot # CentOS 7 needs this upgrade; CentOS 8 can upgrade as needed
Binary: System and Kernel Upgrade
CentOS 7 needs its kernel upgraded to 4.18+; here we upgrade to 4.19
Download the kernel on the master01 node (turn off send input to all sessions):
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy the packages from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes
cd /root && yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19
grubby --default-kernel
Reboot all nodes, then verify the kernel is 4.19
reboot
uname -a
Install ipvsadm on all nodes (for load balancing):
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the ipvs modules on all nodes. In kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run
systemctl enable --now systemd-modules-load.service
Check that the modules are loaded (the modprobe commands above load them for the current session; the systemd service ensures they load again after reboot):
lsmod | grep -e ip_vs -e nf_conntrack
Enable the kernel parameters required by a k8s cluster; configure them on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
If net.ipv4.ip_forward is not enabled, cross-host pod communication will not work.
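A quick, optional check that the key setting took effect after sysctl --system (assuming the file above was applied):
sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1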
After configuring the kernel parameters on all nodes, reboot the servers and make sure the modules are still loaded after the reboot
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
Binary: Basic Component Installation
- Docker installation
- K8s and etcd installation
Docker Installation
Install Docker-ce 19.03 on all nodes (the officially recommended version)
yum install docker-ce-19.03.* -y
Newer kubelet versions recommend systemd, so change Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker on boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker
K8s and etcd Installation
Download the kubernetes packages on Master01 (turn off send input to all sessions)
Get the latest version from the official site: https://github.com/kubernetes/kubernetes
Open the CHANGELOG directory; at the time of writing the latest is 1.22. Click Server Binaries to get the download link, and download a newer version if one is available
wget https://dl.k8s.io/v1.22.0-beta.1/kubernetes-server-linux-amd64.tar.gz
If the download fails, download locally and upload it to the server
Download the etcd package (3.4.13 is the officially recommended and validated version)
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
For binaries, unpacking effectively completes the installation
Unpack the kubernetes files
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
Unpack the etcd files
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
Check the versions
kubelet --version
etcdctl version
Send the components to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
Create the /opt/cni/bin directory on all nodes (send input to all sessions)
mkdir -p /opt/cni/bin
View the branches
cd k8s-ha-install/
git branch -a
On Master01, switch to the 1.20.x branch (switch to the matching branch for other versions) (turn off send input to all sessions)
git checkout manual-installation-v1.20.x
Binary: Certificate Generation Explained
- etcd certificates
- k8s component certificates
This is the most critical part of a binary install; one wrong step ruins everything, so make sure every step is correct
Download the certificate generation tools on Master01
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
etcd Certificates
Create the etcd certificate directory on all Master nodes (send input to all sessions, excluding the node sessions)
mkdir /etc/etcd/ssl -p
Create the kubernetes directories on all nodes (send input to all sessions)
mkdir -p /etc/kubernetes/pki
Generate the etcd certificates on the Master01 node (turn off send input to all sessions)
Generate the certificates' CSR files: certificate signing request files that configure the domain names, company, and organizational units
# this directory contains the csr files needed to generate the certificates
cd /root/k8s-ha-install/pki
# generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
Check the generated keys
ls /etc/etcd/ssl/
Issue the certificates
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.232.128,192.168.232.129,192.168.232.130 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
Check the generated certificates
ls /etc/etcd/ssl/
Generated files:
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem etcd.csr etcd-key.pem etcd.pem
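As an optional sanity check, you can confirm with openssl that the hostnames and IPs passed via -hostname actually landed in the certificate's SANs:
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
# should list k8s-master01/02/03 and the three master IPs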
Copy the certificates to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
k8s Component Certificates
Generate the kubernetes CA on Master01
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
Check the generated keys
ls /etc/kubernetes/pki
Generate the apiserver certificate
10.96.0.1 is the first address of the k8s service CIDR (10.96.0.0/12); if you change the service CIDR, change 10.96.0.1 accordingly. If this is not an HA cluster, 192.168.232.236 is Master01's IP
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.232.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.232.128,192.168.232.129,192.168.232.130 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
Check the generated certificates
ls /etc/kubernetes/pki
Generate the apiserver aggregation certificates (requestheader-client-xxx, requestheader-allowed-xxx: aggregator)
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
Generate the controller-manager certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Note: if this is not an HA cluster, change 192.168.232.236:8443 to master01's address and 8443 to the apiserver port (6443 by default)
# set-cluster: configure a cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.232.236:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-credentials: configure a user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# configure a context entry that ties the cluster to the user
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# use a context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Generate the scheduler certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Note: if this is not an HA cluster, change 192.168.232.236:8443 to master01's address and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.232.236:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Generate the admin certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: if this is not an HA cluster, change 192.168.232.236:8443 to master01's address and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.232.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
We generated admin.kubeconfig, scheduler.kubeconfig, and controller-manager.kubeconfig with the same commands; how are they told apart?
Look at admin-csr.json
cat admin-csr.json
{
"CN": "admin", # 域名
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters", # 部門,相當於admin是屬於哪個組的
"OU": "Kubernetes-manual"
}
]
}
The generated certificate defines a user admin belonging to the system:masters group. A k8s installation ships a clusterrole (a cluster-wide role, essentially a permission set) with the highest administrative permissions, together with a clusterrolebinding that binds that role to the system:masters group, so every user in that group has full cluster permissions
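Once the cluster is up, this built-in binding can be inspected directly; a minimal check (it only works after the apiserver is started later in this guide):
kubectl get clusterrolebinding cluster-admin -o wide
# the GROUPS column should show system:masters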
Create the ServiceAccount key pair -> secret
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
ServiceAccount is a k8s authentication mechanism: creating a ServiceAccount also creates a secret bound to it, and that secret holds a generated token
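An optional consistency check that sa.pub really is the public half of sa.key:
diff <(openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null) /etc/kubernetes/pki/sa.pub && echo 'key pair matches'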
Send the certificates to the other nodes
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
Check the certificate files (23 files in total)
ls /etc/kubernetes/pki/
ls /etc/kubernetes/pki/ |wc -l
Check the certificate lifetime (expiry is 876000h, i.e. 100 years)
cat ca-config.json
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
Binary: High Availability and etcd Configuration
- etcd configuration
- HA configuration
Etcd Configuration
In production, always run an odd number of etcd nodes; otherwise split-brain is likely
The etcd configuration is largely identical across nodes; just adjust the hostname and IP addresses in each Master node's etcd config
Note that the three nodes' configurations differ
Master01
vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.232.128:2380'
listen-client-urls: 'https://192.168.232.128:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.232.128:2380'
advertise-client-urls: 'https://192.168.232.128:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.232.128:2380,k8s-master02=https://192.168.232.129:2380,k8s-master03=https://192.168.232.130:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Master02
vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.232.129:2380'
listen-client-urls: 'https://192.168.232.129:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.232.129:2380'
advertise-client-urls: 'https://192.168.232.129:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.232.128:2380,k8s-master02=https://192.168.232.129:2380,k8s-master03=https://192.168.232.130:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Master03
vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.232.130:2380'
listen-client-urls: 'https://192.168.232.130:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.232.130:2380'
advertise-client-urls: 'https://192.168.232.130:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.232.128:2380,k8s-master02=https://192.168.232.129:2380,k8s-master03=https://192.168.232.130:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Create the etcd service on all Master nodes and start it (send input to all sessions, excluding the node sessions)
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all Master nodes
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
Check etcd status
export ETCDCTL_API=3
etcdctl --endpoints="192.168.232.130:2379,192.168.232.129:2379,192.168.232.128:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
Output:
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.232.130:2379 | 8fa642cec63e074b | 3.4.13 | 20 kB | false | false | 2 | 8 | 8 | |
| 192.168.232.129:2379 | b23932e50da8a0ea | 3.4.13 | 25 kB | false | false | 2 | 8 | 8 | |
| 192.168.232.128:2379 | d79600c132f4ccdb | 3.4.13 | 20 kB | true | false | 2 | 8 | 8 | |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
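Two more optional health checks: member list should show all three members as started, and endpoint health should report each endpoint healthy:
etcdctl --endpoints="192.168.232.130:2379,192.168.232.129:2379,192.168.232.128:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem member list --write-out=table
etcdctl --endpoints="192.168.232.130:2379,192.168.232.129:2379,192.168.232.128:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health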
HA Configuration
High availability configuration (note: if this is not an HA cluster, haproxy and keepalived are not needed)
If installing in the cloud, this section is also unnecessary; use the cloud load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB
On public clouds, use the cloud's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) instead of haproxy and keepalived, because most public clouds do not support keepalived. On Alibaba Cloud, the kubectl client also cannot sit on a master node: Alibaba's SLB has a loopback problem, meaning servers behind the SLB cannot reach the SLB itself. Tencent Cloud has fixed this, so it is the easier choice here.
Slb -> haproxy -> apiserver
Install keepalived and haproxy on all Master nodes
yum install keepalived haproxy -y
Configure HAProxy on all Masters; the configuration is identical (delete the default config with ggdG, then Enter)
vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.232.128:6443 check
server k8s-master02 192.168.232.129:6443 check
server k8s-master03 192.168.232.130:6443 check
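Before starting haproxy (done further below), you can optionally validate the config syntax:
haproxy -c -f /etc/haproxy/haproxy.cfg
# expected output: Configuration file is valid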
keepalived
Configure KeepAlived on all Master nodes; the configurations differ, so keep them apart
Note each node's IP and network interface (the interface parameter); check the interface (ens33) and adjust the config file
ip a
If other keepalived instances run on the same network, make sure virtual_router_id 51 is not duplicated, because VRRP advertisements are broadcast on the network segment
Master01 keepalived (delete the default config with ggdG, then Enter)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 192.168.232.128
virtual_router_id 51
priority 101
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.232.236
}
track_script {
    chk_apiserver
}
}
Master02 keepalived (delete the default config with ggdG, then Enter)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.232.129
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.232.236
}
track_script {
    chk_apiserver
}
}
Master03 keepalived (delete the default config with ggdG, then Enter)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.232.130
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.232.236
}
track_script {
    chk_apiserver
}
}
Configure the health-check script on all master nodes (send input to all sessions, excluding the node sessions)
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh
Start haproxy and keepalived on all master nodes
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
keepalived runs an election and then binds the VIP: the VIP is bound to one of the three master nodes, and if that node fails, the VIP moves to another node
Check where 192.168.232.236 is bound
ip a
VIP test
ping 192.168.232.236
Important: if keepalived and haproxy are installed, test that keepalived works correctly
telnet 192.168.232.236 8443
If the VIP cannot be pinged, or telnet does not show the ] prompt (from "Escape character is '^]'."), the VIP is not working; do not continue. Troubleshoot keepalived: check the firewall and selinux, the haproxy and keepalived status, the listening ports, and so on
The firewall status on all nodes must be disabled and inactive
systemctl status firewalld
Check selinux on all nodes; it must be disabled: getenforce
Check haproxy and keepalived status on the master nodes:
systemctl status keepalived haproxy
Check the listening ports on the master nodes:
netstat -lntp
Binary: K8s Component Configuration
- Apiserver
- ControllerManager
- Scheduler
Create the required directories on all nodes (send input to all sessions)
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
Apiserver
Create the kube-apiserver service on all Master nodes. Note: if this is not an HA cluster, change 192.168.232.236 to master01's address
Master01 configuration (turn off send input to all sessions)
Note that the k8s service CIDR is 10.96.0.0/12; it must not overlap the host network or the Pod CIDR, so adjust as needed
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.232.128 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.232.128:2379,https://192.168.232.129:2379,https://192.168.232.130:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Master02 configuration
Note that the k8s service CIDR is 10.96.0.0/12; it must not overlap the host network or the Pod CIDR, so adjust as needed
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.232.129 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.232.128:2379,https://192.168.232.129:2379,https://192.168.232.130:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Master03 configuration
Note that the k8s service CIDR is 10.96.0.0/12; it must not overlap the host network or the Pod CIDR, so adjust as needed
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.232.130 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.232.128:2379,https://192.168.232.129:2379,https://192.168.232.130:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Start kube-apiserver on all Master nodes (send input to all sessions, excluding the node sessions)
systemctl daemon-reload && systemctl enable --now kube-apiserver
Check kube-apiserver status
systemctl status kube-apiserver
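An optional liveness probe against the local apiserver; /healthz is readable without credentials via the built-in system:public-info-viewer binding (assuming that default binding is intact):
curl -k https://127.0.0.1:6443/healthz
# expected output: ok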
ControllerManager
Configure the kube-controller-manager service on all Master nodes
Note that the k8s Pod CIDR is 172.16.0.0/12; it must not overlap the host network or the k8s Service CIDR, so adjust as needed
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start kube-controller-manager on all Master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager
Check the status
systemctl status kube-controller-manager
Scheduler
Configure the kube-scheduler service on all Master nodes
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start it
systemctl daemon-reload
systemctl enable --now kube-scheduler
Check the status
systemctl status kube-scheduler
Binary: Automatic Certificate Issuance with Bootstrapping
Bootstrapping automatically issues certificates to node components, i.e. it issues the kubelet's certificates
Why aren't these certificates managed by hand? The k8s master nodes are mostly fixed: once created, they remain the same few machines. Node nodes change far more often, and manually issuing certificates every time a node is added, removed, or serviced would be tedious. A kubelet certificate is also bound to its hostname, and every node's hostname differs, so a mechanism is needed that automatically signs the certificate requests the kubelet sends in
Create the bootstrap on Master01 (turn off send input to all sessions)
Note: if this is not an HA cluster, change 192.168.232.236:8443 to master01's address and 8443 to the apiserver port (6443 by default)
cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.232.236:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
bootstrap-kubelet.kubeconfig is the file the kubelet uses to request certificates from the apiserver
Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, keep the strings the same length and consistent with each other, and make sure the token in the command above (c8ad9c.2e4d610cf3e7426e) matches your modified values
cat bootstrap.secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-c8ad9c
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "The default bootstrap token generated by 'kubelet '."
token-id: c8ad9c
token-secret: 2e4d610cf3e7426e
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
Create the kubectl configuration file; without it, kubectl get node fails (The connection to the server localhost:8080 was refused), so copy the admin kubeconfig into place
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Only one node needs the kubectl command: the control node. Do not put it on every node, which would be dangerous. It can live on any machine outside the cluster; any server that can reach the k8s apiserver works. Copy this file to it and it can access the cluster
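A quick, optional check that the copied kubeconfig works (the ComponentStatus API is deprecated but still convenient here):
kubectl get cs
# scheduler, controller-manager, and the etcd members should report Healthy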
Create the bootstrap resources
kubectl create -f bootstrap.secret.yaml
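Optionally confirm the token secret landed in kube-system:
kubectl get secret bootstrap-token-c8ad9c -n kube-system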
Binary: Node and Calico Configuration
Node Setup
- Copy certificates
- kubelet configuration
- kube-proxy configuration
Copy Certificates
Node nodes are configured with automatically issued certificates
Copy the certificates from the Master01 node to the Node nodes (turn off send input to all sessions)
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
Kubelet Configuration
Create the required directories on all nodes (send input to all sessions)
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service on all nodes
vim /usr/lib/systemd/system/kubelet.service
# add the following
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Configure the kubelet service drop-in file on all nodes
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
# add the following
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Create the kubelet configuration file
Note: if you changed the k8s service CIDR, update clusterDNS in kubelet-conf.yml to the tenth address of the k8s Service CIDR, e.g. 10.96.0.10
vim /etc/kubernetes/kubelet-conf.yml
# add the following
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
Check the system log
tail -f /var/log/messages
Seeing only the following message is normal, because Calico is not installed yet
"Unable to update cni config" err="no networks found in /etc/cni/net.d"
Check the cluster status
kubectl get node
The nodes are NotReady because Calico is not installed yet
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady <none> 2m23s v1.22.0-beta.1
k8s-master02 NotReady <none> 2m16s v1.22.0-beta.1
k8s-master03 NotReady <none> 2m16s v1.22.0-beta.1
k8s-node01 NotReady <none> 2m16s v1.22.0-beta.1
k8s-node02 NotReady <none> 2m16s v1.22.0-beta.1
kube-proxy Configuration
Note: if this is not an HA cluster, change 192.168.232.236:8443 to master01's address and 8443 to the apiserver port (6443 by default)
Run on Master01 (turn off send input to all sessions)
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.232.236:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
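To optionally confirm the kubeconfig was assembled correctly (server URL, embedded CA, token user), print it with secrets redacted:
kubectl config view --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig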
Send the kube-proxy systemd service files from master01 to the other nodes
If you changed the cluster Pod CIDR, update the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your pod CIDR.
for NODE in k8s-master01 k8s-master02 k8s-master03; do
scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
for NODE in k8s-node01 k8s-node02; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
Start kube-proxy on all nodes (send input to all sessions)
systemctl daemon-reload
systemctl enable --now kube-proxy
Check the status
systemctl status kube-proxy
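Assuming kube-proxy.conf sets mode: ipvs (which the ip_vs modules loaded earlier suggest), the kubernetes service VIP should appear as an ipvs virtual server once kube-proxy syncs; an optional check:
ipvsadm -Ln
# 10.96.0.1:443 should be listed, with the three apiserver IPs as real servers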
Calico Configuration
Run on master01 (turn off send input to all sessions)
cd /root/k8s-ha-install/calico/
# modify the following parts of calico-etcd.yaml
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.232.128:2379,https://192.168.232.129:2379,https://192.168.232.130:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
# change this to your own pod CIDR
POD_SUBNET="172.16.0.0/12"
# Note: the following step changes the CIDR under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod CIDR, i.e. replaces 192.168.x.x/16 with your cluster's CIDR, and uncomments it:
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
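Before applying, it is worth optionally verifying that the sed edits above landed:
grep etcd_endpoints calico-etcd.yaml
grep -A1 'CALICO_IPV4POOL_CIDR' calico-etcd.yaml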
kubectl apply -f calico-etcd.yaml
Check the pod status
kubectl get po -n kube-system
Pod status:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-cdd5755b9-4fzg9 1/1 Running 0 113s
calico-node-8xg62 1/1 Running 0 113s
calico-node-dczxz 1/1 Running 0 113s
calico-node-gn8ws 1/1 Running 0 113s
calico-node-qmwkd 1/1 Running 0 113s
calico-node-zfw8n 1/1 Running 2 (78s ago) 113s
If a pod is in an abnormal state, inspect it with kubectl describe or kubectl logs
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Reposting, use, and republication are welcome, but the attribution to 鄭子銘 must be kept (including the link: http://www.cnblogs.com/MingsonZheng/), commercial use is not allowed, and works based on this article must be released under the same license.