Cluster Setup
Cluster Types
Kubernetes clusters roughly fall into two categories: single-master/multi-node and multi-master/multi-node.
- Single master, multiple nodes: one master node plus several worker nodes. Simple to set up, but the master is a single point of failure. Suitable for test environments.
- Multiple masters, multiple nodes: several master nodes plus several worker nodes. More involved to set up, but highly available. Suitable for production environments.
To keep testing simple, this guide builds a cluster with one master and two worker nodes.
Installation Methods
Kubernetes can be deployed in several ways; the mainstream options are:
- minikube: a tool for quickly setting up a single-node Kubernetes
- kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
- Binary packages: download each component's binary from the official site and install them one by one; this approach is the most instructive for understanding the Kubernetes components
Pre-installation Preparation
Host plan:
Role | IP | OS | Specs |
---|---|---|---|
master | 192.168.109.100 | CentOS 7, infrastructure server | 2 CPUs, 2 GB RAM, 50 GB disk |
node1 | 192.168.109.101 | CentOS 7, infrastructure server | 2 CPUs, 2 GB RAM, 50 GB disk |
node2 | 192.168.109.102 | CentOS 7, infrastructure server | 2 CPUs, 2 GB RAM, 50 GB disk |
Setting Up the Base Environment
- Configure the NIC IP and yum repositories (a sample static IP configuration is sketched below)
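The guide does not show the NIC configuration itself. Below is a minimal sketch of a static IP setup for the master, assuming the interface is named ens33 and that the gateway and DNS sit at 192.168.109.2 (both are assumptions; adjust them to your virtual network, and use .101/.102 on the two nodes):

# Hypothetical static IP config for the master; interface name, gateway and DNS are assumptions
cat > /etc/sysconfig/network-scripts/ifcfg-ens33 <<'EOF'
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.109.100
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=192.168.109.2
EOF
systemctl restart network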
- Check the OS version
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@node2 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
- Configure hostname resolution
- master
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.109.100 master
192.168.109.101 node1
192.168.109.102 node2
- node1
[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.109.100 master
192.168.109.101 node1
192.168.109.102 node2
- node2
[root@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.109.100 master
192.168.109.101 node1
192.168.109.102 node2
- Configure time synchronization
Start the chronyd service and enable it at boot (a quick time-sync check is sketched after the node2 commands).
- master
[root@master ~]# systemctl start chronyd
[root@master ~]# systemctl enable chronyd
- node1
[root@node1 ~]# systemctl start chronyd
[root@node1 ~]# systemctl enable chronyd
- node2
[root@node2 ~]# systemctl start chronyd
[root@node2 ~]# systemctl enable chronyd
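The guide does not verify the result; a quick sanity check, run the same way on every node, could look like this:

# Confirm chronyd is tracking a time source, then compare the clocks across the three machines
chronyc sources -v
date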
- Disable the iptables and firewalld services
1. Stop firewalld
- master
[root@master ~]# systemctl stop firewalld.service
[root@master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since 三 2024-05-01 14:52:19 CST; 51s ago
     Docs: man:firewalld(1)
  Process: 717 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 717 (code=exited, status=0/SUCCESS)

5月 01 13:51:29 master systemd[1]: Starting firewalld - dynamic fi....
5月 01 13:51:30 master systemd[1]: Started firewalld - dynamic fir....
5月 01 14:52:16 master systemd[1]: Stopping firewalld - dynamic fi....
5月 01 14:52:19 master systemd[1]: Stopped firewalld - dynamic fir....
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
- node1
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since 三 2024-05-01 14:52:20 CST; 51s ago
     Docs: man:firewalld(1)
  Process: 724 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 724 (code=exited, status=0/SUCCESS)

5月 01 13:51:37 master systemd[1]: Starting firewalld - dynamic firewall daemon...
5月 01 13:51:37 master systemd[1]: Started firewalld - dynamic firewall daemon.
5月 01 14:52:16 node1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
5月 01 14:52:20 node1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@node1 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
- node2
[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since 三 2024-05-01 14:52:19 CST; 52s ago
     Docs: man:firewalld(1)
  Process: 724 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 724 (code=exited, status=0/SUCCESS)

5月 01 13:51:40 master systemd[1]: Starting firewalld - dynamic firewall daemon...
5月 01 13:51:41 master systemd[1]: Started firewalld - dynamic firewall daemon.
5月 01 14:52:16 node2 systemd[1]: Stopping firewalld - dynamic firewall daemon...
5月 01 14:52:19 node2 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@node2 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
2. Stop iptables
The VMs in this setup do not have the iptables service installed, so there is nothing to stop. For example:
[root@master ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
- Disable SELinux (a note on turning it off immediately follows the node2 command)
- master
[root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- node1
[root@node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- node2
[root@node2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
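The sed edit only takes effect after a reboot (which the guide does later). To also stop SELinux from enforcing in the currently running session, setenforce can be used:

# Put SELinux into permissive mode for the running system (lasts until reboot)
setenforce 0
getenforce    # should now report Permissive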
- Disable the swap partition
Kubernetes does not support swap, so the swap partition must be disabled.
Simply comment out the swap entry in /etc/fstab (a sketch for turning swap off immediately follows the node2 listing).
- master
[root@master ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed May 1 21:37:04 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                         xfs    defaults    0 0
UUID=d97390d4-c671-411c-9371-015002de02a5 /boot   xfs    defaults    0 0
#/dev/mapper/centos-swap swap                     swap   defaults    0 0
[root@master ~]#
- node1
[root@node1 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed May 1 21:37:04 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                         xfs    defaults    0 0
UUID=d97390d4-c671-411c-9371-015002de02a5 /boot   xfs    defaults    0 0
#/dev/mapper/centos-swap swap                     swap   defaults    0 0
- node2
[root@node2 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed May 1 21:37:04 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                         xfs    defaults    0 0
UUID=d97390d4-c671-411c-9371-015002de02a5 /boot   xfs    defaults    0 0
#/dev/mapper/centos-swap swap                     swap   defaults    0 0
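Commenting out the fstab entry only stops swap from being mounted at the next boot. To release swap right away (optional here, since a reboot follows anyway), swapoff can be run:

# Disable all active swap devices immediately
swapoff -a
free -m    # the Swap line should now show 0 total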
- Adjust Linux kernel parameters to enable bridge filtering and IP forwarding
- master
[root@master ~]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@master ~]#
- node1
[root@node1 ~]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
- node2
[root@node2 ~]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
- Load the bridge-filter module and apply the new kernel parameters (the module must be loaded first, otherwise the net.bridge.* keys do not exist)
- master
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
- node1
[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
- node2
[root@node2 ~]# modprobe br_netfilter
[root@node2 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
- Verify that the module loaded successfully with the following command
lsmod | grep br_netfilter
- Configure IPVS support
In Kubernetes, Services have two proxy modes: one based on iptables and one based on IPVS. IPVS offers noticeably better performance, but using it requires loading the IPVS kernel modules manually.
- Install ipset and ipvsadm (unless a specific host is stated, run the same commands on all three machines)
[root@master ~]# yum -y install ipset ipvsadm
- Write the modules that need to be loaded into a script file
[root@master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
- Make the script executable
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
- Run the script
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
- Verify that the modules loaded successfully
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141432  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133053  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
- Reboot the virtual machines
reboot
- After the reboot, check swap and SELinux
[root@master ~]# free -m
total used free shared buff/cache available
Mem: 1982 100 1752 9 130 1726
Swap: 0 0 0
[root@master ~]# getenforce
Disabled
[root@master ~]#
Environment Setup (Part 2)
Install Docker
- Switch to a mirror repository
[root@master yum.repos.d]# wget -O docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- List the Docker versions available in the current repository
yum list docker-ce --showduplicates
- Install a specific Docker version
Note: when installing a specific Docker version you must pass --setopt=obsoletes=0, otherwise yum will automatically install a newer version.
[root@master yum.repos.d]# yum install --setopt=obsoletes=0 docker-ce-18.06.2.ce-3.el7 -y
- Add a configuration file and enable Docker at boot
By default Docker uses cgroupfs as its cgroup driver, while Kubernetes recommends systemd instead, so the Docker configuration file has to be adjusted (a sample daemon.json is sketched after this transcript).
[root@master ~]# mkdir /etc/docker
[root@master ~]# vim /etc/docker/daemon.json
[root@master ~]# systemctl restart docker.service
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master ~]# docker --version
Docker version 18.06.2-ce, build 6d37f41
[root@master ~]#
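The contents of /etc/docker/daemon.json are not shown above. A minimal sketch that only switches the cgroup driver to systemd (any registry mirror would be an extra, optional assumption) looks like this:

# Write a minimal daemon.json that sets the systemd cgroup driver, then restart Docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service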
Install Kubernetes Components
- Configure the Kubernetes yum repository
[root@master yum.repos.d]# cat k8s.repo
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Note: every line of the yum repo file must start at the beginning of the line, with no leading whitespace.
- Install kubelet, kubeadm, and kubectl
[root@master yum.repos.d]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
- Configure the kubelet cgroup driver
[root@master yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
(KUBE_PROXY_MODE manually switches kube-proxy to the IPVS proxy mode.)
- Enable kubelet at boot
[root@master yum.repos.d]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master yum.repos.d]#
(kubelet does not need to be started manually; it starts automatically when the cluster is initialized later.)
Prepare the Cluster Images
# Before setting up the cluster, the required images must be prepared in advance. The list of required images can be shown with `kubeadm config images list`.
# Download the images
# These images live in the k8s.gcr.io registry, which may be unreachable due to network restrictions. Pull them from the Aliyun registry instead, re-tag them with the k8s.gcr.io names, and then delete the Aliyun-tagged images.
images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
for imagename in ${images[@]}; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imagename
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imagename k8s.gcr.io/$imagename
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imagename
done
docker images
- Running docker images should now show the seven images
[root@master yum.repos.d]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.4 6dec7cfde1e5 4 years ago 116MB
k8s.gcr.io/kube-apiserver v1.17.4 2e1ba57fe95a 4 years ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.4 7f997fcf3e94 4 years ago 161MB
k8s.gcr.io/kube-scheduler v1.17.4 5db16c1c7aff 4 years ago 94.4MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 4 years ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 4 years ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 6 years ago 742kB
[root@master yum.repos.d]#
The steps from here on are performed on the master node only.
Cluster Initialization
- Initialize the cluster; this only needs to be run on the master
kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.109.100
Seeing "successful" in the output means the initialization succeeded.
If the initialization reports errors and has to be run again, first delete the files generated by the previous attempt and stop the related services, then re-run kubeadm init.
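The guide does not show node1 and node2 joining the cluster. A successful kubeadm init prints a kubeadm join command containing a freshly generated token and CA certificate hash; run that exact command on both worker nodes. The lines below are only a sketch with placeholder values:

# Run on node1 and node2, substituting the token and hash printed by kubeadm init (placeholders here)
kubeadm join 192.168.109.100:6443 \
  --token <token-from-init-output> \
  --discovery-token-ca-cert-hash sha256:<hash-from-init-output>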
Configure kubectl and install the network plugin (flannel)
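Before kubectl can talk to the new cluster, the admin kubeconfig needs to be copied into place. These are the standard commands printed at the end of kubeadm init (reproduced here as a sketch, since the guide omits them):

# Make the cluster admin config available to kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config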
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
(To remove flannel later: kubectl delete -f kube-flannel.yml)
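Once flannel is applied, the nodes should reach the Ready state after a short wait; a quick check:

# All three nodes should eventually show STATUS Ready
kubectl get nodes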
- Test the cluster with nginx
kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod
kubectl get service
curl http://192.168.109.101:31775
(The NodePort, 31775 here, is assigned automatically; use the port shown by kubectl get service.)