I. Introduction to Kubernetes
1. What is Kubernetes, and what is it for?
Kubernetes (k8s for short) is a container orchestration system developed by Google in the Go language. Container orchestration can be loosely understood as managing containers, somewhat like OpenStack, except that OpenStack manages virtual machines while k8s manages pods (a pod is a shell around one or more containers; you can think of it as logically grouping one or more containers together). Beyond managing the full lifecycle of pods, k8s also provides automated deployment, self-healing, and dynamic scale-out/scale-in.
2. k8s architecture
Tip: k8s uses a master/node model. The master is the management side of the cluster and mainly runs etcd, the apiserver, the scheduler, the controller-manager, and network-related plugins. etcd is a key-value store database that holds all of k8s's configuration data and pod state; if etcd goes down, the whole k8s system becomes unavailable. The apiserver receives client requests and is the single entry point into k8s; every management operation is a request sent to the apiserver. The scheduler schedules user requests: for example, when a user wants to run a pod on the cluster, the scheduler decides which node that pod should run on. The controller-manager manages and monitors pod state; given the scheduler's placement decision, it is responsible for driving the corresponding pods on the corresponding nodes precisely into the scheduled state. Nodes are the worker machines of k8s and are where pods actually run; each node mainly runs docker, kubelet, and kube-proxy. docker runs the containers, kubelet carries out the tasks handed down by the master's controller-manager, and kube-proxy generates the iptables or ipvs rules for pod networking.
3. How k8s works
Tip: the k8s workflow is shown in the figure above. A user first sends a request to the apiserver over HTTPS. On receiving it, the apiserver validates the client certificate, then checks whether the requested resource satisfies the syntax of the corresponding API; if so, the requested resource and its state are stored in etcd. The scheduler, the controller-manager, and the kubelets all watch the apiserver for resource changes. When a valid request comes in, the scheduler first decides, based on the requested resource, which node it should be created on, and sends its placement decision back to the apiserver; the controller-manager then, using the scheduler's decision, sends the apiserver the method for creating the resource. Finally, the kubelet on each node uses the scheduling information to determine whether the resource belongs locally; if it does, it executes the creation method the controller-manager posted to the apiserver and brings the resource up locally. Afterwards the controller-manager keeps watching the resource's health; if the resource becomes unhealthy, it tries to restart or recreate it so that the resource stays in the state we defined.
II. Building a k8s cluster
Deployment notes
There are two ways to deploy a k8s cluster: run the components on each node as containers, or run the components as daemons (system services). The approach differs by environment. For a test environment, a single master, a single etcd instance, and as many nodes as needed will do. In production, etcd must be highly available first: build an etcd HA cluster, typically with 3, 5, or 7 members. The master must also be highly available. Note that the apiserver is stateless and can run as multiple instances fronted by an nginx or haproxy load balancer, whereas the scheduler and the controller-manager may each have only one active instance; any additional instances can only be standbys.
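For the HA master setup described above, a load balancer in front of the apiservers could look roughly like the following. This is a minimal sketch in haproxy syntax, not part of the deployment below; the frontend port, backend names, and master IPs are assumptions based on the host table used later in this article:

```
frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    # Master addresses assumed from the environment table below
    server master01 192.168.0.41:6443 check
    server master02 192.168.0.42:6443 check
    server master03 192.168.0.43:6443 check
```

TCP mode is used because the apiserver terminates its own TLS; the load balancer only needs to pass connections through.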
Deploying k8s for a test environment, running the components as containers
Environment
Hostname         | IP address   | Role   |
master01.k8s.org | 192.168.0.41 | master |
node01.k8s.org   | 192.168.0.44 | node01 |
node02.k8s.org   | 192.168.0.45 | node02 |
node03.k8s.org   | 192.168.0.46 | node03 |
Hostname resolution on all nodes
[root@master01 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.99 time.test.org time-node
192.168.0.41 master01 master01.k8s.org
192.168.0.42 master02 master02.k8s.org
192.168.0.43 master03 master03.k8s.org
192.168.0.44 node01 node01.k8s.org
192.168.0.45 node02 node02.k8s.org
192.168.0.46 node03 node03.k8s.org
[root@master01 ~]#
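Since every node needs the same hosts file, distributing it can be scripted instead of done by hand. A minimal dry-run sketch (the NODES list is taken from the environment table; RUN=echo only prints the plan, so nothing is actually copied until you clear it):

```shell
# Dry-run sketch: plan the scp commands that would push /etc/hosts to each node.
NODES="node01 node02 node03"
PLAN=/tmp/hosts-sync-plan.txt
: > "$PLAN"                                   # truncate the plan file
for n in $NODES; do
    echo "scp /etc/hosts $n:/etc/hosts" >> "$PLAN"
done
cat "$PLAN"                                   # review before executing for real
```

To execute for real, pipe the plan through `sh` (assuming SSH key trust is already in place, as set up earlier).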
Time synchronization on all nodes
[root@master01 ~]# grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server time.test.org iburst
# Serve time even if not synchronized to any NTP server.
[root@master01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^~ time.test.org                 3   6   377    56  -6275m[ -6275m] +/-   20ms
[root@master01 ~]# ssh node01 'chronyc sources'
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^~ time.test.org                 3   6   377     6  -6275m[ -6275m] +/-   20ms
[root@master01 ~]# ssh node02 'chronyc sources'
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^~ time.test.org                 3   6   377    41  -6275m[ -6275m] +/-   20ms
[root@master01 ~]# ssh node03 'chronyc sources'
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^~ time.test.org                 3   6   377    35  -6275m[ -6275m] +/-   20ms
[root@master01 ~]#
Tip: for how to set up the time synchronization server, see https://www.cnblogs.com/qiuhom-1874/p/12079927.html; for SSH key-based trust between hosts, see https://www.cnblogs.com/qiuhom-1874/p/11783371.html;
Disable SELinux on all nodes
[root@master01 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@master01 ~]# getenforce
Disabled
[root@master01 ~]#
Tip: change SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config, then reboot the host or run setenforce 0;
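The edit above can be done with a single sed command rather than by hand. A sketch, run here against a stand-in file under /tmp so it is safe to try anywhere (on a real node the target is /etc/selinux/config):

```shell
# Sketch: flip SELINUX=enforcing to SELINUX=disabled non-interactively.
demo=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$demo"   # stand-in for /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$demo"
grep '^SELINUX=' "$demo"    # → SELINUX=disabled
```

On a real node, follow the sed with `setenforce 0` so the change also takes effect immediately without a reboot.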
Disable the iptables or firewalld service
[root@master01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master01 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 650 packets, 59783 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 503 packets, 65293 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@master01 ~]# ssh node01 'systemctl status firewalld'
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master01 ~]# ssh node02 'systemctl status firewalld'
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master01 ~]# ssh node03 'systemctl status firewalld'
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master01 ~]#
Tip: stop the firewalld service and disable it at boot, and make sure the iptables rule tables contain no rules;
Download the docker repository configuration file on all nodes
[root@master01 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2020-12-08 14:04:29--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 182.140.140.242, 110.188.26.241, 125.64.1.228, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|182.140.140.242|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[======================================================================>] 2,640       --.-K/s   in 0s

2020-12-08 14:04:30 (265 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@master01 ~]# ssh node01 'wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo'
--2020-12-08 14:04:42--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 182.140.139.60, 125.64.1.228, 118.123.2.185, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|182.140.139.60|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

     0K ..                                                    100%  297M=0s

2020-12-08 14:04:42 (297 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@master01 ~]# ssh node02 'wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo'
--2020-12-08 14:04:38--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 182.140.139.59, 118.123.2.183, 182.140.140.238, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|182.140.139.59|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

     0K ..                                                    100%  363M=0s

2020-12-08 14:04:38 (363 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@master01 ~]# ssh node03 'wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo'
--2020-12-08 14:04:43--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 118.123.2.184, 182.140.140.240, 182.140.139.63, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|118.123.2.184|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

     0K ..                                                    100%  218M=0s

2020-12-08 14:04:43 (218 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@master01 ~]#
Create the kubernetes repository configuration file
[root@master01 yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master01 yum.repos.d]#
Copy the kubernetes repository configuration file to each node
[root@master01 yum.repos.d]# scp kubernetes.repo node01:/etc/yum.repos.d/
kubernetes.repo                            100%  276    87.7KB/s   00:00
[root@master01 yum.repos.d]# scp kubernetes.repo node02:/etc/yum.repos.d/
kubernetes.repo                            100%  276    13.6KB/s   00:00
[root@master01 yum.repos.d]# scp kubernetes.repo node03:/etc/yum.repos.d/
kubernetes.repo                            100%  276   104.6KB/s   00:00
[root@master01 yum.repos.d]#
Install docker-ce, kubectl, kubelet, and kubeadm on all nodes
yum install -y docker-ce kubectl kubeadm kubelet
Edit the docker unit file so that `iptables -P FORWARD ACCEPT` is executed after docker starts
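One way to do this edit non-interactively is to append an ExecStartPost line after the ExecStart line in the [Service] section. A sketch, run against a stand-in file under /tmp (on the real nodes the target is /usr/lib/systemd/system/docker.service, followed by `systemctl daemon-reload`); the ExecStart line below is a simplified stand-in, not the real unit's contents:

```shell
# Sketch: insert ExecStartPost so the FORWARD policy is reset to ACCEPT
# every time docker starts (docker itself sets it to DROP by default).
unit=/tmp/docker.service.demo
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$unit"   # stand-in unit file
sed -i '/^ExecStart=/a ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' "$unit"
cat "$unit"
```

After editing the real unit file, run `systemctl daemon-reload` so systemd picks up the change.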
Copy docker.service to each node
[root@master01 ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
docker.service                             100% 1764   220.0KB/s   00:00
[root@master01 ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
docker.service                             100% 1764   359.1KB/s   00:00
[root@master01 ~]# scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
docker.service                             100% 1764   792.3KB/s   00:00
[root@master01 ~]#
Configure a docker registry mirror (accelerator)
[root@master01 ~]# mkdir /etc/docker
[root@master01 ~]# cd /etc/docker
[root@master01 docker]# cat >> daemon.json << EOF
> {
> "registry-mirrors": ["https://cyr1uljt.mirror.aliyuncs.com"]
> }
> EOF
[root@master01 docker]# cat daemon.json
{
"registry-mirrors": ["https://cyr1uljt.mirror.aliyuncs.com"]
}
[root@master01 docker]#
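A variant of this daemon.json worth considering: kubeadm later warns that docker's "cgroupfs" cgroup driver is not the recommended one, and adding an "exec-opts" entry here silences that. A sketch written under /tmp so it is safe to run anywhere (on the real nodes the target is /etc/docker/daemon.json, followed by a docker restart); validated with python3 since a syntax error in daemon.json prevents docker from starting:

```shell
# Sketch: daemon.json with the mirror from above plus the systemd cgroup driver.
cfg=/tmp/daemon.json.demo
cat > "$cfg" << 'EOF'
{
  "registry-mirrors": ["https://cyr1uljt.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json is valid JSON"
```

Note that the kubelet's cgroup driver must match docker's; kubeadm-managed kubelets on this version handle that via the cluster configuration.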
Create /etc/docker on each node and copy the master's daemon.json file to each node
[root@master01 docker]# ssh node01 'mkdir /etc/docker'
[root@master01 docker]# ssh node02 'mkdir /etc/docker'
[root@master01 docker]# ssh node03 'mkdir /etc/docker'
[root@master01 docker]# scp daemon.json node01:/etc/docker/
daemon.json                                100%   65    30.6KB/s   00:00
[root@master01 docker]# scp daemon.json node02:/etc/docker/
daemon.json                                100%   65    52.2KB/s   00:00
[root@master01 docker]# scp daemon.json node03:/etc/docker/
daemon.json                                100%   65    17.8KB/s   00:00
[root@master01 docker]#
Start docker on all nodes and enable it at boot
[root@master01 docker]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 docker]# ssh node01 'systemctl enable docker --now'
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 docker]# ssh node02 'systemctl enable docker --now'
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 docker]# ssh node03 'systemctl enable docker --now'
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 docker]#
Verify that the registry mirror is in effect on each node
[root@master01 docker]# docker info |grep aliyun
  https://cyr1uljt.mirror.aliyuncs.com/
[root@master01 docker]# ssh node01 'docker info |grep aliyun'
  https://cyr1uljt.mirror.aliyuncs.com/
[root@master01 docker]# ssh node02 'docker info |grep aliyun'
  https://cyr1uljt.mirror.aliyuncs.com/
[root@master01 docker]# ssh node03 'docker info |grep aliyun'
  https://cyr1uljt.mirror.aliyuncs.com/
[root@master01 docker]#
Tip: if running docker info on a node shows the mirror address, the mirror has been applied successfully;
Verify that the default policy of the iptables FORWARD chain is ACCEPT on all nodes
[root@master01 docker]# iptables -nvL|grep FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
[root@master01 docker]# ssh node01 'iptables -nvL|grep FORWARD'
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
[root@master01 docker]# ssh node02 'iptables -nvL|grep FORWARD'
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
[root@master01 docker]# ssh node03 'iptables -nvL|grep FORWARD'
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
[root@master01 docker]#
Add a kernel parameter configuration file and copy it to the other nodes
[root@master01 ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/k8s.conf
k8s.conf                                   100%   79    25.5KB/s   00:00
[root@master01 ~]# scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/k8s.conf
k8s.conf                                   100%   79    24.8KB/s   00:00
[root@master01 ~]# scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/k8s.conf
k8s.conf                                   100%   79    20.9KB/s   00:00
[root@master01 ~]#
Apply the kernel parameters
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# ssh node01 'sysctl -p /etc/sysctl.d/k8s.conf'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# ssh node02 'sysctl -p /etc/sysctl.d/k8s.conf'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# ssh node03 'sysctl -p /etc/sysctl.d/k8s.conf'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]#
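One caveat worth noting: the net.bridge.* sysctls only exist once the br_netfilter kernel module is loaded (here it evidently already is, since sysctl -p succeeded), so it is safer to also persist the module. A sketch writing stand-in files under /tmp so it runs anywhere; the real paths are /etc/modules-load.d/ and /etc/sysctl.d/:

```shell
# Sketch: persist br_netfilter and the bridge-nf sysctls together.
mod_conf=/tmp/k8s-modules.conf.demo       # stand-in for /etc/modules-load.d/k8s.conf
sysctl_conf=/tmp/k8s-sysctl.conf.demo     # stand-in for /etc/sysctl.d/k8s.conf
echo br_netfilter > "$mod_conf"
cat > "$sysctl_conf" << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# On a real node: modprobe br_netfilter && sysctl -p "$sysctl_conf"
cat "$mod_conf" "$sysctl_conf"
```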
Configure kubelet to ignore the swap-enabled error
[root@master01 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@master01 ~]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/kubelet
kubelet                                    100%   42    12.2KB/s   00:00
[root@master01 ~]# scp /etc/sysconfig/kubelet node02:/etc/sysconfig/kubelet
kubelet                                    100%   42    16.2KB/s   00:00
[root@master01 ~]# scp /etc/sysconfig/kubelet node03:/etc/sysconfig/kubelet
kubelet                                    100%   42    11.2KB/s   00:00
[root@master01 ~]#
Check the kubelet version
[root@master01 ~]# rpm -q kubelet
kubelet-1.20.0-0.x86_64
[root@master01 ~]#
Initialize the master node
[root@master01 ~]# kubeadm init --pod-network-cidr="10.244.0.0/16" \
> --kubernetes-version="v1.20.0" \
> --image-repository="registry.aliyuncs.com/google_containers" \
> --ignore-preflight-errors=Swap
Tip: when initializing the master, note that if no image repository is specified, it pulls the component images from k8s.gcr.io by default; gcr.io is Google's registry and is generally unreachable from mainland China;
Tip: make sure you see the message saying initialization succeeded before concluding the master is fine; also record the kubeadm join command printed at the end, as it is needed later to join the worker nodes;
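One simple way to keep the join command from getting lost is to save it to a small script right away. A sketch (the token and hash below are placeholders, not real values; substitute the ones kubeadm init actually printed). If the command is lost anyway, running `kubeadm token create --print-join-command` on the master regenerates an equivalent one:

```shell
# Sketch: stash the join command printed by kubeadm init.
# <token> and <hash> are placeholders -- use the real values from your init output.
cat > /tmp/kubeadm-join.sh << 'EOF'
#!/bin/bash
kubeadm join 192.168.0.41:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --ignore-preflight-errors=Swap
EOF
chmod +x /tmp/kubeadm-join.sh
```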
Create a .kube directory under the current user's home directory, and copy the kubectl configuration file into it as config
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Tip: we copy this configuration file because we want to manage the cluster with the kubectl command on the master; the file contains the certificate data and the master's address. By default, kubectl looks for the config file in the current user's home directory, and the cluster can only be managed once kubectl authenticates successfully. If you are working as a regular user rather than root, you also need to change the owner and group of the config file to that user;
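For the non-root case mentioned above, the same steps can be sketched as follows. This runs against stand-in paths under /tmp so it is safe to try anywhere; on the master the source is /etc/kubernetes/admin.conf and USER_HOME is the user's real home directory:

```shell
# Sketch: set up kubectl access for a (hypothetical) non-root user.
USER_HOME=/tmp/k8s-user-home                      # stand-in for the user's home
mkdir -p "$USER_HOME/.kube"
echo 'fake-admin-conf' > /tmp/admin.conf.demo     # stand-in for /etc/kubernetes/admin.conf
cp -f /tmp/admin.conf.demo "$USER_HOME/.kube/config"
# Real step for a normal user (run as root, with the user's real name):
# chown "$(id -u someuser):$(id -g someuser)" "$USER_HOME/.kube/config"
ls "$USER_HOME/.kube"
```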
Install the flannel plugin
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
[root@master01 ~]#
Tip: the error says raw.githubusercontent.com could not be reached; the workaround is to add a matching resolution record to /etc/hosts
Add a resolution entry for raw.githubusercontent.com
[root@master01 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.99 time.test.org time-node
192.168.0.41 master01 master01.k8s.org
192.168.0.42 master02 master02.k8s.org
192.168.0.43 master03 master03.k8s.org
192.168.0.44 node01 node01.k8s.org
192.168.0.45 node02 node02.k8s.org
192.168.0.46 node03 node03.k8s.org
151.101.76.133 raw.githubusercontent.com
[root@master01 ~]#
Run kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml again
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Unable to connect to the server: read tcp 192.168.0.41:46838->151.101.76.133:443: read: connection reset by peer
[root@master01 ~]#
Tip: it still cannot connect. Workaround: open https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml in a browser, copy its contents, and create a flannel.yml file in the current directory
Contents of flannel.yml
[root@master01 ~]# cat flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@master01 ~]#
Install the flannel plugin using the flannel.yml file
[root@master01 ~]# kubectl apply -f flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master01 ~]#
Check the pods running on the master
[root@master01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
coredns-7f89b7bc75-k9gdt                   0/1     Pending    0          30m
coredns-7f89b7bc75-kp855                   0/1     Pending    0          30m
etcd-master01.k8s.org                      1/1     Running    0          30m
kube-apiserver-master01.k8s.org            1/1     Running    0          30m
kube-controller-manager-master01.k8s.org   1/1     Running    0          30m
kube-flannel-ds-zgq92                      0/1     Init:0/1   0          9m45s
kube-proxy-pjv9s                           1/1     Running    0          30m
kube-scheduler-master01.k8s.org            1/1     Running    0          30m
[root@master01 ~]#
Tip: kube-flannel is stuck in initialization because the manifest uses the image quay.io/coreos/flannel:v0.13.1-rc1, and quay.io is extremely slow to reach from mainland China, to the point that the pull often never finishes. Workaround: pull and save the image as a tarball from a network with unrestricted access, then copy it over and load it;
Load the quay.io/coreos/flannel:v0.13.1-rc1 image
[root@master01 ~]# ll
total 64060
-rw------- 1 root root 65586688 Dec  8 15:16 flannel-v0.13.1-rc1.tar
-rw-r--r-- 1 root root     4822 Dec  8 14:57 flannel.yml
[root@master01 ~]# docker load -i flannel-v0.13.1-rc1.tar
70351a035194: Loading layer [==================================================>]  45.68MB/45.68MB
cd38981c5610: Loading layer [==================================================>]  5.12kB/5.12kB
dce2fcdf3a87: Loading layer [==================================================>]  9.216kB/9.216kB
be155d1c86b7: Loading layer [==================================================>]  7.68kB/7.68kB
Loaded image: quay.io/coreos/flannel:v0.13.1-rc1
[root@master01 ~]#
Copy the flannel image tarball to the other nodes and load the image there
[root@master01 ~]# scp flannel-v0.13.1-rc1.tar node01:/root/
flannel-v0.13.1-rc1.tar                    100%   63MB  62.5MB/s   00:01
[root@master01 ~]# scp flannel-v0.13.1-rc1.tar node02:/root/
flannel-v0.13.1-rc1.tar                    100%   63MB  62.4MB/s   00:01
[root@master01 ~]# scp flannel-v0.13.1-rc1.tar node03:/root/
flannel-v0.13.1-rc1.tar                    100%   63MB  62.5MB/s   00:01
[root@master01 ~]# ssh node01 'docker load -i /root/flannel-v0.13.1-rc1.tar'
Loaded image: quay.io/coreos/flannel:v0.13.1-rc1
[root@master01 ~]# ssh node02 'docker load -i /root/flannel-v0.13.1-rc1.tar'
Loaded image: quay.io/coreos/flannel:v0.13.1-rc1
[root@master01 ~]# ssh node03 'docker load -i /root/flannel-v0.13.1-rc1.tar'
Loaded image: quay.io/coreos/flannel:v0.13.1-rc1
[root@master01 ~]#
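The scp + docker load sequence above can be collapsed into one loop. A dry-run sketch (NODES and the tarball name come from the transcript; RUN only builds a plan file, so nothing is copied until you execute the plan yourself):

```shell
# Dry-run sketch: plan the distribution of the flannel image tarball.
NODES="node01 node02 node03"
TARBALL=flannel-v0.13.1-rc1.tar
PLAN=/tmp/flannel-dist-plan.txt
: > "$PLAN"
for n in $NODES; do
    echo "scp $TARBALL $n:/root/"                 >> "$PLAN"
    echo "ssh $n 'docker load -i /root/$TARBALL'" >> "$PLAN"
done
cat "$PLAN"     # review, then run: sh "$PLAN"
```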
Check the pods again
[root@master01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-k9gdt                   1/1     Running   0          39m
coredns-7f89b7bc75-kp855                   1/1     Running   0          39m
etcd-master01.k8s.org                      1/1     Running   0          40m
kube-apiserver-master01.k8s.org            1/1     Running   0          40m
kube-controller-manager-master01.k8s.org   1/1     Running   0          40m
kube-flannel-ds-zgq92                      1/1     Running   0          19m
kube-proxy-pjv9s                           1/1     Running   0          39m
kube-scheduler-master01.k8s.org            1/1     Running   0          40m
[root@master01 ~]#
Tip: kube-flannel is now running normally;
Check node status
[root@master01 ~]# kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
master01.k8s.org   Ready    control-plane,master   41m   v1.20.0
[root@master01 ~]#
Tip: the master node is now in the Ready state, which means the master side has been set up successfully;
Join node01 to the k8s cluster as a worker node
[root@node01 ~]# kubeadm join 192.168.0.41:6443 --token dz6bs3.ohitv535s1fmcuag \
> --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 \
> --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.0. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node01 ~]#
Tip: run the same command on the remaining machines to join them to the k8s cluster as worker nodes;
Check the cluster node status on the master
[root@master01 ~]# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
master01.k8s.org   Ready    control-plane,master   49m     v1.20.0
node01.k8s.org     Ready    <none>                 5m53s   v1.20.0
node02.k8s.org     Ready    <none>                 30s     v1.20.0
node03.k8s.org     Ready    <none>                 25s     v1.20.0
[root@master01 ~]#
Tip: the master and all three worker nodes are in the Ready state;
Test: create an nginx deployment using the nginx:1.14-alpine image and check whether it runs
[root@master01 ~]# kubectl create deploy nginx-dep --image=nginx:1.14-alpine
deployment.apps/nginx-dep created
[root@master01 ~]# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dep-8967df55d-j8zp7   1/1     Running   0          18s
[root@master01 ~]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
nginx-dep-8967df55d-j8zp7   1/1     Running   0          30s   10.244.2.2   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Verify: curl the pod IP and check whether the nginx pod responds
[root@master01 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]#
Tip: curling the pod IP reaches the nginx pod normally; with that, a k8s cluster with one master and three worker nodes is up and running;