Deploying a production-ready Kubernetes v1.13.5 cluster with kubeadm (no proxy required)

Published by 南喬峰 on 2019-04-11

[TOC]

Environment preparation

  • Host preparation

Role | IP | Spec
-----|----|-----
k8s master | 10.10.40.54 (application network) / 172.16.130.55 (cluster network) | 4 Core / 8 GB
k8s work node | 10.10.40.95 (application network) / 172.16.130.82 (cluster network) | 4 Core / 8 GB

  • System environment

[root@10-10-40-54 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
[root@10-10-40-54 ~]# uname -a
Linux 10-10-40-54 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@10-10-40-54 ~]# free -h
              total used free shared buff/cache available
Mem: 7.6G 141M 7.4G 8.5M 153M 7.3G
Swap: 2.0G 0B 2.0G
[root@10-10-40-54 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 13
Model name: QEMU Virtual CPU version 2.5+
Stepping: 3
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-3
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology pni cx16 x2apic hypervisor lahf_lm
[root@10-10-40-54 ~]#
[root@10-10-40-54 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:7a:e9:1b:ab:00 brd ff:ff:ff:ff:ff:ff
    inet 10.10.40.54/24 brd 10.10.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f87a:e9ff:fe1b:ab00/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:39:4d:ef:80:01 brd ff:ff:ff:ff:ff:ff
    inet 172.16.130.55/24 brd 172.16.130.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:4dff:feef:8001/64 scope link
       valid_lft forever preferred_lft forever
[root@10-10-40-54 ~]#

Pre-installation checks

  • Make sure MAC addresses and product_uuid values do not conflict across nodes

Kubernetes uses this information to tell nodes apart; if two nodes share the same values, the deployment may fail (github.com/kubernetes/…).

Check the MAC address

ip a

Check the product_uuid

cat /sys/class/dmi/id/product_uuid

  • Make sure swap is disabled on all nodes

If swap is left on, kubelet refuses to start.

# Disable swap temporarily
swapoff -a

# Edit /etc/fstab and comment out the swap line
[root@10-10-40-54 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Jun 13 11:42:55 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=2fb6a9ac-835a-49a5-9d0a-7d6c7a1ba349 /boot xfs defaults 0 0
# /dev/mapper/rhel-swap swap swap defaults 0 0
[root@10-10-40-54 ~]#

# After disabling swap, verify with swapon -s; no output means swap is off
[root@10-10-40-54 ~]# swapon -s
[root@10-10-40-54 ~]#
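Instead of editing /etc/fstab by hand, the swap entry can be commented out with a sed one-liner; a small sketch (it only touches uncommented lines with a swap field and keeps a backup), so verify the result before rebooting:

# Comment out any active swap entry in /etc/fstab; a backup is kept as /etc/fstab.bak
sed -ri.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab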

  • Disable SELinux

This allows containers to access the host filesystem.

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check the SELinux mode; it should be Permissive (violations only produce warnings)
[root@10-10-40-54 ~]# getenforce
Permissive
[root@10-10-40-54 ~]#

  • Enable bridge-nf-call-iptables

This makes packets crossing the Linux bridge go through iptables rules first. Reference: news.ycombinator.com/item?id=164…

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
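Once the br_netfilter module from the next step is loaded, the new settings can be verified; both values should print as 1:

# These two keys only exist after br_netfilter is loaded
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables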

  • Load the br_netfilter kernel module
modprobe br_netfilter

# Check whether the module is loaded; output like the following means it is
[root@10-10-40-54 ~]# lsmod | grep br_netfilter
br_netfilter 22209 0
bridge 136173 1 br_netfilter
[root@10-10-40-54 ~]#
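modprobe only loads the module into the running kernel; to have it loaded automatically after a reboot, a modules-load.d entry can be added (a sketch using the standard systemd mechanism):

# Have systemd-modules-load load br_netfilter at every boot
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF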

Installation and deployment

Install the container runtime (Docker)

Run the following on all nodes

# Install Docker CE
## Set up the repository
### Install required packages.
yum install -y yum-utils device-mapper-persistent-data lvm2 git

### Add Docker repository.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum install -y docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker

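After the restart, it is worth confirming that the installed version and cgroup driver match what kubelet will expect (the daemon.json above sets cgroupfs):

# Expect the installed version and "Cgroup Driver: cgroupfs"
docker info | grep -iE 'server version|cgroup driver'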

Install kubeadm / kubelet / kubectl

  • Add the Alibaba Cloud Kubernetes YUM repository

The officially recommended Google YUM repository is not reachable without a proxy.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum -y install epel-release
yum clean all
yum makecache

  • Install kubeadm / kubelet / kubectl

Available versions can be listed with yum search kubeadm --show-duplicates; here we install v1.13.5 directly.

yum install -y kubelet-1.13.5-0.x86_64 kubectl-1.13.5-0.x86_64 kubeadm-1.13.5-0.x86_64

# Verify the kubeadm installation
[root@10-10-40-54 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:24:33Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
[root@10-10-40-54 ~]#


  • Enable kubelet

Because the cluster has not been initialized yet, kubelet will fail to start and keep retrying; this is expected and can be ignored.

systemctl enable --now kubelet
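Until kubeadm init runs, the restart loop can be observed (and safely ignored); the unit log typically shows kubelet failing to load /var/lib/kubelet/config.yaml, which kubeadm creates later:

# Inspect the kubelet unit and its most recent log lines
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20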

Pull the Docker images for the Kubernetes components

Again using a mirror, so no proxy is needed.

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
VERSION=v1.13.5

## Pull the images
docker pull ${REGISTRY}/kube-apiserver-amd64:${VERSION}
docker pull ${REGISTRY}/kube-controller-manager-amd64:${VERSION}
docker pull ${REGISTRY}/kube-scheduler-amd64:${VERSION}
docker pull ${REGISTRY}/kube-proxy-amd64:${VERSION}
docker pull ${REGISTRY}/etcd-amd64:3.2.18
docker pull ${REGISTRY}/pause-amd64:3.1
docker pull ${REGISTRY}/coredns:1.1.3
docker pull ${REGISTRY}/pause:3.1

## Re-tag the images to the k8s.gcr.io names
docker tag ${REGISTRY}/kube-apiserver-amd64:${VERSION} k8s.gcr.io/kube-apiserver-amd64:${VERSION}
docker tag ${REGISTRY}/kube-scheduler-amd64:${VERSION} k8s.gcr.io/kube-scheduler-amd64:${VERSION}
docker tag ${REGISTRY}/kube-controller-manager-amd64:${VERSION} k8s.gcr.io/kube-controller-manager-amd64:${VERSION}
docker tag ${REGISTRY}/kube-proxy-amd64:${VERSION} k8s.gcr.io/kube-proxy-amd64:${VERSION}
docker tag ${REGISTRY}/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag ${REGISTRY}/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag ${REGISTRY}/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag ${REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1

So how do you know which images are needed in the first place?

 kubeadm config images list
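Combining that list with the mirror, the pull-and-retag steps above can also be scripted; a rough sketch, assuming the Alibaba Cloud mirror publishes each image under the same name that kubeadm lists:

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers

# Pull each required image from the mirror, then re-tag it to the k8s.gcr.io name kubeadm expects
for IMAGE in $(kubeadm config images list --kubernetes-version v1.13.5 2>/dev/null); do
    MIRRORED="${REGISTRY}/${IMAGE##*/}"   # swap the k8s.gcr.io prefix for the mirror
    docker pull "${MIRRORED}"
    docker tag "${MIRRORED}" "${IMAGE}"
done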

Kubernetes master initialization configuration

  • Adjust the cluster initialization configuration and save it as kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    advertise-address: 172.16.130.55  # IP the API server advertises; with multiple NICs the default is the IP of the NIC on the default gateway
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""  # for an HA deployment, set this to the load balancer endpoint
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-client-urls: https://172.16.130.55:2379
      advertise-client-urls: https://172.16.130.55:2379
      listen-peer-urls: https://172.16.130.55:2380
      initial-advertise-peer-urls: https://172.16.130.55:2380
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # must point at the mirror registry, otherwise images are pulled from k8s.gcr.io by default
kind: ClusterConfiguration
kubernetesVersion: v1.13.5  # the Kubernetes version to deploy
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # Pod network CIDR; must match the Flannel network deployed later
  serviceSubnet: 10.96.0.0/12
scheduler: {}

You can also dump a default configuration first and edit it from there:

kubeadm config print init-defaults > kubeadm-config.yaml
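Before running the real initialization, the configuration can be exercised with kubeadm's --dry-run flag, which prints what would be done without changing the host (a quick sanity check, not required):

kubeadm init --config kubeadm-config.yaml --dry-run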

Kubernetes master initialization (kubeadm init)

Now for the decisive step:

kubeadm init --config kubeadm-config.yaml

Sample output

[root@10-10-40-54 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.13.5
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-54" could not be reached
 [WARNING Hostname]: hostname "10-10-40-54": lookup 10-10-40-54: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [10-10-40-54 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.40.54]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.002435 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-54" as an annotation
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xkdkxz.7om906dh5efmkujl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.40.54:6443 --token xkdkxz.7om906dh5efmkujl --discovery-token-ca-cert-hash sha256:52335ece8b859d761e569e0d84a1801b503c018c6e1bd08a5bb7f39cd49ca056

[root@10-10-40-54 ~]#

If init succeeds, the output ends with the kubeadm join command used to add nodes.
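The bootstrap token printed there expires after 24 hours by default; if it has expired by the time a node joins, a fresh join command can be generated on the master:

# Create a new bootstrap token and print the complete join command
kubeadm token create --print-join-command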

  • Set up the kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Check the deployment status
[root@10-10-40-54 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-89cc84847-cmrkw 0/1 Pending 0 108s <none> <none> <none> <none>
kube-system coredns-89cc84847-k2nqs 0/1 Pending 0 108s <none> <none> <none> <none>
kube-system etcd-10-10-40-54 1/1 Running 0 45s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-apiserver-10-10-40-54 1/1 Running 0 54s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-controller-manager-10-10-40-54 1/1 Running 0 51s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-proxy-jbqkc 1/1 Running 0 108s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-scheduler-10-10-40-54 1/1 Running 0 45s 10.10.40.54 10-10-40-54 <none> <none>
[root@10-10-40-54 ~]#

At this point every pod except coredns should be Running; coredns stays Pending because the network plugin has not been deployed yet.

Add a work node (kubeadm join)

Run kubeadm join on the work node:

[root@10-10-40-95 ~]# kubeadm join 10.10.40.54:6443 --token hog5db.zh5p9z4xi5kvf1g7 --discovery-token-ca-cert-hash sha256:c9c8d056467c345651d1cb6d23fac08beb4ed72ea37e923cd826af12314b9ff0
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-95" could not be reached
 [WARNING Hostname]: hostname "10-10-40-95": lookup 10-10-40-95: no such host
[discovery] Trying to connect to API Server "10.10.40.54:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.40.54:6443"
[discovery] Requesting info from "https://10.10.40.54:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.40.54:6443"
[discovery] Successfully established connection with API Server "10.10.40.54:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-95" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@10-10-40-95 ~]#

  • View the joined nodes
[root@10-10-40-54 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
10-10-40-54 NotReady master 7h51m v1.13.5 10.10.40.54 <none> Red Hat Enterprise Linux Server 7.4 (Maipo) 3.10.0-693.el7.x86_64 docker://18.6.2
10-10-40-95 NotReady <none> 7h48m v1.13.5 10.10.40.95 <none> Red Hat Enterprise Linux Server 7.4 (Maipo) 3.10.0-693.el7.x86_64 docker://18.6.2
[root@10-10-40-54 ~]#

  • Deploy the network plugin (Flannel) — Flannel configuration

You can also download it with wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml and adjust it yourself. With multiple NICs, the key parameter is the interface Flannel binds to (see the --iface argument below).

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1 # the network interface Flannel uses for inter-node traffic; adjust to your cluster NIC
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg


Deploy the Flannel network plugin:

[root@10-10-40-54 ~]# kubectl create -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
[root@10-10-40-54 ~]#

  • Check the Flannel pod status; at this point the installation is complete
[root@10-10-40-54 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-89cc84847-cmrkw 1/1 Running 0 8h 10.244.1.2 10-10-40-95 <none> <none>
kube-system coredns-89cc84847-k2nqs 1/1 Running 0 8h 10.244.1.4 10-10-40-95 <none> <none>
kube-system etcd-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-apiserver-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-controller-manager-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-flannel-ds-amd64-69fjw 1/1 Running 0 2m37s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-flannel-ds-amd64-8789j 1/1 Running 0 2m37s 10.10.40.95 10-10-40-95 <none> <none>
kube-system kube-proxy-jbqkc 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-proxy-rv7hs 1/1 Running 0 8h 10.10.40.95 10-10-40-95 <none> <none>
kube-system kube-scheduler-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
[root@10-10-40-54 ~]#

  • Create a busybox pod to test cluster availability; save the following spec as busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Create the busybox pod:

[root@10-10-40-54 ~]# kubectl create -f busybox.yaml
pod/busybox created
[root@10-10-40-54 ~]# kubectl get pod -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 0/1 ContainerCreating 0 17s <none> 10-10-40-95 <none> <none>
busybox 1/1 Running 0 17s 10.244.1.5 10-10-40-95 <none> <none>

The pod is running normally, so the Kubernetes cluster is up.
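Beyond scheduling, a quick DNS lookup from inside the pod confirms that CoreDNS and the service network work (assuming the busybox image ships nslookup, as the official image does):

# Resolve the kubernetes service through CoreDNS; it should return the cluster IP 10.96.0.1
kubectl exec -it busybox -- nslookup kubernetes.default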

FAQ

What if kubeadm init fails and needs to be re-run?

Run kubeadm reset to clean up first, then run kubeadm init again.

[root@10-10-40-54 ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

[root@10-10-40-54 ~]#
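As the output notes, reset does not clean up everything; once Flannel has been deployed, its leftover network interfaces can also be removed before re-running init (a small sketch, only needed if the interfaces exist):

# Remove leftover CNI/Flannel interfaces created by the previous deployment
ip link delete cni0 2>/dev/null
ip link delete flannel.1 2>/dev/null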

Closing

To be continued; comments and corrections are welcome.
