2019 up-to-date k8s cluster setup tutorial (CentOS k8s setup)

Posted by johnson1471332296000 on 2019-04-18

2019-k8s-centos

2019 up-to-date k8s cluster setup tutorial (CentOS k8s setup). Everything online is either outdated or incomplete; most documents date from 2016 or 2017. I followed them N times, uninstalling and reinstalling over and over, until my CentOS system was a mess of conflicting, interfering configurations and kubeadm init kept failing. Having gotten nowhere, I finally reinstalled CentOS and started from scratch.

Many thanks to @丿陌路灬再見ミ for the technical support and patient guidance.

  • First, clone my GitHub repo locally:
git clone https://github.com/qxl1231/2019-k8s-centos.git
cd 2019-k8s-centos
  • After setting up the master, you still need to install the dashboard; see the separate dashboard md document.

Deploying a k8s cluster on CentOS 7

Install docker-ce

Official documentation

Docker must be installed and configured on both the Master and the Node machines.

# Remove any old Docker packages
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# Install dependencies
sudo yum update -y && sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

# Add the official yum repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install docker
sudo yum install docker-ce docker-ce-cli containerd.io

# Check the docker version
docker --version

# Enable and start on boot
systemctl enable --now docker

Or use the convenience script for a one-step install:

curl -fsSL "https://get.docker.com/" | sh
systemctl enable --now docker

Change the Docker cgroup driver to systemd so that it matches what k8s expects:

# set the cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker  # restart so the new config takes effect
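
To confirm the change took effect, check the drivers Docker now reports:

docker info | grep -iE 'cgroup|storage'
# Expected output includes:
#   Storage Driver: overlay2
#   Cgroup Driver: systemd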

Install kubelet, kubeadm, and kubectl

Official documentation

kubelet, kubeadm, and kubectl must be installed on both the master and node machines.

Installing Kubernetes requires packages such as kubelet and kubeadm, but the yum source the official k8s site points to is packages.cloud.google.com, which is unreachable from mainland China. We can use the Alibaba Cloud yum mirror instead.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install kubelet kubeadm kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet  # enable and start kubelet on boot

# CentOS 7 users also need to let bridged traffic pass through iptables:
yum install -y bridge-utils.x86_64
modprobe br_netfilter  # load the br_netfilter module; check loaded modules with lsmod
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # reload all sysctl configuration files

systemctl disable --now firewalld  # disable the firewall

# k8s requires swap to be off  (qxl)
swapoff -a && sysctl -w vm.swappiness=0  # turn off swap
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # comment out the swap mount so it stays off after reboot
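
Before moving on, a quick sanity check that all three changes above (module, sysctl, swap) actually took effect:

lsmod | grep br_netfilter                  # the module is loaded
sysctl net.bridge.bridge-nf-call-iptables  # should print "... = 1"
free -m                                    # the Swap row should be all zeros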

If you are using virtual machines, you can clone the VM once the steps above are done. The lab environment here is 1 Master and 2 Nodes.
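
Note that kubeadm requires every node to have a unique hostname, MAC address, and product_uuid, so after cloning give each VM its own name and verify the uuid differs (the names master/node1/node2 are just this tutorial's convention):

hostnamectl set-hostname node1        # run on each clone with its own name
cat /sys/class/dmi/id/product_uuid    # must differ between machines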

Preparing to create the cluster

# On the Master:
kubeadm config images pull  # pull the images the cluster needs; requires access to Google's registry

# --- If you can't reach Google, try the following ---
kubeadm config images list  # list the required images
# (your list may differ from the one below; go by your actual output)
# pull the equivalents from a domestic mirror first, by image name
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1  # this one is not available under mirrorgooglecontainers

# retag the images to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1


# With all the required images downloaded ahead of time, init won't try to pull again and fail because Google's registry is unreachable

# remove the original mirror tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1

# --- If you can't reach Google, the same workaround applies on the nodes ---

# On each Node:
# pull the required images from the domestic mirror first
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1


# retag the images
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

# remove the original mirror tags
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
# Without these images preloaded, the node cannot join the cluster properly
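
The pull/tag/rmi sequence above is mechanical, so a short loop can do it in one pass. A minimal sketch, assuming the exact image list and versions shown above (adjust to your own kubeadm config images list output):

#!/bin/bash
# pull each image from the domestic mirror, retag it as k8s.gcr.io, drop the mirror tag
images=(
  kube-apiserver:v1.14.1
  kube-controller-manager:v1.14.1
  kube-scheduler:v1.14.1
  kube-proxy:v1.14.1
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done
# coredns lives under its own namespace, so handle it separately
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1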

Creating the cluster with kubeadm

# During my first init, /etc/kubernetes/admin.conf already existed as an empty file
# (I had created it by hand), which makes init fail with:
# panic: runtime error: invalid memory address or nil pointer dereference
ls /etc/kubernetes/admin.conf && mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak  # move it aside as a backup

# Initialize the Master (it needs at least 2 CPU cores). All kinds of errors and
# exceptions show up here... success or failure hinges on this step
kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 # --kubernetes-version 1.14.1
# --apiserver-advertise-address  the interface/address used to talk to the other nodes
# --pod-network-cidr             the pod network subnet; the flannel network requires this CIDR
  • Running init, the program checks the environment for consistency; fix any problems it reports based on the actual error messages.
  • The program fetches https://dl.k8s.io/release/stable-1.txt to pick the latest k8s version; reaching that URL requires getting past the GFW. If it can't be reached, kubeadm falls back to the version of the kubeadm client itself (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version, as in the sketch below.
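
If dl.k8s.io is unreachable, the safest route is to pin the version to your client version explicitly; a sketch using this tutorial's master address:

kubeadm version -o short  # prints the client version, e.g. v1.14.1
kubeadm init --apiserver-advertise-address 192.168.200.25 \
    --pod-network-cidr 10.244.0.0/16 \
    --kubernetes-version v1.14.1  # no lookup of stable-1.txt needed
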
# Init output:
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The section below is different for every master you create; be sure to save your own copy -qxl
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16 

Set up access for a regular user

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
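
If you are operating as root, you can skip the copy and point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf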

Apply the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
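
Flannel runs as a DaemonSet in kube-system; after applying the manifest you can watch its pods come up and the nodes turn Ready (app=flannel is the label used by the coreos manifest at the time of writing):

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes  # nodes flip from NotReady to Ready once the CNI is up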

Join the nodes to the cluster

# node1:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252 
# node2:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252 

Output log:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
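
The bootstrap token in the join command expires after 24 hours by default; if you add a node later, generate a fresh join command on the master:

kubeadm token create --print-join-command
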
# On the master:
kubectl get pods --all-namespaces
# --- output ---
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-rn8kd          1/1     Running   0          170m
kube-system   coredns-fb8b8dccf-slwr4          1/1     Running   0          170m
kube-system   etcd-master                      1/1     Running   0          169m
kube-system   kube-apiserver-master            1/1     Running   0          169m
kube-system   kube-controller-manager-master   1/1     Running   0          169m
kube-system   kube-flannel-ds-amd64-l8c7c      1/1     Running   0          130m
kube-system   kube-flannel-ds-amd64-lcmxw      1/1     Running   1          117m
kube-system   kube-flannel-ds-amd64-pqnln      1/1     Running   1          72m
kube-system   kube-proxy-4kcqb                 1/1     Running   0          170m
kube-system   kube-proxy-jcqjd                 1/1     Running   0          72m
kube-system   kube-proxy-vm9sj                 1/1     Running   0          117m
kube-system   kube-scheduler-master            1/1     Running   0          169m
# --- output ---


kubectl get nodes
# --- output ---
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   171m   v1.14.1
node1    Ready    <none>   118m   v1.14.1
node2    Ready    <none>   74m    v1.14.1
# --- output ---

Troubleshooting

journalctl -f             # follow all current logs
journalctl -f -u kubelet  # follow only the kubelet service logs
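
If an init or join leaves a machine in a bad state (the situation described at the top of this article), you can tear down kubeadm's changes and retry instead of reinstalling the OS. A sketch; note that kubeadm reset does not flush iptables/IPVS rules itself and reminds you to do so manually:

kubeadm reset  # undo what kubeadm init/join did on this node
# clean up the rules that reset leaves behind:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C     # only needed if kube-proxy runs in IPVS mode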
