k8s notes 6 -- Quickly deploying a k8s cluster with kubeadm, v1.19.4
I recently started studying k8s for work, went through several introductory tutorials, and built clusters a number of times. I had long meant to write a simple, beginner-friendly guide, both as a reference for myself later and as a worked example for anyone who needs one. For various reasons I never got started; this Saturday night I finally had some spare time, began building the cluster at 11 pm, then tested, wrote it up, and fixed the rough spots. It is now 4 am and it's done, and once again I feel that weight lift off!
1 Introduction
- Node description
  This setup uses 4 nodes in total: 1 master + 3 workers.

  role    ip             cpu      memory
  master  192.168.2.131  2 cores  3Gi
  node01  192.168.2.132  1 core   2Gi
  node02  192.168.2.133  1 core   2Gi
  node03  192.168.2.134  1 core   2Gi
- Deployment goals
  - Install Docker and kubeadm on all nodes
  - Deploy the Kubernetes master
  - Deploy a container network plugin
  - Deploy the Kubernetes nodes and join them to the cluster
  - Deploy the Dashboard web UI to visualize Kubernetes resources
  - Install metrics-server to monitor node and pod cpu/memory usage
  - Install Lens, collect metrics via Prometheus, and display them in Lens
- Resource notes
  This post provides a nearly complete, step-by-step build of a k8s cluster on an almost-latest kubeadm, for anyone who wants to learn from it.
  In addition, the system image, container images, and component yaml files used in this build are packaged and uploaded to Baidu Netdisk. If your network is slow, download the bundle, skip the docker and kubeadm installation, and start the cluster directly from section 2.3.
  For adjusting the VM network in the bundle, see the author's post Windows小技巧8–VMware workstation虛擬機器網路通訊; the current image uses NAT mode with a static IP.
  Resource link: kubeadm 配套資源 https://pan.baidu.com/s/1_JGnMv83yO6mDXXmO9Y3ng  extraction code: a3hd

  resource        description
  k8s-Userver.7z  ubuntu 1604 VM image, with kubelet, kubeadm, kubectl, and docker preinstalled
  k8s-images.7z   all images needed to start v1.19.4, plus a few test images such as nginx, stress, busybox
  k8s-yaml.7z     yaml files for the flannel network component, dashboard, and metrics-server
2 Building the cluster
2.1 Install the base software
The base software consists of docker, kubelet, kubeadm, and kubectl. All of the following commands are run as root, so sudo is omitted.
- Update the sources and install prerequisites
apt-get update && apt-get install -y apt-transport-https curl
It is recommended to switch the system sources to the Tsinghua mirror here, in case some packages download slowly;
see the Tsinghua Ubuntu mirror help page: https://mirror.tuna.tsinghua.edu.cn/help/ubuntu/
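For Ubuntu 16.04 (xenial), a minimal /etc/apt/sources.list pointing at the Tsinghua mirror might look like this (a sketch; the help page above is authoritative):
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse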
- Install docker
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce
The docker version should not be too far from the kubeadm version: avoid pairing an early docker with a recent kubeadm, or the newest docker with an early kubeadm. Here the newest docker is paired with kubeadm 1.19.4 (nearly the latest kubeadm), with no conflicts.
For common docker installation and usage issues, see the author's post: docker筆記7–Docker常見操作
- Install the kubeadm packages
apt install -y kubelet=1.19.4-00 kubectl=1.19.4-00 kubeadm=1.19.4-00 --allow-unauthenticated
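Note: the install above assumes the Kubernetes apt repository is already configured (it is in the bundled VM image). If starting from a clean system, one common option is the Aliyun mirror (a sketch):
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update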
- Pin the package versions
apt-mark hold kubelet kubeadm kubectl docker-ce
(the docker package installed above is named docker-ce)
- Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
2.2 Common system settings
- Turn off swap
swapoff -a    (temporary)
Persistent: edit /etc/fstab and comment out the swap entry, e.g.
vim /etc/fstab
# UUID=e9a6ffe0-5f53-4e23-99ab-3fedfb3399c1 none swap sw 0 0
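Alternatively, the swap entry can be commented out non-interactively (a one-liner sketch, assuming the fstab entry contains the word "swap"):
sed -i '/ swap / s/^/#/' /etc/fstab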
- Disable the firewall
ufw disable
- Set kernel network parameters
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
- Load the images
To speed up image downloads, the author has packaged the images and uploaded them to Baidu Netdisk. Download and extract the bundle, cd into the images directory, and batch-load them:
for i in $(ls); do docker load -i $i; done
To back up all local images, generate and run a save script:
docker images | tail -n +2 | awk '{img=$1":"$2; file=$1"-"$2".tar.gz"; gsub("/","-",file); print "docker save -o " file, img}' > save_img.sh && bash save_img.sh
- Configure hosts
Append the following to /etc/hosts on every node:
192.168.2.131 kmaster
192.168.2.132 knode01
192.168.2.133 knode02
192.168.2.134 knode03
2.3 Bootstrapping the cluster
- Start the cluster on the master with kubeadm
kubeadm init --apiserver-advertise-address=192.168.2.131 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
Normal output looks like this:
I1220 01:16:58.210095    1831 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.19
W1220 01:16:58.878208    1831 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.6
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "kmaster" could not be reached
    [WARNING Hostname]: hostname "kmaster": lookup kmaster on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.2.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.2.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.2.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.505441 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xxpowp.b8zoas29foe15zuz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.131:6443 --token xxpowp.b8zoas29foe15zuz \
    --discovery-token-ca-cert-hash sha256:9025b306232c82bb8f5a572d0453247d6db95e5c70dea1e90c63a5e8b8309af5
- Deploy the CNI network on the master
kubectl apply -f kube-flannel.yml
Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Once the network is deployed, the master node shows Ready.
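The kube-flannel.yml used above comes from the k8s-yaml.7z bundle. If fetching it yourself, the manifest has typically been published in the flannel repository, e.g. (the URL may have moved since this was written):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml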
- Join the nodes to the cluster
Run the join command on node01-03:
kubeadm join 192.168.2.131:6443 --token xxpowp.b8zoas29foe15zuz \
    --discovery-token-ca-cert-hash sha256:9025b306232c82bb8f5a572d0453247d6db95e5c70dea1e90c63a5e8b8309af5
Output:
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
After all 3 nodes have joined, wait 1-2 minutes and check with kubectl get nodes; all nodes are Ready, and at this point the cluster is up and running.
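If the bootstrap token has expired by the time a node joins (tokens are valid for 24h by default), a fresh join command can be printed on the master:
kubeadm token create --print-join-command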
3 Testing
- Deploy nginx and expose port 80
kubectl create deployment nginx --image=nginx:1.19.6
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
The service shows port 80:30806/TCP; accessing nodeIp:30806 works, as shown in the figure below:
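A quick check from any machine that can reach the nodes (the NodePort 30806 is from the author's run; substitute the port shown by kubectl get svc):
curl http://192.168.2.131:30806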
- Install the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml
Output:
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Note: in the kubernetes-dashboard Service you need to add nodePort: 30001 under ports (another free port also works) and set type: NodePort, as in the figure below; a sketch of the edited Service follows.
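For reference, the relevant part of recommended.yaml after the edit would look roughly like this (a sketch, not the full manifest; only the type and nodePort lines are added):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard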
Create an admin service account for the dashboard and fetch its login token:
1) root@kmaster:~# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
2) root@kmaster:~# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
3) root@kmaster:~# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-6hl77
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 3e37a5cd-8f54-4cde-8d6d-cefbc7d92516
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IktpaWNYeW5DSVRLdWx6YmpoUFJsdHVXTzRQV0NGQnlKV1dmN29Xd21zX1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNmhsNzciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2UzN2E1Y2QtOGY1NC00Y2RlLThkNmQtY2VmYmM3ZDkyNTE2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.fTGH-2oHqGcc4yOcjEUgco4aDPF5OyojQWzVt2AnvQLiOWynFtaxIjWMXNqMcfH4fpTE7sT1PrDECFG2iV4J6ZIhtQUMfDD5YqjPSLU_w1qr528HcDFRtNbS6ik-OA-KjmfbNU6bdQ4QEYNPsXC40TBj1kpr9nr-ZZQIuQhD7zXQ5AEQR3S6A9B0TPwl8v1wRn86ge7YD2YZ76JY-knntlnd5wgsbfYpAeQECxZ6uOcN-mJYOWB11WtGmfVCtWC4-N63SlWyvcXEfzl8h5wnxI8yTGdH-LoEjHMx-B9-_yS0yRfZLPDowND9BgoqQvJF7lqyC1PR7M25Z20s2h7Log
The dashboard is reachable at the URLs below. Since the https endpoint uses an untrusted certificate, most browsers refuse to open it directly, so we generate a p12 certificate, import it into the browser, and then log in with the token generated above.
https://192.168.2.131:30001 (https://nodeIp:NodePort)
https://192.168.2.131:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default (via the apiserver's 6443 proxy)
1) Generate kubecfg.crt:
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
2) Generate kubecfg.key:
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
3) Generate kubecfg.p12 from kubecfg.key and kubecfg.crt:
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
The export password here is set to 111111; the browser asks for it when importing the p12 certificate.
Import the certificate in the browser: Chrome -> Settings -> Privacy and security -> Manage certificates -> Your certificates -> Import; import the kubecfg.p12 file just generated, restart the browser, and open the dashboard URL.
Importing the p12 certificate file:
Once the certificate is trusted, the dashboard can be reached via Proceed to 192.168.2.131 (unsafe) (recent macOS handles this differently, so this applies to Chrome on Linux and Windows), as shown below:
Viewing node information:
At this point the dashboard is finally reachable from the browser. Alternatively, with some extra configuration the dashboard can be exposed over plain http, avoiding this whole detour;
once the author has tested the http-port setup, those steps will be added here.
- Install metrics-server
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Add --kubelet-insecure-tls to the args of the metrics-server container, otherwise it errors at startup (a sketch of the edit follows below).
kubectl apply -f components.yaml
Output:
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
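The container spec in components.yaml after the edit would look roughly like this (a sketch; the surrounding args are the defaults shipped at the time and may differ in your download):
    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls    # added: skip kubelet serving-cert verification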
At this point all the pods are up; pod info is shown below:
Once metrics-server is working, kubectl top nodes shows node cpu and memory usage (if it is not installed correctly, the command errors out):
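For example (assuming metrics-server is healthy):
kubectl top nodes       # per-node cpu/memory usage
kubectl top pods -A     # per-pod usage across all namespaces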
- Install Prometheus monitoring via the Lens plugin
Copy the kubeconfig file to your workstation, import it into Lens, and configure the Prometheus settings; for details see the author's post: k8s筆記3–Kubernetes IDE Lens
Master node monitoring:
cpu/memory/disk info for each node:
4 Notes
- References
  1 production-environment/tools/kubeadm/install-kubeadm
  2 使用kubeadm快速部署一個Kubernetes叢集(v1.18)
  3 kubeadm 配套資源 https://pan.baidu.com/s/1_JGnMv83yO6mDXXmO9Y3ng  extraction code: a3hd
- Software notes
  docker version: Docker version 20.10.1
  k8s cluster version: v1.19.4
  Test OS: ubuntu 16.04 server
  Tested on VMware Workstation 16.0.0 Pro; a VM version that is too new may not be supported (the base Userver.vmdk image was created by the author in 2018 with VMware 12.5).