Setting Up a Helm and Swift Environment on Kubernetes

Posted by candyleer on 2019-02-23


Background

The previous article gave a quick introduction to setting up and configuring minikube, the single-node Kubernetes development and test environment. I had not touched that setup for quite a while; today I downloaded the latest minikube release and found it would not even start, so I meekly rolled back to version 0.25.2. Since I have recently been deploying and releasing projects with Helm and Swift, this post briefly records the steps.

Concepts

Helm

Helm is a package manager for deploying and managing applications on a Kubernetes cluster. It describes a project with Chart files, hides some of Kubernetes' low-level concepts, and offers features such as version rollback and easy upgrades. Helm is split into a client and a server: the server, Tiller, runs inside the Kubernetes cluster and is responsible for managing, deploying, and upgrading applications in the cluster; the client is the kubernetes-helm command-line tool, which connects to and communicates with Tiller.
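As a rough illustration of what a chart is, helm create scaffolds one; the directory is just chart metadata plus templated Kubernetes manifests (the chart name demo-chart below is arbitrary, and the tree is trimmed to the important entries):

# Scaffold a chart and look at its layout (output trimmed)
$ helm create demo-chart
$ tree demo-chart
demo-chart/
├── Chart.yaml      # chart metadata: name, version, description
├── values.yaml     # default values substituted into the templates
├── charts/         # dependent sub-charts
└── templates/      # templated Kubernetes manifests (Deployment, Service, ...)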

Swift

Calling Helm's interfaces directly from application code is not very convenient, since, for example, Tiller only speaks gRPC. So the community built this proxy, Swift, which wraps Tiller in a RESTful HTTP interface, making it easy for any language to talk to and operate Tiller.
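For instance, listing the releases Tiller manages becomes a plain HTTP GET. The path below is an assumption on my part, following Swift's /tiller/v2/.../json REST mapping of Tiller's calls (the version endpoint used later in this post follows the same pattern), and the host and port are placeholders:

# List releases through the Swift proxy instead of talking gRPC to Tiller directly
$ curl http://<swift-host>:9855/tiller/v2/releases/json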

Setup

Environment used in this post:
kubernetes 1.9.4, minikube 0.25.2, helm 2.12.1, swift 0.10.0. The Swift version must match the Helm version; see https://github.com/appscode/swift.

Starting the minikube environment

minikube start  --kubernetes-version v1.9.4  --extra-config=apiserver.Authorization.Mode=RBAC
  • The RBAC authorization mode must be enabled, otherwise the later installation steps run into all sorts of problems; for example, the default mode has no cluster-admin cluster role, so Swift cannot be installed.
  • If you hit an error like the following when installing Swift with Helm, this is why.
Error: release my-searchlight failed: clusterroles.rbac.authorization.k8s.io "my-searchlight" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:["apiextensions.k8s.io"], Resources:["customresourcedefinitions"], Verbs:["*"]} PolicyRule{APIGroups:["extensions"], Resources:["thirdpartyresources"], Verbs:["*"]} PolicyRule{APIGroups:["monitoring.appscode.com"], Resources:["*"], Verbs:["*"]} PolicyRule{APIGroups:["storage.k8s.io"], Resources:["*"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["componentstatuses"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["componentstatuses"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumes"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["pods/exec"], Verbs:["create"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["create"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["list"]}] user=&{system:serviceaccount:kube-system:tilleruser a61ce1ed-0a6d-11e9-babc-0800274b952b [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
  • k8s 1.9.4 is used because 1.10.0 no longer ships the kube-dns addon by default, which is inconvenient, since Swift relies on DNS to reach Tiller.
  • As in the previous article, configure a proxy so that resources blocked by the firewall can still be pulled (a sketch follows below).
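A minimal sketch of what that proxy configuration can look like, assuming a proxy listening on the host-only network at 192.168.99.1:1087 (a hypothetical address; adjust to your own setup):

# Pass the (hypothetical) proxy to the Docker daemon inside the minikube VM
$ minikube start --kubernetes-version v1.9.4 \
    --extra-config=apiserver.Authorization.Mode=RBAC \
    --docker-env HTTP_PROXY=http://192.168.99.1:1087 \
    --docker-env HTTPS_PROXY=http://192.168.99.1:1087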

Installing the Helm client

brew install kubernetes-helm
helm version 

Installing kube-dns

  • Wait for kube-dns to finish installing
kubectl get deployment -n kube-system --watch
  • Create the kube-dns serviceaccount
 kubectl create serviceaccount kube-dns -n kube-system
  • Associate the serviceaccount with the kube-dns deployment
kubectl patch deployment kube-dns -n kube-system -p '{"spec":{"template":{"spec":{"serviceAccountName":"kube-dns"}}}}'
  • Wait for the DNS pods to come up
kubectl get pods -n kube-system --watch

(screenshot: kube-dns pods)
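Once the pods are ready, the cluster DNS service should also be visible; its ClusterIP (typically 10.96.0.10, which matches the resolver address in the error logs at the end of this post) is what pods use as their nameserver:

# The kube-dns ClusterIP is the resolver that every pod's /etc/resolv.conf points at
$ kubectl get svc kube-dns -n kube-system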

Deploying Tiller

  • Tiller is Helm's server-side component; keep its version in line with the client's.
  • Before deploying Tiller, grant it permission to operate on the cluster; the steps are as follows:
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ helm version --short
Client: v2.12.1+g02a47c7
Server: v2.12.1+g02a47c7
  • Wait for the Tiller deployment to finish; one way to watch it is sketched below.
    (screenshot: tiller pod running)
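As a sketch of that wait: helm init creates a deployment named tiller-deploy in kube-system, so you can block on its rollout:

# Block until the tiller-deploy rollout in kube-system has finished
$ kubectl rollout status deployment/tiller-deploy -n kube-system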

Deploying Swift

  • First add the appscode repository to Helm
$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm repo list
NAME    	URL
stable  	https://kubernetes-charts.storage.googleapis.com
local   	http://127.0.0.1:8879/charts
appscode	https://charts.appscode.com/stable/
  • Search for swift in Helm
$ helm search swift
NAME          	CHART VERSION	APP VERSION	DESCRIPTION
appscode/swift	0.10.0       	0.10.0     	Swift by AppsCode - Ajax friendly Helm Tiller Proxy
stable/swift  	0.6.3        	0.7.3      	DEPRECATED swift by AppsCode - Ajax friendly Helm Tiller ...
  • Install version 0.10.0
helm install appscode/swift --name my-swift
  • Check whether the installation has finished
kubectl get pods --all-namespaces -l app=swift --watch

(screenshot: swift pod)
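If the pod stays pending or keeps restarting, its logs are the first place to look. A sketch, assuming the chart went into the default namespace (no --namespace was passed to helm install above) and carries the same app=swift label used in the watch command:

# Tail the Swift proxy's logs via the label selector
$ kubectl logs -l app=swift --tail=50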

Testing

  • Get the service to check its IP and ports
$ kubectl get svc swift-my-swift
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                          AGE
swift-my-swift   NodePort   10.107.55.194   <none>        9855:31743/TCP,50055:32523/TCP,56790:30092/TCP   3h58m
  • minikube ssh into the VM and hit the endpoint; a successful response means everything is working
$ curl  http://10.107.55.194:9855/tiller/v2/version/json
{"Version":{"sem_ver":"v2.12.1","git_commit":"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e","git_tree_state":"clean"}}
  • The service's default Type is actually ClusterIP; to make it reachable from the host it was changed to NodePort, like this:
kubectl patch svc swift-my-swift -p '{"spec":{"type":"NodePort"}}'
  • Try it from the host (a combined one-liner is sketched after this list)
$ minikube ip
192.168.99.100
$ curl http://192.168.99.100:31743/tiller/v2/version/json
{"Version":{"sem_ver":"v2.12.1","git_commit":"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e","git_tree_state":"clean"}}%
  • View it in the dashboard
    (screenshot: Kubernetes dashboard)
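Putting the last two steps together, a small sketch that builds the Swift URL from the host without hard-coding the NodePort (the jsonpath filter on port 9855 is an assumption based on the service output above):

# Look up the NodePort mapped to Swift's 9855 API port, then call it via the minikube IP
$ SWIFT_PORT=$(kubectl get svc swift-my-swift -o jsonpath='{.spec.ports[?(@.port==9855)].nodePort}')
$ curl http://$(minikube ip):${SWIFT_PORT}/tiller/v2/version/json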

Done!

PS: the first time around I started minikube with just a bare minikube start and wasted half a day as a result...

To check whether kube-dns is actually working, see the official guide at kubernetes.io/docs/tasks/… (a quick check along those lines is sketched below).
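A sketch of such a check: run a throwaway busybox pod and resolve a built-in service name (busybox:1.28 is used here because newer busybox images are known to have a broken nslookup):

# Resolve the kubernetes.default service from inside the cluster; the output should show
# the cluster DNS server (10.96.0.10 in this setup) and the service's ClusterIP
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default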

Related errors

  1. Internal DNS names cannot be resolved
I1228 09:25:08.241821       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:08.243483       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:25:28.242336       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:25:28.242368       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:28.242629       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:25:53.349620       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:25:53.349990       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:53.350133       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:26:32.635786       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:26:32.635949       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:26:32.636553       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:27:12.647474       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy.kube-system.svc on 10.96.0.10:53: read udp 172.17.0.4:58290->10.96.0.10:53: i/o timeout". Reconnecting...
I1228 09:27:12.647527       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:27:44.000042       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:28:08.235280       1 clientconn.go:830] Failed to dial tiller-deploy.kube-system.svc:44134: grpc: the connection is closing; please retry.
  2. After minikube came up there was no kube-dns and no kube-dashboard at all, even though minikube addons list showed both as enabled. See https://github.com/kubernetes/minikube/issues/2619 for the workaround, which boils down to installing the kube-dns component manually so that it works. (A quick sanity check is sketched below.)
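For either of these, a quick sanity check (just a sketch) is whether kube-dns actually has live endpoints; if the ENDPOINTS column is empty, pods such as Swift have no working resolver and you end up with exactly the i/o timeouts shown above:

# kube-dns must have at least one ready endpoint for in-cluster DNS to work
$ kubectl get endpoints kube-dns -n kube-system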

