Setting Up Kubernetes with kubeadm

Posted by Frank範 on 2017-04-30

A record of what I have been tinkering with these past few days: getting Kubernetes up and running to see how it works. The setup process was fairly painful, since I could not find particularly reliable documentation, and versions were often incompatible.

I. Setup Options

There are several ways to set up Kubernetes; a quick assessment of each:

  1. Running Kubernetes locally via Docker
    Prerequisites:
    http://www.cnblogs.com/zhangeamon/p/5197655.html
    Reference:
    https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/docker.md
    Install kubectl and shell auto-completion:
    Verdict: I never got this approach working; I kept hitting a "cannot connect 127.0.0.1:8080" error. In hindsight I suspect the .kube directory had not been created, but I did not try again.
  2. Using minikube
    minikube is suited to single-machine setups; it creates a virtual machine to run the cluster in. The Kubernetes project also appears to have dropped support for running Kubernetes locally on Docker; see https://github.com/kubernetes/minikube. However, minikube works best with VirtualBox as its virtualization driver, and my bare-metal host already runs KVM; I tried it and the two conflicted, so I did not go this route either.
  3. Using kubeadm
    kubeadm is a convenient tool for installing a Kubernetes cluster, and it is the approach that worked for me. The rest of this post documents it in detail.
  4. Installing step by step
    Installing each component by hand. I have not tried this; https://github.com/opsnull/follow-me-install-kubernetes-cluster is a guide, but it is rather involved.
    Personally I recommend the third option; it is the easiest way to get started. I tried all of the approaches above.

II. Setting up Kubernetes with kubeadm

References:
OpenStack: https://docs.openstack.org/developer/kolla-kubernetes/deployment-guide.html
Kubernetes: https://kubernetes.io/docs/getting-started-guides/kubeadm/
Environment: a CentOS 7 virtual machine running on KVM

1. Turn off SELinux

sudo setenforce 0
sudo sed -i 's/enforcing/permissive/g' /etc/selinux/config
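What the sed does can be checked on a sample line without touching the real file (editing /etc/selinux/config itself needs root):

```shell
# The substitution flips SELinux from enforcing to permissive in the config;
# demonstrated here on a sample line rather than the real /etc/selinux/config
echo "SELINUX=enforcing" | sed 's/enforcing/permissive/g'
# prints: SELINUX=permissive
```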

2. Turn off firewalld

sudo systemctl stop firewalld
sudo systemctl disable firewalld

3. Write the Kubernetes repository file

cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo mv kubernetes.repo /etc/yum.repos.d

4. Install Kubernetes 1.6.1 or later and other dependencies

sudo yum install -y docker ebtables kubeadm kubectl kubelet kubernetes-cni

5. To enable the proper cgroup driver, start Docker and disable CRI

sudo systemctl enable docker
sudo systemctl start docker
CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo sed -i "s|\$KUBELET_NETWORK_ARGS| |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
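What the first sed expression injects can be checked against a sample drop-in line; the line below and the value `systemd` for the detected cgroup driver are assumed examples, not the exact contents of 10-kubeadm.conf:

```shell
# Sample line standing in for the kubelet systemd drop-in (assumed)
line='Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"'

# Same substitution as above, with CGROUP_DRIVER assumed to be "systemd":
# the extra flags are spliced in right after KUBELET_KUBECONFIG_ARGS=
echo "$line" | sed "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd --enable-cri=false |g"
# prints: Environment="KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd --enable-cri=false --kubeconfig=/etc/kubernetes/kubelet.conf"
```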

6. Set up the DNS server with the service CIDR:

sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

7. Reload kubelet

sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl enable kubelet
sudo systemctl start kubelet

8. Deploy Kubernetes with kubeadm

sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24

A problem you may run into: if your network goes out through a corporate proxy, be sure to add your VM's address to no_proxy; otherwise kubeadm will hang at the output below. If the run fails, execute sudo kubeadm reset before retrying:

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

Note: pod-network-cidr is a network private to Kubernetes that the pods within Kubernetes communicate on. The service-cidr is where IP addresses for Kubernetes services are allocated. The upstream documentation makes no recommendation that the pod network should be a /16; however, the Kolla developers have found through experience that each node consumes an entire /24 network, so this configuration would permit 255 Kubernetes nodes.
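The arithmetic behind that node count is just subnet counting: a /16 split into per-node /24s yields 2^(24-16) subnets, and each /24 holds 2^(32-24) addresses. A quick sanity check:

```shell
# Number of /24 per-node subnets that fit in the /16 pod network
echo $((2 ** (24 - 16)))   # prints: 256

# Addresses in the /24 service CIDR (two of which are network/broadcast)
echo $((2 ** (32 - 24)))   # prints: 256
```

Strictly there are 256 /24 subnets in a /16; the 255 quoted above presumably keeps one in reserve.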
After it completes:

[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.29]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 23.768335 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 4.022721 seconds
[token] Using token: 5e0896.4cced9c43904d4d0
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 5e0896.4cced9c43904d4d0 192.168.122.29:6443

Remember that last kubeadm join line: worker (slave) nodes can use it to join the Kubernetes cluster.
Then:

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

Load the kubeadm credentials into the system:

mkdir -p $HOME/.kube
sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo -H chown $(id -u):$(id -g) $HOME/.kube/config

After that, check the status with:

kubectl get nodes
kubectl get pods -n kube-system
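At this point the node typically shows NotReady, because no CNI driver is installed yet (that is step 9). As a sketch, the STATUS column can be pulled out with awk; the output below is an illustrative sample, not captured from a real cluster:

```shell
# Illustrative `kubectl get nodes` output (assumed sample)
nodes='NAME         STATUS     AGE       VERSION
k8s-master   NotReady   1m        v1.6.1'

# Print name and status, skipping the header row
echo "$nodes" | awk 'NR>1 {print $1, $2}'
# prints: k8s-master NotReady
```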

9. Deploy the CNI driver
CNI networking overview: https://linux.cn/thread-15315-1-1.html
Using Flannel:
Flannel is based on VXLAN; because VXLAN encapsulation increases packet length, it is relatively less efficient, but it is the approach recommended by Kubernetes.

kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This approach did not work for me; the flannel pod kept restarting.
Using Calico instead:

wget http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

sed -i "s@192.168.0.0/16@10.1.0.0/16@" calico.yaml 
sed -i "s@10.96.232.136@10.3.3.100@" calico.yaml
kubectl apply -f calico.yaml
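The `@` delimiter in those sed commands is what lets the CIDRs, which contain `/`, be substituted without escaping. The effect can be checked on a sample line (the `cidr:` key below is an assumed example, not an exact line from calico.yaml):

```shell
# '/' in the CIDRs would clash with sed's usual delimiter, so '@' is used;
# demonstrated on an assumed sample line rather than the real calico.yaml
printf 'cidr: 192.168.0.0/16\n' | sed 's@192.168.0.0/16@10.1.0.0/16@'
# prints: cidr: 10.1.0.0/16
```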

Finally, untaint the node (mark the master node as schedulable) so that pods can be scheduled on this all-in-one (AIO) deployment:

kubectl taint nodes --all=true  node-role.kubernetes.io/master:NoSchedule-

10. Restore $KUBELET_NETWORK_ARGS

sudo sed -i "s|\$KUBELET_EXTRA_ARGS|\$KUBELET_EXTRA_ARGS \$KUBELET_NETWORK_ARGS|g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

sudo systemctl daemon-reload
sudo systemctl restart kubelet

OLD_DNS_POD=$(kubectl get pods -n kube-system |grep dns | awk '{print $1}')
kubectl delete pod $OLD_DNS_POD -n kube-system

Wait for the old DNS pod to be deleted; a new one is started automatically. Then check:

kubectl get pods,svc,deploy,ds --all-namespaces
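The grep/awk pipeline used to find the old DNS pod simply extracts the first column of the matching line; on sample output (illustrative, not from a real cluster) it behaves like this:

```shell
# Illustrative `kubectl get pods -n kube-system` output (assumed sample)
pods='NAME                         READY  STATUS   RESTARTS  AGE
kube-dns-3913472980-tmswo    3/3    Running  0         5m
kube-proxy-54hxs             1/1    Running  0         5m'

# Same filter as above: keep the dns line, print the pod name
echo "$pods" | grep dns | awk '{print $1}'
# prints: kube-dns-3913472980-tmswo
```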

11. Set up a sample application
Ref: http://janetkuo.github.io/docs/getting-started-guides/kubeadm/
See the "Installing a sample application" section.

Summary

  1. kubectl command auto-completion
    Ref: https://kubernetes.io/docs/tasks/kubectl/install/
