3. Deploying Kubernetes Network Components

Published by ArcherBrian on 2024-07-16


Mainstream CNI components currently supported

CNI plugins: Flannel, Calico, Weave, and Canal (technically a combination of multiple plugins).
This deployment uses Calico; its BGP capability can be used to interconnect the Pod and Service (SVC) networks with the internal network, as sketched below.
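
If BGP peering with the internal network is wanted, it can be configured later through Calico's BGPConfiguration and BGPPeer resources. The sketch below uses assumed values (AS numbers, router address, Service CIDR) that must be adapted to the actual environment:

apiVersion: crd.projectcalico.org/v1
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512                 # assumed cluster AS number
  serviceClusterIPs:
  - cidr: 10.96.0.0/12            # assumed Service (SVC) CIDR to advertise
---
apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: internal-router
spec:
  peerIP: 192.168.1.1             # assumed internal router address
  asNumber: 64513                 # assumed peer AS number

Apply it with kubectl apply -f once the Calico Pods are healthy.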

[M1] Deploy Calico in the Kubernetes cluster; running the steps on any one Master node is sufficient.

Download the Calico deployment manifest

curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml

Modify the CIDR

- name: CALICO_IPV4POOL_CIDR
  value: "172.15.0.0/16"  # change to your Pod network CIDR

Deploy Calico

kubectl apply -f calico.yaml

Sample output:

configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
service/calico-typha unchanged
deployment.apps/calico-typha unchanged
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-typha configured
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers unchanged
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers configured

[M1] Wait for the CNI to take effect

watch kubectl get pods --all-namespaces

Once all Pods reach the Running state, the watch can be exited (Ctrl+C); a non-interactive alternative is shown after the sample output below.

Every 2.0s: kubectl get pods --all-namespaces                                                                                                                        Mon Jul  5 18:49:49 2021

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE          
kube-system   calico-kube-controllers-7fc57b95d4-wsvpm        1/1     Running   0          85s
kube-system   calico-node-7f99k                               0/1     Running   0          85s
kube-system   calico-node-jssb6                               0/1     Running   0          85s
kube-system   calico-node-qgvp9                               1/1     Running   0          85s
kube-system   coredns-5c98db65d4-6kxg6                        1/1     Running   0          44m          
kube-system   coredns-5c98db65d4-x6x4b                        1/1     Running   0          44m          
kube-system   etcd-kubernetes-master                          1/1     Running   0          43m          
kube-system   etcd-kubernetes-master                          1/1     Running   0          39m          
kube-system   etcd-kubernetes-master                          1/1     Running   0          38m          
kube-system   kube-apiserver-kubernetes-master                1/1     Running   0          43m          
kube-system   kube-apiserver-kubernetes-master                1/1     Running   0          39m          
kube-system   kube-apiserver-kubernetes-master                1/1     Running   1          38m          
kube-system   kube-controller-manager-kubernetes-master       1/1     Running   1          44m          
kube-system   kube-controller-manager-kubernetes-master       1/1     Running   0          39m          
kube-system   kube-controller-manager-kubernetes-master       1/1     Running   0          37m          
kube-system   kube-proxy-hrfgq                                1/1     Running   0          39m          
kube-system   kube-proxy-nlm68                                1/1     Running   0          38m          
kube-system   kube-proxy-tt8dg                                1/1     Running   0          44m          
kube-system   kube-scheduler-kubernetes-master                1/1     Running   1          43m          
kube-system   kube-scheduler-kubernetes-master                1/1     Running   0          39m          
kube-system   kube-scheduler-kubernetes-master                1/1     Running   0          37m   
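
Instead of watching interactively, the wait can also be scripted. A minimal sketch, assuming the default k8s-app=calico-node label from the Calico manifest:

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s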
