Background
In my earlier article K8S 生態週報| Google 選擇 Cilium 作為 GKE 下一代資料面, I covered Google's announcement that it will use Cilium as the next-generation datapath for GKE, and the story behind that decision.
Google chose Cilium mainly to improve container security and observability on the GKE platform. So what exactly is Cilium, and why is it so attractive?
Here is an excerpt from the official website's introduction:
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.
Why the emphasis on "Linux container management platforms"? That comes down to how Cilium is implemented. Cilium is built on a Linux kernel technology called eBPF, which makes it possible to dynamically insert control logic inside Linux itself, and thereby satisfy observability- and security-related requirements.
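If you'd like to see this machinery directly, one hedged option, assuming bpftool is installed and you have root on the node, is to list what is currently loaded in the kernel; once Cilium is deployed later in this article, its programs will show up here too:

# List the eBPF programs and maps currently loaded into the kernel.
# Requires root and the bpftool utility (linux-tools on many distros);
# output varies by kernel version and by what has been loaded.
sudo bpftool prog show | head
sudo bpftool map show | head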
Concepts alone are too abstract, though, so in this section we'll try Cilium out hands-on.
Preparing a cluster
Here I use KIND to create a multi-node local cluster.
Writing the configuration file
When creating the cluster, we disable KIND's default CNI plugin via the configuration file.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
Starting the cluster
Name the configuration file kindconfig and pass it in with the --config flag. The --image flag specifies the node image used to create the cluster; here I use kindest/node:v1.19.0@sha256:6a6e4d588db3c2873652f382465eeadc2644562a64659a1da4db73d3beaa8848 to create a cluster running the latest Kubernetes, v1.19.0.
(MoeLove) ➜ ~ kind create cluster --config=kindconfig --image=kindest/node:v1.19.0@sha256:6a6e4d588db3c2873652f382465eeadc2644562a64659a1da4db73d3beaa8848
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.19.0)
✓ Preparing nodes
✓ Writing configuration
✓ Starting control-plane
✓ Installing StorageClass
✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
Checking the status
Since we disabled KIND's default CNI, all of the cluster's Nodes are currently in the NotReady state, waiting for a CNI to be initialized.
(MoeLove) ➜ ~ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane NotReady master 85s v1.19.0
kind-worker NotReady <none> 49s v1.19.0
kind-worker2 NotReady <none> 49s v1.19.0
kind-worker3 NotReady <none> 49s v1.19.0
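To confirm the cause, look at the message on each Node's Ready condition, where the kubelet records that the network plugin is not ready. A quick check (the jsonpath query below is my own; the exact message text may vary by version):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'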
Deploying Cilium
There are several ways to deploy Cilium; here we pick the simplest and deploy it directly with Helm 3.
Adding the Helm repository
Cilium provides an officially maintained Helm repository; let's add it first.
Note: please use Helm 3. In an earlier article, K8S 生態週報| Helm v2 進入維護期倒數計時, I covered how Helm v2's maintenance period has entered its final countdown: security patches for Helm v2 stop three months later, at which point its maintenance ends entirely.
(MoeLove) ➜ ~ helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories
Preloading images
This step is not strictly required. But every Node would otherwise need to pull the cilium/cilium:v1.8.2 image, which takes time, so we can use kind load docker-image to load the image from the host's Docker into the KIND cluster instead.
# Pull the image
(MoeLove) ➜ ~ docker pull cilium/cilium:v1.8.2
v1.8.2: Pulling from cilium/cilium
Digest: sha256:9dffe79408025f7523a94a1828ac1691b997a2b1dbd69af338cfbecc8428d326
Status: Image is up to date for cilium/cilium:v1.8.2
docker.io/cilium/cilium:v1.8.2
# Load the image into the KIND cluster
(MoeLove) ➜ ~ kind load docker-image cilium/cilium:v1.8.2
Image: "cilium/cilium:v1.8.2" with ID "sha256:009715be68951ab107617f04dc50bcceb3d3f1e0c09db156aacf95e56eb0d5cc" not yet present on node "kind-worker3", loading...
Image: "cilium/cilium:v1.8.2" with ID "sha256:009715be68951ab107617f04dc50bcceb3d3f1e0c09db156aacf95e56eb0d5cc" not yet present on node "kind-control-plane", loading...
Image: "cilium/cilium:v1.8.2" with ID "sha256:009715be68951ab107617f04dc50bcceb3d3f1e0c09db156aacf95e56eb0d5cc" not yet present on node "kind-worker", loading...
Image: "cilium/cilium:v1.8.2" with ID "sha256:009715be68951ab107617f04dc50bcceb3d3f1e0c09db156aacf95e56eb0d5cc" not yet present on node "kind-worker2", loading...
Once the image is loaded, you can double-check with the following commands:
for i in `docker ps --filter label=io.x-k8s.kind.cluster=kind -q`
do
docker exec $i ctr -n k8s.io -a /run/containerd/containerd.sock i ls |grep cilium
done
Deploying Cilium with Helm
(MoeLove) ➜ ~ helm install cilium cilium/cilium --version 1.8.2 \
--namespace kube-system \
--set global.nodeinit.enabled=true \
--set global.kubeProxyReplacement=partial \
--set global.hostServices.enabled=false \
--set global.externalIPs.enabled=true \
--set global.nodePort.enabled=true \
--set global.hostPort.enabled=true \
--set global.pullPolicy=IfNotPresent \
--set config.ipam=kubernetes \
--set global.hubble.enabled=true \
--set global.hubble.relay.enabled=true \
--set global.hubble.ui.enabled=true \
--set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"
NAME: cilium
LAST DEPLOYED: Wed Sep 2 21:03:23 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.8.2.
For any further help, visit https://docs.cilium.io/en/v1.8/gettinghelp
A few of these configuration options deserve explanation (see also the values-file sketch after this list):
- global.hubble.enabled=true: enables Hubble.
- global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}": selects which metrics Hubble exposes; leaving it unset disables metrics.
- global.hubble.ui.enabled=true: enables the Hubble UI.
As for what Hubble actually is, we'll get to that shortly.
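As an aside, the long --set list above can also live in a values file. Here is a sketch of the equivalent (the file name cilium-values.yaml is my own choice; double-check the keys against the chart's values if you use this):

cat > cilium-values.yaml <<'EOF'
global:
  nodeinit:
    enabled: true
  kubeProxyReplacement: partial
  hostServices:
    enabled: false
  externalIPs:
    enabled: true
  nodePort:
    enabled: true
  hostPort:
    enabled: true
  pullPolicy: IfNotPresent
  hubble:
    enabled: true
    relay:
      enabled: true
    ui:
      enabled: true
    metrics:
      enabled:
      - dns
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
config:
  ipam: kubernetes
EOF
helm install cilium cilium/cilium --version 1.8.2 \
  --namespace kube-system -f cilium-values.yaml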
Once Cilium is deployed, let's check the Pods in the namespace we deployed into:
(MoeLove) ➜ ~ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
cilium-86dbc 1/1 Running 0 2m11s
cilium-cjcps 1/1 Running 0 2m11s
cilium-f8dtm 1/1 Running 0 2m11s
cilium-node-init-9r9cm 1/1 Running 1 2m11s
cilium-node-init-bkg28 1/1 Running 1 2m11s
cilium-node-init-jgx6r 1/1 Running 1 2m11s
cilium-node-init-s7xhx 1/1 Running 1 2m11s
cilium-operator-756cc96896-brlrh 1/1 Running 0 2m11s
cilium-t8kqc 1/1 Running 0 2m11s
coredns-f9fd979d6-7vfnq 1/1 Running 0 6m16s
coredns-f9fd979d6-h7rfw 1/1 Running 0 6m16s
etcd-kind-control-plane 1/1 Running 0 6m19s
hubble-relay-666ddfd69b-2lpsz 1/1 Running 0 2m11s
hubble-ui-7854cf65dc-ncj89 1/1 Running 0 2m11s
kube-apiserver-kind-control-plane 1/1 Running 0 6m19s
kube-controller-manager-kind-control-plane 1/1 Running 0 6m19s
kube-proxy-48rwk 1/1 Running 0 6m16s
kube-proxy-8mn58 1/1 Running 0 5m59s
kube-proxy-jptln 1/1 Running 0 5m59s
kube-proxy-pp24h 1/1 Running 0 5m59s
kube-scheduler-kind-control-plane 1/1 Running 0 6m19s
And check the Node status:
(MoeLove) ➜ ~ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 7m1s v1.19.0
kind-worker Ready <none> 6m26s v1.19.0
kind-worker2 Ready <none> 6m26s v1.19.0
kind-worker3 Ready <none> 6m26s v1.19.0
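If you script this setup, instead of re-running kubectl get nodes you can block until every Node reports Ready (the timeout value is an arbitrary choice):

kubectl wait --for=condition=Ready nodes --all --timeout=300s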
Trying out Cilium's features
Introducing Hubble
When we deployed Cilium with Helm above, we set several Hubble-related options without yet explaining what Hubble is. Here's a brief introduction.
Hubble is a fully distributed networking and security observability platform. Built on top of Cilium and eBPF, it provides deep, completely transparent visibility into the communication and behavior of services and the network infrastructure.
Because it is built on Cilium, Hubble leverages eBPF for visibility. With eBPF, all visibility is programmable, overhead can be kept to a minimum, and visibility can be as deep and detailed as you need. For example:
- Understand service dependencies: observe whether services communicate, how often, and the status codes of their HTTP calls;
- Monitor the network and alert on it: observe whether connections are failing, whether the problem is at L4 or L7, whether DNS queries are failing, and so on;
- Monitor applications: observe HTTP 4xx/5xx error rates and the p95/p99 latencies of HTTP requests and responses;
- Monitor security: observe which requests were denied by a Network Policy, which services resolved specific domain names, and so on.
Observability
We can use hubble observe directly to watch the connections in the current cluster:
(MoeLove) ➜ hubble-ui git:(master) kubectl exec -n kube-system -t ds/cilium -- hubble observe
TIMESTAMP SOURCE DESTINATION TYPE VERDICT SUMMARY
Sep 2 07:06:41.624 kube-system/coredns-f9fd979d6-h7rfw:8181 10.244.1.50:52404 to-stack FORWARDED TCP Flags: ACK, FIN
Sep 2 07:06:41.625 10.244.1.50:52404 kube-system/coredns-f9fd979d6-h7rfw:8181 to-endpoint FORWARDED TCP Flags: ACK, FIN
Sep 2 07:06:42.376 10.244.1.12:4240 10.244.0.76:45164 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:42.376 10.244.0.76:45164 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 2 07:06:42.778 10.244.1.50:37512 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK, PSH
Sep 2 07:06:42.778 10.244.1.12:4240 10.244.1.50:37512 to-stack FORWARDED TCP Flags: ACK, PSH
Sep 2 07:06:44.941 10.244.1.50:59870 10.244.0.108:4240 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:44.941 10.244.1.12:4240 10.244.2.220:47616 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:44.941 10.244.1.50:52090 10.244.3.159:4240 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:44.941 10.244.1.50:52958 10.244.2.81:4240 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:44.941 10.244.2.220:47616 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 2 07:06:45.448 10.244.1.12:4240 10.244.3.111:54012 to-overlay FORWARDED TCP Flags: ACK
Sep 2 07:06:45.449 10.244.3.111:54012 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 2 07:06:47.631 kube-system/coredns-f9fd979d6-h7rfw:36120 172.18.0.4:6443 to-stack FORWARDED TCP Flags: ACK
Sep 2 07:06:47.822 10.244.1.50:60914 kube-system/coredns-f9fd979d6-h7rfw:8080 to-endpoint FORWARDED TCP Flags: SYN
Sep 2 07:06:47.822 kube-system/coredns-f9fd979d6-h7rfw:8080 10.244.1.50:60914 to-stack FORWARDED TCP Flags: SYN, ACK
Sep 2 07:06:47.822 10.244.1.50:60914 kube-system/coredns-f9fd979d6-h7rfw:8080 to-endpoint FORWARDED TCP Flags: ACK
Sep 2 07:06:47.823 kube-system/coredns-f9fd979d6-h7rfw:8080 10.244.1.50:60914 to-stack FORWARDED TCP Flags: ACK, PSH
Sep 2 07:06:47.823 kube-system/coredns-f9fd979d6-h7rfw:8080 10.244.1.50:60914 to-stack FORWARDED TCP Flags: ACK, FIN
Sep 2 07:06:47.823 10.244.1.50:60914 kube-system/coredns-f9fd979d6-h7rfw:8080 to-endpoint FORWARDED TCP Flags: ACK, PSH
As you can see, the output is very detailed: both ends of each connection are shown, along with whether the packet was an ACK, a SYN, and so on.
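This firehose can also be narrowed down: hubble observe accepts filter flags. For example (treat the flag names as a sketch and confirm them against hubble observe --help for your version):

# Only the 20 most recent flows
kubectl exec -n kube-system -t ds/cilium -- hubble observe --last 20
# Only flows that were dropped, e.g. by a network policy
kubectl exec -n kube-system -t ds/cilium -- hubble observe --verdict DROPPED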
Deploying a test application
Now let's deploy a test application to experience Cilium's capabilities first-hand. The official repository provides a connectivity-check test suite; here I've trimmed and modified it to make it easier to follow.
It defines the following:
- 1 Service named echo-a, which exposes the echo-a test service;
- 4 Deployments: the test service itself, plus three Deployments used to test connectivity to echo-a;
- 2 CiliumNetworkPolicies, which control whether connectivity to echo-a is allowed.
---
apiVersion: v1
kind: Service
metadata:
  name: echo-a
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: echo-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-a
spec:
  selector:
    matchLabels:
      name: echo-a
  replicas: 1
  template:
    metadata:
      labels:
        name: echo-a
    spec:
      containers:
      - name: echo-container
        image: docker.io/cilium/json-mock:1.0
        imagePullPolicy: IfNotPresent
        readinessProbe:
          exec:
            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-to-a-allowed-cnp
spec:
  selector:
    matchLabels:
      name: pod-to-a-allowed-cnp
  replicas: 1
  template:
    metadata:
      labels:
        name: pod-to-a-allowed-cnp
    spec:
      containers:
      - name: pod-to-a-allowed-cnp-container
        image: docker.io/byrnedo/alpine-curl:0.1.8
        command: ["/bin/ash", "-c", "sleep 1000000000"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
        readinessProbe:
          exec:
            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "pod-to-a-allowed-cnp"
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a-allowed-cnp
  egress:
  - toEndpoints:
    - matchLabels:
        name: echo-a
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-to-a-l3-denied-cnp
spec:
  selector:
    matchLabels:
      name: pod-to-a-l3-denied-cnp
  replicas: 1
  template:
    metadata:
      labels:
        name: pod-to-a-l3-denied-cnp
    spec:
      containers:
      - name: pod-to-a-l3-denied-cnp-container
        image: docker.io/byrnedo/alpine-curl:0.1.8
        command: ["/bin/ash", "-c", "sleep 1000000000"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          timeoutSeconds: 7
          exec:
            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
        readinessProbe:
          timeoutSeconds: 7
          exec:
            command: ["ash", "-c", "! curl -sS --fail --connect-timeout 5 -o /dev/null echo-a"]
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "pod-to-a-l3-denied-cnp"
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a-l3-denied-cnp
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-to-a
spec:
  selector:
    matchLabels:
      name: pod-to-a
  replicas: 1
  template:
    metadata:
      labels:
        name: pod-to-a
    spec:
      containers:
      - name: pod-to-a-container
        image: docker.io/byrnedo/alpine-curl:0.1.8
        command: ["/bin/ash", "-c", "sleep 1000000000"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "echo-a"]
Deploy it directly:
(MoeLove) ➜ ~ kubectl apply -f cilium-demo.yaml
service/echo-a created
deployment.apps/echo-a created
deployment.apps/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
deployment.apps/pod-to-a-l3-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-l3-denied-cnp created
deployment.apps/pod-to-a created
Check the Pods to confirm everything is healthy:
(MoeLove) ➜ ~ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo-a-8b6595b89-w9kt2 1/1 Running 0 49s
pod-to-a-5567c85856-xsg5b 1/1 Running 0 49s
pod-to-a-allowed-cnp-7b85c8db8-jrjhx 1/1 Running 0 49s
pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm 1/1 Running 0 49s
Observing from the command line
Next, let's run hubble observe again; we can already see the connections produced by the application we just deployed.
(MoeLove) ➜ ~ kubectl exec -n kube-system -t ds/cilium -- hubble observe
TIMESTAMP SOURCE DESTINATION TYPE VERDICT SUMMARY
Sep 3 00:00:13.481 default/pod-to-a-5567c85856-xsg5b:60784 default/echo-a-8b6595b89-w9kt2:80 to-endpoint FORWARDED TCP Flags: ACK, PSH
Sep 3 00:00:15.429 kube-system/coredns-f9fd979d6-h7rfw:53 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:43696 to-endpoint FORWARDED UDP
Sep 3 00:00:16.010 10.244.1.12:4240 10.244.2.220:50830 to-overlay FORWARDED TCP Flags: ACK
Sep 3 00:00:16.010 10.244.1.12:4240 10.244.1.50:40402 to-stack FORWARDED TCP Flags: ACK
Sep 3 00:00:16.010 10.244.1.50:40402 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 3 00:00:16.011 10.244.2.220:50830 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 3 00:00:16.523 10.244.1.12:4240 10.244.3.111:57242 to-overlay FORWARDED TCP Flags: ACK
Sep 3 00:00:16.523 10.244.3.111:57242 10.244.1.12:4240 to-endpoint FORWARDED TCP Flags: ACK
Sep 3 00:00:21.376 kube-system/coredns-f9fd979d6-h7rfw:53 default/pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm:44785 to-overlay FORWARDED UDP
Sep 3 00:00:21.377 kube-system/coredns-f9fd979d6-h7rfw:53 default/pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm:44785 to-overlay FORWARDED UDP
Sep 3 00:00:23.896 kube-system/coredns-f9fd979d6-h7rfw:36120 172.18.0.4:6443 to-stack FORWARDED TCP Flags: ACK
Sep 3 00:00:25.428 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 default/echo-a-8b6595b89-w9kt2:80 L3-L4 FORWARDED TCP Flags: SYN
Sep 3 00:00:25.428 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 default/echo-a-8b6595b89-w9kt2:80 to-endpoint FORWARDED TCP Flags: SYN
Sep 3 00:00:25.428 default/echo-a-8b6595b89-w9kt2:80 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 to-endpoint FORWARDED TCP Flags: SYN, ACK
Sep 3 00:00:25.428 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 default/echo-a-8b6595b89-w9kt2:80 to-endpoint FORWARDED TCP Flags: ACK
Sep 3 00:00:25.428 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 default/echo-a-8b6595b89-w9kt2:80 to-endpoint FORWARDED TCP Flags: ACK, PSH
Sep 3 00:00:25.429 default/pod-to-a-allowed-cnp-7b85c8db8-jrjhx:55678 default/echo-a-8b6595b89-w9kt2:80 to-endpoint FORWARDED TCP Flags: ACK, FIN
Sep 3 00:00:29.546 10.244.1.50:57770 kube-system/coredns-f9fd979d6-h7rfw:8080 to-endpoint FORWARDED TCP Flags: SYN
Sep 3 00:00:29.546 kube-system/coredns-f9fd979d6-h7rfw:8080 10.244.1.50:57770 to-stack FORWARDED TCP Flags: SYN, ACK
Sep 3 00:00:29.546 10.244.1.50:57770 kube-system/coredns-f9fd979d6-h7rfw:8080 to-endpoint FORWARDED TCP Flags: ACK
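To isolate just our test application's flows, hubble observe can also filter by pod. A sketch using the echo-a pod name from the output above (again, confirm the flag names against hubble observe --help for your version):

kubectl exec -n kube-system -t ds/cilium -- hubble observe --pod default/echo-a-8b6595b89-w9kt2 --last 20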
Observing with the Hubble UI
Remember the Hubble-related options we set when deploying Cilium above? Now we can use the Hubble UI to see them in action.
First, check that the hubble-ui Service exists in the kube-system namespace.
(MoeLove) ➜ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hubble-metrics ClusterIP None <none> 9091/TCP 4m31s
hubble-relay ClusterIP 10.102.90.19 <none> 80/TCP 4m31s
hubble-ui ClusterIP 10.96.69.234 <none> 80/TCP 4m31s
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 8m51s
Use kubectl port-forward to access the Hubble UI from your local machine:
(MoeLove) ➜ ~ kubectl -n kube-system port-forward svc/hubble-ui 12000:80
Forwarding from 127.0.0.1:12000 -> 12000
Forwarding from [::1]:12000 -> 12000
Then open http://127.0.0.1:12000 in a browser.
You'll see all the Pods we just deployed and can inspect the corresponding CiliumNetworkPolicies and other details. I won't go through it all here; feel free to explore on your own.
Observing Hubble metrics
We can also observe via the metrics Hubble exposes:
(MoeLove) ➜ ~ kubectl port-forward -n kube-system ds/cilium 19091:9091
Forwarding from 127.0.0.1:19091 -> 9091
Forwarding from [::1]:19091 -> 9091
A quick look at the content shows statistics on requests, responses, drops, and more, including packet counts per destination port. Feel free to explore further.
(MoeLove) ➜ ~ curl -s localhost:19091/metrics | head -n 22
# HELP hubble_dns_queries_total Number of DNS queries observed
# TYPE hubble_dns_queries_total counter
hubble_dns_queries_total{ips_returned="0",qtypes="A",rcode=""} 1165
hubble_dns_queries_total{ips_returned="0",qtypes="AAAA",rcode=""} 1165
# HELP hubble_dns_response_types_total Number of DNS queries observed
# TYPE hubble_dns_response_types_total counter
hubble_dns_response_types_total{qtypes="A",type="A"} 233
hubble_dns_response_types_total{qtypes="AAAA",type="AAAA"} 233
# HELP hubble_dns_responses_total Number of DNS queries observed
# TYPE hubble_dns_responses_total counter
hubble_dns_responses_total{ips_returned="0",qtypes="A",rcode="Non-Existent Domain"} 932
hubble_dns_responses_total{ips_returned="0",qtypes="AAAA",rcode="Non-Existent Domain"} 932
hubble_dns_responses_total{ips_returned="1",qtypes="A",rcode="No Error"} 233
hubble_dns_responses_total{ips_returned="1",qtypes="AAAA",rcode="No Error"} 233
# HELP hubble_drop_total Number of drops
# TYPE hubble_drop_total counter
hubble_drop_total{protocol="ICMPv4",reason="Policy denied"} 459
hubble_drop_total{protocol="ICMPv4",reason="Unsupported protocol for NAT masquerade"} 731
hubble_drop_total{protocol="ICMPv6",reason="Unsupported L3 protocol"} 213
hubble_drop_total{protocol="TCP",reason="Policy denied"} 1425
hubble_drop_total{protocol="UDP",reason="Stale or unroutable IP"} 6
hubble_drop_total{protocol="Unknown flow",reason="Policy denied"} 1884
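Since these are plain Prometheus metrics, a local Prometheus can scrape them through the port-forward above. A minimal sketch of a scrape config (the file name is my own; in a real cluster you would scrape the hubble-metrics Service on port 9091 rather than a local port-forward):

cat > prometheus-hubble.yml <<'EOF'
scrape_configs:
- job_name: hubble
  static_configs:
  - targets: ['127.0.0.1:19091']
EOF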
Verifying the effect of CiliumNetworkPolicy
With all of that covered, let's verify what the CiliumNetworkPolicies we deployed actually do.
Below are the test Pods deployed earlier; we'll use them to access the echo-a Service.
(MoeLove) ➜ ~ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo-a-8b6595b89-w9kt2 1/1 Running 0 79m
pod-to-a-5567c85856-xsg5b 1/1 Running 0 79m
pod-to-a-allowed-cnp-7b85c8db8-jrjhx 1/1 Running 0 79m
pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm 1/1 Running 0 79m
pod-to-a is the Pod with no CiliumNetworkPolicy rules configured:
(MoeLove) ➜ ~ kubectl exec pod-to-a-5567c85856-xsg5b -- curl -sI --connect-timeout 5 echo-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Sat, 26 Oct 1985 08:15:00 GMT
ETag: W/"83d-7438674ba0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2109
Date: Thu, 03 Sep 2020 00:54:05 GMT
Connection: keep-alive
pod-to-a-allowed-cnp is configured to allow access to echo-a over TCP:
(MoeLove) ➜ ~ kubectl exec pod-to-a-allowed-cnp-7b85c8db8-jrjhx -- curl -sI --connect-timeout 5 echo-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Sat, 26 Oct 1985 08:15:00 GMT
ETag: W/"83d-7438674ba0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2109
Date: Thu, 03 Sep 2020 01:10:27 GMT
Connection: keep-alive
pod-to-a-l3-denied-cnp is only allowed to access DNS; no rule allows access to echo-a:
(MoeLove) ➜ ~ kubectl exec pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm -- curl -sI --connect-timeout 5 echo-a
command terminated with exit code 28
As you can see, once a CiliumNetworkPolicy applies to a Pod, any traffic without a matching allow rule is denied.
For instance, let's use the two Pods covered by CiliumNetworkPolicies to access a public domain:
(MoeLove) ➜ ~ kubectl exec pod-to-a-allowed-cnp-7b85c8db8-jrjhx -- curl -sI --connect-timeout 5 moelove.info
command terminated with exit code 28
(MoeLove) ➜ ~ kubectl exec pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm -- curl -sI --connect-timeout 5 moelove.info
command terminated with exit code 28
As expected, neither can reach it.
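If you did want a Pod covered by a policy to reach destinations outside the cluster, Cilium can express that too. Below is a hedged sketch that extends the pod-to-a-allowed-cnp policy with an extra egress rule to Cilium's built-in world entity (verify against the CiliumNetworkPolicy reference for your version before relying on it):

cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: pod-to-a-allowed-cnp
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a-allowed-cnp
  egress:
  - toEndpoints:
    - matchLabels:
        name: echo-a
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
  # New rule: also allow egress to destinations outside the cluster
  - toEntities:
    - world
EOF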
Summary
In this section we introduced Cilium and Hubble.
Using a Kubernetes cluster created with KIND, we deployed Cilium and its related components, and walked through an example of observing traffic via hubble observe, the Hubble UI, and Hubble metrics.
We also verified, through hands-on operations, what CiliumNetworkPolicy actually does.
I mainly got here because my work on Docker's codebase touches LSM and seccomp, which led me to dig into eBPF and its related technologies (more on that in a future post).
Cilium itself is something I began studying in the first half of 2019, but as I wrote in last year's article 《K8S 生態週報| cilium 1.6 釋出 100% kube-proxy 的替代品》:
A few words on my view of Cilium:
- Is it impressive? Yes.
- Is it worth studying? Yes.
- Would I put it in my own clusters to replace kube-proxy? No, at least not for now.
If you want to study eBPF or XDP through cilium, I do suggest taking a look: it's an excellent project, and working through it will deepen your understanding of networking a great deal. Put it this way: anyone who has worked through cilium's source code and the principles behind it is in very good shape.
As for replacing kube-proxy: at least for now, I wouldn't. There are many ways to solve a problem, and replacing a core component is not necessarily the most worthwhile one.
Cilium is a project and a technology worth learning and researching, but I have not yet put it into production (it is one of the few technologies I've invested serious effort in studying without deploying to production).
Now, though, Cilium has gained real adoption and momentum, so it's time to reassess. I'll continue sharing articles on Cilium and eBPF; stay tuned.