Flannel IPIP DirectRouting (DR) Mode
1. Environment Information

Host | IP |
---|---|
ubuntu | 172.16.94.141 |

Software | Version |
---|---|
docker | 26.1.4 |
helm | v3.15.0-rc.2 |
kind | 0.18.0 |
clab | 0.54.2 |
kubernetes | 1.23.4 |
ubuntu os | Ubuntu 20.04.6 LTS |
kernel | 5.11.5 (kernel upgrade doc) |
2. Installing the Services

kind

Configuration / install script:
$ cat install.sh
#!/bin/bash
date
set -v

# 1.prep noCNI env
cat <<EOF | kind create cluster --name=clab-flannel-ipip-directrouting --image=kindest/node:v1.23.4 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.dayuan1997.com"]
    endpoint = ["https://harbor.dayuan1997.com"]
EOF

# 2.remove taints
controller_node=`kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name | grep control-plane`
kubectl taint nodes $controller_node node-role.kubernetes.io/master:NoSchedule-
kubectl get nodes -o wide

# 3.install necessary tools
# cd /opt/
# curl -O -L "https://gh.api.99988866.xyz/https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz"
# tar -zxvf cni-plugins-linux-amd64-v0.9.0.tgz
for i in $(docker ps -a --format "table {{.Names}}" | grep flannel)
do
    echo $i
    docker cp /opt/bridge $i:/opt/cni/bin/
    docker cp /usr/bin/ping $i:/usr/bin/ping
    docker exec -it $i bash -c "sed -i -e 's/jp.archive.ubuntu.com\|archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list"
    docker exec -it $i bash -c "apt-get -y update >/dev/null && apt-get -y install net-tools tcpdump lrzsz bridge-utils >/dev/null 2>&1"
done
- Install the k8s cluster
root@kind:~# ./install.sh
Creating cluster "clab-flannel-ipip-directrouting" ...
✓ Ensuring node image (kindest/node:v1.23.4) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-clab-flannel-ipip-directrouting"
You can now use your cluster with:
kubectl cluster-info --context kind-clab-flannel-ipip-directrouting
Have a nice day! 👋
root@kind:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
clab-flannel-ipip-directrouting-control-plane NotReady control-plane,master 66s v1.23.4 172.18.0.4 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker NotReady <none> 46s v1.23.4 172.18.0.2 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker2 NotReady <none> 33s v1.23.4 172.18.0.5 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker3 NotReady <none> 33s v1.23.4 172.18.0.3 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
Create the clab container environment

Create the bridges
root@kind:~# brctl addbr br-pool0
root@kind:~# ifconfig br-pool0 up
root@kind:~# brctl addbr br-pool1
root@kind:~# ifconfig br-pool1 up
root@kind:~# ip a l
29: br-pool0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
link/ether aa:c1:ab:02:66:b8 brd ff:ff:ff:ff:ff:ff
inet6 fe80::1cab:b8ff:fe40:2b38/64 scope link
valid_lft forever preferred_lft forever
30: br-pool1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
link/ether aa:c1:ab:1b:40:50 brd ff:ff:ff:ff:ff:ff
inet6 fe80::c830:c5ff:fe84:1258/64 scope link
valid_lft forever preferred_lft forever
These two bridges exist so that the kind nodes can attach to the containerlab topology through a virtual switch instead of connecting to containerlab directly: if 10.1.5.10/24 were wired to containerlab with a single veth pair, there would be no port left for 10.1.5.11/24 to attach to.
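Once the clab topology below is deployed, the wiring can be sanity-checked from the kind host (a sketch; it assumes bridge-utils is installed on the host, and the br-pool*-netN port names come from the topology file):

```bash
# Each bridge should show three ports: two server-facing links and one gw0 uplink
brctl show br-pool0
brctl show br-pool1

# The same view with iproute2 only
ip -br link show master br-pool0
ip -br link show master br-pool1
```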
clab network topology file
# flannel.ipip.directrouting.clab.yml
name: flannel-ipip-directrouting
topology:
  nodes:
    gw0:
      kind: linux
      image: vyos/vyos:1.2.8
      cmd: /sbin/init
      binds:
        - /lib/modules:/lib/modules
        - ./startup-conf/gw0-boot.cfg:/opt/vyatta/etc/config/config.boot
    br-pool0:
      kind: bridge
    br-pool1:
      kind: bridge
    server1:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # Reuse the k8s node's network (share its network namespace)
      network-mode: container:clab-flannel-ipip-directrouting-control-plane
      # Configure the business NIC on the node and repoint the default route
      # so that the business NIC becomes the egress interface.
      exec:
        - ip addr add 10.1.5.10/24 dev net0
        - ip route replace default via 10.1.5.1
    server2:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # Reuse the k8s node's network (share its network namespace)
      network-mode: container:clab-flannel-ipip-directrouting-worker
      # Configure the business NIC on the node and repoint the default route
      # so that the business NIC becomes the egress interface.
      exec:
        - ip addr add 10.1.5.11/24 dev net0
        - ip route replace default via 10.1.5.1
    server3:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # Reuse the k8s node's network (share its network namespace)
      network-mode: container:clab-flannel-ipip-directrouting-worker2
      # Configure the business NIC on the node and repoint the default route
      # so that the business NIC becomes the egress interface.
      exec:
        - ip addr add 10.1.8.10/24 dev net0
        - ip route replace default via 10.1.8.1
    server4:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # Reuse the k8s node's network (share its network namespace)
      network-mode: container:clab-flannel-ipip-directrouting-worker3
      # Configure the business NIC on the node and repoint the default route
      # so that the business NIC becomes the egress interface.
      exec:
        - ip addr add 10.1.8.11/24 dev net0
        - ip route replace default via 10.1.8.1
  links:
    - endpoints: ["br-pool0:br-pool0-net0", "server1:net0"]
    - endpoints: ["br-pool0:br-pool0-net1", "server2:net0"]
    - endpoints: ["br-pool1:br-pool1-net0", "server3:net0"]
    - endpoints: ["br-pool1:br-pool1-net1", "server4:net0"]
    - endpoints: ["gw0:eth1", "br-pool0:br-pool0-net2"]
    - endpoints: ["gw0:eth2", "br-pool1:br-pool1-net2"]
VyOS configuration

gw0-boot.cfg configuration file:
# ./startup-conf/gw0-boot.cfg
interfaces {
    ethernet eth1 {
        address 10.1.5.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth2 {
        address 10.1.8.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    loopback lo {
    }
}
# NAT configuration: the other servers behind gw0 can reach the outside world
nat {
    source {
        rule 100 {
            outbound-interface eth0
            source {
                address 10.1.0.0/16
            }
            translation {
                address masquerade
            }
        }
    }
}
system {
    config-management {
        commit-revisions 100
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name vyos
    login {
        user vyos {
            authentication {
                encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                plaintext-password ""
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    syslog {
        global {
            facility all {
                level info
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}
/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "wanloadbalance@3:l2tp@1:pptp@1:ntp@1:mdns@1:webgui@1:conntrack@1:ipsec@5:cluster@1:dhcp-server@5:nat@4:dhcp-relay@2:webproxy@1:system@10:pppoe-server@2:dns-forwarding@1:ssh@1:quagga@7:broadcast-relay@1:qos@1:snmp@1:firewall@5:zone-policy@1:config-management@1:webproxy@2:vrrp@2:conntrack-sync@1" === */
/* Release version: 1.2.8 */
Deploy the topology
# tree -L 2 ./
./
├── flannel.ipip.directrouting.clab.yml
└── startup-conf
└── gw0-boot.cfg
# clab deploy -t flannel.ipip.directrouting.clab.yml
INFO[0000] Containerlab v0.54.2 started
INFO[0000] Parsing & checking topology file: clab.yaml
INFO[0000] Creating lab directory: /root/wcni-kind/flannel/6-flannel-ipip-directrouting/clab-flannel-ipip-directrouting
WARN[0000] node clab-flannel-ipip-directrouting-worker2 referenced in namespace sharing not found in topology definition, considering it an external dependency.
WARN[0000] node clab-flannel-ipip-directrouting-worker3 referenced in namespace sharing not found in topology definition, considering it an external dependency.
WARN[0000] node clab-flannel-ipip-directrouting-control-plane referenced in namespace sharing not found in topology definition, considering it an external dependency.
WARN[0000] node clab-flannel-ipip-directrouting-worker referenced in namespace sharing not found in topology definition, considering it an external dependency.
INFO[0000] Creating container: "gw0"
INFO[0001] Created link: gw0:eth1 <--> br-pool0:br-pool0-net2
INFO[0001] Created link: gw0:eth2 <--> br-pool1:br-pool1-net2
INFO[0003] Creating container: "server3"
INFO[0003] Creating container: "server2"
INFO[0004] Created link: br-pool0:br-pool0-net1 <--> server2:net0
INFO[0004] Created link: br-pool1:br-pool1-net0 <--> server3:net0
INFO[0005] Creating container: "server4"
INFO[0005] Creating container: "server1"
INFO[0006] Created link: br-pool0:br-pool0-net0 <--> server1:net0
INFO[0006] Created link: br-pool1:br-pool1-net1 <--> server4:net0
INFO[0006] Executed command "ip addr add 10.1.5.11/24 dev net0" on the node "server2". stdout:
INFO[0006] Executed command "ip route replace default via 10.1.5.1" on the node "server2". stdout:
INFO[0006] Executed command "ip addr add 10.1.8.10/24 dev net0" on the node "server3". stdout:
INFO[0006] Executed command "ip route replace default via 10.1.8.1" on the node "server3". stdout:
INFO[0006] Executed command "ip addr add 10.1.5.10/24 dev net0" on the node "server1". stdout:
INFO[0006] Executed command "ip route replace default via 10.1.5.1" on the node "server1". stdout:
INFO[0006] Executed command "ip addr add 10.1.8.11/24 dev net0" on the node "server4". stdout:
INFO[0006] Executed command "ip route replace default via 10.1.8.1" on the node "server4". stdout:
INFO[0006] Adding containerlab host entries to /etc/hosts file
INFO[0006] Adding ssh config for containerlab nodes
INFO[0006] 🎉 New containerlab version 0.56.0 is available! Release notes: https://containerlab.dev/rn/0.56/
Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/
+---+-----------------------------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
| # | Name | Container ID | Image | Kind | State | IPv4 Address | IPv6 Address |
+---+-----------------------------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
| 1 | clab-flannel-ipip-directrouting-gw0 | 3e133b73fa87 | vyos/vyos:1.2.8 | linux | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 2 | clab-flannel-ipip-directrouting-server1 | 71cd9cb72a9d | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A | N/A |
| 3 | clab-flannel-ipip-directrouting-server2 | 843eb9b12b31 | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A | N/A |
| 4 | clab-flannel-ipip-directrouting-server3 | e722bfec3127 | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A | N/A |
| 5 | clab-flannel-ipip-directrouting-server4 | 1281563feb4f | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A | N/A |
+---+-----------------------------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
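A hedged underlay smoke test before installing flannel (it assumes ping is available in the nettool image):

```bash
# server1 (10.1.5.10) -> its gateway on gw0
docker exec clab-flannel-ipip-directrouting-server1 ping -c 1 10.1.5.1

# server1 (10.1.5.10) -> server3 (10.1.8.10), routed across gw0
docker exec clab-flannel-ipip-directrouting-server1 ping -c 1 10.1.8.10
```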
Check the k8s cluster information
root@kind:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
clab-flannel-ipip-directrouting-control-plane NotReady control-plane,master 11m v1.23.4 172.18.0.4 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker NotReady <none> 11m v1.23.4 172.18.0.2 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker2 NotReady <none> 11m v1.23.4 172.18.0.5 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker3 NotReady <none> 11m v1.23.4 172.18.0.3 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
# View the node IP information
root@kind:~# docker exec -it clab-flannel-ipip-directrouting-control-plane ip a l
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00:f853:ccd:e793::4/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:4/64 scope link
valid_lft forever preferred_lft forever
33: net0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default
link/ether aa:c1:ab:84:70:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.5.10/24 scope global net0
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:fe84:70e7/64 scope link
valid_lft forever preferred_lft forever
# View the node routing information
root@kind:~# docker exec -it clab-flannel-ipip-directrouting-control-plane ip r s
default via 10.1.5.1 dev net0
10.1.5.0/24 dev net0 proto kernel scope link src 10.1.5.10
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4
Inspecting the k8s cluster shows that the nodes have been assigned their new business IP addresses: logging into a node container shows the new address on net0, and the routing table has been adjusted accordingly (default via 10.1.5.1 dev net0, with the connected route 10.1.5.0/24 dev net0 proto kernel scope link src 10.1.5.10).
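The same check can be repeated over all four nodes with a small loop (a sketch using the node names created above):

```bash
for node in control-plane worker worker2 worker3; do
  echo "== clab-flannel-ipip-directrouting-$node =="
  docker exec clab-flannel-ipip-directrouting-$node ip route show default
done
```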
Install the flannel service

flannel.yaml configuration file
# flannel.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "ipip",
        "DirectRouting": true
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        #image: 192.168.2.100:5000/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: harbor.dayuan1997.com/devops/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        #image: 192.168.2.100:5000/rancher/mirrored-flannelcni-flannel:v0.19.2
        image: harbor.dayuan1997.com/devops/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        #image: 192.168.2.100:5000/rancher/mirrored-flannelcni-flannel:v0.19.2
        image: harbor.dayuan1997.com/devops/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
        - name: tun
          mountPath: /dev/net/tun
      volumes:
      - name: tun
        hostPath:
          path: /dev/net/tun
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
flannel.yaml parameter explanation

- Backend.Type — meaning: selects the flannel backend (working mode). ipip: flannel runs in ipip mode.
- Backend.DirectRouting — meaning: makes ipip mode use host-gw behavior within a subnet. true: in ipip mode, traffic between nodes on the same subnet is forwarded host-gw style (plain routing, no tunnel); only traffic between nodes on different subnets is IPIP-encapsulated. The sketch after this list shows how this looks in a node's routing table.
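What DirectRouting looks like in practice (a preview of the routing table inspected later in this document):

```bash
# On clab-flannel-ipip-directrouting-worker2 (10.1.8.10), after flannel is up:
docker exec clab-flannel-ipip-directrouting-worker2 ip route show | grep 10.244
# Expected shape:
#   10.244.0.0/24 via 10.1.5.10 dev flannel.ipip onlink   <- other subnet: IPIP tunnel
#   10.244.1.0/24 via 10.1.5.11 dev flannel.ipip onlink   <- other subnet: IPIP tunnel
#   10.244.2.0/24 via 10.1.8.11 dev net0 onlink           <- same subnet: host-gw style
#   10.244.3.0/24 dev cni0 proto kernel ...               <- local Pod CIDR
```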
root@kind:~# kubectl apply -f flannel.yaml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
- Check the k8s cluster and the flannel service
root@kind:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
clab-flannel-ipip-directrouting-control-plane Ready control-plane,master 14m v1.23.4 172.18.0.4 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker Ready <none> 13m v1.23.4 172.18.0.2 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker2 Ready <none> 13m v1.23.4 172.18.0.5 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
clab-flannel-ipip-directrouting-worker3 Ready <none> 13m v1.23.4 172.18.0.3 <none> Ubuntu 21.10 5.11.5-051105-generic containerd://1.5.10
- View the installed services
root@kind:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-7rmd8 1/1 Running 0 14m
kube-system coredns-64897985d-9qzts 1/1 Running 0 14m
kube-system etcd-clab-flannel-ipip-directrouting-control-plane 1/1 Running 0 14m
kube-system kube-apiserver-clab-flannel-ipip-directrouting-control-plane 1/1 Running 0 14m
kube-system kube-controller-manager-clab-flannel-ipip-directrouting-control-plane 1/1 Running 0 14m
kube-system kube-flannel-ds-c7bnc 1/1 Running 0 34s
kube-system kube-flannel-ds-ch8r5 1/1 Running 0 34s
kube-system kube-flannel-ds-gcpmk 1/1 Running 0 34s
kube-system kube-flannel-ds-nm8mw 1/1 Running 0 34s
kube-system kube-proxy-cr88l 1/1 Running 0 14m
kube-system kube-proxy-fs8k7 1/1 Running 0 13m
kube-system kube-proxy-hb76d 1/1 Running 0 13m
kube-system kube-proxy-q2d2w 1/1 Running 0 13m
kube-system kube-scheduler-clab-flannel-ipip-directrouting-control-plane 1/1 Running 0 14m
local-path-storage local-path-provisioner-5ddd94ff66-kgdw2 1/1 Running 0 14m
Install Pods in the k8s cluster to test the network
root@kind:~# cat cni.yaml
apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  labels:
    app: cni
  name: cni
spec:
  #replicas: 1
  selector:
    matchLabels:
      app: cni
  template:
    metadata:
      labels:
        app: cni
    spec:
      containers:
      - image: harbor.dayuan1997.com/devops/nettool:0.9
        name: nettoolbox
        securityContext:
          privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: serversvc
spec:
  type: NodePort
  selector:
    app: cni
  ports:
  - name: cni
    port: 80
    targetPort: 80
    nodePort: 32000
root@kind:~# kubectl apply -f cni.yaml
daemonset.apps/cni created
service/serversvc created
root@kind:~# kubectl run net --image=harbor.dayuan1997.com/devops/nettool:0.9
pod/net created
- View the deployed services
root@kind:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cni-8vvgl 1/1 Running 0 19s 10.244.3.2 clab-flannel-ipip-directrouting-worker2 <none> <none>
cni-gqht9 1/1 Running 0 19s 10.244.1.2 clab-flannel-ipip-directrouting-worker <none> <none>
cni-rk5hx 1/1 Running 0 19s 10.244.2.2 clab-flannel-ipip-directrouting-worker3 <none> <none>
cni-wfv4m 1/1 Running 0 19s 10.244.0.5 clab-flannel-ipip-directrouting-control-plane <none> <none>
net 1/1 Running 0 18s 10.244.3.3 clab-flannel-ipip-directrouting-worker2 <none> <none>
root@kind:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m
serversvc NodePort 10.96.171.181 <none> 80:32000/TCP 26s
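A hedged NodePort check from another lab container (it assumes curl is present in the nettool image and that the Pod actually serves HTTP on port 80, as the Service's targetPort implies):

```bash
# Hit the NodePort on the control-plane node's business IP from server3 (10.1.8.10)
docker exec clab-flannel-ipip-directrouting-server3 curl -s 10.1.5.10:32000
```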
3. Testing the Network

Same-node Pod communication

See the Flannel UDP mode document: the packet forwarding flow for same-node communication is identical.

Same-node flannel communication happens over an L2 network and is completed by the layer-2 switch (the cni0 bridge); the sketch below makes the bridge membership visible.
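A quick way to see this on a node (a sketch; it assumes bridge-utils, which install.sh installs into the kind nodes):

```bash
# Both Pods on worker2 (cni-8vvgl and net) hang off the cni0 bridge;
# their host-side veth ends show up as bridge ports.
docker exec clab-flannel-ipip-directrouting-worker2 brctl show cni0
```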
Cross-node Pod communication, nodes in the same subnet

See the Flannel HOST-GW mode document: the packet forwarding flow for Pod communication across nodes is identical. The ip route get sketch below confirms the tunnel is bypassed.
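With DirectRouting enabled, one way to confirm that same-subnet traffic bypasses the tunnel is to ask the kernel which route it would pick (a sketch; 10.244.2.2 is the cni Pod on worker3, which shares the 10.1.8.0/24 subnet with worker2, while 10.244.1.2 sits behind a node on 10.1.5.0/24):

```bash
# Same node subnet: expect "... via 10.1.8.11 dev net0 ..." -- plain routing, no tunnel
docker exec clab-flannel-ipip-directrouting-worker2 ip route get 10.244.2.2

# Different node subnet: expect "... via 10.1.5.11 dev flannel.ipip ..." -- IPIP tunnel
docker exec clab-flannel-ipip-directrouting-worker2 ip route get 10.244.1.2
```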
Cross-node Pod communication, nodes in different subnets

Pod information
## IP information
root@kind:~# kubectl exec -it net -- ip a l
4: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9480 qdisc noqueue state UP group default
link/ether 66:1e:0c:01:b0:60 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.3.3/24 brd 10.244.3.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::641e:cff:fe01:b060/64 scope link
valid_lft forever preferred_lft forever
## Routing information
root@kind:~# kubectl exec -it net -- ip r s
default via 10.244.3.1 dev eth0
10.244.0.0/16 via 10.244.3.1 dev eth0
10.244.3.0/24 dev eth0 proto kernel scope link src 10.244.3.3
Information on the Node hosting the Pod
root@kind:~# docker exec -it clab-flannel-ipip-directrouting-worker2 bash
## IP information
root@clab-flannel-ipip-directrouting-worker2:/# ip a l
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: flannel.ipip@NONE: <NOARP,UP,LOWER_UP> mtu 9480 qdisc noqueue state UNKNOWN group default
link/ipip 10.1.8.10 brd 0.0.0.0
inet 10.244.3.0/32 scope global flannel.ipip
valid_lft forever preferred_lft forever
inet6 fe80::5efe:a01:80a/64 scope link
valid_lft forever preferred_lft forever
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9480 qdisc noqueue state UP group default qlen 1000
link/ether 6a:99:0f:73:74:63 brd ff:ff:ff:ff:ff:ff
inet 10.244.3.1/24 brd 10.244.3.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::6899:fff:fe73:7463/64 scope link
valid_lft forever preferred_lft forever
5: veth16279eba@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9480 qdisc noqueue master cni0 state UP group default
link/ether ba:68:cc:a7:7d:a3 brd ff:ff:ff:ff:ff:ff link-netns cni-9ebdf794-a2f7-5460-6d21-78f82e9d4ba3
inet6 fe80::b868:ccff:fea7:7da3/64 scope link
valid_lft forever preferred_lft forever
6: veth658091d4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9480 qdisc noqueue master cni0 state UP group default
link/ether 1a:f4:bc:3c:bb:48 brd ff:ff:ff:ff:ff:ff link-netns cni-369690a1-a496-8cbf-bba6-2bdc1ffd448f
inet6 fe80::18f4:bcff:fe3c:bb48/64 scope link
valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00:f853:ccd:e793::5/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:5/64 scope link
valid_lft forever preferred_lft forever
31: net0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default
link/ether aa:c1:ab:bf:0d:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.8.10/24 scope global net0
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:febf:d6b/64 scope link
valid_lft forever preferred_lft forever
## Routing information
root@clab-flannel-ipip-directrouting-worker2:/# ip r s
default via 10.1.8.1 dev net0
10.1.8.0/24 dev net0 proto kernel scope link src 10.1.8.10
10.244.0.0/24 via 10.1.5.10 dev flannel.ipip onlink
10.244.1.0/24 via 10.1.5.11 dev flannel.ipip onlink
10.244.2.0/24 via 10.1.8.11 dev net0 onlink
10.244.3.0/24 dev cni0 proto kernel scope link src 10.244.3.1
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5
Run a ping test from the net Pod to the cni-gqht9 Pod:
root@kind:~# kubectl exec -it net -- ping 10.244.1.2 -c 1
PING 10.244.1.2 (10.244.1.2): 56 data bytes
64 bytes from 10.244.1.2: seq=0 ttl=62 time=1.562 ms
--- 10.244.1.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.562/1.562/1.562 ms
Packet capture on the Pod's eth0 interface
net~$ tcpdump -pne -i eth0
07:32:41.821687 66:1e:0c:01:b0:60 > 6a:99:0f:73:74:63, ethertype IPv4 (0x0800), length 98: 10.244.3.3 > 10.244.1.2: ICMP echo request, id 60, seq 0, length 64
07:32:41.821796 6a:99:0f:73:74:63 > 66:1e:0c:01:b0:60, ethertype IPv4 (0x0800), length 98: 10.244.1.2 > 10.244.3.3: ICMP echo reply, id 60, seq 0, length 64
The source MAC 66:1e:0c:01:b0:60 is the MAC of the Pod's eth0 interface, and the destination MAC 6a:99:0f:73:74:63 is the MAC of the node's cni0 interface. cni0 holds the Pod network's gateway address, 10.244.3.1; flannel presents a layer-2 network here, so the Pod routes the packet to its gateway.
net~$ arp -n
Address HWtype HWaddress Flags Mask Iface
10.244.3.1 ether 6a:99:0f:73:74:63 C eth0
From the veth pair numbering we can determine that the host-side peer of the Pod's eth0 interface is veth658091d4@if4; the sketch below shows how to resolve the mapping.
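A hedged way to resolve this mapping without reading the numbers by eye: take the peer ifindex after the @if suffix inside the Pod, then look that index up on the node:

```bash
# Inside the Pod: "4: eth0@if6" means the peer has ifindex 6 in the node's netns
kubectl exec net -- ip -o link show eth0

# On the node: ifindex 6 resolves to the host-side veth (veth658091d4@if4 above)
docker exec clab-flannel-ipip-directrouting-worker2 ip -o link show | awk -F': ' '$1 == 6 {print $2}'
```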
Packet capture on veth658091d4 on node clab-flannel-ipip-directrouting-worker2
root@clab-flannel-ipip-directrouting-worker2:/# tcpdump -pne -i veth658091d4
07:32:41.821687 66:1e:0c:01:b0:60 > 6a:99:0f:73:74:63, ethertype IPv4 (0x0800), length 98: 10.244.3.3 > 10.244.1.2: ICMP echo request, id 60, seq 0, length 64
07:32:41.821796 6a:99:0f:73:74:63 > 66:1e:0c:01:b0:60, ethertype IPv4 (0x0800), length 98: 10.244.1.2 > 10.244.3.3: ICMP echo reply, id 60, seq 0, length 64
Because the two interfaces are the two ends of one veth pair, the captures are identical.
Packet capture on cni0 on node clab-flannel-ipip-directrouting-worker2
root@clab-flannel-ipip-directrouting-worker2:/# tcpdump -pne -i cni0
07:32:41.821687 66:1e:0c:01:b0:60 > 6a:99:0f:73:74:63, ethertype IPv4 (0x0800), length 98: 10.244.3.3 > 10.244.1.2: ICMP echo request, id 60, seq 0, length 64
07:32:41.821796 6a:99:0f:73:74:63 > 66:1e:0c:01:b0:60, ethertype IPv4 (0x0800), length 98: 10.244.1.2 > 10.244.3.3: ICMP echo reply, id 60, seq 0, length 64
The source MAC 66:1e:0c:01:b0:60 is the MAC of the net Pod's eth0 interface, and the destination MAC 6a:99:0f:73:74:63 is the MAC of cni0.
Checking the routing table on clab-flannel-ipip-directrouting-worker2 shows that the packet is then forwarded according to the route 10.244.1.0/24 via 10.1.5.11 dev flannel.ipip onlink.
Packet capture on flannel.ipip on node clab-flannel-ipip-directrouting-worker2
root@clab-flannel-ipip-directrouting-worker2:/# tcpdump -pne -i flannel.ipip icmp
listening on flannel.ipip, link-type RAW (Raw IP), snapshot length 262144 bytes
07:34:59.737937 ip: 10.244.3.3 > 10.244.1.2: ICMP echo request, id 86, seq 0, length 64
07:34:59.738009 ip: 10.244.1.2 > 10.244.3.3: ICMP echo reply, id 86, seq 0, length 64
This ICMP capture carries no MAC information, only source and destination IPs. That is the nature of an ipip packet: an IPIP tunnel works by encapsulating the source host's IP packet inside a new (outer) IP packet.
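On the wire the encapsulation is visible as IP protocol 4 (IP-in-IP), so the outer packets can be captured explicitly (a sketch; tcpdump was installed on the nodes by install.sh):

```bash
# Outer header: 10.1.8.10 -> 10.1.5.11 (node to node); inner: 10.244.3.3 -> 10.244.1.2
docker exec clab-flannel-ipip-directrouting-worker2 tcpdump -pne -i net0 'ip proto 4'
```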
Packet capture on net0 on node clab-flannel-ipip-directrouting-worker2
- In the request packet's outer MAC header, the source MAC aa:c1:ab:bf:0d:6b is the net0 MAC of clab-flannel-ipip-directrouting-worker2, and the destination MAC aa:c1:ab:25:67:20 is the eth2 MAC of the gw0 host.
- ipip device information on node clab-flannel-ipip-directrouting-worker2:
root@clab-flannel-ipip-directrouting-worker2:/# ip -d link show
3: flannel.ipip@NONE: <NOARP,UP,LOWER_UP> mtu 9480 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ipip 10.1.8.10 brd 0.0.0.0 promiscuity 0 minmtu 0 maxmtu 0
ipip any remote any local 10.1.8.10 ttl inherit nopmtudisc addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
Packet flow

- The packet leaves the Pod and, per the Pod's routing table, is sent toward 10.244.3.1. Route: 10.244.0.0/16 via 10.244.3.1 dev eth0
- Via the veth pair interface veth658091d4 the data reaches the clab-flannel-ipip-directrouting-worker2 host and is forwarded to the cni0 interface (10.244.3.1).
- After consulting its own routing table, the clab-flannel-ipip-directrouting-worker2 host hands the packet to the flannel.ipip interface, because the destination address is 10.244.1.2. Route: 10.244.1.0/24 via 10.1.5.11 dev flannel.ipip onlink
- The flannel.ipip interface works in ipip mode and re-encapsulates the packet (the inner IP packet is wrapped in a new outer IP header).
- Once encapsulation completes, the packet is sent out the net0 interface toward the gw0 host.
- When gw0 receives the packet, it sees the (outer) destination address 10.1.5.11, consults its routing table, and sends the packet out the eth1 interface. Route: 10.1.5.0/24 dev eth1 proto kernel scope link src 10.1.5.1
- With a fresh Ethernet header written on gw0's eth1 interface, the packet is finally delivered to the clab-flannel-ipip-directrouting-worker host.
- The receiving clab-flannel-ipip-directrouting-worker host recognizes this as an ipip packet, strips the outer IP header, and extracts the inner IP packet.
- The inner packet is handed to the flannel.ipip interface for processing, just as if an ordinary IP packet had been received.
- After decapsulation the inner packet's destination address is 10.244.1.2; per the host's routing table it is sent to the cni0 interface. Route: 10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
- Using the cni0 bridge's MAC table (brctl showmacs cni0), the packet is finally delivered to the cni-gqht9 Pod; see the sketch after this list.
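A sketch of that final L2 lookup on the destination node (assuming bridge-utils is installed there; the cni-gqht9 Pod's MAC should appear as a non-local entry on a cni0 port):

```bash
docker exec clab-flannel-ipip-directrouting-worker brctl showmacs cni0
# Or with iproute2 only:
docker exec clab-flannel-ipip-directrouting-worker bridge fdb show br cni0
```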
Service network communication

See the Flannel UDP mode document: the packet forwarding flow for Service communication is identical.