Learning how Istio's sidecar works

Published by 俞正東 on 2021-12-26

Purpose

Moving from the SDK-embedded-in-the-application model to Istio service mesh, and now to the newly proposed proxyless mode, things are evolving very fast. I originally set out only to study how service registration and discovery changed along the way, but the topic turned out to be rather large, so I have to take it step by step:

  • How does the sidecar take over traffic?
  • Setting aside the existing microservice stack, how can registration and discovery be implemented, and in how many ways?
  • How should registration and discovery be integrated with the existing microservice stack?

I'll work through these step by step; by sticking to this main thread and digging steadily, there is bound to be something to learn.

Today I'd like to share the first topic: how the sidecar takes over traffic.

The whole Istio bookinfo environment was set up as follows.

Following the official docs:

  • First install Kubernetes (I use minikube)
  • Then install Istio
  • Enable sidecar injection for the default namespace and deploy the bookinfo sample

The diagram below shows the request path for the bookinfo sample when it is managed by Istio:

[image: request path of the Istio-managed bookinfo sample]

Sidecar injection

First enable sidecar injection:

kubectl label namespace default istio-injection=enabled
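
To double-check that the label took effect, you can list namespaces with the injection label column shown (a quick verification step, not part of the original walkthrough):

# confirm the default namespace carries istio-injection=enabled
kubectl get namespace -L istio-injection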

Deploy bookinfo:


kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Once it succeeds:

  • the Services are created:
[image: kubectl get svc output]
  • and the Pods are created:
[image: kubectl get pods output]
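
The screenshots above correspond to plain kubectl listings along these lines (shown here for completeness):

# the bookinfo Services: details, productpage, ratings, reviews
kubectl get svc

# each bookinfo Pod should report 2/2 READY: the app container plus the injected istio-proxy
kubectl get pods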

Sidecar injection relies on Kubernetes' mutating admission webhook mechanism: when the Pod resource is created, the webhook patches Istio's configuration into the Pod spec.
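
To look at the webhook itself, you can inspect the mutating webhook configuration that istiod registers; a quick sketch (the exact resource name can vary with the install profile or revision):

# list mutating webhooks; Istio's sidecar injector is registered here
kubectl get mutatingwebhookconfigurations

# inspect its rules: it targets namespaces labeled istio-injection=enabled
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml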

What does the Pod resource look like after the webhook has modified it?

Let's analyze productpage first.


##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---    

The Dockerfile for the productpage image:

[image: productpage Dockerfile]

productpage is a Python application written with Flask.


Use the following command to inspect what the productpage configuration looks like after sidecar injection:

kubectl describe pod -l app=productpage


Name:         productpage-v1-6b746f74dc-59t8d
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Sat, 25 Dec 2021 16:53:08 +0800
Labels:       app=productpage
              pod-template-hash=6b746f74dc
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=productpage
              service.istio.io/canonical-revision=v1
              version=v1
Annotations:  kubectl.kubernetes.io/default-container: productpage
              kubectl.kubernetes.io/default-logs-container: productpage
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/status:
                {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
Status:       Running
IP:           172.17.0.7
IPs:
  IP:           172.17.0.7
Controlled By:  ReplicaSet/productpage-v1-6b746f74dc
Init Containers:
  istio-init:
    Container ID:  docker://81d9c7297737675de742388e54de51d21598da8e6a63b81d293c69b848e92ba7
    Image:         docker.io/istio/proxyv2:1.12.1
    Image ID:      docker-pullable://istio/proxyv2@sha256:4704f04f399ae24d99e65170d1846dc83d7973f186656a03ba70d47bd1aba88f
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 26 Dec 2021 14:23:05 +0800
      Finished:     Sun, 26 Dec 2021 14:23:05 +0800
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        10m
      memory:     40Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clsdq (ro)
Containers:
  productpage:
    Container ID:   docker://421334a3a5262bdbb29158c423cce3464e19ac3267c7c6240ca5d89a1d38962a
    Image:          docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
    Image ID:       docker-pullable://istio/examples-bookinfo-productpage-v1@sha256:63ac3b4fb6c3ba395f5d044b0e10bae513afb34b9b7d862b3a7c3de7e0686667
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 26 Dec 2021 14:23:11 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 25 Dec 2021 16:56:01 +0800
      Finished:     Sat, 25 Dec 2021 22:51:12 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clsdq (ro)
  istio-proxy:
    Container ID:  docker://c3d8a9d57e632112f86441fafad64eb73df3ea4f7317dbfb72152107c493866b
    Image:         docker.io/istio/proxyv2:1.12.1
    Image ID:      docker-pullable://istio/proxyv2@sha256:4704f04f399ae24d99e65170d1846dc83d7973f186656a03ba70d47bd1aba88f
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Sun, 26 Dec 2021 14:23:13 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 25 Dec 2021 16:56:01 +0800
      Finished:     Sat, 25 Dec 2021 22:51:15 +0800
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      10m
      memory:   40Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      productpage-v1-6b746f74dc-59t8d (v1:metadata.name)
      POD_NAMESPACE:                 default (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"containerPort":9080,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     productpage
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      productpage-v1
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/default/deployments/productpage-v1
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clsdq (ro)
      /var/run/secrets/tokens from istio-token (rw)
(remaining output omitted)

You can see that configuration for two additional components has been added:

  • istio-init
  • istio-proxy
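
If you want to see the full injected spec without touching the cluster, istioctl can render the injection locally; a sketch:

# render the bookinfo manifest with the sidecar injected, without applying it
istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | less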

istio-init

The init container configures iptables for the Pod so that all traffic entering and leaving the Pod is redirected to the ports the sidecar listens on.

Image used: docker.io/istio/proxyv2:1.12.1

In newer versions of Istio this logic has been rewritten in Go; older versions used a shell script to configure iptables.

Reading the old version may make it easier to understand: https://github.com/istio/istio/blob/0.5.0/tools/deb/istio-iptables.sh

Startup arguments:

 istio-iptables
      -p 15001                // redirect all outbound traffic to this port
      -z 15006                // redirect all inbound traffic to this port
      -u 1337                 // UID of the proxy itself; its traffic is excluded from redirection
      -m REDIRECT             // interception mode
      -i * -x "" -b *         // match the full range of IP addresses and ports
      -d 15090,15021,15020    // exclude these internal ports from interception (see the table below)
[image: table of the sidecar's internal ports 15090/15021/15020]
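
Conceptually, what istio-init does with these arguments boils down to NAT rules of roughly the following shape. This is a simplified sketch, not the literal rules the Go implementation writes, though the chain names are the ones istio-iptables really creates:

# outbound: TCP traffic leaving the pod is redirected to Envoy's outbound port 15001
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001

# inbound: TCP traffic arriving at the pod is redirected to Envoy's inbound port 15006,
# except for the ports excluded with -d (15090, 15021, 15020)
iptables -t nat -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006

# traffic generated by UID 1337 (the istio-proxy user, set with -u) is left alone,
# so Envoy's own outbound connections are not looped back into Envoy
iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN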

istio-proxy

Starts the sidecar proxy.

Image used: docker.io/istio/proxyv2:1.12.1


 proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
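
These arguments start pilot-agent, which generates Envoy's bootstrap configuration and then launches Envoy. Once the Pod is up you can confirm that Envoy really listens on the interception ports; a sketch using the pod name from the describe output above:

# list Envoy's listeners for the productpage pod; 15001 (outbound) and 15006 (inbound)
# are the virtual listeners that the iptables rules redirect traffic into
istioctl proxy-config listeners productpage-v1-6b746f74dc-59t8d.default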

Checking how iptables is configured

# enter minikube
minikube ssh

# switch to the root user; minikube's default user is docker
sudo -i

# find the processes started by this istio-proxy container (one is pilot-agent, the other is envoy)
docker top `docker ps | grep "istio-proxy_productpage" | cut -d " " -f1`


# enter the network namespace of one of the PIDs printed above (either one works)
nsenter -n --target 5141
[image: docker top output showing the pilot-agent and envoy processes]
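
With the shell now inside the Pod's network namespace, you can dump the NAT table to see the chains that istio-init created; a sketch of the commands:

# list the NAT rules; look for the ISTIO_INBOUND, ISTIO_OUTPUT,
# ISTIO_REDIRECT and ISTIO_IN_REDIRECT chains
iptables -t nat -L -n -v

# or print them as rule statements
iptables -t nat -S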

After Istio injection, the containers that make up the Pod are:

  • k8s_infra (also called the pause container; think of it as the Pod's base)
  • istio_init (configures iptables)
  • app (the application container)
  • istio_proxy (starts pilot-agent and envoy, as seen in the output above)

My understanding is that the startup order also follows the list above from top to bottom: for containers in the same Pod to share a network stack, there has to be a base container to anchor it, and the other containers join its network namespace via --net=container:<base-container-id>. So I believe the infra (pause) container must start first.

Whether the init container or the infra container starts first is something I haven't found documentation on; if my understanding is wrong, please set me straight!
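
One way to observe the network-namespace sharing directly is to check each container's NetworkMode with docker inspect; a sketch (run inside minikube ssh, container IDs will differ):

# list the containers that belong to the productpage pod
docker ps | grep productpage

# the app and istio-proxy containers report NetworkMode "container:<pause-id>",
# i.e. they join the network namespace of the k8s_POD (pause) container
docker inspect -f '{{.HostConfig.NetworkMode}}' <container-id>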

Summary

Istio's sidecar injection uses iptables to intercept all of the Pod's inbound and outbound traffic and redirect it to Envoy.

Back to the productpage code:

# product_id defaults to 0
def getProductReviews(product_id, headers):
    # Do not remove. Bug introduced explicitly for illustration in fault injection task
    # TODO: Figure out how to achieve the same effect using Envoy retries/timeouts
    for _ in range(2):
        try:
            url = reviews['name'] + "/" + reviews['endpoint'] + "/" + str(product_id)
            res = requests.get(url, headers=headers, timeout=3.0)
        except BaseException:
            res = None
        if res and res.status_code == 200:
            return 200, res.json()
    status = res.status_code if res is not None and res.status_code else 500
    return status, {'error': 'Sorry, product reviews are currently unavailable for this book.'}

The reviews URL being accessed is: http://reviews:9080/reviews/0
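
You can reproduce that call from inside the productpage container, and the request transparently passes through the sidecar; a sketch that reuses the requests library productpage already ships with (curl may not be installed in the image):

# issue the same request productpage makes; the sidecar intercepts it on the way out
kubectl exec deploy/productpage-v1 -c productpage -- \
  python -c "import requests; print(requests.get('http://reviews:9080/reviews/0', timeout=3).status_code)"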

The flow after the traffic has been intercepted by the sidecar is shown in the diagram below:

[image: request flow after sidecar interception]

Here you are bound to have a question:

http://reviews:9080/reviews/0

How does Envoy learn the concrete address behind this domain-style request? Does it go through kube-proxy? That's the topic for next time!

Thanks to my good friend Raphael.Goh and Istio expert dozer for their technical guidance, which is letting me slowly step inside and appreciate its beauty!
