Istio Traffic Management (Hands-on, Part 1) (Istio Series 3)

charlieroro, published 2020-05-14

This post uses the official Bookinfo application for testing. It covers request routing, fault injection, traffic shifting, TCP traffic shifting, request timeouts, circuit breaking, and traffic mirroring from the Traffic Management chapter of the official documentation. Ingress and egress are not included and will be covered in a follow-up.

Deploying the Bookinfo application

About the Bookinfo application

The official test application consists of the following four microservices:

  • productpage: calls the details and reviews services to render the web page.
  • details: contains book information.
  • reviews: contains book reviews and calls the ratings service.
  • ratings: contains the book-ranking information that accompanies the reviews.

There are three versions of the reviews service:

  • v1 does not call the ratings service.
  • v2 calls the ratings service and displays each rating as 1 to 5 black stars.
  • v3 calls the ratings service and displays each rating as 1 to 5 red stars.

Deployment

The Bookinfo application is deployed in the default namespace with automatic sidecar injection:

  • Enable automatic sidecar injection for the default namespace (any other namespace would also work, since the Bookinfo manifests do not specify one). On OpenShift, a NetworkAttachmentDefinition for istio-cni is also needed in the target namespace:

    $ cat <<EOF | oc -n <target-namespace> create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    
    $ kubectl label namespace default istio-injection=enabled
    
  • Switch to the default namespace and deploy the Bookinfo application:

    $ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    

    After a short wait, all of the Bookinfo pods are up and running. Check the pods and services:

    $ oc get pod
    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-78d78fbddf-5mfv9       2/2     Running   0          2m27s
    productpage-v1-85b9bf9cd7-mfn47   2/2     Running   0          2m27s
    ratings-v1-6c9dbf6b45-nm6cs       2/2     Running   0          2m27s
    reviews-v1-564b97f875-ns9vz       2/2     Running   0          2m27s
    reviews-v2-568c7c9d8f-6r6rq       2/2     Running   0          2m27s
    reviews-v3-67b4988599-ddknm       2/2     Running   0          2m27s
    
    $ oc get svc                                              
    NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    details       ClusterIP      10.84.97.183   <none>        9080/TCP   3m33s
    kubernetes    ClusterIP      10.84.0.1      <none>        443/TCP    14d
    productpage   ClusterIP      10.84.98.111   <none>        9080/TCP   3m33s
    ratings       ClusterIP      10.84.237.68   <none>        9080/TCP   3m33s
    reviews       ClusterIP      10.84.39.249   <none>        9080/TCP   3m33s
    

    Use the following command to verify that the Bookinfo application is installed correctly:

    $ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
    
    <title>Simple Bookstore App</title> # expected output
    

    You can also access the application directly through the service endpoint:

    $ oc describe svc productpage|grep Endpoint
    Endpoints:         10.83.1.85:9080
    
    $ curl -s 10.83.1.85:9080/productpage | grep -o "<title>.*</title>"
    

    On OpenShift you can also create a Route (OpenShift's counterpart of a Kubernetes ingress) for access; replace ${HOST_NAME} with the actual host name:

    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: productpage
      namespace: default
      labels:
        app: productpage
        service: productpage
      annotations:
        openshift.io/host.generated: 'true'
    spec:
      host: ${HOST_NAME}
      to:
        kind: Service
        name: productpage
        weight: 100
      port:
        targetPort: http
      wildcardPolicy: None
    

    The ingress described in the official documentation is not configured here; it will be covered later.

  • Configure the default destination rules

    Without mutual TLS (the recommended starting point when first learning Istio):

    $ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
    

    With mutual TLS:

    $ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
    

    Retrieve the configured destination rules:

    $ kubectl get destinationrules -o yaml
    

    The retrieved destination rules are shown below. Note that in the default installation only the reviews service actually has multiple versions deployed; the other services only have v1, even though additional subsets are defined (see the quick check after the listing).

    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: details
        namespace: default
      spec:
        host: details    # corresponds to the Kubernetes service "details"
        subsets:
        - labels:        # the actual details deployment only carries the label "version: v1"
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: productpage
        namespace: default
      spec:
        host: productpage
        subsets:
        - labels:
            version: v1
          name: v1
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: ratings
        namespace: default
      spec:
        host: ratings
        subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
        - labels:
            version: v2-mysql
          name: v2-mysql
        - labels:
            version: v2-mysql-vm
          name: v2-mysql-vm
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: reviews     # the Kubernetes service "reviews" actually has 3 versions deployed
        namespace: default
      spec:
        host: reviews
        subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
        - labels:
            version: v3
          name: v3
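
    To confirm which versions are actually running, you can check the version labels on the pods (a quick sketch; -L simply adds a column showing the value of each pod's version label):

    $ kubectl get pods -L version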
    

Uninstall

Use the following command to uninstall Bookinfo:

$ samples/bookinfo/platform/kube/cleanup.sh

Traffic management

Request routing

The following shows how to dynamically route requests across the multiple versions of the official Bookinfo microservices. After deploying Bookinfo as above, there are three versions of the reviews service, displaying no ratings, black-star ratings, and red-star ratings respectively. Since Istio distributes requests across all three reviews versions in a round-robin fashion by default, refreshing the /productpage page cycles through the following:

  • v1: no ratings

  • v2: black-star ratings

  • v3: red-star ratings

This task shows how to send requests to only one of the reviews versions.

First, create the following virtual services:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

View the routing configuration:

$ kubectl get virtualservices -o yaml
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: details
    namespace: default
  spec:
    hosts:
    - details
    http:
    - route:
      - destination:
          host: details
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: productpage
    namespace: default
  spec:
    hosts:
    - productpage
    http:
    - route:
      - destination:
          host: productpage
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: ratings
    namespace: default
  spec:
    hosts:
    - ratings
    http:
    - route:
      - destination:
          host: ratings
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: reviews
    namespace: default
  spec:
    hosts:
    - reviews
    http:
    - route:
      - destination: # all traffic is routed to the v1 subset of the reviews service
          host: reviews # the Kubernetes service, which resolves to reviews.default.svc.cluster.local
          subset: v1 # change v1 to v2 here to send all requests to v2 instead

Refreshing the /productpage page now only ever shows the version without ratings; changing the subset to v2, as sketched below, pins everything to v2 instead.
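
A minimal sketch of such a VirtualService (it assumes the reviews DestinationRule with a v2 subset, created above from destination-rule-all.yaml, is still applied):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2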

Uninstall:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Routing based on user identity

The following demonstrates routing based on an HTTP header field. First, open the /productpage page and log in as the user jason (any password works).

Apply the user-based routing rule:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

The VirtualService that gets created is:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - match: # requests whose HTTP headers contain end-user: jason are routed to v2
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route: # requests without the end-user: jason header are routed to v1
    - destination:
        host: reviews
        subset: v1

Refresh the /productpage page: as jason you only see the v2 page (black-star ratings); after logging jason out, only the v1 page (no ratings) is shown.

Uninstall:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

Fault injection

This section uses fault injection to test the resiliency of the application.

First, pin the request path with the following configuration:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

After applying these rules, the request path becomes:

  • productpage → reviews:v2 → ratings (for user jason only)
  • productpage → reviews:v1 (for every other user)

Injecting an HTTP delay fault

To test the resiliency of Bookinfo, inject a 7-second delay between the reviews:v2 and ratings microservices for user jason, simulating an internal Bookinfo bug.

Note that reviews:v2 has a hard-coded 10-second timeout for its call to the ratings service, so even with the injected 7-second delay, no error is expected on that hop.

Inject the fault to delay traffic coming from the test user jason:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml

View the deployed virtual service:

$ kubectl get virtualservice ratings -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault: # inject a 7s delay into all traffic from jason, destined for v1 of the ratings service
      delay:
        fixedDelay: 7s
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route: # traffic not coming from jason is unaffected
    - destination:
        host: ratings
        subset: v1

Open the /productpage page, log in as jason and refresh the browser. The page takes several seconds to load and the reviews section shows an error message. The reason is that the productpage service's call to reviews has its own hard-coded timeout (3 seconds plus one retry, 6 seconds in total), which expires before the injected 7-second delay completes.

A VirtualService for the same service is simply overwritten when a new one is applied, so there is no need to clean up here.

Injecting an HTTP abort fault

Now simulate an HTTP abort fault on the ratings microservice for the test user jason. In this scenario the page displays the error message Ratings service is currently unavailable when it loads.

Inject the HTTP abort for user jason with the following command:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml

Retrieve the deployed ratings virtual service:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault: # respond to requests from user jason directly with a 500 error code
      abort:
        httpStatus: 500
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

Open the /productpage page and log in as jason; the Ratings service is currently unavailable error is displayed. Log jason out and the error disappears.

Remove the injected abort fault:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml

Uninstall

Clean up the environment:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Traffic shifting

This chapter shows how to shift traffic from one version of a microservice to another, for example from an old version to a new one. The shift is usually done gradually, and Istio supports this with percentage-based weights. Note that the weights across versions must add up to 100; otherwise you get an error of the form total destination weight ${weight-total} != 100, where ${weight-total} is the sum of the configured weights.

Weight-based routing

  • First route all traffic to the v1 version of every microservice; opening the /productpage page now shows no rating information at all:

    $ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
    
  • Shift 50% of the traffic from reviews:v1 to reviews:v3 with the following command:

    $ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
    
  • Retrieve the virtual service:

    $ kubectl get virtualservice reviews -o yaml
    
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      annotations:
        ...
      name: reviews
      namespace: default
    spec:
      hosts:
      - reviews
      http:
      - route: # 50% of the traffic goes to v1 and 50% to v3
        - destination:
            host: reviews
            subset: v1
          weight: 50
        - destination:
            host: reviews
            subset: v3
          weight: 50
    
  • Log in and refresh /productpage repeatedly: there is now a 50% chance of seeing the v1 page and a 50% chance of seeing the v3 page. The next step sketches how to complete the shift.
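
  • To finish the migration, you would route 100% of the traffic to reviews:v3. A sketch, assuming the standard Bookinfo sample layout (virtual-service-reviews-v3.yaml ships with the Istio samples):

    $ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml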

Uninstall

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

TCP traffic shifting

This section shows how to shift TCP traffic from one version of a microservice to another, for example from an old version to a new one.

Weight-based TCP routing

Create a separate namespace in which to deploy the tcp-echo application:

$ kubectl create namespace istio-io-tcp-traffic-shifting

On OpenShift, the service accounts in the namespace need extra privileges so that the sidecar (which runs as user 1337) can be injected:

$ oc adm policy add-scc-to-group privileged system:serviceaccounts:istio-io-tcp-traffic-shifting
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-io-tcp-traffic-shifting

Create a NetworkAttachmentDefinition so the pods can use istio-cni:

$ cat <<EOF | oc -n istio-io-tcp-traffic-shifting create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF

Enable automatic sidecar injection for the istio-io-tcp-traffic-shifting namespace:

$ kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled

Deploy the tcp-echo application:

$ kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting

Route all tcp-echo traffic to the v1 version:

$ kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting

The tcp-echo pods are shown below; there are two versions, v1 and v2:

$ oc get pod
NAME                           READY   STATUS    RESTARTS   AGE
tcp-echo-v1-5cb688897c-hk277   2/2     Running   0          16m
tcp-echo-v2-64b7c58f68-hk9sr   2/2     Running   0          16m

The Gateway deployed by the sample is shown below; it selects Istio's default ingress gateway and is accessed through port 31400:

$ oc get gateways.networking.istio.io tcp-echo-gateway -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    ...
  name: tcp-echo-gateway
  namespace: istio-io-tcp-traffic-shifting
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: tcp
      number: 31400
      protocol: TCP

The virtual service bound to it is tcp-echo. The host here is "*", meaning that any traffic arriving at port 31400 of the tcp-echo-gateway gateway is handled by this virtual service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination: # backend service the traffic is forwarded to
        host: tcp-echo
        port:
          number: 9000
        subset: v1

Since no separate ingress has been set up here, and given how gateways work, the ingress access path can be simulated by going directly through the ingress gateway that Istio installs by default. The default ingress gateway pod is indeed listening on port 31400:

$ oc exec -it  istio-ingressgateway-64f6f9d5c6-qrnw2 /bin/sh -n istio-system
$ ss -ntl                                                          
State          Recv-Q          Send-Q      Local Address:Port       Peer Address:Port     
LISTEN         0               0                 0.0.0.0:15090           0.0.0.0:*       
LISTEN         0               0               127.0.0.1:15000           0.0.0.0:*       
LISTEN         0               0                 0.0.0.0:31400           0.0.0.0:*       
LISTEN         0               0                 0.0.0.0:80              0.0.0.0:*       
LISTEN         0               0                       *:15020                 *:* 

Access it through the ingress gateway pod's Kubernetes service:

$ oc get svc |grep ingress
istio-ingressgateway   LoadBalancer   10.84.93.45  ...

$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 11:17:44 UTC 2020
one Wed May 13 11:17:45 UTC 2020
one Wed May 13 11:17:46 UTC 2020
one Wed May 13 11:17:47 UTC 2020

All of the traffic is routed to the v1 version of the tcp-echo service (which prints "one").

Note that accessing the tcp-echo Kubernetes service directly is not governed by this Istio routing; the weights only apply to traffic that goes through the gateway and its virtual service, as the sketch below illustrates.
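
The difference can be observed by hitting the ClusterIP of the tcp-echo service directly (a sketch; it assumes nc is available where you run it and that the ClusterIP is reachable from there):

$ TCP_ECHO_IP=$(kubectl get svc tcp-echo -n istio-io-tcp-traffic-shifting -o jsonpath='{.spec.clusterIP}')
$ for i in {1..10}; do (date; sleep 1) | nc $TCP_ECHO_IP 9000; done
# roughly half of the replies print "one" (v1) and half print "two" (v2), because
# kube-proxy load-balances across both pods and the gateway-bound VirtualService
# plays no part on this path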

Now shift 20% of the traffic from tcp-echo:v1 to tcp-echo:v2:

$ kubectl apply -f samples/tcp-echo/tcp-echo-20-v2.yaml -n istio-io-tcp-traffic-shifting

Check the deployed routing rule:

$ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: tcp-echo
  namespace: istio-io-tcp-traffic-shifting
spec:
  gateways:
  - tcp-echo-gateway
  hosts:
  - '*'
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

Run the test again; the result now looks like this:

$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 13:17:44 UTC 2020
two Wed May 13 13:17:45 UTC 2020
one Wed May 13 13:17:46 UTC 2020
one Wed May 13 13:17:47 UTC 2020
one Wed May 13 13:17:48 UTC 2020
one Wed May 13 13:17:49 UTC 2020
one Wed May 13 13:17:50 UTC 2020
one Wed May 13 13:17:51 UTC 2020
one Wed May 13 13:17:52 UTC 2020
two Wed May 13 13:17:53 UTC 2020

Uninstall

Remove the tcp-echo application with the following commands:

$ kubectl delete -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete namespace istio-io-tcp-traffic-shifting

Request timeouts

This section describes how to configure request timeouts in Envoy with Istio, again using the official Bookinfo example.

Deploy the routing rules

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

The timeout for HTTP requests is specified in the timeout field of the route rule. HTTP timeouts are disabled by default. Below, the timeout for calls to the reviews service is set to 0.5s, and to make the effect visible a 2s delay is injected into the ratings service.

  • Route requests to the v2 version of the reviews service, i.e. the version that calls ratings; no timeout is set on reviews yet:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
        - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
    EOF
    
    
  • Add a 2s delay to the ratings service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - fault:
          delay:
            percent: 100
            fixedDelay: 2s
        route:
        - destination:
            host: ratings
            subset: v1
    EOF
    
    
  • Open the /productpage page: the Bookinfo application works normally, but refreshing the page now takes about 2 extra seconds.

  • Set a 0.5s request timeout for calls to the reviews service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
        timeout: 0.5s
    EOF
    
    
  • Refreshing the page now returns in about 1 second, and the reviews section is unavailable.

    The response takes 1 second rather than 0.5 seconds because the productpage service has a hard-coded retry: the call to reviews times out twice (2 × 0.5s = 1s) before giving up. Bookinfo also has its own internal timeout mechanisms; see fault-injection for details.
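
    To observe the effect from inside the mesh, one option is to time the call with curl's built-in timer (a sketch, reusing the ratings pod, which already has curl installed):

    $ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl -s -o /dev/null -w '%{time_total}\n' productpage:9080/productpage
    # with the 0.5s timeout plus one retry, the reported total time should be roughly 1 second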

Uninstall

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Circuit breaking

This section shows how to configure circuit breaking for connections, requests, and outlier detection. Circuit breaking is an important pattern for building resilient microservice applications: it lets you write applications that limit the impact of failures, latency spikes, and other undesirable network effects.

Deploy httpbin in the default namespace (automatic sidecar injection is already enabled there):

$ kubectl apply -f samples/httpbin/httpbin.yaml

Configure the circuit breaker

  • Create a destination rule that applies circuit-breaking settings when calling the httpbin service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 1 # maximum number of HTTP1/TCP connections to a destination host
          http:
            http1MaxPendingRequests: 1 # maximum number of pending HTTP requests to a destination
            maxRequestsPerConnection: 1 # maximum number of requests per connection to a backend
        outlierDetection: # settings that control eviction of unhealthy hosts from the load-balancing pool
          consecutiveErrors: 1
          interval: 1s
          baseEjectionTime: 3m
          maxEjectionPercent: 100
    EOF
    
    
  • Verify the destination rule:

    $ kubectl get destinationrule httpbin -o yaml
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      annotations:
        ...
      name: httpbin
      namespace: default
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
          tcp:
            maxConnections: 1
        outlierDetection:
          baseEjectionTime: 3m
          consecutiveErrors: 1
          interval: 1s
          maxEjectionPercent: 100
    
    

Add a client

Create a client that sends requests to the httpbin service. The client is a simple load-testing tool called fortio, which can control the number of connections, the concurrency, and the delay of the outgoing HTTP calls. It will be used below to trip the circuit-breaking policies defined in the DestinationRule.

  • Deploy the fortio service:

    $ kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
    
    
  • Log in to the client pod and call httpbin with the fortio tool; -curl indicates that a single call should be made:

    $ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
    
    

    The result shows that the request succeeds:

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
    HTTP/1.1 200 OK
    server: envoy
    date: Thu, 14 May 2020 01:21:47 GMT
    content-type: application/json
    content-length: 586
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 11
    
    {
      "args": {},
      "headers": {
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "fortio.org/fortio-1.3.1",
        "X-B3-Parentspanid": "b5cd907bcfb5158f",
        "X-B3-Sampled": "0",
        "X-B3-Spanid": "407597df02737b32",
        "X-B3-Traceid": "45f3690565e5ca9bb5cd907bcfb5158f",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=dac158cf40c0f28f3322e6219c45d546ef8cc3b7df9d993ace84ab6e44aab708;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
      },
      "origin": "127.0.0.1",
      "url": "http://httpbin:8000/get"
    }
    
    

Tripping the circuit breaker

The DestinationRule above specifies maxConnections: 1 and http1MaxPendingRequests: 1, meaning that if the number of concurrent connections and requests exceeds 1, subsequent requests and connections fail and the circuit breaker trips.

  1. Send traffic with two concurrent connections (-c 2) and 20 requests in total (-n 20):

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
    05:50:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 20 calls: http://httpbin:8000/get
    Starting at max qps with 2 thread(s) [gomax 16] for exactly 20 calls (10 per thread + 0)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 51.51929ms : 20 calls. qps=388.2
    Aggregated Function Time : count 20 avg 0.0041658472 +/- 0.003982 min 0.000313105 max 0.017104987 sum 0.083316943
    # range, mid point, percentile, count
    >= 0.000313105 <= 0.001 , 0.000656552 , 15.00, 3
    > 0.002 <= 0.003 , 0.0025 , 70.00, 11
    > 0.003 <= 0.004 , 0.0035 , 80.00, 2
    > 0.005 <= 0.006 , 0.0055 , 85.00, 1
    > 0.008 <= 0.009 , 0.0085 , 90.00, 1
    > 0.012 <= 0.014 , 0.013 , 95.00, 1
    > 0.016 <= 0.017105 , 0.0165525 , 100.00, 1
    # target 50% 0.00263636
    # target 75% 0.0035
    # target 90% 0.009
    # target 99% 0.016884
    # target 99.9% 0.0170829
    Sockets used: 6 (for perfect keepalive, would be 2)
    Code 200 : 16 (80.0 %)
    Code 503 : 4 (20.0 %)
    Response Header Sizes : count 20 avg 184.05 +/- 92.03 min 0 max 231 sum 3681
    Response Body/Total Sizes : count 20 avg 701.05 +/- 230 min 241 max 817 sum 14021
    All done 20 calls (plus 0 warmup) 4.166 ms avg, 388.2 qps
    
    

    The key lines are shown below: most of the requests succeed, but a small fraction fail:

    Sockets used: 6 (for perfect keepalive, would be 2)
    Code 200 : 16 (80.0 %)
    Code 503 : 4 (20.0 %)
    
  2. Raise the number of concurrent connections to 3:

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
    06:00:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 30 calls: http://httpbin:8000/get
    Starting at max qps with 3 thread(s) [gomax 16] for exactly 30 calls (10 per thread + 0)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 18.885972ms : 30 calls. qps=1588.5
    Aggregated Function Time : count 30 avg 0.0015352119 +/- 0.002045 min 0.000165718 max 0.006403746 sum 0.046056356
    # range, mid point, percentile, count
    >= 0.000165718 <= 0.001 , 0.000582859 , 70.00, 21
    > 0.002 <= 0.003 , 0.0025 , 73.33, 1
    > 0.003 <= 0.004 , 0.0035 , 83.33, 3
    > 0.004 <= 0.005 , 0.0045 , 90.00, 2
    > 0.005 <= 0.006 , 0.0055 , 93.33, 1
    > 0.006 <= 0.00640375 , 0.00620187 , 100.00, 2
    # target 50% 0.000749715
    # target 75% 0.00316667
    # target 90% 0.005
    # target 99% 0.00634318
    # target 99.9% 0.00639769
    Sockets used: 23 (for perfect keepalive, would be 3)
    Code 200 : 9 (30.0 %)
    Code 503 : 21 (70.0 %)
    Response Header Sizes : count 30 avg 69 +/- 105.4 min 0 max 230 sum 2070
    Response Body/Total Sizes : count 30 avg 413.5 +/- 263.5 min 241 max 816 sum 12405
    All done 30 calls (plus 0 warmup) 1.535 ms avg, 1588.5 qps
    

    Circuit breaking now kicks in and only 30% of the requests succeed:

    Sockets used: 23 (for perfect keepalive, would be 3)
    Code 200 : 9 (30.0 %)
    Code 503 : 21 (70.0 %)
    
    
  3. Query the istio-proxy stats for more detail; the upstream_rq_pending_overflow counter shows how many calls have been flagged for circuit breaking so far:

    $ kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 93
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 139
    
    

Uninstall

$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin fortio-deploy
$ kubectl delete svc httpbin fortio

Mirroring

This section demonstrates Istio's traffic-mirroring capability. Mirroring sends a copy of live traffic to a mirrored service.

In this task, all traffic is first routed to v1 of a test service, and mirroring is then used to send a copy of that traffic to v2.

  • First deploy two versions of the httpbin service:

    httpbin-v1:

    $ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin-v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1 # v1 version label
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
            ports:
            - containerPort: 80
    EOF
    
    

    httpbin-v2:

    $ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin-v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v2  # v2 version label
      template:
        metadata:
          labels:
            app: httpbin
            version: v2
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
            ports:
            - containerPort: 80
    EOF
    

    httpbin Kubernetes service:

    $ kubectl create -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      labels:
        app: httpbin
    spec:
      ports:
      - name: http
        port: 8000
        targetPort: 80
      selector:
        app: httpbin
    EOF
    
  • Start a sleep service that provides curl:

    cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
            imagePullPolicy: IfNotPresent
    EOF
    

Create a default routing policy

By default Kubernetes load-balances across all versions of the httpbin service. In this step, all traffic is routed to v1 instead. The default behaviour can be confirmed first, as sketched below.
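
Because both deployments carry the label app: httpbin, the httpbin Service selects the pods of both versions. A quick way to confirm this (a sketch) is to list its endpoints:

$ kubectl get endpoints httpbin
# two pod IPs should be listed, one for httpbin-v1 and one for httpbin-v2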

  • Create a default route that sends all traffic to the v1 version of the service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
    spec:
      hosts:
        - httpbin
      http:
      - route:
        - destination:
            host: httpbin
            subset: v1 # send 100% of the traffic to v1
          weight: 100
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
    EOF
    
  • Send some traffic to the service:

    $ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
    {
        "headers": {
            "Accept": "*/*",
            "Content-Length": "0",
            "Host": "httpbin:8000",
            "User-Agent": "curl/7.35.0",
            "X-B3-Parentspanid": "a35a08a1875f5d18",
            "X-B3-Sampled": "0",
            "X-B3-Spanid": "7d1e0a1db0db5634",
            "X-B3-Traceid": "3b5e9010f4a50351a35a08a1875f5d18",
            "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/default;Hash=6dd991f0846ac27dc7fb878ebe8f7b6a8ebd571bdea9efa81d711484505036d7;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
        }
    }
    
    
  • Check the logs of the v1 and v2 httpbin pods: v1 has access-log entries while v2 has none:

    $ export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
    $ kubectl logs -f $V1_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    
    
    $ export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
    $ kubectl logs -f $V2_POD -c httpbin
    <none>
    
    

Mirror traffic to v2

  • Change the routing rule to mirror traffic to v2:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
    spec:
      hosts:
        - httpbin
      http:
      - route:
        - destination:
            host: httpbin
            subset: v1 # still send 100% of the traffic to v1
          weight: 100
        mirror:
          host: httpbin
          subset: v2  # mirror 100% of the traffic to v2
        mirror_percent: 100
    EOF
    
    

    When mirroring is configured, the requests sent to the mirrored service have -shadow appended to their Host/Authority header, e.g. cluster-1 becomes cluster-1-shadow. Note that mirrored requests are fire-and-forget: the responses to them are discarded.

    The mirror_percent field can be used to mirror only a fraction of the traffic instead of all of it; if the field is absent, all traffic is mirrored for compatibility with older versions. A sketch of the relevant fragment for a partial mirror follows.
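
    Only the mirror_percent value changes relative to the rule applied above (fragment shown for illustration):

      http:
      - route:
        - destination:
            host: httpbin
            subset: v1
          weight: 100
        mirror:
          host: httpbin
          subset: v2
        mirror_percent: 50  # mirror only half of the live traffic to v2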

  • Send traffic:

    $ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
    
    

    Check the logs of the v1 and v2 services: the requests to v1 are now also mirrored to v2:

    $ kubectl logs -f $V1_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:32:09 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    
    $ kubectl logs -f $V2_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 558 "-" "curl/7.35.0"
    
    

Uninstall

$ kubectl delete virtualservice httpbin
$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin-v1 httpbin-v2 sleep
$ kubectl delete svc httpbin
