Istio

要快乐不要emo, posted on 2024-03-25


1、Istio overview


1.1 Istio concept

Istio is a service mesh: an infrastructure platform for governing services in cloud-native environments.

1.2 Istio characteristics

# Observability
# Security
# Traffic management

1.3 Istio capabilities

Istio injects a proxy alongside each service (the sidecar pattern) to provide the following capabilities:
1、Service discovery: the proxy discovers the Service it fronts; a Service usually has a list of Endpoints, and an instance is chosen according to the load-balancing policy before traffic is sent
2、Isolating failed backend instances
3、Service protection: limiting the maximum number of connections and requests
4、Fast failure and timeouts for calls to backends
5、Rate limiting access to a given endpoint
6、Automatic retries when a call to a backend fails
7、Dynamically modifying request headers
8、Fault injection: simulating failed responses
9、Redirecting requests to a backend service
10、Canary releases: splitting traffic by weight or by request content
11、Mutual TLS between callers and backend services
12、Fine-grained authorization of requests
13、Automatic access logging with request details
14、Automatic span instrumentation for distributed tracing
15、Access metrics that form a complete topology of application traffic
# None of the above requires intrusive changes to the application; updating the corresponding configuration takes effect dynamically

1.4 Istio components

1.4.1 Envoy (data-plane proxy)

# The Envoy proxy is deployed as a sidecar of each service
It mediates all inbound and outbound traffic for every service in the mesh
# Built-in features
Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxying
Circuit breaking
Health checks
Staged rollouts based on percentage traffic splits
Fault injection
Rich metrics
# Features added by Istio
Traffic control
Network resiliency
Security and authentication
Pluggable extensions based on WebAssembly

1.4.2 istiod (control plane)

Components
Pilot
Galley
Citadel
Functions
Service discovery, configuration, and certificate management

1.5 Deployment models

[Istio deployment models](https://istio.io/latest/zh/docs/ops/deployment/deployment-models/)

2、Installing Istio

2.1 Installing with Helm

# Add the repository
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
(or download the charts from GitHub)
# Download the Istio charts
helm pull istio/base
helm pull istio/istiod
helm pull istio/gateway
# Change the image registry in the charts to a private registry (optional: repackage and push to your own Helm repository)
# Create the namespace used by Istio
kubectl create namespace istio-system
# Install
helm install istio-base $base_path -n istio-system
helm install istiod $istiod_path -n istio-system
helm install istio-ingress $ingress_path -n istio-system
helm install istio-egress $egress_path -n istio-system
# Verify
helm list -A
kubectl get pod -n istio-system

3、Using Istio

3.1 Deploying the test services

# Microservices used for testing (Bookinfo sample)
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: quanheng.com/pub/examples-bookinfo-details-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: quanheng.com/pub/examples-bookinfo-ratings-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v1:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v2:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v3:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9080"
        prometheus.io/path: "/metrics"
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: quanheng.com/pub/examples-bookinfo-productpage-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
---

3.2 Enabling automatic sidecar injection

Label the namespace; Pods created in a namespace carrying this label get the sidecar injected automatically
istio-injection=enabled
kubectl label ns $ns_name istio-injection=enabled
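To verify, a quick check is sketched below ($ns_name and $pod_name are placeholders):
# Show which namespaces have injection enabled
kubectl get ns -L istio-injection
# Pods created after labeling should show 2/2 containers (application + istio-proxy)
kubectl get pod -n $ns_name
kubectl describe pod $pod_name -n $ns_name | grep istio-proxy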

3.3 Manual sidecar injection

# Generate a manifest with the sidecar injected
istioctl kube-inject -f xx.yaml > xx_sider.yaml
# Deploy it
kubectl apply -f xx_sider.yaml

4、Observability

4.1 Metrics

Istio generates a set of service metrics based on the four golden signals of monitoring (latency, traffic, errors, saturation)
Proxy-level metrics: detailed statistics about the proxy itself, including configuration and health information
Service-level metrics: the latency, traffic, errors, and saturation needed for service monitoring
Control-plane metrics: monitoring the behaviour of Istio itself (as opposed to the services in the mesh)

4.1.1 Built-in metrics

kubectl  exec -ti -n weather advertisement-v1-64c975fd5-tc2x7 -c istio-proxy -- pilot-agent request GET /stats/prometheus

4.1.2 Custom metrics

4.1.2.1 Modifying metrics

# Install the CRD
cd cloud-native-istio/09
istioctl install -f custom-metrics.yaml
kubectl get iop -n istio-system
# Add the following under spec.values.telemetry.v2.prometheus
configOverride:
  gateway:
    metrics:
    - name: request_total # modify a specific metric; if no name is given, the change applies to all metrics
      dimensions: # dimensions (labels) added to the metric
          request_host: request.host
          request_method: request.method
      tags_to_remove: # tags removed from the metric
      - response_flags
  inboundSidecar:
    metrics:
    - dimensions:
        request_host: request.host
        request_method: request.method
  outboundSidecar:
    metrics:
    - dimensions:
        request_host: request.host
        request_method: request.method

4.1.2.2 Defining a new metric

# Define a new metric named custom_count of type COUNTER, with the dimension reporter=proxy
# Add the following under spec.values.telemetry.v2.prometheus
configOverride:
  outboundSidecar:
    definitions: # define the new metric
    - name: custom_count
      type: COUNTER # counter
      value: "1"
    metrics:
    - name: custom_count
      dimensions:
        reporter: "'proxy'"

4.2 Distributed tracing

Distributed tracing is performed by the Envoy proxies. The proxies automatically generate trace spans for their applications; the application only needs to forward the appropriate request context (trace headers).
Istio supports many tracing backends, including Zipkin, Jaeger, Lightstep, and Datadog. Operators control the trace sampling rate (the rate at which trace data is generated per request), which controls the amount and rate of tracing data produced by the mesh.
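For example, the sampling rate can be set mesh-wide with the Telemetry API; a minimal sketch, assuming a tracing provider (e.g. Zipkin) is already configured for the mesh:

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
  - randomSamplingPercentage: 10.00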

4.3 Access logs

4.3.1 Enabling

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy
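Once applied, the access log is written by the sidecar and can be read from the istio-proxy container ($pod_name and $ns_name are placeholders):
kubectl logs $pod_name -c istio-proxy -n $ns_name | tail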

5、Traffic management

5.1 Canary releases (traffic splitting)

5.1.1 Defining service versions

# The DestinationRule wraps the path to each service version into a named subset; traffic splitting can then refer to these names directly
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
  namespace: istio-test
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

5.1.2 Splitting by traffic ratio


# Configure a VirtualService that splits traffic between the two versions by ratio
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: istio-test
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50

5.1.3 Routing by request content

(Screenshot: result when accessed from Chrome)

(Screenshot: result when accessed from Firefox)

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: istio-test
spec:
  hosts:
  - reviews
  http:
  - match: # requests whose User-Agent matches any Chrome version take this route (subset v1 below); everything else falls through to the next route (v2)
    - headers:
        User-Agent:
          regex: .*(Chrome/([\d.]+)).*
    route:
    - destination:
        host: reviews
        subset: v1
  - route:   
    - destination:
        host: reviews
        subset: v2

5.1.4 Combining multiple conditions

# Configure a VirtualService that first splits Android user traffic by weight, then routes the remaining traffic to the other version
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-route
  namespace: weather
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/weather-gateway
  http:
  - match:
    - headers:
        User-Agent:
          regex: .*((Android)).*  
    route: # split by weight
    - destination:
        host: frontend
        subset: v1   
      weight: 50
    - destination:
        host: frontend
        subset: v2
      weight: 50
  - route: # default route for all remaining traffic
    - destination:
        host: frontend
        subset: v1

5.1.5 Multiple services, multiple versions

# Configure VirtualServices for multiple services
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-route
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/weather-gateway
  http:
  - match:
    - headers:
        cookie:
          regex: ^(.*?;)?(user=tester)(;.*)?$
    route:
    - destination:
        host: frontend
        subset: v2
  - route:
    - destination:
        host: frontend
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - match:
    - sourceLabels:
        version: v2
    route:
    - destination:
        host: forecast
        subset: v2
  - route:
    - destination:
        host: forecast
        subset: v1
# Traffic from the gateway whose cookie identifies the user tester is routed to frontend v2; everything else goes to v1
# Traffic coming from frontend with source version label v2 is routed to forecast v2; everything else goes to v1

5.2 Traffic policies

# These rules govern how traffic is distributed from a service to its backend instances

5.2.1 Load balancing

5.2.1.1 ROUND_ROBIN & RANDOM

# Configure a DestinationRule
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: advertisement-dr
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    loadBalancer:
      simple: RANDOM # change this policy to get a different load-balancing behaviour
---
ROUND_ROBIN: round robin
RANDOM: random

5.2.1.2 Locality-based traffic distribution

# Configure a DestinationRule; the topology labels are applied to the nodes
region: topology.kubernetes.io/region=cn-north-7
zone: topology.kubernetes.io/zone=cn-north-7b
sub_zone: topology.istio.io/subzone=nanjing

# Format: region/zone/sub_zone
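A sketch of applying these topology labels to a node (the node name is illustrative; on cloud providers the region/zone labels are usually set automatically by the provider):
kubectl label node node-1 topology.kubernetes.io/region=cn-north-7
kubectl label node node-1 topology.kubernetes.io/zone=cn-north-7b
kubectl label node node-1 topology.istio.io/subzone=nanjing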

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: advertisement-distribution
spec:
  host: advertisement.weather.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: cn-north-7/cn-north-7b/* # traffic originating from this locality
          to: # traffic from the locality above is distributed to the localities below according to these weights
            "cn-north-7/cn-north-7b/nanjing": 10
            "cn-north-7/cn-north-7c/hangzhou": 80
            "cn-north-7/cn-north-7c/ningbo": 10

5.2.1.3 Failover load balancing

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: recommendation-failover
spec:
  host: recommendation.weather.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1 
    loadBalancer:
      simple: ROUND_ROBIN # load-balancing policy used after failover (round robin)
      localityLbSetting: # enable locality failover
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 2m

5.2.2 Session affinity

# Requests carrying the same value of the specified cookie are forwarded to the same backend instance
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: advertisement-dr
  namespace: weather
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
     loadBalancer:
       consistentHash:
         httpCookie:
           name: user
           ttl: 60s

5.2.3 Fault injection

5.2.3.1 Delay injection

# Every request is delayed by 3s
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - fault:
      delay:
        fixedDelay: 3s # delay duration
        percentage:
          value: 100
    route: # routing rule
    - destination:
        host: advertisement
        subset: v1

5.2.3.2 Abort injection

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - fault:
      abort:
        httpStatus: 521 # status code returned to the caller
        percentage:
          value: 100
    route:
    - destination:
        host: advertisement
        subset: v1

5.2.4 Timeouts

# Define a timeout for calls to the service; calls that exceed it return an error
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
    - destination:
        host: forecast
        subset: v2
    timeout: 1s # timeout for calls to this service; requests that take longer than this fail

5.2.5 Retries

# Retry the call when it returns one of the matching status codes
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
    - destination:
        host: forecast
        subset: v2
    retries:
      attempts: 5 # number of retries
      perTryTimeout: 1s # timeout for each individual retry attempt
      retryOn: "5xx" # retry condition (5xx status codes)

5.2.6 Redirects

# Redirect requests that match the given path
# advertisement.weather/ad is redirected to advertisement.weather.svc.cluster.local/maintenanced
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - match:
    - uri:
        prefix: /ad # path to match
    redirect:
      uri: /maintenanced # redirect path
      authority: advertisement.weather.svc.cluster.local # rewrite the Host/Authority of the redirect

5.2.7 Rewrites

# Rewrite the matched condition to a new value
# URLs routed to advertisement whose path starts with /demo/ are rewritten to /
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - match:
    - uri:
        prefix: /demo/
    rewrite:
      uri: /
    route:
    - destination:
        host: advertisement
        subset: v1

5.2.8 Circuit breaking

5.2.8.1 Circuit-breaker settings

# Prevents a failing service call from degrading the overall service
# When forecast has 3 concurrent connections and more than 5 pending requests, the circuit breaker trips and requests fail immediately
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: forecast-dr
spec:
  host: forecast
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
  trafficPolicy:
    connectionPool: # connection pool settings
      tcp:
        maxConnections: 3 # maximum number of concurrent TCP connections
      http:
        http1MaxPendingRequests: 5 # maximum number of pending HTTP requests
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 2  # if an instance returns 2 consecutive 5xx errors
      interval: 10s  # scan the backend instances every 10s
      baseEjectionTime: 2m # base ejection time
      maxEjectionPercent: 40 # at most 40% of the instances may be ejected from the pool

5.2.8.2 Outlier detection

# After an instance hits the configured number of errors it is ejected from the Service backends according to the policy; if it recovers after a while it is added back, otherwise ejection continues
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: forecast-dr
spec:
  host: forecast
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 2  # if an instance returns 2 consecutive 5xx errors
      interval: 10s  # scan the backend instances every 10s
      baseEjectionTime: 2m # base ejection time
      maxEjectionPercent: 40 # at most 40% of the instances may be ejected from the pool

5.2.9 Rate limiting

Global and local rate limits can both apply; the stricter (lower) limit wins

5.2.9.1 Global rate limiting

# Limits how much traffic all instances behind the Service may receive in total
# The rate-limit configuration is defined as a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
data:
  config.yaml: |
    domain: advertisement-ratelimit
    descriptors:
      - key: PATH 
        value: "/ad"
        rate_limit:
          unit: minute # per minute
          requests_per_unit: 3
      - key: PATH
        rate_limit:
          unit: minute
          requests_per_unit: 100 
---
# Deploy the rate-limit service and its backing store (Redis is used here)
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - name: redis
    port: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        imagePullPolicy: Always
        name: redis
        ports:
        - name: redis
          containerPort: 6379
      restartPolicy: Always
      serviceAccountName: ""
---
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  labels:
    app: ratelimit
spec:
  ports:
  - name: http-port
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: grpc-port
    port: 8081
    targetPort: 8081
    protocol: TCP
  - name: http-debug
    port: 6070
    targetPort: 6070
    protocol: TCP
  selector:
    app: ratelimit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratelimit
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ratelimit
    spec:
      containers:
      - image: quanheng.com/k8s/envoyproxy/ratelimit:6f5de117 # 2021/01/08
        imagePullPolicy: Always
        name: ratelimit
        command: ["/bin/ratelimit"]
        env:
        - name: LOG_LEVEL
          value: debug
        - name: REDIS_SOCKET_TYPE
          value: tcp
        - name: REDIS_URL
          value: redis:6379
        - name: USE_STATSD
          value: "false"
        - name: RUNTIME_ROOT
          value: /data
        - name: RUNTIME_SUBDIRECTORY
          value: ratelimit
        ports:
        - containerPort: 8080
        - containerPort: 8081
        - containerPort: 6070
        volumeMounts:
        - name: config-volume
          mountPath: /data/ratelimit/config/config.yaml
          subPath: config.yaml
      volumes:
      - name: config-volume
        configMap:
          name: ratelimit-config
---
# Insert the rate-limit configuration defined in the ConfigMap via an EnvoyFilter
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: global-ratelimit
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: advertisement-ratelimit # domain defined in the ConfigMap
            failure_mode_deny: true # deny requests if the rate-limit service is unavailable
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster # name of the cluster added below
                timeout: 10s # timeout
              transport_api_version: V3 # API version
    - applyTo: CLUSTER # this patch targets a cluster
      match:
        cluster:
          service: ratelimit.weather.svc.cluster.local
      patch:
        operation: ADD # action
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          load_assignment:
            cluster_name: rate_limit_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                     socket_address:
                      address: ratelimit.weather.svc.cluster.local
                      port_value: 8081
---
# Configure the route actions
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: global-ratelimit-svc
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          vhost:
            name: ""
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: 
              - request_headers:
                  header_name: ":path"
                  descriptor_key: "PATH"
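A quick way to exercise the limit (a sketch; it assumes a client Pod such as a sleep deployment in the weather namespace and that advertisement listens on port 9080 — expect HTTP 429 once the 3-requests-per-minute quota for /ad is exhausted):
for i in $(seq 1 5); do
  kubectl exec deploy/sleep -n weather -- curl -s -o /dev/null -w "%{http_code}\n" http://advertisement:9080/ad
done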

5.2.9.2 Local rate limiting

# Limits how much traffic each individual instance may receive
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-ratelimit-svc
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 3
                tokens_per_fill: 3
                fill_interval: 60s
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              response_headers_to_add:
                - append: false
                  header:
                    key: x-local-rate-limit
                    value: 'true'

5.2.10 Service isolation

# Similar to a NetworkPolicy: restricts which services this workload's sidecar may reach
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: sidecar-frontend
spec:
  workloadSelector:
    labels:
      app: frontend
  egress:
  - hosts:
    - "weather/advertisement.weather.svc.cluster.local"
    - "istio-system/*"

5.2.11 Traffic mirroring

# A copy of the traffic is sent to the mirror destination; responses to the mirrored requests are discarded, so a new version can be exercised with live traffic without affecting callers
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
      - destination:
          host: forecast
          subset: v1
        weight: 100 # weight of the primary route
    mirror: # destination that receives the mirrored (shadow) traffic
        host: forecast
        subset: v2

5.3 Service governance

5.3.1 Publishing an HTTPS service externally

# Deploy an HTTPS service
# Create a Secret from the certificate used by the HTTPS service
# Configure the Istio gateway
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  generation: 1
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: http-argocd
        number: 15036
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: argocd-secret
# Configure the VirtualService
# Configure the DestinationRule
# Access the service
https://ip:port
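The VirtualService mentioned above is not shown; a minimal sketch, assuming the backend Service is argocd-server on port 80 in the argocd namespace:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: argocd
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd/argocd-gateway
  http:
  - route:
    - destination:
        host: argocd-server
        port:
          number: 80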

5.3.2 Exposing an HTTP service externally


5.3.2.1 Configure the Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8899
      name: http-bookinfo
      protocol: HTTP
    hosts:
    - "bookinfo.com"

5.3.2.2 Configure the VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: istio-test
spec:
  hosts:
  - "bookinfo.com"
  gateways:
  - istio-ingress/bookinfo-gateway
  http:
  - match:
    - port: 8899
    route:
    - destination:
        host: productpage
        port:
          number: 9080

5.3.2.3 Configure the ingress gateway Service

# Add the 8899 port
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: istio-ingress
    meta.helm.sh/release-namespace: istio-ingress
  creationTimestamp: "2024-02-18T07:27:49Z"
  labels:
    app: istio-ingressgateway
    app.kubernetes.io/managed-by: Helm
    install.operator.istio.io/owning-resource: unknown
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    release: istio-ingress
  name: istio-ingressgateway
  namespace: istio-ingress
  resourceVersion: "32406549"
  uid: a0d51225-836c-4ce0-b1e0-5167251b24bb
spec:
  clusterIP: 10.173.49.109
  clusterIPs:
  - 10.173.49.109
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 172.31.3.77
  ports:
  - name: status-port
    nodePort: 14221
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 27801
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: bookinfo
    nodePort: 15086
    port: 8899
    protocol: TCP
    targetPort: 8899
  - name: https
    nodePort: 29533
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

5.3.2.4 Configure hosts resolution

172.31.3.19 bookinfo.com


6、Traffic flow

istio-ingress --> Gateway --> VirtualService --> matching route rules (DestinationRule) --> traffic forwarded according to the rules
# gateway
Defines which traffic from outside the mesh is allowed in; its backend is a VirtualService
# vs
Defines the traffic path from the gateway to a service; its backend must be a real service, e.g. a Kubernetes Service or a service registered into the mesh
# dr
Defines the traffic rules from the service to the backend application, mainly how traffic is distributed; its backends are the actual instances

7、Concepts

7.1 vs

# Defines how a service is accessed
A virtual service lets you configure how requests are routed to a service within the mesh

7.2 dr

# Defines how traffic is handled once a routing rule has matched; provides rich policies for traffic reaching the backend service
Load-balancing settings
TLS settings
Circuit-breaker settings

7.3 Gateway

Defines on which ports and hosts traffic is accepted into the mesh; a VirtualService must be bound to the gateway for its routes to take effect

7.4 ServiceEntry

Adds an entry that brings a service outside the mesh under mesh management
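A minimal ServiceEntry sketch, assuming an external API at api.example.com should be reachable from the mesh over TLS:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS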

7.5 Sidecar

Similar to a NetworkPolicy; used to restrict where a workload's traffic may go

9、Security

1、Traffic encryption
2、Access control: mutual TLS and fine-grained access policies
3、Auditing

9.1 Architecture

# A certificate authority (CA)
# An API server that distributes authentication policies, authorization policies, and secure naming information to the proxies
# A proxy (sidecar) that acts as the policy enforcement point
# Envoy proxy extensions for auditing and telemetry
In short, the control plane issues certificates and the Envoy proxies act as enforcement points that talk to each other, giving encrypted traffic without touching the application


9.2 Concepts

9.2.1 Secure naming

Server identity: encoded in the certificate
Service name
Secure naming: maps server identities to service names
Example:
	a mapping from server identity A to service name B can be read as "A is authorized to run service B"; similar to a username/password check, A passes B's verification
# Note: secure naming cannot protect non-L7 traffic, which is routed by IP address and handled before it reaches the Envoy proxy

9.3 Identity and certificate management

istio-agent refers to the pilot-agent process in the sidecar container
# Detailed flow
istiod exposes a gRPC service that accepts certificate signing requests (CSRs).
On startup, istio-agent creates a private key and a CSR, then sends the CSR with its credentials to istiod for signing.
The istiod CA validates the credentials carried in the CSR and, on success, signs the CSR to produce the certificate.
When a workload starts, Envoy requests the certificate and key from the istio-agent in the same container via the Secret Discovery Service (SDS) API.
istio-agent sends the certificate and key received from istiod to Envoy via the Envoy SDS API.
istio-agent monitors the expiry of the workload certificate; the process above repeats periodically to rotate certificates and keys.
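The certificate currently held by a workload can be inspected through its proxy (the pod name and namespace are placeholders):
istioctl proxy-config secret $pod_name -n $ns_name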


9.4 Authentication

Authentication policies take effect at scopes from narrow to broad, similar to network policies

9.4.1 Peer authentication

Service-to-service authentication that verifies the client making the connection: mutual TLS (minimum version 1.2) with encrypted communication
# How it works
Istio re-routes the client's outbound traffic to the client's local sidecar Envoy.
The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake the client-side Envoy also performs a secure naming check to verify that the service account presented in the server certificate is authorized to run the target service.
The client-side and server-side Envoys establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy.
The server-side Envoy authorizes the request; if authorized, it forwards the traffic to the backend service over a local TCP connection.

9.4.1.1 Deploying test services

kubectl create ns foo 
kubectl create ns bar 
kubectl create ns legacy
kubectl label ns foo istio-injection=enabled
kubectl label ns bar istio-injection=enabled
# legacy is deliberately left without injection so that access from a Pod without a sidecar can be tested
kubectl apply -f sleep.yaml -n foo
kubectl apply -f sleep.yaml -n legacy
kubectl apply -f sleep.yaml -n bar
kubectl apply -f httpbin.yaml -n foo
kubectl apply -f httpbin.yaml -n legacy
kubectl apply -f httpbin.yaml -n bar
# sleep.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: quanheng.com/pub/curl:v1
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true

9.4.1.2 Enforcing mutual TLS

# Only sidecar-to-sidecar communication is allowed; workloads without a sidecar cannot connect
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system" # istio安裝的根名稱空間
spec:
  mtls:
    mode: STRICT 
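A sketch for verifying the policy, assuming the legacy namespace has no sidecar injection: the plaintext request from legacy should be rejected, while the request from foo (with a sidecar) still succeeds.
kubectl exec deploy/sleep -n legacy -- curl -s -o /dev/null -w "%{http_code}\n" http://httpbin.foo:8000/ip
# expected: the connection is reset, because the client has no sidecar and cannot complete mTLS
kubectl exec deploy/sleep -n foo -- curl -s -o /dev/null -w "%{http_code}\n" http://httpbin.foo:8000/ip
# expected: 200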

9.4.2 Request authentication

End-user authentication that verifies the credential attached to the request; instead of a username and password, a JWT is used

9.4.2.1 Deploying the test service

kubectl create ns foo
kubectl label ns foo istio-injection=enabled
kubectl apply -f httpbin.yaml -n foo
kubectl apply -f httpbin-gateway.yaml
# httpbin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: quanheng.com/pub/httpbin:v1
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
# httpbin-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - istio-ingress/httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000

9.4.2.2 Configuring JWT authentication at the ingress gateway

apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.20/security/tools/jwt/samples/jwks.json"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - istio-ingress/httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /headers
      headers:
        "@request.auth.claims.groups":
          exact: group1
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
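RequestAuthentication on its own only rejects requests that carry an invalid token; requests with no token are still admitted. To require a valid JWT, an AuthorizationPolicy is usually added as well; a sketch:

apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ingress-require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]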

9.5 Authorization

Provides mesh-, namespace-, and workload-level access control for workloads in the mesh
Covers workload-to-workload and end-user-to-workload authorization.
A simple API: a single AuthorizationPolicy CRD that is easy to use and maintain.
Flexible semantics: operators can define custom conditions on Istio attributes and use DENY and ALLOW actions.
High performance: Istio authorization is enforced natively in Envoy.
High compatibility: natively supports HTTP, HTTPS, and HTTP/2, as well as any plain TCP protocol.
An authorization policy consists of a selector, an action, and a list of rules:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
 name: httpbin
 namespace: foo
spec:
 selector:
   matchLabels:
     app: httpbin
     version: v1
 action: ALLOW
 rules:
 - from:
   - source:
       principals: ["cluster.local/ns/default/sa/sleep"]
   - source:
       namespaces: ["dev"]
   to:
   - operation:
       methods: ["GET"]
   when:
   - key: request.auth.claims[iss]
     values: ["https://accounts.google.com"]

9.5.1 Layer-7 access control

# Allow access using the GET method
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "productpage-viewer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
# Authorization by service account
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "details-viewer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: details
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]

9.5.2 Layer-4 access control

# Allow access
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: ALLOW
  rules:
  - to:
    - operation:
        ports: ["9000", "9001"]
# Deny access
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]

9.5.3 Gateway access control

# Configuration depends on the external load balancer; see the reference below
https://istio.io/latest/zh/docs/tasks/security/authorization/authz-ingress/#ip-based-allow-list-and-deny-list
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24"]

10、Service registration

Registers applications outside the mesh into the mesh, typically virtual-machine workloads

[Virtual machine architecture](https://istio.io/latest/zh/docs/ops/deployment/vm-architecture/)

10.1 Concepts

A WorkloadGroup is a logical group of virtual-machine workloads that share common properties, similar to a Deployment in Kubernetes.
A WorkloadEntry represents a single instance of a virtual-machine workload, similar to a Pod in Kubernetes.

10.2 Implementation

10.2.1 Provision the virtual machine

Choose a Debian-based distribution or CentOS 8 as required

10.2.2 Plan the required variables

VM_APP="<name of the application that will run on this VM>"
VM_NAMESPACE="<namespace of your service>"
WORK_DIR="<working directory for the certificates>"
SERVICE_ACCOUNT="<Kubernetes service account provided for this VM>"
CLUSTER_NETWORK=""
VM_NETWORK=""
CLUSTER="Kubernetes"

10.2.3 Create a directory for the generated files

mkdir -p vm

10.2.4 Configure the IstioOperator

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: 10.182.0.0/16 # Pod network

10.2.5 Deploy the east-west gateway

# samples/multicluster/gen-eastwest-gateway.sh --single-cluster > iop_for_vm.yaml
apiVersion: install.istio.io/v1alpha1                                                                                             
kind: IstioOperator
metadata:
  name: eastwest
spec:
  revision: ""
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
  values:
    gateways:
      istio-ingressgateway:
        injectionTemplate: gateway
# gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        name: tls-istiod
        number: 15012
        protocol: tls
      tls:
        mode: PASSTHROUGH        
      hosts:
        - "*"
    - port:
        name: tls-istiodwebhook
        number: 15017
        protocol: tls
      tls:
        mode: PASSTHROUGH          
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-vs
spec:
  hosts:
  - "*"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 443

10.2.6 Create a ServiceAccount

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
  namespace: vm
---
apiVersion: v1
kind: Secret
metadata:
  name: nginx-sa-secret
  namespace: vm
  annotations:
    kubernetes.io/service-account.name: nginx
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vm-clusterrole-binding
subjects:
- kind: ServiceAccount
  name: nginx
  namespace: vm
roleRef:
  kind: ClusterRole
  name: istiod-istio-system
  apiGroup: rbac.authorization.k8s.io

10.2.7 Generate the files for the virtual machine

# Create the WorkloadGroup; it acts as the template for each WorkloadEntry instance, which is created automatically once the VM registers
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: nginx
  namespace: vm
spec:
  metadata:
    labels:
      app: nginx
  template:
    serviceAccount: nginx
  probe:
    httpGet:
      port: 80
istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}" --
istioctl x workload entry configure -f workload.yaml -o /home/gu/k8s/istio/vm --clusterID cluster1 --ingressIP=10.173.31.65
cluster.env: contains metadata identifying the namespace, service account, network CIDR, and (optionally) the inbound ports.
istio-token: the Kubernetes token used to obtain certificates from the CA.
mesh.yaml: provides the ProxyConfig used to configure discoveryAddress, health checks, and some authentication options.
root-cert.pem: the root certificate used for authentication.
hosts: an addendum to /etc/hosts that the proxy uses to reach istiod for xDS.
After generation, upload these files to "${HOME}" on the virtual machine

10.2.8 Configure the virtual machine

# Install the root certificate
sudo mkdir -p /etc/certs
sudo cp root-cert.pem /etc/certs/root-cert.pem
# Install the token
sudo mkdir -p /var/run/secrets/tokens
sudo cp istio-token /var/run/secrets/tokens/istio-token
# Install the istio-sidecar package
debian
curl -LO https://storage.googleapis.com/istio-release/releases/1.18.2/deb/istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
centos
curl -LO https://storage.googleapis.com/istio-release/releases/1.18.2/rpm/istio-sidecar.rpm
sudo rpm -i istio-sidecar.rpm
# Install cluster.env into /var/lib/istio/envoy/
sudo cp cluster.env /var/lib/istio/envoy/cluster.env
# Install the mesh configuration file (Mesh Config) into /etc/istio/config/mesh:
sudo cp mesh.yaml /etc/istio/config/mesh
# Add the istiod host to /etc/hosts:
sudo sh -c 'cat $(eval echo ~$SUDO_USER)/hosts >> /etc/hosts'
# Transfer ownership of /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy:
sudo mkdir -p /etc/istio/proxy
sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem
sudo mkdir -p /etc/certs && sudo cp root-cert.pem /etc/certs/root-cert.pem && sudo  mkdir -p /var/run/secrets/tokens && sudo cp istio-token /var/run/secrets/tokens/istio-token && sudo cp cluster.env /var/lib/istio/envoy/cluster.env && sudo cp mesh.yaml /etc/istio/config/mesh && sudo mkdir -p /etc/istio/proxy && sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem

10.2.9 Start Istio on the virtual machine

systemctl start istio
# Check the logs in /var/log/istio/istio.log

10.2.10 Uninstalling

# Stop the service on the virtual machine
sudo systemctl stop istio
# Delete the namespace

10.3 ServiceEntry

# Wraps an external service so that it gets the same service-discovery behaviour as a Kubernetes Service

10.4 WorkloadGroup

# Enables automatic registration of virtual-machine workloads; can be thought of as a template for WorkloadEntry objects, similar to a Deployment

10.5 WorkloadEntry

# Registers a single service instance
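When automatic registration via a WorkloadGroup is not used, an instance can also be registered by hand; a minimal sketch (the address and labels are illustrative):

apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: nginx-vm-1
  namespace: vm
spec:
  address: 10.173.31.100
  labels:
    app: nginx
  serviceAccount: nginx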
