Strengthening Spring Cloud Service Governance with Flomesh

Published by 雲原生指北 on 2021-08-19

Foreword

This article is about using the Flomesh[1] service mesh to strengthen the service-governance capabilities of Spring Cloud, lowering the barrier to adopting a service mesh in a Spring Cloud microservice architecture while keeping the stack "independently controllable".

The document is continuously updated on GitHub[2]; discussion is welcome: https://github.com/flomesh-io...


Architecture

[Architecture diagram]

Environment Setup

To set up a Kubernetes environment you can build a cluster with kubeadm, or use minikube, k3s, Kind, etc. This article uses k3s.

Install k3s[4] with k3d[3]. k3d runs k3s inside Docker containers, so make sure Docker is installed first.

$ k3d cluster create spring-demo -p "81:80@loadbalancer" --k3s-server-arg '--no-deploy=traefik'

Install Flomesh

Clone the code from the repository https://github.com/flomesh-io/flomesh-bookinfo-demo.git and change into the flomesh-bookinfo-demo/kubernetes directory.

All Flomesh components and the YAML files used by the demo are located in this directory.

Install Cert Manager

$ kubectl apply -f artifacts/cert-manager-v1.3.1.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

Note: make sure all pods in the cert-manager namespace are running:

$ kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-webhook-56fdcbb848-q7fn5      1/1     Running   0          98s
cert-manager-59f6c76f4b-z5lgf              1/1     Running   0          98s
cert-manager-cainjector-59f76f7fff-flrr7   1/1     Running   0          98s

Install the Pipy Operator

$ kubectl apply -f artifacts/pipy-operator.yaml

After running the command you should see output similar to:

namespace/flomesh created
customresourcedefinition.apiextensions.k8s.io/proxies.flomesh.io created
customresourcedefinition.apiextensions.k8s.io/proxyprofiles.flomesh.io created
serviceaccount/operator-manager created
role.rbac.authorization.k8s.io/leader-election-role created
clusterrole.rbac.authorization.k8s.io/manager-role created
clusterrole.rbac.authorization.k8s.io/metrics-reader created
clusterrole.rbac.authorization.k8s.io/proxy-role created
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/proxy-rolebinding created
configmap/manager-config created
service/operator-manager-metrics-service created
service/proxy-injector-svc created
service/webhook-service created
deployment.apps/operator-manager created
deployment.apps/proxy-injector created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/proxy-injector-webhook-cfg created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created

Note: make sure all pods in the flomesh namespace are running:

$ kubectl get pod -n flomesh
NAME                               READY   STATUS    RESTARTS   AGE
proxy-injector-5bccc96595-spl6h    1/1     Running   0          39s
operator-manager-c78bf8d5f-wqgb4   1/1     Running   0          39s

Install the Ingress Controller: ingress-pipy

$ kubectl apply -f ingress/ingress-pipy.yaml
namespace/ingress-pipy created
customresourcedefinition.apiextensions.k8s.io/ingressparameters.flomesh.io created
serviceaccount/ingress-pipy created
role.rbac.authorization.k8s.io/ingress-pipy-leader-election-role created
clusterrole.rbac.authorization.k8s.io/ingress-pipy-role created
rolebinding.rbac.authorization.k8s.io/ingress-pipy-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/ingress-pipy-rolebinding created
configmap/ingress-config created
service/ingress-pipy-cfg created
service/ingress-pipy-controller created
service/ingress-pipy-defaultbackend created
service/webhook-service created
deployment.apps/ingress-pipy-cfg created
deployment.apps/ingress-pipy-controller created
deployment.apps/ingress-pipy-manager created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration configured
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration configured

Check the status of the pods in the ingress-pipy namespace:

$ kubectl get pod -n ingress-pipy
NAME                                       READY   STATUS    RESTARTS   AGE
svclb-ingress-pipy-controller-8pk8k        1/1     Running   0          71s
ingress-pipy-cfg-6bc649cfc7-8njk7          1/1     Running   0          71s
ingress-pipy-controller-76cd866d78-m7gfp   1/1     Running   0          71s
ingress-pipy-manager-5f568ff988-tw5w6      0/1     Running   0          70s

At this point you have successfully installed all Flomesh components, including the operator and the ingress controller.

Middleware

The demo relies on middleware to store logs and metrics. For convenience we mock it with pipy: the data is simply printed to the console.

In addition, the service-governance configuration is served by a mocked pipy config service.

log & metrics

$ cat > middleware.js <<EOF
pipy()

.listen(8123)
  .link('mock')

.listen(9001)
  .link('mock')

.pipeline('mock')
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      console.log(req.body.toString()),
      new Message('OK')
    )
  )
  .encodeHttpResponse()
EOF

$ docker run --rm --name middleware --entrypoint "pipy" -v ${PWD}:/script -p 8123:8123 -p 9001:9001 flomesh/pipy-pjs:0.4.0-118 /script/middleware.js

pipy config

$ cat > mock-config.json <<EOF
{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}
EOF
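A brief note on these fields, inferred from how the demo exercises them in the test sections later on (the annotations below are my interpretation, not official Flomesh documentation): -1 disables a limit, circuitBreak forces requests to be rejected, and blacklist lists source IPs to block.

```json
{
  "ingress": {},                 // ingress-level settings (empty in this demo)
  "inbound": {
    "rateLimit": -1,             // inbound requests/sec per connection; -1 = unlimited
    "dataLimit": -1,             // inbound payload size cap; -1 = unlimited
    "circuitBreak": false,       // true forces the breaker open (requests get 503)
    "blacklist": []              // source IPs rejected with 503
  },
  "outbound": {
    "rateLimit": -1,             // same semantics, applied to outbound traffic
    "dataLimit": -1
  }
}
```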

$ cat > mock.js <<EOF
pipy({
  _CONFIG_FILENAME: 'mock-config.json',

  _serveFile: (req, filename, type) => (
    new Message(
      {
        bodiless: req.head.method === 'HEAD',
        headers: {
          'etag': os.stat(filename)?.mtime | 0,
          'content-type': type,
        },
      },
      req.head.method === 'HEAD' ? null : os.readFile(filename),
    )
  ),

  _router: new algo.URLRouter({
    '/config': req => _serveFile(req, _CONFIG_FILENAME, 'application/json'),
    '/*': () => new Message({ status: 404 }, 'Not found'),
  }),
})

// Config
.listen(9000)
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      _router.find(req.head.path)(req)
    )
  )
  .encodeHttpResponse()
EOF

$ docker run --rm --name mock --entrypoint "pipy" -v ${PWD}:/script -p 9000:9000 flomesh/pipy-pjs:0.4.0-118 /script/mock.js

Running the Demo

The demo runs in a separate namespace, flomesh-spring; run kubectl apply -f base/namespace.yaml to create it. If you describe the namespace, you will see that it carries the flomesh.io/inject=true label.

This label tells the operator's admission webhook to intercept pod creation in the labeled namespace.
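The namespace manifest applied above presumably looks like the following. This is a sketch reconstructed from the labels shown by kubectl describe; the exact contents of base/namespace.yaml are an assumption:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: flomesh-spring
  labels:
    app.kubernetes.io/name: spring-mesh
    app.kubernetes.io/version: 1.19.0
    flomesh.io/inject: "true"   # opts this namespace in to sidecar injection
```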

$ kubectl describe ns flomesh-spring
Name:         flomesh-spring
Labels:       app.kubernetes.io/name=spring-mesh
              app.kubernetes.io/version=1.19.0
              flomesh.io/inject=true
              kubernetes.io/metadata.name=flomesh-spring
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Let's first look at ProxyProfile, a CRD provided by Flomesh. In this demo it defines the sidecar container fragment and the scripts it uses. See sidecar/proxy-profile.yaml for more information. Run the following command to create the CRD resource.

$ kubectl apply -f sidecar/proxy-profile.yaml

Check that it was created successfully:

$ kubectl get pf -o wide
NAME                         NAMESPACE        DISABLED   SELECTOR                                     CONFIG                                                                AGE
proxy-profile-002-bookinfo   flomesh-spring   false      {"matchLabels":{"sys":"bookinfo-samples"}}   {"flomesh-spring":"proxy-profile-002-bookinfo-fsmcm-b67a9e39-0418"}   27s

As the services have startup dependencies, you need to deploy them one by one in strict order. Before starting, check the Endpoints section of base/clickhouse.yaml.

To provide access endpoints for the middleware, change the IP addresses in base/clickhouse.yaml, base/metrics.yaml and base/config.yaml to your host's IP address (not 127.0.0.1).

After making the changes, run the following commands:

$ kubectl apply -f base/clickhouse.yaml
$ kubectl apply -f base/metrics.yaml
$ kubectl apply -f base/config.yaml

$ kubectl get endpoints samples-clickhouse samples-metrics samples-config
NAME                 ENDPOINTS            AGE
samples-clickhouse   192.168.1.101:8123   3m
samples-metrics      192.168.1.101:9001   3s
samples-config       192.168.1.101:9000   3m

Deploy the Service Registry

$ kubectl apply -f base/discovery-server.yaml

Check the status of the registry pod and make sure all 3 of its containers are running.

$ kubectl get pod
NAME                                           READY   STATUS        RESTARTS   AGE
samples-discovery-server-v1-85798c47d4-dr72k   3/3     Running       0          96s

Deploy the Config Service

$ kubectl apply -f base/config-service.yaml

Deploy the API Gateway and the bookinfo Services

$ kubectl apply -f base/bookinfo-v1.yaml
$ kubectl apply -f base/bookinfo-v2.yaml
$ kubectl apply -f base/productpage-v1.yaml
$ kubectl apply -f base/productpage-v2.yaml

Check the pod status. Pods showing 3/3 have had the sidecar containers injected.

$ kubectl get pods
samples-discovery-server-v1-85798c47d4-p6zpb       3/3     Running   0          19h
samples-config-service-v1-84888bfb5b-8bcw9         1/1     Running   0          19h
samples-api-gateway-v1-75bb6456d6-nt2nl            3/3     Running   0          6h43m
samples-bookinfo-ratings-v1-6d557dd894-cbrv7       3/3     Running   0          6h43m
samples-bookinfo-details-v1-756bb89448-dxk66       3/3     Running   0          6h43m
samples-bookinfo-reviews-v1-7778cdb45b-pbknp       3/3     Running   0          6h43m
samples-api-gateway-v2-7ddb5d7fd9-8jgms            3/3     Running   0          6h37m
samples-bookinfo-ratings-v2-845d95fb7-txcxs        3/3     Running   0          6h37m
samples-bookinfo-reviews-v2-79b4c67b77-ddkm2       3/3     Running   0          6h37m
samples-bookinfo-details-v2-7dfb4d7c-jfq4j         3/3     Running   0          6h37m
samples-bookinfo-productpage-v1-854675b56-8n2xd    1/1     Running   0          7m1s
samples-bookinfo-productpage-v2-669bd8d9c7-8wxsf   1/1     Running   0          6m57s

Add Ingress Rules

Run the following command to add the Ingress rules.

$ kubectl apply -f ingress/ingress.yaml

Preparation Before Testing

All access to the demo services goes through the ingress controller, so we first need the IP address of the LB.

// Obtain the controller IP and append the port
ingressAddr=`kubectl get svc ingress-pipy-controller -n ingress-pipy -o jsonpath='{.spec.clusterIP}'`:81

Since we created k3s with k3d and passed the -p 81:80@loadbalancer option, we can reach the ingress controller at 127.0.0.1:81. In that case simply run ingressAddr=127.0.0.1:81.

In the Ingress rules we specified a host for each rule, so each request must supply the corresponding host via the HTTP Host header.

Alternatively, add entries to /etc/hosts:

$ kubectl get ing ingress-pipy-bookinfo -n flomesh-spring -o jsonpath="{range .spec.rules[*]}{.host}{'\n'}{end}"
api-v1.flomesh.cn
api-v2.flomesh.cn
fe-v1.flomesh.cn
fe-v2.flomesh.cn

// add entries to /etc/hosts
127.0.0.1 api-v1.flomesh.cn api-v2.flomesh.cn fe-v1.flomesh.cn fe-v2.flomesh.cn

Verification

$ curl http://127.0.0.1:81/actuator/health -H 'Host: api-v1.flomesh.cn'
{"status":"UP","groups":["liveness","readiness"]}
//OR
$ curl http://api-v1.flomesh.cn:81/actuator/health
{"status":"UP","groups":["liveness","readiness"]}

Testing

Canary Release

With the v1 services, we add a rating and a review for a book.

# rate a book
$ curl -X POST http://$ingressAddr/bookinfo-ratings/ratings \
    -H "Content-Type: application/json" \
    -H "Host: api-v1.flomesh.cn" \
    -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","rating":3}' 

$ curl http://$ingressAddr/bookinfo-ratings/ratings/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"

# review a book
$ curl -X POST http://$ingressAddr/bookinfo-reviews/reviews \
    -H "Content-Type: application/json" \
    -H "Host: api-v1.flomesh.cn" \
    -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","review":"This was OK.","rating":3}'

$ curl http://$ingressAddr/bookinfo-reviews/reviews/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"

After running the commands above, open the frontend services in a browser (http://fe-v1.flomesh.cn:81/productpage?u=normal and http://fe-v2.flomesh.cn:81/productpage?u=normal); the newly added records are visible only in the v1 frontend.

[Screenshot: productpage v1]

[Screenshot: productpage v2]

Circuit Breaking

To force the circuit breaker open, set inbound.circuitBreak to true in mock-config.json:

{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": true, //here
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}

$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
Connection: keep-alive
Content-Length: 27

Service Circuit Break Open

Rate Limiting

Modify the pipy config to set inbound.rateLimit to 1.

{
  "ingress": {},
  "inbound": {
    "rateLimit": 1, //here
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}

We use wrk to generate load: 20 threads, 20 connections, for 30 seconds:

$ wrk -t20 -c20 -d30s --latency http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
Running 30s test @ http://127.0.0.1:81/actuator/health
  20 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   951.51ms  206.23ms   1.04s    93.55%
    Req/Sec     0.61      1.71    10.00     93.55%
  Latency Distribution
     50%    1.00s
     75%    1.01s
     90%    1.02s
     99%    1.03s
  620 requests in 30.10s, 141.07KB read
Requests/sec:     20.60
Transfer/sec:      4.69KB

The result shows 20.60 req/s overall, i.e. roughly 1 req/s per connection.
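As a quick sanity check, the figures reported by wrk line up with the configured limit of 1 req/s per connection:

```shell
# Sanity-check the wrk figures: "620 requests in 30.10s" over 20 connections.
total_requests=620
duration_s=30.10
connections=20

# Overall throughput: requests / duration.
overall=$(awk -v r="$total_requests" -v d="$duration_s" 'BEGIN { printf "%.2f", r / d }')

# Per-connection throughput: overall / number of connections.
per_conn=$(awk -v o="$overall" -v c="$connections" 'BEGIN { printf "%.2f", o / c }')

echo "overall: $overall req/s, per connection: $per_conn req/s"
# prints: overall: 20.60 req/s, per connection: 1.03 req/s
```

1.03 req/s per connection is within measurement noise of the inbound.rateLimit = 1 setting.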

Blacklist and Whitelist

Make the following change to mock-config.json, using the ingress controller's pod IP as the address:

$ kubectl get pod -n ingress-pipy ingress-pipy-controller-76cd866d78-4cqqn -o jsonpath='{.status.podIP}'
10.42.0.78

{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": ["10.42.0.78"] //here
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}

Access the gateway endpoint again:

$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
content-type: text/plain
Connection: keep-alive
Content-Length: 20

Service Unavailable

References

[1] Flomesh: https://flomesh.cn/
[2] github: https://github.com/flomesh-io...
[3] k3d: https://k3d.io/
[4] k3s: https://github.com/k3s-io/k3s
