Learn Knative with Me (6): Deploying a gRPC Service

Published by iyacontrol on 2020-05-17

This post uses Knative Serving to deploy a gRPC service.

The sample lets you try out gRPC, HTTP/2, and custom port configuration in a Knative service.

The container image is built with two binaries: the server and the client. This is done for testing convenience and is not recommended for production containers.

Building and deploying the sample code

1: Clone the code repository

git clone -b "release-0.14" https://github.com/knative/docs knative-docs
cd knative-docs/docs/serving/samples/grpc-ping-go

2: Use Docker to build a container image for this service and push it to Docker Hub.

Replace {username} with your Docker Hub username, then run the following commands:

# Build the container on your local machine.
docker build --tag "{username}/grpc-ping-go" .

# Push the container to docker registry.
docker push "{username}/grpc-ping-go"

3: Update the service.yaml file in the project to reference the image you pushed in step 2.

Replace {username} in service.yaml with your Docker Hub username:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: grpc-ping
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: docker.io/{username}/grpc-ping-go
        ports:
          - name: h2c
            containerPort: 8080

4: Deploy the service with kubectl

kubectl apply --filename service.yaml 

service.serving.knative.dev/grpc-ping created

Exploring

After deploying, you can inspect the created resources with kubectl:

First, look at the Knative Service:

# This will show the Knative service that we created:
kubectl get ksvc grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  generation: 1
  name: grpc-ping
  namespace: default
  resourceVersion: "2201773"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/services/grpc-ping
  uid: 7977a697-c413-459f-852c-60e5adf3dccc
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containerConcurrency: 0
      containers:
      - image: docker.io/iyacontrol/grpc-ping-go
        name: user-container
        ports:
        - containerPort: 8080
          name: h2c
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
        resources: {}
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
status:
  address:
    url: http://grpc-ping.default.svc.cluster.local
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ConfigurationsReady
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: RoutesReady
  latestCreatedRevisionName: grpc-ping-gcltn
  latestReadyRevisionName: grpc-ping-gcltn
  observedGeneration: 1
  traffic:
  - latestRevision: true
    percent: 100
    revisionName: grpc-ping-gcltn
  url: http://grpc-ping.default.serverless.xx.me

Next, look at the Knative Route:

# This will show the Route, created by the service:
kubectl get route grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Route
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  finalizers:
  - routes.serving.knative.dev
  generation: 1
  labels:
    serving.knative.dev/service: grpc-ping
  name: grpc-ping
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: grpc-ping
    uid: 7977a697-c413-459f-852c-60e5adf3dccc
  resourceVersion: "2201772"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/routes/grpc-ping
  uid: 8455e488-2ac2-4e1e-8b51-8d20e1836801
spec:
  traffic:
  - configurationName: grpc-ping
    latestRevision: true
    percent: 100
status:
  address:
    url: http://grpc-ping.default.svc.cluster.local
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: AllTrafficAssigned
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: IngressReady
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  traffic:
  - latestRevision: true
    percent: 100
    revisionName: grpc-ping-gcltn
  url: http://grpc-ping.default.serverless.xx.me

Then look at the Knative Configuration:

# This will show the Configuration, created by the service:
kubectl get configurations grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  generation: 1
  labels:
    serving.knative.dev/route: grpc-ping
    serving.knative.dev/service: grpc-ping
  name: grpc-ping
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: grpc-ping
    uid: 7977a697-c413-459f-852c-60e5adf3dccc
  resourceVersion: "2201750"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/configurations/grpc-ping
  uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containerConcurrency: 0
      containers:
      - image: docker.io/iyacontrol/grpc-ping-go
        name: user-container
        ports:
        - containerPort: 8080
          name: h2c
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
        resources: {}
      timeoutSeconds: 300
status:
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  latestCreatedRevisionName: grpc-ping-gcltn
  latestReadyRevisionName: grpc-ping-gcltn
  observedGeneration: 1

From the Configuration we can see that the Revision is named grpc-ping-gcltn. Now look at the Knative Revision:

# This will show the Revision, created by the Configuration:
kubectl get revisions grpc-ping-gcltn -o yaml

apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastPinned: "1589334954"
  creationTimestamp: "2020-05-13T01:55:44Z"
  generateName: grpc-ping-
  generation: 1
  labels:
    serving.knative.dev/configuration: grpc-ping
    serving.knative.dev/configurationGeneration: "1"
    serving.knative.dev/route: grpc-ping
    serving.knative.dev/service: grpc-ping
  name: grpc-ping-gcltn
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Configuration
    name: grpc-ping
    uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
  resourceVersion: "2201933"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/revisions/grpc-ping-gcltn
  uid: d3d13c00-1aa9-44e6-979d-60b50d40b519
spec:
  containerConcurrency: 0
  containers:
  - image: docker.io/iyacontrol/grpc-ping-go
    name: user-container
    ports:
    - containerPort: 8080
      name: h2c
    readinessProbe:
      successThreshold: 1
      tcpSocket:
        port: 0
    resources: {}
  timeoutSeconds: 300
status:
  conditions:
  - lastTransitionTime: "2020-05-13T01:56:54Z"
    message: The target is not receiving traffic.
    reason: NoTraffic
    severity: Info
    status: "False"
    type: Active
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ContainerHealthy
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ResourcesAvailable
  imageDigest: index.docker.io/iyacontrol/grpc-ping-go@sha256:bfe8362fd0f7ccf18502688baca084b6ea63b5725bfef287d8d7dcef9320a17b
  logUrl: http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'d3d13c00-1aa9-44e6-979d-60b50d40b519',type:phrase))))
  observedGeneration: 1
  serviceName: grpc-ping-gcltn

Testing the service

Testing a gRPC service requires a gRPC client built from the same protobuf definition the server uses.

The Dockerfile also builds the client binary. To run the client, use the same container image that was deployed for the server, overriding the entrypoint command so that the client binary runs instead of the server binary.

Replace {username} with your Docker Hub username, then run the following command:

docker run --rm {username}/grpc-ping-go \
  /client \
  -server_addr="grpc-ping.default.serverless.xx.me:80" \
  -insecure

The arguments after the container tag {username}/grpc-ping-go replace the entrypoint command defined in the Dockerfile's CMD statement.

Running the test produces output similar to the following:

2020/05/13 02:06:43 Ping got hello - pong
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466829984 +0000 UTC m=+1.361228108
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466881062 +0000 UTC m=+1.361279193
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466890156 +0000 UTC m=+1.361288283
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466896804 +0000 UTC m=+1.361294929
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466908132 +0000 UTC m=+1.361306260
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466915748 +0000 UTC m=+1.361313871
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466926437 +0000 UTC m=+1.361324564
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466934259 +0000 UTC m=+1.361332383
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466945454 +0000 UTC m=+1.361343587
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466953871 +0000 UTC m=+1.361351996
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.4669644 +0000 UTC m=+1.361362524
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466971662 +0000 UTC m=+1.361369790
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466985621 +0000 UTC m=+1.361383746
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466993072 +0000 UTC m=+1.361391202
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467000507 +0000 UTC m=+1.361398632
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467007443 +0000 UTC m=+1.361405566
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467026014 +0000 UTC m=+1.361424141
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467034894 +0000 UTC m=+1.361433022
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467044127 +0000 UTC m=+1.361442256
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467052183 +0000 UTC m=+1.361450308

The test passed.

Conclusion

When deploying a gRPC project, the port in the Service spec needs special handling:

ports:
  - name: h2c
    containerPort: 8080

The port name must be h2c. h2 is HTTP/2 over TLS (with protocol negotiation via ALPN), while h2c is HTTP/2 over cleartext TCP.
