Learning Knative with Me (2) -- Knative Serving

Published by iyacontrol on 2020-05-15

Knative consists of two major components: Serving and Eventing. Today we will deploy the Serving part. Serving builds on Kubernetes CRDs to provide scale-to-zero, request-driven compute; it is essentially the execution and scaling component of a serverless platform. Its main features are:

  • A higher-level abstraction with an object model that is easy to understand
  • Seamless autoscaling based on HTTP requests
  • Gradual rollout of new revisions
  • Automatic integration with networking and service meshes
  • Pluggable: connect your own logging and monitoring platforms

Serving Components

In this article we focus on the Serving subproject, since it is the natural starting point for digging into Knative. Knative Serving users should be familiar with four main resources: Service, Route, Configuration, and Revision.

  • Service (service.serving.knative.dev): automatically manages the whole lifecycle of your workload. It controls the creation of the other objects to make sure that your app has a Route, a Configuration, and a new Revision for each update of the Service. A Service can be defined to always route traffic to the latest revision or to a pinned revision, as the sketch after this list shows.
  • Route (route.serving.knative.dev): maps a network endpoint to one or more revisions. You can manage traffic in several ways, including fractional traffic splitting and named routes.
  • Configuration (configuration.serving.knative.dev): maintains the desired state of your deployment. It provides a clean separation between code and configuration and follows the twelve-factor app methodology. Modifying a configuration creates a new revision.
  • Revision (revision.serving.knative.dev): a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as they are useful. The number of revision instances scales automatically based on incoming traffic. For more information, see Configuring the Autoscaler.
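
For example, a Service can pin most of its traffic to a known-good revision while canarying the latest one. Below is a minimal sketch of such a traffic block (the revision name is hypothetical; it reuses the sample app deployed later in this article):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    # Pin 90% of requests to a specific (hypothetical) revision
    - revisionName: helloworld-go-vx6cf
      percent: 90
    # Send the remaining 10% to the latest ready revision
    - latestRevision: true
      percent: 10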

Deployment

At the time of writing, the latest Knative release is v0.14, which requires Kubernetes v1.15 or later. The following commands install the Knative Serving components.

Install the required CRDs:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.14.0/serving-crds.yaml

You should see output like the following:

customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created

Install the Serving core components:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.14.0/serving-core.yaml

You should see output like the following:

customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
namespace/knative-serving created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
horizontalpodautoscaler.autoscaling/activator created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/webhook created
service/webhook created
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created

At this point we can run the following command to see which resources were installed:

kubectl get all -n knative-serving

You should see:

NAME                              READY   STATUS    RESTARTS   AGE
pod/activator-8cb6d456-fgm8b      1/1     Running   0          28s
pod/autoscaler-dd459ddbb-dk5bc    1/1     Running   0          28s
pod/controller-8564567c4c-jzlfg   1/1     Running   0          28s
pod/webhook-7fbf9c6d49-45lmq      1/1     Running   0          28s

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                              AGE
service/activator-service   ClusterIP   10.100.157.161   <none>        9090/TCP,8008/TCP,80/TCP,81/TCP      28s
service/autoscaler          ClusterIP   10.100.73.90     <none>        9090/TCP,8008/TCP,8080/TCP,443/TCP   28s
service/controller          ClusterIP   10.100.109.204   <none>        9090/TCP,8008/TCP                    28s
service/webhook             ClusterIP   10.100.101.64    <none>        9090/TCP,8008/TCP,443/TCP            28s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/activator    1/1     1            1           28s
deployment.apps/autoscaler   1/1     1            1           28s
deployment.apps/controller   1/1     1            1           28s
deployment.apps/webhook      1/1     1            1           28s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/activator-8cb6d456      1         1         1       28s
replicaset.apps/autoscaler-dd459ddbb    1         1         1       28s
replicaset.apps/controller-8564567c4c   1         1         1       28s
replicaset.apps/webhook-7fbf9c6d49      1         1         1       28s

NAME                                            REFERENCE              TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/activator   Deployment/activator   <unknown>/100%   1         20        1          28s
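
If you are scripting the installation, the following sketch uses plain kubectl to block until every deployment in knative-serving reports Available before you continue:

kubectl wait deployment --all \
  --namespace knative-serving \
  --for=condition=Available \
  --timeout=300s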

Choose and install a networking layer

Knative no longer hard-depends on Istio here; besides Istio, we can choose Ambassador, Contour, Gloo, or Kourier.

Because Kourier is effectively the networking layer developed by the Knative community itself and offers better compatibility, we choose Kourier here.

Install the Knative Kourier controller:

kubectl apply --filename https://github.com/knative/net-kourier/releases/download/v0.14.0/kourier.yaml

You should see output like the following:

namespace/kourier-system created
configmap/config-logging created
configmap/config-observability created
configmap/config-leader-election created
service/kourier created
deployment.apps/3scale-kourier-gateway created
deployment.apps/3scale-kourier-control created
clusterrole.rbac.authorization.k8s.io/3scale-kourier created
serviceaccount/3scale-kourier created
clusterrolebinding.rbac.authorization.k8s.io/3scale-kourier created
service/kourier-internal created
service/kourier-control created
configmap/kourier-bootstrap created

Run the following command to see which resources were deployed:

kubectl get all -n kourier-system

You should see:

NAME                                          READY   STATUS    RESTARTS   AGE
pod/3scale-kourier-control-759cb78499-cph5g   1/1     Running   0          97s
pod/3scale-kourier-gateway-6f49d5768b-l7rsv   1/1     Running   0          97s

NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                                    PORT(S)                      AGE
service/kourier            LoadBalancer   10.100.81.199    ada062477a777476a8ed7fc989190fdf-1615177688.ap-southeast-1.elb.amazonaws.com   80:30588/TCP,443:30850/TCP   97s
service/kourier-control    ClusterIP      10.100.163.167   <none>                                                                         18000/TCP                    97s
service/kourier-internal   ClusterIP      10.100.113.134   <none>                                                                         80/TCP                       97s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/3scale-kourier-control   1/1     1            1           97s
deployment.apps/3scale-kourier-gateway   1/1     1            1           97s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/3scale-kourier-control-759cb78499   1         1         1       97s
replicaset.apps/3scale-kourier-gateway-6f49d5768b   1         1         1       97s

Configure Serving to use Kourier by default:

Serving's networking configuration lives in the config-network ConfigMap. Before our change, its contents are:

    ################################
    #                              #
    #    EXAMPLE CONFIGURATION     #
    #                              #
    ################################

    # This block is not actually functional configuration,
    # but serves to illustrate the available configuration
    # options and document them in a way that is accessible
    # to users that `kubectl edit` this config map.
    #
    # These sample configuration options may be copied out of
    # this example block and unindented to be in the data block
    # to actually change the configuration.

    # DEPRECATED:
    # istio.sidecar.includeOutboundIPRanges is obsolete.
    # The current versions have outbound network access enabled by default.
    # If you need this option for some reason, please use global.proxy.includeIPRanges in Istio.
    #
    # istio.sidecar.includeOutboundIPRanges: "*"

    # ingress.class specifies the default ingress class
    # to use when not dictated by Route annotation.
    #
    # If not specified, will use the Istio ingress.
    #
    # Note that changing the Ingress class of an existing Route
    # will result in undefined behavior.  Therefore it is best to only
    # update this value during the setup of Knative, to avoid getting
    # undefined behavior.
    ingress.class: "istio.ingress.networking.knative.dev"

    # certificate.class specifies the default Certificate class
    # to use when not dictated by Route annotation.
    #
    # If not specified, will use the Cert-Manager Certificate.
    #
    # Note that changing the Certificate class of an existing Route
    # will result in undefined behavior.  Therefore it is best to only
    # update this value during the setup of Knative, to avoid getting
    # undefined behavior.
    certificate.class: "cert-manager.certificate.networking.knative.dev"

    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"

    # tagTemplate specifies the golang text template string to use
    # when constructing the DNS name for "tags" within the traffic blocks
    # of Routes and Configuration.  This is used in conjunction with the
    # domainTemplate above to determine the full URL for the tag.
    tagTemplate: "{{.Tag}}-{{.Name}}"

    # Controls whether TLS certificates are automatically provisioned and
    # installed in the Knative ingress to terminate external TLS connection.
    # 1. Enabled: enabling auto-TLS feature.
    # 2. Disabled: disabling auto-TLS feature.
    autoTLS: "Disabled"

    # Controls the behavior of the HTTP endpoint for the Knative ingress.
    # It requires autoTLS to be enabled.
    # 1. Enabled: The Knative ingress will be able to serve HTTP connection.
    # 2. Disabled: The Knative ingress will reject HTTP traffic.
    # 3. Redirected: The Knative ingress will send a 302 redirect for all
    # http connections, asking the clients to use HTTPS
    httpProtocol: "Enabled"

Run the following command to make the change:

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

Fetch the external IP or CNAME:

kubectl --namespace kourier-system get service kourier

Since our cluster runs on EKS, the output looks like this:

NAME      TYPE           CLUSTER-IP      EXTERNAL-IP                                                                          PORT(S)                      AGE
kourier   LoadBalancer   10.100.47.159   a2f4c64ef34ea40b03d8933e1137ef-424fef949c9c7ad3.elb.ap-southeast-1.amazonaws.com   80:32327/TCP,443:31313/TCP   111s

Configure DNS

Here you can use one of three options: Magic DNS (http://xip.io), Real DNS, or Temporary DNS.

Magic DNS only works if the cluster's LoadBalancer service exposes an IPv4 address, so it will not work with IPv6 clusters, AWS, or local setups such as Minikube.

For production environments, you should use Real DNS.

As mentioned earlier, because we are on EKS, the networking layer (the external address of the kourier Service) returns a CNAME, so we need to create a CNAME record for our domain. Here we replace knative.example.com with serverless.xx.me:

# Here serverless.xx.me is the domain suffix for your cluster
*.serverless.xx.me == CNAME a2f4c64ef34ea40b03d8933e1137ef-424fef949c9c7ad3.elb.ap-southeast-1.amazonaws.com
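
Once the record is live, a quick sanity check with dig (demo.serverless.xx.me is a hypothetical name under the wildcard) should print the ELB hostname configured above:

dig +short CNAME demo.serverless.xx.me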

Once your DNS provider is configured, you need to edit the config-domain ConfigMap to tell Knative to use your custom domain.

Before changing it, let's look at the default contents of config-domain:

    ################################
    #                              #
    #    EXAMPLE CONFIGURATION     #
    #                              #
    ################################

    # This block is not actually functional configuration,
    # but serves to illustrate the available configuration
    # options and document them in a way that is accessible
    # to users that `kubectl edit` this config map.
    #
    # These sample configuration options may be copied out of
    # this example block and unindented to be in the data block
    # to actually change the configuration.

    # Default value for domain.
    # Although it will match all routes, it is the least-specific rule so it
    # will only be used if no other domain matches.
    example.com: |

    # These are example settings of domain.
    # example.org will be used for routes having app=nonprofit.
    example.org: |
      selector:
        app: nonprofit

    # Routes having domain suffix of 'svc.cluster.local' will not be exposed
    # through Ingress. You can define your own label selector to assign that
    # domain suffix to your Route here, or you can set the label
    #    "serving.knative.dev/visibility=cluster-local"
    # to achieve the same effect.  This shows how to make routes having
    # the label app=secret only exposed to the local cluster.
    svc.cluster.local: |
      selector:
        app: secret

Now run the following command to make the change:

# Replace knative.example.com with your domain suffix
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"serverless.xx.me":""}}'

Check the status of all the Knative components; if the STATUS is Running or Completed, the deployment succeeded.

kubectl get pods --namespace knative-serving
NAME                          READY   STATUS    RESTARTS   AGE
activator-8cb6d456-fgm8b      1/1     Running   0          3h19m
autoscaler-dd459ddbb-dk5bc    1/1     Running   0          3h19m
controller-8564567c4c-jzlfg   1/1     Running   0          3h19m
webhook-7fbf9c6d49-45lmq      1/1     Running   0          3h19m

Deploy optional components

The optional components mainly include:

  • HPA autoscaling
  • TLS with cert-manager
  • TLS via HTTP01
  • TLS wildcard support

Deploy them as your needs dictate (see the sketch below for one example).
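
HPA-class autoscaling, for instance, ships as an extra manifest in the same release. Assuming the v0.14.0 release includes serving-hpa.yaml, as the upstream installation docs for this version describe, it can be installed with:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.14.0/serving-hpa.yaml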

Hello World

Next we show how to deploy an application with Knative and then interact with it using cURL requests.

Sample application

This guide demonstrates the basic workflow of deploying the Hello World sample application (Go) from Google Container Registry. You can use these steps as a guide for deploying your own container image from another registry, such as Docker Hub.

To deploy a local container image, you need to disable image tag resolution by running the following command:

# Set to dev.local/local-image when deploying local container images
docker tag local-image dev.local/local-image

Deploy the application

To deploy an app with Knative, you need a .yaml configuration file that defines a Service. For more information about the Service object, see the resource types documentation.

This configuration file specifies metadata about the application, points to the hosted image of the app to deploy, and allows the deployment to be configured. For more information about the available configuration options, see the Service spec documentation.

Create a new file named service.yaml, then copy and paste the following into it:

apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"

Run the following command to create it:

kubectl apply --filename service.yaml

Now that your Service has been created, Knative performs the following steps:

  • Creates a new immutable revision for this version of the app.
  • Performs network programming to create a route, ingress, service, and load balancer for your app.
  • Automatically scales your pods up and down based on traffic, including scaling active pods down to zero (see the sketch after this list for tuning this behavior).
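
The scaling behavior in the last step can be tuned per revision with Knative's standard autoscaling annotations on the revision template. The following sketch (the values are illustrative, not required) pins a floor and a ceiling on replicas and sets a concurrency target:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod warm (disables scale-to-zero for this revision)
        autoscaling.knative.dev/minScale: "1"
        # Never scale beyond ten pods
        autoscaling.knative.dev/maxScale: "10"
        # Target 50 concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go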

The details are as follows:

NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP                                         PORT(S)                             AGE
service/helloworld-go                 ExternalName   <none>          kourier-internal.kourier-system.svc.cluster.local   <none>                              3m9s
service/helloworld-go-vx6cf           ClusterIP      10.100.14.241   <none>                                              80/TCP                              3m40s
service/helloworld-go-vx6cf-private   ClusterIP      10.100.100.96   <none>                                              80/TCP,9090/TCP,9091/TCP,8022/TCP   3m40s
service/kubernetes                    ClusterIP      10.100.0.1      <none>                                              443/TCP                             10d

NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-go-vx6cf-deployment   0/0     0            0           3m40s

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-go-vx6cf-deployment-6fcdd7b9cd   0         0         0       3m40s

NAME                                              LATESTCREATED         LATESTREADY           READY   REASON
configuration.serving.knative.dev/helloworld-go   helloworld-go-vx6cf   helloworld-go-vx6cf   True

NAME                                               CONFIG NAME     K8S SERVICE NAME      GENERATION   READY   REASON
revision.serving.knative.dev/helloworld-go-vx6cf   helloworld-go   helloworld-go-vx6cf   1            True

NAME                                      URL                                                   READY   REASON
route.serving.knative.dev/helloworld-go   http://helloworld-go.default.serverless.xx.me   True

NAME                                        URL                                                   LATESTCREATED         LATESTREADY           READY   REASON
service.serving.knative.dev/helloworld-go   http://helloworld-go.default.serverless.xx.me   helloworld-go-vx6cf   helloworld-go-vx6cf   True

Make a request to test the service and observe the result:

curl http://helloworld-go.default.serverless.xx.me
Hello World: Go Sample v1!
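
If your DNS record has not propagated yet, you can reach the service through the load balancer directly by passing the expected host name in a standard HTTP Host header:

curl -H "Host: helloworld-go.default.serverless.xx.me" \
  http://a2f4c64ef34ea40b03d8933e1137ef-424fef949c9c7ad3.elb.ap-southeast-1.amazonaws.com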

At this point we have deployed the hello-world application with Knative and tested it successfully.

Conclusion

This article briefly introduced Knative Serving, its deployment, and a hello-world demo. Next, we will walk through the hello-world example to trace how Knative Serving is implemented.
