Knative Eventing is a system designed to address common needs of cloud-native development, providing composable primitives to enable late-binding of event sources and event consumers. It has the following design goals:
- Services are loosely coupled during development and deployed independently on a variety of platforms (Kubernetes, VMs, SaaS, or PaaS).
- A producer can generate events before a consumer is listening, and a consumer can express interest in an event or class of events that is not yet being produced.
- Services can be connected to create new applications, without modifying producers or consumers, and with the ability to select a specific subset of events from a particular producer.
- Ensure cross-service interoperability. This is consistent with the design goals of CloudEvents, the common specification for cross-service interoperability developed by the CNCF Serverless Working Group.
Architecture
Knative Eventing currently supports three primary usage patterns:
- Source to Service
Events are delivered from a source directly to a single service (an Addressable endpoint, which can be a Knative Service or a core Kubernetes Service). In this case, the source is responsible for retrying or queueing events if the destination service is unavailable.
- Channels and Subscriptions
With Channels and Subscriptions, Knative Eventing defines a Channel, which can connect to various backends (for example in-memory, Kafka, and GCP Pub/Sub) for sourcing events. Each Channel can have one or more subscribers in the form of sink services, which receive event messages and process them as needed. Every message in a Channel is formatted as a CloudEvent and passed further along the chain to additional subscribers for further processing. The Channel and Subscription pattern offers no filtering of messages.
- Brokers and Triggers
A Broker provides a bucket of events that can be selected by attribute. It receives events and forwards them to the subscribers defined by one or more matching Triggers.
A Trigger describes a filter over event attributes, selecting the events that should be delivered to an Addressable. You can create as many Triggers as you need.
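The Channel and Subscription pattern above can be sketched as a pair of manifests. This is a minimal illustration with hypothetical resource names; the messaging.knative.dev API version shown matches the v0.14 release installed below and may differ in other releases:

```yaml
# An in-memory Channel (demo-grade, not for production).
apiVersion: messaging.knative.dev/v1alpha1
kind: InMemoryChannel
metadata:
  name: demo-channel
---
# A Subscription that delivers every message on demo-channel to the
# Service named event-display -- no filtering is possible in this pattern.
apiVersion: messaging.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: demo-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display
```

Note that the Subscription references the Channel by kind and name, so swapping the in-memory Channel for, say, a Kafka-backed one changes only the channel reference, not the subscriber.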
Higher-level eventing constructs
In some cases you may want a set of cooperating functions to work together; for these use cases, Knative Eventing provides additional resources:
Source
A Source is the origin of events: it is how we define where events are generated and how they are delivered to interested parties. The Knative team has developed a number of sources that work out of the box. To name a few:
- GCP PubSub
Subscribes to a topic in Google Cloud Pub/Sub and listens for messages.
- Kubernetes Event
A feed of all the events occurring in a Kubernetes cluster.
- GitHub
Watches for events in a GitHub repository, such as pull requests, pushes, and the creation of releases.
- Container Source
If you need to create your own event source, Knative also provides an abstraction called the Container Source, which lets you easily build a custom event source packaged as a container. For details, see the "Building a Custom Event Source" chapter.
This list is only a portion of the whole, and the full list of sources is growing quickly. You can see the current list of event sources in the Knative Ecosystem section of the Knative Eventing documentation.
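As an illustration of the Container Source abstraction, a manifest might look like the following sketch. The image and resource names are hypothetical, and the exact schema of the sources.knative.dev API varies across releases:

```yaml
apiVersion: sources.knative.dev/v1alpha2
kind: ContainerSource
metadata:
  name: my-custom-source
spec:
  template:
    spec:
      containers:
        # Any container that emits CloudEvents over HTTP to the
        # sink URI injected by Knative can serve as the source.
        - name: source
          image: example.com/my-event-source:latest
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display
```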
Installation
1: Install the CRDs
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing-crds.yaml
You should see output like the following:
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created
2: Install the Eventing core components
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing-core.yaml
You should see output like the following:
namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-jobrunner created
clusterrolebinding.rbac.authorization.k8s.io/pingsource-jobrunner created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-jobrunner created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created
3: Install a Channel (messaging) layer
Knative supports several Channel implementations here: Apache Kafka, Google Cloud Pub/Sub, NATS, and In-Memory.
For this demonstration we use In-Memory. That implementation is nice because it is simple and standalone, but it is unsuitable for production use; in production, choose one of the other implementations.
The following command installs an implementation of Channel that runs in memory:
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/in-memory-channel.yaml
You should see output like the following:
configmap/config-imc-event-dispatcher created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
service/imc-dispatcher created
serviceaccount/imc-dispatcher created
serviceaccount/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created
deployment.apps/imc-controller created
deployment.apps/imc-dispatcher created
4: Install a Broker (eventing) layer
Once you have installed the Channels you want to use, configure the Broker by controlling which Channels are used. You can select a cluster-level default and override it per namespace or for a specific Broker. These are configured via the config-br-defaults ConfigMap in the knative-eventing namespace.
Two flavors are supported here: Channel-based, and MT-Channel-based (multi-tenant).
The MT-Channel-based (multi-tenant) Broker is the multi-tenant Broker implementation that Knative provides for event routing.
The following command installs the Channel-based Broker implementation:
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/channel-broker.yaml
To customize which Broker channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:
apiVersion: v1
data:
  default-br-config: |
    clusterDefault:
      brokerClass: ChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: config-br-default-channel
      namespace: knative-eventing
kind: ConfigMap
metadata:
  labels:
    eventing.knative.dev/release: v0.14.0
  name: config-br-defaults
  namespace: knative-eventing
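Assuming the same schema as the clusterDefault block above, the ConfigMap can also carry per-namespace overrides under a namespaceDefaults key (the namespace name below is hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  default-br-config: |
    # Cluster-wide default Broker class.
    clusterDefault:
      brokerClass: ChannelBasedBroker
    # Brokers created in the listed namespaces use these settings instead.
    namespaceDefaults:
      my-namespace:
        brokerClass: MTChannelBasedBroker
```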
5: Monitor the Knative components until all of them show a Running status:
kubectl get pods --namespace knative-eventing
NAME READY STATUS RESTARTS AGE
eventing-controller-5d866849fd-xm4lz 1/1 Running 0 157m
eventing-webhook-59489cddcf-ncvr4 1/1 Running 0 157m
imc-controller-76d5bfd958-857x6 1/1 Running 0 47m
imc-dispatcher-6bd7c74d7d-pvh8b 1/1 Running 0 47m
6: Optional extensions
The following extensions are supported; install them as needed:
- Enable Broker
- GitHub Source
- Apache Camel-K Source
- Apache Kafka Source
- GCP Sources
- Apache CouchDB Source
- VMware Sources and Bindings
Knative Hello World
The steps above perform a basic installation. The demo that follows requires additional resources, so to make it run properly, execute the following command:
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing.yaml
namespace/knative-eventing unchanged
serviceaccount/eventing-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator unchanged
serviceaccount/pingsource-jobrunner unchanged
clusterrolebinding.rbac.authorization.k8s.io/pingsource-jobrunner unchanged
serviceaccount/eventing-webhook unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding unchanged
configmap/config-br-default-channel unchanged
configmap/config-br-defaults unchanged
configmap/default-ch-webhook unchanged
configmap/config-leader-election unchanged
configmap/config-logging unchanged
configmap/config-observability unchanged
configmap/config-tracing unchanged
deployment.apps/eventing-controller unchanged
deployment.apps/eventing-webhook unchanged
service/eventing-webhook unchanged
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver configured
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter unchanged
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress unchanged
clusterrole.rbac.authorization.k8s.io/eventing-config-reader unchanged
clusterrole.rbac.authorization.k8s.io/channelable-manipulator configured
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-jobrunner unchanged
clusterrole.rbac.authorization.k8s.io/podspecable-binding configured
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding unchanged
clusterrole.rbac.authorization.k8s.io/source-observer configured
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev unchanged
secret/eventing-webhook-certs unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-channel-broker-controller created
customresourcedefinition.apiextensions.k8s.io/configmappropagations.configs.internal.knative.dev created
deployment.apps/broker-controller created
deployment.apps/broker-controller configured
customresourcedefinition.apiextensions.k8s.io/configmappropagations.configs.internal.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-channel-broker-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
serviceaccount/mt-broker-filter created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
serviceaccount/mt-broker-ingress created
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
deployment.apps/broker-filter created
service/broker-filter created
deployment.apps/broker-ingress created
service/broker-ingress created
deployment.apps/mt-broker-controller created
deployment.apps/broker-filter unchanged
service/broker-filter unchanged
deployment.apps/broker-ingress unchanged
service/broker-ingress unchanged
deployment.apps/mt-broker-controller unchanged
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
horizontalpodautoscaler.autoscaling/broker-ingress-hpa unchanged
horizontalpodautoscaler.autoscaling/broker-filter-hpa unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter unchanged
serviceaccount/mt-broker-filter unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress unchanged
serviceaccount/mt-broker-ingress unchanged
configmap/config-imc-event-dispatcher unchanged
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator unchanged
clusterrole.rbac.authorization.k8s.io/imc-controller unchanged
clusterrole.rbac.authorization.k8s.io/imc-dispatcher unchanged
service/imc-dispatcher unchanged
serviceaccount/imc-dispatcher unchanged
serviceaccount/imc-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/imc-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher unchanged
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev unchanged
deployment.apps/imc-controller unchanged
deployment.apps/imc-dispatcher configured
Before you can start managing events, you need to create the objects required to transport them.
Install Knative sources
Create and configure an Eventing namespace
In this section you create the event-example namespace and then add the knative-eventing-injection label to it. You can use namespaces to group together and organize your Knative resources, including the Eventing subcomponents.
1: Run the following command to create a namespace called event-example:
kubectl create namespace event-example
2: Add a label to your namespace with the following command:
kubectl label namespace event-example knative-eventing-injection=enabled
namespace/event-example labeled
This gives the event-example namespace the knative-eventing-injection label, which adds resources that can be used to manage events.
In the next section you will verify that the resources added here are running properly; then you can create the remaining Eventing resources needed to manage events.
Check the Broker health
The Broker ensures that every event sent by event producers arrives at the correct event consumers. The Broker was created when you labeled your namespace as event-ready, but it is important to verify that it is running properly. This guide uses the default Broker.
1: Run the following command to verify that the Broker is in a healthy state:
kubectl --namespace event-example get Broker default
This shows the Broker that you created:
NAME READY REASON URL AGE
default True http://default-broker.event-example.svc.cluster.local 64s
When the Broker is in the READY = True state, it can begin managing any events it receives.
2: If READY = False, wait two minutes and then rerun the command. If you continue to receive READY = False, see the Debugging Guide for help troubleshooting the issue.
Now that your Broker is ready to manage events, you can create and configure event producers and consumers.
Create event consumers
Your event consumers receive the events sent by event producers. In this step you create two event consumers, hello-display and goodbye-display, to demonstrate how to configure event producers to target a specific consumer.
1: To deploy the hello-display consumer to your cluster, run the following command:
kubectl --namespace event-example apply --filename - << END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: hello-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
# Service pointing at the previous Deployment. This will be the target for event
# consumption.
kind: Service
apiVersion: v1
metadata:
  name: hello-display
spec:
  selector:
    app: hello-display
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
END
2: To deploy the goodbye-display consumer to your cluster, run the following command:
kubectl --namespace event-example apply --filename - << END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: goodbye-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
# Service pointing at the previous Deployment. This will be the target for event
# consumption.
kind: Service
apiVersion: v1
metadata:
  name: goodbye-display
spec:
  selector:
    app: goodbye-display
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
END
3: Just as you did with the Broker, verify that your event consumers are working by running the following command:
kubectl --namespace event-example get deployments hello-display goodbye-display
You should see output like the following:
NAME READY UP-TO-DATE AVAILABLE AGE
hello-display 1/1 1 1 88s
goodbye-display 1/1 1 1 36s
The number of replicas in the READY column should match the number in the AVAILABLE column, which can take a few minutes. If the numbers still do not match after two minutes, see the Debugging Guide for help troubleshooting the issue.
Create Triggers
A Trigger defines the events that you want each event consumer to receive. Your Broker uses Triggers to forward events to the right consumers. Each Trigger can specify a filter that selects relevant events based on CloudEvent context attributes.
1: To create the first Trigger, run the following command:
kubectl --namespace event-example apply --filename - << END
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: hello-display
spec:
  filter:
    attributes:
      type: greeting
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: hello-display
END
This command creates a Trigger that sends all events of type greeting to the event consumer named hello-display.
2: To create the second Trigger, run the following command:
kubectl --namespace event-example apply --filename - << END
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: goodbye-display
spec:
  filter:
    attributes:
      source: sendoff
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: goodbye-display
END
This command creates a Trigger that sends all events from the source sendoff to the event consumer named goodbye-display.
3: Verify that the Triggers are working correctly by running the following command:
kubectl --namespace event-example get triggers
This returns the hello-display and goodbye-display Triggers that you created:
NAME READY REASON BROKER SUBSCRIBER_URI AGE
goodbye-display True default http://goodbye-display.event-example.svc.cluster.local/ 109s
hello-display True default http://hello-display.event-example.svc.cluster.local/ 3m
If the Triggers are configured correctly, they will be ready and point at the correct Broker (the default Broker) and SUBSCRIBER_URI (triggerName.namespaceName.svc.cluster.local). If that is not the case, see the Debugging Guide for help troubleshooting the issue.
You have now created all of the resources needed to receive and manage events: a Broker that, via Triggers, manages the events sent to your event consumers. In the next section you will create an event producer, which will be used to create events.
Create event producers
In this section you create an event producer that you can use to interact with the Knative Eventing subcomponents created earlier. Most events are created by systems, but this guide uses curl to manually send individual events and demonstrates how the correct event consumers receive them. Because the Broker can only be accessed from within the Eventing cluster, you must create a Pod inside that cluster to act as the event producer.
In the next step, you will create a Pod that runs the curl command to send events to the Broker in the Eventing cluster.
Run the following command to create the Pod:
kubectl --namespace event-example apply --filename - << END
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curl
  name: curl
spec:
  containers:
    # This could be any image that we can SSH into and has curl.
    - image: radial/busyboxplus:curl
      imagePullPolicy: IfNotPresent
      name: curl
      resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
END
Now that your Eventing cluster is set up to send and consume events, the next section uses HTTP requests to manually send individual events and demonstrates how each event targets an individual event consumer.
Send Events to the Broker
Now that you have created the Pod, you can create an event by sending an HTTP request to the Broker. SSH into the Pod by running the following command:
kubectl --namespace event-example attach curl -it
You are now inside the Pod and can make an HTTP request. A prompt similar to the following will appear:
Defaulting container name to curl.
Use 'kubectl describe pod/ -n event-example' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
To show the various kinds of events you can send, you will make three requests:
1: To make the first request, which creates an event of type greeting, run the following command in the SSH terminal:
curl -v "http://default-broker.event-example.svc.cluster.local" \
-X POST \
-H "Ce-Id: say-hello" \
-H "Ce-Specversion: 0.3" \
-H "Ce-Type: greeting" \
-H "Ce-Source: not-sendoff" \
-H "Content-Type: application/json" \
-d '{"msg":"Hello Knative!"}'
When the Broker receives your event, hello-display is activated and the event is sent to the event consumer of the same name.
If the event was received, you will get a 202 Accepted response similar to the following:
< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:16:43 GMT
2: To make the second request, which creates an event with the source sendoff, run the following command in the SSH terminal:
curl -v "http://default-broker.event-example.svc.cluster.local" \
-X POST \
-H "Ce-Id: say-goodbye" \
-H "Ce-Specversion: 0.3" \
-H "Ce-Type: not-greeting" \
-H "Ce-Source: sendoff" \
-H "Content-Type: application/json" \
-d '{"msg":"Goodbye Knative!"}'
When the Broker receives your event, goodbye-display is activated and the event is sent to the event consumer of the same name.
If the event was received, you will get a 202 Accepted response similar to the following:
< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:18:04 GMT
3: To make the third request, which creates an event of type greeting and source sendoff, run the following command in the SSH terminal:
curl -v "http://default-broker.event-example.svc.cluster.local" \
-X POST \
-H "Ce-Id: say-hello-goodbye" \
-H "Ce-Specversion: 0.3" \
-H "Ce-Type: greeting" \
-H "Ce-Source: sendoff" \
-H "Content-Type: application/json" \
-d '{"msg":"Hello Knative! Goodbye Knative!"}'
When the Broker receives your event, both hello-display and goodbye-display are activated and the event is sent to the event consumers of those names.
If the event was received, you will get a 202 Accepted response similar to the following:
< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:19:13 GMT
4: Exit SSH by typing exit at the command prompt.
You have sent two events to the hello-display event consumer and two events to the goodbye-display event consumer (note that say-hello-goodbye activates the triggers of both hello-display and goodbye-display). You will verify that these events were received correctly in the next section.
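Because the attributes in a Trigger filter are ANDed together, a single Trigger could also be written that fires only for events like say-hello-goodbye, i.e. those matching both type greeting and source sendoff (the Trigger name below is hypothetical):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: hello-goodbye-only
spec:
  filter:
    attributes:
      type: greeting   # both attributes must match for delivery
      source: sendoff
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: hello-display
```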
Check the received events
After sending the events, verify that the appropriate Subscribers received them.
1: View the logs of the hello-display event consumer by running the following command:
kubectl --namespace event-example logs -l app=hello-display --tail=100
This returns the attributes and data of the events sent to hello-display:
☁️ cloudevents.Event
Validation: valid
Context Attributes,
specversion: 0.3
type: greeting
source: not-sendoff
id: say-hello
time: 2020-05-14T10:16:43.273679173Z
datacontenttype: application/json
Extensions,
knativearrivaltime: 2020-05-14T10:16:43.273622736Z
knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
traceparent: 00-9290fc892758739bcaddf3b18863c5ec-bff2f00567d9e675-00
Data,
{
"msg": "Hello Knative!"
}
☁️ cloudevents.Event
Validation: valid
Context Attributes,
specversion: 0.3
type: greeting
source: sendoff
id: say-hello-goodbye
time: 2020-05-14T10:19:13.68289106Z
datacontenttype: application/json
Extensions,
knativearrivaltime: 2020-05-14T10:19:13.682844758Z
knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
traceparent: 00-ec47a6944893a7aeea50f449a48ecc47-7908f8cb59970c45-00
Data,
{
"msg": "Hello Knative! Goodbye Knative!"
}
2: View the logs of the goodbye-display event consumer by running the following command:
kubectl --namespace event-example logs -l app=goodbye-display --tail=100
This returns the attributes and data of the events sent to goodbye-display:
☁️ cloudevents.Event
Validation: valid
Context Attributes,
specversion: 0.3
type: not-greeting
source: sendoff
id: say-goodbye
time: 2020-05-14T10:18:04.583499078Z
datacontenttype: application/json
Extensions,
knativearrivaltime: 2020-05-14T10:18:04.583452664Z
knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
traceparent: 00-fa9715c616db95172fe0bbcacb5cf3b7-7ee4dd9a3256ec64-00
Data,
{
"msg": "Goodbye Knative!"
}
☁️ cloudevents.Event
Validation: valid
Context Attributes,
specversion: 0.3
type: greeting
source: sendoff
id: say-hello-goodbye
time: 2020-05-14T10:19:13.68289106Z
datacontenttype: application/json
Extensions,
knativearrivaltime: 2020-05-14T10:19:13.682844758Z
knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
traceparent: 00-ec47a6944893a7aeea50f449a48ecc47-b53a2568b55b2736-00
Data,
{
"msg": "Hello Knative! Goodbye Knative!"
}