Hands-on with the Officially Released OSM v1.0.0

Published by 張善友 on 2022-02-03

OSM 1.0 RC was released in October 2021 [1], and over the past few months OSM's contributors have been working toward the v1.0.0 release. On February 1, 2022, the OSM team officially shipped version 1.0.0 [2]. OSM has come a long way since its initial release, with the team staying focused on the key capabilities the community needs. Open Service Mesh (OSM) is a lightweight, extensible service mesh that aims to manage and secure APIs inside a Kubernetes cluster by bringing simplicity and reducing complexity. It is built on the Envoy proxy, injecting it as a sidecar container into each observable application; the sidecar then handles traffic management, routing policy, metrics capture, and more.

Microsoft donated Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure it is community-led with open governance; OSM is currently still a sandbox project.

Version 1.0 already supports running OSM in multi-cluster and hybrid environments. Some of the new features in 1.0:

  • A new internal control-plane event management framework to handle changes to the Kubernetes cluster and to policies
  • Validation that rejects/ignores invalid SMI TrafficTarget resources
  • Improved control-plane memory utilization; OSM can now autoscale based on memory usage
  • Support for TCP server-first protocols for in-mesh traffic. appProtocol: tcp-server-first can now be specified on service ports in the mesh, in addition to services specified in Egress policies, reducing latency for protocols such as MySQL and PostgreSQL (see the sketch after this list)
  • The Grafana dashboards shipped with OSM are more accurate and consistent
  • The OSM control-plane images are now multi-architecture, supporting linux/amd64 and linux/arm64
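
A minimal sketch of the server-first setting called out above: a Kubernetes Service port declares the protocol via appProtocol. The service name and namespace here are hypothetical.

# Hypothetical MySQL service inside the mesh; appProtocol: tcp-server-first
# tells OSM the server speaks first, so the sidecar does not hold the
# connection waiting for initial client bytes.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: bookwarehouse
spec:
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
    appProtocol: tcp-server-first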

The osm CLI has also seen several improvements since the last release.

  • The osm support bug-report command, which collects logs and other information useful for debugging, can now gather logs from OSM's control plane in addition to pods inside the mesh.
  • For users who manage OSM's lifecycle without Helm, the osm uninstall command now supports optionally cleaning up the CustomResourceDefinitions, webhook configurations, and other resources created by the control plane, to simplify uninstallation.
  • The osm version command now shows both the OSM version installed on the cluster and the version of the CLI.
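
A quick sketch of these CLI improvements; the bug-report flags shown are assumptions, so confirm them with osm support bug-report --help:

# Shows both the CLI version and the mesh version installed on the cluster.
osm version

# Collect a bug report for selected namespaces; --app-namespaces and
# --out-file are assumed flag names, verify with --help.
osm support bug-report --app-namespaces bookbuyer,bookstore --out-file osm-bug-report.tar.gz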

Check out our freshly updated documentation site [3] to learn more about features, demos, and architecture.

Notable Features

Compared with Istio, Open Service Mesh really is lightweight. Through SMI it handles all the standard service mesh features you would expect, including securing service-to-service communication with mTLS, managing access control policies, service monitoring, and more.

  • Define and enforce fine-grained access control policies for services, built on the Service Mesh Interface (SMI), chiefly Traffic Access Control, Traffic Specs, Traffic Split, and Traffic Metrics;

  • Secure service-to-service communication by enabling mutual TLS (mTLS);

  • Define and enforce access control policies between services;

  • Observability through Prometheus and Grafana;

  • Integrates with external certificate management services;

  • Automatically injects the Envoy sidecar proxy to bring applications into the OSM mesh;

Getting Started

Here I am using Rancher Desktop [4] as my local lab environment to try it out first-hand.

Installation is very simple. Following the docs [5], download the pre-built binary from the Releases page and add the binary to your $PATH.

wget https://github.com/openservicemesh/osm/releases/download/v1.0.0/osm-v1.0.0-windows-amd64.zip -O osm.zip
unzip osm.zip
osm.exe version
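
Since this walkthrough runs in PowerShell on Windows, putting the binary on the PATH for the current session might look like this; the install directory is a placeholder:

# Hypothetical directory; point this at wherever osm.exe was unzipped.
$env:Path += ";C:\tools\osm"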

The command below shows how to install OSM on a Kubernetes cluster. It enables the Prometheus, Grafana, and Jaeger integrations. The osm.enablePermissiveTrafficPolicy chart parameter in the values.yaml file tells OSM to ignore any policies and let traffic flow freely between pods. In OSM's permissive traffic policy mode, SMI traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are part of the mesh and programs traffic policy rules on each Envoy sidecar proxy so it can communicate with those services.

osm install  --mesh-name "osm-system" --osm-namespace "osm" --set=osm.enablePermissiveTrafficPolicy=true --set=osm.deployPrometheus=true  --set=osm.deployGrafana=true   --set=osm.deployJaeger=true
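
For comparison, OSM also ships as a Helm chart, so an equivalent install might look roughly like the following; the repo URL and value names are assumptions based on the OSM docs, so treat this as a sketch and verify them:

# Assumed chart repo; verify against the OSM documentation.
helm repo add osm https://openservicemesh.github.io/osm
helm install osm osm/osm --namespace osm --create-namespace --set osm.enablePermissiveTrafficPolicy=true --set osm.deployPrometheus=true --set osm.deployGrafana=true --set osm.deployJaeger=true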

After the default installation completes, there are six pods in the osm namespace (note the mesh is named osm-system, but --osm-namespace placed the control plane in osm):

image

The screenshot above uses Lens (https://k8slens.dev/). Briefly: Lens is a powerful IDE for Kubernetes that lets you view cluster state and stream logs in real time, making troubleshooting easy. With Lens you can work with your cluster far more conveniently, fundamentally improving productivity and iteration speed. Lens can manage multiple clusters, using its built-in kubectl and your kubeconfig to access both local and external clusters (EKS, AKS, GKE, Pharos, UCP, Rancher, and so on; even OpenShift is supported).

  • osm-controller: the OSM controller;

  • osm-grafana: dashboards, which can be brought up with the osm dashboard command;

  • osm-prometheus: metrics collection;

  • osm-injector: the sidecar injector;

  • osm-bootstrap: bootstrapping;

  • jaeger: distributed tracing.

Check the OSM controller Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-controller

A healthy OSM controller looks like this:

image

Check the OSM injector Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-injector

A healthy OSM injector looks like this:

image

Check the OSM bootstrap Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-bootstrap

image

Check the validating webhook and the mutating webhook:

kubectl get ValidatingWebhookConfiguration --selector app=osm-controller

A healthy OSM validating webhook looks like this:

image
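
The heading above also mentions the mutating webhook. Assuming the injector labels its MutatingWebhookConfiguration with app=osm-injector, it can be checked the same way:

# Assumed selector; verify against the webhook configurations in your cluster.
kubectl get MutatingWebhookConfiguration --selector app=osm-injector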

Check the validating webhook's service and CA bundle:

kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm-system -o json | jq '.webhooks[0].clientConfig.service'

A correctly configured validating webhook looks like this:

{
  "name": "osm-validator",
  "namespace": "osm",
  "path": "/validate",
  "port": 9093
}

Check the osm-mesh-config resource

Check that the MeshConfig resource exists: kubectl get meshconfig osm-mesh-config -n osm

Inspect the contents of the OSM MeshConfig:

kubectl get meshconfig osm-mesh-config -n osm -o yaml

PS C:\Users\zsygz> kubectl get meshconfig osm-mesh-config -n osm -o yaml
apiVersion: config.openservicemesh.io/v1alpha1
kind: MeshConfig
metadata:
   creationTimestamp: "2022-02-03T07:47:42Z"
   generation: 1
   name: osm-mesh-config
   namespace: osm
   resourceVersion: "230958"
   uid: 2701cf39-02dd-4d8d-b920-30120f52dc66
spec:
   certificate:
     certKeyBitSize: 2048
     serviceCertValidityDuration: 24h
   featureFlags:
     enableAsyncProxyServiceMapping: false
     enableEgressPolicy: true
     enableEnvoyActiveHealthChecks: false
     enableIngressBackendPolicy: true
     enableMulticlusterMode: false
     enableRetryPolicy: false
     enableSnapshotCacheMode: false
     enableWASMStats: true
   observability:
     enableDebugServer: false
     osmLogLevel: info
     tracing:
       enable: false
   sidecar:
     configResyncInterval: 0s
     enablePrivilegedInitContainer: false
     logLevel: error
     resources: {}
   traffic:
     enableEgress: false
     enablePermissiveTrafficPolicyMode: true
     inboundExternalAuthorization:
       enable: false
       failureModeAllow: false
       statPrefix: inboundExtAuthz
       timeout: 1s
     inboundPortExclusionList: []
     outboundIPRangeExclusionList: []
     outboundPortExclusionList: []
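
To read a single field instead of the whole MeshConfig, a jsonpath query works; for example, the permissive-mode flag shown above:

kubectl get meshconfig osm-mesh-config -n osm -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'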

Along with a long list of CRDs (this cluster also carries Rancher, Traefik, cert-manager, and Dapr CRDs; the OSM and SMI CRDs appear at the end of the list):

PS C:\Users\zsygz> kubectl get crds -n osm
NAME                                                            CREATED AT
addons.k3s.cattle.io                                            2022-01-03T02:00:57Z
helmcharts.helm.cattle.io                                       2022-01-03T02:00:57Z
helmchartconfigs.helm.cattle.io                                 2022-01-03T02:00:57Z
middlewaretcps.traefik.containo.us                              2022-01-03T02:03:26Z
ingressrouteudps.traefik.containo.us                            2022-01-03T02:03:26Z
tlsstores.traefik.containo.us                                   2022-01-03T02:03:26Z
serverstransports.traefik.containo.us                           2022-01-03T02:03:26Z
traefikservices.traefik.containo.us                             2022-01-03T02:03:26Z
ingressroutetcps.traefik.containo.us                            2022-01-03T02:03:26Z
middlewares.traefik.containo.us                                 2022-01-03T02:03:26Z
tlsoptions.traefik.containo.us                                  2022-01-03T02:03:26Z
ingressroutes.traefik.containo.us                               2022-01-03T02:03:26Z
challenges.acme.cert-manager.io                                 2022-01-03T10:05:42Z
certificaterequests.cert-manager.io                             2022-01-03T10:05:42Z
clusterissuers.cert-manager.io                                  2022-01-03T10:05:42Z
issuers.cert-manager.io                                         2022-01-03T10:05:42Z
orders.acme.cert-manager.io                                     2022-01-03T10:05:42Z
certificates.cert-manager.io                                    2022-01-03T10:05:42Z
features.management.cattle.io                                   2022-01-03T11:35:16Z
navlinks.ui.cattle.io                                           2022-01-03T11:35:19Z
clusters.management.cattle.io                                   2022-01-03T11:35:20Z
apiservices.management.cattle.io                                2022-01-03T11:35:20Z
clusterregistrationtokens.management.cattle.io                  2022-01-03T11:35:20Z
settings.management.cattle.io                                   2022-01-03T11:35:20Z
preferences.management.cattle.io                                2022-01-03T11:35:20Z
clusterrepos.catalog.cattle.io                                  2022-01-03T11:35:20Z
operations.catalog.cattle.io                                    2022-01-03T11:35:20Z
apps.catalog.cattle.io                                          2022-01-03T11:35:20Z
fleetworkspaces.management.cattle.io                            2022-01-03T11:35:20Z
managedcharts.management.cattle.io                              2022-01-03T11:35:20Z
clusters.provisioning.cattle.io                                 2022-01-03T11:35:21Z
rkeclusters.rke.cattle.io                                       2022-01-03T11:35:21Z
rkecontrolplanes.rke.cattle.io                                  2022-01-03T11:35:21Z
rkebootstraps.rke.cattle.io                                     2022-01-03T11:35:21Z
rkebootstraptemplates.rke.cattle.io                             2022-01-03T11:35:21Z
custommachines.rke.cattle.io                                    2022-01-03T11:35:21Z
clusters.cluster.x-k8s.io                                       2022-01-03T11:35:21Z
machinedeployments.cluster.x-k8s.io                             2022-01-03T11:35:21Z
machinehealthchecks.cluster.x-k8s.io                            2022-01-03T11:35:21Z
machines.cluster.x-k8s.io                                       2022-01-03T11:35:22Z
machinesets.cluster.x-k8s.io                                    2022-01-03T11:35:22Z
authconfigs.management.cattle.io                                2022-01-03T11:35:22Z
groupmembers.management.cattle.io                               2022-01-03T11:35:22Z
groups.management.cattle.io                                     2022-01-03T11:35:22Z
tokens.management.cattle.io                                     2022-01-03T11:35:22Z
userattributes.management.cattle.io                             2022-01-03T11:35:22Z
users.management.cattle.io                                      2022-01-03T11:35:22Z
catalogs.management.cattle.io                                   2022-01-03T11:35:23Z
clusterroletemplatebindings.management.cattle.io                2022-01-03T11:35:23Z
catalogtemplates.management.cattle.io                           2022-01-03T11:35:23Z
dynamicschemas.management.cattle.io                             2022-01-03T11:35:23Z
catalogtemplateversions.management.cattle.io                    2022-01-03T11:35:23Z
etcdbackups.management.cattle.io                                2022-01-03T11:35:23Z
clusteralerts.management.cattle.io                              2022-01-03T11:35:23Z
globalrolebindings.management.cattle.io                         2022-01-03T11:35:23Z
clusteralertgroups.management.cattle.io                         2022-01-03T11:35:23Z
clustercatalogs.management.cattle.io                            2022-01-03T11:35:23Z
globalroles.management.cattle.io                                2022-01-03T11:35:23Z
clusterloggings.management.cattle.io                            2022-01-03T11:35:23Z
kontainerdrivers.management.cattle.io                           2022-01-03T11:35:23Z
clusteralertrules.management.cattle.io                          2022-01-03T11:35:23Z
apps.project.cattle.io                                          2022-01-03T11:35:23Z
nodedrivers.management.cattle.io                                2022-01-03T11:35:23Z
clustermonitorgraphs.management.cattle.io                       2022-01-03T11:35:23Z
clusterscans.management.cattle.io                               2022-01-03T11:35:23Z
apprevisions.project.cattle.io                                  2022-01-03T11:35:23Z
pipelineexecutions.project.cattle.io                            2022-01-03T11:35:23Z
nodepools.management.cattle.io                                  2022-01-03T11:35:23Z
nodetemplates.management.cattle.io                              2022-01-03T11:35:23Z
pipelinesettings.project.cattle.io                              2022-01-03T11:35:23Z
composeconfigs.management.cattle.io                             2022-01-03T11:35:23Z
nodes.management.cattle.io                                      2022-01-03T11:35:23Z
podsecuritypolicytemplateprojectbindings.management.cattle.io   2022-01-03T11:35:24Z
multiclusterapps.management.cattle.io                           2022-01-03T11:35:24Z
pipelines.project.cattle.io                                     2022-01-03T11:35:23Z
podsecuritypolicytemplates.management.cattle.io                 2022-01-03T11:35:24Z
sourcecodecredentials.project.cattle.io                         2022-01-03T11:35:24Z
multiclusterapprevisions.management.cattle.io                   2022-01-03T11:35:24Z
projectnetworkpolicies.management.cattle.io                     2022-01-03T11:35:24Z
sourcecodeproviderconfigs.project.cattle.io                     2022-01-03T11:35:24Z
monitormetrics.management.cattle.io                             2022-01-03T11:35:24Z
sourcecoderepositories.project.cattle.io                        2022-01-03T11:35:24Z
notifiers.management.cattle.io                                  2022-01-03T11:35:24Z
projectroletemplatebindings.management.cattle.io                2022-01-03T11:35:24Z
projects.management.cattle.io                                   2022-01-03T11:35:24Z
projectalerts.management.cattle.io                              2022-01-03T11:35:24Z
projectalertgroups.management.cattle.io                         2022-01-03T11:35:24Z
rkek8ssystemimages.management.cattle.io                         2022-01-03T11:35:24Z
projectcatalogs.management.cattle.io                            2022-01-03T11:35:24Z
projectloggings.management.cattle.io                            2022-01-03T11:35:24Z
rkek8sserviceoptions.management.cattle.io                       2022-01-03T11:35:24Z
projectalertrules.management.cattle.io                          2022-01-03T11:35:24Z
rkeaddons.management.cattle.io                                  2022-01-03T11:35:24Z
roletemplates.management.cattle.io                              2022-01-03T11:35:24Z
projectmonitorgraphs.management.cattle.io                       2022-01-03T11:35:24Z
samltokens.management.cattle.io                                 2022-01-03T11:35:24Z
clustertemplates.management.cattle.io                           2022-01-03T11:35:24Z
clustertemplaterevisions.management.cattle.io                   2022-01-03T11:35:24Z
cisconfigs.management.cattle.io                                 2022-01-03T11:35:24Z
cisbenchmarkversions.management.cattle.io                       2022-01-03T11:35:24Z
templates.management.cattle.io                                  2022-01-03T11:35:24Z
templateversions.management.cattle.io                           2022-01-03T11:35:24Z
templatecontents.management.cattle.io                           2022-01-03T11:35:24Z
globaldnses.management.cattle.io                                2022-01-03T11:35:24Z
globaldnsproviders.management.cattle.io                         2022-01-03T11:35:24Z
prometheuses.monitoring.coreos.com                              2022-01-03T11:35:29Z
prometheusrules.monitoring.coreos.com                           2022-01-03T11:35:29Z
alertmanagers.monitoring.coreos.com                             2022-01-03T11:35:29Z
servicemonitors.monitoring.coreos.com                           2022-01-03T11:35:29Z
azureconfigs.rke-machine-config.cattle.io                       2022-01-03T11:35:32Z
vmwarevsphereconfigs.rke-machine-config.cattle.io               2022-01-03T11:35:32Z
digitaloceanconfigs.rke-machine-config.cattle.io                2022-01-03T11:35:32Z
harvesterconfigs.rke-machine-config.cattle.io                   2022-01-03T11:35:32Z
linodeconfigs.rke-machine-config.cattle.io                      2022-01-03T11:35:32Z
amazonec2configs.rke-machine-config.cattle.io                   2022-01-03T11:35:32Z
digitaloceanmachines.rke-machine.cattle.io                      2022-01-03T11:35:32Z
azuremachines.rke-machine.cattle.io                             2022-01-03T11:35:32Z
linodemachines.rke-machine.cattle.io                            2022-01-03T11:35:32Z
vmwarevspheremachines.rke-machine.cattle.io                     2022-01-03T11:35:32Z
harvestermachines.rke-machine.cattle.io                         2022-01-03T11:35:32Z
amazonec2machines.rke-machine.cattle.io                         2022-01-03T11:35:32Z
digitaloceanmachinetemplates.rke-machine.cattle.io              2022-01-03T11:35:32Z
azuremachinetemplates.rke-machine.cattle.io                     2022-01-03T11:35:32Z
linodemachinetemplates.rke-machine.cattle.io                    2022-01-03T11:35:32Z
amazonec2machinetemplates.rke-machine.cattle.io                 2022-01-03T11:35:32Z
vmwarevspheremachinetemplates.rke-machine.cattle.io             2022-01-03T11:35:32Z
harvestermachinetemplates.rke-machine.cattle.io                 2022-01-03T11:35:32Z
bundles.fleet.cattle.io                                         2022-01-03T11:35:20Z
bundledeployments.fleet.cattle.io                               2022-01-03T11:36:37Z
bundlenamespacemappings.fleet.cattle.io                         2022-01-03T11:36:37Z
clustergroups.fleet.cattle.io                                   2022-01-03T11:36:37Z
clusters.fleet.cattle.io                                        2022-01-03T11:35:20Z
clusterregistrationtokens.fleet.cattle.io                       2022-01-03T11:36:37Z
gitrepos.fleet.cattle.io                                        2022-01-03T11:36:37Z
clusterregistrations.fleet.cattle.io                            2022-01-03T11:36:37Z
gitreporestrictions.fleet.cattle.io                             2022-01-03T11:36:37Z
contents.fleet.cattle.io                                        2022-01-03T11:36:37Z
imagescans.fleet.cattle.io                                      2022-01-03T11:36:37Z
gitjobs.gitjob.cattle.io                                        2022-01-03T11:36:37Z
components.dapr.io                                              2022-01-07T10:13:43Z
configurations.dapr.io                                          2022-01-07T10:13:44Z
subscriptions.dapr.io                                           2022-01-07T10:13:45Z
meshconfigs.config.openservicemesh.io                           2022-02-03T07:46:15Z
multiclusterservices.config.openservicemesh.io                  2022-02-03T07:46:15Z
egresses.policy.openservicemesh.io                              2022-02-03T07:46:15Z
trafficsplits.split.smi-spec.io                                 2022-02-03T07:46:15Z
tcproutes.specs.smi-spec.io                                     2022-02-03T07:46:15Z
ingressbackends.policy.openservicemesh.io                       2022-02-03T07:46:15Z
traffictargets.access.smi-spec.io                               2022-02-03T07:46:15Z
httproutegroups.specs.smi-spec.io                               2022-02-03T07:46:15Z

Use the following command to list the mesh and the installed SMI CRD versions:

PS C:\Users\zsygz> osm mesh list

MESH NAME    MESH NAMESPACE   VERSION   ADDED NAMESPACES
osm-system   osm              v1.0.0

MESH NAME    MESH NAMESPACE   SMI SUPPORTED
osm-system   osm              HTTPRouteGroup:v1alpha4,TCPRoute:v1alpha4,TrafficSplit:v1alpha2,TrafficTarget:v1alpha3

To list the OSM controller pods for a mesh, please run the following command passing in the mesh's namespace
         kubectl get pods -n <osm-mesh-namespace> -l app=osm-controller


In Practice

Now let's deploy an application to test what OSM means by Observable: users choose which applications (namespaces) fall under OSM's management, and OSM watches only those applications, leaving everything else untouched!

  • Create the namespaces for the experiment, and bring them under OSM's management with osm namespace add:

kubectl create namespace bookstore
kubectl create namespace bookbuyer
kubectl create namespace bookthief
kubectl create namespace bookwarehouse

osm namespace add bookstore --mesh-name=osm-system
osm namespace add bookbuyer --mesh-name=osm-system
osm namespace add bookthief --mesh-name=osm-system
osm namespace add bookwarehouse --mesh-name=osm-system

osm metrics enable --namespace bookstore
osm metrics enable --namespace bookbuyer
osm metrics enable --namespace bookthief
osm metrics enable --namespace bookwarehouse

Each of the four namespaces is now labeled openservicemesh.io/monitored-by: osm-system (the mesh name) and annotated openservicemesh.io/sidecar-injection: enabled. The OSM controller notices the label and annotation on these namespaces and starts injecting all new pods with Envoy sidecars.
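
You can verify the label and annotation directly, for example:

# Show the labels and annotations OSM placed on one of the namespaces.
kubectl get namespace bookbuyer --show-labels
kubectl get namespace bookbuyer -o jsonpath='{.metadata.annotations}'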

  • Deploy the demo applications

PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookbuyer.yaml
serviceaccount/bookbuyer created
deployment.apps/bookbuyer created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookthief.yaml
serviceaccount/bookthief created
deployment.apps/bookthief created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookstore.yaml
service/bookstore created
serviceaccount/bookstore created
deployment.apps/bookstore created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookwarehouse.yaml
serviceaccount/bookwarehouse created
service/bookwarehouse created
deployment.apps/bookwarehouse created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/mysql.yaml
serviceaccount/mysql created
service/mysql created
statefulset.apps/mysql created

PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookstore-v2.yaml
service/bookstore-v2 created
serviceaccount/bookstore-v2 created
deployment.apps/bookstore-v2 created
traffictarget.access.smi-spec.io/bookstore-v2 created

Check the installed resources with the following commands:

kubectl get pods,deployments,serviceaccounts -n bookbuyer
kubectl get pods,deployments,serviceaccounts -n bookthief

kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookstore
kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookwarehouse

image

The experiment creates a Kubernetes service account for each application. The service account serves as the application's identity and will be used later in the demo to create service-to-service access control policies.

  • Local access

You can access the apps we just deployed locally with kubectl port-forward. We can also do it through Rancher Desktop:

image
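
If you prefer the CLI, the equivalent port-forward might look like this; the container port 14001 is an assumption based on the OSM demo manifests:

# Forward local port 62300 to the bookbuyer UI (container port assumed: 14001).
kubectl port-forward -n bookbuyer deploy/bookbuyer 62300:14001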

Visit http://localhost:62300/ to see the sample project. For example:

image

Running osm dashboard --osm-namespace=osm launches your local browser and opens Grafana through a port-forward.

PS C:\Users\zsygz> osm dashboard --osm-namespace=osm
[+] Starting Dashboard forwarding
[+] Issuing open browser http://localhost:3000

Grafana's default login username and password are admin/admin.

image

  • Access control policies

Once the applications are up and running, they can interact with each other in either permissive traffic policy mode or SMI traffic policy mode. In permissive mode, traffic between application services is configured automatically by osm-controller, and access control policies defined via SMI Traffic Targets are not enforced. In SMI policy mode, all traffic is denied by default unless explicitly allowed by a combination of SMI access and routing policies.

The --set=osm.enablePermissiveTrafficPolicy=true flag we passed when installing OSM earlier enabled permissive traffic policy mode, allowing applications to connect to each other without SMI traffic access policies.

kubectl edit meshconfig -n osm

Set spec.traffic.enablePermissiveTrafficPolicyMode to false and save; this disables permissive traffic policy mode and enables SMI traffic policies.
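
The same change can be made non-interactively with a merge patch:

# Flip the MeshConfig field shown earlier without opening an editor.
kubectl patch meshconfig osm-mesh-config -n osm --type=merge -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}'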

SMI traffic policies can be used for the following:

  1. SMI access control policies, to authorize traffic between service identities
  2. SMI traffic spec policies, to define routing rules to associate with access control policies
  3. SMI traffic split policies, to direct client traffic across multiple backends according to weights (see the sketch after the TrafficTarget example below)

Let's now deploy the SMI TrafficTarget and HTTPRouteGroup policies:

kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/access/traffic-access-v1.yaml

kind: TrafficTarget
apiVersion: access.smi-spec.io/v1alpha3
metadata:
  name: bookstore
  namespace: bookstore
spec:
  destination:
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  rules:
  - kind: HTTPRouteGroup
    name: bookstore-service-routes
    matches:
    - buy-a-book
    - books-bought
  sources:
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
---
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: bookstore-service-routes
  namespace: bookstore
spec:
  matches:
  - name: books-bought
    pathRegex: /books-bought
    methods:
    - GET
    headers:
    - "user-agent": ".*-http-client/*.*"
    - "client-app": "bookbuyer"
  - name: buy-a-book
    pathRegex: ".*a-book.*new"
    methods:
    - GET

This defines two SMI resources, a TrafficTarget and an HTTPRouteGroup, which control inbound traffic; once applied, they allow access to the corresponding service.
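
As a sketch of the third policy type above, a weighted TrafficSplit between the two bookstore versions deployed earlier might look like this; the root-service naming and the weights are illustrative, using the SMI v1alpha2 version reported by osm mesh list:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  # Root service that clients address; traffic is split across the backends.
  service: bookstore.bookstore
  backends:
  - service: bookstore
    weight: 75
  - service: bookstore-v2
    weight: 25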

Cleanup

List all namespaces that belong to the osm-system mesh:

osm ns list --mesh-name=osm-system

Remove the namespaces from the mesh:

osm namespace remove bookbuyer --mesh-name=osm-system
osm namespace remove bookstore --mesh-name=osm-system
osm namespace remove bookthief --mesh-name=osm-system
osm namespace remove bookwarehouse --mesh-name=osm-system

Restart the deployments to remove the Envoy sidecars:

kubectl rollout restart deployment bookbuyer -n bookbuyer
kubectl rollout restart deployment bookstore -n bookstore
kubectl rollout restart deployment bookthief -n bookthief
kubectl rollout restart deployment bookwarehouse -n bookwarehouse

Uninstall OSM from the Kubernetes cluster:

osm uninstall mesh --mesh-name=osm-system --osm-namespace=osm

Summary

Open Service Mesh is, by comparison, genuinely lightweight. The access control, traffic splitting, and related features you need are driven by SMI resources you create yourself, and Dapr plus OSM makes an excellent combination for practicing a multi-runtime architecture.

image

References

[1] First release candidate: https://github.com/openservicemesh/osm/releases/tag/v1.0.0-rc.1

[2] First 1.0 stable release: https://github.com/openservicemesh/osm/releases/tag/v1.0.0

[3] Documentation site: https://docs.openservicemesh.io/

[4] Running K8s on the desktop with Rancher Desktop: https://www.cnblogs.com/shanyou/p/15759035.html

[5] Setting up OSM: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/setup_osm/
