A One-Stop Guide to Offline Deployment of KubeSphere 3.4.1 and Kubernetes v1.28 with KubeKey 3.1.1

Published by kubesphere on 2024-05-23

This article explains in detail how to use KubeKey on openEuler 22.03 LTS SP3 to build an offline installation package for KubeSphere and Kubernetes, and then walks through deploying a KubeSphere 3.4.1 and Kubernetes v1.28.8 cluster in practice.

Server configuration for this walkthrough (the architecture is a 1:1 replica of a small production environment; the specs differ slightly)

Host Name      IP             CPU (cores)   Memory (GB)   System Disk (GB)   Data Disk (GB)   Purpose
ksp-master-1   192.168.9.91   8             16            40                 100              Offline environment KubeSphere/k8s-master
ksp-master-2   192.168.9.92   8             16            40                 100              Offline environment KubeSphere/k8s-master
ksp-master-3   192.168.9.93   8             16            40                 100              Offline environment KubeSphere/k8s-master
ksp-registry   192.168.9.90   4             8             40                 100              Offline environment deploy node and image registry node (Harbor)
ksp-deploy     192.168.9.89   4             8             40                 100              Internet-connected host for building the offline packages
Total (5)                     32            64            200                500

Software versions used in this environment

  • OS: openEuler 22.03 LTS SP3

  • KubeSphere: v3.4.1

  • Kubernetes: v1.28.8

  • KubeKey: v3.1.1

  • Harbor: v2.10.1

1. Building the Offline Deployment Resources

This article adds an internet-connected node, ksp-deploy, which is used to build the offline deployment package.

Download the latest KubeKey (v3.1.1) on that node. The available KubeKey version numbers can be found on the KubeKey releases page.

1.1 Download KubeKey

  • Download the latest KubeKey:
cd ~
mkdir kubekey
cd kubekey/

# Use the CN download zone (for when access to GitHub is restricted)
export KKZONE=cn

# Run the download command to fetch the latest kk (depending on the network, you may need to run it several times)
curl -sfL https://get-kk.kubesphere.io | sh -
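
  • Optionally, verify that the downloaded binary works (a quick check; kk's version subcommand prints the build version):

# Confirm the kk binary runs
./kk version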

1.2 Create the manifest File

Before KubeKey v3.1.0, the manifest file had to be written by hand from a template. Now a manifest template can be generated automatically with KubeKey's create manifest command.

  1. The parameters supported by create manifest are as follows:
$ ./kk create manifest --help
Create an offline installation package configuration file

Usage:
  kk create manifest [flags]

Flags:
      --arch stringArray         Specify a supported arch (default [amd64])
      --debug                    Print detailed information
  -f, --filename string          Specify a manifest file path
  -h, --help                     help for manifest
      --ignore-err               Ignore the error message, remove the host which reported error and force to continue
      --kubeconfig string        Specify a kubeconfig file
      --name string              Specify a name of manifest object (default "sample")
      --namespace string         KubeKey namespace to use (default "kubekey-system")
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-registry            Specify a supported registry components
  -y, --yes                      Skip confirm check
  2. Official example (multiple Kubernetes versions, multiple architectures):
# Example: create a manifest file covering kubernetes v1.24.17 and v1.25.16 for the amd64 and arm64 CPU architectures.
./kk create manifest --with-kubernetes v1.24.17,v1.25.16 --arch amd64 --arch arm64
  3. Create a manifest file for Kubernetes v1.28.8 on amd64:
./kk create manifest --with-kubernetes v1.28.8 --arch amd64
  4. The generated configuration file looks like this:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    osImage: Ubuntu 20.04.6 LTS
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.28.8
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.3
    crictl:
      version: v1.29.0

  images:
  - docker.io/kubesphere/pause:3.9
  - docker.io/kubesphere/kube-apiserver:v1.28.8
  - docker.io/kubesphere/kube-controller-manager:v1.28.8
  - docker.io/kubesphere/kube-scheduler:v1.28.8
  - docker.io/kubesphere/kube-proxy:v1.28.8
  - docker.io/coredns/coredns:1.9.3
  - docker.io/kubesphere/k8s-dns-node-cache:1.22.20
  - docker.io/calico/kube-controllers:v3.27.3
  - docker.io/calico/cni:v3.27.3
  - docker.io/calico/node:v3.27.3
  - docker.io/calico/pod2daemon-flexvol:v3.27.3
  - docker.io/calico/typha:v3.27.3
  - docker.io/flannel/flannel:v0.21.3
  - docker.io/flannel/flannel-cni-plugin:v1.1.2
  - docker.io/cilium/cilium:v1.15.3
  - docker.io/cilium/operator-generic:v1.15.3
  - docker.io/hybridnetdev/hybridnet:v0.8.6
  - docker.io/kubeovn/kube-ovn:v1.10.10
  - docker.io/kubesphere/multus-cni:v3.8
  - docker.io/openebs/provisioner-localpv:3.3.0
  - docker.io/openebs/linux-utils:3.3.0
  - docker.io/library/haproxy:2.9.6-alpine
  - docker.io/plndr/kube-vip:v0.7.2
  - docker.io/kubesphere/kata-deploy:stable
  - docker.io/kubesphere/node-feature-discovery:v0.10.0
  registry:
    auths: {}
  5. Modify the configuration file

The manifest file generated by KubeKey v3.1.1 is intended for deploying a pure Kubernetes cluster on Ubuntu and does not include the images for deploying KubeSphere. We therefore need to merge in the image list required by KubeSphere to produce a complete manifest file.

  6. Download the KubeSphere image list:
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/images-list.txt

There are 120 images in the complete list. They are not reproduced here for reasons of space; see the complete manifest file below.
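The merge itself is mechanical. Below is a minimal sketch that rewrites each entry in images-list.txt to the registry prefix used later in this article and formats it as a manifest images: item. It assumes the upstream file format ('#'-prefixed comment lines, one image per line); review the output by hand before pasting it under the manifest's images: key.

# Sketch: turn images-list.txt into manifest "images:" entries with the Aliyun mirror prefix
grep -Ev '^(#|$)' images-list.txt \
  | sed -E 's#^([^/]+/)?#registry.cn-beijing.aliyuncs.com/kubesphereio/#; s/^/  - /' \
  > images-section.txt
# Paste the contents of images-section.txt under the "images:" key of the manifest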

1.3 Obtain the OS Dependency Packages

The OS used in this lab is x86_64 openEuler 22.03 LTS SP3, so we need to build the OS dependency package ISO, openEuler-22.03-rpms-amd64.iso, required for installing Kubernetes ourselves. For other operating systems, download the package from the KubeKey releases page.

Personally, I recommend using the openEuler installation ISO to build a complete offline software repository in the offline environment. That way, OS dependency packages are no longer a concern when installing the offline cluster with KubeKey.
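A minimal sketch of that approach on the RPM-based openEuler (the mount point, ISO path, and filename are examples; adjust them to your media):

# Mount the openEuler installation ISO and expose it as a local dnf repository
mkdir -p /mnt/iso
mount -o loop /data/iso/openEuler-22.03-LTS-SP3-x86_64-dvd.iso /mnt/iso
cat > /etc/yum.repos.d/local-iso.repo <<'EOF'
[local-iso]
name=openEuler 22.03 LTS SP3 local ISO
baseurl=file:///mnt/iso
enabled=1
gpgcheck=0
EOF
dnf clean all && dnf makecache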

1.4 Generate the manifest File

Using the files and information above, generate the final manifest file ksp-v3.4.1-v1.28-manifest.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: openEuler
    version: "22.03"
    osImage: openEuler 22.03 (LTS-SP3)
    repository:
      iso:
        localPath: "/data/kubekey/openEuler-22.03-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.28.8
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.3
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.28.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.28.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.28.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.28.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.3-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-dashboards:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.50.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.50
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}

Notes on the manifest changes

  • Enable the harbor and docker-compose entries, which are needed later when pushing images to the Harbor registry built by KubeKey.
  • The images in the default manifest template are pulled from docker.io; replace that prefix with registry.cn-beijing.aliyuncs.com/kubesphereio.
  • In the operatingSystems section, remove the default ubuntu entry and add an openEuler entry.

1.5 Export the Artifact

Using the generated manifest, run the following command to build the artifact.

export KKZONE=cn
./kk artifact export -m ksp-v3.4.1-v1.28-manifest.yaml -o ksp-v3.4.1-v1.28-artifact.tar.gz

When it runs correctly, the output ends as follows (only the final lines are shown for brevity):

....
06:05:28 CST success: [LocalHost]
06:05:28 CST [ChownOutputModule] Chown output file
06:05:28 CST success: [LocalHost]
06:05:28 CST [ChownWorkerModule] Chown ./kubekey dir
06:05:28 CST success: [LocalHost]
06:05:28 CST Pipeline[ArtifactExportPipeline] execute successfully

After the artifact is built, check its size (with the full image set the artifact reaches a hefty 13 GB; for production, trim the image list selectively).

$ ls -lh ksp-v3.4.1-v1.28-artifact.tar.gz
-rw-r--r-- 1 root root 13G May 20 06:05 ksp-v3.4.1-v1.28-artifact.tar.gz

1.6 Package KubeKey for the Offline Environment

Pack the KubeKey tool into a tarball as well, to make it easy to copy to the offline node.

$ tar zcvf kubekey-offline-v3.4.1-v1.28.tar.gz kk kubekey-v3.1.1-linux-amd64.tar.gz
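
Optionally, before the transfer, record checksums so the copy into the offline network can be verified (a small convention of my own, not required by KubeKey):

# Record checksums on the internet-connected node
sha256sum kubekey-offline-v3.4.1-v1.28.tar.gz ksp-v3.4.1-v1.28-artifact.tar.gz > offline-packages.sha256
# After copying, verify on the offline node:
# sha256sum -c offline-packages.sha256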

2. Prepare the Prerequisites for the Offline Deployment of KubeSphere and Kubernetes

Note: unless otherwise stated, the following operations are performed on the deploy (Registry) node in the offline environment.

2.1 Upload the Offline Deployment Packages to the Deploy Node

Upload the following offline deployment packages to the /data/ directory of the deploy (Registry) node in the offline environment (adjust the path to your situation).

  • KubeKey: kubekey-offline-v3.4.1-v1.28.tar.gz
  • Artifact: ksp-v3.4.1-v1.28-artifact.tar.gz

Run the following commands to extract KubeKey:

# Create the data directory for the offline resources
mkdir /data/kubekey
tar xvf /data/kubekey-offline-v3.4.1-v1.28.tar.gz -C /data/kubekey
mv ksp-v3.4.1-v1.28-artifact.tar.gz /data/kubekey

2.2 Create the Offline Cluster Configuration File

  • Run the following command to create the offline cluster configuration file:
cd /data/kubekey
./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.28.8 -f ksp-v341-v1228-offline.yaml

After the command succeeds, a configuration file named ksp-v341-v1228-offline.yaml is generated in the current directory.

2.3 Modify the Cluster Configuration

The kind: Cluster section of the offline cluster configuration file describes the Kubernetes cluster deployment. The example in this article uses 3 nodes that serve simultaneously as control-plane, etcd, and worker nodes.

Run the following command to edit the offline cluster configuration file ksp-v341-v1228-offline.yaml:

vi ksp-v341-v1228-offline.yaml

Modify the hosts, roleGroups, and related settings in the kind: Cluster section as described below.

  • hosts: specify each node's IP, ssh user, ssh password, and ssh port. The example demonstrates how to configure the ssh port, and also adds the configuration for a Registry node.
  • roleGroups: designate 3 etcd and control-plane nodes, and reuse the same machines as the 3 worker nodes.
  • The registry host group must be specified as the registry deployment node (to meet readers' needs, this article uses KubeKey to deploy a Harbor image registry automatically; an existing Harbor can also be used, in which case this setting can be omitted).
  • internalLoadbalancer: enable the built-in HAProxy load balancer.
  • system.rpms: new setting; rpm packages to install at deployment time (openEuler does not ship the tar package by default, and it must be installed in advance).
  • domain: a custom domain, opsxlab.cn; keep the default value if you have no special requirement.
  • containerManager: containerd is used.
  • storage.openebs.basePath: new setting; sets the default OpenEBS storage path to /data/openebs/local.
  • registry: the type must be set to harbor, otherwise docker registry is installed by default.

The complete modified example looks like this:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ksp-master-1, address: 192.168.9.91, internalAddress: 192.168.9.91, port: 22, user: root, password: "OpsXlab@2024"}
  - {name: ksp-master-2, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "OpsXlab@2024"}
  - {name: ksp-master-3, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "OpsXlab@2024"}
  - {name: ksp-registry, address: 192.168.9.90, internalAddress: 192.168.9.90, user: root, password: "OpsXlab@2024"}
  roleGroups:
    etcd:
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
    control-plane: 
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
    worker:
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
    registry:
    - ksp-registry
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.opsxlab.cn
    address: ""
    port: 6443
  system:
    rpms:
      - tar
  kubernetes:
    version: v1.28.8
    clusterName: opsxlab.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /data/openebs/local # added (not present by default); base path of the local PV provisioner
  registry:
    # To have kk deploy Harbor, set this parameter to harbor. If it is not set and kk is used to create an image registry, docker registry is used by default.
    type: harbor
    # Notes:
    # 1. When using a Harbor deployed by kk or another custom registry, set the auths for that registry; when using kk to create a docker registry, this parameter is not needed.
    # 2. The default address of a Harbor deployed by kk is dockerhub.kubekey.local; if you change it, make sure it matches the value of the privateRegistry field.
    auths:
      "registry.opsxlab.cn":
        username: admin
        password: Harbor12345
        certsPath: "/etc/docker/certs.d/registry.opsxlab.cn"
    # The private registry used for the cluster deployment
    privateRegistry: "registry.opsxlab.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

2.4 Modify the ClusterConfiguration Section

The kind: ClusterConfiguration section of the offline cluster configuration file deploys KubeSphere and its related components.

To verify the completeness of the offline deployment, this article enables all pluggable components except KubeEdge and GateKeeper.

Continue editing the offline cluster configuration file ksp-v341-v1228-offline.yaml and modify the kind: ClusterConfiguration part to enable the pluggable components, as described below.

  • Enable etcd monitoring:
etcd:
    monitoring: true # change "false" to "true"
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  • Enable the KubeSphere alerting system:
alerting:
  enabled: true # change "false" to "true"
  • Enable KubeSphere audit logs:
auditing:
  enabled: true # change "false" to "true"
  • Enable the KubeSphere DevOps system:
devops:
  enabled: true # change "false" to "true"
  • Enable the KubeSphere events system:
events:
  enabled: true # change "false" to "true"
  • Enable the KubeSphere logging system (opensearch is enabled by default since v3.4.0):
logging:
  enabled: true # change "false" to "true"
  • Enable Metrics Server:
metrics_server:
  enabled: true # change "false" to "true"

Note: KubeSphere supports the Horizontal Pod Autoscaler (HPA) for scaling the Pods of workloads. In KubeSphere, Metrics Server controls whether HPA is enabled.
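
For reference, a minimal HPA that depends on Metrics Server (a generic example, not part of the KubeSphere configuration; the Deployment name demo is hypothetical):

# Minimal HPA example: scales the "demo" Deployment on CPU utilization
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
EOF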

  • Enable network policies, the Pod IP pool, and the service topology graph (sorted by name, matching the order of the configuration parameters):
network:
  networkpolicy:
    enabled: true # change "false" to "true"
  ippool:
    type: calico # change "none" to "calico"
  topology:
    type: weave-scope # change "none" to "weave-scope"
  • Enable the App Store:
openpitrix:
  store:
    enabled: true # change "false" to "true"
  • Enable KubeSphere Service Mesh (Istio):
servicemesh:
  enabled: true # change "false" to "true"
  • After changing all of the parameters above, one more parameter must be added (KubeKey 2.x does not have this problem; every 3.x release up to and including v3.1.1 does. Without it, the KubeSphere deployment fails with a namespace mismatch):
spec:
  namespace_override: kubesphereio
  ......

With the steps above, we have finished modifying the offline cluster configuration file ksp-v341-v1228-offline.yaml. Space does not permit showing the complete file here, so please check your changes carefully against the notes above.

3. Install and Configure Harbor

To validate KubeKey's ability to deploy the Harbor service offline, this article uses KubeKey to deploy the Harbor image registry. For production, building your own Harbor in advance is recommended.

Note: unless otherwise stated, the following operations are performed on the deploy (Registry) node in the offline environment.

3.1 Install Harbor

Run the following command to install the Harbor image registry:

./kk init registry -f ksp-v341-v1228-offline.yaml -a ksp-v3.4.1-v1.28-artifact.tar.gz
  • Check the installed images:
$ docker images
REPOSITORY                      TAG       IMAGE ID       CREATED        SIZE
goharbor/harbor-exporter        v2.10.1   3b6d345aa4e6   2 months ago   106MB
goharbor/redis-photon           v2.10.1   9a994e1173fc   2 months ago   164MB
goharbor/trivy-adapter-photon   v2.10.1   108e2a78abf7   2 months ago   502MB
goharbor/harbor-registryctl     v2.10.1   da66353bc245   2 months ago   149MB
goharbor/registry-photon        v2.10.1   4cb6a644ceeb   2 months ago   83.5MB
goharbor/nginx-photon           v2.10.1   180accc8c6be   2 months ago   153MB
goharbor/harbor-log             v2.10.1   2215e7f0f2e7   2 months ago   163MB
goharbor/harbor-jobservice      v2.10.1   ab688ea50ad8   2 months ago   140MB
goharbor/harbor-core            v2.10.1   de73267578a3   2 months ago   169MB
goharbor/harbor-portal          v2.10.1   c75282ddf5df   2 months ago   162MB
goharbor/harbor-db              v2.10.1   db2ff40c7b27   2 months ago   269MB
goharbor/prepare                v2.10.1   92197f61701a   2 months ago   207MB
  • Check the Harbor service status:
$ cd /opt/harbor/
$ docker-compose ps -a
WARN[0000] /opt/harbor/docker-compose.yml: `version` is obsolete
NAME                IMAGE                                   COMMAND                  SERVICE         CREATED         STATUS                   PORTS
harbor-core         goharbor/harbor-core:v2.10.1            "/harbor/entrypoint.…"   core            4 minutes ago   Up 4 minutes (healthy)
harbor-db           goharbor/harbor-db:v2.10.1              "/docker-entrypoint.…"   postgresql      4 minutes ago   Up 4 minutes (healthy)
harbor-jobservice   goharbor/harbor-jobservice:v2.10.1      "/harbor/entrypoint.…"   jobservice      4 minutes ago   Up 4 minutes (healthy)
harbor-log          goharbor/harbor-log:v2.10.1             "/bin/sh -c /usr/loc…"   log             4 minutes ago   Up 4 minutes (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       goharbor/harbor-portal:v2.10.1          "nginx -g 'daemon of…"   portal          4 minutes ago   Up 4 minutes (healthy)
nginx               goharbor/nginx-photon:v2.10.1           "nginx -g 'daemon of…"   proxy           4 minutes ago   Up 4 minutes (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp
redis               goharbor/redis-photon:v2.10.1           "redis-server /etc/r…"   redis           4 minutes ago   Up 4 minutes (healthy)
registry            goharbor/registry-photon:v2.10.1        "/home/harbor/entryp…"   registry        4 minutes ago   Up 4 minutes (healthy)
registryctl         goharbor/harbor-registryctl:v2.10.1     "/home/harbor/start.…"   registryctl     4 minutes ago   Up 4 minutes (healthy)
trivy-adapter       goharbor/trivy-adapter-photon:v2.10.1   "/home/scanner/entry…"   trivy-adapter   4 minutes ago   Up 4 minutes (healthy)
  • Check /etc/hosts (make sure the custom domains are in place):
$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# kubekey hosts BEGIN
192.168.9.90  ksp-registry.opsxlab.cn ksp-registry
192.168.9.91  ksp-master-1.opsxlab.cn ksp-master-1
192.168.9.92  ksp-master-2.opsxlab.cn ksp-master-2
192.168.9.93  ksp-master-3.opsxlab.cn ksp-master-3
192.168.9.90  registry.opsxlab.cn
192.168.9.91  lb.opsxlab.cn
# kubekey hosts END
  • Check the domain configured for Harbor (make sure the custom domain is in place):
$ cat /opt/harbor/harbor.yml | grep hostname:
hostname: registry.opsxlab.cn
  • Check whether Docker is configured with the private certificates (make sure the custom domain and certificates are in place):
$ ll /etc/docker/certs.d/registry.opsxlab.cn/
total 12
-rw-r--r--. 1 root root 1103 May 20 6:01 ca.crt
-rw-r--r--. 1 root root 1253 May 20 6:01 registry.opsxlab.cn.cert
-rw-------. 1 root root 1679 May 20 6:01 registry.opsxlab.cn.key
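  • As a final check, log in to the registry from the deploy node (assuming Docker trusts the CA shown above; the password is the default noted in section 3.2):

# Verify that the registry accepts authenticated connections
docker login registry.opsxlab.cn -u admin -p Harbor12345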

3.2 Create a Project in Harbor

The defaults of the Harbor instance installed by KubeKey are as follows:

  • Login credentials: administrator account admin, password Harbor12345 (must be changed for production).

Create the project with a shell script: vim create_project_harbor.sh

#!/usr/bin/env bash

# Harbor registry address (use the domain name; the default is https://dockerhub.kubekey.local)
url="https://registry.opsxlab.cn"

# Default user and password for the Harbor registry (change them for production)
user="admin"
passwd="Harbor12345"

# List of project names to create. Under the image naming rules of our offline package, only one project, kubesphereio, is actually needed; adjust the list to the image naming rules of your own offline registry.
harbor_projects=(kubesphereio)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -k -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done
  • Run the script to create the project:
sh create_project_harbor.sh
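
  • To confirm the project exists, you can query the Harbor v2 REST API (a quick check; the grep simply pulls the name fields out of the JSON response):

# List projects through the Harbor API
curl -sk -u "admin:Harbor12345" "https://registry.opsxlab.cn/api/v2.0/projects?page_size=100" \
  | grep -o '"name":"[^"]*"'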

3.3 Push the Offline Images to the Harbor Registry

Push the prepared offline images to the Harbor registry. This step is optional, because images are pushed by default when the cluster is created (this article skips that with a flag). To improve the deployment success rate, pushing them first is recommended.

  • Push the offline images:
./kk artifact image push -f ksp-v341-v1228-offline.yaml -a ksp-v3.4.1-v1.28-artifact.tar.gz
  • A successful run ends like this (truncated for brevity):
......
7:32:04 CST success: [LocalHost]
7:32:04 CST [ChownWorkerModule] Chown ./kubekey dir
7:32:04 CST success: [LocalHost]
7:32:04 CST Pipeline[ArtifactImagesPushPipeline] execute successfully
  • View the project and image repositories on the Harbor management page (configure name resolution for the domain on your own computer first):

(Screenshot: Harbor project list, ksp-offline-harbor-v210-projects)

The project management page shows that 124 image repositories were created in total.
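
The same count can be read from the API instead of the UI (assuming Harbor's list endpoints return the usual X-Total-Count header):

# Read the repository count of the kubesphereio project from the response headers
curl -sk -u "admin:Harbor12345" -D - -o /dev/null \
  "https://registry.opsxlab.cn/api/v2.0/projects/kubesphereio/repositories?page_size=1" \
  | grep -i 'x-total-count'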

4. Install the KubeSphere and Kubernetes Cluster

Note: unless otherwise stated, the following operations are performed on the deploy (Registry) node in the offline environment.

4.1 Install the KubeSphere and Kubernetes Cluster

Run the following command to install the KubeSphere and Kubernetes cluster:

./kk create cluster -f ksp-v341-v1228-offline.yaml -a ksp-v3.4.1-v1.28-artifact.tar.gz --with-packages --skip-push-images

Parameter description:

  • --with-packages: install the OS dependencies.
  • --skip-push-images: skip pushing images, since the images were already pushed to the private registry earlier.

Special notes:

  • Because the logging component is enabled in this installation, you must intervene manually as described in Problem 2 below, otherwise the installation will fail.

  • Once the familiar progress bar >>---> appears, you can open another terminal and watch progress with kubectl get pod -A or kubectl get pod -A | grep -v Running, so you can step in promptly if anything looks wrong.

  • You can also follow the detailed deployment logs and error messages with the command below.

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Deployment takes roughly 10 to 30 minutes, depending on network speed, machine specs, and the number of enabled components.

When the deployment finishes, you should see output like the following in the terminal. Along with the completion message, the output shows the default administrator user and password for logging in to KubeSphere.

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.91:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-05-19 14:49:40
#####################################################
14:49:42 CST skipped: [ksp-master-3]
14:49:42 CST skipped: [ksp-master-2]
14:49:42 CST success: [ksp-master-1]
14:49:42 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

4.2 Verify the Deployment

Open the web console at http://{IP}:30880, log in with the default account and password admin/P@88w0rd, and do a quick sanity check of the deployment (a command-line spot check follows the list below):

  • Check the cluster node status

  • Check the system component status

  • Check the system monitoring status
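
The same checks can be done from the command line on any control plane node (kubectl top requires the Metrics Server enabled earlier):

# Node status and versions
kubectl get nodes -o wide
# Anything that is not yet Running
kubectl get pods -A | grep -v Running
# Basic resource metrics (needs Metrics Server)
kubectl top nodes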

5. Common Problems

5.1 Problem 1

  • Symptom:
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kube-system/metrics-server-5d65c798b8-m9tbj to ksp-master-3
  Normal   Pulling    16m (x4 over 18m)     kubelet            Pulling image "registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2"
  Warning  Failed     16m (x4 over 18m)     kubelet            Failed to pull image "registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2": rpc error: code = NotFound desc = failed to pull and unpack image "registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2": failed to resolve reference "registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2": registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2: not found
  Warning  Failed     16m (x4 over 18m)     kubelet            Error: ErrImagePull
  Warning  Failed     16m (x6 over 17m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m57s (x64 over 17m)  kubelet            Back-off pulling image "registry.opsxlab.cn/kubesphere/metrics-server:v0.4.2"
  • Solution

Add the parameter namespace_override: kubesphereio to the spec section of the kind: ClusterConfiguration block in the offline cluster configuration file.

5.2 Problem 2

  • Symptom:
# kubectl describe pod opensearch-cluster-data-0 -n kubesphere-logging-system
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned kubesphere-logging-system/opensearch-cluster-data-0 to ksp-master-2
  Warning  Failed     2m17s                kubelet            Failed to pull image "busybox:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:latest": failed to resolve reference "docker.io/library/busybox:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/busybox/manifests/latest": dial tcp: lookup registry-1.docker.io on 114.114.114.114:53: read udp 192.168.9.92:49491->114.114.114.114:53: i/o timeout
  Warning  Failed     85s                  kubelet            Failed to pull image "busybox:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:latest": failed to resolve reference "docker.io/library/busybox:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/busybox/manifests/latest": dial tcp: lookup registry-1.docker.io on 114.114.114.114:53: read udp 192.168.9.92:57815->114.114.114.114:53: i/o timeout
  Normal   Pulling    56s (x3 over 2m57s)  kubelet            Pulling image "busybox:latest"
  Warning  Failed     15s (x3 over 2m17s)  kubelet            Error: ErrImagePull
  Warning  Failed     15s                  kubelet            Failed to pull image "busybox:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:latest": failed to resolve reference "docker.io/library/busybox:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/busybox/manifests/latest": dial tcp: lookup registry-1.docker.io on 114.114.114.114:53: read udp 192.168.9.92:37639->114.114.114.114:53: i/o timeout
  Normal   BackOff    0s (x3 over 2m16s)   kubelet            Back-off pulling image "busybox:latest"
  Warning  Failed     0s (x3 over 2m16s)   kubelet            Error: ImagePullBackOff
  • Solution

The official configuration hard-codes the busybox image address, so for now it must be corrected manually.

# Check the StatefulSets
$ kubectl get sts -n kubesphere-logging-system
NAME                        READY   AGE
opensearch-cluster-data     0/2     7m42s
opensearch-cluster-master   0/1     7m40s

# Change the busybox image used by the StatefulSets to the local image (edit details omitted)
kubectl edit sts opensearch-cluster-data -n kubesphere-logging-system
kubectl edit sts opensearch-cluster-master -n kubesphere-logging-system

# The image value used in this article after the edit (adjust the registry prefix to your own environment)
registry.opsxlab.cn/kubesphereio/busybox:1.31.1
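
For a non-interactive alternative, the image can also be patched in place. The sketch below assumes that, as in the default opensearch chart, busybox is referenced by the first initContainer; confirm the exact path with kubectl get sts -o yaml before applying it:

# Patch the busybox image of both StatefulSets in one go
for sts in opensearch-cluster-data opensearch-cluster-master; do
  kubectl -n kubesphere-logging-system patch sts "$sts" --type=json \
    -p='[{"op":"replace","path":"/spec/template/spec/initContainers/0/image","value":"registry.opsxlab.cn/kubesphereio/busybox:1.31.1"}]'
done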

Although this article is based on openEuler 22.03 LTS SP3, it is equally useful as a reference for other operating systems such as CentOS and Ubuntu.

Disclaimer:

  • The author's knowledge is limited; although the content has been verified and checked many times and every effort has been made to ensure accuracy, omissions may remain. Feedback from experts in the field is welcome.
  • The content of this article has only been validated in a test environment. Readers may learn from and borrow it, but using it directly in production is strictly prohibited; the author accepts no responsibility for any problems that result.
