KubeSphere 3.3.0 Offline Installation Tutorial

Published by KubeSphere on 2022-07-19
Author: Lao Z, operations architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., and a cloud-native enthusiast. He currently focuses on cloud-native operations, with a technology stack covering Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.

KubeKey is an open-source, lightweight tool for deploying K8s clusters.

It provides a flexible, fast, and convenient way to install Kubernetes/K3s only, or to install K8s/K3s together with KubeSphere and other cloud-native add-ons. It is also an effective tool for scaling and upgrading clusters.

KubeKey v2.1.0 introduced the concepts of the manifest and the artifact, giving users a solution for deploying K8s clusters offline.

A manifest is a text file that describes the current K8s cluster information and defines what the artifact should contain.

Previously, users had to prepare the deployment tooling, image tar packages, and other related binaries themselves, and every user needed a different K8s version and a different set of images. Now with KubeKey, a user only has to define what the offline cluster environment needs in a manifest file, then export an artifact from that manifest to finish the preparation. For the offline deployment itself, only KubeKey and the artifact are required to quickly and easily set up an image registry and a K8s cluster in the target environment.

KubeKey can generate a manifest file in two ways:

  • Use an existing running cluster as the source to generate the manifest. This is the officially recommended approach; see the offline deployment documentation on the KubeSphere website for details.
  • Write the manifest by hand, based on the template file.

The advantage of the first approach is that it produces a 1:1 copy of the running environment, but it requires deploying a cluster in advance, which is less flexible and not something everyone can do.

Therefore, this article follows the official offline documentation but takes the hand-written manifest approach to accomplish an offline installation and deployment.

What this article covers

  • Level: beginner
  • Understand the concepts of the manifest and the artifact
  • Learn how to write a manifest file
  • Build an artifact from a manifest
  • Deploy KubeSphere and Kubernetes offline

Demo server configuration

| Hostname | IP | CPU | Memory (GB) | System disk (GB) | Data disk (GB) | Purpose |
| --- | --- | --- | --- | --- | --- | --- |
| zdevops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible ops control node |
| ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| es-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200 | ElasticSearch |
| harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
| Total: 8 hosts | | 22 | 84 | 320 | 2200 | |

Software versions in the demo environment

  • Operating system: CentOS-7.9-x86_64
  • KubeSphere: 3.3.0
  • Kubernetes: 1.24.1
  • Kubekey: v2.2.1
  • Ansible: 2.8.20
  • Harbor: 2.5.1

Building the offline deployment resources

Download KubeKey

# Run on the zdevops-master ops/dev server

# Use the China download zone (for when access to GitHub is restricted)
$ export KKZONE=cn

# Download KubeKey
$ mkdir /data/kubekey
$ cd /data/kubekey/
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

Get the manifest template

See https://github.com/kubesphere...

There are two reference examples, a simple one and a complete one. The simple one is sufficient here.

Get the ks-installer images-list

$ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt

The image list in that file pulls from Docker Hub and the other public registries where the components are hosted. Inside China, it is recommended to uniformly change the image prefix to registry.cn-beijing.aliyuncs.com/kubesphereio.

The complete, modified image list is shown in the manifest file below.

Note that of the images in example-images, only busybox is kept; the others are not used in this article.
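The prefix rewrite can be scripted instead of done by hand. A minimal sketch, assuming entries in images-list.txt look like docker.io/<namespace>/<name>:<tag> and section headers start with `##` (the rewrite_prefix helper name is ours, not part of any official tooling):

```shell
# Rewrite image prefixes for the Aliyun mirror namespace used in this article.
# Assumption: entries look like docker.io/<namespace>/<name>:<tag>;
# lines starting with '##' are section headers and are dropped.
rewrite_prefix() {
  sed -e '/^##/d' \
      -e 's#^docker\.io/[^/]*/#registry.cn-beijing.aliyuncs.com/kubesphereio/#'
}

# Example: feed two sample entries through the filter
printf '%s\n' \
  '##k8s-images' \
  'docker.io/kubesphere/kube-apiserver:v1.24.1' \
  'docker.io/calico/cni:v3.20.0' | rewrite_prefix
```

Run it against the real file with `rewrite_prefix < images-list.txt` and paste the result into the manifest's `images` section.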

Get the OS dependency packages

$ wget https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso

Put this ISO file in the /data/kubekey directory on the server used to build the offline resources.

Generate the manifest file

Based on the files and information above, produce the final manifest.yaml.

Name it ks-v3.3.0-manifest.yaml.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: "/data/kubekey/centos7-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.1
  components:
    helm: 
      version: v3.6.3
    cni: 
      version: v0.9.1
    etcd: 
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl: 
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}
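Before exporting, it can be worth a quick sanity check that the manifest contains the number of images you expect. A small sketch (the count_images helper is illustrative; the file name follows the one used above):

```shell
# Count image entries in the manifest: list items under the
# Aliyun mirror prefix used throughout this article.
count_images() {
  grep -c '^  - registry\.cn-beijing\.aliyuncs\.com/' "$1"
}

# Usage:  count_images ks-v3.3.0-manifest.yaml
```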

Notes on the manifest changes

  • The harbor and docker-compose components are enabled, for later use when KubeKey sets up a self-hosted Harbor registry to push images to.
  • The image list in a freshly generated manifest pulls from docker.io; the prefix is replaced with registry.cn-beijing.aliyuncs.com/kubesphereio.
  • If the exported artifact should include OS dependency packages (e.g. conntrack, chrony), you can either set .repository.iso.url under the operatingSystems element to the download URL of the ISO dependency file, or (as done here) set localPath to where a pre-downloaded ISO is stored and leave the url field empty.
  • You can download the ISO file from https://github.com/kubesphere...

Export the artifact

$ export KKZONE=cn

$ ./kk artifact export -m ks-v3.3.0-manifest.yaml -o kubesphere-v3.3.0-artifact.tar.gz

Notes on the artifact

  • An artifact is a tgz package, exported from a given manifest, that contains the image tar packages and the related binaries.
  • An artifact can be specified in the KubeKey commands that initialize the image registry, create a cluster, add nodes, and upgrade a cluster; KubeKey automatically unpacks it and uses the unpacked files directly when executing the command.
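Before shipping the artifact into the air-gapped environment, a quick look inside it can catch a broken export early. A small sketch (the artifact_summary helper is ours, not part of KubeKey; it works on any .tar.gz):

```shell
# artifact_summary: print the entry count and the first few paths
# inside a .tar.gz archive, without extracting it.
artifact_summary() {
  local tgz="$1"
  echo "entries: $(tar -tzf "$tgz" | wc -l)"
  tar -tzf "$tgz" | head -n 5
}

# Usage on the build server:
#   artifact_summary kubesphere-v3.3.0-artifact.tar.gz
```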

Export Kubekey

$ tar zcvf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz

K8s server initialization

This section performs the initial configuration of the K8s servers in the offline environment.

Ansible hosts configuration

[k8s]
ks-k8s-master-0 ansible_ssh_host=192.168.9.91  host_name=ks-k8s-master-0
ks-k8s-master-1 ansible_ssh_host=192.168.9.92  host_name=ks-k8s-master-1
ks-k8s-master-2 ansible_ssh_host=192.168.9.93  host_name=ks-k8s-master-2

[es]
es-node-0 ansible_ssh_host=192.168.9.95 host_name=es-node-0
es-node-1 ansible_ssh_host=192.168.9.96 host_name=es-node-1
es-node-2 ansible_ssh_host=192.168.9.97 host_name=es-node-2

harbor ansible_ssh_host=192.168.9.89 host_name=harbor

[servers:children]
k8s
es

[servers:vars]
ansible_connection=paramiko
ansible_ssh_user=root
[email protected]

Verify server connectivity

# Use ansible to verify connectivity to the servers

$ cd /data/ansible/ansible-zdevops/inventories/dev/
$ source /opt/ansible2.8/bin/activate
$ ansible -m ping all

Initialize the server configuration

# Use ansible-playbook to initialize the server configuration

$ ansible-playbook ../../playbooks/init-base.yaml -l k8s

Mount the data disks

  • Mount the first data disk
# Use ansible-playbook to initialize the host data disks
# Note: -e data_disk_path="/data" sets the mount point, used to store Docker container data

$ ansible-playbook ../../playbooks/init-disk.yaml -e data_disk_path="/data" -l k8s
  • Verify the mount
# Use ansible to verify that the data disk is formatted and mounted
$ ansible k8s -m shell -a 'df -h'

# Use ansible to verify that the data disk is configured to mount automatically

$ ansible k8s -m shell -a 'tail -1 /etc/fstab'
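Tailing /etc/fstab only shows the last entry; checking for the specific mount point is more robust. A small helper sketch (the has_automount name and sample entry are illustrative):

```shell
# has_automount: succeed if an fstab file contains an entry for the
# given mount point (2nd field), i.e. it will be mounted on boot.
has_automount() {
  local fstab="$1" mountpoint="$2"
  awk -v mp="$mountpoint" '$1 !~ /^#/ && $2 == mp { found=1 } END { exit !found }' "$fstab"
}

# Usage on a node:  has_automount /etc/fstab /data && echo "automount OK"
```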

Install the K8s system dependency packages

# Use ansible-playbook to install the Kubernetes system dependency packages
# The playbook has a switch for enabling GlusterFS storage, on by default; set the parameter to false if you don't need it

$ ansible-playbook ../../playbooks/deploy-kubesphere.yaml -e k8s_storage_glusterfs=false -l k8s

Install the cluster offline

Transfer the offline deployment resources to the deploy node

Transfer the following offline deployment resources to the /data/kubekey directory of the deploy node (usually the first master node):

  • Kubekey: kubekey-v2.2.1.tar.gz
  • Artifact: kubesphere-v3.3.0-artifact.tar.gz

Then unpack kubekey:

$ cd /data/kubekey
$ tar xvf kubekey-v2.2.1.tar.gz

Create the offline cluster configuration file

  • Create the configuration file
$ ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.24.1 -f config-sample.yaml
  • Edit the configuration file
$ vim config-sample.yaml

Notes on the changes

  • Update the node information to match your actual offline environment.
  • Add the registry information according to your actual setup.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-k8s-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "[email protected]"}
  - {name: ks-k8s-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "[email protected]"}
  - {name: ks-k8s-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "[email protected]"}
  roleGroups:
    etcd:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    control-plane: 
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    worker:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.zdevops.com.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: zdevops.com.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: "harbor"
    auths:
      "registry.zdevops.com.cn":
         username: admin
         password: Harbor12345
    privateRegistry: "registry.zdevops.com.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

# The content below is left unchanged and omitted here

Create the projects in Harbor

This article uses a pre-deployed Harbor to store the images; for the deployment process, see my earlier Harbor installation notes, 基於 KubeSphere 玩轉 k8s-Harbor 安裝手記.

You can also let the kk tool deploy Harbor automatically; see the official offline deployment documentation for details.

  • Download the project creation script template
$ curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
  • Modify the script to match your environment
#!/usr/bin/env bash

# Harbor registry address
url="https://registry.zdevops.com.cn"

# Harbor user
user="admin"

# Harbor user password
passwd="Harbor12345"

# List of projects to create. Normally only kubesphereio is needed;
# two extra entries are included here to keep the variable extensible.
harbor_projects=(library
    kubesphereio
    kubesphere
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done
  • Run the script to create the projects
$ sh create_project_harbor.sh
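If you want to see exactly what the script will send before pointing it at a live Harbor, the request body can be built and printed separately. A dry-run sketch mirroring the payload used in the script above (the build_payload helper is ours, not part of the official script):

```shell
# build_payload: print the "create project" JSON body used by the
# script above for a given project name, marking the project public.
build_payload() {
  printf '{ "project_name": "%s", "public": true}' "$1"
}

# Dry run: show the payloads without calling the API
for project in library kubesphereio kubesphere; do
  build_payload "$project"
  echo
done
```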

Push the offline images to the Harbor registry

Push the prepared offline images to the Harbor registry. This step is optional, since the images are pushed again when the cluster is created, but pushing them first improves the odds of the deployment succeeding in one go.

$ ./kk artifact image push -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz

Create the cluster and install the OS dependencies

$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz --with-packages

Parameters

  • config-sample.yaml: the configuration file for the offline cluster.
  • kubesphere-v3.3.0-artifact.tar.gz: the artifact tgz containing the images and binaries.
  • --with-packages: required if the operating system dependencies should be installed.

Check the cluster status

$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

When the installation completes successfully, you will see the following output:

**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.91:30880
Account: admin
Password: [email protected]

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-06-30 14:30:19
#####################################################

Log in to the web console

Open http://{IP}:30880 and log in to the KubeSphere web console with the default account and password admin/[email protected] to continue with further configuration.


Summary

Thank you for reading this article to the end. By now you should have picked up the following skills:

  • Understood the concepts of the manifest and the artifact
  • Learned where to obtain manifest and image resources
  • Written a manifest by hand
  • Built an artifact from a manifest
  • Deployed KubeSphere and Kubernetes offline
  • Automated project creation in a Harbor registry
  • A few Ansible usage tips

At this point, we have completed a minimal deployment of KubeSphere and a K8s cluster. But this is only the beginning; there is plenty more configuration and many usage tips to come, so stay tuned...

This article is published to multiple blogging platforms via OpenWrite!