IOMESH Installation

Posted by Etaon on 2023-01-12


Official documentation:

Introduction · Documentation

Also published on my personal site: www.etaon.top

Lab topology:

A single network can be used for testing, but the official recommendation is to separate the IOMesh storage traffic onto its own network, i.e. the 10.234.1.0/24 subnet shown in the diagram below:

The lab uses bare-metal servers with the following configuration:

Component     Model / Spec                                Qty   Notes
CPU           Intel(R) Xeon(R) Silver 4214R @ 2.40GHz     2     Node 1, Node 3
CPU           Intel(R) Xeon(R) Gold 6226R @ 2.90GHz       1     Node 2
Memory        256 GB                                      3     Node 1 / Node 2 / Node 3
SSD           1.6 TB NVMe SSD                             2*3   Node 1 / Node 2 / Node 3
HDD           2.4 TB SAS                                  4*3   Node 1 / Node 2 / Node 3
DOM disk      M.2 240 GB                                  2     Node 1 / Node 2 / Node 3
GbE NIC       I350                                        2     Node 1 / Node 2 / Node 3
Storage NIC   10/25G                                      2     Node 1 / Node 2 / Node 3
  • Deploying IOMesh requires at least one cache disk and one partition disk per node.

Installation Steps:

Create a Kubernetes Cluster

This deployment uses Kubernetes 1.24 with containerd as the runtime.
For installation, see Install Kubernetes 1.24 - 路無止境! (etaon.top)

Pre-installation Preparation

Perform the following steps on all worker nodes.

  1. Install open-iscsi (already present on Ubuntu).
    apt install open-iscsi -y

  2. Edit the iSCSI configuration file.
    sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf

  3. Make sure the iscsi_tcp module is loaded.
    sudo modprobe iscsi_tcp
    sudo bash -c 'echo iscsi_tcp > /etc/modprobe.d/iscsi-tcp.conf'

  4. Start the iscsid service.
    systemctl enable --now iscsid
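Before modifying the real /etc/iscsi/iscsid.conf, the sed edit above can be sanity-checked on a scratch file (a sketch; the /tmp path is arbitrary):

```shell
# Create a scratch copy containing the line the sed expression targets.
printf 'node.startup = automatic\n' > /tmp/iscsid.conf.test
# Apply the same substitution as in the real edit.
sed -i 's/^node.startup = automatic$/node.startup = manual/' /tmp/iscsid.conf.test
# Should now print: node.startup = manual
cat /tmp/iscsid.conf.test
```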

Install IOMesh Offline

Installation documentation:

Install IOMesh · Documentation

  1. Download the offline installation package and upload it to all nodes where IOMesh will be deployed, including the Master node.
# "nodes" here normally means worker nodes, of which at least 3 are required;
# if there are not enough workers and you want the master to join as well, run on the master:
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
  2. Extract the uploaded offline package:
    tar -xvf iomesh-offline-v0.11.1.tgz && cd iomesh-offline

  3. Load the image files. With containerd as the runtime, run:
    ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar

  4. On the Master node (or the API Server access point), generate the IOMesh configuration file:
    ./helm show values charts/iomesh > iomesh.yaml

  5. Edit the iomesh.yaml configuration file, making sure to change dataCIDR to the IOMesh storage network subnet:
...
iomesh:
  chunk:
    dataCIDR: "10.234.1.0/24" # change to your own data network CIDR
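The dataCIDR change can also be applied non-interactively. A sketch, using a scratch copy of the relevant iomesh.yaml snippet in place of the real file; the placeholder CIDR is illustrative:

```shell
# Scratch stand-in for the dataCIDR section of iomesh.yaml.
cat > /tmp/iomesh-snippet.yaml <<'EOF'
iomesh:
  chunk:
    dataCIDR: "192.168.1.0/24"
EOF
# Replace whatever CIDR is present with the storage network subnet.
sed -i 's|dataCIDR: ".*"|dataCIDR: "10.234.1.0/24"|' /tmp/iomesh-snippet.yaml
grep dataCIDR /tmp/iomesh-snippet.yaml
```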

  1. Master 安裝 IOMesh 叢集
./helm install iomesh ./charts/iomesh \
    --create-namespace \
    --namespace iomesh-system \
    --values iomesh.yaml \
    --wait
# Output
NAME: iomesh
LAST DEPLOYED: Tue Dec 27 09:38:56 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

  7. Pod status after a successful install.
    Pod creation process:
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -w
NAME                                                   READY   STATUS    RESTARTS   AGE
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running   0          24s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running   0          24s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running   0          24s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running   0          24s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running   0          24s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running   0          24s
iomesh-hostpath-provisioner-59v28                      1/1     Running   0          24s
iomesh-hostpath-provisioner-79dgh                      1/1     Running   0          24s
iomesh-hostpath-provisioner-cknk8                      1/1     Running   0          24s
iomesh-openebs-ndm-7vdkm                               1/1     Running   0          24s
iomesh-openebs-ndm-cluster-exporter-75f568df84-dvz4g   1/1     Running   0          24s
iomesh-openebs-ndm-hctjc                               1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-f59t5                 1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-l48dj                 1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-sxgjn                 1/1     Running   0          24s
iomesh-openebs-ndm-operator-7d58d8fbc8-xxwdt           1/1     Running   0          24s
iomesh-openebs-ndm-x64pj                               1/1     Running   0          24s
iomesh-zookeeper-0                                     0/1     Running   0          18s
iomesh-zookeeper-operator-f5588b6d7-lrj6n              1/1     Running   0          24s
operator-765dd9678f-95rcw                              1/1     Running   0          24s
operator-765dd9678f-ns2gs                              1/1     Running   0          24s
operator-765dd9678f-q8pwn                              1/1     Running   0          24s
iomesh-zookeeper-0                                     1/1     Running   0          23s
iomesh-zookeeper-1                                     0/1     Pending   0          0s
iomesh-zookeeper-1                                     0/1     Pending   0          1s
iomesh-zookeeper-1                                     0/1     ContainerCreating   0          1s
iomesh-zookeeper-1                                     0/1     ContainerCreating   0          1s
iomesh-zookeeper-1                                     0/1     Running             0          2s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     Error               1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     Error               1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     Error               1 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running             5 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running             5 (2s ago)   54s
iomesh-zookeeper-1                                     1/1     Running             0            26s
iomesh-zookeeper-2                                     0/1     Pending             0            0s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running             5 (2s ago)   55s
iomesh-zookeeper-2                                     0/1     Pending             0            1s
iomesh-zookeeper-2                                     0/1     ContainerCreating   0            1s
iomesh-zookeeper-2                                     0/1     ContainerCreating   0            1s
iomesh-zookeeper-2                                     0/1     Running             0            2s
iomesh-zookeeper-2                                     1/1     Running             0            26s
iomesh-meta-0                                          0/2     Pending             0            0s
iomesh-meta-1                                          0/2     Pending             0            0s
iomesh-meta-2                                          0/2     Pending             0            0s
iomesh-meta-0                                          0/2     Pending             0            1s
iomesh-meta-1                                          0/2     Pending             0            1s
iomesh-meta-1                                          0/2     Init:0/1            0            1s
iomesh-meta-0                                          0/2     Init:0/1            0            1s
iomesh-meta-2                                          0/2     Pending             0            1s
iomesh-meta-2                                          0/2     Init:0/1            0            1s
iomesh-iscsi-redirector-jj2qm                          0/2     Pending             0            0s
iomesh-iscsi-redirector-jj2qm                          0/2     Pending             0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Pending             0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Init:0/1            0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Init:0/1            0            0s
iomesh-chunk-0                                         0/3     Pending             0            0s
iomesh-meta-0                                          0/2     Init:0/1            0            2s
iomesh-meta-1                                          0/2     Init:0/1            0            2s
iomesh-meta-2                                          0/2     Init:0/1            0            2s
iomesh-iscsi-redirector-zlhtk                          0/2     Init:0/1            0            0s
iomesh-iscsi-redirector-jj2qm                          0/2     Init:0/1            0            0s
iomesh-chunk-0                                         0/3     Pending             0            1s
iomesh-meta-1                                          0/2     Init:0/1            0            3s
iomesh-meta-2                                          0/2     PodInitializing     0            3s
iomesh-iscsi-redirector-9crx8                          0/2     PodInitializing     0            1s
iomesh-chunk-0                                         0/3     Init:0/1            0            1s
iomesh-iscsi-redirector-zlhtk                          0/2     PodInitializing     0            2s
iomesh-iscsi-redirector-9crx8                          1/2     Running             0            2s
iomesh-meta-0                                          0/2     PodInitializing     0            4s
iomesh-meta-1                                          0/2     PodInitializing     0            4s
iomesh-meta-2                                          1/2     Running             0            4s
iomesh-meta-1                                          1/2     Running             0            5s
iomesh-iscsi-redirector-jj2qm                          0/2     PodInitializing     0            3s
iomesh-iscsi-redirector-9crx8                          1/2     Error               0            3s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             0            4s
iomesh-meta-0                                          1/2     Running             0            6s
iomesh-iscsi-redirector-zlhtk                          1/2     Error               0            4s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             1 (3s ago)   5s
iomesh-meta-0                                          1/2     Running             0            7s
iomesh-iscsi-redirector-9crx8                          1/2     Running             1 (2s ago)   5s
iomesh-iscsi-redirector-jj2qm                          1/2     Running             0            5s
iomesh-iscsi-redirector-9crx8                          1/2     Error               1 (2s ago)   5s
iomesh-iscsi-redirector-zlhtk                          1/2     Error               1 (3s ago)   5s
iomesh-iscsi-redirector-jj2qm                          1/2     Error               0            6s
iomesh-iscsi-redirector-9crx8                          1/2     CrashLoopBackOff    1 (1s ago)   6s
iomesh-iscsi-redirector-zlhtk                          1/2     CrashLoopBackOff    1 (2s ago)   6s
iomesh-chunk-0                                         0/3     PodInitializing     0            7s
iomesh-chunk-0                                         3/3     Running             0            7s
iomesh-chunk-1                                         0/3     Pending             0            0s
iomesh-iscsi-redirector-jj2qm                          1/2     Running             1 (5s ago)   8s
iomesh-chunk-1                                         0/3     Pending             0            1s
iomesh-chunk-1                                         0/3     Init:0/1            0            1s
iomesh-chunk-1                                         0/3     Init:0/1            0            2s
iomesh-meta-0                                          2/2     Running             0            12s
iomesh-meta-1                                          2/2     Running             0            12s
iomesh-meta-2                                          2/2     Running             0            12s
iomesh-chunk-0                                         2/3     Error               0            11s
iomesh-chunk-1                                         0/3     PodInitializing     0            4s
iomesh-chunk-0                                         3/3     Running             1 (2s ago)   12s
iomesh-chunk-1                                         3/3     Running             0            5s
iomesh-chunk-2                                         0/3     Pending             0            0s
iomesh-chunk-2                                         0/3     Pending             0            1s
iomesh-chunk-2                                         0/3     Init:0/1            0            1s
iomesh-chunk-2                                         0/3     Init:0/1            0            2s
iomesh-chunk-2                                         0/3     PodInitializing     0            3s
iomesh-iscsi-redirector-jj2qm                          2/2     Running             1 (13s ago)   16s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running             2 (0s ago)    100s
iomesh-chunk-2                                         3/3     Running             0             4s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running             2 (0s ago)    100s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running             2 (1s ago)    101s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     Error               6 (0s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     Error               6 (1s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     Error               6 (1s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     CrashLoopBackOff    6 (1s ago)    104s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     CrashLoopBackOff    6 (2s ago)    104s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     CrashLoopBackOff    6 (2s ago)    104s
iomesh-iscsi-redirector-9crx8                          1/2     Running             2 (18s ago)   23s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             2 (19s ago)   23s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running             10 (14s ago)   116s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running             10 (14s ago)   117s
iomesh-iscsi-redirector-zlhtk                          2/2     Running             2 (31s ago)    35s
iomesh-iscsi-redirector-9crx8                          2/2     Running             2 (30s ago)    35s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running             10 (18s ago)   2m
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -owide
NAME                                                   READY   STATUS    RESTARTS         AGE     IP               NODE       NOMINATED NODE   READINESS GATES
iomesh-chunk-0                                         3/3     Running   0                7m27s   192.168.80.23    worker02   <none>           <none>
iomesh-chunk-1                                         3/3     Running   0                7m20s   192.168.80.22    worker01   <none>           <none>
iomesh-chunk-2                                         3/3     Running   0                7m16s   192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-4cjlq   6/6     Running   10 (7m12s ago)   9m10s   192.168.80.22    worker01   <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-d2d4r   6/6     Running   44 (11m ago)     35m     192.168.80.23    worker02   <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-j2rks   6/6     Running   44 (11m ago)     35m     192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-node-plugin-95f5z                    3/3     Running   14 (7m15s ago)   35m     192.168.80.22    worker01   <none>           <none>
iomesh-csi-driver-node-plugin-dftlv                    3/3     Running   12 (11m ago)     35m     192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-node-plugin-s549x                    3/3     Running   12 (11m ago)     35m     192.168.80.23    worker02   <none>           <none>
iomesh-hostpath-provisioner-8jvs8                      1/1     Running   1 (8m59s ago)    35m     10.211.5.10      worker01   <none>           <none>
iomesh-hostpath-provisioner-rkkrs                      1/1     Running   0                35m     10.211.30.71     worker02   <none>           <none>
iomesh-hostpath-provisioner-rmk4z                      1/1     Running   0                35m     10.211.214.131   cp01       <none>           <none>
iomesh-iscsi-redirector-6g2fk                          2/2     Running   2 (7m25s ago)    7m30s   192.168.80.22    worker01   <none>           <none>
iomesh-iscsi-redirector-cxnbq                          2/2     Running   2 (7m25s ago)    7m30s   192.168.80.23    worker02   <none>           <none>
iomesh-iscsi-redirector-wnglv                          2/2     Running   2 (7m24s ago)    7m30s   192.168.80.21    cp01       <none>           <none>
iomesh-meta-0                                          2/2     Running   0                7m29s   10.211.30.76     worker02   <none>           <none>
iomesh-meta-1                                          2/2     Running   0                7m29s   10.211.5.11      worker01   <none>           <none>
iomesh-meta-2                                          2/2     Running   0                7m29s   10.211.214.135   cp01       <none>           <none>
iomesh-openebs-ndm-55lgk                               1/1     Running   1 (8m59s ago)    35m     192.168.80.22    worker01   <none>           <none>
iomesh-openebs-ndm-cluster-exporter-75f568df84-mrd6b   1/1     Running   0                35m     10.211.30.70     worker02   <none>           <none>
iomesh-openebs-ndm-k7dxn                               1/1     Running   0                35m     192.168.80.23    worker02   <none>           <none>
iomesh-openebs-ndm-m5p2k                               1/1     Running   0                35m     192.168.80.21    cp01       <none>           <none>
iomesh-openebs-ndm-node-exporter-2stn9                 1/1     Running   0                35m     10.211.30.72     worker02   <none>           <none>
iomesh-openebs-ndm-node-exporter-ccfr4                 1/1     Running   0                35m     10.211.214.129   cp01       <none>           <none>
iomesh-openebs-ndm-node-exporter-jdzdv                 1/1     Running   1 (8m59s ago)    35m     10.211.5.8       worker01   <none>           <none>
iomesh-openebs-ndm-operator-7d58d8fbc8-plkzz           1/1     Running   0                9m10s   10.211.30.75     worker02   <none>           <none>
iomesh-zookeeper-0                                     1/1     Running   0                35m     10.211.30.73     worker02   <none>           <none>
iomesh-zookeeper-1                                     1/1     Running   0                8m14s   10.211.5.9       worker01   <none>           <none>
iomesh-zookeeper-2                                     1/1     Running   0                7m59s   10.211.214.134   cp01       <none>           <none>
iomesh-zookeeper-operator-f5588b6d7-wknqk              1/1     Running   0                9m10s   10.211.30.74     worker02   <none>           <none>
operator-765dd9678f-6x5jf                              1/1     Running   0                35m     10.211.30.68     worker02   <none>           <none>
operator-765dd9678f-h5mzh                              1/1     Running   0                35m     10.211.214.130   cp01       <none>           <none>
operator-765dd9678f-svff4                              1/1     Running   0                9m10s   10.211.214.133   cp01       <none>           <none>

Deploy IOMesh

Check Blockdevice Status

After IOMesh is installed, all blockdevices are in the Unclaimed state:

root@cp01:~# kubectl -n iomesh-system get blockdevice
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Unclaimed    Active   15m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Unclaimed    Active   15m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Unclaimed    Active   15m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Unclaimed    Active   15m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Unclaimed    Active   15m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Unclaimed    Active   15m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed    Active   15m
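To decide which devices should become cache disks, the listing can be filtered by size (1600321314816 bytes corresponds to the 1.6 TB NVMe SSDs in this lab). A sketch; two sample lines stand in for the live kubectl output:

```shell
# Sample of 'kubectl get blockdevice' output; column 3 is SIZE in bytes.
cat > /tmp/bd-sample.txt <<'EOF'
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Unclaimed    Active   15m
EOF
# Print the names of the 1.6 TB devices (the cache-disk candidates).
awk '$3 == "1600321314816" {print $1}' /tmp/bd-sample.txt
```

Against a real cluster, pipe `kubectl -n iomesh-system get blockdevice --no-headers` into the same awk filter.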

Label the Disks (Optional)

Use the following commands to label every cache disk and data disk, so that they can later be claimed by role.

Cache disks:
# kubectl label blockdevice blockdevice-20cce4274e106429525f4a0ca7d192c2 -n iomesh-system iomesh-system/disk=SSD
Data disks:
# kubectl label blockdevice blockdevice-31a2642808ca091a75331b6c7a1f9f68 -n iomesh-system iomesh-system/disk=HDD
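Labeling many blockdevices one by one is tedious; the commands can be generated in a loop. A sketch with hypothetical device names; it only prints the commands, so drop the leading `echo` (or pipe the output to `sh`) to run them against a real cluster:

```shell
# Generate one label command per blockdevice; 'echo' keeps this a dry run.
for bd in blockdevice-aaa blockdevice-bbb; do
  echo kubectl label blockdevice "$bd" -n iomesh-system iomesh-system/disk=SSD
done
```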

Device Map - HybridFlash

The chunk/deviceMap section of iomesh.yaml declares which disks IOMesh uses and how each type of disk is mounted. In this document, label selection distinguishes disks by iomesh-system/disk=SSD or HDD.

Alternatively, inspect a disk's existing labels and select one of the default ones:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system blockdevice-02e15a7c78768f42a1e552a3726cff89 -oyaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  annotations:
    internal.openebs.io/uuid-scheme: gpt
  creationTimestamp: "2022-12-27T09:39:07Z"
  generation: 1
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    iomesh.com/bd-devicePath: dev.sdd
    iomesh.com/bd-deviceType: disk
    iomesh.com/bd-driverType: HDD
    iomesh.com/bd-model: DL2400MM0159
    iomesh.com/bd-serial: 5000c500e2476207
    iomesh.com/bd-vendor: SEAGATE
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: worker02
    kubernetes.io/os: linux
    ndm.io/blockdevice-type: blockdevice
    ndm.io/managed: "true"
    nodename: worker02
  name: blockdevice-02e15a7c78768f42a1e552a3726cff89
  namespace: iomesh-system
  resourceVersion: "3333"
  uid: 489546d4-ef39-4980-9fcd-b699a21940ec
......

If the environment contains disks that should not be used by IOMesh, they can be excluded via exclude.

Example (using the labels we set above):

...
    deviceMap:
      # cacheWithJournal:
      #   selector:
      #     matchLabels:
      #       iomesh.com/bd-deviceType: disk
      cacheWithJournal:
        selector:
          matchExpressions:
          - key: iomesh-system/disk
            operator: In
            values:
            - SSD
      dataStore:
        selector:
          matchExpressions:
          - key: iomesh-system/disk
            operator: In
            values:
            - HDD
      # exclude: blockdev-xxxx   ### blockdevices to exclude

Claim the Disks

After iomesh.yaml has been updated, claim the disks with:

./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml

Afterwards, verify that all of the required disks are in the Claimed state:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed      Active   5h19m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed      Active   5h19m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed      Active   5h19m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed      Active   5h19m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed      Active   5h19m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed      Active   5h19m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Claimed      Active   5h19m

Create a StorageClass

The next step is to create a StorageClass.

This involves the PV - PVC - StorageClass (SC) concepts used to provide persistent storage to containers. See the CNCF documentation for details. A simplified view:

  • A PV corresponds to a Volume in the storage cluster.
  • A PVC declares what kind of storage is needed. When a container needs persistent storage, the PVC acts as the interface between the container and a PV.
  • Defining a PV requires many fields, and creating many PVs by hand in a large deployment is tedious. Dynamic Provisioning simplifies this by creating PVs automatically: an administrator defines a StorageClass and deploys a PV provisioner; a developer's PVC names the desired StorageClass, which is passed to the PV provisioner, and the provisioner creates the PV automatically.

After the IOMesh CSI driver is installed, a default StorageClass named iomesh-csi-driver is created. You can also create custom StorageClasses as needed.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iomesh-sc
provisioner: com.iomesh.csi-driver # <-- driver.name in iomesh-values.yaml
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  # "ext4" / "ext3" / "ext2" / "xfs"
  csi.storage.k8s.io/fstype: "ext4"
  # "2" / "3"
  replicaFactor: "2"
  # "true" / "false"
  thinProvision: "true"
volumeBindingMode: Immediate

Example:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-default
provisioner: com.iomesh.csi-driver
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: "ext4"
  replicaFactor: "2"
  thinProvision: "true"
volumeBindingMode: Immediate

Note: reclaimPolicy can be set to Retain or Delete; the default is Delete. When a PV is created, its persistentVolumeReclaimPolicy inherits the StorageClass's reclaimPolicy.

The difference between the two:

Delete: when the PVC is deleted, a PV created through the PVC/SC is deleted along with it.

Retain: when the PVC is deleted, the PV is not deleted; it moves to the Released state and must be manually reclaimed before it can be reused.
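If a PV created under a Delete-policy StorageClass turns out to hold data worth keeping, its reclaim policy can be flipped on the PV itself before the PVC is removed. A sketch that only prints the command (the PV name below is the one from this lab's output; substitute your own); remove the `echo` to execute it:

```shell
# Dry run: print the kubectl patch that switches the PV to Retain.
echo kubectl patch pv pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```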

root@cp01:~/iomesh-offline# kubectl get sc
NAME                PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath            kubevirt.io/hostpath-provisioner   Delete          WaitForFirstConsumer   false                  5h23m
iomesh-csi-driver   com.iomesh.csi-driver              Retain          Immediate              true                   5h23m
sc-default          com.iomesh.csi-driver              Delete          Immediate              true                   47m

Create a SnapshotClass

A Kubernetes VolumeSnapshotClass object is similar to a StorageClass. Define it as follows:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: iomesh-csi-driver-default
driver: com.iomesh.csi-driver
deletionPolicy: Delete
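With the class in place, a snapshot of an existing PVC references it through a VolumeSnapshot object. A sketch; the snapshot and PVC names are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot
spec:
  volumeSnapshotClassName: iomesh-csi-driver-default
  source:
    persistentVolumeClaimName: pvc-test   # PVC to snapshot
```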

Create a Volume

Make sure the StorageClass exists before creating the PVC.

Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: sc-default    ## name of the StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
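A pod then consumes the volume by referencing the PVC by name. A minimal sketch; the pod and image names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data          # IOMesh volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-test       # PVC created above
```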

The PVC above creates a corresponding PV. Once it is done, check the PVC:

root@cp01:~/iomesh-offline# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
iomesh-pvc-10g   Bound    pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            sc-default     44m

Check the PV created through the PVC:

root@cp01:~/iomesh-offline# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
pvc-5900b5e5-00d4-42ad-821e-6d1199a7e3e3   217Gi      RWO            Delete           Bound    iomesh-system/coredump-iomesh-chunk-0   hostpath                4h55m
pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            Delete           Bound    default/iomesh-pvc-10g                  sc-default              44m

Check the Cluster's Storage Status

root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -oyaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 4
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "653267"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        cacheWithJournal:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - SSD
        dataStore:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - HDD
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: hybridFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-02e15a7c78768f42a1e552a3726cff89
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-7401844e99d0b75f2543768a0464e16c
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-6e9e9eaafee63535906f3f3b9ab35687
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5f40eb004d563e77febbde69d930880c
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-de6e4ce89275cd799fdf88246c445399
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-f505ed7231329ab53ebe26e8a00d4483
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-4f6835975745542e4a36b0548a573ef3
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-73f728aca1ab2f74a833303fffc59301
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: cacheWithJournal
      nodeName: cp01
    - device: blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-d017164bd3f69f337e1838cc3b3c4aaf
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: cacheWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 387.50Mi
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 8.62Ti
          totalDataCapacity: 24.66Ti
          usedCacheSpace: 387.50Mi
          usedDataSpace: 11.50Gi
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""

The key part to review is the Summary:

    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 387.50Mi
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 8.62Ti
          totalDataCapacity: 24.66Ti
          usedCacheSpace: 387.50Mi
          usedDataSpace: 11.50Gi
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
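This health check can be scripted. Below is a minimal sketch (the helper name and the text-scanning approach are my own, not part of IOMesh) that scans the raw output of `kubectl get iomeshclusters.iomesh.com -n iomesh-system -o yaml` and treats the cluster as healthy only when every chunk reports CHUNK_STATUS_CONNECTED_HEALTHY and the meta service reports META_RUNNING:

```python
import re

# Sketch: scan the raw YAML dump for chunk and meta status strings.
def cluster_healthy(dump: str) -> bool:
    chunk_statuses = re.findall(r"status:\s*(CHUNK_STATUS_\w+)", dump)
    meta_status = re.search(r"status:\s*(META_\w+)", dump)
    if not chunk_statuses or meta_status is None:
        return False
    return (all(s == "CHUNK_STATUS_CONNECTED_HEALTHY" for s in chunk_statuses)
            and meta_status.group(1) == "META_RUNNING")

# Abbreviated sample of the dump shown above.
sample = """\
        - id: 1
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      metaSummary:
        status: META_RUNNING
"""
print(cluster_healthy(sample))  # True
```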

All-Flash Mode

Modify the iomesh.yaml file:

iomesh:
  # Whether to create IOMeshCluster object
  create: true

  # Use All-Flash Mode or Hybrid-Flash Mode, in All-Flash mode will reject mounting `cacheWithJournal` and `rawCache` type.
  # And enable mount `dataStoreWithJournal`.
  diskDeploymentMode: "allFlash"
#Note: the inline comment above never spells out the accepted keywords — "ALL-Flash" is NOT valid. Applying it fails:
#./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
#Error: UPGRADE FAILED: cannot patch "iomesh" with kind IOMeshCluster: IOMeshCluster.iomesh.com "iomesh" is invalid: spec.diskDeploymentMode: Unsupported value: "ALL-Flash": supported values: "hybridFlash", "allFlash"

    deviceMap:
    #  cacheWithJournal:
    #    selector:
    #      matchExpressions:
    #      - key: iomesh-system/disk
    #        operator: In
    #        values:
    #        - SSD
    #  dataStore:
    #    selector:
    #      matchExpressions:
    #      - key: iomesh-system/disk
    #        operator: In
    #        values:
    #        - HDD

      dataStoreWithJournal:
        selector:
          matchLabels:
            iomesh.com/bd-deviceType: disk
          matchExpressions:
          - key: iomesh.com/bd-driverType
            operator: In
            values:
            - SSD

The CRD YAML shows where this restriction comes from:

diskDeploymentMode:
                default: hybridFlash
                description: DiskDeploymentMode set this IOMesh cluster start with
                  all-flash mode or hybrid-flash mode. In all-flash mode, the DeviceManager
                  will reject mount any `Cache` type partition. In hybrid-flash mode,
                  the DeviceManager will reject mount `dataStoreWithJournal` type
                  partition.
                enum:
                - hybridFlash
                - allFlash
                type: string
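The enum above is what rejected the earlier upgrade. A trivial sketch of the same validation (the helper is hypothetical, not part of the chart or operator):

```python
# The CRD only accepts these two values for spec.diskDeploymentMode.
VALID_MODES = {"hybridFlash", "allFlash"}

def validate_mode(mode: str) -> None:
    # Mirrors the admission error produced when "ALL-Flash" was applied.
    if mode not in VALID_MODES:
        raise ValueError(f'Unsupported value: "{mode}": '
                         'supported values: "hybridFlash", "allFlash"')

validate_mode("allFlash")  # accepted silently
```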

After applying the upgrade:

root@cp01:~/iomesh-offline# ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
Release "iomesh" has been upgraded. Happy Helming!
NAME: iomesh
LAST DEPLOYED: Wed Dec 28 11:30:02 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 8
TEST SUITE: None

The disks are re-claimed:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed      Active   25h
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed      Active   25h
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed      Active   25h
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed      Active   25h
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed      Active   25h
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed      Active   25h
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed    Active   25h
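To see the mode switch at a glance, the claim states can be tallied per node. A small sketch over a few abbreviated sample lines of the `kubectl get blockdevice --no-headers` output above (in all-flash mode only the 1.6 TB NVMe devices stay Claimed):

```python
from collections import Counter

# Abbreviated sample of `kubectl get blockdevice -n iomesh-system --no-headers`.
sample = """\
blockdevice-02e1  worker02  2400476553216  Unclaimed  Active  25h
blockdevice-0c6a  cp01      1600321314816  Claimed    Active  25h
blockdevice-5254  cp01      1600321314816  Claimed    Active  25h
blockdevice-bbe1  worker01  1600321314816  Claimed    Active  25h
blockdevice-4f68  cp01      2400476553216  Unclaimed  Active  25h
"""

# Tally (node, claim-state) pairs from the whitespace-separated columns.
counts = Counter()
for line in sample.splitlines():
    _name, node, _size, claim, *_rest = line.split()
    counts[(node, claim)] += 1

for (node, claim), n in sorted(counts.items()):
    print(f"{node:10s} {claim:10s} {n}")
```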

Check the cluster storage again:

root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -o yaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 5
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "692613"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        dataStoreWithJournal:
          selector:
            matchExpressions:
            - key: iomesh.com/bd-driverType
              operator: In
              values:
              - SSD
            matchLabels:
              iomesh.com/bd-deviceType: disk
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: allFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: dataStoreWithJournal
      nodeName: cp01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: dataStoreWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 0B
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 0B
          totalDataCapacity: 8.56Ti
          usedCacheSpace: 0B
          usedDataSpace: 0B
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""

Monitoring pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyzbs
spec:
  selector:
    matchLabels:
      app: pyzbs
  replicas: 1
  template:
    metadata:
      labels:
        app: pyzbs # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: pyzbs
        image: iomesh/zbs-client-py-builder:latest
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]

Uninstall

root@cp01:~/iomesh-offline# ./helm uninstall -n iomesh-system iomesh
These resources were kept due to the resource policy:
[Deployment] iomesh-openebs-ndm-operator
[Deployment] iomesh-zookeeper-operator
[Deployment] operator
[DaemonSet] iomesh-hostpath-provisioner
[DaemonSet] iomesh-openebs-ndm
[RoleBinding] iomesh-zookeeper-operator
[RoleBinding] iomesh:leader-election
[Role] iomesh-zookeeper-operator
[Role] iomesh:leader-election
[ClusterRoleBinding] iomesh-hostpath-provisioner
[ClusterRoleBinding] iomesh-openebs-ndm
[ClusterRoleBinding] iomesh-zookeeper-operator
[ClusterRoleBinding] iomesh:manager
[ClusterRole] iomesh-hostpath-provisioner
[ClusterRole] iomesh-openebs-ndm
[ClusterRole] iomesh-zookeeper-operator
[ClusterRole] iomesh:manager
[ConfigMap] iomesh-openebs-ndm-config
[ServiceAccount] openebs-ndm
[ServiceAccount] zookeeper-operator
[ServiceAccount] iomesh-operator

release "iomesh" uninstalled

Q1 - A pod on one node fails to mount its PVC

Warning  FailedMount             5m25s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[fio-pvc-worker01 kube-api-access-thpm9]: timed out waiting for the condition
  Warning  FailedMapVolume         66s (x11 over 7m20s)  kubelet                  MapVolume.SetUpDevice failed for volume "pvc-bbe904d5-0362-481b-8ec4-e059c0ac2d52" : rpc error: code = Internal desc = failed to attach &{LunId:8 Initiator:iqn.2020-05.com.iomesh:817dd926-3e0f-4c82-9c02-5e30b8e045c5-59068042-6fff-4700-b0b4-537376943d5f IFace:59068042-6fff-4700-b0b4-537376943d5f Portal:127.0.0.1:3260 TargetIqn:iqn.2016-02.com.smartx:system:ebc8b5ba-7a07-4651-9986-f7ea4fe0121e ChapInfo:<nil>}, failed to iscsiadm  discovery target, command: [iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260 -I 59068042-6fff-4700-b0b4-537376943d5f -o update], output: iscsiadm: Failed to load module tcp: No such file or directory
iscsiadm: Could not load transport tcp.Dropping interface 59068042-6fff-4700-b0b4-537376943d5f.
, error: exit status 21
  Warning  FailedMount  54s (x2 over 3m9s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[kube-api-access-thpm9 fio-pvc-worker01]: timed out waiting for the condition

Re-check the prerequisites on that node:

1. Edit the iSCSI configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf

---

2. Ensure the iscsi_tcp module is loaded (and loaded on boot)
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modules-load.d/iscsi-tcp.conf'

---

3. Start the iscsid service
systemctl enable --now iscsid
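If the FailedMapVolume error recurs, confirm the module is actually resident before retrying the mount. A sketch of that check (on a real node you would read `/proc/modules` directly; the sample string here is illustrative):

```python
def module_loaded(proc_modules: str, name: str = "iscsi_tcp") -> bool:
    # /proc/modules lists one module per line, with the name in column one.
    return any(line.split()[0] == name
               for line in proc_modules.splitlines() if line.strip())

sample = ("iscsi_tcp 24576 4 - Live 0x0000000000000000\n"
          "libiscsi 61440 2 iscsi_tcp, Live 0x0000000000000000\n")
print(module_loaded(sample))          # True
print(module_loaded(sample, "nvme"))  # False
```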
