Using Ceph RBD block storage with a Kubernetes 1.13.1 cluster

Published by disable on 2021-09-09

Introduction

Ceph supports object storage, file systems, and block storage in a single system. The Kubernetes examples cover two ways of consuming it: CephFS and RBD. CephFS requires ceph itself to be installed on the nodes, while RBD only requires ceph-common.
The access modes they support differ as follows:

Volume Plugin   ReadWriteOnce   ReadOnlyMany    ReadWriteMany
CephFS          ✓               ✓               ✓
RBD             ✓               ✓               -

Environment

Kubernetes cluster, version 1.13.1

[root@elasticsearch01 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   24d   v1.13.1
10.2.8.65   Ready    <none>   24d   v1.13.1

Ceph cluster, Luminous release

[root@ceph01 ~]# ceph -s
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active), standbys: ceph02, ceph01
    osd: 24 osds: 24 up, 24 in
    rgw: 3 daemons active

Procedure

I. Create the pool and images on the Ceph cluster

[root@ceph01 ~]# ceph osd pool create rbd-k8s 1024 1024
For better initial performance on pools expected to store a large number of objects, consider supplying the expected_num_objects parameter when creating the pool.

[root@ceph01 ~]# ceph osd lspools
1 rbd-es,2 .rgw.root,3 default.rgw.control,4 default.rgw.meta,5 default.rgw.log,6 default.rgw.buckets.index,7 default.rgw.buckets.data,8 default.rgw.buckets.non-ec,9 rbd-k8s,

[root@ceph01 ~]# rbd create rbd-k8s/cephimage1 --size 10240
[root@ceph01 ~]# rbd create rbd-k8s/cephimage2 --size 20480
[root@ceph01 ~]# rbd create rbd-k8s/cephimage3 --size 40960
[root@ceph01 ~]# rbd list rbd-k8s
cephimage1
cephimage2
cephimage3
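The `--size` argument to `rbd create` is given in MiB, so the three images above are 10, 20 and 40 GiB respectively; a quick check of the arithmetic:

```shell
# rbd create --size is in MiB; divide by 1024 to get GiB
for size in 10240 20480 40960; do
  echo "${size} MiB = $((size / 1024)) GiB"
done
# 10240 MiB = 10 GiB
# 20480 MiB = 20 GiB
# 40960 MiB = 40 GiB
```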

II. Use Ceph RBD block storage from the Kubernetes cluster

1. Download the examples

[root@elasticsearch01 ~]# git clone
Cloning into 'examples'...
remote: Enumerating objects: 11475, done.
remote: Total 11475 (delta 0), reused 0 (delta 0), pack-reused 11475
Receiving objects: 100% (11475/11475), 16.94 MiB | 6.00 MiB/s, done.
Resolving deltas: 100% (6122/6122), done.

[root@elasticsearch01 ~]# cd examples/staging/volumes/rbd
[root@elasticsearch01 rbd]# ls
rbd-with-secret.yaml  rbd.yaml  README.md  secret
[root@elasticsearch01 rbd]# cp -a ./rbd /k8s/yaml/volumes/

2. Install the Ceph client on the Kubernetes cluster nodes

[root@elasticsearch01 ceph]# yum install ceph-common

3. Edit the rbd-with-secret.yaml configuration file
The modified configuration is as follows:

[root@elasticsearch01 rbd]# cat rbd-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
    - image: kubernetes/pause
      name: rbd-rw
      volumeMounts:
      - name: rbdpd
        mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
        - '10.0.4.10:6789'
        - '10.0.4.13:6789'
        - '10.0.4.15:6789'
        pool: rbd-k8s
        image: cephimage1
        fsType: ext4
        readOnly: true
        user: admin
        secretRef:
          name: ceph-secret

Adjust the following parameters to your environment:
monitors: the Ceph cluster monitors; a cluster can run several, three in this setup.
pool: the Ceph pool used to group the stored data; here rbd-k8s.
image: the disk image in the Ceph block device; here cephimage1.
fsType: the filesystem type; the default ext4 is fine.
readOnly: whether to mount read-only; read-only is fine for this test.
user: the username the Ceph client uses to access the storage cluster; admin is fine here.
keyring: the keyring used for Ceph authentication, the ceph.client.admin.keyring generated when the Ceph cluster was deployed.
imageformat: the disk image format, 2 or the older 1; use 1 on older kernels.
imagefeatures: the disk image features; check uname -r for what your kernel supports. The CentOS 7.4 kernel here (3.10.0-693.el7.x86_64) supports only layering.
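If you prefer pointing at the keyring file rather than a Secret, the in-tree rbd volume source can reference it directly via the keyring field instead of secretRef; a minimal sketch, assuming the keyring is present at the same path on every node:

```yaml
volumes:
  - name: rbdpd
    rbd:
      monitors:
      - '10.0.4.10:6789'
      - '10.0.4.13:6789'
      - '10.0.4.15:6789'
      pool: rbd-k8s
      image: cephimage1
      user: admin
      # path to the keyring on each node, used when no secretRef is given
      keyring: /etc/ceph/ceph.client.admin.keyring
      fsType: ext4
      readOnly: true
```

A Secret is still preferable in practice, since it avoids distributing the keyring file to every node.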

4. Use the Ceph authentication key
Using a Secret in the cluster is more convenient, easier to scale, and more secure.

[root@ceph01 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw==

[root@ceph01 ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
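The pipeline above deliberately strips the trailing newline before encoding; a plain echo would append one and corrupt the key. A round-trip check of the encoding (GNU coreutils base64 assumed):

```shell
KEY='AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw=='
# printf avoids the trailing newline that echo would add; -w0 disables line wrapping
ENCODED=$(printf '%s' "$KEY" | base64 -w0)
echo "$ENCODED"   # matches the value placed in ceph-secret.yaml below
# decoding must give back the original key
printf '%s' "$ENCODED" | base64 -d
```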

5. Create the ceph-secret

[root@elasticsearch01 rbd]# cat secret/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==

[root@elasticsearch01 rbd]# kubectl create -f secret/ceph-secret.yaml
secret/ceph-secret created

6. Create a pod to test RBD
Create it directly from the official example:

[root@elasticsearch01 rbd]# kubectl create -f rbd-with-secret.yaml

In production, however, volumes are not used directly this way: a volume is created and deleted along with its pod, so the data is not retained. To keep data across pod lifecycles, use a PV and a PVC.

7. Create the ceph-pv
Note that RBD supports ReadWriteOnce and ReadOnlyMany but not yet ReadWriteMany; likewise, in day-to-day use an RBD image is mapped to only one client at a time. CephFS does support ReadWriteMany.

[root@elasticsearch01 rbd]# cat rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '10.0.4.10:6789'
      - '10.0.4.13:6789'
      - '10.0.4.15:6789'
    pool: rbd-k8s
    image: cephimage2
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv.yaml
persistentvolume/ceph-rbd-pv created

[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Available

8. Create the ceph-pvc

[root@elasticsearch01 rbd]# cat rbd-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-claim.yaml
persistentvolumeclaim/ceph-rbd-pv-claim created

[root@elasticsearch01 rbd]# kubectl get pvc
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-pv-claim   Bound    ceph-rbd-pv   20Gi       RWO                           6s

[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Bound    default/ceph-rbd-pv-claim                           5m28s

Note that the claim requested only 10Gi yet bound the 20Gi PV: a PVC binds to any available PV whose capacity is at least the requested size.

9. Create a pod to test RBD via the PV and PVC
Because the RBD image must be formatted before it is mounted and the volume is fairly large (10G), this takes a while, roughly a few minutes.

[root@elasticsearch01 rbd]# cat rbd-pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv-pod1
spec:
  containers:
  - name: ceph-rbd-pv-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt/ceph-rbd-pvc/busybox
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: ceph-rbd-pv-claim

[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-pod.yaml
pod/ceph-rbd-pv-pod1 created

[root@elasticsearch01 rbd]# kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
busybox            1/1     Running             432        18d
ceph-rbd-pv-pod1   0/1     ContainerCreating   0          19s

The pod failed with the following error:
MountVolume.WaitForAttach failed for volume "ceph-rbd-pv" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable". In some cases useful info is found in syslog - try "dmesg | tail". rbd: map failed: (6) No such device or address
Solution:
Disable the image features that the CentOS 7.4 kernel does not support. For production, it is best to run Kubernetes and the Ceph clients on an operating system with a newer kernel:

[root@ceph01 ~]# rbd feature disable rbd-k8s/cephimage2 exclusive-lock object-map fast-diff deep-flatten
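To avoid hitting this mismatch on every image created later, the client-side default feature set can be restricted instead of disabling features after the fact; a sketch, assuming you manage /etc/ceph/ceph.conf on the hosts where images are created:

```ini
# /etc/ceph/ceph.conf — create new RBD images with only the 'layering'
# feature (feature bit 1), which the CentOS 7 3.10 kernel can map
[global]
rbd default features = 1
```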

III. Verify the result

1. Verify on the Kubernetes side

[root@elasticsearch01 rbd]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
busybox            1/1     Running   432        18d     10.254.35.3   10.2.8.65   <none>           <none>
ceph-rbd-pv-pod1   1/1     Running   0          3m39s   10.254.35.8   10.2.8.65   <none>           <none>

[root@elasticsearch02 ceph]# df -h | grep rbd
/dev/rbd0                  493G  162G  306G  35% /data
/dev/rbd1                   20G   45M   20G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2

[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
lost+found

[root@elasticsearch01 rbd]# kubectl exec -ti ceph-rbd-pv-pod1 sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.1G      7.4G     39.1G  16% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/vda1                49.1G      7.4G     39.1G  16% /dev/termination-log
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/resolv.conf
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hostname
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/rbd1                19.6G     44.0M     19.5G   0% /mnt/ceph-rbd-pvc/busybox
tmpfs                     7.8G     12.0K      7.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     7.8G         0      7.8G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/timer_stats
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     7.8G         0      7.8G   0% /proc/scsi
tmpfs                     7.8G         0      7.8G   0% /sys/firmware
/ # cd /mnt/ceph-rbd-pvc/busybox
/mnt/ceph-rbd-pvc/busybox # ls
lost+found
/mnt/ceph-rbd-pvc/busybox # touch ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # ls
ceph-rbd-pods  lost+found
/mnt/ceph-rbd-pvc/busybox # echo busbox > ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # cat ceph-rbd-pods
busbox

[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
ceph-rbd-pods  lost+found

2. Verify on the Ceph side

[root@ceph01 ~]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    65.9TiB     58.3TiB      7.53TiB         11.43
POOLS:
    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd-es                         1      1.38TiB      7.08       18.1TiB      362911 
    .rgw.root                      2      1.14KiB         0       18.1TiB           4 
    default.rgw.control            3           0B         0       18.1TiB           8 
    default.rgw.meta               4      46.9KiB         0        104GiB         157 
    default.rgw.log                5           0B         0       18.1TiB         345 
    default.rgw.buckets.index      6           0B         0        104GiB        2012 
    default.rgw.buckets.data       7      1.01TiB      5.30       18.1TiB     2090721 
    default.rgw.buckets.non-ec     8           0B         0       18.1TiB           0 
    rbd-k8s                        9       137MiB         0       18.1TiB          67



Author: 三杯水Plus
Source: ITPUB blog, http://blog.itpub.net/200/viewspace-2820977/. Please credit this source when reposting.
