Using RBD as Kubernetes storage
To use RBD as backend storage, ceph-common must first be installed (on every node that will map images).
1. Create an RBD image on the Ceph cluster
Create a pool on the Ceph cluster ahead of time, then create an image in it:
[root@ceph01 ~]# ceph osd pool create pool01
[root@ceph01 ~]# ceph osd pool application enable pool01 rbd
[root@ceph01 ~]# rbd pool init pool01
[root@ceph01 ~]# rbd create pool01/test --size 10G --image-format 2 --image-feature layering
[root@ceph01 ~]# rbd info pool01/test
2. Write the Kubernetes YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: rbd
  name: rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rbd
    spec:
      volumes:
      - name: test
        rbd:
          fsType: xfs
          keyring: /root/admin.keyring
          monitors:
          - 192.168.200.230:6789
          pool: pool01
          image: test
          user: admin
          readOnly: false
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: test
        name: nginx
        resources: {}
status: {}
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rbd-888b8b747-n56wr 1/1 Running 0 26m
At this point Kubernetes is using RBD as its storage.
If the pod stays stuck in ContainerCreating, ceph-common is probably not installed, or the keyring / ceph.conf was never distributed to the node; kubectl describe will show the exact cause.
2.1 Check the mount inside the container
[root@master euler]# kubectl exec -it rbd-5db4759c-nj2b4 -- bash
root@rbd-5db4759c-nj2b4:/# df -hT |grep /dev/rbd0
/dev/rbd0 xfs 10G 105M 9.9G 2% /usr/share/nginx/html
As you can see, /dev/rbd0 has been formatted as xfs and mounted at /usr/share/nginx/html.
2.2 Modify the content inside the container
root@rbd-5db4759c-nj2b4:/usr/share/nginx# cd html/
root@rbd-5db4759c-nj2b4:/usr/share/nginx/html# ls
root@rbd-5db4759c-nj2b4:/usr/share/nginx/html# echo 123 > index.html
root@rbd-5db4759c-nj2b4:/usr/share/nginx/html# chmod 644 index.html
root@rbd-5db4759c-nj2b4:/usr/share/nginx/html# exit
[root@master euler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rbd-5db4759c-nj2b4 1/1 Running 0 8m5s 192.168.166.131 node1 <none> <none>
Access the pod to check the content:
[root@master euler]# curl 192.168.166.131
123
The content is served correctly. Now delete the pod and let the Deployment recreate it, to see whether the file survives:
[root@master euler]# kubectl delete pods rbd-5db4759c-nj2b4
pod "rbd-5db4759c-nj2b4" deleted
[root@master euler]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rbd-5db4759c-v9cgm 0/1 ContainerCreating 0 2s
[root@master euler]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rbd-5db4759c-v9cgm 1/1 Running 0 40s 192.168.166.132 node1 <none> <none>
[root@master euler]# curl 192.168.166.132
123
As you can see, it still works: Kubernetes is using RBD storage correctly.
One problem remains: developers rarely know how to write the volume section of a YAML file, and every storage type has its own syntax. Is there a way to hide the backend details and simply tell the cluster what kind of storage is needed?
There is: the PersistentVolume (PV).
3. Using RBD through a PV
[root@master euler]# vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 8Gi
This PVC asks for an 8 GiB block device; at this point no PV exists that can satisfy it.
The details are not covered here; they are written up in the CKA notes.
Note that this is a PVC, not a PV. A PVC is how a developer declares the storage type and size they want; an administrator can then create a PV to match the claim, or create PVs in advance for claims to bind to directly.
[root@master euler]# vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbdpv
spec:
  capacity:
    storage: 8Gi
  volumeMode: Block
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  rbd:
    fsType: xfs
    image: test
    keyring: /etc/ceph/ceph.client.admin.keyring
    monitors:
    - 172.16.1.33
    pool: rbd
    readOnly: false
    user: admin
3.1 Check the PVC status
Apply both files with kubectl apply -f, then check:
[root@master euler]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound rbdpv 8Gi RWO 11s
The PV and PVC are now bound. A PV binds to exactly one PVC, and likewise a PVC binds to exactly one PV.
3.2 Use the PVC
[root@master euler]# vim pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pvc-pod
  name: pvc-pod
spec:
  volumes:
  - name: rbd
    persistentVolumeClaim:
      claimName: myclaim
      readOnly: false
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pvc-pod
    volumeDevices:  # the claim is a block device, so volumeDevices is used here
    - devicePath: /dev/rbd0
      name: rbd
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
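The pod above attaches the claim as a raw block device via volumeDevices. For comparison, if the PVC had used volumeMode: Filesystem, the container would declare volumeMounts with a mountPath instead; a hypothetical fragment:

```yaml
# Filesystem-mode counterpart (hypothetical): the kubelet formats and mounts
# the volume, so the container sees a directory rather than a raw device
volumeMounts:
- mountPath: /usr/share/nginx/html
  name: rbd
```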
[root@master euler]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pvc-pod 1/1 Running 0 2m5s
rbd-5db4759c-v9cgm 1/1 Running 0 39m
3.3 Check the block device inside the container
root@pvc-pod:/# ls /dev/rbd0
/dev/rbd0
As you can see, /dev/rbd0 now exists inside the container.
Done this way, every PVC still needs a matching PV created by hand; dynamic provisioning removes that step.
4. Dynamic provisioning
Dynamic provisioning uses a StorageClass. The Kubernetes version currently shipped with openEuler is too old for upstream ceph-csi, so download the openEuler fork of it:
[root@master ~]# git clone https://gitee.com/yftyxa/ceph-csi.git
[root@master ~]# cd ceph-csi/deploy/
[root@master deploy]# ls
ceph-conf.yaml csi-config-map-sample.yaml rbd
cephcsi Makefile scc.yaml
cephfs nfs service-monitor.yaml
4.1 Dynamically provision RBD
The files under /root/ceph-csi/deploy/rbd/kubernetes need to be modified.
# First create a csi namespace
[root@master ~]# kubectl create ns csi
Edit the file contents:
[root@master kubernetes]# vim csi-rbdplugin-provisioner.yaml
# Change line 63 to false:
63 - "--extra-create-metadata=false"
# Edit the second file
[root@master kubernetes]# vim csi-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "ceph-csi-config"
data:
  config.json: |-
    [
      {
        "clusterID": "c1f213ae-2de3-11ef-ae15-00163e179ce3",
        "monitors": ["172.16.1.33","172.16.1.32","172.16.1.31"]
      }
    ]
- The clusterID here can be found with ceph -s (it is the fsid of the cluster)
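The config.json payload must be valid JSON or the CSI pods will reject it; a quick local sanity check of the sample above (the /tmp path is arbitrary):

```shell
# Write the sample config.json and confirm it parses as JSON
cat > /tmp/ceph-csi-config.json <<'EOF'
[
  {
    "clusterID": "c1f213ae-2de3-11ef-ae15-00163e179ce3",
    "monitors": ["172.16.1.33","172.16.1.32","172.16.1.31"]
  }
]
EOF
python3 -m json.tool /tmp/ceph-csi-config.json > /dev/null && echo "valid JSON"
```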
Edit the third file:
[root@master kubernetes]# vim csidriver.yaml
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: "rbd.csi.ceph.com"
spec:
  attachRequired: true
  podInfoOnMount: false
  # seLinuxMount: true   # comment out this line
  fsGroupPolicy: File
Write one more file yourself:
[root@master kubernetes]# vim csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
data:
  config-json: |-
    {}
4.2 Get the admin key
[root@ceph001 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
- Only the key value is needed: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
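If you want to script the extraction, the key field can be pulled out of the keyring with awk (sample file recreated below; on a live cluster, `ceph auth get-key client.admin` prints the same value):

```shell
# Recreate the sample keyring and print only the secret itself
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
    key = AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
EOF
awk '/key = / {print $3}' /tmp/sample.keyring
```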
Then write a csi-secret.yaml file yourself:
[root@master kubernetes]# vim csi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-secret
stringData:
  userID: admin
  userKey: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
  adminID: admin
  adminKey: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
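The stringData field accepts the key verbatim; had the data field been used instead, each value would first have to be base64-encoded. A round-trip demonstration:

```shell
# Encoding then decoding returns the original key unchanged
echo -n 'AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==' | base64 | base64 -d
```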
[root@master kubernetes]# kubectl apply -f csi-secret.yaml -n csi
secret/csi-secret created
[root@master kubernetes]# cd ../../
[root@master deploy]# kubectl apply -f ceph-conf.yaml -n csi
configmap/ceph-config created
[root@master deploy]# cd -
/root/ceph-csi/deploy/rbd/kubernetes
4.3 Replace all the namespaces
[root@master kubernetes]# sed -i "s/namespace: default/namespace: csi/g" *.yaml
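The substitution can be previewed on a single sample line before running it in place with -i:

```shell
# Same expression without -i, applied to one sample manifest line
echo "  namespace: default" | sed "s/namespace: default/namespace: csi/g"
```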
4.4 Deploy
[root@master kubernetes]# kubectl apply -f . -n csi
Note: if you have fewer than three worker nodes, reduce replicas in csi-rbdplugin-provisioner.yaml accordingly.
[root@master kubernetes]# kubectl get pods -n csi
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-cv455 3/3 Running 1 (2m14s ago) 2m46s
csi-rbdplugin-pf5ld 3/3 Running 0 4m36s
csi-rbdplugin-provisioner-6846c4df5f-dvqqk 7/7 Running 0 4m36s
csi-rbdplugin-provisioner-6846c4df5f-nmcxf 7/7 Running 1 (2m11s ago) 4m36s
5. Using dynamic provisioning
5.1 Create the StorageClass
[root@master rbd]# cd /root/ceph-csi/examples/rbd
[root@master rbd]# grep -Ev "\s*#|^$" storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: <rbd-pool-name>
  imageFeatures: "layering"
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- discard
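The grep -Ev "\s*#|^$" filter used above simply hides commented and blank lines; a minimal demonstration:

```shell
# Lines containing a comment marker, and empty lines, are dropped
printf '# a comment\n\nreclaimPolicy: Delete\n' | grep -Ev "\s*#|^$"
```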
Copy this output into a new file and edit it:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: c1f213ae-2de3-11ef-ae15-00163e179ce3
  pool: rbd
  imageFeatures: "layering"
  csi.storage.k8s.io/provisioner-secret-name: csi-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi
  csi.storage.k8s.io/controller-expand-secret-name: csi-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: csi
  csi.storage.k8s.io/node-stage-secret-name: csi-secret
  csi.storage.k8s.io/node-stage-secret-namespace: csi
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- discard
Edit it to look like this: replace clusterID with your own, and use the Secret name and namespace created earlier (check with kubectl get secret -n csi).
5.2 Create a PVC
[root@master euler]# cp pvc.yaml sc-pvc.yaml
[root@master euler]# vim sc-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  storageClassName: "csi-rbd-sc"
  resources:
    requests:
      storage: 15Gi
- The storageClassName can be looked up with kubectl get sc
Now we only have to create the PVC; the PV is provisioned automatically:
[root@master euler]# kubectl apply -f sc-pvc.yaml
persistentvolumeclaim/sc-pvc created
[root@master euler]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound rbdpv 8Gi RWO 111m
sc-pvc Bound pvc-dfe3497f-9ed7-4961-9265-9e7242073c28 15Gi RWO csi-rbd-sc 2s
Back on the Ceph cluster, list the RBD images:
[root@ceph001 ~]# rbd ls
csi-vol-56e37046-b9d7-4ef1-a534-970a766744f3
test
[root@ceph001 ~]# rbd info csi-vol-56e37046-b9d7-4ef1-a534-970a766744f3
rbd image 'csi-vol-56e37046-b9d7-4ef1-a534-970a766744f3':
        size 15 GiB in 3840 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 38019ee708da
        block_name_prefix: rbd_data.38019ee708da
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Wed Jun 19 04:55:35 2024
        access_timestamp: Wed Jun 19 04:55:35 2024
        modify_timestamp: Wed Jun 19 04:55:35 2024
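The numbers in the rbd info output are consistent with each other: order 22 means 2^22-byte (4 MiB) objects, so a 15 GiB image is carved into 3840 of them:

```shell
# 15 GiB divided into 4 MiB (2^22-byte) objects
echo $(( 15 * 1024 * 1024 * 1024 / (1 << 22) ))
```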
5.3 Make the StorageClass the default
If it is not the default, every YAML file has to name the StorageClass explicitly; marking it as the default removes that requirement.
[root@master euler]# kubectl edit sc csi-rbd-sc
# Add this annotation under metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
5.4 Test the default StorageClass
[root@master euler]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-rbd-sc (default) rbd.csi.ceph.com Retain Immediate true 29m
The StorageClass listing now shows it marked as (default).
[root@master euler]# cp sc-pvc.yaml sc-pvc1.yaml
[root@master euler]# cat sc-pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-pvc1
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 20Gi
This file does not specify a storageClassName.
[root@master euler]# kubectl apply -f sc-pvc1.yaml
persistentvolumeclaim/sc-pvc1 created
[root@master euler]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound rbdpv 8Gi RWO 138m
sc-pvc Bound pvc-dfe3497f-9ed7-4961-9265-9e7242073c28 15Gi RWO csi-rbd-sc 27m
sc-pvc1 Bound pvc-167cf73b-4983-4c28-aa98-bb65bb966649 20Gi RWO csi-rbd-sc 6s
That's it: the claim bound through the default StorageClass without naming it.