Using GlusterFS for Dynamic Persistent Storage in k8s

Published by 店家小二 on 2018-12-17

Introduction

This article describes how to use GlusterFS to provide dynamic PV provisioning for k8s. GlusterFS supplies the underlying storage, while Heketi exposes a RESTful API for GlusterFS that makes it easy to manage. All three k8s PV access modes are supported: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Access modes are only a capability description and are not enforced; if a PV is used in a way that does not match the PVC declaration, the storage provider is responsible for any runtime access errors. For example, if a PVC's access mode is set to ReadOnlyMany, a pod that mounts it can still write to it; to make it truly read-only, you must specify the readOnly: true parameter when mounting the PVC.
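For example, a minimal sketch of the pod-side volume definition (reusing the gluster1 PVC created later in this article); setting readOnly: true on the claim reference is what actually enforces read-only access:

volumes:
- name: gluster-vol1
  persistentVolumeClaim:
    claimName: gluster1
    readOnly: true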

Installation

Vagrantfile used for this experiment

# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
    (1..3).each do |i|
      config.vm.define "lab#{i}" do |node|
        node.vm.box = "centos-7.4-docker-17"
        node.ssh.insert_key = false
        node.vm.hostname = "lab#{i}"
        node.vm.network "private_network", ip: "11.11.11.11#{i}"
        node.vm.provision "shell",
          inline: "echo hello from node #{i}"
        node.vm.provider "virtualbox" do |v|
          v.cpus = 2
          v.customize ["modifyvm", :id, "--name", "lab#{i}", "--memory", "3096"]
          file_to_disk = "lab#{i}_vdb.vdi"
          unless File.exist?(file_to_disk)
            # 50GB
            v.customize ["createhd", "--filename", file_to_disk, "--size", 50 * 1024]
          end
          v.customize ["storageattach", :id, "--storagectl", "IDE", "--port", 1, "--device", 0, "--type", "hdd", "--medium", file_to_disk]
        end
      end
    end
end
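A sketch of bringing the cluster up (assuming Vagrant and VirtualBox are already installed):

# run from the directory containing the Vagrantfile
vagrant up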

Environment configuration notes

# To install GlusterFS, each node needs the dm_thin_pool kernel module loaded in advance
modprobe dm_thin_pool

# Configure the module to load automatically at boot
cat >/etc/modules-load.d/glusterfs.conf<<EOF
dm_thin_pool
EOF

# Install glusterfs-fuse
yum install -y glusterfs-fuse
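# Optional sanity check: confirm the module is loaded
lsmod | grep dm_thin_pool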

Install GlusterFS and Heketi

# Install the heketi client
# https://github.com/heketi/heketi/releases
# Download the matching release from GitHub
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
cp heketi-client/bin/heketi-cli /usr/local/bin

# Check the version
heketi-cli -v

# All of the following deployment steps are executed in this directory
cd heketi-client/share/heketi/kubernetes

# Deploy GlusterFS in k8s
kubectl create -f glusterfs-daemonset.json

# List the nodes
kubectl get nodes

# Label the nodes that provide storage
kubectl label node lab1 lab2 lab3 storagenode=glusterfs

# Check GlusterFS pod status
kubectl get pods -o wide

# Deploy the heketi server
# Configure permissions for the heketi server
kubectl create -f heketi-service-account.json
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

# Create the config secret
kubectl create secret generic heketi-config-secret --from-file=./heketi.json

# Bootstrap deployment
kubectl create -f heketi-bootstrap.json

# Check heketi bootstrap status
kubectl get pods -o wide
kubectl get svc

# Set up port forwarding to the heketi server
HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
kubectl port-forward $HEKETI_BOOTSTRAP_POD 58080:8080

# Test access
# In another terminal
curl http://localhost:58080/hello
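# A healthy heketi server answers with a greeting like "Hello from Heketi"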

# Configure GlusterFS
# The hostnames/manage field must match the node names shown by kubectl get node
# hostnames/storage specifies the storage network IP; this experiment uses the same IPs as the k8s cluster
cat >topology.json<<EOF
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab1"
              ],
              "storage": [
                "11.11.11.111"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab2"
              ],
              "storage": [
                "11.11.11.112"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab3"
              ],
              "storage": [
                "11.11.11.113"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
EOF
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli topology load --json=topology.json
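# Optionally verify that all nodes and devices were registered
heketi-cli topology info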

# Use Heketi to create a volume for storing the Heketi database
heketi-cli setup-openshift-heketi-storage
kubectl create -f heketi-storage.json

# Check status
# Wait until all jobs are complete (status Completed)
# before proceeding with the following steps
kubectl get pods
kubectl get job

# Delete the intermediate resources created during bootstrap
kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

# Deploy the persistent heketi server
kubectl create -f heketi-deployment.json

# Check heketi server status
kubectl get pods -o wide
kubectl get svc

# Check heketi status information
# Set up port forwarding to the heketi server
HEKETI_POD=$(kubectl get pods | grep heketi | awk '{print $1}')
kubectl port-forward $HEKETI_POD 58080:8080
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli cluster list
heketi-cli volume list
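# To inspect a single volume, pass an id taken from the list above, e.g.
# heketi-cli volume info <volume-id>   (<volume-id> is a placeholder)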

Testing

# Create the StorageClass
# Since authentication is not enabled,
# restuser and restuserkey can be set to arbitrary values
HEKETI_SERVER=$(kubectl get svc | grep heketi | head -1 | awk '{print $3}')
echo $HEKETI_SERVER
cat >storageclass-glusterfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://$HEKETI_SERVER:8080"
  restauthenabled: "false"
  restuser: "will"
  restuserkey: "will"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
EOF
kubectl create -f storageclass-glusterfs.yaml

# Check
kubectl get sc

# Create a PVC to test
cat >gluster-pvc-test.yaml<<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: gluster1
 annotations:
   volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
 accessModes:
  - ReadWriteOnce
 resources:
   requests:
     storage: 5Gi
EOF
kubectl apply -f gluster-pvc-test.yaml
 
# Check
kubectl get pvc
kubectl get pv
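# A successful dynamic provision shows the PVC as Bound,
# with an automatically created PV bound to it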
 
# Create an nginx pod to test mounting
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
EOF
kubectl apply -f nginx-pod.yaml
 
# Check
kubectl get pods -o wide
 
# Write some content into the mounted volume
kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo Hello World from GlusterFS!!! > /usr/share/nginx/html/index.html'
 
# Access test
POD_IP=$(kubectl get pods -o wide | grep nginx-pod1 | awk '{print $(NF-1)}')
curl http://$POD_IP
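# Expected response: Hello World from GlusterFS!!!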
 
# View the file content from a glusterfs node pod
GLUSTERFS_POD=$(kubectl get pod | grep glusterfs | head -1 | awk '{print $1}')
kubectl exec -ti $GLUSTERFS_POD -- /bin/sh
mount | grep heketi
cat /var/lib/heketi/mounts/vg_56033aa8a9131e84faa61a6f4774d8c3/brick_1ac5f3a0730457cf3fcec6d881e132a2/brick/index.html
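# Still inside the glusterfs pod, the dynamically created volume can also be
# listed with the standard gluster CLI as a quick sanity check
gluster volume list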

Reposted from Juejin: k8s使用glusterfs實現動態持久化儲存

