Kubernetes storage with emptyDir, hostPath, nfs, PV and PVC
Three basic volume types: emptyDir, gitRepo, hostPath
emptyDir: one pod runs two containers — one container serves requests, the other writes files into the shared volume; when the pod is deleted, the volume is deleted with it.
gitRepo: clones a git repository into the volume when the pod starts (deprecated in current Kubernetes).
hostPath: a path on the host; the volume survives pod deletion (the path must exist on every node the pod can be scheduled to).
nfs: shared storage (multiple pods can use directories on the same shared export).
Help:
[root@k8s1 ~]# kubectl explain pods.spec.volumes.persistentVolumeClaim --PVC help
[root@k8s1 ~]# kubectl explain pods.spec.volumes --volumes help
[root@k8s1 ~]# kubectl explain pv --PV help
1. Using emptyDir as storage (one pod, two containers: one writes, one serves)
[root@k8s1 ~]# vim 11.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo            # define a pod
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp             # define a container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html   # mount the html volume at nginx's default docroot
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html            # the busybox container mounts the same html volume at /data
      mountPath: /data/
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/index.html;sleep 2;done"]
  volumes:                  # define the html volume
  - name: html
    emptyDir: {}
[root@k8s1 ~]# kubectl apply -f 11.yaml
pod/pod-demo created
[root@k8s1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 2/2 Running 0 103s 10.244.1.13 k8s2 <none> <none>
[root@k8s1 ~]# kubectl exec -it pod-demo -c busybox -- /bin/sh
/ # cat /data/index.html
Fri Feb 22 09:39:53 UTC 2019
Fri Feb 22 09:39:55 UTC 2019
Fri Feb 22 09:39:57 UTC 2019
Fri Feb 22 09:39:59 UTC 2019
[root@k8s1 ~]# curl 10.244.1.13 --the pod IP from kubectl get pods above
Fri Feb 22 09:39:53 UTC 2019
Fri Feb 22 09:39:55 UTC 2019
Fri Feb 22 09:39:57 UTC 2019
Fri Feb 22 09:39:59 UTC 2019
Fri Feb 22 09:40:01 UTC 2019
Fri Feb 22 09:40:03 UTC 2019
Fri Feb 22 09:40:05 UTC 2019
[root@k8s1 ~]#
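By default an emptyDir lives on the node's disk. The volume can instead be backed by tmpfs and capped in size; a minimal sketch of just the volume stanza (the fields are standard Kubernetes API, the volume name and values are illustrative):

```yaml
volumes:
- name: cache
  emptyDir:
    medium: Memory      # back the volume with tmpfs instead of node disk
    sizeLimit: 128Mi    # pod is evicted if the volume grows past this
```

A Memory-backed emptyDir counts against the container's memory limit, so size it accordingly.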
2. Using hostPath as storage (if a node goes down, the data stored on that node becomes unavailable)
On node1:
[root@k8s2 ~]# mkdir -p /data/pod
[root@k8s2 ~]# cat /data/pod/index.html --write different content on each node so they can be told apart
node1
[root@k8s2 ~]#
On node2:
[root@k8s3 ~]# mkdir -p /data/pod
[root@k8s3 ~]# cat /data/pod/index.html
node2
[root@k8s3 ~]#
On the master:
[root@k8s1 ~]# vim 12.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html            # mount the html volume
      mountPath: /usr/share/nginx/html   # nginx docroot
  volumes:
  - name: html
    hostPath:
      path: /data/pod/      # host path backing the html volume (must be created on whichever node runs the pod)
      type: DirectoryOrCreate
[root@k8s1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-demo 2/2 Running 0 64m 10.244.1.13 k8s2 <none> <none>
pod-vol-hostpath 1/1 Running 0 4s 10.244.2.22 k8s3 <none> <none>
[root@k8s1 ~]# curl 10.244.2.22 --the pod is on node2, so this returns node2's page; on node1 it would return node1's content
node2
[root@k8s1 ~]#
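The hostPath `type` field tells the kubelet what to validate (or create) before mounting. A sketch of the common values, using the same volume as above:

```yaml
volumes:
- name: html
  hostPath:
    path: /data/pod/
    # Common type values:
    #   DirectoryOrCreate - create the directory (mode 0755) if it does not exist
    #   Directory         - directory must already exist, or the pod fails to start
    #   FileOrCreate      - create an empty file if it does not exist
    #   File              - file must already exist
    #   ""                - no check (default)
    type: DirectoryOrCreate
```

DirectoryOrCreate, as used here, avoids the manual `mkdir -p /data/pod` on each node, though the index.html still has to be written by hand.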
3. Using NFS shared storage
On the NFS server:
[root@liutie1 ~]# mkdir /data/v6
[root@liutie1 ~]# vim /etc/exports
/data/v6 172.16.8.0/24(rw,no_root_squash)
[root@liutie1 ~]# systemctl restart nfs
[root@liutie1 ~]# exportfs -arv
exporting 172.16.8.0/24:/data/v6
[root@liutie1 ~]# showmount -e
Export list for liutie1:
/data/v6 172.16.8.0/24
[root@liutie1 ~]#
On the k8s nodes:
[root@k8s1 ~]# mkdir /data/v6 --create a local mount point for testing
[root@k8s1 ~]# mount.nfs 172.16.8.108:/data/v6 /data/v6 --test a manual mount
[root@k8s1 ~]# umount /data/v6
[root@k8s1 ~]# vim nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: pod-nfs
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html1
    nfs:
      path: /data/v6
      server: 172.16.8.108
[root@k8s1 ~]# kubectl apply -f nfs.yaml
pod/pod-vol-nfs created
[root@k8s1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-vol-nfs 1/1 Running 0 2m21s 10.244.1.78 k8s2 <none> <none>
[root@k8s1 ~]#
Create a file on the NFS export:
[root@liutie1 ~]# cd /data/v6/
[root@liutie1 v6]# cat index.html
nfs store
[root@liutie1 v6]#
Fetch the page from a k8s node:
[root@k8s1 ~]# curl 10.244.1.78 --the pod's IP address
nfs store
[root@k8s1 ~]#
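Unlike hostPath, the NFS export is reachable from every node, so replicas scheduled anywhere serve the same content. A hedged sketch of a Deployment sharing the same export (the Deployment name and replica count are illustrative, not from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-demo          # illustrative name
spec:
  replicas: 3             # all replicas mount the same /data/v6 export
  selector:
    matchLabels:
      app: nfs-demo
  template:
    metadata:
      labels:
        app: nfs-demo
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html1
        nfs:
          path: /data/v6
          server: 172.16.8.108
```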
4. NFS-backed PV and PVC (fixed-size volumes)
On the NFS server:
[root@liutie1 ~]# mkdir /data/v{1,2,3,4,5} --create the directories on the storage server
[root@liutie1 ~]# yum install nfs* -y --install the NFS packages
[root@liutie1 ~]# vim /etc/exports --export the directories
/data/v1 172.16.8.0/24(rw,no_root_squash)
/data/v2 172.16.8.0/24(rw,no_root_squash)
/data/v3 172.16.8.0/24(rw,no_root_squash)
/data/v4 172.16.8.0/24(rw,no_root_squash)
/data/v5 172.16.8.0/24(rw,no_root_squash)
[root@liutie1 ~]# exportfs -arv
exporting 172.16.8.0/24:/data/v5
exporting 172.16.8.0/24:/data/v4
exporting 172.16.8.0/24:/data/v3
exporting 172.16.8.0/24:/data/v2
exporting 172.16.8.0/24:/data/v1
[root@liutie1 ~]# showmount -e
Export list for liutie1:
/data/v5 172.16.8.0/24
/data/v4 172.16.8.0/24
/data/v3 172.16.8.0/24
/data/v2 172.16.8.0/24
/data/v1 172.16.8.0/24
[root@liutie1 ~]#
On every node:
[root@k8s2 ~]# yum install nfs-common nfs-utils -y --every node must have the nfs-utils package installed, or mounting will fail
On the master:
[root@k8s1 ~]# yum install -y nfs-utils
[root@k8s1 ~]# kubectl explain PersistentVolume --help info
[root@k8s1 ~]# vim pv.yaml --expose the remote NFS directories as PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/v1
    server: 172.16.8.108
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/v2
    server: 172.16.8.108
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 15Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/v3
    server: 172.16.8.108
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/v4
    server: 172.16.8.108
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/v5
    server: 172.16.8.108
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 13Gi
[root@k8s1 ~]# kubectl apply -f pv.yaml --create the PVs
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@k8s1 ~]# kubectl get pv --list the PVs
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 2m40s
pv002 15Gi RWO,RWX Retain Available 2m40s
pv003 1Gi RWO,RWX Retain Available 2m40s
pv004 20Gi RWO,RWX Retain Available 2m40s
pv005 13Gi RWO,RWX Retain Available 2m40s
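The `name` labels attached to the PVs above are not used by the claim in the next step, but a claim can pin itself to a specific labeled PV via `spec.selector`; a hedged sketch (the claim name `pinned-pvc` is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-pvc        # illustrative name
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  selector:
    matchLabels:
      name: pv002         # bind only to the PV labeled name=pv002
  resources:
    requests:
      storage: 6Gi
```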
[root@k8s1 ~]# vim pvc.yaml --create a PVC requesting 6Gi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc              # define a PVC named mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod                  # define a pod that consumes the PVC
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html           # mount the PVC-backed volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc     # reference the mypvc claim above
[root@k8s1 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vol-pvc created
[root@k8s1 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 8m31s
pv002 15Gi RWO,RWX Retain Available 8m31s
pv003 1Gi RWO,RWX Retain Available 8m31s
pv004 20Gi RWO,RWX Retain Available 8m31s
pv005 13Gi RWO,RWX Retain Bound default/mypvc 8m31s --Bound means the PV has been claimed
[root@k8s1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv005 13Gi RWO,RWX 2m31s --mypvc bound to pv005: the controller picks the smallest PV that satisfies the 6Gi ReadWriteMany request, so 13Gi pv005 wins over 15Gi pv002 and 20Gi pv004
[root@k8s1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-demo 2/2 Running 0 141m
pod-vol-hostpath 1/1 Running 0 77m
pod-vol-pvc 1/1 Running 0 4s
[root@k8s1 ~]# kubectl describe pods pod-vol-pvc --show the details
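The PVs above were created without an explicit reclaim policy, so they default to Retain: deleting mypvc moves pv005 to the Released state and keeps the data on /data/v5, and the PV must be cleaned up manually before it can be bound again. The policy can be set per PV; a sketch (the PV name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo    # illustrative name
spec:
  nfs:
    path: /data/v1
    server: 172.16.8.108
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain   # Retain (default for static PVs), Delete, or Recycle (deprecated)
```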
From the ITPUB blog, link: http://blog.itpub.net/25854343/viewspace-2641980/. Please cite the source when reprinting; otherwise legal liability may be pursued.