Basic Usage of Volumes in the Container Orchestration System Kubernetes

Posted by 1874 on 2020-12-24

  In the previous post we covered Ingress resources in Kubernetes; for a recap see https://www.cnblogs.com/qiuhom-1874/p/14167581.html. Today let's talk about volumes in Kubernetes.

   Before we talk about using volumes in Kubernetes, let's first review volumes in docker. A docker image is built in layers, and every layer is read-only, which means its data cannot be modified. Only when an image is run as a container is a writable layer added on top, and once that container is deleted, the data in the writable layer is deleted along with it. To persist data in docker containers, docker introduced volumes. Docker manages volumes in two ways: in the first, the user manually mounts a directory on the host (which itself may be a directory mounted from some storage system) into a directory in the container; this is called a bind-mount volume. In the second, docker itself maintains the host-side directory and mounts it into the container; this is a docker-managed volume. Whichever way a volume is managed, it directly associates the container with a directory or file on the host. Volumes in docker solve the problem of keeping the data produced during a container's lifetime after the container terminates. Kubernetes has the same problem to solve, except that what Kubernetes deals with is the pod. The pod is the smallest schedulable unit in Kubernetes; once a pod is deleted, the containers running inside it are deleted too, so how can the data those containers produce be persisted? To answer that, let's first look at how a pod is put together.
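
  Tip: to make the two docker approaches concrete, here is a minimal sketch (the container names, the /data/html path and the webdata volume name are just illustrative):

# bind-mount volume: the user explicitly maps a host directory into the container
docker run -d --name web1 -v /data/html:/usr/share/nginx/html nginx:1.14-alpine

# docker-managed volume: docker creates and tracks the host-side directory itself
docker volume create webdata
docker run -d --name web2 -v webdata:/usr/share/nginx/html nginx:1.14-alpine
docker volume inspect webdata    # the Mountpoint field shows where the data lives on the host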

  Tip: a pod in Kubernetes can run one or more containers. When it runs several, one of them is the main container and the others assist it; those helpers are called sidecars. No matter how many containers a pod runs, at the very bottom there is always a pause container whose main job is to provide the pod's infrastructure, and all containers in the same pod share the pause container's network, IPC and UTS namespaces. So to provide a storage volume to the containers in a pod, we first attach the volume to the pause container, and the other containers then mount that volume from it.
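
  Tip: on a node that uses the docker runtime (as in this cluster), a quick way to confirm the pause container exists is to list the containers kubelet created; under dockershim their names start with k8s_POD (a quick check, not part of the demos below):

# every running pod on this node has exactly one pause ("sandbox") container
docker ps --filter "name=k8s_POD" --format "{{.Names}}\t{{.Image}}"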

  Tip: the pause container can be attached to storage A or to storage B, and once it is attached to a given storage, the other containers in the same pod can mount the directories or files it exposes. For Kubernetes, storage is not an internal component; it is an external system. That means that for Kubernetes to use an external storage system, the pause container's node must first have a driver that matches that storage system. Since all containers on a host share the host's kernel, as long as the host kernel has the driver for a given storage system, the pause container can use that driver to talk to the corresponding storage.

  Volume types

  We know that to use a storage volume in Kubernetes, a node must provide the driver for the corresponding storage system, and then every pod running on that node can use it. The question is: how does a pod actually use that storage system, and how are parameters passed to its driver? In Kubernetes everything is an object, so to use a storage volume the corresponding driver also has to be abstracted into a Kubernetes resource; when we use it, we simply instantiate that resource as an object. To reduce the complexity of consuming storage, Kubernetes ships with a number of built-in storage interfaces; different storage types use different interfaces and take different parameters. In addition, Kubernetes lets users plug in their own storage through the CSI interface.

  View the volume interfaces supported by Kubernetes

[root@master01 ~]# kubectl explain pod.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk    <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile    <Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   cephfs       <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder       <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

   configMap    <Object>
     ConfigMap represents a configMap that should populate this volume

   csi  <Object>
     CSI (Container Storage Interface) represents ephemeral storage that is
     handled by certain external CSI drivers (Beta feature).

   downwardAPI  <Object>
     DownwardAPI represents downward API about the pod that should populate this
     volume

   emptyDir     <Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

   ephemeral    <Object>
     Ephemeral represents a volume that is handled by a cluster storage driver
     (Alpha feature). The volume's lifecycle is tied to the pod that defines it
     - it will be created before the pod starts, and deleted when the pod is
     removed.

     Use this if: a) the volume is only needed while the pod runs, b) features
     of normal volumes like restoring from snapshot or capacity tracking are
     needed, c) the storage driver is specified through a storage class, and d)
     the storage driver supports dynamic volume provisioning through a
     PersistentVolumeClaim (see EphemeralVolumeSource for more information on
     the connection between this volume type and PersistentVolumeClaim).

     Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes
     that persist for longer than the lifecycle of an individual pod.

     Use CSI for light-weight local ephemeral volumes if the CSI driver is meant
     to be used that way - see the documentation of the driver for more
     information.

     A pod can use both types of ephemeral volumes and persistent volumes at the
     same time.

   fc   <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's host
     machine and then exposed to the pod.

   flexVolume   <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker      <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk    <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo      <Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs    <Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

   hostPath     <Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi        <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     https://examples.k8s.io/volumes/iscsi/README.md

   name <string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs  <Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim        <Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk attached
     and mounted on kubelets host machine

   portworxVolume       <Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected    <Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte      <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's lifetime

   rbd  <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

   scaleIO      <Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret       <Object>
     Secret represents a secret that should populate this volume. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#secret

   storageos    <Object>
     StorageOS represents a StorageOS volume attached and mounted on Kubernetes
     nodes.

   vsphereVolume        <Object>
     VsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine

[root@master01 ~]# 

  Tip: as the help output above shows, Kubernetes supports quite a few storage interfaces, and each interface is its own type. Roughly, we can group them into cloud storage, distributed storage, network storage, ephemeral storage, node-local storage, special-purpose storage and user-defined storage. For example awsElasticBlockStore, azureDisk, azureFile, gcePersistentDisk, vsphereVolume and cinder are cloud storage; cephfs, glusterfs and rbd are distributed storage; nfs, iscsi and fc are network storage; emptyDir is ephemeral storage; hostPath and local are node-local storage; csi is user-defined storage; configMap, secret and downwardAPI are special-purpose storage; and persistentVolumeClaim is the persistent volume claim.
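
  Tip: before writing a manifest for a particular type, you can drill one level further into kubectl explain to see exactly which parameters that interface takes, for example:

kubectl explain pod.spec.volumes.hostPath
kubectl explain pod.spec.volumes.emptyDir
kubectl explain pod.spec.volumes.nfs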

  Using volumes

  Example: create a pod that uses a hostPath volume

[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts: 
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]# 

  Tip: the manifest above creates a pod named vol-hostpath-demo that runs a container named nginx from the nginx:1.14-alpine image, and defines a volume named webhtml of type hostPath. Volumes are declared under spec with the volumes field, whose value is a list of objects. name is required; it names the volume and is the identifier a container references when it mounts the volume. Next we use the field that matches the storage interface we want; hostPath selects the hostPath interface. This interface takes two parameters: path specifies a directory or file path on the host, and type tells Kubernetes what to do when that path does not exist. type supports seven non-empty values (leaving it empty skips the check): DirectoryOrCreate means the path must be a directory and it is created if it does not exist on the host; Directory means the path must be an existing directory; FileOrCreate means the path must be a file and it is created if it does not exist; File means the path must be an existing file; Socket means the path must be an existing UNIX socket; CharDevice means the path must be an existing character device; BlockDevice means the path must be an existing block device. Defining volumes effectively attaches the external storage to the pod's pause container; whether and how the other containers in the pod use it depends on their volumeMounts. spec.containers.volumeMounts configures how a container mounts volumes: name and mountPath are required, name references the volume by its name, mountPath is the mount point inside the container, and readOnly controls whether the mount is read-only; the default is read-write, i.e. readOnly is false.
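
  Tip: as a further illustration of the type field, hostPath can also expose a single file from the node. The fragment below (hypothetical, not part of the demo above) mounts the node's /etc/localtime into the container read-only and uses type: File so the pod fails to start if the file does not exist:

    volumeMounts:
    - name: host-time
      mountPath: /etc/localtime
      readOnly: true
  volumes:
  - name: host-time
    hostPath:
      path: /etc/localtime
      type: File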

  Apply the manifest

[root@master01 ~]# kubectl apply -f hostPath-demo.yaml 
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod 
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
vol-hostpath-demo        1/1     Running   0          11s
[root@master01 ~]# kubectl describe pod/vol-hostpath-demo
Name:         vol-hostpath-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Wed, 23 Dec 2020 23:14:35 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.92
IPs:
  IP:  10.244.3.92
Containers:
  nginx:
    Container ID:   docker://eb8666714b8697457ce2a88271a4615f836873b4729b6a0938776e3d527c6536
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 23 Dec 2020 23:14:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhtml:
    Type:          HostPath (bare host directory volume)
    Path:          /vol/html/
    HostPathType:  DirectoryOrCreate
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  43s   default-scheduler  Successfully assigned default/vol-hostpath-demo to node03.k8s.org
  Normal  Pulled     42s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    41s   kubelet            Created container nginx
  Normal  Started    41s   kubelet            Started container nginx
[root@master01 ~]# 

  Tip: we can see the pod mounts the webhtml volume read-only; the webhtml volume is of type HostPath and its path is /vol/html/.

  Check which node the pod is running on

[root@master01 ~]# kubectl get pod vol-hostpath-demo -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
vol-hostpath-demo   1/1     Running   0          3m39s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]# 

  On node03, check whether the corresponding directory has been created

[root@node03 ~]# ll /
total 16
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 bin -> usr/bin
dr-xr-xr-x.   5 root root 4096 Sep 15 20:39 boot
drwxr-xr-x   20 root root 3180 Dec 23 23:10 dev
drwxr-xr-x.  80 root root 8192 Dec 23 23:10 etc
drwxr-xr-x.   2 root root    6 Nov  5  2016 home
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Sep 15 20:33 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Nov  5  2016 media
drwxr-xr-x.   2 root root    6 Nov  5  2016 mnt
drwxr-xr-x.   4 root root   35 Dec  8 14:25 opt
dr-xr-xr-x  141 root root    0 Dec 23 23:09 proc
dr-xr-x---.   4 root root  213 Dec 21 22:46 root
drwxr-xr-x   26 root root  780 Dec 23 23:13 run
lrwxrwxrwx.   1 root root    8 Sep 15 20:33 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Nov  5  2016 srv
dr-xr-xr-x   13 root root    0 Dec 23 23:09 sys
drwxrwxrwt.   9 root root  251 Dec 23 23:11 tmp
drwxr-xr-x.  13 root root  155 Sep 15 20:33 usr
drwxr-xr-x.  19 root root  267 Sep 15 20:38 var
drwxr-xr-x    3 root root   18 Dec 23 23:14 vol
[root@node03 ~]# ll /vol
total 0
drwxr-xr-x 2 root root 6 Dec 23 23:14 html
[root@node03 ~]# ll /vol/html/
total 0
[root@node03 ~]# 

  Tip: the /vol/html/ directory has been created on the node, and it is empty.

  Create a web page file in that directory on the node, then access the pod and see whether the page can be served

[root@node03 ~]# echo "this is test page from node03 /vol/html/test.html" > /vol/html/test.html
[root@node03 ~]# cat /vol/html/test.html
this is test page from node03 /vol/html/test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h     10.244.2.99   node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h     10.244.4.21   node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7m45s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.92/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]# 

  Tip: the page file created on the node is served normally when we access the pod.

  Test: delete the pod and check whether the directory on the node gets deleted

[root@master01 ~]# kubectl delete -f hostPath-demo.yaml 
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod 
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# ssh node03 
Last login: Wed Dec 23 23:18:51 2020 from master01
[root@node03 ~]# ll /vol/html/
total 4
-rw-r--r-- 1 root root 50 Dec 23 23:22 test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]# 

  Tip: after the pod is deleted, the directory on the node is not removed; the page file is still intact.

  Test: re-apply the manifest, access the pod again and see whether the page file content is still reachable

[root@master01 ~]# kubectl apply -f hostPath-demo.yaml 
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99   node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21   node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7s    10.244.3.93   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.93/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]# 

  Tip: the pod is scheduled onto node03 again, and accessing it returns the page file we created. But what if we explicitly pin the pod to node02; can it still reach that page file?

  Test: pin the pod to node02.k8s.org

[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts: 
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]# 

  Tip: to pin a pod to a specific node, set the nodeName field under spec to that node's hostname.
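
  Tip: note that nodeName bypasses the scheduler entirely. A softer alternative (a sketch, relying on the standard kubernetes.io/hostname label that kubelet sets on each node) is to let the scheduler do the placement via nodeSelector:

spec:
  nodeSelector:
    kubernetes.io/hostname: node02.k8s.org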

  Delete the existing pod and apply the new manifest

[root@master01 ~]# kubectl delete pod/vol-hostpath-demo
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml 
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          8s    10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: after applying the new manifest, the pod runs on node02.

  Access the pod and see whether test.html can be reached

[root@master01 ~]# curl 10.244.2.100/test.html
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@master01 ~]# 

  Tip: now the page file cannot be reached through the pod. The reason is simple: a hostPath volume maps a directory or file on the node into the pause container, and the pod's containers then mount it, so this type of volume cannot follow the pod across nodes. The pod now runs on node02, so it naturally cannot see a file that only exists on node03. Therefore, if we use hostPath volumes we either have to pin the pod to a node, or make sure the same files or directories exist on every Kubernetes node.

  Example: create a pod that uses an emptyDir volume

[root@master01 ~]# cat emptyDir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-emptydir-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: alpine
    image: alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /nginx/html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
        echo $(hostname) $(date) >> /nginx/html/index.html;
        sleep 10;
      done
  volumes:
  - name: web-cache-dir
    emptyDir: 
      medium: Memory
      sizeLimit: "10Mi"
[root@master01 ~]# 

  Tip: the manifest above defines a pod named vol-emptydir-demo that runs two containers, one named nginx and one named alpine, and both mount the same volume named web-cache-dir, whose type is emptyDir. To define an emptyDir volume we again give it a name under spec.volumes and use the emptyDir field to select the type. An emptyDir volume has two properties: medium specifies the storage medium, where Memory means the volume is backed by memory; its default value is "", which means the node's default medium. sizeLimit caps the size of the volume; it is empty by default, meaning no limit.

  Tip: inside this pod, the alpine container appends the hostname and the current time to /nginx/html/index.html every 10 seconds, while the nginx container mounts the same emptyDir volume at its web root. In short, alpine writes data into /nginx/html/index.html and nginx serves that same file as its web page.
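
  Tip: because medium: Memory backs the volume with tmpfs, once the pod is running you can verify the mount from inside either container (a quick check; the filesystem column reported by df should be tmpfs):

kubectl exec vol-emptydir-demo -c nginx -- df -h /usr/share/nginx/html
kubectl exec vol-emptydir-demo -c alpine -- tail -n 2 /nginx/html/index.html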

  Apply the manifest

[root@master01 ~]# kubectl apply -f emptyDir-demo.yaml
pod/vol-emptydir-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running             1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running             1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        0/2     ContainerCreating   0          8s    <none>         node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running             0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          16s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-emptydir-demo 
Name:         vol-emptydir-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 00:46:56 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.94
IPs:
  IP:  10.244.3.94
Containers:
  nginx:
    Container ID:   docker://58af9ef80800fb22543d1c80be58849f45f3d62f3b44101dbca024e0761cead5
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 00:46:57 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from web-cache-dir (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
  alpine:
    Container ID:  docker://327f110a10e8ef9edb5f86b5cb3dad53e824010b52b1c2a71d5dbecab6f49f05
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
    State:          Running
      Started:      Thu, 24 Dec 2020 00:47:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nginx/html from web-cache-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  web-cache-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  10Mi
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/vol-emptydir-demo to node03.k8s.org
  Normal  Pulled     51s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    51s   kubelet            Created container nginx
  Normal  Started    50s   kubelet            Started container nginx
  Normal  Pulling    50s   kubelet            Pulling image "alpine"
  Normal  Pulled     40s   kubelet            Successfully pulled image "alpine" in 10.163157508s
  Normal  Created    40s   kubelet            Created container alpine
  Normal  Started    40s   kubelet            Started container alpine
[root@master01 ~]# 

  Tip: the pod is up and running with two containers inside it; the nginx container mounts the web-cache-dir volume read-only, and the alpine container mounts it read-write; the volume type is emptyDir.

  Access the pod and see whether the index.html content in the volume can be reached

[root@master01 ~]# kubectl get pods -o wide              
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d      10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d      10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          4m38s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          77m     10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.94
vol-emptydir-demo Wed Dec 23 16:47:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:47 UTC 2020
[root@master01 ~]# 

  Tip: we can reach the content of index.html, and that content is generated dynamically by the alpine container. From this example it is easy to see that containers in the same pod can share the same volume.

  Example: create a pod that uses an nfs volume

[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]# 

  Tip: when defining an nfs volume, the path field under spec.volumes.nfs is required and specifies the exported path on the NFS server; the server field specifies the NFS server address. To use NFS as a pod's backing storage, we first need an NFS server with the corresponding directory exported.
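
  Tip: besides path and server, the nfs volume source also accepts an optional readOnly flag that mounts the export read-only for the whole pod, for example:

  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
      readOnly: true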

  Prepare the NFS server: install the nfs-utils package on the 192.168.0.99 server

[root@docker_registry ~]# ip a|grep 192.168.0.99
    inet 192.168.0.99/24 brd 192.168.0.255 scope global enp3s0
[root@docker_registry ~]# yum install nfs-utils -y
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
base                                                                                                                  | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                      | 3.5 kB  00:00:00     
epel                                                                                                                  | 4.7 kB  00:00:00     
extras                                                                                                                | 2.9 kB  00:00:00     
kubernetes/signature                                                                                                  |  844 B  00:00:00     
kubernetes/signature                                                                                                  | 1.4 kB  00:00:00 !!! 
mariadb-main                                                                                                          | 2.9 kB  00:00:00     
mariadb-maxscale                                                                                                      | 2.4 kB  00:00:00     
mariadb-tools                                                                                                         | 2.9 kB  00:00:00     
mongodb-org                                                                                                           | 2.5 kB  00:00:00     
proxysql_repo                                                                                                         | 2.9 kB  00:00:00     
updates                                                                                                               | 2.9 kB  00:00:00     
(1/6): docker-ce-stable/x86_64/primary_db                                                                             |  51 kB  00:00:00     
(2/6): kubernetes/primary                                                                                             |  83 kB  00:00:01     
(3/6): mongodb-org/primary_db                                                                                         |  26 kB  00:00:01     
(4/6): epel/x86_64/updateinfo                                                                                         | 1.0 MB  00:00:02     
(5/6): updates/7/x86_64/primary_db                                                                                    | 4.7 MB  00:00:01     
(6/6): epel/x86_64/primary_db                                                                                         | 6.9 MB  00:00:02     
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes                                                                                                                           612/612
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.66.el7_8 will be updated
---> Package nfs-utils.x86_64 1:1.3.0-0.68.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================
 Package                          Arch                          Version                                    Repository                   Size
=============================================================================================================================================
Updating:
 nfs-utils                        x86_64                        1:1.3.0-0.68.el7                           base                        412 k

Transaction Summary
=============================================================================================================================================
Upgrade  1 Package

Total download size: 412 k
Downloading packages:
No Presto metadata available for base
nfs-utils-1.3.0-0.68.el7.x86_64.rpm                                                                                   | 412 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:nfs-utils-1.3.0-0.68.el7.x86_64                                                                                         1/2 
  Cleanup    : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                                                                                       2/2 
  Verifying  : 1:nfs-utils-1.3.0-0.68.el7.x86_64                                                                                         1/2 
  Verifying  : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                                                                                       2/2 

Updated:
  nfs-utils.x86_64 1:1.3.0-0.68.el7                                                                                                          

Complete!
[root@docker_registry ~]# 

  Create the /data/html directory

[root@docker_registry ~]# mkdir /data/html -pv
mkdir: created directory ‘/data/html’
[root@docker_registry ~]# 

  Export the directory so that the Kubernetes cluster nodes can access it

[root@docker_registry ~]# cat /etc/exports
/data/html 192.168.0.0/24(rw,no_root_squash)
[root@docker_registry ~]# 

  Tip: the entry above exports /data/html read-write, without squashing root privileges, to every host in the 192.168.0.0/24 network (note that there must be no space between the client specification and the option list, otherwise the options would apply to a wildcard client instead of that network).
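
  Tip: if you change /etc/exports later while the NFS service is already running, you can re-export without a restart and check the result with the standard nfs-utils commands:

exportfs -arv    # re-read /etc/exports and re-export all entries, verbose
exportfs -v      # list what is currently exported and with which options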

  Start the NFS service

[root@docker_registry ~]# systemctl start nfs
[root@docker_registry ~]# ss -tnl
State       Recv-Q Send-Q                         Local Address:Port                                        Peer Address:Port              
LISTEN      0      128                                127.0.0.1:1514                                                   *:*                  
LISTEN      0      128                                        *:111                                                    *:*                  
LISTEN      0      128                                        *:20048                                                  *:*                  
LISTEN      0      64                                         *:42837                                                  *:*                  
LISTEN      0      5                              192.168.122.1:53                                                     *:*                  
LISTEN      0      128                                        *:22                                                     *:*                  
LISTEN      0      128                             192.168.0.99:631                                                    *:*                  
LISTEN      0      100                                127.0.0.1:25                                                     *:*                  
LISTEN      0      64                                         *:2049                                                   *:*                  
LISTEN      0      128                                        *:59396                                                  *:*                  
LISTEN      0      128                                       :::34922                                                 :::*                  
LISTEN      0      128                                       :::111                                                   :::*                  
LISTEN      0      128                                       :::20048                                                 :::*                  
LISTEN      0      128                                       :::80                                                    :::*                  
LISTEN      0      128                                       :::22                                                    :::*                  
LISTEN      0      100                                      ::1:25                                                    :::*                  
LISTEN      0      128                                       :::443                                                   :::*                  
LISTEN      0      128                                       :::4443                                                  :::*                  
LISTEN      0      64                                        :::2049                                                  :::*                  
LISTEN      0      64                                        :::36997                                                 :::*                  
[root@docker_registry ~]# 

  Tip: NFS listens on TCP port 2049; after starting the service, make sure this port is in the LISTEN state. The NFS server is now ready.

  Install the nfs-utils package on the Kubernetes nodes to provide the client-side support they need to mount NFS

yum install nfs-utils -y

  Verify: on node01, check whether the directory exported by the NFS server can be mounted normally

[root@node01 ~]# showmount -e 192.168.0.99
Export list for 192.168.0.99:
/data/html (everyone)
[root@node01 ~]# mount -t nfs 192.168.0.99:/data/html /mnt
[root@node01 ~]# mount |grep /data/html
192.168.0.99:/data/html on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.44,local_lock=none,addr=192.168.0.99)
[root@node01 ~]# umount /mnt
[root@node01 ~]# mount |grep /data/html
[root@node01 ~]# 

  Tip: node01 can see the directory exported by the NFS server and mount it normally. Once the remaining nodes have finished installing nfs-utils, we can apply the manifest on the master.

  Apply the manifest

[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d1h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d1h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          141m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          10s    10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-nfs-demo
Name:         vol-nfs-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 01:55:51 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  nginx:
    Container ID:   docker://72227e3a94622a4ea032a1ab0d7d353aef167d5a0e80c3739e774050eaea3914
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 01:55:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhtml:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/html/
    ReadOnly:  false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  28s   default-scheduler  Successfully assigned default/vol-nfs-demo to node03.k8s.org
  Normal  Pulled     27s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    27s   kubelet            Created container nginx
  Normal  Started    27s   kubelet            Started container nginx
[root@master01 ~]# 

  Tip: the pod is running normally and the container inside it has mounted the exported directory.

  Create an index.html file in the exported directory on the NFS server

[root@docker_registry ~]# cd /data/html
[root@docker_registry html]# echo "this is test file from nfs server ip addr is 192.168.0.99" > index.html
[root@docker_registry html]# cat index.html
this is test file from nfs server ip addr is 192.168.0.99
[root@docker_registry html]# 

  Access the pod and see whether the file content can be reached

[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          145m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          4m6s   10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]# 

  Tip: the file content can be reached through the pod.

  Delete the pod

[root@master01 ~]# kubectl delete -f nfs-demo.yaml
pod "vol-nfs-demo" deleted
[root@master01 ~]# kubectl get pod -o wide        
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          149m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#

  Pin the pod to node02.k8s.org, re-apply the manifest to recreate the pod, then access it again and see whether the file is still reachable

[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          151m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          8s     10.244.2.101   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.2.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]# 

  Tip: with the pod pinned to node02, accessing it still returns the file stored on the NFS server. The tests above show that an nfs volume decouples data from the pod's lifecycle and persists what the pod's containers produce to the NFS server, regardless of which node the pod runs on. Of course, the NFS server here is a single point of failure: if it goes down, the data the pod produced at runtime will all be lost. So for external storage we should choose a system that provides data redundancy and is supported by the Kubernetes cluster, such as cephfs or glusterfs.
