Running a stateful container with file system configurations

This section provides an example of how to run a stateful container with a file system configuration.

Before you begin

Before starting this procedure, be sure to review all the information detailed in Running a stateful container with file system configurations.

Procedure

  1. Open a command-line terminal.
  2. Create an array secret.
    $> cat demo-secret-svc-array.yaml
    kind: Secret
    apiVersion: v1
    metadata:
      name: svc-array
      namespace: csi-ns
    type: Opaque
    stringData:
       management_address: <ADDRESS-1, ADDRESS-2> # replace with valid storage system management address
       username: <USERNAME>                   # replace with valid username
    data:
       password: <PASSWORD base64>            # replace with valid password, base64-encoded
      
    $> kubectl create -f demo-secret-svc-array.yaml
    secret/svc-array created
    
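    The password in the Secret's data section must be base64-encoded. A minimal sketch of producing such a value on Linux (demo-password is a hypothetical placeholder, not a real credential):

    ```shell
    # Encode a placeholder password for the Secret's "data" section.
    # "demo-password" is a sample value, not a real credential.
    echo -n 'demo-password' | base64

    # Decoding reverses the encoding, confirming the round trip.
    echo -n 'demo-password' | base64 | base64 --decode
    ```

    Note that the -n flag is important: without it, echo appends a trailing newline that becomes part of the encoded value.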
  3. Create a storage class.
    $> cat demo-storageclass-gold-svc.yaml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gold
    provisioner: block.csi.ibm.com
    parameters:
      SpaceEfficiency: deduplicated
      pool: gold

      csi.storage.k8s.io/provisioner-secret-name: svc-array
      csi.storage.k8s.io/provisioner-secret-namespace: csi-ns
      csi.storage.k8s.io/controller-publish-secret-name: svc-array
      csi.storage.k8s.io/controller-publish-secret-namespace: csi-ns

      csi.storage.k8s.io/fstype: xfs   # Optional. Values: ext4 or xfs. The default is ext4.
      volume_name_prefix: demo         # Optional.
    
    $> kubectl create -f demo-storageclass-gold-svc.yaml
    storageclass.storage.k8s.io/gold created
  4. Create a PVC named demo-pvc-file-system, with a size of 1 Gi, using demo-pvc-file-system.yaml.
    $> cat demo-pvc-file-system.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-pvc-file-system
    spec:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: gold
    
    $> kubectl apply -f demo-pvc-file-system.yaml
    persistentvolumeclaim/demo-pvc-file-system created
  5. Display the existing PVC and the created persistent volume (PV).
    $> kubectl get pv,pvc
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
    persistentvolume/pvc-828ce909-6eb2-11ea-abc8-005056a49b44   1Gi        RWO            Delete           Bound    default/demo-pvc-file-system   gold                    109m

    NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/demo-pvc-file-system   Bound    pvc-828ce909-6eb2-11ea-abc8-005056a49b44   1Gi        RWO            gold           78s
    
    $> kubectl describe persistentvolume/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
    Name:            pvc-828ce909-6eb2-11ea-abc8-005056a49b44
    Labels:          <none>
    Annotations:     pv.kubernetes.io/provisioned-by: block.csi.ibm.com
    Finalizers:      [kubernetes.io/pv-protection external-attacher/block-csi-ibm-com]
    StorageClass:    gold
    Status:          Bound
    Claim:           default/demo-pvc-file-system
    Reclaim Policy:  Delete
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:
    Source:
        Type:              CSI (a Container Storage Interface (CSI) volume source)
        Driver:            block.csi.ibm.com
        VolumeHandle:      SVC:60050760718106998000000000000543
        ReadOnly:          false
        VolumeAttributes:      array_address=baremetal10-cluster.xiv.ibm.com
                               pool_name=csi_svcPool
                               storage.kubernetes.io/csiProvisionerIdentity=1585146948772-8081-block.csi.ibm.com
                               storage_type=SVC
                               volume_name=demo_pvc-828ce909-6eb2-11ea-abc8-005056a49b44
    Events:                <none>
  6. Create a StatefulSet, using the demo-statefulset-file-system.yaml file.
    $> cat demo-statefulset-file-system.yaml
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: demo-statefulset-file-system
    spec:
      selector:
        matchLabels:
          app: demo-statefulset
      serviceName: demo-statefulset
      replicas: 1
      template:
        metadata:
          labels:
            app: demo-statefulset
        spec:
          containers:
          - name: container-demo
            image: registry.access.redhat.com/ubi8/ubi:latest
            command: [ "/bin/sh", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
            volumeMounts:
              - name: demo-volume
                mountPath: "/data"
          volumes:
          - name: demo-volume
            persistentVolumeClaim:
              claimName: demo-pvc-file-system

    #      nodeSelector:
    #        kubernetes.io/hostname: NODESELECTOR

    $> kubectl create -f demo-statefulset-file-system.yaml
    statefulset/demo-statefulset-file-system created
  7. Display the newly created pod and make sure that its status is Running.
    $> kubectl get pod demo-statefulset-file-system-0
    NAME                 READY   STATUS    RESTARTS   AGE
    demo-statefulset-file-system-0   1/1     Running   0          43s
  8. Write data to the persistent volume of the pod.
    The PV is mounted inside the pod at /data.
    $> kubectl exec demo-statefulset-file-system-0 -- touch /data/FILE
    $> kubectl exec demo-statefulset-file-system-0 -- ls /data/FILE
    /data/FILE
  9. Log into the worker node that has the running pod and display the newly attached volume on the node.
    1. Verify which worker node is running the pod demo-statefulset-file-system-0.
      $> kubectl describe pod demo-statefulset-file-system-0 | grep "^Node:"
      Node: k8s-node1/hostname
    2. Establish an SSH connection and log into the worker node.
      $> ssh root@k8s-node1
    3. List the multipath devices on the worker node.
      $>[k8s-node1]  multipath -ll
      mpathz (828ce9096eb211eaabc8005056a49b44) dm-3 IBM     ,2145 (for SVC)         
      size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
      `-+- policy='service-time 0' prio=1 status=active
        |- 37:0:0:12 sdc 8:32 active ready running
        `- 36:0:0:12 sdb 8:16 active ready running
      
      $>[k8s-node1] ls -l /dev/mapper/mpathz
      lrwxrwxrwx. 1 root root 7 Aug 12 19:29 /dev/mapper/mpathz -> ../dm-3
    4. List the physical devices of the multipath device mpathz and its mountpoint on the host. (This mountpoint backs /data inside the stateful pod.)
      $>[k8s-node1]  lsblk /dev/sdb /dev/sdc
      NAME     MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
      sdb        8:16   0   1G  0 disk  
      └─mpathz 253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
      sdc        8:32   0   1G  0 disk  
      └─mpathz 253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
    5. View the PV mounted on this host.
      $>[k8s-node1]  df | egrep pvc
      /dev/mapper/mpathz      1038336    32944   1005392   4% /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44/mount
    6. View the driver's internal metadata file, .stageInfo.json, which is stored in the Kubernetes PV node stage path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC-ID>/globalmount/.stageInfo.json. The CSI driver creates this file during the NodeStageVolume API call and uses it in the later NodePublishVolume, NodeUnpublishVolume, and NodeUnstageVolume calls.
      $> cat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-828ce909-6eb2-11ea-abc8-005056a49b44/globalmount/.stageInfo.json
      {"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}
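    When debugging on the worker node, the .stageInfo.json content shown above can be parsed with standard tools. A minimal sketch using only POSIX sed, fed the sample content inline so it is self-contained:

    ```shell
    # Extract the mpathDevice field from a sample .stageInfo.json payload.
    # The JSON string below is copied from the output above; on a real node
    # you would read the file from the PV's globalmount directory instead.
    stage_info='{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}'
    echo "$stage_info" | sed 's/.*"mpathDevice":"\([^"]*\)".*/\1/'
    # prints: dm-3
    ```

    The extracted device name (dm-3) matches the multipath device listed by multipath -ll in the earlier sub-step.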
  10. Delete the StatefulSet and then recreate it, in order to validate that the data (/data/FILE) remains in the persistent volume.
    $> kubectl delete statefulset/demo-statefulset-file-system
    statefulset/demo-statefulset-file-system deleted
    
    ### Wait until the pod is deleted. Once the pod is deleted, the command returns 'pods "demo-statefulset-file-system-0" not found'.
    $> kubectl get pod demo-statefulset-file-system-0
    NAME                             READY   STATUS        RESTARTS   AGE
    demo-statefulset-file-system-0   0/1     Terminating   0          91m
    
    
    ### Establish an SSH connection and log into the worker node in order to verify that the multipath device was deleted and that the PV mountpoint no longer exists.
    
    $> ssh root@k8s-node1
    
    $>[k8s-node1] df | egrep pvc
    $>[k8s-node1] multipath -ll
    $>[k8s-node1] lsblk /dev/sdb /dev/sdc
    lsblk: /dev/sdb: not a block device
    lsblk: /dev/sdc: not a block device
    
    
    ### Recreate the StatefulSet and verify that /data/FILE exists.
    $> kubectl create -f demo-statefulset-file-system.yaml
    statefulset/demo-statefulset-file-system created

    $> kubectl exec demo-statefulset-file-system-0 -- ls /data/FILE
    /data/FILE
  11. Delete the StatefulSet and the PVC.
    $> kubectl delete statefulset/demo-statefulset-file-system
    statefulset/demo-statefulset-file-system deleted
    
    $> kubectl get statefulset/demo-statefulset-file-system
    No resources found.
    
    $> kubectl delete pvc/demo-pvc-file-system
    persistentvolumeclaim/demo-pvc-file-system deleted
    
    $> kubectl get pv,pvc
    No resources found.