Running a stateful container on IBM FlashSystem A9000R

Use this information as a sample of how to run a stateful container on an IBM FlashSystem A9000R storage service with the IBM block storage CSI driver.

About this task

This example illustrates a basic configuration required for running a stateful container with volumes provisioned on an IBM FlashSystem A9000R storage service.
  • Creating a Kubernetes Secret a9000-array1 for the storage system.
  • Creating a storage class gold.
  • Creating a PersistentVolumeClaim (PVC) demo-pvc that uses the storage class gold, and displaying details of the created PVC and persistent volume (PV).
  • Creating a StatefulSet application demo-statefulset and observing the mountpoint / multipath device that the driver created.
  • Writing data inside demo-statefulset, then deleting and recreating demo-statefulset to verify that the data persists.

Procedure

  1. Open a command-line terminal.
  2. Create an array secret.
    $> cat demo-secret-a9000-array1.yaml
    kind: Secret
    apiVersion: v1
    metadata:
      name: a9000-array1
      namespace: kube-system
    type: Opaque
    stringData:
       management_address: <VALUE-2,VALUE-3> # replace with valid storage system management address
       username: <VALUE-4>                   # replace with valid username
    data:
       password: <VALUE-5 base64>            # replace with valid password  
    $> kubectl create -f demo-secret-a9000-array1.yaml
    secret/a9000-array1 created
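    Note that the password under the Secret's data section must be base64-encoded (values under stringData are taken as plain text). A quick way to generate the encoded value, assuming a placeholder password Passw0rd:

```shell
# Base64-encode the storage system password for the Secret's data field.
# 'Passw0rd' is a placeholder; replace it with the real password.
# The -n flag prevents a trailing newline from being encoded.
echo -n 'Passw0rd' | base64
```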
    
  3. Create a storage class.
    $> cat demo-storageclass-gold-A9000R.yaml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gold
    provisioner: block.csi.ibm.com
    parameters:
      pool: gold
    
      csi.storage.k8s.io/provisioner-secret-name: a9000-array1
      csi.storage.k8s.io/provisioner-secret-namespace: kube-system
      csi.storage.k8s.io/controller-publish-secret-name: a9000-array1
      csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
    
      csi.storage.k8s.io/fstype: xfs   # Optional. Values are ext4/xfs. The default is ext4.
      volume_name_prefix: demo1        # Optional.
    
    $> kubectl create -f demo-storageclass-gold-A9000R.yaml
    storageclass.storage.k8s.io/gold created
  4. Create a PVC demo-pvc with a size of 1 GiB.
    $> cat demo-pvc-gold.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: gold
    
    
    $> kubectl apply -f demo-pvc-gold.yaml
    persistentvolumeclaim/demo-pvc created
  5. Display the existing PVC and the created persistent volume (PV).
    $> kubectl get pv,pvc
    NAME                                                        CAPACITY   ACCESS MODES
    persistentvolume/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f   1Gi        RWO
    
    RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
    Delete           Bound    default/demo-pvc   gold                    78s
    
    NAME                             STATUS   VOLUME                                     CAPACITY   
    persistentvolumeclaim/demo-pvc   Bound    pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f   1Gi
    
    ACCESS MODES   STORAGECLASS   AGE
    RWO            gold           78s
    
    $> kubectl describe persistentvolume/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
    Name:            pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
    Labels:          <none>
    Annotations:     pv.kubernetes.io/provisioned-by: block.csi.ibm.com
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    gold
    Status:          Bound
    Claim:           default/demo-pvc
    Reclaim Policy:  Delete
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:         
    Source:
        Type:              CSI (a Container Storage Interface (CSI) volume source)
        Driver:            block.csi.ibm.com
        VolumeHandle:      A9000:6001738CFC9035EB0000000000D1F68F
        ReadOnly:          false
        VolumeAttributes:      array_name=<IP>
                               pool_name=gold
                               storage.kubernetes.io/csiProvisionerIdentity=1565550204603-8081-
                               block.csi.ibm.com
                               storage_type=A9000
                               volume_name=demo1_pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
    Events:                <none>
  6. Create a StatefulSet application demo-statefulset, using the demo-pvc.
    $> cat demo-statefulset-with-demo-pvc.yml
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: demo-statefulset
    spec:
      selector:
        matchLabels:
          app: demo-statefulset
      serviceName: demo-statefulset
      replicas: 1
      template:
        metadata:
          labels:
            app: demo-statefulset
        spec:
          containers:
          - name: container1
            image: registry.access.redhat.com/ubi8/ubi:latest
            command: [ "/bin/sh", "-c", "--" ]
            args: [ "while true; do sleep 30; done;" ]
            volumeMounts:
              - name: demo-pvc
                mountPath: "/data"
          volumes:
          - name: demo-pvc
            persistentVolumeClaim:
              claimName: demo-pvc
    
          #nodeSelector:
          #  kubernetes.io/hostname: NODESELECTOR
    
    
    $> kubectl create -f demo-statefulset-with-demo-pvc.yml
    statefulset/demo-statefulset created
  7. Check the newly created pod.
    1. Display the newly created pod (make sure the pod status is Running).
      $> kubectl get pod demo-statefulset-0
      NAME                 READY   STATUS    RESTARTS   AGE
      demo-statefulset-0   1/1     Running   0          43s
    2. Check the mountpoint inside the pod.
      $> kubectl exec demo-statefulset-0 -- bash -c "df -h /data"
      Filesystem          Size  Used Avail Use% Mounted on
      /dev/mapper/mpathz 1014M   33M  982M   4% /data
      
      $> kubectl exec demo-statefulset-0 -- bash -c "mount | grep /data"
      /dev/mapper/mpathz on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
  8. Write data to the persistent volume of the pod.
    The PV should be mounted inside the pod at /data.
    $> kubectl exec demo-statefulset-0 -- touch /data/FILE
    $> kubectl exec demo-statefulset-0 -- ls /data/FILE
    /data/FILE
  9. Log into the worker node that has the running pod and display the newly attached volume on the node.
    1. Verify which worker node is running the pod demo-statefulset-0.
      $> kubectl describe pod demo-statefulset-0 | grep "^Node:"
      Node: k8s-node1/hostname
    2. Establish an SSH connection and log into the worker node.
      $> ssh k8s-node1
    3. List the multipath devices on the worker node. Note the same mpathz device, as shown in step 7.2.
      $>[k8s-node1]  multipath -ll
      mpathz (36001738cfc9035eb0000000000d1f68f) dm-3 IBM     ,2810XIV         
      size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
      `-+- policy='service-time 0' prio=1 status=active
        |- 37:0:0:12 sdc 8:32 active ready running
        `- 36:0:0:12 sdb 8:16 active ready running
      
      $>[k8s-node1] ls -l /dev/mapper/mpathz
      lrwxrwxrwx. 1 root root 7 Aug 12 19:29 /dev/mapper/mpathz -> ../dm-3
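      As a side note, the WWID shown in parentheses by multipath -ll corresponds to the volume serial in the PV's VolumeHandle (step 5), lowercased and prefixed with the NAA type digit 3. A small sketch of that correspondence, using the values from this example:

```shell
# Derive the expected multipath WWID from the CSI VolumeHandle
# (format: <storage_type>:<serial>), using the values shown in this example.
volume_handle="A9000:6001738CFC9035EB0000000000D1F68F"
serial=${volume_handle#*:}                              # strip the storage type prefix
wwid="3$(echo "$serial" | tr '[:upper:]' '[:lower:]')"  # NAA identifier prefix
echo "$wwid"                                            # 36001738cfc9035eb0000000000d1f68f
```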
    4. List the physical devices of the multipath device mpathz and its mountpoint on the host. (This is the /data mountpoint inside the stateful pod.)
      $>[k8s-node1]  lsblk /dev/sdb /dev/sdc
      NAME     MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
      sdb        8:16   0   1G  0 disk  
      └─mpathz 253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5
      sdc        8:32   0   1G  0 disk  
      └─mpathz 253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5
    5. View the PV mounted on this host.
      $>[k8s-node1]  df | egrep pvc
      /dev/mapper/mpathz      1038336    32944   1005392   4% /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f/mount
    6. The driver stores an internal metadata file, .stageInfo.json, in the Kubernetes PV node stage path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC-ID>/globalmount/.stageInfo.json. The CSI driver creates this file during the NodeStageVolume API call, and the NodePublishVolume, NodeUnpublishVolume, and NodeUnstageVolume CSI calls use it at later stages.
      $> cat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-711b6fef-bcf9-11e9-a1f5-005056a45d5f/globalmount/.stageInfo.json
      {"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}
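      The file is plain JSON, so its fields can be inspected with standard tooling. For example, extracting the multipath device name from the payload shown above (python3 is used here since jq may not be installed on every node):

```shell
# Print the mpathDevice field from a .stageInfo.json payload.
# The JSON string below is the example content from this step.
echo '{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["mpathDevice"])'
```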
  10. Delete the StatefulSet and then recreate it, in order to validate that the data (/data/FILE) remains in the persistent volume.
    $> kubectl delete statefulset/demo-statefulset
    statefulset/demo-statefulset deleted
    
    ### Wait until the pod is deleted. Once the pod is deleted, a 'not found' message is returned.
    $> kubectl get pod demo-statefulset-0
    NAME                 READY   STATUS        RESTARTS   AGE
    demo-statefulset-0   0/1     Terminating   0          91m
    
    
    ### Establish an SSH connection and log into the worker node to verify that the multipath device was deleted and that the PV mountpoint no longer exists.
    
    $> ssh k8s-node1
    
    $>[k8s-node1] df | egrep pvc
    $>[k8s-node1] multipath -ll
    $>[k8s-node1] lsblk /dev/sdb /dev/sdc
    lsblk: /dev/sdb: not a block device
    lsblk: /dev/sdc: not a block device
    
    
    ### Recreate the StatefulSet and verify that /data/FILE exists.
    $> kubectl create -f demo-statefulset-with-demo-pvc.yml
    statefulset/demo-statefulset created
    
    $> kubectl exec demo-statefulset-0 -- ls /data/FILE
    /data/FILE
  11. Delete StatefulSet and the PVC.
    $> kubectl delete statefulset/demo-statefulset
    statefulset/demo-statefulset deleted
    
    $> kubectl get statefulset/demo-statefulset
    No resources found.
    
    $> kubectl delete pvc/demo-pvc
    persistentvolumeclaim/demo-pvc deleted
    
    $> kubectl get pv,pvc
    No resources found.