Using the IBM Storage Enabler for Containers

You can use IBM Storage Enabler for Containers together with IBM Storage Kubernetes Dynamic Provisioner and the IBM Storage Kubernetes FlexVolume for running stateful containers with a storage volume provisioned from an external IBM storage system.

About this task

This example illustrates how to configure Kubernetes containers that use an external storage volume provisioned through the IBM Storage Enabler for Containers interface of Spectrum Control Base. The procedure includes the following steps:
  • Creating a storage class gold that refers to Spectrum Control Base storage service gold with XFS file system.
  • Creating a PersistentVolumeClaim (PVC) pvc1 that uses the storage class gold.
  • Creating a pod pod1 with container container1 that uses PVC pvc1.
  • Starting I/Os into /data/myDATA in container1 of pod1.
  • Deleting the pod1 and then creating a new pod1 with the same PVC. Verifying that the file /data/myDATA still exists.
  • Deleting all storage elements (pod, PVC, persistent volume, and storage class).

Procedure

  1. Open a command-line terminal.
  2. Create a storage class, as shown below. The storage class gold refers to a Spectrum Control Base storage service on a pool from IBM FlashSystem A9000R with QoS capability and the XFS file system. As a result, any volume created with this storage class is provisioned on the gold service and initialized with the XFS file system.
    #> cat storage_class_gold.yml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: "gold"                 # Storage class name
      annotations:
        storageclass.beta.kubernetes.io/is-default-class: "true"
    provisioner: "ubiquity/flex"   # IBM Storage Kubernetes Dynamic Provisioner
    parameters:
      profile: "gold"              # Spectrum Control Base storage service name
      fstype: "xfs"                # File system type (xfs or ext4)
      backend: "scbe"              # Spectrum Control Base backend
    
    #> kubectl create -f storage_class_gold.yml
    storageclass "gold" created
  3. Display the newly created storage class to verify its successful creation.
    #> kubectl get storageclass gold
    NAME             TYPE
    gold (default)   ubiquity/flex
  4. Create a PVC pvc1 with a size of 1 GiB that uses the storage class gold.
    #> cat pvc1.yml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: "pvc1"        # PVC name
    spec:
      storageClassName: gold
      accessModes:
        - ReadWriteOnce   # Access mode for the volume
      resources:
        requests:
          storage: 1Gi    # Requested volume size
    
    #> kubectl create -f pvc1.yml
    persistentvolumeclaim "pvc1" created
    The IBM Storage Enabler for Containers creates a persistent volume (PV) and binds it to the PVC. The PV name is the PVC ID. The volume name on the storage system is u_[ubiquity-instance]_[PVC-ID], where the [ubiquity-instance] value is set in the IBM Storage Enabler for Containers configuration file.
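Given the naming scheme above, the storage-side volume name can be derived from the PVC ID. A minimal shell sketch, assuming the ubiquity-instance is PROD as in the flexVolume output shown in step 6:

```shell
# Compose the storage-side volume name from the ubiquity-instance and PVC ID.
# Both values are taken from this example; your instance name may differ.
UBIQUITY_INSTANCE="PROD"
PVC_ID="pvc-254e4b5e-805d-11e7-a42b-005056a46c49"
VOLUME_NAME="u_${UBIQUITY_INSTANCE}_${PVC_ID}"
echo "${VOLUME_NAME}"   # -> u_PROD_pvc-254e4b5e-805d-11e7-a42b-005056a46c49
```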
  5. Display the existing PVC and persistent volume.
    #> kubectl get pvc
    NAME   STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    pvc1   Bound     pvc-254e4b5e-805d-11e7-a42b-005056a46c49   1Gi        RWO           1m
    
    #> kubectl get pv
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM        REASON   AGE 
    pvc-254e4b5e-805d-11e7-a42b-005056a46c49   1Gi        RWO           Delete          Bound     default/pvc1
  6. Display additional persistent volume information, such as its WWN and its location on the storage system.
    #> kubectl get -o json pv pvc-254e4b5e-805d-11e7-a42b-005056a46c49 | grep -A15 flexVolume
            "flexVolume": {
                "driver": "ibm/ubiquity",
                "options": {
                    "LogicalCapacity": "1000000000",
                    "Name": "u_PROD_pvc-254e4b5e-805d-11e7-a42b-005056a46c49",
                    "PhysicalCapacity": "1023410176",
                    "PoolName": "gold-pool",
                    "Profile": "gold",
                    "StorageName": "A9000 system1",
                    "StorageType": "2810XIV",
                    "UsedCapacity": "0",
                    "Wwn": "6001738CFC9035EB0000000000CCCCC5",
                    "fstype": "xfs",
                    "volumeName": "pvc-254e4b5e-805d-11e7-a42b-005056a46c49"
                }
            },
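On a live cluster, a single field such as the WWN can be retrieved directly with `kubectl get pv <pv-name> -o jsonpath='{.spec.flexVolume.options.Wwn}'`. As a cluster-independent illustration, the same field can be pulled from saved JSON with grep and cut; the sample below is trimmed from the output above:

```shell
# Extract the Wwn field from saved PV JSON using only grep and cut.
# The JSON here is a trimmed sample of the flexVolume options shown above.
cat > /tmp/pv_options.json <<'EOF'
{
    "Wwn": "6001738CFC9035EB0000000000CCCCC5",
    "fstype": "xfs"
}
EOF
WWN=$(grep -o '"Wwn": "[^"]*"' /tmp/pv_options.json | cut -d'"' -f4)
echo "${WWN}"   # -> 6001738CFC9035EB0000000000CCCCC5
```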
  7. Create a pod pod1 with a persistent volume vol1 to cause the IBM Storage Kubernetes FlexVolume to:
    • Attach the volume to the host.
    • Rescan and discover the multipath device of the new volume.
    • Create an XFS or EXT4 file system on the device (if a file system does not already exist on the volume).
    • Mount the new multipath device on /ubiquity/[WWN of the volume].
    • Create a symbolic link from /var/lib/kubelet/pods/[Pod ID]/volumes/ibm~ubiquity-k8s-flex/[PVC ID] to /ubiquity/[WWN of the volume].
    #> cat pod1.yml
    kind: Pod
    apiVersion: v1
    metadata:
      name: pod1          
    spec:
      containers:
      - name: container1 
        image: alpine:latest
        command: [ "/bin/sh", "-c", "--" ]  
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
          - name: vol1
            mountPath: "/data" 
      restartPolicy: "Never"
      volumes:
        - name: vol1
          persistentVolumeClaim:
            claimName: pvc1
    
    #> kubectl create -f pod1.yml
    pod "pod1" created
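The mount chain that the FlexVolume sets up in this step can be sketched as plain path construction. The WWN and PVC ID below are taken from this example, while the pod ID is hypothetical:

```shell
# Build the paths used by the FlexVolume mount chain described above.
# POD_ID is hypothetical; WWN and PVC_ID come from this example.
WWN="6001738CFC9035EB0000000000CCCCC5"
PVC_ID="pvc-254e4b5e-805d-11e7-a42b-005056a46c49"
POD_ID="11111111-2222-3333-4444-555555555555"
MOUNTPOINT="/ubiquity/${WWN}"
SYMLINK="/var/lib/kubelet/pods/${POD_ID}/volumes/ibm~ubiquity-k8s-flex/${PVC_ID}"
echo "${SYMLINK} -> ${MOUNTPOINT}"
```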
  8. Display the newly created pod1 and write data to its persistent volume. Make sure that the Pod status is Running.
    #> kubectl get pod pod1
    NAME      READY     STATUS    RESTARTS   AGE
    pod1      1/1       Running   0          16m
    
    
    #> kubectl exec pod1 -c container1 -- sh -c "df -h /data"
    Filesystem          Size  Used Avail Use% Mounted on
    /dev/mapper/mpathi  951M   33M  919M   4% /data
    
    #> kubectl exec pod1 -c container1 -- sh -c "mount | grep /data"
    /dev/mapper/mpathi on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    
    #> kubectl exec pod1 -- touch /data/FILE
    #> kubectl exec pod1 -- ls /data/FILE
    /data/FILE
    
    #> kubectl describe pod pod1| grep "^Node:" 
    Node:		k8s-node1/[IP]
  9. Log in to the worker node that has the running pod and display the newly attached volume on the node.
    #> multipath -ll
    mpathi (36001738cfc9035eb0cc2bc5) dm-12 IBM     ,2810XIV
    size=954M features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=1 status=active
      |- 3:0:0:1 sdb 8:16 active ready running
      `- 4:0:0:1 sdc 8:32 active ready running
    
    #> df | egrep "ubiquity|^Filesystem"
    Filesystem                       1K-blocks    Used Available Use% Mounted on
    /dev/mapper/mpathi                  973148   32928    940220   4% /ubiquity/6001738CFC9035EB0CC2BC5
    
    #> mount |grep ubiquity
    /dev/mapper/mpathi on /ubiquity/6001738CFC9035EB0CC2BC5 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    
    #> ls -l /var/lib/kubelet/pods/*/volumes/ibm~ubiquity-k8s-flex/*
    lrwxrwxrwx. 1 root root 42 Aug 13 22:41 pvc-254e4b5e-805d-11e7-a42b-005056a46c49 -> /ubiquity/6001738CFC9035EB0CC2BC5
  10. Delete the pod, causing Kubernetes to:
    • Remove the symbolic link from /var/lib/kubelet/pods/[Pod ID]/volumes/ibm~ubiquity-k8s-flex/[PVC ID] to /ubiquity/[WWN of the volume].
    • Unmount the multipath device from /ubiquity/[WWN of the volume].
    • Remove the multipath device of the volume.
    • Detach (unmap) the volume from the host.
    • Rescan with cleanup mode to remove the physical device files of the detached volume.
    #> kubectl delete pod pod1
    pod "pod1" deleted
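On the worker node, the cleanup can be spot-checked. A minimal sketch, using the WWN and PVC ID from this example, that simply confirms the /ubiquity mount and the kubelet symbolic link are gone:

```shell
# Verify on the worker node that the FlexVolume cleanup completed:
# neither the /ubiquity mount nor the kubelet symbolic link should remain.
WWN="6001738CFC9035EB0000000000CCCCC5"             # WWN from this example
PVC_ID="pvc-254e4b5e-805d-11e7-a42b-005056a46c49"  # PVC ID from this example
if ! mount | grep -q "/ubiquity/${WWN}" && \
   ! ls /var/lib/kubelet/pods/*/volumes/ibm~ubiquity-k8s-flex/"${PVC_ID}" >/dev/null 2>&1; then
    CLEANUP="verified"
    echo "cleanup verified: no leftover mount or symbolic link"
fi
```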
  11. Delete the PVC. Because the persistent volume's reclaim policy is Delete (shown in step 5), this also removes the persistent volume and the corresponding volume on the storage system.
    #> kubectl delete -f pvc1.yml
    persistentvolumeclaim "pvc1" deleted
  12. Remove the storage class.
    #> kubectl delete -f storage_class_gold.yml
    storageclass "gold" deleted