Running a stateful container on Storwize systems
This information provides an example of running a stateful container on Storwize systems using the IBM block storage CSI driver.
About this task
Note: This example can be used with any Storwize, Spectrum Virtualize, or FlashSystem storage system.

This example covers:
- Creating a k8s secret storwize-array1 for the storage system.
- Creating a storage class gold.
- Creating a PersistentVolumeClaim (PVC) demo-pvc that uses the storage class gold, and showing details of the created PVC and persistent volume (PV).
- Creating a StatefulSet application demo-statefulset and observing the mountpoint / multipath device that was created by the driver.
- Writing data inside demo-statefulset, then deleting and recreating demo-statefulset, and verifying that the data still exists.
Procedure
- Open a command-line terminal.
- Create an array secret.
$> cat demo-secret-storwize-array1.yaml
kind: Secret
apiVersion: v1
metadata:
  name: storwize-array1
  namespace: kube-system
type: Opaque
stringData:
  management_address: <VALUE-2,VALUE-3>  # replace with a valid storage system management address
  username: <VALUE-4>                    # replace with a valid username
data:
  password: <VALUE-5 base64>             # replace with a valid password

$> kubectl create -f demo-secret-storwize-array1.yaml
secret/storwize-array1 created
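The password under the secret's data section must be base64 encoded (only the stringData fields are taken as plain text). A minimal sketch of producing that value, using a hypothetical password Passw0rd:

```shell
# Encode a (hypothetical) password for the secret's data.password field.
# printf avoids the trailing newline that echo would add to the encoding.
printf '%s' 'Passw0rd' | base64   # -> UGFzc3cwcmQ=
```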
- Create a storage class.
$> cat demo-storageclass-gold-storwize.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: block.csi.ibm.com
parameters:
  SpaceEfficiency: thick   # SpaceEfficiency values are: thick, thin, deduplicated, and compressed.
  pool: gold
  csi.storage.k8s.io/provisioner-secret-name: storwize-array1
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: storwize-array1
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: xfs   # Optional. Values are ext4/xfs. The default is ext4.
  volume_name_prefix: demo1        # Optional.

$> kubectl create -f demo-storageclass-gold-storwize.yaml
storageclass.storage.k8s.io/gold created
- Create a PVC demo-pvc with a size of 1 GiB.
$> cat demo-pvc-gold.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gold

$> kubectl apply -f demo-pvc-gold.yaml
persistentvolumeclaim/demo-pvc created
- Display the existing PVC and the created persistent volume (PV).
$> kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f   1Gi        RWO            Delete           Bound    default/demo-pvc   gold                    78s

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/demo-pvc   Bound    pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f   1Gi        RWO            gold           78s

$> kubectl describe persistentvolume/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
Name:            pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: block.csi.ibm.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gold
Status:          Bound
Claim:           default/demo-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            block.csi.ibm.com
    VolumeHandle:      SVC:6001738CFC9035EB0000000000D1F68F
    ReadOnly:          false
    VolumeAttributes:  array_name=<IP>
                       pool_name=gold
                       storage.kubernetes.io/csiProvisionerIdentity=1565550204603-8081-block.csi.ibm.com
                       storage_type=SVC
                       volume_name=demo1_pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f
Events:  <none>
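The VolumeHandle shown in the PV description appears to encode the storage type and the volume UID on the storage system as <storage_type>:<UID>. A small illustrative sketch, using the handle from this example, of splitting it with plain shell parameter expansion:

```shell
# Split the CSI VolumeHandle from the PV description into its two parts.
handle='SVC:6001738CFC9035EB0000000000D1F68F'
storage_type=${handle%%:*}   # text before the first ':' -> SVC
volume_uid=${handle#*:}      # text after the first ':'  -> volume UID on the system
echo "$storage_type $volume_uid"   # -> SVC 6001738CFC9035EB0000000000D1F68F
```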
- Create a StatefulSet application demo-statefulset, using the demo-pvc.
$> cat demo-statefulset-with-demo-pvc.yml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: demo-statefulset
spec:
  selector:
    matchLabels:
      app: demo-statefulset
  serviceName: demo-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-statefulset
    spec:
      containers:
      - name: container1
        image: registry.access.redhat.com/ubi8/ubi:latest
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
        - name: demo-pvc
          mountPath: "/data"
      volumes:
      - name: demo-pvc
        persistentVolumeClaim:
          claimName: demo-pvc

      #nodeSelector:
      #  kubernetes.io/hostname: NODESELECTOR

$> kubectl create -f demo-statefulset-with-demo-pvc.yml
statefulset/demo-statefulset created
- Check the newly created pod.
- Display the newly created pod (make sure the pod status is Running).
$> kubectl get pod demo-statefulset-0
NAME                 READY   STATUS    RESTARTS   AGE
demo-statefulset-0   1/1     Running   0          43s
- Check the mountpoint inside the pod.
$> kubectl exec demo-statefulset-0 -- bash -c "df -h /data"
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/mpathz 1014M   33M  982M   4% /data
$> kubectl exec demo-statefulset-0 -- bash -c "mount | grep /data"
/dev/mapper/mpathz on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
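The mount output above can also be checked programmatically, for example to confirm that the filesystem type matches the csi.storage.k8s.io/fstype requested in the storage class. A hedged sketch against the sample line from this example:

```shell
# Extract the device and filesystem type from the sample mount line above.
mount_line='/dev/mapper/mpathz on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)'
device=${mount_line%% *}                                   # first field: the multipath device
fstype=$(printf '%s\n' "$mount_line" | awk '{print $5}')   # fifth field: the filesystem type
echo "$device $fstype"   # -> /dev/mapper/mpathz xfs
```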
- Write data to the persistent volume of the pod.
The PV should be mounted inside the pod at /data.
$> kubectl exec demo-statefulset-0 -- touch /data/FILE
$> kubectl exec demo-statefulset-0 -- ls /data/FILE
/data/FILE
- Log into the worker node that has the running pod and display the newly attached volume on the node.
- Verify which worker node is running the pod
demo-statefulset-0.
$> kubectl describe pod demo-statefulset-0 | grep "^Node:"
Node: k8s-node1/hostname
- Establish an SSH connection and log into the worker node.
$> ssh k8s-node1
- List the multipath devices on the worker node. Note the same mpathz device, as mentioned in step 7.b.
$>[k8s-node1] multipath -ll
mpathz (36001738cfc9035eb0000000000d1f68f) dm-3 IBM,2145 (for SVC)
size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 37:0:0:12 sdc 8:32 active ready running
  `- 36:0:0:12 sdb 8:16 active ready running

$>[k8s-node1] ls -l /dev/mapper/mpathz
lrwxrwxrwx. 1 root root 7 Aug 12 19:29 /dev/mapper/mpathz -> ../dm-3
- List the physical devices of the multipath mpathz and its mountpoint on the host. (This is the /data inside the stateful pod.)
$>[k8s-node1] lsblk /dev/sdb /dev/sdc
NAME       MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb          8:16   0   1G  0 disk
└─mpathz   253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5
sdc          8:32   0   1G  0 disk
└─mpathz   253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5
- View the PV mounted on this host.
Note: All PV mountpoints look like: /var/lib/kubelet/pods/*/volumes/kubernetes.io~csi/pvc-*/mount
$>[k8s-node1] df | egrep pvc
/dev/mapper/mpathz 1038336 32944 1005392 4% /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-a04bd32f-bd0f-11e9-a1f5-005056a45d5f/mount
- The driver's internal metadata file .stageInfo.json is stored in the Kubernetes PV node stage path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC-ID>/globalmount/.stageInfo.json. The CSI driver creates this metadata file during the NodeStageVolume API call, and it is used at later stages by the NodePublishVolume, NodeUnpublishVolume, and NodeUnstageVolume CSI API calls.
$> cat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-711b6fef-bcf9-11e9-a1f5-005056a45d5f/globalmount/.stageInfo.json
{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}
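The values observed on the node can be cross-checked against the driver's metadata: the multipath WWID shown by multipath -ll appears to be the volume UID from the PV's VolumeHandle, lowercased and prefixed with 3 (commonly the NAA identifier-type prefix). A sketch using the values from this example, pulling mpathDevice out of the JSON with sed rather than a dedicated JSON parser:

```shell
# Sample metadata and identifiers taken from the example above.
stage_info='{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}'
volume_handle='SVC:6001738CFC9035EB0000000000D1F68F'
wwid='36001738cfc9035eb0000000000d1f68f'   # from multipath -ll

# Pull the dm device name out of the JSON (crude, but avoids extra tools).
mpath_device=$(printf '%s' "$stage_info" | sed -n 's/.*"mpathDevice":"\([^"]*\)".*/\1/p')
echo "$mpath_device"   # -> dm-3

# Check that the WWID is '3' + the lowercased volume UID from the VolumeHandle.
uid=$(printf '%s' "${volume_handle#*:}" | tr '[:upper:]' '[:lower:]')
[ "3${uid}" = "$wwid" ] && echo "WWID matches volume UID"
```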
- Delete the StatefulSet and then recreate it, in order to validate that the data (/data/FILE) remains in the persistent volume.
$> kubectl delete statefulset/demo-statefulset
statefulset/demo-statefulset deleted

### Wait until the pod is deleted. Once deleted, '"demo-statefulset" not found' is returned.
$> kubectl get statefulset/demo-statefulset
NAME                 READY   STATUS        RESTARTS   AGE
demo-statefulset-0   0/1     Terminating   0          91m

### Establish an SSH connection and log into the worker node, to see that the multipath device was deleted and that the PV mountpoint no longer exists.
$> ssh k8s-node1
$>[k8s-node1] df | egrep pvc
$>[k8s-node1] multipath -ll
$>[k8s-node1] lsblk /dev/sdb /dev/sdc
lsblk: /dev/sdb: not a block device
lsblk: /dev/sdc: not a block device

### Recreate the StatefulSet and verify that /data/FILE exists.
$> kubectl create -f demo-statefulset-with-demo-pvc.yml
statefulset/demo-statefulset created
$> kubectl exec demo-statefulset-0 -- ls /data/FILE
/data/FILE
- Delete the StatefulSet and the PVC.
$> kubectl delete statefulset/demo-statefulset
statefulset/demo-statefulset deleted
$> kubectl get statefulset/demo-statefulset
No resources found.
$> kubectl delete pvc/demo-pvc
persistentvolumeclaim/demo-pvc deleted
$> kubectl get pv,pvc
No resources found.