Running a stateful container with file system configurations
Use this section as an example of how to run a stateful container with a file system configuration.
Before you begin
Procedure
- Open a command-line terminal.
- Create an array secret.
Important: Be sure that the username and password match those used on the storage system.
$> cat demo-secret-svc-array.yaml
kind: Secret
apiVersion: v1
metadata:
  name: svc-array
  namespace: csi-ns
type: Opaque
stringData:
  management_address: <ADDRESS-1, ADDRESS-2>  # Array management addresses
  username: <USERNAME>                        # Array username
data:
  password: <PASSWORD base64>                 # Replace with a valid base64-encoded password

$> kubectl create -f demo-secret-svc-array.yaml
secret/svc-array created
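In the secret above, values under `stringData` are given as plain text, while values under `data` must be base64-encoded. A minimal sketch of producing the encoded value (the password `Password1!` is a placeholder, not a real credential):

```python
import base64

# Placeholder password; substitute the real password used on the storage system.
plaintext = "Password1!"

# Base64-encode the plaintext for the Secret's `data.password` field.
encoded = base64.b64encode(plaintext.encode("utf-8")).decode("ascii")
print(encoded)  # UGFzc3dvcmQxIQ==

# Decoding recovers the original, which is how Kubernetes consumes `data` values.
assert base64.b64decode(encoded).decode("utf-8") == plaintext
```

Equivalently, `echo -n 'Password1!' | base64` on the command line produces the same value; note the `-n`, since a trailing newline would become part of the encoded password.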
- Create a storage class.
Note: The valid SpaceEfficiency values for Spectrum Virtualize Family systems are: thick, thin, compressed, or deduplicated. These values are not case-sensitive.
For DS8000® Family systems, the default value is standard, but it can be set to thin, if required. These values are not case-sensitive. For more information, see Creating storage classes.
This parameter is not applicable for IBM FlashSystem® A9000 and A9000R systems. These systems always include deduplication and compression.
$> cat demo-storageclass-gold-svc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: block.csi.ibm.com
parameters:
  SpaceEfficiency: deduplicated
  pool: gold
  csi.storage.k8s.io/provisioner-secret-name: svc-array
  csi.storage.k8s.io/provisioner-secret-namespace: csi-ns
  csi.storage.k8s.io/controller-publish-secret-name: svc-array
  csi.storage.k8s.io/controller-publish-secret-namespace: csi-ns
  csi.storage.k8s.io/fstype: xfs   # Optional. Values: ext4 or xfs. The default is ext4.
  volume_name_prefix: demo         # Optional.

$> kubectl create -f demo-storageclass-gold-svc.yaml
storageclass.storage.k8s.io/gold created
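Per the note above, the Spectrum Virtualize SpaceEfficiency values are not case-sensitive. A small hypothetical validator sketch (the helper name `validate_space_efficiency` is illustrative, not part of the driver) showing the accepted values and case-insensitive matching:

```python
# Accepted SpaceEfficiency values for Spectrum Virtualize Family systems,
# per the note above. Matching is case-insensitive.
SVC_SPACE_EFFICIENCY = {"thick", "thin", "compressed", "deduplicated"}

def validate_space_efficiency(value: str) -> str:
    """Hypothetical helper: normalize and validate a SpaceEfficiency value."""
    normalized = value.strip().lower()
    if normalized not in SVC_SPACE_EFFICIENCY:
        raise ValueError(f"unsupported SpaceEfficiency: {value!r}")
    return normalized

print(validate_space_efficiency("Deduplicated"))  # deduplicated
```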
- Create a PVC, demo-pvc-file-system.yaml, with a size of 1 Gi.
Note: For more information about creating a PVC yaml file, see Creating a PersistentVolumeClaim (PVC).
$> cat demo-pvc-file-system.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc-file-system
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gold

$> kubectl apply -f demo-pvc-file-system.yaml
persistentvolumeclaim/demo-pvc-file-system created
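The PVC requests `storage: 1Gi`, where `Gi` is a binary suffix (2^30 bytes). A minimal illustrative conversion sketch (not the real Kubernetes quantity parser, which also handles decimal suffixes such as G and M):

```python
# Binary suffixes used by Kubernetes resource quantities such as "1Gi".
BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(quantity: str) -> int:
    """Illustrative sketch: convert a binary-suffixed quantity to bytes."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)  # no suffix: plain bytes

print(quantity_to_bytes("1Gi"))  # 1073741824
```

So the 1Gi PVC corresponds to the 1.0G multipath device that appears on the worker node later in this procedure.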
- Display the existing PVC and the created persistent volume (PV).
$> kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
persistentvolume/pvc-828ce909-6eb2-11ea-abc8-005056a49b44   1Gi        RWO            Delete           Bound    default/demo-pvc-file-system   gold                    109m

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/demo-pvc-file-system   Bound    pvc-828ce909-6eb2-11ea-abc8-005056a49b44   1Gi        RWO            gold           78s

$> kubectl describe persistentvolume/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
Name:            pvc-828ce909-6eb2-11ea-abc8-005056a49b44
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: block.csi.ibm.com
Finalizers:      [kubernetes.io/pv-protection external-attacher/block-csi-ibm-com]
StorageClass:    gold
Status:          Bound
Claim:           default/demo-pvc-file-system
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            block.csi.ibm.com
    VolumeHandle:      SVC:60050760718106998000000000000543
    ReadOnly:          false
    VolumeAttributes:  array_address=baremetal10-cluster.xiv.ibm.com
                       pool_name=csi_svcPool
                       storage.kubernetes.io/csiProvisionerIdentity=1585146948772-8081-block.csi.ibm.com
                       storage_type=SVC
                       volume_name=demo_pvc-828ce909-6eb2-11ea-abc8-005056a49b44
Events:                <none>
- Create a StatefulSet, using the demo-statefulset-file-system.yaml.
Note: For more information about creating a StatefulSet, see Creating a StatefulSet.
$> kubectl create -f demo-statefulset-file-system.yaml
statefulset/demo-statefulset-file-system created

$> cat demo-statefulset-file-system.yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: demo-statefulset-file-system
spec:
  selector:
    matchLabels:
      app: demo-statefulset
  serviceName: demo-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-statefulset
    spec:
      containers:
        - name: container-demo
          image: registry.access.redhat.com/ubi8/ubi:latest
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          volumeMounts:
            - name: demo-volume
              mountPath: "/data"
      volumes:
        - name: demo-volume
          persistentVolumeClaim:
            claimName: demo-pvc-file-system
#     nodeSelector:
#       kubernetes.io/hostname: HOSTNAME
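In the StatefulSet YAML, spec.selector.matchLabels must match the pod template's labels (here both use app: demo-statefulset), or the API server rejects the object. A small sketch of that consistency check, using plain dicts as stand-ins for the parsed YAML:

```python
# Plain-dict stand-ins for the parsed StatefulSet YAML shown in this step.
selector_match_labels = {"app": "demo-statefulset"}
template_labels = {"app": "demo-statefulset"}

def selector_matches(match_labels: dict, labels: dict) -> bool:
    # Every selector key/value pair must be present in the template labels.
    return all(labels.get(k) == v for k, v in match_labels.items())

print(selector_matches(selector_match_labels, template_labels))  # True
```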
- Display the newly created pod and verify that its status is Running.
$> kubectl get pod demo-statefulset-file-system-0
NAME                             READY   STATUS    RESTARTS   AGE
demo-statefulset-file-system-0   1/1     Running   0          43s
- Write data to the persistent volume of the pod.
The PV should be mounted inside the pod at /data.
$> kubectl exec demo-statefulset-file-system-0 -- touch /data/FILE
$> kubectl exec demo-statefulset-file-system-0 -- ls /data/FILE
/data/FILE
- Log in to the worker node that has the running pod and display the newly attached volume on the node.
- Verify which worker node is running the pod demo-statefulset-file-system-0.
$> kubectl describe pod demo-statefulset-file-system-0 | grep "^Node:"
Node: k8s-node1/hostname
- Establish an SSH connection and log in to the worker node.
$> ssh root@k8s-node1
- List the multipath devices on the worker node.
$>[k8s-node1] multipath -ll
mpathz (828ce9096eb211eaabc8005056a49b44) dm-3 IBM ,2145 (for SVC)
size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 37:0:0:12 sdc 8:32 active ready running
  `- 36:0:0:12 sdb 8:16 active ready running

$>[k8s-node1] ls -l /dev/mapper/mpathz
lrwxrwxrwx. 1 root root 7 Aug 12 19:29 /dev/mapper/mpathz -> ../dm-3
- List the physical devices of the multipath device mpathz and its mountpoint on the host. (This is the /data mountpoint inside the stateful pod.)
$>[k8s-node1] lsblk /dev/sdb /dev/sdc
NAME       MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sdb          8:16   0   1G  0 disk
└─mpathz   253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
sdc          8:32   0   1G  0 disk
└─mpathz   253:3    0   1G  0 mpath /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44
- View the PV mounted on this host.
Note: All PV mountpoints look like: /var/lib/kubelet/pods/*/volumes/kubernetes.io~csi/pvc-*/mount
$>[k8s-node1] df | egrep pvc
/dev/mapper/mpathz 1038336 32944 1005392 4% /var/lib/kubelet/pods/d67d22b8-bd10-11e9-a1f5-005056a45d5f/volumes/kubernetes.io~csi/pvc-828ce909-6eb2-11ea-abc8-005056a49b44/mount
- View the driver's internal metadata file, .stageInfo.json, which is stored in the Kubernetes PV node stage path /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC-ID>/globalmount/.stageInfo.json. The CSI driver creates this metadata file during the NodeStageVolume API call; the file is then used by the NodePublishVolume, NodeUnpublishVolume, and NodeUnstageVolume CSI API calls.
$> cat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-828ce909-6eb2-11ea-abc8-005056a49b44/globalmount/.stageInfo.json
{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}
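The metadata file shown above is plain JSON, so it can be inspected programmatically. A quick sketch parsing it to recover the multipath device and its SCSI devices (note that sysDevices begins with a comma, so empty fragments must be filtered out when splitting):

```python
import json

# The .stageInfo.json content shown above, copied verbatim.
stage_info = '{"connectivity":"iscsi","mpathDevice":"dm-3","sysDevices":",sdb,sdc"}'

info = json.loads(stage_info)
# sysDevices starts with a leading comma, so drop empty fragments.
devices = [d for d in info["sysDevices"].split(",") if d]
print(info["connectivity"], info["mpathDevice"], devices)
# iscsi dm-3 ['sdb', 'sdc']
```

The recovered values match the multipath -ll and lsblk output earlier in this step: dm-3 is backed by sdb and sdc over iSCSI.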
- Delete the StatefulSet and then re-create it, in order to validate that the data (/data/FILE) remains in the persistent volume.
- Delete the StatefulSet.
$> kubectl delete statefulset/demo-statefulset-file-system
statefulset/demo-statefulset-file-system deleted
- Wait until the pod is deleted. Once deleted, '"demo-statefulset-file-system" not found' is returned.
$> kubectl get statefulset/demo-statefulset-file-system
NAME                             READY   STATUS        RESTARTS   AGE
demo-statefulset-file-system-0   0/1     Terminating   0          91m
- Verify that the multipath device was deleted and that the PV mountpoint no longer exists, by establishing an SSH connection and logging in to the worker node.
$> ssh root@k8s-node1
$>[k8s-node1] df | egrep pvc
$>[k8s-node1] multipath -ll
$>[k8s-node1] lsblk /dev/sdb /dev/sdc
lsblk: /dev/sdb: not a block device
lsblk: /dev/sdc: not a block device
- Re-create the StatefulSet and verify that /data/FILE exists.
$> kubectl create -f demo-statefulset-file-system.yaml
statefulset/demo-statefulset-file-system created

$> kubectl exec demo-statefulset-file-system-0 -- ls /data/FILE
/data/FILE
- Delete the StatefulSet and the PVC.
$> kubectl delete statefulset/demo-statefulset-file-system
statefulset/demo-statefulset-file-system deleted

$> kubectl get statefulset/demo-statefulset-file-system
No resources found.

$> kubectl delete pvc/demo-pvc-file-system
persistentvolumeclaim/demo-pvc-file-system deleted

$> kubectl get pv,pvc
No resources found.