Usage restrictions

Take note of the following restrictions before using IBM Storage Enabler for Containers with IBM Spectrum Scale.

  • If a single PVC is used by multiple pods, it is the application’s responsibility to maintain data consistency.
  • Create PVCs one at a time, serially, whenever possible. Create a new PVC only after all earlier PVCs created by IBM Storage Enabler for Containers are in the Bound state.
  • Creating a large number of PVCs in a single batch, or deleting all of them simultaneously, is not recommended. Such actions might overload the IBM Spectrum Scale GUI node, which in turn might cause fileset creation and deletion on IBM Spectrum Scale to fail.
  • The uid, gid, inode-limit, and fileset-type storage class parameters apply only when a new fileset is created.
  • A separate storage class must be defined for each uid-gid combination.
  • Advanced IBM Spectrum Scale functions, such as AFM, remote mount, encryption, compression, and Transparent Cloud Tiering (TCT), are not supported by IBM Storage Enabler for Containers.
  • IBM Storage Enabler for Containers does not check the available space on the IBM Spectrum Scale file system before creating a PVC. You can use a Kubernetes storage resource quota to limit the number of PVCs or the total requested storage.
  • Installing IBM Storage Enabler for Containers on an Elastic Storage Server (ESS) I/O node or an ESS EMS node is not supported.
  • IBM Storage Enabler for Containers is supported on zLinux platforms.
  • A fileset created by IBM Storage Enabler for Containers must not be unlinked or deleted from any other interface.
  • IBM Storage Enabler for Containers does not support volume expansion for storage classes.
  • The df command inside a container shows the full size of the IBM Spectrum Scale file system, regardless of the PVC size.
  • IBM Storage Enabler for Containers supports a maximum of 1000 PVCs with IBM Spectrum Scale.
  • Manually stop all pods before running the mmshutdown command. Otherwise, the worker node might crash. If a crash occurs, recover the node, and then manually stop all pods before resuming the shutdown.
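
The uid, gid, inode-limit, and fileset-type parameters mentioned above are set in the storage class definition, with one storage class per uid-gid combination. The following sketch shows where the parameters fit. The provisioner name, file system name, and all values are illustrative placeholders, not a definitive configuration; take the real names from your own deployment.

```yaml
# Hypothetical storage class for IBM Storage Enabler for Containers with
# IBM Spectrum Scale. Provisioner, file system, and values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: spectrum-scale-uid1000   # one storage class per uid-gid combination
provisioner: ubiquity/flex       # assumed provisioner name
parameters:
  filesystem: gpfs0              # placeholder file system name
  fileset-type: independent      # applies only when a new fileset is created
  uid: "1000"
  gid: "1000"
  inode-limit: "100000"
```

Because these parameters apply only at fileset creation, changing them in an existing storage class does not alter filesets that are already provisioned; define a new storage class instead.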
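
Because available space is not checked before PVC creation, the Kubernetes storage resource quota mentioned above can cap both the PVC count and the total requested storage per namespace. A minimal sketch, with an illustrative namespace and limits:

```yaml
# Hypothetical ResourceQuota limiting PVCs in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: scale-pvc-quota       # illustrative name
  namespace: my-app           # illustrative namespace
spec:
  hard:
    persistentvolumeclaims: "100"   # cap on the number of PVCs in the namespace
    requests.storage: 500Gi         # cap on total requested storage capacity
```

Kubernetes also supports quota keys scoped to a single storage class, such as <storage-class-name>.storageclass.storage.k8s.io/requests.storage, which is useful when only the IBM Spectrum Scale-backed PVCs need to be limited.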