Compatibility and requirements for IBM Storage Enabler for Containers

For complete and up-to-date information about the compatibility and requirements for using IBM Storage Enabler for Containers with Kubernetes, refer to the latest IBM Spectrum Connect release notes. The release notes detail the supported operating system and Kubernetes versions, as well as the microcode versions of the supported storage systems. You can find the latest Spectrum Connect release notes on IBM Knowledge Center.

Follow these steps to prepare your environment for installing IBM Storage Enabler for Containers in a Kubernetes cluster that requires persistent volumes for stateful containers.
  1. Contact your storage administrator and make sure that the IBM Storage Enabler for Containers interface has been added to the active Spectrum Connect instance and that at least one storage service has been delegated to it. See Managing integration with IBM Storage Enabler for Containers and Delegating storage services to the IBM Storage Enabler for Containers interface for details.
  2. Verify that there is a proper communication link between Spectrum Connect and the Kubernetes cluster.
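    One simple way to check the link is to confirm that the Spectrum Connect management endpoint is reachable from the cluster nodes. This is only a sketch: the hostname is a placeholder, and port 8440 is assumed to be the Spectrum Connect management port in your environment.

    ```shell
    # Confirm that the Spectrum Connect management endpoint responds.
    # The hostname is a placeholder; port 8440 is assumed to be the
    # Spectrum Connect management port in your environment.
    curl -k https://spectrum-connect.example.com:8440/
    ```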
  3. Perform these steps on each worker node in the Kubernetes cluster:
    1. Install the following Linux packages to ensure Fibre Channel and iSCSI connectivity. Skip this step if the packages are already installed.
      • RHEL:
        • sg3_utils.
        • iscsi-initiator-utils (if iSCSI connection is required).
        sudo yum -y install sg3_utils
        sudo yum -y install iscsi-initiator-utils
      • Ubuntu:
        • scsitools.
        • open-iscsi (if iSCSI connection is required).
        sudo apt-get install scsitools
        sudo apt-get install open-iscsi
    2. Configure Linux multipath devices on the host. Create and set the relevant storage system parameters in the /etc/multipath.conf file. You can also use the default multipath.conf file located in the /usr/share/doc/device-mapper-multipath-* directory.
      Verify that the systemctl status multipathd output indicates that the multipath status is active and error-free.
      • RHEL:
        
        sudo yum -y install device-mapper-multipath
        sudo modprobe dm-multipath
        systemctl start multipathd
        systemctl status multipathd
        multipath -ll
      • Ubuntu:
        
        sudo apt-get install multipath-tools
        sudo modprobe dm-multipath
        systemctl start multipathd
        systemctl status multipathd
        multipath -ll
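      The exact contents of /etc/multipath.conf depend on the storage system, so consult your storage documentation. As an illustrative sketch only, a minimal configuration might look like this:

      ```
      defaults {
          user_friendly_names yes    # map devices to mpathN names
          find_multipaths     yes    # create multipath devices only for true multipath disks
      }
      ```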
    3. Configure storage system connectivity.
      • Define the hostname of each Kubernetes node on the relevant storage systems with the valid WWPN or IQN of the node. The hostname on the storage system must be the same as the hostname defined in the Kubernetes cluster. Use the $> kubectl get nodes command to display the hostnames, as illustrated below. In this example, the k8s-worker-node1 and k8s-worker-node2 hostnames must be defined on the storage system.
        root@k8s-user-v18-master:~# kubectl get nodes
        NAME               STATUS   ROLES      AGE       VERSION
        k8s-master         Ready     master    34d       v1.8.4
        k8s-worker-node1   Ready     <none>    34d       v1.8.4
        k8s-worker-node2   Ready     <none>    34d       v1.8.4
        
      • After the node hostnames are defined, log into Spectrum Connect UI and refresh the relevant storage systems in the Storage System pane.
      • For iSCSI, perform the following three steps.
        • Make sure that the login to the iSCSI targets is persistent and remains in effect after a reboot of the worker node. To do this, verify that node.startup in the /etc/iscsi/iscsid.conf file is set to automatic. If not, set it as required and then restart the iscsid service ($> /etc/init.d/iscsid restart).
        • Discover and log into the iSCSI targets of the relevant storage systems.
          $> iscsiadm -m discoverydb -t st -p ${storage system iSCSI port IP}:3260
          --discover
          $> iscsiadm -m node -p ${storage system iSCSI port IP/hostname} --login
        • Verify that the login was successful and display all targets that you logged in to. The portal value must be the iSCSI target IP address.
          $> iscsiadm -m session --rescan
          Rescanning session [sid: 1, target: {storage system IQN},
          portal: {storage system iSCSI port IP},{port number}]
    4. Make sure that the attach/detach capability of the node's kubelet service is enabled, enable-controller-attach-detach=true (enabled by default). To verify the current status, run the following command and check that the Setting node annotation to enable volume controller attach/detach message is displayed:
      $> journalctl -u kubelet | grep 'Setting node annotation to .* volume controller attach/detach' | tail -1
      Jan 03 17:55:05 k8s-19-master-shay kubelet[3627]: I0103 17:55:05.437720 3627 
      kubelet_node_status.go:273] Setting node annotation to enable volume controller
      attach/detach

      If the volume controller attach/detach functionality is disabled, enable it, as detailed in Kubernetes documentation.
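      As a sketch, assuming a systemd-based deployment (the drop-in file location is an assumption and varies between installations), the capability can be enabled through the kubelet startup arguments:

      ```
      # Example systemd drop-in for kubelet (file location is an assumption):
      # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      Environment="KUBELET_EXTRA_ARGS=--enable-controller-attach-detach=true"
      ```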

  4. Perform these steps on every master node in the Kubernetes cluster:
    1. Enable the attach/detach capability for the kubelet service (enable-controller-attach-detach=true).
    2. If the controller-manager is configured to run as a pod in your Kubernetes cluster, you must allow for event recording in the controller-manager log file. To achieve this, add the default log directory path (/var/log) as a host path. You can change this directory by configuring the FLEX-LOG-DIR parameter in the ubiquity-configmap.yml file, as detailed in Updating the Enabler for Containers configuration files.
      • Stop the controller-manager pod by moving the kube-controller-manager.yml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yml /tmp.
      • Edit the kube-controller-manager.yml file: vi /tmp/kube-controller-manager.yml.
        • Add the following lines under the volumes tag.
          - hostPath:
              path: /var/log
              type: DirectoryOrCreate
            name: flexlog-dir
        • Add the following lines under the volumeMounts tag:
          - mountPath: /var/log
            name: flexlog-dir
        • Restart the controller-manager pod by moving the kube-controller-manager.yml file to its original location:
          mv /tmp/kube-controller-manager.yml /etc/kubernetes/manifests/.
        • Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
  5. If dedicated SSL certificates are required, see the relevant section of the Managing SSL certificates with IBM Storage Enabler for Containers procedure. If no certificate validation is required, you can use the self-signed certificates generated by default by the IBM Storage Enabler for Containers server and skip this procedure.
  6. IBM Storage Dynamic Provisioner for Kubernetes uses the Kubernetes configuration file to access the Kubernetes API server and monitor the Persistent Volume Claims (PVCs). Usually, the Kubernetes configuration file is located in either the ~/.kube/config or /etc/kubernetes directory. Make sure that this configuration file has access to all the namespaces intended for persistent volume provisioning. If a PVC comes from a namespace that cannot be accessed, it is not served.
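    As a quick sanity check (a sketch; the namespace names are placeholders), you can verify that the credentials in the configuration file are allowed to list PVCs in each namespace that is intended for provisioning:

    ```shell
    # Check that the current kubeconfig context can list PVCs
    # in every namespace that will request persistent volumes.
    # The namespace names below are placeholders.
    for ns in default my-app-namespace; do
      echo -n "$ns: "
      kubectl auth can-i list persistentvolumeclaims -n "$ns"
    done
    ```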
    If you plan to use IBM Cloud Private (ICP) as your orchestration platform, use the ICP user interface to access the .kube/config file. This allows you to configure the kubectl command-line client.
    1. Activate the ICP cluster management console.
    2. In the console dashboard, click on the current user name and select Configure client from the drop-down menu.
    3. Follow instructions in the IBM Spectrum Access Blueprint for IBM Cloud Private to configure the client.
  7. When using IBM Cloud Private with the Spectrum Virtualize Family products, use only hostnames, not IP addresses, for the Kubernetes cluster nodes. Then, in the config.yaml file, set the kubelet_nodename parameter to hostname so that the ICP nodes are installed with hostnames as well.
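    For example, the relevant fragment of the ICP config.yaml would be:

    ```
    # ICP cluster config.yaml (fragment)
    kubelet_nodename: hostname
    ```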