Compatibility and requirements
For complete and up-to-date information about compatibility and requirements for using IBM Storage Enabler for Containers with Kubernetes, refer to its latest release notes. The release notes detail supported operating system and Kubernetes versions, as well as microcode versions of the supported storage systems. You can find the latest release notes on IBM Knowledge Center.
About this task
Procedure
- Contact your storage administrator and make sure that IBM Spectrum Connect has been installed, the IBM Storage Enabler for Containers interface has been added to the active Spectrum Connect instance, and at least one storage service has been delegated to it. See Managing integration with IBM Spectrum Connect and Delegating storage services to the IBM Storage Enabler for Containers interface for details.
- Verify that there is a proper communication link between Spectrum Connect and the Kubernetes cluster.
- Perform these steps for each worker node in the Kubernetes cluster:
- Install the following Linux packages to ensure Fibre Channel and iSCSI connectivity. Skip this step if the packages are already installed.
  - RHEL:
    - sg3_utils
    - iscsi-initiator-utils (if iSCSI connection is required)
    - sysfsutils (if Fibre Channel connection is required)

    sudo yum -y install sg3_utils
    sudo yum -y install iscsi-initiator-utils
    sudo yum -y install sysfsutils
  - Ubuntu:
    - scsitools
    - open-iscsi (if iSCSI connection is required)
    - sysfsutils (if Fibre Channel connection is required)

    sudo apt-get install scsitools
    sudo apt-get install open-iscsi
    sudo apt-get install sysfsutils
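The per-distribution package sets above can be captured in a small helper script. This is only a sketch: the function name select_packages is hypothetical, and the package lists are copied from the step above.

```shell
# Hypothetical helper: map a distribution name to the connectivity packages
# listed in the step above. The function name and structure are illustrative.
select_packages() {
    case "$1" in
        rhel)   echo "sg3_utils iscsi-initiator-utils sysfsutils" ;;
        ubuntu) echo "scsitools open-iscsi sysfsutils" ;;
        *)      return 1 ;;
    esac
}

# Example: print the install command for review without running it
echo "sudo yum -y install $(select_packages rhel)"
```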
- Configure Linux multipath devices on the host. Create and set the relevant storage system
parameters in the /etc/multipath.conf file. You can also use the default
multipath.conf file located in the
/usr/share/doc/device-mapper-multipath-* directory.
Verify that the systemctl status multipathd output indicates that the multipath status is active and error-free.
  - RHEL:

    yum install device-mapper-multipath
    sudo modprobe dm-multipath
    systemctl start multipathd
    systemctl status multipathd
    multipath -ll
  - Ubuntu:

    apt-get install multipath-tools
    sudo modprobe dm-multipath
    systemctl start multipathd
    systemctl status multipathd
    multipath -ll
  - SLES:
    Note: For SLES, the multipath-tools package version must be 0.7.1 or above.

    zypper install sg3_utils multipath-tools
    systemctl start multipathd
    systemctl status multipathd
    multipath -ll
Important: When configuring Linux multipath devices, verify that the find_multipaths parameter in the multipath.conf file is disabled.
  - RHEL: Remove the find_multipaths yes string from the multipath.conf file.
  - Ubuntu: Add the find_multipaths no string to the multipath.conf file, as shown below:

    defaults {
        find_multipaths no
    }
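The find_multipaths check above can be scripted with a simple grep. The helper below is a sketch only (check_find_multipaths is not part of any product), and it is run here against a temporary example file rather than the real /etc/multipath.conf:

```shell
# Hypothetical helper: succeed only if find_multipaths is not enabled in the
# given multipath.conf (parameter absent, or explicitly set to "no").
check_find_multipaths() {
    conf="$1"
    if grep -Eq '^[[:space:]]*find_multipaths[[:space:]]+yes' "$conf"; then
        return 1   # enabled: remove it (RHEL) or set it to "no" (Ubuntu)
    fi
    return 0
}

# Example: a minimal Ubuntu-style multipath.conf with the parameter disabled
cat > /tmp/multipath.conf.example <<'EOF'
defaults {
    find_multipaths no
}
EOF
check_find_multipaths /tmp/multipath.conf.example && echo "find_multipaths is disabled"
```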
- Configure storage system connectivity.
- Define the hostname of each Kubernetes node on the relevant storage systems with the valid WWPN or IQN of the node. The hostname on the storage system must be the same as the hostname defined in the Kubernetes cluster. Use the $> kubectl get nodes command to display the hostnames, as illustrated below. In this example, the k8s-worker-node1 and k8s-worker-node2 hostnames must be defined on a storage system.
Note: In most cases, the local hostname of the node is the same as the Kubernetes node hostname as displayed in the kubectl get nodes command output. However, if the names are different, make sure to use the Kubernetes node name, as it appears in the command output.
root@k8s-user-v18-master:~# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
k8s-master         Ready     master    34d       v1.8.4
k8s-worker-node1   Ready     <none>    34d       v1.8.4
k8s-worker-node2   Ready     <none>    34d       v1.8.4

- After the node hostnames are defined, log into the Spectrum Connect UI and refresh the relevant storage systems in the Storage System pane.
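Before defining a node on the storage system, it can help to confirm that the name you intend to use really appears in the kubectl get nodes output. The sketch below hardcodes the sample output from above for illustration; in practice you would capture the real command output yourself.

```shell
# Illustrative check: the `kubectl get nodes` output is hardcoded here as a
# sample. In a real cluster, capture it with: nodes_output=$(kubectl get nodes)
nodes_output='NAME               STATUS    ROLES     AGE       VERSION
k8s-master         Ready     master    34d       v1.8.4
k8s-worker-node1   Ready     <none>    34d       v1.8.4
k8s-worker-node2   Ready     <none>    34d       v1.8.4'

# Extract the NAME column, skipping the header row
node_names=$(printf '%s\n' "$nodes_output" | awk 'NR>1 {print $1}')

# Hypothetical helper: is the given hostname a known Kubernetes node name?
is_known_node() {
    printf '%s\n' "$node_names" | grep -qx "$1"
}

is_known_node "k8s-worker-node1" && echo "match: use this name on the storage system"
```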
- For iSCSI, perform these three steps.
- Make sure that the login used to log in to the iSCSI targets is permanent and remains available after a reboot of the worker node. To do this, verify that the node.startup parameter in the /etc/iscsi/iscsid.conf file is set to automatic. If not, set it as required and then restart the iscsid service ($> service iscsid restart).
- Discover and log into at least two iSCSI targets on the relevant storage systems.

$> iscsiadm -m discoverydb -t st -p ${storage system iSCSI port IP}:3260 --discover
$> iscsiadm -m node -p ${storage system iSCSI port IP/hostname} --login

- Verify that the login was successful and display all targets that you logged in to. The portal value must be the iSCSI target IP address.

$> iscsiadm -m session --rescan
Rescanning session [sid: 1, target: {storage system IQN}, portal: {storage system iSCSI port IP},{port number}]
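The node.startup check from the first iSCSI step can be sketched as a small script. The helper name ensure_automatic_startup is hypothetical, and it is run here against a temporary example file, not the live /etc/iscsi/iscsid.conf:

```shell
# Hypothetical helper: make sure node.startup is set to "automatic" in the
# given iscsid.conf. After changing the real file, restart the iscsid service.
ensure_automatic_startup() {
    conf="$1"
    if grep -Eq '^[[:space:]]*node\.startup[[:space:]]*=[[:space:]]*automatic' "$conf"; then
        echo "node.startup is already automatic"
    else
        # Replace any existing node.startup setting with "automatic"
        sed -i 's/^[[:space:]]*node\.startup[[:space:]]*=.*/node.startup = automatic/' "$conf"
        echo "node.startup updated; restart iscsid to apply"
    fi
}

# Example run against a temporary copy with the non-persistent setting
printf 'node.startup = manual\n' > /tmp/iscsid.conf.example
ensure_automatic_startup /tmp/iscsid.conf.example
```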
- If using a Kubernetes version lower than 1.12, make sure that the node kubelet service has the attach/detach capability enabled, enable-controller-attach-detach=true (enabled by default). To verify the current status, run the following command and check that the Setting node annotation to enable volume controller attach/detach message is displayed:
$> journalctl -u kubelet | grep 'Setting node annotation to .* volume controller attach/detach' | tail -1
Jan 03 17:55:05 k8s-19-master-shay kubelet[3627]: I0103 17:55:05.437720 3627 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach

If the volume controller attach/detach functionality is disabled, enable it, as detailed in the Kubernetes documentation.
- Perform these steps for every master node in the Kubernetes cluster:
- Enable the attach/detach capability for the kubelet service (enable-controller-attach-detach=true). It is enabled by default.
- For Kubernetes versions lower than 1.12, if the controller-manager is configured to run as a pod in your Kubernetes cluster, you must allow for event recording in the controller-manager log file. To achieve this, add the default path to the log file (/var/log) as a host path. You can change this directory by configuring the ubiquityK8sFlex.flexLogDir parameter in the values.yml file.
- Stop the controller-manager pod by moving the kube-controller-manager.yml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yml /tmp.
- Edit the kube-controller-manager.yml file: vi
/tmp/kube-controller-manager.yml.
- Add the following lines under the volumes tag:

  - hostPath:
      path: /var/log
      type: DirectoryOrCreate
    name: flexlog-dir

- Add the following lines under the volumeMounts tag:

  - mountPath: /var/log
    name: flexlog-dir

- Restart the controller-manager pod by moving the kube-controller-manager.yml file to its original location: mv /tmp/kube-controller-manager.yml /etc/kubernetes/manifests/.
- Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
- flexvolume-dir must be available within
kube-controller-manager.
- Verify that flexvolume-dir
(/usr/libexec/kubernetes/kubelet-plugins/volume/exec) is mounted inside
kube-controller-manager.
Use the $ kubectl describe pod <kube-controller-manager-pod-id> -n kube-system command to show the details of the kube-controller-manager, which includes the flexvolume-dir mount.
The output should look as follows:

flexvolume-dir:
  Type:          HostPath (bare host directory volume)
  Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
  HostPathType:  DirectoryOrCreate

If flexvolume-dir is not present, continue with the following steps.
- Stop the controller-manager pod by moving the kube-controller-manager.yaml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/kube-controller-manager.yaml.
- Edit the /tmp/kube-controller-manager.yaml file.
- Add the following lines under the volumeMounts tag:

  - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    name: flexvolume-dir

- Add the following lines under the volumes tag:

  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir

- Restart the controller-manager pod by moving the kube-controller-manager.yaml file to its original location: mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/.
- Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
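The flexvolume-dir presence check described above can also be scripted against saved kubectl describe output. The sketch below hardcodes a sample of that output for illustration; in practice you would capture the real output of the kubectl describe pod command yourself.

```shell
# Sketch: detect the flexvolume-dir hostPath mount in saved output from
# `kubectl describe pod <kube-controller-manager-pod-id> -n kube-system`.
# The sample text below is hardcoded for illustration only.
describe_output='Volumes:
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate'

# Hypothetical helper: require both the volume name and the expected path
has_flexvolume_dir() {
    printf '%s\n' "$1" | grep -q 'flexvolume-dir:' &&
    printf '%s\n' "$1" | grep -q '/usr/libexec/kubernetes/kubelet-plugins/volume/exec'
}

has_flexvolume_dir "$describe_output" && echo "flexvolume-dir is mounted"
```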
- Define a namespace to be used for creating secrets.
- Kubernetes:

  kubectl create ns <namespace_name>

- ICP:
- In the ICP GUI, go to Manage > Namespaces.
- Click Create Namespace. In the Create Namespace dialog box, provide a namespace name and its pod security policy.
The recommended predefined pod security policy name is ibm-anyuid-hostpath-psp, and it has been verified for this Helm chart. If your target namespace is bound to this pod security policy, you can proceed with the chart installation. If you choose another pod security policy, you must enable the default pod security policy and use the predefined cluster role: ibm-anyuid-hostpath-clusterrole.
Figure 1. Create Namespace dialog box 
- Create two secrets: one for the Enabler for Containers database and one for IBM Spectrum Connect. (Verify that the Spectrum Connect secret username and password are the same as the Enabler for Containers interface username and password in the Spectrum Connect UI.)
- Kubernetes:

  kubectl create secret generic <ubiquity_db_credentials_secret_name> --from-literal=dbname=ubiquity --from-literal=username=<username> --from-literal=password=<password> -n <namespace>
  kubectl create secret generic <ubiquity_scb_credentials_secret_name> --from-literal=username=<username> --from-literal=password=<password> -n <namespace>

- ICP:
- In the ICP GUI, go to Configuration > Secrets.
- Click Create Secret. In the Create Secret dialog
box, provide the following values for the Enabler for Containers database:
- In the General tab, select Namespace, and enter the namespace name, added in the previous step.
- In the Data tab, add the Base64-encoded Name values: ubiquity, username and password.
Figure 2. Create Secret dialog box 
- Click Create to finish.
- Repeat the secret creation procedure for the IBM Spectrum Connect secret:
- In the General tab, select Namespace, and enter the namespace name, added in the previous step.
- In the Data tab, add the Base64-encoded Name values: username and password.
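The Base64-encoded values for the Data tab can be produced with the standard base64 utility. The username and password below are placeholders for illustration; substitute your real credentials.

```shell
# Encode placeholder credentials for the ICP Data tab (use your real values).
# printf avoids the trailing newline that echo would include in the encoding.
username_b64=$(printf '%s' 'admin' | base64)
password_b64=$(printf '%s' 'passw0rd' | base64)
echo "$username_b64"   # YWRtaW4=
echo "$password_b64"
```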
- If dedicated SSL certificates are required, see the relevant section of the Managing SSL certificates procedure. If no validation is required and you can use the self-signed certificates generated by default by the IBM Storage Enabler for Containers server, skip this procedure.
- When using IBM Cloud Private with the Spectrum Virtualize Family products, use only hostnames, and not IP addresses, for the Kubernetes cluster nodes. Then, in the config.yaml file, set the kubelet_nodename parameter to hostname to install the ICP nodes with hostnames as well.