Compatibility and requirements for IBM Storage Enabler for Containers
For complete and up-to-date information about the compatibility and requirements for using IBM Storage Enabler for Containers with Kubernetes, refer to the latest IBM® Spectrum Control Base Edition release notes. The release notes detail the supported operating system and Kubernetes versions, as well as the microcode versions of the supported storage systems. You can find the latest Spectrum Control Base release notes on IBM Knowledge Center.
Procedure
- Contact your storage administrator and make sure that the IBM Storage Enabler for Containers interface has been added to the active Spectrum Control Base instance and that at least one storage service has been delegated to it. See Managing integration with IBM Storage Enabler for Containers and Delegating storage services to the IBM Storage Enabler for Containers interface for details.
- Verify that there is a proper communication link between Spectrum Control Base and the Kubernetes cluster.
- Perform these steps for every worker node in the Kubernetes cluster:
- Install the following Linux packages to ensure Fibre Channel and iSCSI connectivity. Skip this step if the packages are already installed.
  - RHEL:
    - sg3-utils
    - iscsi-initiator-utils (if iSCSI connection is required)

    sudo yum -y install sg3-utils
    sudo yum -y install iscsi-initiator-utils
  - Ubuntu:
    - scsitools
    - open-iscsi (if iSCSI connection is required)

    sudo apt-get install scsitools
    sudo apt-get install open-iscsi
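Before installing, you can check which of the listed packages are still missing. This is a minimal sketch, assuming a yum-based (RHEL) or apt-based (Ubuntu) host; it only reports, it does not install anything:

```shell
# Sketch: report which connectivity packages are not yet installed.
# Assumes a yum-based (RHEL) or apt-based (Ubuntu) host.
if command -v yum >/dev/null 2>&1; then
  pkgs="sg3-utils iscsi-initiator-utils"
  is_installed() { rpm -q "$1" >/dev/null 2>&1; }
else
  pkgs="scsitools open-iscsi"
  is_installed() { dpkg -s "$1" >/dev/null 2>&1; }
fi

missing=""
for p in $pkgs; do
  is_installed "$p" || missing="$missing $p"
done
echo "missing packages:${missing:- none}"
```

Any package named in the output should then be installed with the commands above.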
- Configure Linux multipath devices on the host. Create and set the relevant storage system parameters in the /etc/multipath.conf file. You can also use the default multipath.conf file located in the /usr/share/doc/device-mapper-multipath-* directory. Verify that the systemctl status multipathd output indicates that the multipath status is active and error-free.
  - RHEL:

    yum install device-mapper-multipath
    sudo modprobe dm-multipath
    systemctl start multipathd
    systemctl status multipathd
    multipath -ll
  - Ubuntu:

    apt-get install multipath-tools
    sudo modprobe dm-multipath
    systemctl start multipathd
    systemctl status multipathd
    multipath -ll
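As a starting point, a minimal /etc/multipath.conf might look like the following. This is an illustrative sketch only: the parameters that actually matter depend on the attached storage system, so consult its documentation for the recommended defaults and device sections.

```
defaults {
    path_checker        tur
    user_friendly_names yes
    find_multipaths     yes
}
```

After editing the file, restart multipathd and confirm with multipath -ll that the paths to the storage system are visible.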
- Configure storage system connectivity.
- Define the hostname of each Kubernetes node on the relevant storage systems with the valid WWPN or IQN of the node. The hostname on the storage system must be the same as the hostname defined in the Kubernetes cluster. Use the $> kubectl get nodes command to display the hostnames, as illustrated below. In this example, the k8s-worker-node1 and k8s-worker-node2 hostnames must be defined on a storage system.
  Note: In most cases, the local hostname of the node is the same as the Kubernetes node hostname, as displayed in the kubectl get nodes command output. However, if the names differ, make sure to use the Kubernetes node name, as it appears in the command output.

  root@k8s-user-v18-master:~# kubectl get nodes
  NAME               STATUS   ROLES    AGE   VERSION
  k8s-master         Ready    master   34d   v1.8.4
  k8s-worker-node1   Ready    <none>   34d   v1.8.4
  k8s-worker-node2   Ready    <none>   34d   v1.8.4
- For iSCSI, perform these two steps.
- Make sure that the login to the iSCSI targets is persistent and remains in effect after a reboot of the worker node. To do this, verify that the node.startup parameter in the /etc/iscsi/iscsid.conf file is set to automatic. If it is not, set it as required and then restart the iscsid service ($> /etc/init.d/iscsid restart).
- Discover and log into the iSCSI targets of the relevant storage systems.
$> iscsiadm -m discoverydb -t st -p ${storage system iSCSI port IP}:3260 --discover
$> iscsiadm -m node -p ${storage system iSCSI port IP/hostname} --login
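The node.startup check from the first iSCSI step can be scripted. A minimal sketch; ensure_automatic is a helper name introduced here for illustration, not part of any product:

```shell
# ensure_automatic FILE: make sure node.startup is set to "automatic"
# in the given iscsid.conf-style file (assumed syntax: "key = value").
ensure_automatic() {
  if grep -q '^node.startup = automatic' "$1"; then
    echo "node.startup already automatic"
  else
    sed -i 's/^node.startup = .*/node.startup = automatic/' "$1"
    echo "node.startup set to automatic; restart iscsid to apply"
  fi
}

# Typical use (requires root):
#   ensure_automatic /etc/iscsi/iscsid.conf && /etc/init.d/iscsid restart
```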
- Make sure that the node kubelet service has the attach/detach capability enabled, enable-controller-attach-detach=true (enabled by default). To verify the current status, run the following command and check that the Setting node annotation to enable volume controller attach/detach message is displayed:

  $> journalctl -u kubelet | grep 'Setting node annotation to .* volume controller attach/detach' | tail -1
  Jan 03 17:55:05 k8s-19-master-shay kubelet[3627]: I0103 17:55:05.437720 3627 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach

  If the volume controller attach/detach functionality is disabled, enable it, as detailed in the Kubernetes documentation.
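Besides grepping the kubelet log, the setting can be checked directly on the kubelet command line. A hedged sketch; attach_detach_state is a helper name introduced here for illustration:

```shell
# attach_detach_state ARGS: given a kubelet command line, report whether
# the volume controller attach/detach capability is enabled. The kubelet
# default is enabled, so the flag only matters when explicitly false.
attach_detach_state() {
  case "$1" in
    *--enable-controller-attach-detach=false*) echo disabled ;;
    *) echo enabled ;;
  esac
}

# Typical use against the running kubelet:
#   attach_detach_state "$(ps -o args= -C kubelet)"
```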
- Perform these steps for every master node in the Kubernetes cluster:
- Enable the attach/detach capability for the kubelet service (enable-controller-attach-detach=true).
- Configure the controller-manager pod to let it access the Kubernetes plug-in directory (/usr/libexec/kubernetes/kubelet-plugins/volume/exec), where the FlexVolume driver is located. This step is required when the controller-manager on the master nodes is deployed as a static pod under Kubernetes versions 1.6 and 1.7. Skip this step if you use Kubernetes version 1.8 or if you run the controller-manager as a regular Linux service, which already has access to the required folder.
  - Stop the controller-manager pod by moving the kube-controller-manager.yml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yml /tmp.
  - Edit the kube-controller-manager.yml file: vi /tmp/kube-controller-manager.yml.
  - Add the following lines under the volumes tag:

    - hostPath:
        path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
        type: DirectoryOrCreate
      name: flexvolume-dir
  - Add the following lines under the volumeMounts tag:

    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
  - Restart the controller-manager pod by moving the kube-controller-manager.yml file to its original location: mv /tmp/kube-controller-manager.yml /etc/kubernetes/manifests/.
  - Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
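For orientation, the edited manifest might look roughly as follows once both additions are in place. This is an illustrative sketch, not a complete kube-controller-manager.yml: unrelated fields are omitted and the image value is a placeholder for whatever your distribution already uses.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: <unchanged>          # placeholder: keep your existing image line
    volumeMounts:
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
  volumes:
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
```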
- If dedicated SSL certificates are required, see the relevant section of the Managing SSL certificates with IBM Storage Enabler for Containers procedure. If no validation is required, you can use the self-signed certificates generated by default by the IBM Storage Enabler for Containers server and skip this procedure.