Performing pre-installation tasks

The following conditions must be met before the installation:

  • Ensure that IBM Spectrum Scale version 5.x.x or later is installed, along with the IBM Spectrum Scale management API (GUI).
  • Verify that there is a proper communication link between the IBM Spectrum Scale Management API Server (GUI) and the Kubernetes cluster.
  • Ensure that all Kubernetes worker nodes have the IBM Spectrum Scale client installed on them.
  • Verify that quota is enabled for all the file systems being used for creating persistent volumes.
  • The file system used for the persistent volume must be mounted on all the worker nodes at all times.
  • Ensure that all the worker nodes are running Red Hat Enterprise Linux (RHEL) on x86_64, ppc64le, or s390x, or SLES 12 SP3 on s390x. For more information on supported RHEL versions, check the IBM Spectrum Scale support matrix at IBM Spectrum Scale FAQs.
  • All worker nodes must be running the same platform (hardware and Linux distribution).
  • The Kubernetes controller-manager process must run as root.
  • Ensure that IBM Cloud Private (ICP) or Kubernetes is installed. For supported versions, see the release notes of IBM Storage Enabler for Containers.
  • Ensure that SELinux is in disabled mode.
  • Ensure that the kubelet service on each node has the attach/detach capability enabled. The enable-controller-attach-detach option is set to true by default; however, confirm that it is set to true if you are debugging a functional problem.
  • If the controller-manager is configured to run as a pod in your Kubernetes cluster, you must allow events to be recorded in the controller-manager log file. To enable this, add the default path to the log file (/var/log) as a host path. You can change this directory by configuring the FLEX-LOG-DIR parameter in the ubiquity-configmap.yml file.
  • Run the mmlsmount all -L command to ensure that the GPFS file systems are mounted before starting Kubernetes on the nodes.
    The command gives an output similar to the following:
    
    File system gpfs0 is mounted on 6 nodes:
      192.168.138.94     borg45
      192.168.138.62     borg48
      172.16.7.41        borg44
      192.168.138.59     borg50
      192.168.138.95     borg47
      192.168.138.92     borg43
  • Run the mmlsfs gpfs0 -Q command to ensure that quota is enabled on the file systems.
    The command gives an output similar to the following:
    flag                value                    description
    ------------------- ------------------------ -----------------------------------
     -Q                 user;group;fileset       Quotas accounting enabled
                        user;group;fileset       Quotas enforced
                        none                     Default quotas enabled

    If the output does not show quotas enabled, run the mmchfs gpfs0 -Q yes command to enable them.
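    The quota check and enable steps above can be combined into a small script. This is a sketch only: quota_enabled is a hypothetical helper that inspects the mmlsfs -Q output, and gpfs0 is the example file system name from the output above.

```shell
#!/bin/sh
# Sketch only: gpfs0 is the example file system name; quota_enabled is a
# hypothetical helper that inspects "mmlsfs <fs> -Q" output.
FS=gpfs0

# Succeed when the -Q flag line reports user;group;fileset quota accounting.
quota_enabled() {
    printf '%s\n' "$1" | grep -q -e '-Q[[:space:]]*user;group;fileset'
}

# Run the live check only where the GPFS commands are installed.
if command -v mmlsfs >/dev/null 2>&1; then
    out=$(mmlsfs "$FS" -Q)
    if quota_enabled "$out"; then
        echo "Quota already enabled on $FS"
    else
        mmchfs "$FS" -Q yes    # enable quota, as documented above
    fi
fi
```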

  • Run the following command to ensure that the GUI server is running and can communicate with the Kubernetes nodes:
    
    curl -u "admin:admin001" -X GET https://9.11.213.85:443/scalemgmt/v2/cluster --cacert <cert name>
    The command gives an output similar to the following:
    {
      "filesystems" : [ {
        "name" : "gpfs0"
      } ],
      "status" : {
        "code" : 200,
        "message" : "The request finished successfully."
      }
    }
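    The GUI reachability check above can be scripted by asking curl for only the HTTP status code. This is a sketch, assuming the example address and credentials from the command above; check_status and the certificate file name are hypothetical.

```shell
#!/bin/sh
# Sketch only: the GUI address and credentials are the example values from
# the command above; CACERT is a hypothetical certificate path.
GUI=https://9.11.213.85:443
CACERT=gui-cert.pem

# check_status: succeed only on HTTP 200.
check_status() {
    [ "$1" = "200" ]
}

# Run the live check only where the certificate file is available.
if [ -r "$CACERT" ]; then
    # -w '%{http_code}' prints only the HTTP status; -o discards the body.
    code=$(curl -s -o /dev/null -w '%{http_code}' \
        -u 'admin:admin001' --cacert "$CACERT" \
        -X GET "$GUI/scalemgmt/v2/cluster")
    if check_status "$code"; then
        echo "GUI server is reachable"
    else
        echo "GUI check failed with HTTP status $code" >&2
    fi
fi
```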
  • Run the mmchconfig enforceFilesetQuotaOnRoot=yes command to set the enforceFilesetQuotaOnRoot value to yes. This ensures that quotas are enforced for PVCs created with the root user ID.
  • Run the mmlsnodeclass command to ensure that Kubernetes is not installed or configured on the nodes running the IBM Spectrum Scale GUI. This prevents port number conflicts and memory usage concerns.
    The command gives an output similar to the following. In this example, the node running the IBM Spectrum Scale GUI is listed under the node class GUI_MGMT_SERVERS, and its hostname is borg43.<domain>.
    
    Node Class Name       Members
    --------------------- -----------------------------------------------------------
    GUI_MGMT_SERVERS      borg43.<domain>
    GUI_SERVERS           borg45.<domain>,borg47.<domain>
                          borg48.<domain>,borg43.<domain>
    
  • Ensure that IBM Spectrum Scale is tuned for the Kubernetes pod workload and the memory requirement of pods.
  • Perform these steps for every master node in the Kubernetes cluster:
    1. Enable the attach/detach capability for the kubelet service (enable-controller-attach-detach=true). It is enabled by default.
    2. For Kubernetes versions lower than 1.12, if the controller-manager is configured to run as a pod in your Kubernetes cluster, you must allow for event recording in the controller-manager log file. To achieve this, add the default path to the log file (/var/log) as a host path. You can change this directory by configuring the ubiquityK8sFlex.flexLogDir parameter in the values.yml file.
      • Stop the controller-manager pod by moving the kube-controller-manager.yaml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp.
      • Edit the kube-controller-manager.yaml file: vi /tmp/kube-controller-manager.yaml.
        • Add the following lines under the volumes tag.
          - hostPath:
              path: /var/log
              type: DirectoryOrCreate
            name: flexlog-dir
        • Add the following lines under the volumeMounts tag:
          - mountPath: /var/log
            name: flexlog-dir
        • Restart the controller-manager pod by moving the kube-controller-manager.yaml file to its original location:
          mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/.
        • Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
    3. flexvolume-dir must be available within kube-controller-manager.
      • Verify that flexvolume-dir (/usr/libexec/kubernetes/kubelet-plugins/volume/exec) is mounted inside kube-controller-manager.

        Use the kubectl describe pod <kube-controller-manager-pod-id> -n kube-system command to show the details of the kube-controller-manager pod, including the flexvolume-dir mount.

        The output should look as follows:
        flexvolume-dir:
        Type: HostPath (bare host directory volume)
        Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
        HostPathType: DirectoryOrCreate

        If flexvolume-dir is not present, continue with the following steps.

      • Stop the controller-manager pod by moving the kube-controller-manager.yaml file to a temporary directory: mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/kube-controller-manager.yaml.
      • Edit the /tmp/kube-controller-manager.yaml file.
        • Add the following lines under the volumeMounts tag:
          - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
            name: flexvolume-dir
        • Add the following lines under the volumes tag:
          - hostPath:
              path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              type: DirectoryOrCreate
            name: flexvolume-dir
        • Restart the controller-manager pod by moving the kube-controller-manager.yaml file to its original location:
          mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/.
        • Verify that the controller-manager pod is in the Running state: kubectl get pod -n kube-system | grep controller-manager.
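    Taken together, the volume entries from the steps above take roughly this shape inside kube-controller-manager.yaml. This is a fragment only; all other fields of the manifest are elided.

```yaml
# Fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml showing
# only the entries added in the steps above; every other field is omitted.
spec:
  containers:
  - name: kube-controller-manager
    volumeMounts:
    - mountPath: /var/log
      name: flexlog-dir
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
  volumes:
  - hostPath:
      path: /var/log
      type: DirectoryOrCreate
    name: flexlog-dir
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
```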
  • Define a namespace to be used for creating secrets.
    • Kubernetes:
      kubectl create ns <namespaces_name>
    • ICP:
      1. In the ICP GUI, go to Manage > Namespaces.
      2. Click Create Namespace. In the Create Namespace dialog box, provide a namespace name and its pod security policy.

        The predefined pod security policy name is ibm-anyuid-hostpath-psp, and it has been verified for this Helm chart. If your target namespace is bound to this pod security policy, you can proceed with the chart installation. If you choose another pod security policy, you must enable the default pod security policy, and use the predefined cluster role: ibm-anyuid-hostpath-clusterrole.

        Figure 1. Create Namespace dialog box
        This image shows the Create Namespace dialog box.
  • Create two secrets: one Enabler for Containers secret for its database and one Enabler for Containers secret for IBM Spectrum Scale. Verify that the IBM Spectrum Scale secret username and password are the same as the Enabler for Containers interface username and password in the IBM Spectrum Scale GUI.
    • Kubernetes:
      1. Create a secret for the database by using the following command:
        kubectl create secret generic <ubiquity_db_credentials_secret_name> --from-literal=dbname=<db_name>
         --from-literal=username=<username> --from-literal=password=<password>  -n <namespace>
        where
        ubiquity_db_credentials_secret_name
        Secret name of your choice. The same secret name should be used in ubiquityDb.dbCredentials.existingSecret parameter of values.yaml.
        db_name
        Specify the value as ubiquity.
        username
        Username of your choice.
        password
        Password of your choice.
        namespace
        Namespace under which the secret is to be created.
      2. Create a secret for the IBM Spectrum Scale Management API (GUI) by using the following command:
        
        kubectl create secret generic <ubiquity_spectrumscale_credentials_secret_name> --from-literal=username=<username>
         --from-literal=password=<password>  -n <namespace>
        where
        ubiquity_spectrumscale_credentials_secret_name
        Secret name of your choice. The same secret name should be used in ubiquity.spectrumScale.connectionInfo.existingSecret parameter of values.yaml.
        username
        IBM Spectrum Scale Management API (GUI) username. For a remote mount, provide the remote IBM Spectrum Scale cluster management API (GUI) username.
        password
        IBM Spectrum Scale Management API (GUI) password. For a remote mount, provide the remote IBM Spectrum Scale cluster management API (GUI) password.
        namespace
        Namespace under which the secret is to be created.
    • ICP:
      1. In the ICP GUI, go to Configuration > Secrets.
      2. Click Create Secret. In the Create Secret dialog box, provide the following values for the Enabler for Containers database:
        • In the General tab, select Namespace, and enter the namespace name, added in the previous step.
        • In the Data tab, specify the following values:
          dbname
          The name of the first entry should be set to dbname, and the value should be set to dWJpcXVpdHk=.
          Note: dWJpcXVpdHk= is the Base64-encoded representation of the value ubiquity.
          username
          The name of the second entry should be set to username, and the value should be a username of your choice in Base64-encoded format.
          password
          The name of the third entry should be set to password, and the value should be a password of your choice in Base64-encoded format.
        Figure 2. Create Secret dialog box
        This image shows the Create Secret dialog box.
      3. Click Create to finish.
      4. Repeat the secret creation procedure for the IBM Spectrum Scale management API (GUI) server secret.
        • In the General tab, select Namespace, and enter the namespace name, added in the previous step.
        • In the Data tab, specify the following values:
          username
          The name of the first entry should be set to username, and the value should be the IBM Spectrum Scale Management API (GUI) username in Base64-encoded format. For a remote mount, provide the remote IBM Spectrum Scale cluster management API (GUI) username in Base64-encoded format.
          password
          The name of the second entry should be set to password, and the value should be the IBM Spectrum Scale Management API (GUI) password in Base64-encoded format. For a remote mount, provide the remote IBM Spectrum Scale cluster management API (GUI) password in Base64-encoded format.
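        The Base64 values above can be produced with the standard base64 utility; printf avoids the trailing newline that echo would otherwise include in the encoded value. The username admin below is a hypothetical example value.

```shell
# Encode the literal string "ubiquity" with no trailing newline.
printf '%s' ubiquity | base64
# dWJpcXVpdHk=

# Encode a username of your choice the same way ("admin" is only an example).
printf '%s' admin | base64
# YWRtaW4=
```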
  • If dedicated SSL certificates are required, see the relevant section of the Managing SSL certificates procedure. If no validation is required and you can use the self-signed certificates generated by default by the IBM Storage Enabler for Containers server, skip this procedure.