Enabling concurrent maintenance

To enable concurrent maintenance, configure IBM Spectrum Virtualize™ nodes in pairs. If one IBM Spectrum Virtualize node is being serviced, the other node in the pair keeps the SAN operational.

With concurrent maintenance, the hardware on one IBM Spectrum Virtualize node can be serviced or replaced while the SAN and host systems are doing productive work.

Attention: Do not remove the power from both IBM Spectrum Virtualize nodes unless the procedures instruct you to do so.

Verify that concurrent maintenance is enabled before you shut down a node that is part of a system or delete a node from a system. To do so, complete the following checks.

  1. Confirm that no volumes have dependencies on the node.

In the management GUI, select Monitoring > System. Right-click the node that you plan to shut down or delete from the system to show a list of actions for that node. Click Show Dependent Volumes to display all the volumes that depend on that node. You can also use the node parameter with the lsdependentvdisks CLI command to view dependent volumes.

    If dependent volumes exist, determine whether the volumes are being used. If the volumes are being used, either restore the redundant configuration or suspend the host application. If a dependent quorum disk is reported, repair the access to the quorum disk or modify the quorum disk configuration.

  2. Ensure that the host multipathing device drivers can fail over to the partner node.

Some host multipathing device drivers take a while to update after changes are made on the fabric. Do not shut down a node or delete the node from a cluster unless the partner node in its I/O group has been online for at least 30 minutes.

    If possible, check the status of the host multipathing device drivers before shutting down a node to ensure that the device drivers can fail over to the partner node.
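The two checks above can be sketched at the command line. The following is a minimal sketch that assumes a node named node1 (a placeholder for your node name or ID) and a Linux host that uses the native Device Mapper Multipath driver; the host-side command varies by operating system and multipathing driver:

```shell
# On the system CLI: list the volumes that depend on the node you
# plan to shut down or delete (node1 is a placeholder).
lsdependentvdisks -node node1

# An empty result means no volumes depend on the node. Any volumes
# listed must be made redundant or quiesced before you proceed.

# On a Linux host: confirm that each volume still has active paths
# through the partner node before you take node1 offline.
multipath -ll
```

In the multipath -ll output, verify that paths through the partner node show an active status for every affected volume before you continue.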

When you shut down the node, follow the procedure that is described in MAP 5350: Powering off a node.

When you delete a node from the clustered system, retain the node information that is described in Deleting a node from a clustered system by using the management GUI. This information helps you avoid data corruption when you add the node back to the system. That topic also describes how to ensure that the multipathing device driver does not rediscover paths that were manually removed, and provides other considerations for dependent volumes.

For more information about working with dependent volumes, see the following topics: