To enable concurrent maintenance, configure the system nodes in pairs. While one system node
is being serviced, the other node keeps the I/O group operational. With concurrent
maintenance, all field-replaceable units (FRUs) can be removed, replaced, and tested on one
system node while the network and host systems are powered on and doing productive work.
Attention: Do not remove the power from both system nodes unless the procedures
instruct you to do so.
Verify that concurrent maintenance is possible before you shut down a node that is part
of a system or before you delete the node from a system. To do so, complete the following checks.
- Confirm that no volumes have dependencies on the node.
In the management GUI, select . Right-click the appropriate node
to show a list of actions for that
node. Click Show Dependent Volumes to display all the volumes that depend on
a node. You can also use the node parameter with the lsdependentvdisks CLI
command to view dependent volumes.
If dependent volumes exist, determine whether the volumes are being
used. If the volumes are being used, either restore the redundant configuration or suspend the host
application. If a dependent quorum disk is reported, repair the access to the quorum disk or modify
the quorum disk configuration.
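As a sketch, the CLI check for dependent volumes might look like the following. The cluster address, user, and node name are placeholders, and the command reply is mocked here so the parsing step can be shown without a live system; only the `lsdependentvdisks` command itself comes from this procedure.

```shell
# Documented CLI command, normally run over SSH to the clustered system:
#   ssh superuser@cluster_ip lsdependentvdisks -delim : -node node1
# Placeholder values: cluster_ip, node1. The reply below is mocked for illustration.
output=$(printf 'vdisk_id:vdisk_name\n12:vol12\n13:vol13\n')

# The reply is a header line plus one line per dependent volume.
count=$(printf '%s\n' "$output" | tail -n +2 | wc -l)
if [ "$count" -gt 0 ]; then
    echo "node1 has $count dependent volumes; resolve them before shutdown"
else
    echo "node1 has no dependent volumes"
fi
```

If the count is nonzero, apply the steps above: restore the redundant configuration or suspend the host application before you continue.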
- Ensure that the host multipathing device drivers can fail over to the partner node.
Some host multipathing device drivers take a while to update after changes are
made on the fabric. Do not shut down a node or delete the node from a clustered system
unless the partner node in its I/O group has been online for more than 30 minutes.
If possible, check the status of the host multipathing device drivers before you
shut down a node to ensure that the device drivers can fail over to the partner node.
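On a Linux host that uses DM-Multipath, for example, one way to confirm that paths are healthy before the shutdown is to inspect `multipath -ll` output. This is a sketch with a mocked, trimmed reply, since the real output depends on the host and fabric configuration; the device names and WWID are hypothetical.

```shell
# Real check (run on the host):  multipath -ll
# A trimmed sample reply is mocked below so the path-state check can be shown.
mp_output='mpatha (3600507680...) dm-2 IBM,2145
`-+- policy=service-time-0 prio=50 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  `- 4:0:0:1 sdc 8:32 active ready running'

# Count paths in the "active ready running" state; with both nodes of the
# I/O group online, paths should remain through the partner node, so
# failover is possible when this node is shut down.
ready_paths=$(printf '%s\n' "$mp_output" | grep -c 'active ready running')
echo "paths ready for failover: $ready_paths"
```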
When you shut down the node, follow the procedure that is
described in MAP 5350: Powering off a node.
Attention: Do not power off any
expansion enclosures when you power off a node.
When you delete a node from the clustered system, retain the node information that is described
in Deleting a node from a clustered system by using the management GUI. This information helps you
avoid data corruption when you add the node back to the system. The topic describes how to ensure
that the multipathing device driver does not rediscover any paths that are manually removed. Other
considerations about dependent volumes are also provided.
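One way to retain the node information before the deletion is to capture the output of the `lsnode` CLI command to a file, so the details are at hand when you add the node back. This is a sketch: the cluster address and node name are placeholders, and the reply is mocked with a few representative fields.

```shell
# Documented command, normally run over SSH:
#   ssh superuser@cluster_ip lsnode node1
# Mocked reply standing in for a live system (representative fields only):
node_info='id 1
name node1
WWNN 500507680100ABCD
IO_group_id 0'

# Save the details for reuse when the node is re-added to the system.
printf '%s\n' "$node_info" > /tmp/node1_info.txt
fields=$(wc -l < /tmp/node1_info.txt)
echo "saved $fields fields for node1"
```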
For more information about working with dependent volumes, see the following topics: