rmnode
The rmnode command deletes a node (or spare node) from the clustered system. You can enter this command at any time after a clustered system is created. This command makes the node a candidate that is ready to be added back into this clustered system or another system.
Syntax
rmnode [ -force ] [ -deactivatespare ] { object_id | object_name }
Parameters
- -force
- (Optional) Overrides the checks that this command runs. The parameter overrides the following two checks:
- If the command results in volumes going offline, the command fails unless the force parameter is used.
- If the command results in a loss of data because there is unwritten data in the write cache that is contained only within the node to be removed, the command fails unless the force parameter is used.
- -deactivatespare
- (Optional) Specifies to remove an offline node that is protected by an online spare node. If the command succeeds, the offline node is deleted from the cluster, and the online spare node is deactivated and returns to being a spare.
Important: The I/O group loses the redundancy protection of the spare, and the remaining node enters write-through mode until another node is added.
- object_id | object_name
- (Required) Specifies the object name or ID of the node that you want to delete. The variable that follows the parameter is either:
- The object name that you assigned when you added the node to the clustered system
- The object ID that is assigned to the node (not the worldwide node name)
Description
This command deletes a node from the clustered system. This makes the node a candidate to be added back into this clustered system or into another system. After the node is deleted, the other node in the I/O group enters write-through mode until another node is added back into the I/O group.
When the node is deleted, the following items are removed:
- Small Computer System Interface-3 (SCSI-3) reservations (through that node)
- Small Computer System Interface-3 (SCSI-3) registrations (through that node)
By default, the rmnode command flushes the cache on the specified node before the node is taken offline. In some circumstances, such as when the system is already degraded (for example, when both nodes in the I/O group are online and the virtual disks within the I/O group are degraded), the system ensures that data loss does not occur as a result of deleting the only node with the cache data.
The cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group.
To take the specified node offline immediately, without flushing the cache or verifying that data loss will not occur, run the rmnode command with the -force parameter.
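The safety checks described above (and overridden by -force) can be sketched as a small decision helper. This is an illustrative model only; the function and field names are assumptions for this sketch and are not part of the actual CLI or system firmware.

```python
# Illustrative sketch of the rmnode safety checks described above.
# The names below are assumptions for illustration, not real CLI APIs.
from dataclasses import dataclass

@dataclass
class RemovalCheck:
    volumes_would_go_offline: bool   # removal would take volumes offline
    unflushed_cache_only_here: bool  # write-cache data exists only on this node

def can_remove_node(check: RemovalCheck, force: bool = False) -> bool:
    """Return True if rmnode would proceed for this node.

    Without -force, the command fails if either safety check trips.
    With -force, both checks (and the cache flush) are skipped,
    which risks data loss.
    """
    if force:
        return True  # -force overrides both checks
    return not (check.volumes_would_go_offline or check.unflushed_cache_only_here)

# A node with no exposed cache data can be removed without -force:
print(can_remove_node(RemovalCheck(False, False)))        # True
# Removal that would strand unwritten cache data needs -force:
print(can_remove_node(RemovalCheck(False, True)))         # False
print(can_remove_node(RemovalCheck(False, True), True))   # True
```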
Prerequisites:
Before you issue the rmnode command, perform the following tasks and read the following Attention notices to avoid losing access to data:
- Removing the last node in the cluster destroys the clustered system. Before you delete the last node in the clustered system, ensure that you want to destroy the clustered system.
- If you are removing a single node and the remaining node in the I/O group is online, the data can be exposed to a single point of failure if the remaining node fails.
- This command might take some time to complete because the cache in the I/O group for that node is flushed before the node is removed. If the -force parameter is used, the cache is not flushed and the command completes more quickly. However, if the deleted node is the last node in the I/O group, using the -force option results in the write cache for that node being discarded rather than flushed, and data loss can occur. Use the -force option with caution.
- If both nodes in the I/O group are online and the volumes are already degraded before you delete the node, volume redundancy is already reduced, and loss of access to data or loss of data might occur if the -force option is used.
- If you are removing the configuration node, the rmnode command causes the configuration node to move to a different node within the clustered system. This process might take a short time: typically less than a minute. The clustered system IP address remains unchanged, but any SSH client that is attached to the configuration node might need to reestablish a connection. The management GUI reattaches to the new configuration node transparently.
- If this node is the last node in the clustered system or if it is assigned as the configuration node, all connections to the system are lost. The user interface and any open CLI sessions are lost if the last node in the clustered system is deleted. A timeout might occur if a command cannot be completed before the node is deleted.
- If the rmnode command is called against an inactive spare, then the spare is deactivated and deleted from the clustered system. The node is transitioned to a "candidate" state and the -force parameter is not required.
- If the rmnode command is called against an active spare, then the spare is deactivated and deleted from the clustered system. However, the -force parameter is required if checks indicate that a volume will go offline or might be corrupted if this action takes place. The node is transitioned to a "candidate" state and is no longer a spare node.
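The spare-node outcomes in the last two items can be summarized as a small state sketch. The state strings and function name below are illustrative assumptions, not values reported by the system.

```python
# Sketch of the spare-removal outcomes described above.
# State names and the function signature are illustrative assumptions.
def rmnode_spare_result(spare_state: str, checks_pass: bool, force: bool) -> str:
    """Return the removed spare node's resulting state.

    spare_state: "inactive_spare" or "active_spare" (illustrative labels)
    checks_pass: False if a volume would go offline or might be corrupted
    Raises RuntimeError when -force would be required but was not given.
    """
    if spare_state == "inactive_spare":
        # An inactive spare is deactivated and deleted; -force is not required.
        return "candidate"
    if spare_state == "active_spare":
        if not checks_pass and not force:
            raise RuntimeError("volume offline/corruption risk: -force required")
        return "candidate"
    raise ValueError("not a spare node")

print(rmnode_spare_result("inactive_spare", True, False))  # candidate
```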
An invocation example
rmnode 1
The resulting output:
No feedback
An invocation example
- Use the lsnode command to list all the nodes in the system. In the following example of lsnode output, the node "node2" with ID 2 is in service state and protected by an online spare "spare1".
lsnode
The resulting output:
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number site_id site_name
1 node1 500507680C000128 online 0 io_grp0 yes SV1 iqn.1986-03.com.ibm:2145.mcr-cay-cluster-23.node1 G71H00P 1
2 node2 500507680C000130 service 0 io_grp0 no SV1 iqn.1986-03.com.ibm:2145.mcr-cay-cluster-23.node2 G71H00M 1
3 spare1 500507680C000138 online_spare 1 io_grp1 no SV1 iqn.1986-03.com.ibm:2145.mcr-cay-cluster-23.node3 G71H00X 1
- Issue the rmnode -deactivatespare command with the ID or name of the offline or service node that needs to be deleted. The following example deletes the service node "node2" with ID 2 that is protected by an online spare "spare1".
rmnode -deactivatespare 2
The resulting output:
No feedback
- Use the lsnode command to verify that the node is deleted from the system. In the following example of lsnode output, the node "node2" is deleted from the system and the online spare "spare1" returns to spare state.
lsnode
The resulting output:
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number site_id site_name
1 node1 500507680C000128 online 0 io_grp0 yes SV1 iqn.1986-03.com.ibm:2145.mcr-cay-cluster-23.node1 G71H00P 1
3 spare1 500507680C000138 spare 1 io_grp1 no SV1 iqn.1986-03.com.ibm:2145.mcr-cay-cluster-23.node3 G71H00X 1
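When scripting node removals, it can help to parse lsnode-style output to confirm node states before and after the command. The sketch below assumes a simplified, whitespace-aligned subset of the columns (real lsnode output has many more fields, some of which may be empty); the function name and sample data are illustrative only.

```python
# Sketch: map node names to their status from simplified lsnode-style output.
# Assumes whitespace-separated columns with no empty fields (an assumption;
# real lsnode output includes optional columns that may be blank).
def node_states(lsnode_output: str) -> dict:
    """Return a dict of node name -> status."""
    lines = lsnode_output.strip().splitlines()
    header = lines[0].split()
    name_i, status_i = header.index("name"), header.index("status")
    states = {}
    for line in lines[1:]:
        fields = line.split()
        states[fields[name_i]] = fields[status_i]
    return states

# Simplified sample mirroring the example above (columns reduced for clarity):
sample = """\
id name   status
1  node1  online
2  node2  service
3  spare1 online_spare
"""
print(node_states(sample))
# {'node1': 'online', 'node2': 'service', 'spare1': 'online_spare'}
```

A wrapper script could call this on captured lsnode output and refuse to run rmnode unless the target node is in the expected state.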
