Use this procedure to restore the system configuration only if the recover system procedure fails or if the data that is stored on the volumes is not required. This procedure is also known as Tier 4 (T4) recovery.
Before you begin
This configuration restore procedure is designed to restore information about your
configuration, such as volumes, local Metro Mirror information, local Global Mirror information,
storage pools, and nodes. The data that you wrote to the volumes is not restored. To restore the
data on the volumes, you must restore application data from any application that uses the volumes on
the clustered system as storage separately. Therefore, you must have a backup of this data before
you follow the configuration recovery process.
If the system uses encryption and uses USB flash drives to manage encryption keys, at least three USB flash drives must be installed in the node USB ports for the configuration restore to work. The three USB flash drives must be inserted into the node or enclosure from which the configuration restore commands are run. Any USB flash drives in other nodes or enclosures (that might become part of the system) are ignored. On systems with fewer than three USB ports, encryption must be enabled manually later in the recovery. On these systems, follow the instructions that are displayed on screen to enable encryption manually during step 14, when the configuration restore is prepared. If you are not recovering an encrypted transparent cloud tiering configuration, the USB flash drives do not need to contain any keys; they are used to generate new keys as part of the restore process. If you are recovering an encrypted transparent cloud tiering configuration, the USB flash drives must contain the previous set of keys so that the current encrypted data can be unlocked and re-encrypted with the new keys.
During recovery, a new system is created with a new certificate. If the system uses key servers to manage encryption keys, the new system certificate must be exported by using the chsystemcert -export command and then installed on all key servers before the configuration restore operation can prepare successfully. It might also be necessary to get the new system's certificate signed if the previous system was using a signed certificate.
Important: Prior to running a T4 procedure, contact
IBM® support for assistance.
About this task
You must regularly back up your configuration data and your application data to avoid data loss. If a system is lost after a severe failure occurs, both the configuration for the system and the application data are lost. You must restore the system to the exact state it was in before the failure, and then recover the application data.
During the restore process, the nodes and the storage enclosure are restored to the system,
and then the MDisks and the array are re-created and configured. If multiple storage enclosures
are involved, the arrays and MDisks are restored on the proper enclosures based on the enclosure
IDs.
If you do not understand the instructions to run the CLI commands, see the command-line
interface reference information.
To restore your configuration data, follow these steps:
Procedure
-
Verify that all nodes are available as candidate nodes before you run this recovery procedure.
You must remove errors 550 or 578 to put the node in candidate state.
-
Create a system. If possible, use the node that was originally in
I/O group
0.
- For SAN Volume Controller 2145-DH8
and
SAN Volume Controller 2145-SV1
systems, use the
technician port.
-
In a supported browser, enter the IP address that you used to initialize the system and the
default superuser password (passw0rd).
-
Issue the following CLI command to ensure that only the configuration node is online:
svcinfo lsnode
The following output is an example of what is displayed.
id name status IO_group_id IO_group_name config_node
1 node1 online 0 io_grp0 yes
-
Using the command-line interface, issue the following command to log on to the
system:
plink -i ssh_private_key_file superuser@cluster_ip
Where ssh_private_key_file is the name of the SSH private key file for
the superuser and cluster_ip is the IP address or DNS name of the system
for which you want to restore the configuration.
Note: Because the RSA host key changed, a warning message might display when you connect to the
system by using SSH.
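For example, a hypothetical invocation, assuming that the private key file is named superuser_ssh_key.ppk and that the system IP address is 192.168.70.121 (substitute your own values), might look like this:
plink -i superuser_ssh_key.ppk superuser@192.168.70.121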
-
Identify the configuration backup file that you want to restore.
The file can be either a local copy of the configuration backup XML file that you saved when you backed up the configuration or an up-to-date file on one of the nodes.
Configuration data is automatically backed up daily at 01:00 system time on the configuration
node.
Download and check the configuration backup files on all nodes that were previously in the system
to identify the one containing the most recent complete backup.
-
From the management GUI, click .
-
Expand Manual Upload Instructions and select Download Support
Package.
-
On the Download New Support Package or Log File page, select
Download Existing Package.
-
For each node (canister) in the system, complete the following steps:
- Select the node to operate on from the selection box at the top of the table.
- Find all the files with names that match the pattern
svc.config.*.xml*.
- Select the files and click Download to download them to your
computer.
-
If a recent configuration file is not present on this node, configure service IP addresses for other nodes and connect to the service assistant to look for configuration files on other nodes (a hypothetical example follows this step). For more information, see the Service IPv4 or Service IPv6 options topic in
the Related reference section at the end of the page.
The XML files contain a date and time that can be used to identify the most recent backup.
After you identify the backup XML file that is to be used when you restore the system, rename the
file to svc.config.backup.xml.
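If you need to assign a service IP address to another node so that you can reach its service assistant, you can use the satask chserviceip command. The following is a hypothetical example only; the IP address, subnet mask, gateway, and panel name are placeholder values that you must replace with values appropriate for your environment:
satask chserviceip -serviceip 192.168.70.122 -mask 255.255.255.0 -gw 192.168.70.1 78ABCDE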
-
Copy onto the system the XML backup file from which you want to restore.
pscp full_path_to_identified_svc.config.file
superuser@cluster_ip:/tmp/svc.config.backup.xml
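For example, assuming that the backup file was saved as C:\backup\svc.config.backup.xml on a Windows workstation and that the system IP address is 192.168.70.121 (both hypothetical values), the command might look like this:
pscp C:\backup\svc.config.backup.xml superuser@192.168.70.121:/tmp/svc.config.backup.xml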
-
If the system contains any nodes with a 10 Gb interface adapter or a second Fibre Channel interface adapter installed, and non-default localfcportmask and partnerfcportmask settings were previously configured, then manually reconfigure these settings before you restore your data.
-
If the system uses a stretched or HyperSwap® topology with
nodes that are at two sites, or if the system contains any nodes with internal flash drives (including nodes that are
connected to expansion enclosures), these nodes must be added to the system now.
To add these nodes, determine the panel name, node name, and I/O groups of any such nodes
from the configuration backup file. To add the nodes to the system, run the following
command:
svctask addnode -panelname panel_name -iogrp iogrp_name_or_id -name node_name
Where
panel_name is the name that is displayed on the panel,
iogrp_name_or_id is the name or ID of the I/O group to which you want to
add this node, and
node_name is the name of the node.
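For example, if the configuration backup file shows a node with panel name 78ABCDE (a hypothetical value) that belonged to I/O group io_grp0 and was named node2, the command might look like this:
svctask addnode -panelname 78ABCDE -iogrp io_grp0 -name node2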
-
If the system contains any iSCSI storage controllers, these controllers must be detected manually now. The nodes that are connected to these controllers, the iSCSI port IP addresses, and the iSCSI storage ports must be added to the system before you restore your data. A worked example that illustrates this sequence with hypothetical values follows the last substep of this step.
Note: If the system contains only Fibre Channel
storage controllers, proceed to the next step.
Note: For a stretched or
HyperSwap
topology, after you run the
addnode command, change the sites of all of the nodes
added in the system. For example,
chnode -site site_id node_id/node_name
-
To add these nodes, determine the panel name, node name, and I/O groups of any such nodes from
the configuration backup file. To add the nodes to the system, run the following command:
svctask addnode -panelname panel_name -iogrp iogrp_name_or_id -name node_name
Where panel_name is the name that is displayed
on the panel, iogrp_name_or_id is the name or ID of the I/O group to which you
want to add this node, and node_name is the name of the node.
-
To restore iSCSI port IP addresses, use the cfgportip command.
- To restore an IPv4 address, determine id (port_id), node_id, node_name, IP_address, mask, gateway, host (0/1 stands for no/yes), remote_copy (0/1 stands for no/yes), and storage (0/1 stands for no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip ipv4_address -mask subnet_mask -gw ipv4_gw
-host yes | no -remotecopy remote_copy_port_group_id -storage yes | no -hpgid
host_port_grp_id port_id
Where node_name_or_id is the name or ID of the node,
ipv4_address is the IPv4 address of the port,
subnet_mask is the subnet mask for the port, and
ipv4_gw is the IPv4 gateway address for the port.
- To restore an IPv6 address, determine id (port_id), node_id, node_name, IP_address_6, mask, gateway_6, prefix_6, host_6 (0/1 stands for no/yes), remote_copy_6 (0/1 stands for no/yes), and storage_6 (0/1 stands for no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip_6 ipv6_address -gw_6 ipv6_gw
-prefix_6 prefix -host_6 yes | no -remotecopy_6 remote_copy_port_group_id -storage_6 yes | no
-hpgid host_port_grp_id port_id
Where node_name_or_id is the name or ID of the node,
ipv6_address is the IPv6 address of the port,
ipv6_gw is the IPv6 gateway address for the port, and prefix
is the IPv6 prefix.
Note: The parameter -hpgid is used exclusively for a
manual T4 recovery. Do not use this parameter in other scenarios.
Complete steps b.i and b.ii for all (earlier configured) IP ports in the
node_ethernet_portip_ip sections from the backup configuration file.
-
Next, detect and add the iSCSI storage port candidates by using the
detectiscsistorageportcandidate and addiscsistorageport
commands. Make sure that you detect the iSCSI storage ports and add these ports in the same order as
you see them in the configuration backup file. If you do not follow the correct order, it might
result in a T4 failure. Step c.i must be followed by steps c.ii and c.iii. You must repeat these
steps for all the iSCSI sessions that are listed in the backup configuration file exactly in the
same order.
- To detect iSCSI storage ports, determine src_port_id,
IO_group_id (optional, not required if the value is 255),
target_ipv4/target_ipv6 (the target IP that is not blank is required),
iscsi_user_name (not required if blank), iscsi_chap_secret
(not required if blank), and site (not required if blank) from the configuration
backup file, and then run the following command:
svctask detectiscsistorageportcandidate -srcportid src_port_id -iogrp IO_group_id
-targetip/targetip6 target_ipv4/target_ipv6 -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name
Where src_port_id is the source Ethernet port ID of the configured port,
IO_group_id is the I/O group ID or name being detected,
target_ipv4/target_ipv6 is the IPv4/IPv6 target iSCSI controller IPv4/IPv6
address, iscsi_user_name is the target controller user name being detected,
iscsi_chap_secret is the target controller chap secret being detected, and
site_id_or_name is the specified id or name of the site being detected.
- Match the discovered target_iscsiname with the
target_iscsiname for this particular session in the backup configuration file by
running the lsiscsistorageportcandidate command, and use the matching index to
add iSCSI storage ports in step c.iii.
Run the svcinfo
lsiscsistorageportcandidate command and determine the id field of the row whose
target_iscsiname matches with the target_iscsiname from the
configuration backup file. This is your candidate_id to be used in step
c.iii.
- To add the iSCSI storage port, determine IO_group_id (optional, not required
if the value is 255), site (not required if blank),
iscsi_user_name (not required if blank in the backup file), and
iscsi_chap_secret (not required if blank) from the configuration backup file,
provide the target_iscsiname_index matched in step c.ii, and then run the
following command:
addiscsistorageport -iogrp iogrp_id -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name candidate_id
Where iogrp_id is the I/O group ID or name that is added,
iscsi_user_name is the target controller user name that is being added,
iscsi_chap_secret is the target controller CHAP secret that is being added, and
site_id_or_name is the ID or name of the site that is added.
- If the configuration is a HyperSwap or stretched system, the controller name and site need to be restored. To restore the controller
name and site, determine controller_name and the controller
site_id/name from the backup xml file by matching the inter_WWPN field with the
newly added iSCSI controller, and then run the following command:
chcontroller -name controller_name -site site_id/name controller_id/name
Where
controller_name is the name of the controller from the backup xml file,
site_id/name is the ID or name of the site of the iSCSI controller from the backup
xml file, and controller_id/name is the ID or current name of the
controller.
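The following sequence is a hypothetical, end-to-end illustration of the preceding substeps. All names, IP addresses, IDs, port numbers, and the candidate index are placeholder values; take the real values from your configuration backup file and from the lsiscsistorageportcandidate output.
svctask cfgportip -node node2 -ip 192.168.80.10 -mask 255.255.255.0 -gw 192.168.80.1 -storage yes 3
svctask detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.80.50
svcinfo lsiscsistorageportcandidate
After you confirm that the row with id 0 has a target_iscsiname that matches the backup configuration file, add the port and then restore the controller name and site:
svctask addiscsistorageport -iogrp io_grp0 0
svctask chcontroller -name controller0 -site site1 1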
-
If the system is using Lightweight Directory Access Protocol (LDAP) as the remote
authentication service with an administrator password configured, the password must be restored
manually before you restore your data. The following example shows the command to configure the
LDAP administrator user name and password:
svctask chldap -username ldap_username -password 'administrator_password'
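For example, assuming a hypothetical administrator distinguished name and password (replace both with the values that are configured for your LDAP server):
svctask chldap -username 'cn=admin,dc=example,dc=com' -password 'Sup3rS3cret!'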
-
Issue the following CLI command to compare the current configuration with the backup
configuration data file:
svcconfig restore -prepare
This
CLI command creates a log file in the
/tmp directory of the
configuration node. The name of the log file is
svc.config.restore.prepare.log.
Note: It
can take up to a minute for each 256-MDisk batch to be discovered. If you receive error
message CMMVC6200W for an MDisk after you enter this command, all
the managed disks (MDisks) might not be discovered yet. Allow a suitable time to elapse and
try the svcconfig restore -prepare command again.
-
If the system has key server encryption, the new certificate must be exported by using the
chsystemcert
-export command, and then installed on all key servers in the correct
device group before you run the T4 recovery. The device group that is used is the one in which
the previous system was defined. It might also be necessary to get the new system's certificate
signed.
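For example, the following hypothetical sequence exports the certificate and copies it to a workstation so that it can be installed on the key servers. The IP address and destination path are placeholder values, and the exported file is typically written to the /dumps directory on the configuration node; verify the exact file name in the command-line interface reference.
svctask chsystemcert -export
pscp superuser@192.168.70.121:/dumps/certificate.pem full_path_for_where_to_copy_certificate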
-
Issue the following command to copy the log file to another server that is accessible to
the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.prepare.log
full_path_for_where_to_copy_log_files
-
Open the log file from the server where the copy is now stored.
-
Check the log file for errors.
- If you find errors, correct the condition that caused the errors and reissue the
command. You must correct all errors before you can proceed to step 17.
- If you need assistance, contact the support center.
-
Issue the following CLI command to restore the configuration:
svcconfig restore -execute
Note: Any nodes that you did not add manually to the system are
added automatically as part of the restore process.
This CLI command creates a log file in the /tmp directory of the
configuration node. The name of the log file is
svc.config.restore.execute.log.
-
Issue the following command to copy the log file to another server that is accessible to
the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.execute.log
full_path_for_where_to_copy_log_files
-
Open the log file from the server where the copy is now stored.
-
Check the log file to ensure that no errors or warnings occurred.
Note: You might receive a warning that states that a licensed feature is not enabled. This
message means that after the recovery process, the current license settings do not match the
previous license settings. The recovery process continues normally and you can enter the
correct license settings in the management GUI later.
When you log
in to the CLI again over SSH, you see this output:
IBM_2145:your_cluster_name:superuser>
What to do next
You can remove any unwanted configuration backup and restore files from the
/tmp directory on your configuration node by issuing the following CLI
command:
svcconfig clear -all