External storage system configuration details (Fibre Channel)

Plan for your external storage system configurations that connect to a node through Fibre Channel.

See the following website for the latest support information:

www.ibm.com/support

All nodes in a system must be able to connect to the same set of storage-system ports on each device. A system that contains any two nodes that cannot connect to the same set of storage-system ports is considered degraded. In this situation, a system error is logged that requires a repair action. This rule can have important effects on a storage system that has exclusion rules that determine which host bus adapter (HBA) worldwide node names (WWNNs) a storage partition can be mapped to.
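The same-port-set rule can be sketched as a simple check. This is an illustrative example only, not product code; the node names and WWPNs below are hypothetical.

```python
# Hedged sketch of the rule above: every node must see the same set of
# storage-system ports, otherwise the system is considered degraded.
# All names and WWPNs here are hypothetical examples.

def system_is_degraded(node_port_visibility):
    """Return True if any two nodes see different sets of storage-system ports.

    node_port_visibility maps a node name to the set of storage-system
    port WWPNs that the node can connect to.
    """
    port_sets = list(node_port_visibility.values())
    # Any mismatch between two nodes' port sets means the system is degraded.
    return any(ports != port_sets[0] for ports in port_sets[1:])

# node2 is missing one storage-system port, so the system would log an
# error that requires a repair action.
visibility = {
    "node1": {"50:05:07:68:01:10:aa:01", "50:05:07:68:01:10:aa:02"},
    "node2": {"50:05:07:68:01:10:aa:01"},
}
print(system_is_degraded(visibility))  # True -> degraded
```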

A storage-system logical unit (LU) must not be shared between the system and a host.

You can configure certain storage systems to safely share resources between the node and direct-attached hosts. This type of configuration is described as a split storage system. In all cases, it is critical that you configure the storage system and SAN so that the system cannot access logical units (LUs) that a host or another system can also access. This split storage system configuration can be arranged by storage system logical unit number (LUN) mapping and masking. If the split storage system configuration is not guaranteed, data corruption can occur.

Configurations in which a storage system is split between two nodes are also supported. Again, it is critical that you configure the storage system and SAN so that a node cannot access LUs that a host or another node can also access. You can use storage-system LUN mapping and masking to enforce this separation. If this separation is not guaranteed, data corruption can occur.

Attention: Avoid configuring a storage system to present the same LU to more than one system. This configuration is not supported and is likely to cause undetected data loss or corruption.

Unsupported storage systems

When a storage system is detected on the SAN, the system attempts to recognize it using its Inquiry data. If the device is not supported, the system configures the device as a generic device. A generic device might not function correctly when it is addressed by a node, especially under failure scenarios. However, the system does not regard accessing a generic device as an error condition and does not log an error. Managed disks (MDisks) that are presented by generic devices are not eligible to be used as quorum disks.
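The recognition behavior described above can be sketched as follows. This is a conceptual illustration only; the Inquiry vendor and product strings, and the supported-device list, are hypothetical and do not reflect the product's actual internal tables.

```python
# Illustrative sketch of the device-recognition logic described above.
# The SUPPORTED set and the Inquiry strings are hypothetical examples.

SUPPORTED = {("IBM", "PRODUCT-A"), ("IBM", "PRODUCT-B")}

def classify_device(vendor_id, product_id):
    """Classify a detected storage system from its Inquiry data.

    Inquiry fields are fixed-width and space-padded, so strip them
    before comparing against the supported-device list.
    """
    if (vendor_id.strip(), product_id.strip()) in SUPPORTED:
        return {"type": "recognized", "quorum_eligible": True}
    # Unrecognized devices are configured as generic devices: no error
    # is logged, but their MDisks cannot be used as quorum disks.
    return {"type": "generic", "quorum_eligible": False}

print(classify_device("IBM     ", "PRODUCT-A"))
print(classify_device("ACME    ", "JBOD-1"))
```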

Split storage-system configuration details

The system is configured to manage LUs that are exported only by RAID storage systems. Non-RAID storage systems are not supported. If you use the node to manage flash drives or other JBOD (just a bunch of disks) LUs that are presented by non-RAID storage systems, the system itself does not provide RAID functions, and these LUs are exposed to data loss when a disk failure occurs.

If a single RAID storage system presents multiple LUs, either by having multiple RAID arrays configured or by partitioning one or more RAID arrays into multiple LUs, each LU can be owned by either the system or a direct-attached host. LUN masking must also be configured to ensure that LUs are not shared between the nodes and direct-attached hosts.
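The LUN-masking requirement can be sketched as a validation over a masking plan: no LU may be visible to both the system's ports and a direct-attached host's ports. This is an illustrative example only; the WWPNs and LU names are hypothetical, and real masking is configured on the storage system itself.

```python
# Hedged sketch: validate a LUN-masking plan so that no LU is presented
# both to the system (as an MDisk) and to a direct-attached host.
# WWPNs and LU names below are hypothetical.

def find_shared_lus(lu_mappings, system_wwpns):
    """Return LUs that are masked to both system ports and host ports.

    lu_mappings maps an LU name to the set of initiator WWPNs it is
    masked to. Any LU visible to both groups risks data corruption.
    """
    shared = []
    for lu, initiators in lu_mappings.items():
        mapped_to_system = initiators & system_wwpns
        mapped_to_hosts = initiators - system_wwpns
        if mapped_to_system and mapped_to_hosts:
            shared.append(lu)
    return shared

system_ports = {"50:05:07:68:01:40:aa:01", "50:05:07:68:01:40:aa:02"}
mappings = {
    "lu0": {"50:05:07:68:01:40:aa:01"},                             # system only: OK
    "lu1": {"10:00:00:00:c9:12:34:56"},                             # host only: OK
    "lu2": {"50:05:07:68:01:40:aa:02", "10:00:00:00:c9:12:34:56"},  # shared: unsafe
}
print(find_shared_lus(mappings, system_ports))  # ['lu2']
```

An empty result means the plan keeps system-owned and host-owned LUs strictly separate, which is the condition the preceding paragraphs require.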

In a split storage-system configuration, a storage system presents some of its LUs to a node (which treats each LU as an MDisk) and the remaining LUs to another host. The node presents volumes that are created from the MDisks to another host. The two hosts are not required to use the same multipathing driver. In Figure 1, for example, the RAID storage system could be an IBM® DS4000®, with RDAC used for multipathing on the directly attached host and SDD used on the host that is attached through the node. Hosts can simultaneously access LUs that are provided by the system and directly by the device.

Note: A connection from a host can be either a Fibre Channel or an iSCSI connection.
Figure 1. Storage system shared between a node and a host
It is also possible to split a host so that it accesses some of its LUNs through the system and some directly. In this case, the multipathing software that is used by the storage system must be compatible with the node's multipathing software. Figure 2 shows a supported configuration because the same multipathing driver is used for both the directly accessed LUNs and the volumes.
Figure 2. IBM DS8000® LUs accessed directly with a node
When the RAID storage system uses multipathing software that is compatible with the node's multipathing software (see Figure 3), it is possible to configure a system where some LUNs are mapped directly to the host and others are accessed through the system. An IBM TotalStorage™ Enterprise Storage Server® (ESS) that uses the same multipathing driver as the node is one example. Another example with IBM DS5000 is shown in Figure 3.
Figure 3. IBM DS5000 direct connection with a node on one host