Stretched system configuration by using interswitch links

You can use interswitch links (ISLs) in paths between nodes of the same I/O group. If the cable distance between the two production sites exceeds 100 km, performance might be degraded.

Configuration rules for stretched systems that use ISLs

In a stretched system configuration, a site is defined as an independent failure domain. Different types of sites protect against different types of faults. For example, if it is configured properly, the system continues to operate after the loss of one failure domain.

However, the system does not guarantee that it can survive the failure of two sites.

  • For every storage system, create one zone that contains ports from every node and all storage system ports, unless otherwise stated by the zoning guidelines for that storage system. However, do not connect a storage system in one site directly to a switch fabric in the other site. Instead, connect each storage system only to switched fabrics in the local site. (In stretched system configurations with ISLs in node-to-node paths, these fabrics belong to the public SAN).

    For stretched systems that use the enhanced configuration functions, storage systems that are configured to one of the main sites (1 or 2) need to be zoned only to the nodes in that site. Storage systems in site 3, or storage systems that have no defined site, must be zoned to all nodes.

  • Each node must have direct Fibre Channel connections to at least two SAN fabrics at its local site: one public fabric and one private fabric. For an example configuration, see Figure 1.
  • Some service actions require steps to be completed on the front panel or through the technician port of all nodes in a system within a short time window. If you use stretched systems, you must assist the support engineer and provide communication technology to coordinate these actions between the sites.
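The site definitions that these rules rely on are assigned through the system CLI. The following is a minimal sketch, assuming an SVC-style command set (`chsystem`, `chnode`, `chcontroller`); the node and controller names are placeholders, and the exact syntax should be verified against the CLI reference for your code level.

```shell
# Enable the stretched topology (sketch; verify against your code level).
chsystem -topology stretched

# Assign each node to its production site; node names are placeholders.
chnode -site 1 node1
chnode -site 1 node2
chnode -site 2 node3
chnode -site 2 node4

# Assign each storage system (controller) to a site so that the
# site-based zoning rules apply; controller names are placeholders.
chcontroller -site 1 controller0
chcontroller -site 2 controller1
chcontroller -site 3 controller_quorum
```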

As with every managed disk, all nodes must access the quorum disk through the same storage system ports. If a storage system with active/passive controllers (such as IBM® DS3000, IBM DS4000®, IBM DS5000, or IBM FAStT) is attached to a fabric, the storage system must be connected with both internal controllers to this fabric. This connection is illustrated in Figure 1.
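The one-zone-per-storage-system rule, including both internal controllers of an active/passive storage system on the same fabric, can be sketched with standard Brocade FOS zoning commands. The alias and zone names are invented for illustration and the WWPNs are placeholders; adapt them to your fabric.

```shell
# Aliases for one local node port and both internal controllers of the
# storage system (WWPNs are placeholders).
alicreate "node1_p1", "50:05:07:68:01:10:00:01"
alicreate "storA_ctl_a", "20:04:00:a0:b8:00:00:01"
alicreate "storA_ctl_b", "20:05:00:a0:b8:00:00:01"

# One zone per storage system: node ports plus BOTH internal
# controllers, all on the same local (public) fabric.
zonecreate "z_site1_storA", "node1_p1; storA_ctl_a; storA_ctl_b"

# Add the zone to the fabric configuration and activate it.
cfgadd "cfg_public_site1", "z_site1_storA"
cfgenable "cfg_public_site1"
```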

You can extend the distance to the quorum site by using FCIP, passive WDM, or active WDM. The connections must be reliable. The links from both production sites to the quorum site must be independent and must not share any long-distance equipment. FCIP links are also supported for ISLs between the two production sites in public and private SANs. A private SAN and a public SAN can be routed across the same FCIP link. However, to ensure bandwidth to the private SAN (see also Additional bandwidth requirements), it is typically necessary to configure dedicated FCIP tunnels. Similarly, it is permissible to multiplex multiple ISL links across a DWDM link.

Note: It is not required to protect FCIP routers or active WDM devices with an uninterruptible power supply (UPS) if they are used only for node-to-quorum communication.

A stretched system configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although the system can use other types of storage systems to provide quorum disks, access to these quorum disks is always through a single path.
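Quorum disk placement can be checked and, if needed, changed from the CLI. The following is a hedged sketch using the `lsquorum` and `chquorum` commands; the MDisk ID and quorum index are examples, and the technote referenced below is the authoritative guidance.

```shell
# List the current quorum disk candidates and the active quorum disk.
lsquorum

# Move quorum index 0 onto MDisk 4, hosted by the extended-quorum-capable
# storage system at the third site (IDs are examples).
chquorum -mdisk 4 0
```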

For quorum disk configuration requirements, see the Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates technote at the following website:

 Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates