Long distance links for Metro Mirror and Global Mirror partnerships

The links between clustered system pairs that perform remote mirroring must meet specific configuration, latency, and distance requirements.

Figure 1 shows an example of a configuration that uses dual redundant fabrics that can be configured for Fibre Channel connections. Part of each fabric is located at the local system, and part is located at the remote system. There is no direct connection between the two fabrics.

You can use Fibre Channel extenders or SAN routers to increase the distance between two systems. Fibre Channel extenders transmit Fibre Channel packets across long links without changing the contents of the packets. SAN routers provide virtual N_ports on two or more SANs to extend the scope of the SAN. The SAN router distributes the traffic from one virtual N_port to the other virtual N_port. The two Fibre Channel fabrics are independent of each other. Therefore, N_ports on each of the fabrics cannot directly log in to each other. See the following website for specific firmware levels and the latest supported hardware:

www.ibm.com/support

If you use Fibre Channel extenders or SAN routers, you must meet the following requirements:

  • The maximum round-trip latency that is supported between sites depends on the type of partnership between the systems, the version of software, and the system hardware that is used. This restriction applies to all variants of remote mirroring.

    The following table lists the maximum round-trip latency for each type of partnership.

  • Metro Mirror and Global Mirror require a specific amount of bandwidth for intersystem heartbeat traffic. When using a Fibre Channel partnership, the amount of traffic depends on the number of nodes that are in both the local system and the remote system. Table 2 provides a guideline for the intersystem heartbeat traffic between the primary system and the secondary system. These numbers represent the total traffic between two systems when no I/O operations run on the copied volumes. Half of the data is sent by the primary system and half of the data is sent by the secondary system. This traffic is divided evenly between all of the available intersystem links. If you have two redundant links, half of the traffic is sent over each link.
  • In a Metro Mirror or non-cycling Global Mirror relationship, the bandwidth between two sites must meet the peak workload requirements while maintaining the maximum round-trip latency between the sites. When you evaluate the workload requirement in a multiple-cycling Global Mirror relationship, you must consider the average write workload and the required synchronization copy bandwidth. If there are no active synchronization copies and no write I/O operations for volumes that are in the Metro Mirror or Global Mirror relationship, the system protocols operate with the bandwidth that is indicated in Table 2. However, you can determine the actual amount of bandwidth that is required for the link only by considering the peak write bandwidth to volumes that participate in Metro Mirror or Global Mirror relationships and adding it to the peak synchronization bandwidth.
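As a rough sizing aid, the components described above can be combined in a short script. The figures below are hypothetical placeholders, not measured values; the heartbeat allowance would come from Table 2 for your node counts:

```shell
#!/bin/sh
# Hypothetical sizing figures in MB/s; replace with measured values
# and the heartbeat figure from Table 2 for your node counts.
peak_write=400   # peak write bandwidth to mirrored volumes
peak_sync=100    # peak synchronization (background copy) bandwidth
heartbeat=2      # intersystem heartbeat allowance

# The link must carry peak writes plus synchronization plus heartbeat.
total=$((peak_write + peak_sync + heartbeat))
echo "required intersystem bandwidth: ${total} MB/s"

# With redundant links, each link normally carries half the traffic,
# but size each link for the full load so that a single link failure
# does not violate the bandwidth requirement.
echo "size each redundant link for: ${total} MB/s"
```

Note that each redundant link is sized for the full total, which reflects the single-failure sizing requirement described in this list.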
  • If the link between two sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements are correct during single failure conditions.
  • The channels that are used for intersystem links must not also be used for links between nodes within a single system. Configurations that use long-distance links in a single system are supported as stretched systems, but stretched systems require dedicated channels for intrasystem node-to-node traffic.
  • The configuration must be tested to confirm that any failover mechanisms in the intersystem links interoperate satisfactorily with the systems.
  • All other configuration requirements must be met.

Configuration requirements for systems that perform remote mirroring over extended distances (greater than 80-ms round-trip latency between sites)

If you use remote mirroring between systems with a round-trip latency of 80 - 250 ms, you must meet the following additional requirements:

  • All nodes that are used for replication must be of a supported model.
  • A Fibre Channel partnership must exist between the systems, not an IP partnership.
  • All systems in the partnership must have a minimum software level of 7.4.0.
  • The RC buffer size setting must be 512 MB on each system in the partnership. Set this value by running the chsystem -rcbuffersize 512 command on each system.
    Note: Changing this setting is disruptive to Metro Mirror and Global Mirror operations. Use this command only before partnerships are created between systems or when all partnerships with the system are stopped.
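Given that restriction, a possible command sequence is sketched below. The partnership name remote_sys is illustrative; chpartnership and chsystem are the commands named above, but verify the exact procedure for your software level:

```shell
# Stop the partnership with the remote system first; changing the
# buffer size is disruptive while replication is active.
# "remote_sys" is an illustrative partnership name.
chpartnership -stop remote_sys

# Set the remote copy buffer size to 512 MB (run on each system
# in the partnership).
chsystem -rcbuffersize 512

# Restart the partnership after both systems are updated.
chpartnership -start remote_sys
```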
  • Two Fibre Channel ports on each node that is used for replication must be dedicated for replication traffic, by using SAN zoning and port masking.
  • SAN zoning should be applied to provide separate intersystem zones for each local-remote I/O group pair that is used for replication. Figure 2 illustrates this type of configuration.
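As an illustration of dedicating ports through port masking, the following sketch restricts intersystem traffic to two ports per node with a partner port mask. The port choice and the 64-bit binary mask value are examples only; confirm the masking syntax for your software level:

```shell
# Restrict intersystem (replication) traffic to FC ports 3 and 4 of
# each node. The mask is read right to left: the rightmost bit is
# port 1, so the example value sets only bits 3 and 4.
# Combine this with SAN zoning so that only these ports are zoned
# to the remote system.
chsystem -partnerfcportmask 0000000000000000000000000000000000000000000000000000000000001100
```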

In addition to the preceding list of requirements, the following guidelines are provided for optimizing performance for remote mirroring by using Global Mirror:

  • Partnered systems should use the same number of nodes in each system for replication.
  • For maximum throughput, all nodes in each system should be used for replication, both in terms of balancing the preferred node assignment for volumes and for providing intersystem Fibre Channel connectivity.
  • On the system, provisioning dedicated node ports for local node-to-node traffic (by using port masking) isolates Global Mirror node-to-node traffic between the local nodes from other local SAN traffic. As a result, optimal response times can be achieved. This configuration of local node port masking is less of a requirement on Storwize® family systems, where traffic between node canisters in an I/O group is serviced by the dedicated inter-canister link in the enclosure.
  • Where possible, use the minimum number of partnerships between systems. For example, assume site A contains systems A1 and A2, and site B contains systems B1 and B2. In this scenario, creating separate partnerships between pairs of systems (such as A1-B1 and A2-B2) offers greater performance for Global Mirror replication between sites than a configuration with partnerships that are defined between all four systems.
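Using the site A and site B example above, the paired layout could be created with one partnership per system pair. The bandwidth and background copy values are placeholders, and mkfcpartnership is assumed to be the Fibre Channel partnership command at your software level:

```shell
# On system A1: partner only with B1. The link bandwidth (Mbps) and
# the percentage available for background copy are illustrative.
mkfcpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 B1

# On system A2: partner only with B2. Partnerships A1-B2 and A2-B1
# are deliberately not created, keeping the partnership count minimal.
mkfcpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 B2
```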

Limitations on host-to-system distances

There is no limit on the Fibre Channel optical distance between the system nodes and host servers. You can attach a server to an edge switch in a core-edge configuration with the system at the core. The system can support up to three ISL hops in the fabric. Therefore, the host server and the system can be separated by up to five Fibre Channel links. If you use longwave small form-factor pluggable (SFP) transceivers, four of the Fibre Channel links can be up to 10 km long.