The
links between clustered system pairs that perform remote mirroring must meet specific configuration, latency, and distance
requirements.
Figure 1 shows an
example of a configuration that uses dual redundant fabrics that can be configured for Fibre Channel
connections. Part of each fabric is located at the local system and the remote system. There is no
direct connection between the two fabrics.
Figure 1. Redundant fabrics
You can use Fibre Channel extenders or SAN routers to increase the distance
between two systems. Fibre Channel extenders transmit Fibre
Channel packets across long links without changing the contents of the packets. SAN
routers provide virtual N_ports on two or more SANs to extend the scope of the SAN. The SAN router
distributes the traffic from one virtual N_port to the other virtual N_port. The two Fibre
Channel fabrics are independent of each other. Therefore, N_ports on each of the fabrics
cannot directly log in to each other. See the following website for specific firmware levels and the
latest supported hardware:
www.ibm.com/support
If you use Fibre Channel extenders or SAN routers, you must meet the following
requirements:
- The maximum supported round-trip latency between sites depends on the type of
partnership between systems, the version of software, and the system hardware that is used. Table 1 lists the maximum round-trip
latency. This restriction applies to all variants of remote mirroring. More configuration
requirements and guidelines apply to systems that perform remote mirroring over extended distances,
where the round-trip time is greater than 80 ms.
Table 1. Maximum supported round-trip latency between sites

| Software version | System node hardware | FC partnership | 1 Gbps IP partnership | 10 Gbps IP partnership |
| --- | --- | --- | --- | --- |
| 7.3.0 and earlier | All | 80 ms | 80 ms | 10 ms |
| 7.4.0 and later | SAN Volume Controller 2145-CG8, with a second four-port Fibre Channel adapter installed, or SAN Volume Controller 2145-DH8 | 250 ms | | |
| 7.4.0 and later | All other models | 80 ms | | |
- Metro
Mirror and Global Mirror require a
specific amount of bandwidth for intersystem heartbeat traffic. When using a Fibre Channel
partnership, the amount of traffic depends on the number of nodes that are in both the local system
and the remote system. Table 2 provides
a guideline for the intersystem heartbeat traffic between the primary system and the secondary
system. These numbers represent the total traffic between two systems when no I/O operations run on
the copied volumes. Half of the data is sent by the primary system and half of
the data is sent by the secondary system. Therefore, traffic is evenly divided between all of the
available intersystem links. If you have two redundant links, half of the traffic is sent over each
link.
Table 2. Intersystem heartbeat traffic in Mbps

| System 1 \ System 2 | 2 nodes | 4 nodes | 6 nodes | 8 nodes |
| --- | --- | --- | --- | --- |
| 2 nodes | 5 | 6 | 6 | 6 |
| 4 nodes | 6 | 10 | 11 | 12 |
| 6 nodes | 6 | 11 | 16 | 17 |
| 8 nodes | 6 | 12 | 17 | 21 |
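Because half of the heartbeat data is sent by each system and the traffic is divided evenly across the available links, the per-link load can be read directly from Table 2. The following sketch illustrates that calculation; the dictionary simply encodes Table 2, and the function name is illustrative, not a product interface:

```python
# Table 2 values, indexed by (nodes in system 1, nodes in system 2).
HEARTBEAT_MBPS = {
    (2, 2): 5,  (2, 4): 6,  (2, 6): 6,  (2, 8): 6,
    (4, 2): 6,  (4, 4): 10, (4, 6): 11, (4, 8): 12,
    (6, 2): 6,  (6, 4): 11, (6, 6): 16, (6, 8): 17,
    (8, 2): 6,  (8, 4): 12, (8, 6): 17, (8, 8): 21,
}

def heartbeat_per_link(nodes_local: int, nodes_remote: int, links: int) -> float:
    """Return the heartbeat traffic (Mbps) carried by each intersystem link,
    assuming the even split across links that is described in the text."""
    total = HEARTBEAT_MBPS[(nodes_local, nodes_remote)]
    return total / links

# Two 8-node systems with two redundant links: 21 Mbps total, 10.5 Mbps per link.
print(heartbeat_per_link(8, 8, 2))  # → 10.5
```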
- In a Metro
Mirror or noncycling Global Mirror relationship, the bandwidth
between two sites must meet the peak workload requirements and maintain the maximum round-trip
latency between the sites. When you evaluate the workload requirement in a multiple-cycling Global Mirror relationship, you must
consider the average write workload and the required synchronization copy bandwidth. If there are no
active synchronization copies and no write I/O operations for volumes that are in
the Metro
Mirror or Global Mirror relationship, the SAN Volume Controller protocols operate with the bandwidth
that is indicated in Table 2.
However, you can determine the actual amount of bandwidth that is required for the link only by
considering the peak write bandwidth to volumes that are participating in Metro
Mirror or Global Mirror relationships and then
adding the peak synchronization bandwidth to it.
- If the link between two sites is configured with redundancy so that it can tolerate single
failures, the link must be sized so that the bandwidth and latency statements are correct during
single failure conditions.
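The two sizing rules above can be combined in a simple calculation: the link must carry the peak write bandwidth plus the peak synchronization bandwidth, and a redundant link set must still carry that total after a single link failure. The following is a minimal sketch of that arithmetic; the function and parameter names are illustrative only:

```python
def required_link_bandwidth(peak_write_mbps: float,
                            peak_sync_mbps: float,
                            num_links: int) -> float:
    """Per-link bandwidth needed so that the total workload can still be
    carried after a single link failure."""
    total = peak_write_mbps + peak_sync_mbps
    surviving_links = max(num_links - 1, 1)  # tolerate one failed link
    return total / surviving_links

# Example: 400 Mbps peak writes plus 200 Mbps synchronization copy over two
# redundant links. Each link must carry the full 600 Mbps so that the
# requirements still hold if one link fails.
print(required_link_bandwidth(400, 200, 2))  # → 600.0
```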
- The same channel must not be used for links between nodes in a single
system. Configurations that use long-distance links in a single system are supported as stretched
systems, but stretched systems require dedicated channels for intrasystem node-to-node traffic.
- The configuration must be tested to confirm that any failover mechanisms in the intersystem links
interoperate satisfactorily with SAN Volume Controller systems.
- All other configuration requirements must be met.
Configuration requirements for systems that perform remote mirroring over extended distances
(greater than 80 ms round-trip latency between sites)
If you use remote mirroring between systems with 80 - 250 ms round-trip latency, you must meet
the following additional requirements:
In addition to the preceding list of requirements, the following
guidelines are provided for optimizing performance for remote mirroring
by using Global Mirror:
- Partnered systems should use the same number of nodes in each
system for replication.
- For maximum throughput, all nodes in each system should be used
for replication, both in terms of balancing the preferred node assignment
for volumes and for providing intersystem Fibre Channel connectivity.
- On SAN Volume Controller systems,
provisioning dedicated node ports for local node-to-node traffic (by using port masking) isolates
Global Mirror node-to-node traffic between the local nodes from other local SAN traffic. As a
result, optimal response times can be achieved. This configuration of local node port masking is
less of a requirement on Storwize® family systems, where
traffic between node canisters in an I/O group is serviced by the dedicated inter-canister link in
the enclosure.
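A Fibre Channel port mask of this kind is commonly expressed as a binary string in which each bit position enables one port for a given traffic type, with the rightmost bit representing port 1. The following sketch builds such a mask for a set of ports dedicated to local node-to-node traffic; the helper name and the 64-bit width are assumptions for illustration, not a documented interface:

```python
def local_port_mask(dedicated_ports: set[int], width: int = 64) -> str:
    """Return a binary mask string with a 1 for each dedicated port number.

    Port numbering starts at 1; the rightmost character represents port 1.
    """
    bits = ["1" if port in dedicated_ports else "0"
            for port in range(width, 0, -1)]
    return "".join(bits)

# Dedicate ports 3 and 4 to local node-to-node traffic.
mask = local_port_mask({3, 4})
print(mask[-4:])  # → 1100
```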
- Where possible, use the minimum number of partnerships between systems. For example, assume site
A contains systems A1 and A2, and site B contains systems B1 and B2. In this scenario, creating
separate partnerships between pairs of systems (such as A1-B1 and A2-B2) offers greater performance
for Global Mirror replication between sites than a configuration with partnerships that are defined
between all four systems.
Limitations on host-to-system distances
There is no limit on the Fibre Channel optical distance between
SAN Volume Controller nodes and host
servers. You can attach a server to an edge switch in a core-edge configuration with the SAN Volume Controller system at the
core. SAN Volume Controller systems support
up to three ISL hops in the fabric. Therefore, the host server and the SAN Volume Controller system can be
separated by up to five Fibre Channel links. If you use longwave small
form-factor pluggable (SFP) transceivers, four of the Fibre Channel links can be up to 10 km long.