Zoning details

Ensure that you are familiar with these zoning details, which explain external storage system zones and host zones. More details are included in the SAN configuration and zoning rules summary.

Paths to hosts

The number of paths through the network from the SAN Volume Controller nodes to a host must not exceed eight. Configurations in which this number is exceeded are not supported.
  • Each SAN Volume Controller 2145-CG8 node has four Fibre Channel (FC) ports as standard and optionally four more FC ports or two 10 Gbps Ethernet ports for FCoE use. Each I/O group has two nodes. Therefore, with no zoning in a dual SAN environment, the number of paths to a volume is as follows:
    • With the standard four-port FC host bus adapter (HBA), the number of paths is four multiplied by the number of host ports. For example, if a host has two ports, you multiply two by four, resulting in eight paths, which is the maximum supported.
    • With the standard four-port FC HBA and an optional second four-port HBA, the number of paths is eight multiplied by the number of host ports. For example, if a host has two ports, you multiply two by eight, resulting in 16 paths, which exceeds the limit of eight and is not supported.
    • With the standard four-port FC HBA and an optional two-port FCoE HBA, the number of paths is six multiplied by the number of host ports. For example, if a host has two ports, you multiply two by six, resulting in 12 paths, which exceeds the limit of eight and is not supported.
  • This rule exists to limit the number of paths that must be resolved by the multipathing device driver. More paths do not equate to better performance or higher availability. For optimum performance and availability, limit a host with two Fibre Channel ports to only four paths: one path to each node on each SAN.
  • More layout and zoning requirements are necessary for an N_Port ID Virtualization (NPIV) configuration in comparison to Fibre Channel host attachment without NPIV. These requirements follow from the fact that NPIV port failover between nodes must be transparent to hosts. Hence, the set of host ports on which an NPIV port is visible cannot change as a result of a failover.
  • If you set the NPIV status of a specified I/O group to transitional by entering the CLI command chiogrp -fctargetportmode transitional, you might double the number of paths from the system to a host. To avoid increasing the number of paths substantially, use zoning or other means to temporarily remove some of the paths until the NPIV status of the I/O group is changed to enabled.
  • If a replication layer system is using an NPIV-enabled storage layer system for backend storage, the replication layer system must be zoned in to both the NPIV and physical ports of the storage layer system.
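
As a rough sketch, the path arithmetic in the examples above can be expressed in code. The function name and parameters below are illustrative, not part of any product API; the sketch assumes a dual-SAN fabric with each node's ports split evenly between the two fabrics:

```python
MAX_SUPPORTED_PATHS = 8

def paths_to_volume(host_ports: int, ports_per_node: int,
                    nodes_per_io_group: int = 2, fabrics: int = 2) -> int:
    """Unzoned paths from a host to a volume in a dual-SAN setup.

    Each host port sees, on its own fabric, ports_per_node / fabrics
    ports on each node of the I/O group.
    """
    return host_ports * (ports_per_node // fabrics) * nodes_per_io_group

# The three examples from the text, each for a two-port host:
assert paths_to_volume(2, 4) == 8    # standard four-port HBA: supported maximum
assert paths_to_volume(2, 8) == 16   # second four-port HBA: exceeds the limit
assert paths_to_volume(2, 6) == 12   # four FC + two FCoE ports: exceeds the limit
```

Keeping the result at or below eight is what the zoning guidance in this topic achieves.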

To find the worldwide port names (WWPNs) that are required to set up Fibre Channel zoning with hosts, use the lstargetportfc command. This command also displays the current failover status of host I/O ports.

To restrict the number of paths to a host, zone the switches so that each host bus adapter (HBA) port is zoned with one SAN Volume Controller port from each node in each I/O group from which it accesses volumes. If a host has multiple HBA ports, zone each port to a different set of SAN Volume Controller ports to maximize performance and redundancy. This rule also applies to a host with a Converged Network Adapter (CNA) that accesses volumes through FCoE.
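
One hedged way to picture this zoning rule in code, using purely illustrative port names (N1P1 and so on) rather than real WWPNs:

```python
def zone_host(hba_ports, io_groups):
    """Zone each host HBA port with exactly one port from each node of
    each I/O group the host uses, rotating through the node ports so
    that different HBA ports land on different controller ports.

    io_groups: list of I/O groups; each I/O group is a list of nodes;
    each node is a list of its port names (illustrative, not real WWPNs).
    """
    zones = {}
    for i, hba in enumerate(hba_ports):
        members = []
        for nodes in io_groups:
            for node_ports in nodes:
                # a different port per HBA index spreads the workload
                members.append(node_ports[i % len(node_ports)])
        zones[hba] = members
    return zones

io_grp0 = [["N1P1", "N1P2", "N1P3", "N1P4"],
           ["N2P1", "N2P2", "N2P3", "N2P4"]]
zones = zone_host(["HBA0", "HBA1"], [io_grp0])
# Each HBA port is zoned to one port per node: two paths per HBA port,
# four paths in total for this two-port host.
assert zones["HBA0"] == ["N1P1", "N2P1"]
assert zones["HBA1"] == ["N1P2", "N2P2"]
```

The rotation is one simple way to satisfy the "different set of ports per HBA port" guidance; any assignment that balances host ports across controller ports serves the same purpose.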

External storage system zones

Switch zones that contain storage system ports must not have more than 40 ports. A configuration that exceeds 40 ports is not supported.

SAN Volume Controller zones

The switch fabric must be zoned so that the SAN Volume Controller nodes can detect the back-end storage systems and the front-end host HBAs. Typically, the front-end host HBAs and the back-end storage systems are not in the same zone. The exception is when a split host and split storage system configuration is in use.

All nodes in a system must be able to detect the same ports on each back-end storage system. Operation in a mode where two nodes detect a different set of ports on the same storage system is degraded, and the system logs errors that request a repair action. This can occur if inappropriate zoning is applied to the fabric or if inappropriate LUN masking is used. This rule has important implications for back-end storage, such as IBM® DS4000® storage systems, which impose exclusive rules for mappings between HBA worldwide node names (WWNNs) and storage partitions.

Each SAN Volume Controller port must be zoned so that it can be used for internode communications. When configuring switch zoning, you can zone some SAN Volume Controller node ports to a host or to back-end storage systems.

When configuring zones for communication between nodes in the same system, the minimum configuration requires that all Fibre Channel ports on a node detect at least one Fibre Channel port on each other node in the same system. You cannot reduce the configuration below this minimum.
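
A minimal sketch of this internode requirement as a validation check; the node and port names are hypothetical:

```python
def internode_zoning_ok(nodes, visible):
    """Check the minimum internode requirement: every FC port on a node
    must detect at least one FC port on each other node in the system.

    nodes: dict of node name -> list of its port names.
    visible: dict of port name -> set of ports it detects on the fabric.
    """
    for node, ports in nodes.items():
        for port in ports:
            for other, other_ports in nodes.items():
                if other == node:
                    continue
                if not visible.get(port, set()) & set(other_ports):
                    return False  # this port sees nothing on 'other'
    return True

nodes = {"node1": ["N1P1", "N1P2"], "node2": ["N2P1", "N2P2"]}
good = {"N1P1": {"N2P1"}, "N1P2": {"N2P2"},
        "N2P1": {"N1P1"}, "N2P2": {"N1P2"}}
assert internode_zoning_ok(nodes, good)

bad = dict(good, N1P2=set())  # one port sees no port on node2
assert not internode_zoning_ok(nodes, bad)
```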

It is critical that you configure storage systems and the SAN so that a system cannot access logical units (LUs) that a host or another system can also access. You can achieve this configuration with storage system logical unit number (LUN) mapping and masking.

If a node can detect a storage system through multiple paths, use zoning to restrict communication to those paths that do not travel over ISLs.

With Metro Mirror and Global Mirror configurations, additional zones are required that contain only the local nodes and the remote nodes. It is valid for the local hosts to see the remote nodes or for the remote hosts to see the local nodes. Any zone that contains the local and the remote back-end storage systems and local nodes or remote nodes, or both, is not valid.

For best results in Metro Mirror and Global Mirror configurations where the round-trip latency between systems is less than 80 milliseconds, zone each node so that it can communicate with at least one Fibre Channel port on each node in each remote system. This configuration maintains fault tolerance against port and node failures within the local and remote systems. For communications between multiple systems, it also achieves optimal performance from the nodes and the intersystem links.

However, to accommodate the limitations of some switch vendors on the number of ports or worldwide node names (WWNNs) that are allowed in a zone, you can further reduce the number of ports or WWNNs in a zone. Such a reduction can result in reduced redundancy and additional workload being placed on other system nodes and the Fibre Channel links between the nodes of a system.

If the round-trip latency between systems is greater than 80 milliseconds, stricter configuration requirements apply:
  • Use SAN zoning and port masking to ensure that two Fibre Channel ports on each node that is used for replication are dedicated for replication traffic.
  • Apply SAN zoning to provide separate intersystem zones for each local-to-remote I/O group pair that is used for replication. See the information about long-distance links for Metro Mirror and Global Mirror partnerships for further details.

The minimum configuration requirement is to zone both nodes in one I/O group to both nodes in one I/O group at the secondary site. This zoning maintains fault tolerance if a node or a port fails at either the local or the remote site. It does not matter which I/O groups at either site are zoned because I/O traffic can be routed through other nodes to reach its destination. However, if the I/O group that does the routing contains the nodes that service the host I/O, there is no additional burden or latency for those I/O groups because the I/O group nodes are directly connected to the remote system.

If only a subset of the I/O groups within a system is using Metro Mirror and Global Mirror, you can restrict the zoning so that only those nodes can communicate with nodes in remote systems. You can zone nodes that are not members of any system so that they can detect all the systems. You can then add such a node to the system if you must replace a node.

Host zones

The configuration rules for host zones are different depending upon the number of hosts that access the system. For configurations of fewer than 64 hosts per system, SAN Volume Controller supports a simple set of zoning rules that enable a small set of host zones to be created for different environments. For configurations of more than 64 hosts per system, SAN Volume Controller supports a more restrictive set of host zoning rules. These rules apply for both Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) connectivity.

Zoning that contains host HBAs must ensure that HBAs in dissimilar hosts, or dissimilar HBAs, are in separate zones. Hosts are dissimilar if they run different operating systems or are different hardware platforms; different levels of the same operating system are regarded as similar.

To obtain the best overall performance of the system and to prevent overloading, the workload to each SAN Volume Controller port must be equal. This can typically involve zoning approximately the same number of host Fibre Channel ports to each SAN Volume Controller Fibre Channel port.

Systems with fewer than 64 hosts:
For systems with fewer than 64 attached hosts, zones that contain host HBAs must contain no more than 40 initiators, including the SAN Volume Controller ports that act as initiators. A configuration that exceeds 40 initiators is not supported. For example, a valid zone can contain 32 host ports plus eight SAN Volume Controller ports. When possible, place each HBA port in a host that connects to a node into a separate zone, and include exactly one port from each node in the I/O groups that are associated with this host. This type of host zoning is not mandatory, but is preferred for smaller configurations.
Note: If the switch vendor recommends fewer ports per zone for a particular SAN, the rules that are imposed by the vendor take precedence over SAN Volume Controller rules.
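
The 40-initiator rule amounts to a simple count, sketched here with illustrative numbers:

```python
MAX_INITIATORS = 40

def host_zone_valid(host_ports: int, svc_ports: int) -> bool:
    """A host zone is valid only if its initiator count, including the
    SAN Volume Controller ports that act as initiators, is at most 40."""
    return host_ports + svc_ports <= MAX_INITIATORS

assert host_zone_valid(32, 8)       # the 32 + 8 example from the text
assert not host_zone_valid(36, 8)   # 44 initiators: not supported
```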

To obtain the best performance from a host with multiple Fibre Channel ports, the zoning must ensure that each Fibre Channel port of a host is zoned with a different group of SAN Volume Controller ports.

Systems with more than 64 hosts:
Each HBA port must be in a separate zone and each zone must contain exactly one port from each SAN Volume Controller node in each I/O group that the host accesses.
Note: A host can be associated with more than one I/O group and therefore access volumes from different I/O groups in a SAN. However, this reduces the maximum number of hosts that can be used in the SAN. For example, if the same host uses volumes in two different I/O groups, this consumes one of the 256 iSCSI hosts in each I/O group, or one of the 512 FC, FCoE or SAS hosts in each I/O group for CF8 and CG8 nodes (256 FC, FCoE or SAS hosts for other node types). If each host accesses volumes in every I/O group, there can be only 256 iSCSI hosts, or 512 FC, FCoE, or SAS hosts for CF8 and CG8 nodes (256 FC, FCoE, or SAS hosts for other node types), in the configuration.
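
The host-count arithmetic in the note can be sketched as follows. The function and the four-I/O-group system size are assumptions for illustration; the per-I/O-group limits come from the text:

```python
def max_hosts(per_io_group_limit: int, io_groups: int,
              io_groups_per_host: int) -> int:
    """Maximum number of hosts when every host consumes one host object
    in each I/O group it accesses: the total host-object capacity is
    per_io_group_limit * io_groups, and each host uses
    io_groups_per_host of those objects."""
    return (per_io_group_limit * io_groups) // io_groups_per_host

# 512 FC/FCoE/SAS hosts per I/O group on CG8 nodes, four I/O groups:
assert max_hosts(512, 4, 1) == 2048  # each host confined to one I/O group
assert max_hosts(512, 4, 4) == 512   # each host accesses every I/O group

# 256 iSCSI hosts per I/O group, each host accessing every I/O group:
assert max_hosts(256, 4, 4) == 256
```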