Node configuration details

Apply these configuration details to nodes to ensure that you have a valid configuration.

Host bus adapters and node canisters

Storwize® V7000 Gen2 and Storwize V7000 Gen2+ systems offer 16 Gb Fibre Channel (FC), 10 Gb iSCSI / Fibre Channel over Ethernet (FCoE), and 1 Gb iSCSI connectivity options. Storwize V7000 Gen3 also supports a 32 Gbps Fibre Channel (FC) adapter that allows simultaneous SCSI and NVMeFC connections on the same port. For information about supported hardware, see the following website: www.ibm.com/support.

Each Storwize V7000 2076-724/U7B node canister supports the optional adapters that are shown in this table.
Table 1. Host interface adapters
Supported number of adapters | Ports | Protocol | Possible slots
0 - 2 | 4 | 32 Gb Fibre Channel | 1, 2
0 - 2 | 4 | 16 Gb Fibre Channel | 1, 2
0 - 2 | 2 | 25 Gb Ethernet (iWARP) | 1, 2
0 - 2 | 2 | 25 Gb Ethernet (RoCE) | 1, 2
0 - 1 (slot 1 only) | 2 (4-port adapter; only 2 ports are active) | 12 Gb SAS expansion | 1

Storwize V7000 Gen2 or Gen2+ can be connected to Storwize V7000 Gen3, FlashSystem 9100, and FlashSystem 9200 over a 16 Gbps Fibre Channel adapter. Storwize V7000 Gen3, FlashSystem 7200, FlashSystem 9100, and FlashSystem 9200 can be connected over 16 Gbps or 32 Gbps Fibre Channel or 25 Gbps Ethernet. The 32 Gbps Fibre Channel adapter supports simultaneous SCSI and NVMeFC connections on the same port. The SAS expansion adapter is required to use 2076-24F or 2076-92F expansion enclosures. The 25 Gb adapters support iSCSI and iSER host attachment.

Storwize V7000 2076-724/U7B node canisters also contain two USB ports and the on-board Ethernet ports shown in this table.
Table 2. On-board Ethernet ports
On-board Ethernet port | Speed | Functions
1 | 10 GbE | Management IP, Service IP, Host I/O
2 | 10 GbE | Secondary Management IP, Host I/O
3 | 10 GbE | Host I/O
4 | 10 GbE | Host I/O
Technician port | 1 GbE | DHCP/DNS for direct-attach service management
Fibre Channel over Ethernet (FCoE) is not supported.

Volumes

Each node can present a volume (SCSI logical unit) to a host through the network ports of the node canister. Each volume is accessible from both nodes in an I/O group. Each host network port can recognize up to eight paths to each logical unit (LU) that is presented by the system. The host must run a multipathing device driver so that the multiple paths resolve to a single device. You can use fabric zoning, VLANs, or port masking to reduce the number of paths to a volume that are visible to the host. The network port types are Ethernet (iSCSI), Fibre Channel, and Fibre Channel over Ethernet.
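
As an illustration only, on a Linux host the native Device Mapper Multipath driver is one common way to collapse the multiple paths into a single device; the packages and commands vary by operating system, so treat the following as a sketch rather than a required procedure:

    # Enable DM-Multipath and start the multipathd daemon (RHEL-family syntax)
    mpathconf --enable --with_multipathd y
    # Confirm that each volume appears as one multipath device with its underlying paths
    multipath -ll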

The number of paths through the network from an I/O group to a host must not exceed eight; configurations that exceed eight paths are not supported. Each node has four 8 Gbps Fibre Channel ports and two 10 Gb FCoE ports, and each I/O group has two nodes.
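
As a worked example (the port counts are illustrative), consider a host with two FC ports where each host port is zoned to two FC ports on each node of an I/O group: the host sees 2 host ports x 2 node ports x 2 nodes = 8 paths per volume, which is already at the supported maximum. Zoning each host port to only one port per node instead gives 2 x 1 x 2 = 4 paths and leaves headroom below the eight-path limit.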

Increased connectivity across SAN fabrics

The system supports more than four Fibre Channel and FCoE ports per node with the following restrictions:

  • Systems with a combined total of more than four Fibre Channel and FCoE ports on a node must be running version 6.4.0 or later.
  • Systems with more than four total FC and FCoE ports cannot establish a remote copy partnership to another system at a version earlier than 6.4.0.
  • Systems at 6.4.0 or later that are in a partnership with systems at an earlier version cannot add a node that results in more than four combined FC and FCoE ports. Activating more ports, either by enabling FCoE or by installing new hardware on existing nodes in the system, is also not allowed.

To resolve these limitations, either update the software on the remote system to version 6.4.0 or later, or disable the additional hardware by using the chnodehw -legacy CLI command.
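
A minimal sketch of the second option follows; the node name and code level shown are placeholders, so check the chnodehw entry in the CLI reference for the exact syntax at your code level:

    # Restrict the node hardware so that it presents only the port configuration
    # that the earlier (legacy) code level understands (placeholder values)
    chnodehw -legacy 6.3.0.0 node2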

Optical connections

Valid optical connections are based on the fabric rules that the manufacturers impose for the following connection methods:
  • Host to a switch
  • Back end to a switch
  • Inter-switch links (ISLs)

Optical fiber connections can be used between a node and its switches.

Systems that use the intersystem Metro Mirror and Global Mirror functions can use optical fiber connections between the switches, or they can use distance-extender technology that is supported by the switch manufacturer.

Ethernet connection

To ensure that system management can fail over between node canisters, adhere to the following requirements. When a 10 Gbps Ethernet adapter is installed, the system has four Ethernet ports: the 1 Gbps Ethernet ports are numbered 1 and 2, and the 10 Gbps Ethernet ports are numbered 3 and 4. Only the 1 Gbps Ethernet ports can be used for configuration or management. Either the 1 Gbps or the 10 Gbps Ethernet ports can be used for iSCSI connections.
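
For illustration, the system (cluster) management IP address is assigned to a management-capable Ethernet port with the chsystemip CLI command; the addresses and port number below are placeholders:

    # Set the system management IP on Ethernet port 1 (illustrative addresses)
    chsystemip -clusterip 192.168.70.121 -gw 192.168.70.1 -mask 255.255.255.0 -port 1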

For host multipathing failover to work, the iSCSI port configuration must match between the two node canisters in a control enclosure.
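
A minimal sketch of a matching iSCSI configuration, assuming node names node1 and node2 and illustrative IP addresses (check the cfgportip entry in the CLI reference for your code level):

    # Assign an iSCSI IP address to the same Ethernet port (port 3) on each node canister
    cfgportip -node node1 -ip 10.0.10.11 -mask 255.255.255.0 -gw 10.0.10.1 3
    cfgportip -node node2 -ip 10.0.10.12 -mask 255.255.255.0 -gw 10.0.10.1 3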

Fibre Channel connection

The system supports both shortwave and longwave Fibre Channel connections between nodes and the switches to which they are connected when the 16 Gbps Fibre Channel adapter is used. With the 32 Gbps Fibre Channel adapter, the system supports only shortwave Fibre Channel connections between nodes and their switches.

No ISL hops are permitted among the nodes within the same I/O group. However, no more than three ISL hops are permitted among nodes that are in the same system but in different I/O groups. If your configuration requires more than three ISL hops for nodes that are in the same system but in different I/O groups, contact your support center.

Avoid routing communication between nodes and storage systems across ISLs. To do so, connect all storage systems to the same Fibre Channel or FCF switches as the nodes. One ISL hop between the nodes and the storage systems is permitted. If your configuration requires more than one ISL, contact your support center.

In larger configurations, it is common to have ISLs between host systems and the nodes.

Port speed

Fibre Channel ports on node canisters can operate at 4 Gbps or 8 Gbps.