Node configuration details
Apply these configuration details to nodes to ensure that you have a valid configuration.
Host bus adapters and node canisters
Storwize® V7000 Gen2 and Storwize V7000 Gen2+ systems feature 16 Gb Fibre Channel (FC), 10 Gb iSCSI / Fibre Channel over Ethernet (FCoE), and 1 Gb iSCSI connectivity options. Storwize V7000 Gen3 also supports a 32 Gbps Fibre Channel (FC) adapter that supports simultaneous SCSI and NVMeFC connections on the same port. For information about supported hardware, see the following website: www.ibm.com/support.
| Supported number of adapters | Ports | Protocol | Possible slots |
|---|---|---|---|
| 0-2 | 4 | 32 Gb Fibre Channel | 1,2 |
| 0-2 | 4 | 16 Gb Fibre Channel | 1,2 |
| 0-2 | 2 | 25 Gb Ethernet (iWARP) | 1,2 |
| 0-2 | 2 | 25 Gb Ethernet (RoCE) | 1,2 |
| 0-1 (slot 1 only) | 2 (4-port adapter; only 2 ports are active) | 12 Gb SAS Expansion | 1 |
Storwize V7000 Gen2 or Gen2+ can be connected to Storwize V7000 Gen3, FlashSystem 9100, and FlashSystem 9200 over a 16 Gbps Fibre Channel adapter. Storwize V7000 Gen3, FlashSystem 7200, FlashSystem 9100, and FlashSystem 9200 can be connected over 16 or 32 Gbps Fibre Channel or 25 Gbps Ethernet. The 32 Gbps Fibre Channel adapter supports simultaneous SCSI and NVMeFC connections on the same port. The SAS expansion adapter is required to use 2076-24F or 2076-92F expansion enclosures. The 25 Gb adapters support iSCSI and iSER host attachment.
| On-board Ethernet port | Speed | Functions |
|---|---|---|
| 1 | 10 GbE | Management IP, Service IP, Host I/O |
| 2 | 10 GbE | Secondary Management IP, Host I/O |
| 3 | 10 GbE | Host I/O |
| 4 | 10 GbE | Host I/O |
| Technician port | 1 GbE | DHCP/DNS for direct attach service management |
Volumes
Each node can present a volume (SCSI logical unit) to a host through network ports of the node canister. Each volume is accessible from the two nodes in an I/O group. Each host network port can recognize up to eight paths to each logical unit (LU) that is presented by the system. The hosts must run a multipathing device driver before the multiple paths can resolve to a single device. You can use fabric zoning, VLANs, or port masking to reduce the number of paths to a volume that are visible by the host. The network port types are Ethernet (iSCSI), Fibre Channel, and Fibre Channel over Ethernet.
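The effect of port masking mentioned above can be sketched as a bitmask in which each bit enables one logical port for host I/O. This is a hedged illustration of the general idea, not the product CLI; the helper function, the 64-bit width, and the right-to-left bit ordering are assumptions for the sketch, and the exact mask semantics are defined in the product's command-line reference.

```python
# Sketch: computing a port mask as a binary string.
# Assumption (illustrative, not the product CLI): bit n-1 of the mask,
# counted from the right, enables logical port n for host I/O.

def make_port_mask(enabled_ports, width=64):
    """Return a port mask string of `width` bits with the given ports enabled."""
    bits = ["0"] * width
    for port in enabled_ports:
        bits[width - port] = "1"  # rightmost bit corresponds to port 1
    return "".join(bits)

# Enable only ports 1 and 2 for host I/O, masking the rest.
mask = make_port_mask([1, 2])
print(mask[-8:])  # last 8 bits: 00000011
```

Reducing the set of enabled bits is one way to keep the number of paths a host sees within supported limits.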
The number of paths through the network from an I/O group to a host must not exceed eight; configurations that exceed eight paths are not supported. Each node has four 8 Gbps Fibre Channel ports and two 10 Gbps FCoE ports, and each I/O group has two nodes.
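The eight-path limit above reduces to simple arithmetic: the number of paths a host sees to a volume is the number of node ports it can reach after zoning or masking, multiplied by the two nodes of the I/O group. A minimal sketch under that assumption (function and variable names are illustrative):

```python
# Sketch: verifying the eight-path-per-volume limit for an I/O group.
# `ports_zoned_per_node` is the number of node-canister ports each host
# can reach after zoning/masking (an assumption for this illustration).

NODES_PER_IO_GROUP = 2
MAX_PATHS = 8

def paths_to_volume(ports_zoned_per_node):
    """Paths the host sees to a volume presented by the I/O group."""
    return ports_zoned_per_node * NODES_PER_IO_GROUP

def is_supported(ports_zoned_per_node):
    """True if the configuration stays within the eight-path limit."""
    return paths_to_volume(ports_zoned_per_node) <= MAX_PATHS

print(paths_to_volume(4), is_supported(4))  # 8 True: four ports per node is the limit
print(paths_to_volume(6), is_supported(6))  # 12 False: zoning must be reduced
```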
Increased connectivity across SAN fabrics
The system supports more than four Fibre Channel and FCoE ports per node; however, restrictions apply when the system is partnered with a remote system that runs an earlier software level. To resolve these limitations, update the software on the remote system to version 6.4.0 or later, or disable the additional hardware by using the `chnodehw -legacy` CLI command.
Optical connections
Optical fiber connections can be used between a node and its switches.
Systems that use the intersystem Metro Mirror and Global Mirror functions can use optical fiber connections between the switches, or they can use distance-extender technology that is supported by the switch manufacturer.
Ethernet connection
When a 10 Gbps Ethernet adapter is installed, the system has four Ethernet ports: the 1 Gbps Ethernet ports are numbered 1 and 2, and the 10 Gbps Ethernet ports are numbered 3 and 4. To ensure that system management failover operations work, use only the 1 Gbps Ethernet ports for configuration and management. Either the 1 Gbps or the 10 Gbps Ethernet ports can be used for iSCSI connections.
For host multipathing failover, node canister iSCSI ports must match between the two nodes in a control enclosure.
Fibre Channel connection
The system supports both shortwave and longwave Fibre Channel connections between nodes and their switches when the 16 Gbps Fibre Channel adapter is used. With the 32 Gbps Fibre Channel adapter, only shortwave Fibre Channel connections are supported.
No ISL hops are permitted between nodes within the same I/O group. Up to three ISL hops are permitted between nodes that are in the same system but in different I/O groups. If your configuration requires more than three ISL hops for nodes that are in the same system but in different I/O groups, contact your support center.
Avoid routing communication between nodes and storage systems across ISLs. To do so, connect all storage systems to the same Fibre Channel or FCF switches as the nodes. One ISL hop between the nodes and the storage systems is permitted; if your configuration requires more than one ISL hop, contact your support center.
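The ISL rules above can be summarized as a small validation sketch. The rule table and function name are illustrative, not part of the product; the hop limits themselves come from the text above.

```python
# Sketch: checking ISL hop counts against the supported limits described above.
# Limits: 0 hops within an I/O group, up to 3 hops between I/O groups in the
# same system, and at most 1 hop between nodes and back-end storage systems.

MAX_HOPS = {
    "same_io_group": 0,    # nodes in one I/O group must share a switch
    "same_system": 3,      # different I/O groups, same system
    "node_to_storage": 1,  # node to back-end storage system
}

def within_limit(link_type, hops):
    """Return True if `hops` ISL hops are supported for this link type."""
    return hops <= MAX_HOPS[link_type]

print(within_limit("same_io_group", 0))    # True
print(within_limit("same_system", 3))      # True
print(within_limit("node_to_storage", 2))  # False: contact your support center
```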
In larger configurations, it is common to have ISLs between host systems and the nodes.
Port speed
Fibre Channel ports on node canisters negotiate their operating speed with the attached switch. Ports on the 16 Gbps Fibre Channel adapter can operate at 4 Gbps, 8 Gbps, or 16 Gbps; ports on the 32 Gbps Fibre Channel adapter can operate at 8 Gbps, 16 Gbps, or 32 Gbps.