System overview

The IBM® Storwize® V7000 system is a virtualizing RAID storage system.

IBM Spectrum Virtualize software

The IBM Storwize V7000 system is built with IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage™ family.

The software provides these functions for the host systems that attach to the system:
  • Creates a single pool of storage
  • Provides logical unit virtualization
  • Manages logical volumes
  • Mirrors logical volumes
The system also provides the following functions:
  • Large scalable cache
  • Copy Services:
    • IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
    • IBM HyperSwap® (active-active copy) function
    • Metro Mirror (synchronous copy)
    • Global Mirror (asynchronous copy)
    • Data migration
  • Space management:
    • IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
    • Metering of service quality when combined with IBM Spectrum® Connect. For information, refer to the IBM Spectrum Connect documentation.
    • Thin-provisioned logical volumes
    • Compressed volumes to consolidate storage using data reduction pools (Real-Time Compression is not supported in Storwize V7000 Gen 3.)
    • Data Reduction pools with deduplication

System models

There are three models of Storwize V7000 systems:
  • Storwize V7000 Gen2 (2076-524)
  • Storwize V7000 Gen2+ (2076-624)
  • Storwize V7000 Gen3 (2076-724)

The following sections describe general information that applies to the systems. However, for more information about Storwize V7000 2076-724/U7B systems, see Storwize V7000 Gen3 system overview.

System hardware

The storage system consists of a set of drive enclosures. Control enclosures contain drives and two node canisters. A collection of control enclosures that are managed as a single system is called a clustered system.

The two node canisters in each control enclosure are arranged into a pair that is called an I/O group. A single pair is responsible for serving I/O on a specific volume. Because a volume is served by two node canisters, the volume continues to be available if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume cannot be accessed through that node.
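The paired-canister behavior above can be sketched in code. The following is an illustrative model only (not IBM code; all class and state names are hypothetical): each volume is served by an I/O group of two node canisters, ALUA reports a per-path state so hosts stop using paths through an offline node, and the volume stays accessible while at least one canister is online.

```python
# Hypothetical sketch of I/O-group availability with ALUA-style path states.
from dataclasses import dataclass

ACTIVE, UNAVAILABLE = "active", "unavailable"

@dataclass
class NodeCanister:
    name: str
    online: bool = True

@dataclass
class Volume:
    name: str
    io_group: list  # exactly two NodeCanister objects

    def path_states(self):
        # ALUA exposes a state per path: paths through an offline node
        # are reported unavailable so hosts stop sending I/O there.
        return {n.name: (ACTIVE if n.online else UNAVAILABLE)
                for n in self.io_group}

    def is_accessible(self):
        # The volume remains available while either canister serves it.
        return any(n.online for n in self.io_group)

n1, n2 = NodeCanister("node1"), NodeCanister("node2")
vol = Volume("vol0", [n1, n2])
n1.online = False           # e.g. node1 is taken offline for service
print(vol.path_states())    # node1 unavailable, node2 still active
print(vol.is_accessible())  # True: I/O continues through node2
```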

The system supports both regular and flash drives. In addition, a system without any internal drives can be used as a storage virtualization solution.

Figure 1 shows the system as a traditional RAID storage system. The internal drives are configured into arrays and volumes are created from those arrays.

Figure 1. System as a RAID storage system
This figure shows an overview of a RAID storage system.
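The idea in Figure 1, internal drives grouped into an array whose capacity backs volumes, can be sketched as follows. This is a simplified illustration under stated assumptions (no RAID parity overhead, GiB-granular allocation); the `Array` class and its methods are hypothetical, not a product API.

```python
# Hypothetical sketch: drives form an array; volumes draw on its capacity.
class Array:
    def __init__(self, drive_capacities_gib):
        # Simplification: usable capacity is the raw sum of the drives;
        # a real RAID array reserves capacity for redundancy.
        self.capacity = sum(drive_capacities_gib)
        self.allocated = 0

    def create_volume(self, size_gib):
        if self.allocated + size_gib > self.capacity:
            raise ValueError("not enough free capacity in the array")
        self.allocated += size_gib
        return {"size_gib": size_gib}

array = Array([4000] * 6)        # six 4000 GiB drives
vol = array.create_volume(8000)  # carve an 8000 GiB volume
print(array.capacity - array.allocated)  # 16000 GiB remains free
```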

The system can also be used to virtualize other storage systems, as shown in Figure 2.

Figure 2. System shown virtualizing other storage systems
This figure shows an overview of virtualizing other storage systems.
Figure 3 shows an example of a Storwize V7000 control enclosure ( 1 ). The control enclosure is composed of two node canisters ( 2 ). Together, the node canisters comprise an I/O group ( 3 ). Both node canisters in the I/O group (or control enclosure) have the same level of access to each drive. Each drive has dual SAS ports. One port connects to node canister 1 and the other port connects to node canister 2. The SAN (or each host) is then connected to both node canisters in the I/O group, giving dual redundant access to the volumes ( 4 ) presented by the I/O group.
Figure 3. Relationship between a control enclosure, node canisters, and an I/O group
An example of a control enclosure, node canisters, and I/O group
  •  1  Control enclosure
  •  2  Node canister
  •  3  I/O group
  •  4  Volumes
  •  5  Power supply unit

The control enclosure contains two independent power supply units (PSU). Each PSU ( 5 ) provides power to the entire enclosure. If one power supply fails, the remaining PSU supplies power to keep all components in the enclosure operational.

Expansion enclosures contain drives and are attached to the control enclosure. Expansion canisters include the serial-attached SCSI (SAS) interface hardware that enables each of the node canisters to use the drives of the expansion enclosures. Figure 4 shows an example of a control enclosure ( 1 ) that is connected to four expansion enclosures ( 2 ). The SAS chain above the control enclosure is separate from the SAS chain that is shown below the control enclosure. Each chain is dual-redundant because it is connected to node canisters 1 and 2.

Figure 4. Relationship between a control enclosure and expansion enclosures
An example of a control enclosure and expansion enclosures
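The dual-redundant cabling in Figure 4 amounts to a simple reachability property: every expansion enclosure on a chain can be reached from both node canisters. The sketch below is illustrative only; the cabling map and enclosure names are hypothetical, not taken from a real configuration.

```python
# Hypothetical check that each expansion enclosure is reachable from
# both node canisters of the control enclosure.
def reachable(cabling, start, target):
    """Follow SAS links hop by hop from a canister toward an enclosure."""
    seen, frontier = set(), [start]
    while frontier:
        hop = frontier.pop()
        if hop == target:
            return True
        if hop in seen:
            continue
        seen.add(hop)
        frontier.extend(cabling.get(hop, []))
    return False

# Example chain: both canisters cable to exp1, which chains on to exp2.
cabling = {
    "node1": ["exp1"],
    "node2": ["exp1"],
    "exp1": ["exp2"],
}
for exp in ("exp1", "exp2"):
    assert reachable(cabling, "node1", exp)
    assert reachable(cabling, "node2", exp)
print("each expansion enclosure is reachable from both node canisters")
```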

System topology

The system topology can be set up in several different ways.
  • Standard topology, where all node canisters in the system are at the same site.
    Figure 5. Example of a standard system topology
    This figure shows an example of a standard system topology

System management

The nodes in a clustered system operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface for performing hardware service actions.
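The configuration-node behavior described above can be sketched as a small election routine. This is purely illustrative (not IBM code, and not the actual selection algorithm): the role stays with its current holder while that node is online, and otherwise moves to another online node so management access continues.

```python
# Hypothetical sketch of the configuration-node role moving on failure.
def elect_config_node(nodes, current=None):
    """Keep the current configuration node if it is still online;
    otherwise elect the first online node to take the role."""
    online = [n for n in nodes if n["online"]]
    if not online:
        raise RuntimeError("no nodes available to manage the system")
    if current in online:
        return current
    return online[0]

nodes = [{"name": "node1", "online": True},
         {"name": "node2", "online": True}]
cfg = elect_config_node(nodes)       # node1 takes the role
nodes[0]["online"] = False           # the configuration node fails
cfg = elect_config_node(nodes, cfg)  # the role moves to node2
print(cfg["name"])  # node2
```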

Fabric types

I/O operations between hosts and nodes and between nodes and RAID storage systems are performed by using the SCSI standard. The nodes communicate with each other by using private SCSI commands.

Storwize V7000 Gen2 or Storwize V7000 Gen2+ systems can have up to eight 10 Gbps Ethernet ports per control enclosure when two 4-port 10 GbE host interface adapters are installed.

Table 1 shows the fabric type that can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at the same time.

Table 1. Types of communications
Communications type | Host to node | Node to storage system | Node to node
Fibre Channel SAN | Yes | Yes | Yes
iSCSI (1 Gbps, 10 Gbps, or 25 Gbps Ethernet; 25 Gbps on Storwize V7000 Gen2+ systems) | Yes | Yes | Yes
RDMA-capable Ethernet ports for node-to-node communication (25 Gbps Ethernet) | No | No | Yes
Fibre Channel over Ethernet SAN (10 Gbps Ethernet) | Yes | Yes | Yes
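The rules in Table 1 reduce to a lookup from fabric type to the kinds of traffic it carries. The sketch below is an illustrative encoding only; the dictionary keys and traffic-type names are hypothetical, not a product API.

```python
# Hypothetical encoding of Table 1: fabric type -> supported traffic.
SUPPORTED = {
    "fc_san":        {"host_to_node", "node_to_storage", "node_to_node"},
    "iscsi":         {"host_to_node", "node_to_storage", "node_to_node"},
    "rdma_ethernet": {"node_to_node"},   # node-to-node only
    "fcoe_san":      {"host_to_node", "node_to_storage", "node_to_node"},
}

def supports(fabric, traffic):
    """Return True if the given fabric type carries the given traffic."""
    return traffic in SUPPORTED.get(fabric, set())

print(supports("rdma_ethernet", "node_to_node"))  # True
print(supports("rdma_ethernet", "host_to_node"))  # False
```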