SAN Volume Controller overview e-Learning course

The new IBM(R) SAN Volume Controller system provides many benefits to storage administrators: efficient storage administration; a single point of management for storage systems from different manufacturers; and data migration that does not interrupt host I/O activity. With the latest model of the SAN Volume Controller system, expansion enclosures are supported, and hardware models can be intermixed in any clustered system. In this e-Learning overview, you will learn basic information about the SAN Volume Controller system and how it works.

The SAN Volume Controller system combines hardware and software to control the mapping of storage into volumes in a SAN environment. The system consists of hardware nodes and a management user interface, which helps you perform management tasks.

The SAN Volume Controller system includes hardware components that are rack-mounted units called nodes. IBM offers more than one node model, so the nodes that you work with might look slightly different from this example. You can mix different node models in a single clustered system. Note that some node models are not supported on all versions of the SAN Volume Controller software.

Flash drives are supported through the expansion enclosures in the system. An enclosure is shared by a pair of nodes.

Nodes are always installed in pairs, and each pair is known as an I/O group. A single pair is responsible for serving I/O on a given volume. Because a volume is served by two nodes, there is no loss of availability should one node fail or be taken offline. One to four I/O groups make up a clustered system, so you can have a maximum of eight nodes in any clustered system.

During the system setup wizard, you are given the option to stretch the system across multiple sites. Spreading an installation across two or more sites is called a stretched system. This configuration provides basic disaster protection and offers high availability for workloads across the two data centers. In a stretched system, the two nodes in an I/O group are separated by distance between two sites, and each volume is mirrored with one copy at each site. This means that you can lose the SAN or power at one site and still access the volumes from the alternative site. A stretched SAN Volume Controller system can have up to four I/O groups.

To understand how the SAN Volume Controller system works, you must first understand the concept of virtualization. You can use virtualization to manage physical resources as shared pools of virtual resources. The SAN Volume Controller system brings storage devices together in a virtual-storage pool. The pool can be used to centrally manage and allocate capacity as needed.

Here is how it works. A storage array consists of several physical drives that are logically grouped into redundant arrays of independent disks, or RAID.

These arrays are managed by one or more storage systems. Each storage system manages a set of logical unit numbers, or LUNs, that correspond to arrays.

Instead of mapping to hosts directly, the LUNs are mapped to the SAN Volume Controller system as groups of managed disks, which are combined into pools of virtual storage.

Virtual volumes are then presented to the hosts, using a process known as symmetric virtualization. The virtual storage can be accessed on the storage area network, or SAN, allowing you to more efficiently manage storage resources.

So how does the SAN Volume Controller system perform this virtualization? First, it automatically discovers the managed disks that are presented by the storage systems. Each managed disk corresponds to a LUN. You assign disks to storage pools that you create, based on performance and other characteristics.

The I/O groups translate the managed disks into storage pools that are then translated into one or more volumes. The volumes are presented to a host system. The I/O groups are connected to the SAN so that all storage systems and all application servers are visible to all the I/O groups.

The system divides the managed disks into equally sized blocks of storage, called extents, which are then mapped in different ways to become volumes.
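Conceptually, this extent-to-volume mapping can be sketched as follows. This is a simplified illustration of extent-based virtualization in general, not the actual SAN Volume Controller implementation; all names and sizes here are invented for the example.

```python
# Conceptual sketch of extent-based virtualization (illustration only).
# A storage pool is divided into fixed-size extents; a volume is simply
# an ordered list of references to extents, which may come from
# different managed disks in the pool.

EXTENT_SIZE_MB = 1024  # all extents in a given pool share one size

class StoragePool:
    def __init__(self, mdisk_sizes_mb):
        # Split each managed disk into whole extents: (mdisk index, extent index)
        self.free_extents = [
            (m, e)
            for m, size in enumerate(mdisk_sizes_mb)
            for e in range(size // EXTENT_SIZE_MB)
        ]

    def create_volume(self, size_mb):
        count = size_mb // EXTENT_SIZE_MB
        if count > len(self.free_extents):
            raise ValueError("not enough free capacity in pool")
        volume = self.free_extents[:count]
        self.free_extents = self.free_extents[count:]
        return volume  # a volume is just a map of volume extents to pool extents

pool = StoragePool([4096, 2048])   # two managed disks: 4 GB and 2 GB
vol = pool.create_volume(3072)     # a 3 GB volume consumes 3 extents
print(len(vol))                    # 3
```

Because the volume is only a mapping, its extents can come from any managed disk in the pool, which is what lets capacity be allocated centrally and as needed.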

You can create different types of volumes, including generic, thin-provisioned, mirrored, thin mirrored, and compressed volumes.

A generic volume is a fully provisioned (thick) volume. For example, if you create a 10 GB generic volume, 10 GB of space is allocated immediately. When you create a 10 GB thin-provisioned volume, no space is allocated until a host writes data to it.

With a thin-provisioned volume, the volume size presented to a host system is larger than the real storage actually allocated to the volume. This saves space if many of the blocks within the volume are not used. When additional real storage is required, you can manually or automatically expand the real storage.
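The allocate-on-write behavior described above can be sketched in a few lines. This is a conceptual illustration of thin provisioning in general, with invented names, not the product's internal design.

```python
# Conceptual sketch of a thin-provisioned volume (illustration only).
# The virtual size presented to the host is fixed, but real extents are
# allocated from the pool only when a region is first written.

EXTENT_SIZE = 1024  # arbitrary units for this sketch

class ThinVolume:
    def __init__(self, virtual_size, pool_free_extents):
        self.virtual_size = virtual_size   # the size the host sees
        self.map = {}                      # virtual extent -> real extent
        self.pool = pool_free_extents      # shared free-extent list

    def write(self, offset, data):
        ext = offset // EXTENT_SIZE
        if ext not in self.map:            # first write to this region:
            self.map[ext] = self.pool.pop()  # allocate real storage on demand
        # ... data would be written to self.map[ext] here ...

    def real_allocated(self):
        return len(self.map) * EXTENT_SIZE

pool = list(range(100))                  # 100 free real extents
vol = ThinVolume(10 * EXTENT_SIZE, pool) # host sees 10 extents of capacity
vol.write(0, b"hello")                   # only now is real storage consumed
vol.write(5 * EXTENT_SIZE, b"world")
print(vol.real_allocated())              # 2048: two extents backed, ten presented
```

Expanding the real capacity, manually or automatically, amounts to adding more free extents to the pool the volume draws from.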

With mirrored volumes, there are two volume copies, and the host is only aware of the original volume. Mirrored volumes can enable a volume to remain online even when some of the associated storage systems cannot be accessed.

By using a thin mirror volume, you can allocate the required physical space on demand and have several copies of a volume available.

You can use compressed volumes to help ensure efficient use of storage resources. As data is written to the volume, it is compressed, so less real storage capacity is required.

After the volumes are created, you can specify which hosts can access the volumes.

You can attach Fibre Channel, Fibre Channel over Ethernet, which is known as FCoE, and iSCSI hosts to the SAN Volume Controller system. For more information, see the Attaching hosts e-Learning modules.

In addition to all these capabilities, the SAN Volume Controller system also provides advanced SAN functions, including data migration and Copy Services. You might migrate data when you want to rebalance workload, move workload either to newly installed storage or from storage that is about to be replaced, or migrate data from existing disks to disks managed by the SAN Volume Controller system. Data migration is performed without interruption to the host I/O.

Volumes are created by mapping disk extents to volume extents. Data migration essentially changes this mapping. Migration can be performed at the volume, disk, or the extent level, depending on the purpose of the migration.
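The idea that migration is "just" a change of mapping can be sketched as follows. This is a conceptual illustration of extent-level migration, with invented names; the real system handles concurrent host I/O, ordering, and failures far more carefully.

```python
# Conceptual sketch of extent-level migration (illustration only).
# Data is copied extent by extent to the new disk, and the volume's
# mapping is updated as each extent completes. Because hosts always
# read through the mapping, they never see an interruption.

def migrate_volume(volume_map, storage, new_disk):
    """volume_map: list of (disk, extent); storage: {(disk, extent): data}."""
    for i, (disk, ext) in enumerate(volume_map):
        storage[(new_disk, ext)] = storage[(disk, ext)]  # copy one extent
        volume_map[i] = (new_disk, ext)                  # then remap it
    return volume_map

storage = {("old", 0): "A", ("old", 1): "B"}
vmap = [("old", 0), ("old", 1)]
migrate_volume(vmap, storage, "new")
print(vmap)  # [('new', 0), ('new', 1)] -- same data, new physical location
```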

Several types of Copy Services are provided that help you to migrate, back up, and recover data. These functions are performed by creating synchronous and asynchronous copies of volumes. These Copy Services include IBM FlashCopy(R), Metro Mirror, and Global Mirror.

The FlashCopy function copies data instantaneously from a source volume to a target volume. This copy is taken at a particular point in time as hosts continue to access the data. You must create a mapping between the source volume and the target volume. A mapping can be created between any two volumes of the same size in a clustered system. FlashCopy consistency groups perform point-in-time copy functions for multiple volumes. You can set up FlashCopy mappings and consistency groups using the management GUI.
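One common way to make a copy appear instantaneous while hosts continue writing is copy-on-write. The sketch below illustrates that general technique with invented names; it is not a description of FlashCopy's internal implementation.

```python
# Conceptual sketch of a point-in-time copy using copy-on-write
# (an illustration of the general technique, not the product's code).
# At the instant the mapping starts, the target logically contains the
# source's data; a block is physically preserved only just before the
# source block is next overwritten.

class PointInTimeCopy:
    def __init__(self, source):
        self.source = source
        self.saved = {}                # blocks preserved at the point in time

    def write_source(self, block, data):
        if block not in self.saved:    # preserve the old data first
            self.saved[block] = self.source[block]
        self.source[block] = data

    def read_target(self, block):
        # The target sees the point-in-time image: the saved copy if the
        # source has changed, otherwise the (unchanged) source block.
        return self.saved.get(block, self.source[block])

src = {0: "alpha", 1: "beta"}
snap = PointInTimeCopy(src)
snap.write_source(0, "gamma")          # the host keeps writing to the source
print(snap.read_target(0), src[0])     # alpha gamma
```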

Metro Mirror is a Copy Service that provides a continuous, synchronous mirror of one volume to a second volume. The secondary volumes can be located in the same clustered system or in different clustered systems. The different systems can be up to 300 kilometers apart, so by using Metro Mirror you can make a copy to a location offsite or across town. Because the mirror is updated in real time, no data is lost when a failure occurs, so Metro Mirror is generally used for disaster-recovery purposes where it is important to avoid data loss.

Global Mirror is a Copy Service that is very similar to Metro Mirror. Both provide a continuous mirror of a primary volume to a secondary volume. But with Global Mirror, the copy is asynchronous. You do not have to wait for the write to the secondary volume to complete. So, for long distances, performance is improved compared to Metro Mirror. However, if a failure occurs, you might lose data. Global Mirror works well for data protection and migration when recovery sites are more than 300 kilometers away. Before you create a Metro Mirror or Global Mirror copy, you must first establish a partnership between the two clustered systems by using the management GUI.
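The trade-off between the two mirroring styles comes down to when the host write is acknowledged. The sketch below illustrates that difference in the abstract; the names are invented, and the real services handle write ordering, batching, and failures far more carefully.

```python
# Conceptual sketch of synchronous (Metro Mirror style) versus
# asynchronous (Global Mirror style) replication (illustration only).

import queue

def sync_write(primary, secondary, key, value):
    primary[key] = value
    secondary[key] = value    # wait for the remote write before returning
    return "acknowledged"     # host latency includes the full round trip

def async_write(primary, pending, key, value):
    primary[key] = value
    pending.put((key, value)) # queue the update for later transmission
    return "acknowledged"     # host is acknowledged immediately

primary, secondary = {}, {}
pending = queue.Queue()
async_write(primary, pending, "blk", "data")
# If the primary site fails now, queued writes may never reach the
# secondary -- this is the data-loss window of asynchronous mirroring.
print(secondary.get("blk"))   # None until the queue is drained
```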

With an additional license, IBM Easy Tier(R) can be included with this system and other Storwize products. Easy Tier monitors storage pools that contain a mixture of flash, enterprise SAS, and nearline SAS storage. The system automatically and nondisruptively moves data between these tiers of storage to optimize volume performance. The Easy Tier function eliminates the manual intervention of assigning highly active data on volumes to faster-responding storage and inactive data to slower-responding storage. In this dynamically tiered environment, data movement is seamless to the host application, regardless of the storage tier in which the data resides. Manual controls are available so that you can change the default behavior, such as turning off the Easy Tier function for storage pools that have more than one tier of storage.

Now that you understand the basics of how the product works, virtualization, volume creation, and advanced SAN functions, see the information center and other e-Learning modules to learn more.