Flash drive configuration details
Apply these configuration details for SAN Volume Controller flash drives.
Optional flash drives provide high-speed MDisk capability for SAN Volume Controller 2145-CF8 and SAN Volume Controller 2145-CG8 nodes. Each node supports up to four flash drives. Flash drives are local drives that are not accessible over the SAN fabric.
Flash drive configuration details for nodes, I/O groups, and clustered systems
- SAN Volume Controller 2145-CG8 or 2145-CF8 nodes that contain flash drives can coexist in a single system with any other SAN Volume Controller 2145-CG8 or 2145-CF8 nodes.
- An I/O group that has SAN Volume Controller 2145-CG8 or 2145-CF8 nodes that are mixed with SAN Volume Controller 2145-DH8 nodes is supported for migration purposes only. Drives that are contained in nodes that were previously mirrored are not mirrored in such mixed I/O groups and have no redundancy from mirroring. Ensure that important data is protected from drive failure by using another mechanism during data migration or by migrating the data to another I/O group before you update.
- Quorum function is not supported on flash drives within SAN Volume Controller nodes.
Configuration 1: Recommended configuration for storage pools, arrays, and volumes
The following configuration details describe the recommended process for configuring SAN Volume Controller flash drives.
Storage pools and arrays:
- Create either a RAID 1 or RAID 10 array, where the data is mirrored between flash drives on two nodes in the same I/O group. The management GUI does this mirroring automatically if you select RAID 1 or RAID 10 presets.
- Create a flash drive storage pool for high-performing disks. As an alternative, you can use the Easy Tier function to add the flash drive array to a hybrid storage pool that also contains hard disk drive MDisks.
For optimal performance, use only flash drives from a single I/O group in a single storage pool.
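The steps above can be sketched with the SAN Volume Controller CLI. This is a minimal example only; the pool name, extent size, and drive IDs are hypothetical and must be adapted to your system.

```shell
# Create an empty storage pool for the flash drive array
# (pool name and extent size are example values).
mkmdiskgrp -name flash_pool -ext 256

# Create a RAID 10 array from the flash drives. Here drive IDs 0:1 are
# assumed to be in one node and 2:3 in the partner node of the same
# I/O group; list the actual IDs with "lsdrive".
mkarray -level raid10 -drive 0:1:2:3 flash_pool
```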
Volumes:
For optimal performance, follow these guidelines for volumes:
- When you create a volume in a storage pool that contains flash drive arrays that are built from drives in a particular I/O group, create the volume in that same I/O group.
- If a storage pool contains flash drives from a single I/O group, create the volumes in that same I/O group.
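As a sketch of the guideline above, the following CLI command creates a volume directly in the I/O group that owns the flash drives. The pool name, I/O group, size, and volume name are example values.

```shell
# Create a volume in the flash drive pool, assigning it to the same
# I/O group (io_grp0) as the nodes that contain the flash drives.
mkvdisk -mdiskgrp flash_pool -iogrp io_grp0 -size 100 -unit gb \
        -vtype striped -name flash_vol0
```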
Configuration 2: Alternative configuration for storage pools, arrays, and volumes
The following configuration is not recommended, but it is similar to the flash drive configuration process from earlier releases.
Storage pools and arrays:
For each node that contains flash drives, follow these steps:
- Create one storage pool.
- Create one RAID 0 array in this storage pool that contains all the flash drives in the node. Note: If required, you can create more than one array and storage pool per node.
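The per-node steps above might look as follows on the CLI. This is a sketch only; the pool names, extent size, and drive IDs are hypothetical.

```shell
# For each node that contains flash drives, create one pool and one
# RAID 0 array built from that node's drives.

# Node 1 (drive IDs 0:1:2:3 are assumed to belong to node 1):
mkmdiskgrp -name node1_flash_pool -ext 256
mkarray -level raid0 -drive 0:1:2:3 node1_flash_pool

# Node 2 (drive IDs 4:5:6:7 are assumed to belong to node 2):
mkmdiskgrp -name node2_flash_pool -ext 256
mkarray -level raid0 -drive 4:5:6:7 node2_flash_pool
```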
Volumes:
- Volumes must be mirrored in one of the following two ways:
- Between two storage pools that contain flash drives from two nodes in the same I/O group
- Between one flash drive storage pool and one regular storage pool
- For optimal performance, volumes must be in the same I/O group as the nodes that contain the flash drives that are being used.
- For optimal performance, place the primary copy of a volume in the storage pool that contains flash drives from the volume's preferred node. For example, if the preferred node of a volume is node x, the primary copy belongs in the storage pool that contains the flash drives from node x.
- Set the synchronization rate so that the volume copies resynchronize quickly after synchronization is lost. Synchronization is lost if one of the nodes goes offline, either during a concurrent code update or because of maintenance. During a code update, synchronization must be restored within 30 minutes or the update stalls. While the flash drive volume copies are not synchronized, access to the volume depends on the single node that contains the flash drive storage that is associated with the synchronized volume copy. This dependency is different from volume copies on external storage systems. The default synchronization rate is typically too low for flash drive volume mirrors; set it to 80 or above.
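A hedged sketch of creating such a mirrored volume and raising its synchronization rate follows; the pool names, I/O group, size, and volume name are hypothetical.

```shell
# Create a mirrored volume with one copy in each node's flash drive
# pool, within the I/O group that contains those nodes.
mkvdisk -mdiskgrp node1_flash_pool:node2_flash_pool -iogrp io_grp0 \
        -size 100 -unit gb -copies 2 -name mirrored_vol0

# Raise the synchronization rate (80 or above) so the copies
# resynchronize quickly after one node returns from being offline.
chvdisk -syncrate 80 mirrored_vol0
```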
- To prevent volume-mirror copy suspension during code update, set the volume mirrorwritepriority field to redundancy before the code update starts. After the code update is complete, the mirrorwritepriority field can be changed back to its previous value.
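On the CLI, the mirrorwritepriority change might be made as follows. The volume name is hypothetical, and the second command assumes the previous setting was latency.

```shell
# Before the code update starts: favor redundancy so the mirror copies
# are kept in step rather than allowed to fall out of sync.
chvdisk -mirrorwritepriority redundancy mirrored_vol0

# After the code update completes: restore the previous value
# (latency is assumed here).
chvdisk -mirrorwritepriority latency mirrored_vol0
```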
To increase the amount of time between the updates of the two nodes that contain the volume copies, so that the copies can resynchronize before the second node goes offline, consider updating the software manually.