A quorum disk is an MDisk or a managed drive that contains a
reserved area that is used exclusively for system management. A system automatically assigns quorum
disk candidates. When you add new storage to a system or remove existing storage, however, it is a
good practice to review the quorum disk assignments.
It is possible for a system to split into two groups where each group contains half
the original number of nodes in the system. A quorum disk determines which group of nodes stops
operating and processing I/O requests. In this tie-break situation, the first group of nodes that
accesses the quorum disk is marked as the owner of the quorum disk and as a result continues to
operate as the system, handling all I/O requests. If the other group of nodes cannot access the
quorum disk, or finds that the quorum disk is owned by the other group of nodes, it stops operating
as the system and does not handle I/O requests.
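The tie-break described above behaves like an atomic first-writer-wins lock: only one group can claim the reserved area first. The following sketch is purely conceptual, not part of any product CLI, and uses the atomicity of mkdir to stand in for claiming the quorum disk:

```shell
#!/bin/sh
# Conceptual illustration only: mkdir is atomic, so exactly one caller
# can create the lock directory, just as exactly one node group can
# claim ownership of the quorum disk in a tie-break.
lockdir="$(mktemp -d)/quorum"

claim_quorum() {  # $1 = node group name
  if mkdir "$lockdir" 2>/dev/null; then
    echo "$1 owns the quorum disk and continues as the system"
  else
    echo "$1 found the quorum disk owned and stops handling I/O"
  fi
}

claim_quorum "group-A"   # first group to reach the quorum disk wins
claim_quorum "group-B"   # second group finds it owned and stops
```

Whichever group runs the claim first keeps operating; the order of arrival, not the identity of the group, decides the outcome.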
A system can have only one active quorum disk, which is used to resolve a tie-break
situation. However, the system uses three quorum disks to record a backup of system configuration
data to be used in the event of a disaster. The system automatically selects one active quorum disk
from these three disks. The active quorum disk can be specified by using the
chquorum command-line interface (CLI) command with the
active parameter. To view the current quorum disk status, use the
lsquorum command.
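As a sketch of typical usage (the quorum index shown is illustrative; confirm the exact syntax in the CLI reference for your product release):

```
# Show the quorum disk candidates and which one is currently active
lsquorum

# Example only: make the candidate at quorum index 2 the active quorum disk
chquorum -active 2
```

Running lsquorum again afterward confirms that the active flag moved to the chosen index.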
The other quorum disk
candidates provide redundancy if the active quorum disk fails before a system is partitioned. To
avoid the possibility of losing all the quorum disk candidates with a single failure, assign quorum
disk candidates on multiple storage systems.
Note: Mirrored volumes can be taken offline if no quorum disk is available, because the
synchronization status for mirrored volumes is recorded on the quorum disk.
When you change the managed disks that are assigned as quorum
candidate disks, follow these general guidelines:
- When possible, aim to distribute the quorum candidate disks so that each MDisk is provided by a
different storage system. For information about which storage systems are supported for quorum disk
use, refer to the supported hardware list.
- Before you change a quorum candidate disk, ensure that the status of the managed disk that is
being assigned as a quorum candidate disk is reported as online. Also,
ensure that it has a capacity of 512 MB or larger.
- Use smaller-capacity MDisks or drives as the quorum devices to significantly reduce the time
that is needed to run a recover system procedure (also known as Tier 3 or T3 recovery), if one is
ever required.
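The online-status and capacity checks in the guidelines above can be made from the CLI before you reassign a quorum candidate; the MDisk ID 5 here is an example value only:

```
# Example only: show the detailed view of candidate MDisk 5 and review
# its status (must be online) and capacity (must be 512 MB or larger)
lsmdisk 5
```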
Quorum MDisks or drives in HyperSwap or stretched system configurations
To provide protection against failures that affect an entire location (for example, a power
failure), you can use volume mirroring with a configuration that splits a single system between two
physical locations. For more information, see HyperSwap® configuration details or
stretched system configuration details. For detailed guidance about HyperSwap and stretched system
configuration for high-availability purposes, contact your service representative.
If you configure a HyperSwap or
stretched system with the enhanced stretched system functions, the system automatically selects
quorum disks that are placed in each of the three sites. If you are not using the enhanced stretched
system configuration functions, then assign quorum disks manually as described here.
Generally, when the nodes in a system are split among sites, configure the
system this way:
- Site 1: Half of system nodes + one quorum disk candidate
- Site 2: Half of system nodes + one quorum disk candidate
- Site 3: Active quorum disk
This configuration ensures that a quorum disk is always available,
even after a single-site failure.
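A quick way to check this layout is sketched here with lsquorum; the output columns vary by product release:

```
# Verify that one quorum disk candidate exists at each of site 1 and
# site 2, and that the active quorum disk is at site 3
lsquorum
```

If the active quorum disk is not at site 3, it can be moved there with the chquorum command and the active parameter.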
The following scenarios describe examples that result in changes to the active quorum disk:
- Scenario 1:
- Site 3 is either powered off or connectivity to the site is broken.
- If the topology is standard, the system selects a quorum disk candidate at site 2 to become the
active quorum disk. If the topology is HyperSwap or stretched, the system operates without any
active quorum disk.
- Site 3 is either powered on or connectivity to the site is restored.
- Assuming that the system was correctly configured initially, the system automatically recovers
the configuration when power or connectivity is restored.
- Scenario 2:
- The storage system that is hosting the
preferred quorum disk at site 3 is removed from the configuration.
- If possible, the system automatically configures a new quorum disk candidate.
- In HyperSwap or stretched topology, the system selects a new quorum disk only at site 3. In a
standard topology, the system selects a quorum disk candidate at site 1 or site 2 to become the
active quorum disk.
- A new storage system is added to site 3.
- In a standard topology, the administrator must reassign all three quorum disks to ensure that
the active quorum disk is again at site 3. In HyperSwap or stretched topology, the system
automatically assigns the new active quorum disk when the storage system is installed and the
site setting is configured.
Fibre Channel over IP usage
Fibre Channel over IP (FCIP) routers can be used for quorum disk connections
under the following circumstances:
- The FCIP router device is supported for
remote mirroring (Metro
Mirror or Global
Mirror).
- The maximum round-trip delay must not exceed 80 ms, which means 40 ms in each direction.
- A minimum bandwidth of 2 megabytes per second is guaranteed for node-to-quorum
traffic.
Note:
- To avoid fabric topology changes in the case of IP errors, it is good practice to configure
FCIP links so that they do not carry inter-switch links (ISLs).
- Connections that use iSCSI are not supported.
Usage of wavelength division multiplexing devices that do not require electrical power
Passive wavelength division multiplexing (WDM) devices can be used for quorum disk connections.
These connections rely on SFP transceivers with different wavelengths (referred to as colored
SFP transceivers) for fiber sharing. The following requirements apply when you use these types of
connections:
- The WDM vendor must support the colored SFP transceivers for usage in the WDM
device.
- The Fibre Channel switch vendor must support the colored SFP transceivers for ISL.
- The WDM device is supported for use with Metro Mirror, Global Mirror, or HyperSwap functions.
- The SFP transceivers must comply
with the SFP/SFP+ power and heat specifications.
Note: To purchase colored SFP transceivers for passive WDM, contact
your WDM vendor.