There are several failure types associated with a business continuity cluster. Understanding the failure types and knowing how to respond to each can help you more quickly recover a cluster. Some of the failure types and responses differ depending on whether you have implemented storage-based mirroring or host-based mirroring. Promoting or demoting LUNs is sometimes necessary when responding to certain types of failures.
NOTE: The terms promote and demote are used here to describe changing the state of a LUN between primary and secondary. However, your storage vendor documentation might use different terms, such as mask and unmask.
Storage-based mirroring failure types and responses are described in the following sections:
This type of failure can be transient (temporary) or long-term, and it requires an initial response followed by a longer-term response that depends on how long the failure lasts. The initial response is to BCC migrate the resources to a peer cluster; then work to restore the failed cluster to normal operations. The long-term response is total recovery from the failure.
Promote the secondary LUN to primary. Cluster resources load and become primary on the peer cluster.
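After the resources come up on the peer cluster, you can confirm this from the server console of a node in that cluster. The following is only a sketch, assuming the standard Novell Cluster Services console commands (verify the exact commands available in your version):

   cluster view
   cluster resources

The cluster view command shows the current cluster membership, and cluster resources lists the cluster resources so that you can confirm that the BCC-migrated resources are running in the peer cluster.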
Prior to bringing up the original cluster servers, ensure that the storage system and SAN interconnect are in a state in which the cluster resources cannot come online and cause a divergence in data. Divergence in data occurs when connectivity between storage systems has been lost and both clusters assert that they have ownership of their respective disks. Ensure that the former primary storage system is demoted to secondary before bringing cluster servers back up. If the former primary storage system has not been demoted to secondary, you might need to demote it manually. Consult your storage hardware documentation for instructions on demoting and promoting LUNs. You can use the cluster resetresources console command to change resource states to offline and secondary.
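For example, when you bring a server in the former primary cluster back up, you might reset and then verify the resource states from its server console. This is only a sketch, assuming the standard console command syntax for your version of Novell Cluster Services:

   cluster resetresources
   cluster resources

The cluster resetresources command changes the resource states to offline and secondary, and cluster resources lets you verify the resulting states before you continue with recovery.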
Bring the primary storage system back up. Follow your storage vendor’s instructions to remirror it. Promote the former primary storage system back to primary. Then bring up the former primary cluster servers, and fail back the cluster resources.
Secondary clusters are not currently running the resource, so no additional response is necessary for this failure other than recovering the secondary cluster. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
Secondary clusters are not currently running the resource. Bring the secondary storage system back up. Follow your storage vendor’s instructions to remirror. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
When the primary storage system fails, the primary cluster also fails. BCC migrate the resources to a peer cluster. Bring the primary storage system back up. Follow your storage vendor’s instructions to remirror. Promote the former primary storage system back to primary. You might need to demote the LUNs and resources to secondary on the primary storage before bringing them back up. You can use the cluster resetresources console command to change resource states to offline and secondary. Bring up the former primary cluster servers and fail back the resources.
Secondary clusters are not currently running the resource. When the secondary storage system fails, the secondary cluster also fails. Bring the secondary storage back up. Follow your storage vendor’s instructions to remirror. Then bring the secondary cluster back up. When you bring the secondary storage system and cluster back up, resources are still in a secondary state.
Recover the connection. If divergence of the storage systems occurred, remirror from the good side to the bad side.
User connectivity might be lost to a given service or its data, depending on where the resources are running and whether multiple clusters run the same service. Users might not be able to access the servers in the cluster that they usually connect to, but they might be able to access servers in a peer cluster. If users are co-located with the cluster that runs the service or stores the data, nothing additional is required. Otherwise, an error is displayed, and users must wait for connectivity to resume.
If you have configured the auto-failover feature, see Section C.0, Setting Up Auto-Failover.
Host-based mirroring failure types and responses are described in the following sections:
The initial response is to BCC migrate the resources to a peer cluster. Next, work to restore the failed cluster to normal operations. The long-term response is total recovery from the failure. Do not disable MSAP (Multiple Server Activation Prevention), which is enabled by default. When the former primary cluster is recovered, bring up the former primary cluster servers, and fail back the cluster resources.
Bring up your primary storage system before bringing up your cluster servers. Then run the Cluster Scan For New Devices command from a secondary cluster server, and ensure that remirroring completes. Then bring up the former primary cluster servers, and fail back the cluster resources.
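For example, the scan might be run from the server console of a secondary cluster server after the primary storage system is back up. This is only a sketch, assuming the standard console command; use the management tools for your version (such as NSSMU) to confirm that remirroring has finished:

   cluster scan for new devices

The scan lets the cluster servers recognize the restored devices so that remirroring can proceed; wait until the mirrors report as synchronized before bringing the former primary cluster servers back up.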
Secondary clusters are not currently running the resource, so no additional response is necessary for this failure other than recovering the secondary cluster. When you bring the secondary cluster back up, the storage system is still secondary to the primary cluster.
Secondary clusters are not currently running the resource. Bring up your secondary storage system before bringing up your cluster servers. Then run the Cluster Scan For New Devices command on a primary cluster server to ensure that remirroring takes place. When you bring the secondary cluster back up, the storage system is still secondary to the primary cluster.
If your primary storage system fails, all nodes in your primary cluster also fail. BCC migrate the resources to a peer cluster. Bring the primary storage system back up. Bring up your primary cluster servers. Ensure that remirroring completes before failing back resources to the former primary cluster.
Secondary clusters are not currently running the resource. When the secondary storage system fails, the secondary cluster also fails. Bring the secondary storage back up. Bring up your secondary cluster servers. Ensure that remirroring completes on the secondary storage system. When you bring the secondary storage system and cluster back up, resources are still in a secondary state.
Recover the connection. If divergence of the storage systems occurred, remirror from the good side to the bad side.
User connectivity might be lost to a given service or its data, depending on where the resources are running and whether multiple clusters run the same service. Users might not be able to access the servers in the cluster that they usually connect to, but they might be able to access servers in a peer cluster. If users are co-located with the cluster that runs the service or stores the data, nothing additional is required. Otherwise, an error is displayed, and users must wait for connectivity to resume.
If you have configured the auto-failover feature, see Section C.0, Setting Up Auto-Failover.