A shared disk subsystem is required to make data highly available in a cluster. The OES Cluster Services software must be installed before you can mark devices as shareable, such as the devices you use for clustered pools and the device you use for the SBD (split-brain detector) during cluster configuration.
Ensure that your shared storage devices meet the following requirements:
OES Cluster Services supports the following shared disks:
Fibre Channel LUN (logical unit number) devices in a storage array
iSCSI LUN devices
SCSI disks (shared external drive arrays)
Before you configure OES Cluster Services, ensure that the shared disk system is properly set up and functional according to the manufacturer's instructions.
Prior to installation, verify that Linux recognizes all of the drives in your shared disk system by viewing a list of the devices on each server that you intend to add to the cluster. If any of the drives do not appear in the list, consult the OES documentation or the shared disk system documentation for troubleshooting information.
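For example, you can list the devices that each node recognizes with standard Linux tools, or with the NLVM command line once OES is installed; every shared drive should appear on every node:

    # List all block devices that the kernel recognizes
    lsblk

    # Or view the kernel's partition summary
    cat /proc/partitions

    # After OES is installed, NLVM can also list the devices it manages
    nlvm list devices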
Prepare the device for use in a cluster resource:
NSS Pool: For new devices, you must initialize and share the device before creating the pool, as shown in the example after this list. For an existing pool that you want to cluster-enable, use NSSMU or iManager to share the device.
All devices that contribute space to a clustered pool must be able to fail over with the pool cluster resource. You must use the device exclusively for the clustered pool; do not use space on it for other pools or for Linux volumes. A device must be marked as Shareable for Clustering before you can use it to create or expand a clustered pool.
Linux LVM volume group: For new devices, use an unpartitioned device that has been initialized. Do not mark the device as shared, because doing so creates a small partition on it; LVM uses the entire device for the volume group. For an existing volume group, do not mark the device as shared.
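The following commands are a minimal sketch of both cases; the device names (sdc, /dev/sdd) and the volume group name (vg_cluster) are placeholders for your own values:

    # NSS pool on a new device: initialize it and mark it shareable,
    # then create the pool with NSSMU or iManager
    nlvm init sdc format=gpt shared

    # Linux LVM volume group on a new device: initialize the whole,
    # unpartitioned device and create the volume group; do NOT mark
    # the device as shared
    pvcreate /dev/sdd
    vgcreate vg_cluster /dev/sdd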
If this is a new cluster, connect the shared disk system to the first server so that the SBD cluster partition can be created during the Cluster Services install. See Section 4.8.2, SBD Partitions.
If your cluster uses physically shared storage resources, you must create an SBD (split-brain detector) partition for the cluster. You can create an SBD partition in YaST as part of the first node setup, or by using the SBD Utility (sbdutil) before you add a second node to the cluster. Both the YaST new cluster setup and the SBD Utility (sbdutil) support mirroring the SBD partition.
An SBD must be created before you attempt to create storage objects like pools or volumes for file system cluster resources, and before you configure a second node in the cluster. NLVM and other NSS management tools need the SBD to detect if a node is a member of the cluster and to get exclusive locks on physically shared storage.
For information about how SBD partitions work and how to create an SBD partition for an existing cluster, see Section 9.18, Creating or Deleting Cluster SBD Partitions.
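For example, assuming a cluster named cluster1 and shared devices named CX4-LUN000 and CX4-LUN001 (all hypothetical names), you might create a mirrored SBD with sbdutil before adding the second node:

    # Create an SBD partition for cluster1, mirrored across two devices
    sbdutil -c -n cluster1 -d CX4-LUN000 -d CX4-LUN001

    # Verify that the node can find the SBD partition
    sbdutil -f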
Use the SAN storage array software to carve a LUN to use exclusively for the SBD partition. For a device with 512-byte sectors, you should have at least 20 MB of free space available; the minimum partition size is 8 MB. For a device with 4096-byte (4Kn) sectors, you should have at least 80 MB of free space available; the minimum partition size is 64 MB. Connect the LUN device to all nodes in the cluster.
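To determine which sizing guideline applies, you can check the device's logical sector size with a standard Linux tool (/dev/sdc is a placeholder):

    # Report the logical sector size in bytes (512 or 4096)
    blockdev --getss /dev/sdc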
For device fault tolerance, you can mirror the SBD partition by specifying two devices when you create the SBD. Use the SAN storage array software to carve a second LUN of the same size to use as the mirror. Connect the LUN device to all nodes in the cluster.
The device you use to create the SBD partition can be a software RAID device or a hardware RAID device.
Before you use YaST to set up a cluster, you must initialize each SAN device that you created for the SBD, and mark each one as Shareable for Clustering.
IMPORTANT: The OES Cluster Services software must already be installed before you can mark the devices as shareable.
After you install OES Cluster Services, but before you configure the cluster, you can initialize a device and set it to a shared state by using NSSMU, the Storage plug-in for iManager, OES Linux Volume Manager (NLVM) commands, or an NSS utility called ncsinit.
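For example, the following NLVM commands illustrate both paths; sdc is a placeholder for your SBD device:

    # Initialize a new device with a GPT partition table and mark it shareable
    nlvm init sdc format=gpt shared

    # Or mark an already-initialized device as shareable
    nlvm share sdc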
If you configure a cluster before you create an SBD, NSS tools cannot detect if the node is a member of the cluster and cannot get exclusive locks to the physically shared storage. In this state, you must use the -s NLVM option with the nlvm init command to override the shared locking requirement and force NLVM to execute the command. To minimize the risk of possible corruption, you are responsible for ensuring that you have exclusive access to the shared storage at this time.
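In that situation, the override might look like the following sketch (the device name is again a placeholder); ensure that no other node is accessing the device when you run it:

    # Force NLVM to run without the shared lock it would normally require
    nlvm -s init sdc format=gpt shared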
When you mark the device as Shareable for Clustering, share information is added to the disk in a free-space partition that is about 4 MB in size. This space becomes part of the SBD partition.
When you configure a new cluster, you can specify how much free space to use for the SBD, or you can specify the Use Maximum Size option to use the entire device. If you specify a second device to use as a mirror for the SBD, the same amount of space is used on it. If you use the maximum size and the mirror device is larger than the SBD device, you cannot use the excess free space on the mirror for other purposes.
Because an SBD partition ends on a cylinder boundary, the partition size might be slightly smaller than the size you specify. When you want to use an entire device for the SBD partition, specify the Use Maximum Size option and let the software determine the size of the partition.
If you are using iSCSI for shared disk system access, ensure that you have installed and configured the iSCSI initiators and targets (LUNs) and that they are working properly. The iSCSI target devices must be mounted on the server before the cluster resources are brought online.
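For example, with the open-iscsi tools, discovery and login might look like the following; the portal address 192.168.1.10 is a placeholder:

    # Discover the targets that the iSCSI portal offers
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10

    # Log in to the discovered targets so the LUNs appear as local devices
    iscsiadm -m node --login

    # Configure the sessions to reconnect automatically at boot
    iscsiadm -m node --op update -n node.startup -v automatic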
We recommend that you use hardware RAID in the shared disk subsystem to add fault tolerance.
Consider the following when using software RAIDs:
NSS software RAID is supported for shared disks for NSS pools. Any RAID0/5 device that is used for a clustered pool must contribute space exclusively to that pool; it cannot be used for other pools. This allows the device to fail over between nodes with the pool cluster resource. Before you use a RAID0/5 device to create or expand a clustered pool, ensure that its component devices are marked as Shareable for Clustering.
Linux software RAID can be used in shared disk configurations that do not require the RAID to be concurrently active on multiple nodes. Linux software RAID cannot be used underneath clustered file systems (such as OCFS2, GFS, and CXFS) because OES Cluster Services does not support concurrent activation.
WARNING: Activating Linux software RAID devices concurrently on multiple nodes can result in data corruption or inconsistencies.