You must configure the cluster software to register each cluster node as a member of the cluster. As part of this configuration, you can also set up fencing and Shoot The Other Node In The Head (STONITH) resources to ensure cluster consistency.
IMPORTANT: For SLES 12 SP5 and later, use the systemctl command to manage the cluster services. For example, instead of the /etc/rc.d/openais start command, use the systemctl start pacemaker.service command.
Use the following procedure for cluster configuration:
For this solution, you must use private IP addresses for internal cluster communications and use unicast to minimize the need to request a multicast address from a network administrator. You must also use an iSCSI Target, configured on the same SLES virtual machine that hosts the shared storage, to serve as a Split Brain Detection (SBD) device for fencing purposes.
SBD Setup
Perform the following steps to expose an iSCSI Target for the SBD device on the server at IP address 10.0.0.3 (storage03).
Connect to storage03 and start a console session. Run the following command to create a blank file of the desired size:
dd if=/dev/zero of=/sbd count=<number of blocks> bs=<block size>
For example, run the following command to create a 1MB file filled with zeros copied from the /dev/zero pseudo-device:
dd if=/dev/zero of=/sbd count=1024 bs=1024
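You can verify that the file was created with the expected size; for example:
ls -lh /sbd
The output should report a file of approximately 1.0M.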
Run YaST from the command line (/sbin/yast) or from the graphical user interface.
Select Network Services > iSCSI Target.
Click Targets and select the existing target.
Select Edit. The UI will present a list of LUNs (drives) that are available.
Select Add to add a new LUN.
Leave the LUN number as 2. Browse in the Path dialog and select the /sbd file that you created.
Leave the other options at their defaults, select OK, and then select Next. Click Next again to accept the default authentication options.
Click Finish to exit the configuration. Restart the services if needed. Exit YaST.
NOTE: The following steps require that each cluster node be able to resolve the hostnames of all other cluster nodes (the file synchronization service csync2 will fail if this is not the case). If DNS is not set up or available, add entries for each host to the /etc/hosts file that list each IP address and its hostname (as reported by the hostname command). Also, ensure that you do not assign a hostname to a loopback IP address.
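For example, using the sample addresses in this solution, the /etc/hosts file on each node might contain entries similar to the following (substitute your own addresses and hostnames):
10.0.0.3     storage03
172.16.0.1   node1
172.16.0.2   node2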
Node Configuration
Connect to a cluster node (node1) and open a console:
Run YaST.
Open Network Services > iSCSI Initiator.
Select Connected Targets, then the iSCSI Target you configured above.
Select the Log Out option and log out of the Target.
Switch to the Discovered Targets tab, select the Target, and log back in to refresh the list of devices (leave the automatic startup option and deselect No Authentication).
Select OK to exit the iSCSI Initiator tool.
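(Optional) Assuming the open-iscsi command-line tools are installed, you can also confirm the connection from the console by listing the active iSCSI sessions:
iscsiadm -m session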
Open System > Partitioner and identify the SBD device as the 1MB IET-VIRTUAL-DISK. It will be listed as /dev/sdd or similar; note the device name.
Exit YaST.
Execute the command ls -l /dev/disk/by-id/ and note the device ID that is linked to the device name you located above.
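The output contains symbolic links similar to the following; the device ID shown here is only an illustration, and yours will differ:
lrwxrwxrwx 1 root root 9 <date> scsi-1IET_00010002 -> ../../sdd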
(Conditional) Execute one of the following commands:
If you are using SLES 12 SP5 or later:
ha-cluster-init
When prompted for the network address to bind to, specify the external NIC IP address (172.16.0.1).
Accept the default multicast address and port; these settings are overridden later when you switch the cluster to unicast.
Enter y to enable SBD, then specify /dev/disk/by-id/<device id>, where <device id> is the ID you located above (you can use Tab to auto-complete the path).
(Conditional) Enter N when prompted with the following:
Do you wish to configure an administration IP? [y/N]
To configure an administration IP address, provide the virtual IP address during Resource Configuration.
(Conditional) Enter N when prompted with the following:
Do you want to configure QDevice? [y/N]
Complete the wizard and make sure no errors are reported.
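(Optional) To verify that the SBD device was initialized, you can dump its header metadata, assuming the sbd tool is installed; for example:
sbd -d /dev/disk/by-id/<device id> dump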
Start YaST.
Select High Availability > Cluster (or just Cluster on some systems).
In the box at left, ensure Communication Channels is selected.
Tab over to the top line of the configuration, and change the udp selection to udpu (this disables multicast and selects unicast).
Select Add to specify a Member Address for this node (172.16.0.1), then repeat and add the other cluster node(s): 172.16.0.2.
(Conditional) If you have not enabled authentication, select Security in the left panel and clear Enable Security Auth.
Select Finish to complete the configuration.
Exit YaST.
Run the systemctl restart pacemaker.service command to restart the cluster services with the new unicast transport.
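After the restart, you can confirm the unicast settings by inspecting /etc/corosync/corosync.conf. It should contain entries similar to the following; the exact layout and additional options vary by corosync version:
totem {
        transport: udpu
        # other totem options omitted
}
nodelist {
        node {
                ring0_addr: 172.16.0.1
        }
        node {
                ring0_addr: 172.16.0.2
        }
}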
Connect to each additional cluster node (node2) and open a console:
Run YaST.
Open Network Services > iSCSI Initiator.
Select Connected Targets, then the iSCSI Target you configured above.
Select the Log Out option and log out of the Target.
Switch to the Discovered Targets tab, select the Target, and log back in to refresh the list of devices (leave the automatic startup option and deselect No Authentication).
Select OK to exit the iSCSI Initiator tool.
(Conditional) Execute one of the following commands:
If you are using SLES 12 SP5 or later:
ha-cluster-join
Enter the IP address of the first cluster node.
(Conditional) If the cluster does not start correctly, perform the following steps:
Run the command crm status to check if the nodes are joined. If the nodes are not joined, restart all the nodes in the cluster.
Manually copy the /etc/corosync/corosync.conf file from node1 to node2, run the csync2 -x -v command on node1, or manually set up the cluster on node2 through YaST.
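For example, assuming root SSH access between the nodes, you can copy the file manually with a command similar to the following:
scp /etc/corosync/corosync.conf root@node2:/etc/corosync/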
(Conditional) If the csync2 -x -v command you ran in the previous step fails to synchronize all the files, perform the following procedure:
Clear the csync2 database in the /var/lib/csync2 directory on all the nodes (an example command is shown after this procedure).
On all the nodes, update the csync2 database to match the filesystem without marking anything as needing to be synchronized to other servers:
csync2 -cIr /
On the active node, perform the following:
Find all the differences between active and passive nodes, and mark those differences for synchronization:
csync2 -TUXI
Reset the database to force the active node to override any conflicts:
csync2 -fr /
Start synchronization to all the other nodes:
csync2 -xr /
On all the nodes, verify that all the files are synchronized:
csync2 -T
This command will list only the files that are not synchronized.
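As an example of clearing the csync2 database (the first step in this procedure), you can remove the database files on each node; the file names vary by csync2 version, so adjust the pattern as needed:
rm -f /var/lib/csync2/*.db3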
Run the following command on node2:
For SLES 12 SP5 and later:
systemctl start pacemaker.service
(Conditional) If the xinetd service does not properly add the new csync2 service, the script does not function correctly. The xinetd service is required so that the other node can synchronize the cluster configuration files down to this node. If you see errors such as csync2 run failed, you might have this problem.
To resolve this issue, execute the kill -HUP `cat /var/run/xinetd.init.pid` command, and then re-run the sleha-join script.
Run crm_mon on each cluster node to verify that the cluster is running properly. You can also use Hawk, the web console, to verify the cluster. The default login name is hacluster and the password is linux.
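Hawk typically listens on port 7630, so you can usually reach it at a URL such as https://172.16.0.1:7630 (adjust the address and port for your environment).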
(Conditional) Depending on your environment, perform the following tasks to modify additional parameters:
To ensure that a single node failure in your two-node cluster does not unexpectedly stop the entire cluster, set the global cluster option no-quorum-policy to ignore:
crm configure property no-quorum-policy=ignore
NOTE: If your cluster contains more than two nodes, do not set this option.
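If you set this option, you can confirm the property afterward; for example:
crm configure show | grep no-quorum-policy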