8.3 Installation and Configuration

8.3.1 Prerequisites

8.3.2 Verifying the OES Cluster Services Setup

To verify that OES Cluster Services is set up properly:

  1. Log in to iManager.

  2. In Roles and Tasks, select Clusters > My Clusters, then select the cluster.

    If the cluster does not appear in your personalized list of clusters to manage, you can add it. Click Add, browse and select the cluster, then click OK. Wait for the cluster to appear in the list and report its status, then select the cluster.

  3. Click the Cluster Options tab.

  4. Select the check box next to the Cluster resource object that you created for the shared NSS pool, then click the Details link.

  5. Click the Preferred Nodes tab to view a list of the nodes that are assigned as the preferred nodes for failover and migration.

After executing these steps, you can mount the shared volume on the preferred nodes by using the Client for Open Enterprise Server. Mounting the shared volume on a preferred node creates the directories and lease files, and also assigns rights to the shared volume.
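
You can also confirm from a terminal on a cluster node that the cluster and its resources are up, assuming the OES Cluster Services command line tools are installed:

cluster status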

8.3.3 Installing and Configuring a Cluster

  1. Ensure that the association between the DHCP Server object and the DHCP Service object is set by using the Java Management Console.

  2. Use the Java Management Console for DHCP to create a DHCP Subnet object and a DHCP Pool object. For details, see Section 7.1.6, Subnet Management, and Section 7.1.7, Pool Management.

  3. By default, the DHCP server uses the dhcpd user that is created in the local system during the installation process. If you want to use another user, create that user by using the Security and Users > User Management option in YaST.

    After creating the user, update the /etc/sysconfig/dhcpd file and set the value of the DHCPD_RUN_AS variable to the new user.
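
    For example, if the new user is named dhcpuser (a hypothetical name), the relevant line in /etc/sysconfig/dhcpd would look like this:

    DHCPD_RUN_AS="dhcpuser"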

  4. Click the Users > Create User task in iManager to open the Create User window. Specify the details, then click OK to create the dhcpd user (or the new user) in eDirectory.

  5. The user created in Step 4 needs to be LUM-enabled. To do this, click the Linux User Management > Enable Users for Linux task. This opens the Enable Users for Linux window. Search for and select the user created in Step 4, then click OK.

    1. Make sure that every user belongs to a primary group. To add a user to a group, search for an Existing eDirectory Group object.

    2. Select the DHCPGroup object from the list.

    3. Select the workstations to which the Linux-enabled user should have access.

    4. Click Next to confirm the selection.

      The user is now Linux-enabled, included in the DHCP Group, and granted access to cluster nodes.

    5. Update the UID of the user created above to the dhcpd user's default UID: select the Modify User task in iManager, select the user, go to the user's Linux Profile tab, and modify the User ID to the dhcpd user's default UID.
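
      To find the local dhcpd user's default UID on the node, you can run, for example:

      id -u dhcpd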

  6. Mount the shared volume on one of the nodes in the cluster.

  7. Execute the following command at the command prompt:

    /opt/novell/dhcp/bin/ncs_dir.sh <MountPath> <FQDN of Username with tree-name>

    The MountPath parameter indicates the target directory in the volume where DHCP-specific directories are created.

    For example: /opt/novell/dhcp/bin/ncs_dir.sh /media/nss/DHCPVOL/ cn=dhcpd.o=novell.T=MyTree

    When the script is executed, it creates the following folders:

    • /media/nss/DHCPVOL/etc

    • /media/nss/DHCPVOL/var/lib/dhcp/db

    The script also assigns the appropriate permissions to these directories.
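
    You can verify that the directories were created and check their ownership with, for example:

    ls -ld /media/nss/DHCPVOL/etc /media/nss/DHCPVOL/var/lib/dhcp/db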

  8. Copy the /etc/dhcpd.conf file to the /media/nss/DHCPVOL/etc directory and modify the LDAP attributes as required.

    For example:

    ldap-server "192.168.0.1";
    ldap-dhcp-server-cn "DHCP_acme";

    Set the ldap-server attribute with the shared NSS pool IP Address.

    Set the ldap-dhcp-server-cn attribute with the name of the DHCP server object that you want to use.
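
    For example, to copy the file to the shared volume:

    cp /etc/dhcpd.conf /media/nss/DHCPVOL/etc/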

  9. Enable hard links on the shared volume on which the dhcpd.conf and dhcpd.leases files are hosted (for example, DHCPVOL).

    Invoke nsscon in the Linux terminal and execute the following command:

    /hardlinks=VolName
  10. To ensure that hard links are enabled, execute the following commands from within the shared volume:

    touch testfile.txt
    ln testfile.txt testlink.txt
    unlink testlink.txt
    rm testfile.txt

    If the hard link was successfully enabled, these commands execute without errors.

  11. Open a terminal on the node where the shared volume is mounted and execute the following command at the prompt:

    dhcpd -cf /media/nss/DHCPVOL/etc/dhcpd.conf -lf /media/nss/DHCPVOL/var/lib/dhcp/db/dhcpd.leases

    This step ensures that the DHCP server can work on a cluster setup with shared volumes.
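
    To confirm that the daemon is running and wrote its PID file to the expected path (the same path used in the stop command below), you can run, for example:

    ps -C dhcpd -o pid,args
    cat /var/lib/dhcp/var/run/dhcpd.pid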

    Stop the server by executing the following command at the prompt:

    killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd

  12. In iManager, select Clusters > My Clusters, select the cluster, then select the Cluster Options tab.

    Select the DHCP Cluster resource that was created as part of Prerequisites and click Details. The Cluster Pool Properties are displayed. Click the Scripts tab. You can now view or edit the load or unload scripts.

    If you modify a script, click Apply to save your changes before you leave the page. Changes do not take effect until you take the resource offline and bring it online again.

    1. Click Load Script.

    2. Ensure that the DHCP load script is the same as specified in DHCP Load Script. Click Apply if you make changes.

    3. Click Unload Script.

    4. Ensure that the DHCP unload script is the same as specified in DHCP Unload Script. Click Apply if you make changes.

    5. Click Monitor Script.

    6. Ensure that the DHCP monitor script is the same as specified in Configuring the DHCP Monitor Script. Click Apply if you make changes.

    7. Click OK to save the changes.

  13. Set the DHCP resource online. Select the Cluster Manager tab, select the check box next to the DHCP resource, then click Online.
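
    Alternatively, assuming the OES Cluster Services command line tools are available, you can bring the resource online from a terminal on a cluster node:

    cluster online <dhcp_resource_name>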

8.3.4 DHCP Load, Unload, and Monitor Scripts

DHCP Load Script

The load script contains commands to start the DHCP service. The load script appears similar to the following example:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=DHCPPOOL
exit_on_error ncpcon mount DHCPVOL=254
exit_on_error add_secondary_ipaddress 10.10.2.1
exit_on_error ncpcon bind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
exit 0

Configuring the DHCP Load Script

  1. Add the following line to the script before exit 0 to load DHCP:

    exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m <MOUNT_POINT>

    For example, <MOUNT_POINT> might be /media/nss/DHCPVOL.
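
    With the example values used earlier in this section (pool DHCPPOOL, volume DHCPVOL, and mount point /media/nss/DHCPVOL), the completed load script would look similar to the following:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    exit_on_error nss /poolact=DHCPPOOL
    exit_on_error ncpcon mount DHCPVOL=254
    exit_on_error add_secondary_ipaddress 10.10.2.1
    exit_on_error ncpcon bind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
    exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m /media/nss/DHCPVOL
    exit 0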

  2. Click Next and continue with the unload script configuration.

DHCP Unload Script

The unload script contains commands to stop the DHCP service. The unload script appears similar to the following example:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
ignore_error ncpcon unbind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
ignore_error del_secondary_ipaddress 10.10.2.1
ignore_error nss /pooldeact=DHCPPOOL
exit 0

Configuring the DHCP Unload Script

Add the following line after the . /opt/novell/ncs/lib/ncsfuncs statement:

ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
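
With the same example values, the completed unload script would look similar to the following:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd
ignore_error ncpcon unbind --ncpservername=DHCPCLUSTER-DHCPPOOL-SERVER --ipaddress=10.10.2.1
ignore_error del_secondary_ipaddress 10.10.2.1
ignore_error nss /pooldeact=DHCPPOOL
exit 0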

The path for the dhcpd.pid file changed between OES 11 and OES 11 SP1. In OES 11, the DHCP process ID is located in /var/run/dhcpd.pid. In OES 11 SP1 and later versions, the DHCP process ID is located in /var/lib/dhcp/var/run/dhcpd.pid. During a cluster upgrade from OES 11 to OES 11 SP1 and later, you must change the path for dhcpd.pid. For more information, see Changing the Path for dhcpd.pid.

Changing the Path for dhcpd.pid

During a cluster upgrade from OES 11 to OES 11 SP1 and later versions, you must modify the location of the dhcpd.pid file in the unload script from /var/run/dhcpd.pid to /var/lib/dhcp/var/run/dhcpd.pid. After you modify the script, you should bring the resource online only on OES 11 SP1 and later nodes.

  1. In your OES 11 cluster, upgrade one or more nodes to OES 11 SP1 or later.

    At least one of the upgraded nodes should appear in the DHCP resource's preferred nodes list. If none does, you can modify the resource's preferred nodes list. For information about how to set preferred nodes, see Configuring Preferred Nodes and Node Failover Order for a Resource in the OES 23.4: OES Cluster Services for Linux Administration Guide.

  2. Cluster migrate the DHCP resource to an OES 11 SP1 or later node in its preferred nodes list:

    1. Log in as the root user to the OES 11 node where the resource is running, then open a terminal console.

    2. At the command prompt, enter

      cluster migrate <dhcp_resource_name> <oes11sp1_node_name>

      The DHCP resource goes offline on the OES 11 node and comes online on the specified OES 11 SP1 or later node.
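
      For example, with a hypothetical resource name DHCPPOOL_SERVER and node name node2:

      cluster migrate DHCPPOOL_SERVER node2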

  3. Log in to iManager, click Clusters, select the cluster, then click the Cluster Manager tab.

  4. On the Cluster Manager tab, select the check box next to the DHCP resource, then click Offline.

  5. At a command prompt on the OES 11 SP1 or later cluster node, manually stop the DHCP process by entering:

    killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd

    You must do this because the path in the old unload script is different from the path in OES 11 SP1 and later versions.

  6. In iManager, click the Cluster Options tab, then click the DHCP resource link to open its Properties page.

  7. Modify the path for the dhcpd.pid file in the unload script for the DHCP resource:

    1. Click the Scripts tab, then click Unload Script.

    2. Look for the following line in the DHCP unload script from OES 11:

      ignore_error killproc -p /var/run/dhcpd.pid -TERM /usr/sbin/dhcpd

    3. Change it to the following for OES 11 SP1 and later versions:

      ignore_error killproc -p /var/lib/dhcp/var/run/dhcpd.pid -TERM /usr/sbin/dhcpd

    4. Click Apply to save the script changes.

  8. Click the Preferred Nodes tab, remove the OES 11 nodes from the Assigned Nodes list, then click Apply.

    After the unload script change, you want the DHCP resource to fail over only to OES 11 SP1 or later nodes. This is necessary to ensure a graceful shutdown of the dhcpd process when the DHCP resource fails over to a different node. For information about how to set preferred nodes, see Configuring Preferred Nodes and Node Failover Order for a Resource in the OES 23.4: OES Cluster Services for Linux Administration Guide.

  9. Click OK to save your changes and close the resource's Properties page.

  10. Bring the DHCP resource online again. Click the Cluster Manager tab, select the check box next to the DHCP resource, then click Online.

    The resource comes online on the OES 11 SP1 or later node that is listed as its most preferred node, if that node is available.

DHCP Monitor Script

The monitor script contains commands to monitor the DHCP service. The monitor script appears similar to the following example:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/DHCPPOOL nsspool
exit_on_error status_secondary_ipaddress 10.10.2.1 
exit_on_error ncpcon volume DHCPVOL
exit 0 

Configuring the DHCP Monitor Script

  1. Add the following lines before exit 0:

    rcnovell-dhcpd status
    if test $? != 0; then
        exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m <MOUNT_POINT>
    fi
    exit_on_error rcnovell-dhcpd status
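
    With the example mount point /media/nss/DHCPVOL, the completed monitor script would look similar to the following:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/DHCPPOOL nsspool
    exit_on_error status_secondary_ipaddress 10.10.2.1
    exit_on_error ncpcon volume DHCPVOL
    rcnovell-dhcpd status
    if test $? != 0; then
        exit_on_error /opt/novell/dhcp/bin/cluster_dhcpd.sh -m /media/nss/DHCPVOL
    fi
    exit_on_error rcnovell-dhcpd status
    exit 0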