E.2 Cluster Migration

E.2.1 Overview

Table E-2 summarizes the pre-migration and post-migration status of the cluster being migrated.

Table E-2 Service Migration Summary

Pre-Migration: Three-node NetWare 6.5 SP7 Cluster

  • Each node contains some eDirectory replicas

  • Each node has access to fiber-attached shared storage that includes:

    • Several clustered NSS volumes for home directories and shared file systems

    • One clustered NSS volume for NDPS and iPrint

    • Clustered DHCP

    • Clustered DNS

    • Clustered AFP (NFAP)

    • Clustered CIFS (NFAP)

Post-Migration: Three-node OES Cluster

  • Each node contains some eDirectory replicas

  • Each node has access to fiber-attached shared storage that includes:

    • Several clustered NSS volumes for home directories and shared file systems

    • One clustered NSS volume for iPrint

    • Clustered DHCP

    • Clustered DNS

    • Clustered Novell AFP

    • Clustered Novell CIFS

E.2.2 General Notes and Tips

  • In the YaST install, clustering is disabled by default. To set up clustering, you must enable it for configuration. See OES Cluster Services Parameters and Values in the OES 2023: Installation Guide.

  • Clustering on OES is case-sensitive; NetWare was case-insensitive. Always make sure that you specify the correct case for each name. The SBD on the OES node is created exactly as you specify it.

  • NetWare cluster names display in uppercase. Using lowercase for OES cluster names makes them easier to distinguish from the NetWare names.

  • On NetWare nodes, the load and unload scripts are stored in eDirectory and accessible through iManager.

  • On OES nodes, the load and unload scripts are dynamically created in /var/run/ncs from the scripts stored in eDirectory each time that you cluster-migrate a cluster resource to the OES node.

    Scripts are retained only while the OES server is running. If the server goes down for any reason, the scripts are removed. This is not a problem, however, because they are created again when you cluster-migrate the cluster resources.

  • NetWare limits load and unload scripts to 1024 characters. Linux does not have this limitation.

    The best way to work around this limitation is to create a small script that calls the larger scripts. The calling script must be identical on each node. Transferring DHCP in the Cluster illustrates this concept, and a minimal sketch of the wrapper approach follows this list.

  • There's a utility called sbdutil that lets you manage the SBD (split-brain detector) partition on OES. For documentation, see the sbdutil man page on a clustered server.
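
For example, the following is a minimal sketch of the wrapper approach described above, written as an OES load script. It assumes the standard /opt/novell/ncs/lib/ncsfuncs helper library; the pool name, volume name, volume ID, IP address, and the start_services.sh script on the shared volume are illustrative placeholders, not values from this migration:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # Typical resource-activation lines (placeholder pool, volume, and IP address).
    exit_on_error nss /poolact=MYPOOL
    exit_on_error ncpcon mount MYVOL=254
    exit_on_error add_secondary_ipaddress 192.168.10.10
    # Call the larger script stored on the shared volume so that the script
    # kept in eDirectory stays well under the NetWare size limit.
    /media/nss/MYVOL/start_services.sh
    exit 0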

E.2.3 Preparing to Migrate the Cluster

  1. Read through Table E-3 to understand what happens to the existing volumes during a cluster migration.

    Table E-3 What Happens to Existing Volumes During a Cluster Migration

    Pre-Migration Status: Users volume active on NetWare
    Migration Action: Cluster-migrates to an OES node
    Post-Migration Status: Users volume active on OES

    Pre-Migration Status: Shared volume active on NetWare
    Migration Action: Cluster-migrates to an OES node
    Post-Migration Status: Shared volume active on OES

    Pre-Migration Status: NDPS volume active on NetWare
    Migration Action: The Migration Tool migrates the iPrint configuration and data to the new iPrint volume
    Post-Migration Status: Offline

    Pre-Migration Status: DNS volume active on NetWare
    Migration Action: Moved by iManager to the new DNS2 volume on OES
    Post-Migration Status: Offline

  2. Create all of the NSS volumes that are required for your service migrations as listed in Table E-4.

    WARNING: This must be done while the cluster has only NetWare nodes. If you have already joined OES nodes to your cluster, make sure that you remove them from the cluster before you create the NSS volumes.

    Table E-4 New NSS Pools and Volumes Required

    Create These: Destination iPrint pool and volume (newly created through the NetWare server)
    Migration Action: Cluster-migrates to an OES node
    Post-Migration Status: iPrint volume active on OES

    Create These: Destination DHCP pool and volume (newly created through the NetWare server)
    Migration Action: Cluster-migrates to an OES node
    Post-Migration Status: DHCP volume active on OES

    Create These: Destination DNS2 pool and volume (newly created through the NetWare server)
    Migration Action: Cluster-migrates to an OES node
    Post-Migration Status: DNS2 volume active on OES

E.2.4 Transferring DHCP in the Cluster

  1. Before starting the migration, create the Destination DHCP volume specified in Table E-4.

  2. Add one or more OES servers to the cluster. For more information, see Adding New OES Nodes to Your NetWare Cluster in the OES 2023: OES Cluster Services for Linux Administration Guide.

  3. Set up an OES DHCP cluster resource using the instructions in the first three sections only of Installation and Configuration in the OES 2023: DNS/DHCP Services for Linux Administration Guide.

  4. Edit the destination DHCP pool resource load script and insert the following line just before the last (exit 0) line:

    /destination_dhcp_volume/dhcp_cluster.sh

    where destination_dhcp_volume is the path to the destination DHCP volume listed in Table E-4.

    For example, insert the following line:

    /media/nss/DHCP_VOLUME/dhcp_cluster.sh

    IMPORTANT: This step is required to circumvent the 1024-character script-size limitation on NetWare mentioned in General Notes and Tips. A sketch of a complete load script with this line inserted appears after this procedure.

  5. Download the dhcp_cluster.sh script file from the OES Documentation Web site.

  6. Using a UNIX-compatible text editor, replace <DHCP_VOLUME> in the dhcp_cluster.sh script with the local mount point of your destination DHCP volume.

    For example, MOUNT_POINT="/media/nss/DHCP_VOLUME".

  7. Using the instructions in Migrating DHCP to OES 2023 in the OES 2023: Migration Tool Administration Guide, migrate the NetWare DHCP configuration to one of the OES servers added to the cluster in Step 2.

  8. Copy the /etc/dhcpd.conf file to the destination DHCP volume.

    For example, cp /etc/dhcpd.conf /media/nss/DHCP_VOLUME/dhcpd.conf.

  9. Edit the dhcpd.conf file you copied in Step 8, as follows:

    1. Change the ldap-server IP address to the IP address associated with your destination DHCP pool.

    2. Change the ldap-dhcp-server-cn to the name of the OES DHCP Server object created by the Migration Tool in Step 7. (An illustrative fragment showing both edited directives appears after this procedure.)

  10. Copy the migrated_server.leases file from the /var/opt/novell/dhcp/leases folder to the /var/lib/dhcp/db folder on your Destination DHCP Volume and rename it to dhcpd.leases.

    Continuing with the same example, you use the following command to copy and rename the file:

    cp /var/opt/novell/dhcp/leases/DHCP_SERVER.leases /media/nss/DHCP_VOLUME/var/lib/dhcp/db/dhcpd.leases.

  11. Offline the DHCP cluster resource that has been running on NetWare.

  12. Online the OES DHCP cluster resource.

  13. (Optional) Use iManager to enable the DHCP server as the authoritative server.
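
The following is a minimal sketch of what the destination DHCP pool load script from Step 4 might look like after the edit. It assumes the standard /opt/novell/ncs/lib/ncsfuncs helper library; the pool name, volume name, volume ID, NCP server name, and IP address are illustrative placeholders, not values from your environment:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # Existing resource-activation lines (placeholder names and address).
    exit_on_error nss /poolact=DHCP_POOL
    exit_on_error ncpcon mount DHCP_VOLUME=253
    exit_on_error add_secondary_ipaddress 192.168.10.20
    exit_on_error ncpcon bind --ncpservername=CLUSTER_DHCP_POOL_SERVER --ipaddress=192.168.10.20
    # Line inserted in Step 4: call the larger DHCP script on the shared volume.
    /media/nss/DHCP_VOLUME/dhcp_cluster.sh
    exit 0

Similarly, the dhcpd.conf edits in Step 9 touch only the two LDAP directives named there. The values and quoting shown below are illustrative; use the secondary IP address of your destination DHCP pool resource and the name of the DHCP Server object that the Migration Tool created:

    # Secondary IP address of the destination DHCP pool resource (illustrative).
    ldap-server "192.168.10.20";
    # OES DHCP Server object created by the Migration Tool (illustrative name).
    ldap-dhcp-server-cn "DHCP_oes-node1";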

E.2.5 Transferring DNS in a Cluster

Using Java Console to Migrate DNS Servers within the Same eDirectory Tree

See Migrating DNS to OES 2023 in the OES 2023: Migration Tool Administration Guide.

Installing and Configuring a Cluster-Enabled DNS

  1. Verify that all OES cluster nodes have the DNS pattern installed with a common locator group context.

  2. Mount the shared volume on one of the OES nodes in the cluster.

  3. Execute the following script at the command prompt:

    /opt/novell/named/bin/ncs_dir.sh mount_point username

    where mount_point is the Destination DNS2 volume listed in Table E-4 and username is the fully distinguished name of the DNS user (named by default).

    For example, you might enter the following command:

    /opt/novell/named/bin/ncs_dir.sh /media/nss/DNSVOL/ cn=named.o=novell.T=MyTree

    The script creates the following directory:

    /media/nss/DNSVOL/etc/opt/novell/named

    The script also assigns access and ownership rights for the preceding directory to the DNS user.

  4. Run the DNS Server by using the following command:

    /opt/novell/named/bin/novell-named -u DNS_User -V DEST_DNS2_VOL

    This step ensures that the DNS server is running on the cluster node.

  5. Click Clusters > Cluster Options, then select the Destination DNS2 cluster pool resource and click Details.

  6. Click the Scripts tab.

    1. Click Load Script.

    2. Add the following line before exit 0 to load DNS:

      exit_on_error /opt/novell/named/bin/novell-named -u DNS_User -V DESTINATION_DNS2_VOLUME

    3. Click Unload Script.

    4. Add the following line at the beginning to unload DNS:

      killproc -p /var/opt/novell/run/named/named.pid -TERM /opt/novell/named/bin/novell-named

  7. Set the Destination DNS2 cluster resource offline and then online by using the Clusters > Cluster Manager task in iManager.

  8. Verify that DNS services are functioning correctly.
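
    As a quick functional check, you can query the DNS server through the secondary IP address of the DNS2 cluster resource. The IP address and zone name below are illustrative placeholders:

      # Query the clustered DNS server through the resource's secondary IP address.
      dig @192.168.10.30 example.com SOA

      # Repeat the query after migrating the resource to another node to confirm failover.
      nslookup example.com 192.168.10.30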

E.2.6 iPrint Migration in a Cluster

How Clustered iPrint Migration Works

The OES Migration Tool (miggui) contains an NLM named PSMINFO.NLM that copies all of the iPrint data from the cluster to an XML text file named psminfo.xml on the iPrint NSS volume that you created in Step 2 of Preparing to Migrate the Cluster. The psminfo.xml file is located in an /ndps directory at the root of the volume.

The migration tool uses the information in psminfo.xml to create new printer objects, set up the driver store, create printer agents, etc. The tool also changes the names of the old iPrint objects in eDirectory by appending _nw to each name. The old names can then be applied to the new printer objects. All changes are completely transparent to iPrint users.

Tips and Caveats

  • Legacy queue-based printing cannot be serviced by an OES Printer Agent.

  • From NetWare, you can manage both OES iPrint and NetWare iPrint; from OES, you can manage only OES iPrint.

  • You must create the iPrint NSS pool and volume as instructed in Step 2 of Preparing to Migrate the Cluster prior to adding OES nodes to the cluster or running the migration.

Transferring iPrint in a Cluster

  1. Download the iprint_load.sh script and the iprint_unload.sh script from the OES Documentation Web site.

  2. Customize the iPrint load script for your iPrint pool resource by doing the following:

    1. In iManager, access the load script for the destination iPrint pool resource.

    2. Copy and paste the contents of the downloaded iprint_load.sh file below the last line of the current load script.

    3. Using the information in the current script, replace each variable (indicated by <angle brackets>) with the correct values for the cluster resource.

      For example, if the first line in the current script reads

      nss /poolactivate=POOLNAME

      then modify the third line in the downloaded script to read

      exit_on_error nss /poolact=POOLNAME

    4. Remove all of the lines down to the first line you inserted.

    5. Click Apply.

  3. Customize the iPrint unload script for your iPrint pool resource by doing the following:

    1. In iManager, access the unload script for the destination iPrint pool resource.

    2. Copy and paste the contents of the downloaded iprint_unload.sh file below the last line of the current unload script.

    3. Using the information in the current script, replace each variable (indicated by <angle brackets>) with the correct values for the cluster resource.

      For example, if the first line in the current script reads

      ncpcon unbind --ncpservername=CLUSTERNAME_POOLNAME_SERVER --ipaddress=192.168.10.10

      then modify the third line in the downloaded script to read

      ignore_error ncpcon unbind --ncpservername=CLUSTERNAME_POOLNAME_SERVER --ipaddress=192.168.10.10

    4. Remove all of the lines down to the first line you inserted.

    5. Click Apply > OK.

  4. In iManager, click Clusters > Cluster Options, select the iPrint cluster resource object, and click the Details link.

  5. On the Cluster Pool Properties page, click the Preferred Nodes tab and move all of the NetWare nodes to the Unassigned column.

  6. Offline and then online the cluster resource.

  7. On the server where the iPrint cluster resource is running, open a terminal and enter the following commands:

    cd /opt/novell/iprint/bin

    ./iprint_nss_relocate -a admin.fqdn -p password -n NSS/path -l cluster

    For example, enter

    ./iprint_nss_relocate -a cn=admin,o=novell -p novell -n /media/nss/NSSVOLNAME -l cluster

  8. Migrate the iPrint resource to another OES node in the cluster, then repeat Step 7 until all of the OES nodes in the cluster have run the iprint_nss_relocate script.

  9. Create the Print Manager and Driver Store on the OES cluster.

    When choosing the target server, use the IP address of the cluster resource. This specifies where the driver store and Print Manager database will reside. Begin by using the IP address of the new resource. This will need to be changed to a DNS name later by editing the .conf file.

    When you receive a certificate management error, allow the error and proceed.

    While you are creating the Print Manager, the lower dialog box indicates where the Print Manager will be located. Specify the IP address of the cluster resource. This changes later to a DNS name.

    The iPrint service doesn’t know that it's running on a cluster because the script creates a symbolic link. If the link exists, you know that the service is clustered.

  10. After you create the Print Manager and Driver Store, modify the /etc/opt/novell/iprint/conf/ipsmd.conf and idsd.conf to have multiple DSServer values.

    For example:

    • DSServer1 replicaServer
    • DSServer2 replicaServer
    • DSServer3 replicaServer
  11. Remove the pound sign (#) from the following two lines in the load script (a sketch of a complete customized load script appears after this procedure):

    • exit_on_error rcnovell-idsd start
    • exit_on_error rcnovell-ipsmd start
  12. Offline and online the cluster resource and verify that the Print Manager and Driver Store load.

  13. Create a printer to test that the service is working.

  14. Follow the instructions in Migrating iPrint to OES 2023 in the OES 2023: Migration Tool Administration Guide.

    IMPORTANT: When you authenticate to the source and target servers, use the IP address of the source Novell Cluster Services iPrint resource (secondary IP) and the IP address of the target Novell Cluster Services iPrint resource (secondary IP).

    The ipsmd.conf file is located in the /etc/opt/novell/iprint/conf directory.
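
For reference, here is a minimal sketch of what a customized iPrint pool load script might look like after Step 2 and Step 11. It assumes the standard /opt/novell/ncs/lib/ncsfuncs helper library; the pool name, volume name, volume ID, NCP server name, and IP address are illustrative placeholders, and your downloaded iprint_load.sh file may contain additional lines:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    # Values carried over from the original load script (placeholders here).
    exit_on_error nss /poolact=IPRINT_POOL
    exit_on_error ncpcon mount IPRINT_VOLUME=252
    exit_on_error add_secondary_ipaddress 192.168.10.40
    exit_on_error ncpcon bind --ncpservername=CLUSTERNAME_IPRINT_POOL_SERVER --ipaddress=192.168.10.40
    # Lines uncommented in Step 11 to start the Driver Store and Print Manager.
    exit_on_error rcnovell-idsd start
    exit_on_error rcnovell-ipsmd start
    exit 0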

E.2.7 Transferring AFP in a Cluster

  1. Install AFP on each OES server that will be in the cluster. For details, see the OES 2023: OES AFP for Linux Administration Guide.

  2. Cluster-enable the AFP service. For details, see Configuring AFP with OES Cluster Services for an NSS File System in the OES 2023: OES AFP for Linux Administration Guide.

E.2.8 Transferring CIFS in a Cluster

  1. Install CIFS on each OES server that will be in the cluster. For details, see the OES 2023: OES CIFS for Linux Administration Guide.

  2. Cluster-enable the CIFS service. For details, see Configuring CIFS with Cluster Services for an NSS File System in the OES 2023: OES CIFS for Linux Administration Guide.