40.2 Upgrading the Traditional Sentinel HA

The procedures in this section guide you through upgrading the traditional Sentinel HA installation and the operating system.

From Sentinel 8.3.0.0 onwards, Sentinel uses PostgreSQL instead of MongoDB to store Security Intelligence data and alerts data.

IMPORTANT: If you are upgrading from a version earlier than Sentinel 8.3.0.0, the following steps are applicable.

On the active node, the upgrade process does the following:

  • Migrates Security Intelligence data, alerts data, and so on from MongoDB to PostgreSQL.

    Sentinel now stores Security Intelligence data and alerts data in PostgreSQL instead of MongoDB. The upgrade process first migrates this data to PostgreSQL and, if the migration is successful, automatically proceeds with the upgrade. If the data migration is unsuccessful, you cannot upgrade Sentinel.

  • Generates a cleanup script that you can use to remove the MongoDB data and MongoDB-related RPMs.

The data stored in MongoDB is retained as a backup and you can delete it after upgrading Sentinel.

40.2.1 Upgrading Sentinel HA

  1. Enable the maintenance mode on the cluster:

    crm configure property maintenance-mode=true

    Maintenance mode helps you to avoid any disturbance to the running cluster resources while you update Sentinel. You can run this command from any cluster node.

  2. Verify whether the maintenance mode is active:

    crm status

    The cluster resources should appear in the unmanaged state.
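
    If you want a quick scripted check, the following command (a minimal sketch that assumes the crm shell from the SLES HA extension is available) filters the status output for the "unmanaged" flag that maintenance mode sets on resources:

    crm status | grep -i unmanaged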

  3. Upgrade the passive cluster node:

    1. Stop the cluster stack:

      rcpacemaker stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.

    2. Log in as root to the server where you want to upgrade Sentinel.

    3. Extract the install files from the tar file:

      tar xfz <install_filename>
    4. Run the following command in the directory where you extracted the install files:

      ./install-sentinel --cluster-node
    5. After the upgrade is complete, restart the cluster stack:

      rcpacemaker start

      Repeat Step 3.1 to Step 3.5 for all passive cluster nodes.

    6. Remove the autostart scripts so that the cluster can manage the product.

      cd /
      systemctl -q disable sentinel.service
      
      systemctl -q daemon-reload
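
      Optionally, confirm that autostart is disabled with standard systemd tooling (a minimal check; the command should report "disabled"):

      systemctl is-enabled sentinel.service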
  4. Upgrade the active cluster node:

    1. Back up your configuration, then create an ESM export.

      For more information about backing up data, see Backing Up and Restoring Data in the Sentinel Administration Guide.

    2. Stop the cluster stack:

      rcpacemaker stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.

    3. Log in as root to the server where you want to upgrade Sentinel.

    4. Run the following command to extract the install files from the tar file:

      tar xfz <install_filename>
    5. Run the following command in the directory where you extracted the install files:

      ./install-sentinel
    6. IMPORTANT: If you are upgrading from a version earlier than Sentinel 8.3.0.0, the following steps are applicable.

      1. WARNING: Ensure that you select the appropriate option, because you cannot repeat this procedure after the upgrade is successful.

        If your data is migrated successfully, the upgrade process will automatically proceed with the upgrade.

        The upgrade process retains the data that was stored in MongoDB as a backup.

      2. (Conditional) If the data migration is not successful:

        1. Clean up the unsuccessfully migrated data. For more information, see Cleaning Up Data From PostgreSQL When Migration Fails.

        2. (Conditional) If Sentinel does not start automatically, start Sentinel:

          systemctl start sentinel.service

          or

          <sentinel_installation_path>/opt/novell/sentinel/bin/server.sh start
    7. After the upgrade is complete, start the cluster stack:

      rcpacemaker start
    8. Remove the autostart scripts so that the cluster can manage the product.

      cd /
      systemctl -q disable sentinel.service
      
      systemctl -q daemon-reload
    9. Run the following command to synchronize any changes in the configuration files:

      csync2 -x -v
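
      To confirm that the nodes are in sync before leaving maintenance mode, you can run a compare-only pass (a hedged sketch; the -T option performs a test comparison in the csync2 builds shipped with the SLES HA extension, so check the csync2 man page if the option is not recognized on your release):

      csync2 -T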
  5. Disable the maintenance mode on the cluster:

    crm configure property maintenance-mode=false

    You can run this command from any cluster node.

  6. Verify whether the maintenance mode is inactive:

    crm status

    The cluster resources should appear in the Started state.

  7. (Optional) Verify whether the Sentinel upgrade is successful:

    /opt/novell/sentinel/bin/server.sh version
  8. Log in to Sentinel and verify that you can see the migrated data, such as alerts, Security Intelligence data, and so on.

  9. The data in MongoDB is now redundant because Sentinel 8.3 and later store data only in PostgreSQL. To free up disk space, delete this data. For more information, see Removing Data from MongoDB.

40.2.2 Upgrading the Operating System

This section provides information about how to upgrade the operating system to a major version, such as upgrading from SLES 11 to SLES 12, in a Sentinel HA cluster. When you upgrade the operating system, you must perform a few configuration tasks to ensure that Sentinel HA works correctly after the operating system upgrade.

Perform the steps described in the following sections:

  • Upgrading the Operating System

  • Configuring iSCSI Targets

  • Configuring iSCSI Initiators

  • Configuring the HA Cluster

Upgrading the Operating System

To upgrade the operating system:

  1. Log in as root user to any node in the Sentinel HA cluster.

  2. Run the following command to enable the maintenance mode on the cluster:

    crm configure property maintenance-mode=true

    The maintenance mode helps you to avoid any disturbance to the running cluster resources while you upgrade the operating system.

  3. Run the following command to verify whether the maintenance mode is active:

    crm status

    The cluster resources should appear in the unmanaged state.

  4. Ensure that you have upgraded Sentinel to version 8.2 or later on all the cluster nodes.

  5. Ensure that all the nodes in the cluster are registered with SLES and SLESHA.
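
    On SLES 12 and later, you can check the registration from a shell on each node (a minimal sketch using the standard SUSEConnect tool; the output should list active subscriptions for both the base product and the HA extension):

    SUSEConnect --status-text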

  6. Perform the following steps to upgrade the operating system on the passive cluster node:

    1. Run the following command to stop the cluster stack:

      rcpacemaker stop

      Stopping the cluster stack ensures that the cluster resources remain accessible and avoids fencing of nodes.

    2. Upgrade the operating system. For more information, see Upgrading the Operating System.

  7. Repeat Step 6 on all the passive nodes to upgrade the operating system.

  8. Repeat Step 6 on the active node to upgrade the operating system on it.

  9. Repeat Step 6b to upgrade the operating system on shared storage.

  10. Ensure that the operating system on all the nodes in the cluster is the same, for example by comparing the release information on each node as shown below.
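
    For example, you can print the release string on each node and compare the output (a minimal check using the standard os-release file):

    grep PRETTY_NAME /etc/os-release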

Configuring iSCSI Targets

Perform the following procedure to configure localdata and networkdata files as iSCSI Targets.

For more information about configuring iSCSI targets, see Creating iSCSI Targets with YaST in the SUSE documentation.
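
Before you begin, you can confirm that the backing files referenced in this procedure exist (a minimal sketch; it assumes /localdata and /networkdata were created as plain files on the storage node during the initial HA setup):

  ls -lh /localdata /networkdata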

To configure iSCSI targets:

  1. Run YaST from the command line (or use the Graphical User Interface, if preferred): /sbin/yast.

  2. Select Network Devices > Network Settings.

  3. Ensure that the Overview tab is selected.

  4. Select the secondary NIC from the displayed list, then tab forward to Edit and press Enter.

  5. On the Address tab, assign a static IP address of 10.0.0.3. This will be the internal iSCSI communications IP address.

  6. Click Next, then click OK.

  7. (Conditional) On the main screen:

    • Select Network Services > iSCSI LIO Target.

      NOTE: If you do not find this option, go to Software > Software Management > iSCSI LIO Server and install the iSCSI LIO package.

  8. (Conditional) If prompted, install the required software:

    iscsiliotarget RPM

  9. (Conditional) Perform the following steps on all the nodes in the cluster:

    1. Run the following command to display the contents of the file that contains the iSCSI initiator name:

      cat /etc/iscsi/initiatorname.iscsi

    2. Note the initiator name which will be used for configuring iSCSI initiators:

      For example:

      InitiatorName=iqn.1996-04.de.suse:01:441d6988994

      These initiator names will be used while configuring iSCSI Target Client Setup.
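
      If the cluster has several nodes, you can collect the initiator names in one pass from any host with SSH access to all of them (a minimal sketch with hypothetical host names node1, node2, and node3; substitute your own node names):

      for host in node1 node2 node3; do ssh root@$host "grep ^InitiatorName= /etc/iscsi/initiatorname.iscsi"; done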

  10. Click Service and select the When Booting option to ensure that the service starts when the operating system boots.

  11. Select the Global tab, deselect No Authentication to enable authentication, and then specify the user name and the password for incoming and outgoing authentication.

    The No Authentication option is enabled by default. However, you should enable authentication to ensure that the configuration is secure.

    NOTE: OpenText recommends that you use different passwords for the iSCSI target and the initiator.

  12. Click Targets, and click Add to add a new target.

  13. Click Add to add a new LUN.

  14. Leave the LUN number as 0, browse in the Path dialog (under Type=fileio) and select the /localdata file that you created. If you have a dedicated disk for storage, specify a block device, such as /dev/sdc.

  15. Repeat Steps 13 and 14 to add LUN 1, and select /networkdata this time.

  16. Leave the other options at their default values. Click Next.

  17. (Conditional) If you are using SLES 12, click Add. When prompted for Client Name, specify the initiator name you have copied in Step 9. Repeat this step to add all the client names, by specifying the initiator names.

    The list of client names will be displayed in the Client List.

    You do not have to add the client initiator name for SLES 15 and later.

  18. (Conditional) If you have enabled authentication in Step 11, provide the authentication credentials.

    Select a client, select Edit Auth > Incoming Authentication, and specify the user name and password. Repeat this for all the clients.

  19. Click Next to select the default authentication options, and then click Finish to exit the configuration. Restart iSCSI if prompted.

  20. Exit YaST.

Configuring iSCSI Initiators

To configure iSCSI initiators:

  1. Connect to one of the cluster nodes (node1) and start YaST.

  2. Click Network Services > iSCSI Initiator.

  3. If prompted, install the required software (iscsiclient RPM).

  4. Click Service, and select When Booting to ensure that the iSCSI service is started on boot.

  5. Click Discovered Targets.

    NOTE:If any previously existing iSCSI targets are displayed, delete those targets.

    Select Discovery to add a new iSCSI target.

  6. Specify the iSCSI Target IP address (10.0.0.3).

    (Conditional) If you have enabled authentication in Step 11 in Configuring iSCSI Targets, deselect No Authentication. In the Outgoing Authentication section, enter the authentication credentials you specified while configuring iSCSI targets.

    Click Next.

  7. Select the discovered iSCSI Target with the IP address 10.0.0.3 and select Log In.

  8. Perform the following steps:

    1. Switch to Automatic in the Startup drop-down menu.

    2. (Conditional) If you have enabled authentication, deselect No Authentication.

      The user name and the password you have specified should be displayed in the Outgoing Authentication section. If these credentials are not displayed, enter the credentials in this section.

    3. Click Next.

  9. Switch to the Connected Targets tab to ensure that you are connected to the target.

  10. Exit the configuration. The iSCSI targets should now be attached as block devices on the cluster node; you can verify this from a shell as shown below.
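
    The following commands (a minimal sketch using the standard open-iscsi and util-linux tools) list the active iSCSI sessions and the block devices; the new LIO-backed disks should appear in the lsblk output:

    iscsiadm -m session
    lsblk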

  11. In the YaST main menu, select System > Partitioner.

  12. In the System View, you should see new hard disks of the LIO-ORG-FILEIO type (such as /dev/sdb and /dev/sdc) in the list, along with already formatted disks (such as /dev/sdb1 or /dev/<SHARED1>).

  13. Repeat steps 1 through 12 on all the nodes.

Configuring the HA Cluster

To configure the HA cluster:

  1. Start YaST2 and go to High Availability > Cluster.

  2. If prompted, install the HA package and resolve the dependencies.

    After the HA package installation, the Cluster - Communication Channels screen is displayed.

  3. Ensure that Unicast is selected as the Transport option.

  4. Select Add a Member Address and specify the node IP address, and then repeat this action to add all the other cluster node IP addresses.

  5. Ensure that Auto Generate Node ID is selected.

  6. Ensure that the HAWK service is running on all the nodes. If it is not running, run the following command to start it:

    service hawk start

  7. Run the following command:

    ls -l /dev/disk/by-id/

    The SBD partition ID is displayed. For example, scsi-1LIO-ORG_FILEIO:33caaa5a-a0bc-4d90-b21b-2ef33030cc53.

    Copy the ID.

  8. Open the sbd file (/etc/sysconfig/sbd) and replace the value of SBD_DEVICE with the ID you copied in Step 7, as shown in the example below.
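
    For example, the relevant line in /etc/sysconfig/sbd would look similar to the following (this reuses the sample ID from Step 7; substitute the ID reported on your own cluster):

    SBD_DEVICE="/dev/disk/by-id/scsi-1LIO-ORG_FILEIO:33caaa5a-a0bc-4d90-b21b-2ef33030cc53"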

  9. Run the following command to restart the pacemaker service:

    rcpacemaker restart

  10. Run the following commands to remove the autostart scripts, so that the cluster can manage the product.

    cd /

    systemctl -q disable sentinel.service
    systemctl -q daemon-reload

  11. Repeat steps 1 through 10 on all the cluster nodes.

  12. Run the following command to synchronize any changes in the configuration files:

    csync2 -x -v

  13. Run the following command to disable the maintenance mode on the cluster:

    crm configure property maintenance-mode=false

    You can run this command from any cluster node.

  14. Run the following command to verify whether the maintenance mode is inactive:

    crm status

    The cluster resources should appear in the Started state.