There are two options for installing Sentinel: install every part of Sentinel onto the shared storage, using the --location option to redirect the installation to the location where you have mounted the shared storage; or install only the variable application data on the shared storage.
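For example, a minimal sketch of the first option, assuming the shared storage is mounted at a hypothetical path such as /sentinel_shared (verify the exact --location syntax for your installer version):

./install-sentinel --location=/sentinel_shared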
Install Sentinel on each cluster node that can host it. The first time you install Sentinel, perform a complete installation, including the application binaries, configuration, and all the data stores. For subsequent installations on the other cluster nodes, install only the application. The Sentinel data will be available after you mount the shared storage.
Connect to one of the cluster nodes (node1) and open a console window.
Download the Sentinel installer (a tar.gz file) and store it in /tmp on the cluster node.
Perform the following steps to start the installation:
Execute the following commands:
mount /dev/<SHARED1> /var/opt/novell
cd /tmp
tar -xvzf sentinel_server*.tar.gz
cd sentinel_server*
./install-sentinel --record-unattended=/tmp/install.props
Specify 2 to select Custom Configuration when prompted to select the configuration method.
If you are enabling FIPS mode, specify the path of the OpenSearch certificate when the installer prompts for the external certificate (an example appears after the list of certificate names):
<sentinel_installation_path>/opt/novell/sentinel/3rdparty/opensearch/config/certs/<certificate_name>.pem
where <certificate_name> is one of the following:
root-ca
admin
node
client
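For example, if you are specifying the root CA certificate under the default installation path, the full path would be:

/opt/novell/sentinel/3rdparty/opensearch/config/certs/root-ca.pem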
Run through the installation, configuring the product as appropriate.
Start Sentinel and test the basic functions. You can use the standard external cluster node IP address to access the product.
Shut down Sentinel and dismount the shared storage using the following commands:
rcsentinel stop
umount /var/opt/novell
Remove the autostart scripts so that the cluster can manage the product:
cd /
insserv -r sentinel
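To confirm that the autostart configuration was removed, you can optionally check the service's runlevel settings; one possible check (output varies by SLES version) is:

chkconfig sentinel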
The Sentinel HA appliance includes the Sentinel software that is already installed and configured. To configure the Sentinel software for HA, perform the following steps:
Connect to one of the cluster nodes (node1) and open a console window.
Navigate to the following directory:
cd /opt/novell/sentinel/setup
Record the configuration:
Execute the following command:
./configure.sh --record-unattended=/tmp/install.props --no-start
This step records the configuration in the file install.props, which is required to configure the cluster resources using the install-resources.sh script.
Specify 2 to select Custom Configuration when prompted to select the configuration method.
When prompted for password, specify 2 to enter a new password.
If you specify 1, the install.props file does not store the password.
Shut down Sentinel using the following command:
rcsentinel stop
Remove the autostart scripts so that the cluster can manage the product:
insserv -r sentinel
Move the Sentinel data folder to the shared storage using the following commands, so that all nodes can access the Sentinel data through the shared storage.
mkdir -p /tmp/new
mount /dev/<SHARED1> /tmp/new
mv /var/opt/novell/sentinel/* /tmp/new
umount /tmp/new/
Verify that the Sentinel data folder has moved to the shared storage using the following commands:
mount /dev/<SHARED1> /var/opt/novell/sentinel
umount /var/opt/novell/sentinel
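Between the mount and umount commands above, you can optionally list the directory to confirm that the Sentinel data is present on the shared storage, for example:

ls /var/opt/novell/sentinel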
Perform the following steps to configure the appliance with SMT:
Enable the appliance repositories by running the following commands in the SMT server:
smt-repos -e Sentinel-Server-HA-8-OS-Updates sle-12-x86_64
smt-repos -e Sentinel-Server-HA-8-Prod-Updates sle-12-x86_64
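You can optionally verify that the repositories are now enabled by listing them on the SMT server; assuming your SMT version lists repositories when smt-repos is run without options, one possible check is:

smt-repos | grep Sentinel-Server-HA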
Configure the appliance with SMT by performing the steps in the Configuring Clients to Use SMT section of the SMT documentation.
Repeat the installation on other nodes:
The initial Sentinel installer creates a user account for use by the product, using the next available user ID at the time of the install. Subsequent installs in unattended mode attempt to use the same user ID for account creation, but conflicts are possible if the cluster nodes are not identical at the time of the install. It is highly recommended that you do one of the following (a quick UID check is sketched after this list):
Synchronize the user account database across cluster nodes (manually through LDAP or similar), making sure that the sync happens before subsequent installs. In this case the installer will detect the presence of the user account and use the existing one.
Watch the output of the subsequent unattended installs - a warning will be issued if the user account could not be created with the same user ID.
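For example, assuming the Sentinel system user is the default novell account (check the account actually created on node1), you can compare the numeric user ID across nodes after each install:

id novell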
Connect to each additional cluster node (node2) and open a console window.
Execute the following commands:
cd /tmp
scp root@node1:/tmp/sentinel_server*.tar.gz .
scp root@node1:/tmp/install.props .
tar -xvzf sentinel_server*.tar.gz
cd sentinel_server*
./install-sentinel --no-start --cluster-node --unattended=/tmp/install.props
insserv -r sentinel
Connect to each additional cluster node (node2) and open a console window.
Execute the following command:
insserv -r sentinel
Stop Sentinel services.
rcsentinel stop
Remove the contents of the Sentinel data directory:
rm -rf /var/opt/novell/sentinel/*
At the end of this process, Sentinel should be installed on all nodes, but it will likely not work correctly on any node except the first until the various keys are synchronized, which happens when you configure the cluster resources.
Perform the following steps to connect the RCM/RCE in traditional HA mode, for both fresh and existing setups:
Before installing or configuring the RCM/RCE, add an entry to the /etc/hosts file on the RCM/RCE system in the following format (a worked sketch follows these steps):
<virtual_ip> <FQDN of first_successful_active_node_host> <first_successful_active_node_hostname>
For example: 164.99.87.27 first_active_host.dom.name first_active_host
IMPORTANT: Before running configure.sh, make sure that this entry always matches the hostname of the first successful active node in the HA environment, as specified in the /etc/hosts file.
Specify the virtual IP address at the prompt when connecting the RCM/RCE to the server.
IMPORTANT: Even if the first successful active node is down and another node is currently active, continue to use the first successful active node's hostname with the virtual IP in the /etc/hosts file.
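For example, a minimal sketch of adding and verifying the entry on the RCM/RCE system, using the sample values above (substitute your own virtual IP and the hostname of your first successful active node):

echo "164.99.87.27 first_active_host.dom.name first_active_host" >> /etc/hosts
getent hosts first_active_host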
Perform the following steps to connect the RCM/RCE in appliance HA mode for a fresh setup:
Use only the hostname of the first successful active node in the HA cluster.
Perform the following steps to connect the RCM/RCE in appliance HA mode for an existing setup:
Before installing or configuring the RCM/RCE, add an entry to the /etc/hosts file on the RCM/RCE system in the following format:
<virtual_ip> <FQDN of first_successful_active_node_host> <first_successful_active_node_hostname>
For example: 164.99.87.27 first_active_host.dom.name first_active_host
IMPORTANT: Before running configure.sh, make sure that this entry always matches the hostname of the first successful active node in the HA environment, as specified in the /etc/hosts file.
Specify the virtual IP address at the prompt when connecting the RCM/RCE to the server.
IMPORTANT: Even if the first successful active node is down and another node is currently active, continue to use the first successful active node's hostname with the virtual IP in the /etc/hosts file.