Preparing Your Environment

To prepare your environment for installing the OMT infrastructure, ensure that you complete the following procedures:


Checking Your Firewall Settings

Ensure that the firewalld service is enabled and running on all nodes. If it is not, run the following commands on each node:

# systemctl start firewalld
# systemctl enable firewalld
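To confirm the service state, you can use the standard systemctl queries below; each prints the state and returns a nonzero exit code if the service is disabled or inactive:

# systemctl is-enabled firewalld
# systemctl is-active firewalld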


Enabling the Masquerade Setting in the Firewall

The masquerade setting is required only when the firewall is enabled.

Run the following command on all master and worker nodes to check whether the masquerade setting is enabled:

# firewall-cmd --query-masquerade
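The command prints yes or no. If the result is no, the following standard firewall-cmd commands enable masquerade persistently (confirm that this matches your organization's security policy first):

# firewall-cmd --add-masquerade --permanent
# firewall-cmd --reload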


Modifying the System Clock

A Network Time Protocol (NTP) time server must be available. chrony implements NTP and is installed by default on some versions of RHEL. chrony must be installed on every node.

Verify the chrony configuration by using the command:

# chronyc tracking

To install chrony, start and enable the chrony daemon, and verify its operation, run these commands:

# yum install chrony 
# systemctl start chronyd
# systemctl enable chronyd
# chronyc tracking
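To also confirm that chrony can reach its configured time sources, you can run the standard chronyc sources subcommand:

# chronyc sources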


Checking Password Authentication Settings

If you use user name and password authentication for adding cluster nodes during the installation, ensure that the PasswordAuthentication parameter in the /etc/ssh/sshd_config file is set to yes.

You do not need to check the password authentication setting when you add cluster nodes using user name and key authentication.

To ensure that password authentication is enabled, perform the following steps on every master and worker node:

  1. Log on to the cluster node.
  2. Open the /etc/ssh/sshd_config file.
  3. Check whether the PasswordAuthentication parameter is set to yes. If it is not, set it as follows:
     PasswordAuthentication yes
  4. Run the following command to restart the sshd service:
     systemctl restart sshd.service
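As a quick check, you can also query the effective setting directly (sshd -T prints the effective configuration and requires root):

# sshd -T | grep -i passwordauthentication

If the output is not "passwordauthentication yes", update /etc/ssh/sshd_config as described above.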


Ensuring That Required OS Packages Are Installed

The packages listed in the following table are required on one or more node types, as shown here. These packages are available in the standard yum repositories.

Package Name                                       | Master Nodes? | Worker Nodes? | NFS Server?
---------------------------------------------------|---------------|---------------|------------
conntrack-tools                                    | Yes           | Yes           | No
container-selinux (package version 2.74 or later)  | Yes           | Yes           | No
curl                                               | Yes           | Yes           | No
device-mapper-libs                                 | Yes           | Yes           | No
httpd-tools                                        | Yes           | Yes           | No
java-1.8.0-openjdk                                 | Yes           | No            | No
libgcrypt                                          | Yes           | Yes           | No
libseccomp                                         | Yes           | Yes           | No
libtool-ltdl                                       | Yes           | Yes           | No
lvm2                                               | Yes           | Yes           | No
net-tools                                          | Yes           | Yes           | No
nfs-utils                                          | Yes           | Yes           | Yes
rpcbind                                            | Yes           | Yes           | Yes
socat                                              | Yes           | Yes           | No
systemd-libs (version >= 219)                      | Yes           | Yes           | No
unzip                                              | Yes           | Yes           | No
bind-utils                                         | Yes           | Yes           | No
openssl                                            | Yes           | Yes           | No
If the bash-completion package is not installed on a node, a warning is displayed. However, the bash-completion package is not required.

To check for prior installation of any of these packages:

  1. Set up the yum repository on your server.
  2. Run this command:
     yum list installed <package name>
     The command returns an exit status code where:
     • 0 indicates the package is installed
     • 1 indicates the package is not installed (this does not check whether the package is valid)
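To check all of the required packages at once, you can loop over the list; a minimal sketch (adjust the list per node type; for example, java-1.8.0-openjdk is required only on master nodes):

# List any required packages that are missing; relies on the exit status of
# "yum list installed" (0 = installed, 1 = not installed).
for pkg in conntrack-tools container-selinux curl device-mapper-libs httpd-tools \
           java-1.8.0-openjdk libgcrypt libseccomp libtool-ltdl lvm2 net-tools \
           nfs-utils rpcbind socat systemd-libs unzip bind-utils openssl; do
  yum list installed "$pkg" >/dev/null 2>&1 || echo "$pkg is NOT installed"
done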

To install a required package, run the following command:

yum -y install <package name>


Checking MAC and Cipher Algorithms

Ensure that the /etc/ssh/sshd_config file on every master and worker node is configured with at least one of the following MAC and cipher algorithms. The example below lists all supported algorithms; add only the algorithms that meet the security policy of your organization.

For example, you could add the following lines to the /etc/ssh/sshd_config files on all master and worker nodes:

MACs hmac-sha2-256,hmac-sha2-512
Ciphers aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
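After you change /etc/ssh/sshd_config, restart the SSH daemon so that the new algorithm lists take effect:

# systemctl restart sshd.service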


Setting System Parameters (Network Bridging)

  1. Log in to the node.
  2. Run the following command:
     echo -e "\nnet.bridge.bridge-nf-call-ip6tables=1\nnet.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
  3. Run the following command:
     echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
  4. Run the following commands:
     modprobe br_netfilter && sysctl -p
     echo -e '\nmodprobe br_netfilter && sysctl -p' >> /etc/rc.d/rc.local; chmod +x /etc/rc.d/rc.local
  5. Open the /etc/sysctl.conf file in a text editor.
  6. (Conditional) If you are installing on RHEL earlier than version 8.1, change the following line, if it exists:
     net.ipv4.tcp_tw_recycle=1 to net.ipv4.tcp_tw_recycle=0
  7. (Conditional) If you are installing on RHEL 8.1 or later, remove or comment out the net.ipv4.tcp_tw_recycle line, if it exists.
  8. Save your changes and close the file.
  9. Run this command to apply your updates to the node:
     reboot
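After the node restarts, you can verify that the br_netfilter module is loaded and the bridge settings are active (both values should be 1):

# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables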

Understanding Example Files

The following examples show how the sysctl.conf file might look after these changes.

Example sysctl.conf file for RHEL 7.x:

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
kernel.sem=50100 128256000 50100 2560

Example sysctl.conf file for RHEL 8.1 or later:

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
kernel.sem=50100 128256000 50100 2560


Removing the IPv6 Entry (Mandatory)

A manual installation requires commenting out the IPv6 loopback entry in /etc/hosts. Open the file in an editor:

vi /etc/hosts

Change the following line:

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

to:

#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
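If you prefer a non-interactive edit, the same change can be made with a single sed command (a sketch; verify the file afterwards):

# sed -i 's/^::1/#::1/' /etc/hosts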

Removing Packages That Prevent Ingress from Starting

You must remove any packages that prevent the ingress service from starting.

  1. Run the following command:
     yum remove rsh rsh-server vsftpd
  2. Confirm the removal when prompted.
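To confirm that the packages were removed, you can query the RPM database; each absent package is reported as "not installed":

# rpm -q rsh rsh-server vsftpd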


Configuring Elasticsearch Settings

This procedure applies only when you are deploying the Intelligence capability.

To ensure that the Elasticsearch pods run after deployment and the Elasticsearch cluster is accessible:

  1. Launch a terminal session and log in to a worker node.
  2. Open the /etc/sysctl.conf file and add the following line:
     vm.max_map_count=262144
  3. Restart the node:
     reboot
  4. Repeat these steps on every remaining worker node.
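After each node restarts, you can confirm that the setting took effect (the expected value is 262144):

# sysctl vm.max_map_count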