Preparing for Manual On-premises Installation Using sudo

This section applies only to manual on-premises installations.

If you choose to run the Installer as a sudo (non-root) user, the root user must first grant the sudo user permission to perform the installation. The sudo user must also have permission to execute scripts under the temporary directory /tmp on all master and worker nodes.
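As an optional sanity check (not part of the official procedure), you can confirm that /tmp is not mounted with the noexec option, which would prevent scripts from running there. This sketch assumes the findmnt utility is available:

    # If /tmp is a separate mount, verify it is not mounted noexec
    if findmnt -no OPTIONS /tmp | grep -qw noexec; then
        echo "/tmp is mounted noexec; Installer scripts cannot run from it"
    else
        echo "/tmp allows script execution (or is not a separate mount)"
    fi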

Two distinct sets of file edits need to be performed: the first on the Initial Master Node only, and the second on all remaining master and worker nodes. Both are detailed below.

Editing the sudoers File on the Initial Master Node

Make the following modifications only on the Initial Master Node.

First, log in to the initial master node as the root user. Then, using visudo, edit the /etc/sudoers file and add or modify the following lines.

After replacing the placeholder values in the following commands, you must format each command as a single line and ensure that there is at most a single space character after each comma that delimits parameters. Otherwise, you might receive an error similar to the following when you attempt to save the file.
>>> /etc/sudoers: syntax error near line nn <<<
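You can confirm at any point that the file still parses cleanly: visudo locks and syntax-checks the sudoers file when you save, and its -c option runs a syntax check without opening an editor.

    # Edit /etc/sudoers safely (visudo validates the syntax when you save)
    visudo
    # Syntax-check the current sudoers file without editing it
    visudo -c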
  1. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.

    Cmnd_Alias CDFINSTALL = <unzipped-installer-dir>/installers/cdf/scripts/pre-check.sh, <unzipped-installer-dir>/installers/cdf/install, <unzipped-installer-dir>/installers/cdf/node_prereq, <CDF_HOME>/uninstall.sh, <CDF_HOME>/bin/cdfctl, <CDF_HOME>/scripts/cdfctl.sh, <CDF_HOME>/bin/jq, /usr/bin/kubectl, /usr/bin/mkdir, /usr/bin/cp, /usr/bin/helm, /bin/rm, /bin/chmod, /bin/tar, <CDF_HOME>/scripts/uploadimages.sh, <CDF_HOME>/scripts/cdf-updateRE.sh, <CDF_HOME>/bin/kube-status.sh, <CDF_HOME>/bin/kube-stop.sh, <CDF_HOME>/bin/kube-start.sh, <CDF_HOME>/bin/kube-restart.sh, <CDF_HOME>/bin/env.sh, <CDF_HOME>/bin/kube-common.sh, <CDF_HOME>/bin/kubelet-umount-action.sh, /bin/chown, /bin/ls, /bin/cd, /bin/openssl, /bin/cat, /bin/vi, /bin/systemctl daemon-reload 
    For an AWS installation, the cdf-updateRE.sh script has the path:
    aws-byok-installer/installer/cdf-deployer/scripts/cdf-updateRE.sh
    If you are specifying an alternate tmp folder using the --tmp-folder parameter, ensure that you specify the correct path to <tmp path>/scripts/pre-check.sh in the Cmnd_Alias line.
    • Replace <unzipped-installer-dir> with the directory where you unzipped the installation package. For example, if you unzipped arcsight-platform-installer-<version>.zip under /tmp, the directory is /tmp/arcsight-platform-installer-<version>.
    • Replace <CDF_HOME> with the value you defined on the command line. By default, <CDF_HOME> is /opt/arcsight/kubernetes.

  2. Add the following lines to the wheel users group, replacing <username> with your sudo username. (A filled-in example appears after these steps.)

    %wheel ALL=(ALL) ALL
    <username> ALL=NOPASSWD: CDFINSTALL
    Defaults:<username> env_keep += "CDF_HOME", !requiretty
    Defaults:root !requiretty
  3. Locate the secure_path line in the sudoers file and ensure the following paths are present.

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when running the OMT Installer.

  4. Save the file.
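For illustration only, here is how the wheel-group and secure_path entries from steps 2 and 3 might look with the placeholders filled in, assuming a hypothetical sudo user named arcuser (your username will differ):

    %wheel ALL=(ALL) ALL
    arcuser ALL=NOPASSWD: CDFINSTALL
    Defaults:arcuser env_keep += "CDF_HOME", !requiretty
    Defaults:root !requiretty
    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin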

Editing the sudoers File on the Remaining Master and Worker Nodes

Make the following modifications only on the remaining master and worker nodes.

Log in to each remaining master and worker node. Then, using visudo, edit the /etc/sudoers file and add or modify the following:

In the following commands, ensure that there is, at most, a single space character after each comma that delimits parameters. Otherwise, you might receive an error similar to this when you attempt to save the file.
>>> /etc/sudoers: syntax error near line nn <<<
  1. Add the following Cmnd_Alias line to the command aliases group in the sudoers file.

    Cmnd_Alias CDFINSTALL = /tmp/pre-check.sh, /tmp/ITOM_Suite_Foundation_Node/install, /tmp/ITOM_Suite_Foundation_Node/node_prereq, <CDF_HOME>/uninstall.sh, <CDF_HOME>/bin/cdfctl, <CDF_HOME>/scripts/cdfctl.sh, /usr/bin/kubectl, /usr/bin/mkdir, /usr/bin/cp, /usr/bin/helm, /bin/rm, /bin/su, /bin/chmod, /bin/tar, <CDF_HOME>/scripts/uploadimages.sh, <CDF_HOME>/scripts/cdf-updateRE.sh, <CDF_HOME>/bin/kube-status.sh, <CDF_HOME>/bin/kube-stop.sh, <CDF_HOME>/bin/kube-start.sh, <CDF_HOME>/bin/kube-restart.sh, <CDF_HOME>/bin/env.sh, <CDF_HOME>/bin/kube-common.sh, <CDF_HOME>/bin/kubelet-umount-action.sh, /bin/chown

    If you are specifying an alternate tmp folder using the --tmp-folder parameter, ensure that you specify the correct path to <tmp path>/scripts/pre-check.sh in the Cmnd_Alias line.

    • Replace <CDF_HOME> with the value you defined on the command line. By default, <CDF_HOME> is /opt/arcsight/kubernetes.

  2. Add the following lines to the wheel users group, replacing <username> with your sudo username.

    %wheel ALL=(ALL) ALL
    <username> ALL=NOPASSWD: CDFINSTALL
    Defaults:<username> env_keep += "CDF_HOME", !requiretty
    Defaults:root !requiretty
  3. Locate the secure_path line in the sudoers file and ensure the following paths are present.

    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin

    By doing this, the sudo user can execute the showmount, curl, ifconfig, and unzip commands when running the OMT Installer.

  4. Save the file.

  5. Repeat the process for each remaining master and worker node. A quick way to verify the resulting configuration on each node is sketched below.
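As an optional verification, you can confirm from the initial master node that the sudoers changes on each remaining node parse cleanly and grant the expected commands; sudo -l -U lists the commands a user is allowed to run. This sketch assumes the hypothetical sudo user arcuser from the earlier example and hypothetical hostnames:

    # Hypothetical hostnames; substitute your own node names
    for host in master2 master3 worker1; do
        ssh root@"$host" 'visudo -c && sudo -l -U arcuser'
    done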

Configuring the OS on the ArcSight Database Cluster Nodes

To prepare ArcSight Database nodes for installation as a non-root user, you must configure the operating system on the database cluster nodes so that the non-root user can run the sudo command with the correct permissions.

  1. Create the non-root user on all nodes in the cluster (see the sketch after these steps).

  2. Give the non-root user ownership of /opt on all nodes:

    chown <non-root>:<non-root> /opt
  3. Enable the non-root user to run sudo commands. Using visudo, append the following line to /etc/sudoers on all nodes:

    <non_root_userid> ALL=(ALL) ALL
  4. (Optional) Disable root ssh remote login on all nodes:

    • In /etc/ssh/sshd_config, change PermitRootLogin to no:

      PermitRootLogin no
    • Run the following command to restart sshd:

      systemctl restart sshd
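The following sketch pulls these steps together on a single node. It is illustrative only and assumes a hypothetical non-root username, dbadmin; the final ssh command simply confirms that root login is rejected once PermitRootLogin is set to no:

    # Create the non-root user and set its password (hypothetical name: dbadmin)
    useradd dbadmin
    passwd dbadmin
    # Give the non-root user ownership of /opt and confirm
    chown dbadmin:dbadmin /opt
    ls -ld /opt
    # Validate the sshd configuration before restarting the service
    sshd -t && systemctl restart sshd
    # Expect "Permission denied" if root ssh login is disabled
    ssh -o BatchMode=yes root@localhost true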