Configuring HDFS Security in OMT

This section provides steps to configure or reconfigure HDFS security in OMT.

You do not need to perform this procedure if you enabled Kerberos Authentication when deploying Intelligence and do not intend to modify the Kerberos details.
Perform this procedure in either of the following scenarios:
  • If you are enabling Kerberos Authentication for the first time.
  • If you need to modify the Kerberos details in the Intelligence tab. In this case, ensure that you first enable and configure Kerberos Authentication with the new Kerberos details before proceeding with this procedure.

To configure or reconfigure HDFS security in OMT:

  1. Open a certified web browser.

  2. Specify the following URL to log in to the OMT Management Portal: https://<OMT_masternode_hostname or virtual_ip hostname>:5443.

  3. Select Deployment > Deployments.

  4. Click ... (Browse) on the far right and choose Reconfigure. The reconfiguration page opens in a new tab.

  5. Click Intelligence and specify details under the Hadoop File System (HDFS) Security section.

    The Enable Secure Data Transfer with HDFS Cluster option is enabled by default and encrypts communication between the HDFS cluster and the database. However, enabling it increases the run time of the analytics jobs.
    • If you have a non-collocated database cluster and Enable Secure Data Transfer with HDFS Cluster is enabled, perform the following steps:
      1. Execute the following command on the master node:

        /opt/arcsight/kubernetes/scripts/cdf-updateRE.sh > /tmp/re_ca.cert.pem
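
        For example, to confirm that the exported file contains a valid CA certificate, you can inspect it with openssl. This is an optional check, not part of the product tooling:

        # Optional: print the subject and expiry of the exported CA certificate.
        openssl x509 -in /tmp/re_ca.cert.pem -noout -subject -enddate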
      2. Execute the following commands on each database node:

        scp root@<master_node_FQDN>:/tmp/re_ca.cert.pem /etc/pki/ca-trust/source/anchors/
        update-ca-trust
      3. From each database node, execute the following command to verify that the node trusts the CA:

        curl https://<WORKER_RUNNING_HDFS_NAMENODE>:30071

        You should not encounter any certificate errors when you run this command.
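
        If you prefer a scripted check, the following sketch wraps the same curl call and reports failures by exit code; curl exit code 60 indicates a certificate verification failure. Replace the placeholder hostname as above:

        # Minimal sketch: verify TLS trust in the CA from a database node.
        if curl -sS -o /dev/null https://<WORKER_RUNNING_HDFS_NAMENODE>:30071; then
          echo "CA trust verified"
        else
          echo "TLS verification or connection failed (curl exit $?)" >&2
        fi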

    • The Kerberos details that you provide in Kerberos Domain Controller Server, Kerberos Domain Controller Admin Server, Kerberos Domain Controller Domain, and Default Kerberos Domain Controller Realm are considered only if you select kerberos in Enable Authentication with HDFS Cluster. They are ignored if you select simple.
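
      For illustration only, hypothetical values for these fields might look like the following, assuming a Kerberos realm named EXAMPLE.COM; substitute the actual KDC hosts and realm of your environment:

      Kerberos Domain Controller Server: kdc.example.com
      Kerberos Domain Controller Admin Server: kadmin.example.com
      Kerberos Domain Controller Domain: example.com
      Default Kerberos Domain Controller Realm: EXAMPLE.COM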

  6. Click Save.

  7. Wait for the following containers to restart:

    • interset-analytics-xxxxx-xxx

    • hdfs-namenode-x

    • hdfs-datanode-xxx

  8. (Conditional) If the HDFS namenode enters safe mode when you run analytics after modifying the value of Enable Secure Data Transfer with HDFS Cluster, perform the following steps:

    1. Do the following to bring the HDFS namenode up:

      1. Launch a terminal session and log in to the NFS server.

      2. Navigate to the namenode directory on the NFS server.

        (Conditional) If you have used the ArcSight Platform Installer, navigate to the following NFS directory:

        /opt/arcsight-nfs/arcsight-volume/interset/hdfs/namenode

        (Conditional) If you have used the manual deployment method, navigate to the following NFS directory:

        /<arcsight_nfs_vol_path>/interset/hdfs/namenode

        For example:

        /opt/arcsight/nfs/volumes/itom/arcsight/interset/hdfs/namenode
      3. Delete the name folder under the namenode directory.
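
        For reference, the namenode cleanup amounts to the following sketch, assuming the ArcSight Platform Installer NFS path; adjust the path for a manual deployment:

        # Remove the name folder so the namenode reinitializes on restart.
        cd /opt/arcsight-nfs/arcsight-volume/interset/hdfs/namenode
        rm -rf name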

    2. Do the following to bring the HDFS datanodes up:

      1. Navigate to the hdfs directory:

        (Conditional) If you have used the ArcSight Platform Installer, navigate to the following directory:

        /opt/arcsight/k8s-hostpath-volume/interset/hdfs

        (Conditional) If you have used the manual deployment method, navigate to the following directory:

        <arcsight_k8s-hostpath-volume>/interset/hdfs
      2. Delete the data folder under the hdfs directory.

      3. Repeat steps 1 and 2 on all the datanodes.
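
        Equivalently, on each datanode, the cleanup amounts to the following sketch, assuming the ArcSight Platform Installer hostpath path; adjust the path for a manual deployment:

        # Remove the data folder so the datanode re-registers with the namenode on restart.
        cd /opt/arcsight/k8s-hostpath-volume/interset/hdfs
        rm -rf data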

    3. Restart the HDFS datanode and namenode containers.
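
      One way to restart the containers is to delete the corresponding pods and let Kubernetes recreate them. The following sketch assumes kubectl access on the master node; <arcsight_namespace> is a placeholder for the namespace in which Intelligence runs, and the pod name suffixes are placeholders:

      # List the HDFS pods to identify their exact names and namespace.
      kubectl get pods --all-namespaces | grep hdfs
      # Deleting a pod causes Kubernetes to recreate (restart) it.
      kubectl delete pod hdfs-namenode-<x> -n <arcsight_namespace>
      kubectl delete pod hdfs-datanode-<xxx> -n <arcsight_namespace>
      # Optional: confirm that the namenode is no longer in safe mode.
      kubectl exec -n <arcsight_namespace> hdfs-namenode-<x> -- hdfs dfsadmin -safemode get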