Setting Up SSL Client-Side Authentication Between Transformation Hub and ESM - Non-FIPS Mode

ArcSight Platform maintains its own certificate authority (CA) to issue certificates for individual nodes in the Transformation Hub cluster and for external communication. ESM needs the signed certificates in its truststore so that it trusts connections to the ArcSight Platform and Transformation Hub. If you do not have sufficient privileges to access the signed certificates and run the necessary commands, contact the ArcSight Platform administrator to obtain them.

Note: When configuring Transformation Hub access, you must specify the FQDN of the ArcSight Platform virtual IP (for HA) or of the single master node, not the IP address.

To complete the configuration, perform the following tasks:

Enabling Client-Side Authentication Between Transformation Hub and ESM

  1. Verify that Transformation Hub is functional and that client authentication is configured.

  2. As user arcsight, stop the ArcSight Manager:

    /etc/init.d/arcsight_services stop manager
  3. If /opt/arcsight/manager/config/client.properties does not exist, create it using an editor of your choice.

  4. Change the store password for the keystore, keystore.client, which has an empty password by default. This empty password interferes with the certificate import:

    /opt/arcsight/manager/bin/arcsight keytool -store clientkeys -storepasswd -storepass ""
  5. Run the following command to update the empty password of the generated key services-cn in the keystore to be the same password as that of the keystore itself. When prompted, specify the same password that you entered for the store password:

    /opt/arcsight/manager/bin/arcsight keytool -store clientkeys -keypasswd -keypass "" -alias services-cn
  6. Run the following command to update the password in config/client.properties:

    /opt/arcsight/manager/bin/arcsight changepassword -f config/client.properties -p ssl.keystore.password
  7. Generate the keypair and certificate signing request (.csr) file. When generating the keypair, specify the fully qualified domain name of the ArcSight Manager host as the common name (CN) for the certificate.

    Run the following commands:

    export COMMON_NAME=<your ESM host's fully qualified domain name>
    /opt/arcsight/manager/bin/arcsight keytool -store clientkeys -genkeypair -dname "cn=${COMMON_NAME}, ou=<your organization>, o=<your company>, c=<your country>" -keyalg rsa -keysize 2048 -alias ebkey -startdate -1d -validity 366
    /opt/arcsight/manager/bin/arcsight keytool -certreq -store clientkeys -alias ebkey -file ${COMMON_NAME}.csr

    where ${COMMON_NAME}.csr is the output file where the .csr is stored.

  8. To sign the ESM certificate signing request, perform the following steps in the ArcSight Platform. For an on-premises deployment, perform the steps on the master node. For a cloud deployment, perform the steps on the Bastion host.

    1. Create a temporary folder to store the generated certificates:

      mkdir -m 700 /tmp/esm
    2. Move the certificate signing request (.csr) file from the ESM host to the temporary folder that you created.
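      For example, you could push the request with scp. This is a sketch: the host names below are placeholders for your environment, and the destination is the temporary folder created in the previous step.

```shell
# Placeholder values; substitute the host names from your environment.
COMMON_NAME=esm.example.com     # FQDN used as the CN for the key pair
MASTER=master.example.com       # Platform master node (or Bastion host)
CSR_FILE="${COMMON_NAME}.csr"

# Run on the ESM host, from the directory containing the .csr:
#   scp ${CSR_FILE} root@${MASTER}:/tmp/esm/
```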

    3. Set the environment variables:

      export CA_CERT=re_ca.cert.pem
      export COMMON_NAME=<your ESM host's fully qualified domain name>
      export TH=<FQDN of the ArcSight Platform virtual IP for HA or single master node>_<Kafka TLS-enabled port>
      Note: For COMMON_NAME, use the same host FQDN as you used for the ESM client key pair.
    4. Run the following commands to sign the ESM certificate signing request:

      cd /tmp/esm
      export CDF_APISERVER=$(kubectl get pods -n core -o custom-columns=":metadata.name"| grep cdf-apiserver)
      export PASSPHRASE=$(kubectl get secret vault-passphrase -n core -o json 2>/dev/null | jq -r '.data.passphrase')
      export ENCRYPTED_ROOT_TOKEN=$(kubectl get secret vault-credential -n core -o json 2>/dev/null | jq -r '.data."root.token"')
      export VAULT_TOKEN=$(echo ${ENCRYPTED_ROOT_TOKEN} | openssl aes-256-cbc -md sha256 -a -d -pass pass:"${PASSPHRASE}")
      export CSR=$(cat ${COMMON_NAME}.csr)
      
      WRITE_RESPONSE=$(kubectl exec -it -n core ${CDF_APISERVER} -c cdf-apiserver -- bash -c "VAULT_TOKEN=$VAULT_TOKEN vault write -tls-skip-verify -format=json RE/sign/coretech csr=\"${CSR}\"") && \
      echo "$WRITE_RESPONSE" | jq -r ".data | .certificate" > ${COMMON_NAME}.signed.crt && \
      echo "$WRITE_RESPONSE" | jq -r ".data | .issuing_ca" > ${COMMON_NAME}.issue_ca.crt && \
      echo "$WRITE_RESPONSE" | jq -r ".data | .certificate, if .ca_chain then .ca_chain[] else .issuing_ca end" > ${COMMON_NAME}.signed.cert.with.ca.crt 

      The signed certificate is in the file ${COMMON_NAME}.signed.crt. The issuing CA is in the file ${COMMON_NAME}.issue_ca.crt. The signed certificate with the CA chain is in the file ${COMMON_NAME}.signed.cert.with.ca.crt.
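      As a sanity check, the jq filters above can be exercised against a mock response (requires jq; the MOCK payload below only imitates the shape of the vault output and is not real data):

```shell
# Mock of the JSON shape returned by the vault sign call (illustrative only):
MOCK='{"data":{"certificate":"CERT","issuing_ca":"ROOT","ca_chain":["INT","ROOT"]}}'

echo "$MOCK" | jq -r '.data | .certificate'    # -> CERT
echo "$MOCK" | jq -r '.data | .issuing_ca'     # -> ROOT
# The chain filter emits the leaf certificate followed by the CA chain
# (or the issuing CA when no chain is present):
echo "$MOCK" | jq -r '.data | .certificate, if .ca_chain then .ca_chain[] else .issuing_ca end'
```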

  9. Retrieve the RE certificates:

    For an on-premises deployment:

    cd /opt/arcsight/kubernetes/scripts/
    ./cdf-updateRE.sh > /tmp/esm/${CA_CERT}

    For a cloud deployment:

    cd {path to cdf installer}/cdf-deployer/scripts/
    ./cdf-updateRE.sh > /tmp/esm/${CA_CERT}
  10. Copy the following files from the /tmp/esm folder on the ArcSight Platform master node (or Bastion host) to a folder on the ESM host (for example, /opt/arcsight/tmp):

    /tmp/esm/${COMMON_NAME}.signed.cert.with.ca.crt

    /tmp/esm/${CA_CERT}

    Remove the files from /tmp/esm after you copy them.
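    For example, run the following from the master node. This is a sketch: the host name and login are placeholders, and the destination folder must be writable by the arcsight user.

```shell
# Placeholder values; substitute your environment's host names.
CA_CERT=re_ca.cert.pem
COMMON_NAME=esm.example.com     # FQDN used as the CN for the key pair
ESM_HOST=esm.example.com        # the ESM server
SIGNED_CERT="/tmp/esm/${COMMON_NAME}.signed.cert.with.ca.crt"

# Run on the master node (or Bastion host):
#   scp ${SIGNED_CERT} arcsight@${ESM_HOST}:/opt/arcsight/tmp/
#   scp /tmp/esm/${CA_CERT} arcsight@${ESM_HOST}:/opt/arcsight/tmp/
#   rm -rf /tmp/esm
```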

  11. On the ESM server, import the RE certificate from file ${CA_CERT} into the ESM client truststore:

    /opt/arcsight/manager/bin/arcsight keytool -store clientcerts -alias <alias for the certificate> -importcert -file <absolute path to certificate file>

    For example:

    /opt/arcsight/manager/bin/arcsight keytool -store clientcerts -alias thcert -importcert -file /opt/arcsight/tmp/re_ca.cert.pem

    Note: You might receive the following message:

    Certificate already exists in keystore under alias <alias1>

    Do you still want to add it? [no]:

    It is not necessary to add an existing certificate.

  12. On the ESM server, run the following command to import the signed certificate:

    /opt/arcsight/manager/bin/arcsight keytool -store clientkeys -alias <alias for the key> -importcert -file <path to signed cert> -trustcacerts

    For example:

    /opt/arcsight/manager/bin/arcsight keytool -store clientkeys -alias ebkey -importcert -file /opt/arcsight/tmp/${COMMON_NAME}.signed.cert.with.ca.crt -trustcacerts

    Note: You might see the following warning:

    ...

    Top-level certificate in reply:

    ...

    ... is not trusted. Install reply anyway? [no]:

    This is because the root certificate of the RE CA is not in the ESM truststore. This does not affect the functionality of ESM. Enter yes to allow the new certificate to be imported.

Configuring ESM to Consume from Transformation Hub

  1. Run the following command:

    /opt/arcsight/manager/bin/arcsight managersetup -i console
  2. In the wizard, press Enter until the wizard asks whether you want to read events from Transformation Hub. Select Yes, then provide the following information:

    1. Host name and port information for the worker nodes in Transformation Hub. Use a comma-separated list (for example: <host>:<port>,<host>:<port>) and specify the FQDN of the worker nodes.

      Note: You must specify the host name and not the IP address.

      Transformation Hub can only accept IPv4 connections from ESM.

      If the Kafka cluster is configured to use SASL/PLAIN authentication, ensure that you specify the port configured in the cluster for the SASL_SSL listener.

    2. Topics in Transformation Hub from which you want to read. These topics determine the data source.

      Note: You can specify up to 25 topics using a comma-separated list (for example: topic1,topic2). ESM will read Avro-format events from any topic where the name contains "avro" in lower case. For example, th-arcsight-avro.
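      The naming rule can be sketched in shell (the topic names below are examples; the actual check happens inside ESM):

```shell
# ESM reads a topic as Avro when its name contains the substring "avro":
for topic in th-arcsight-avro th-cef mf-event-avro-esmfiltered; do
  case "$topic" in
    *avro*) echo "$topic: read as Avro events" ;;
    *)      echo "$topic: not treated as Avro" ;;
  esac
done
```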
    3. Leave the path to the Transformation Hub root certificate empty, as you already imported the certificates.

    4. Leave the authentication type as None.
    5. Leave the user name and password empty.
    6. If you specified an Avro topic, specify the host name and port for connecting to the Schema Registry in the format <FQDN of the ArcSight Platform virtual IP for HA or single master node>:<port>.

      Note: The default port for connecting to the Schema Registry is 32081.

      Transformation Hub runs a Confluent Schema Registry that producers and consumers use to manage compatibility of Avro-format events.

      The wizard uses this information to connect to the Schema Registry, read the Avro schemas for the Avro topic that you specified, and verify that the topic contains Avro events that are compatible with ESM. If ESM cannot retrieve the Avro schemas for the topic and compare them to the event schema that is packaged with ESM, or if incompatible schemas are detected, the wizard generates warning messages but allows you to continue. In some cases, you might already know that Transformation Hub will use a compatible schema by the time the Manager is running.
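      For reference, the endpoint the wizard contacts can be composed as follows. This is a sketch: the host name is a placeholder, TLS on the registry port is assumed, and /subjects is the standard Confluent Schema Registry path for listing registered schemas.

```shell
SR_HOST=platform.example.com   # placeholder: Platform virtual IP FQDN
SR_PORT=32081                  # default Schema Registry port
SR_URL="https://${SR_HOST}:${SR_PORT}/subjects"

# To list registered schema subjects manually (requires network access
# to the cluster; -k skips TLS verification for a quick check):
#   curl -k "${SR_URL}"
echo "${SR_URL}"
```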

    7. If you choose to configure the Forwarding Connector to forward CEF events to Transformation Hub and then configure Transformation Hub to filter Avro events, use filters to ensure that ESM does not receive duplicate events. You might want to use filters to accomplish the following:

      • Filter out unwanted events from Connectors so that ESM does not process them.
      • Filter out ESM's correlation events that were forwarded (CEF events that the Forwarding Connector sent to th-cef) so that ESM does not re-process its own events.

        If you do not configure filtering, ESM must consume from the th-arcsight-avro topic. If you configure filtering, ESM must consume from the mf-event-avro-esmfiltered topic. For more information, see configuring filters and local and global event enrichment.

    After providing the information, specify Yes and complete the remaining sections of the wizard.

  3. Start the ArcSight Manager:

    In compact mode:

    /etc/init.d/arcsight_services start manager

    In distributed mode:

    /etc/init.d/arcsight_services stop all
    /etc/init.d/arcsight_services start all

    Ensure that all services started:

    /etc/init.d/arcsight_services status
  4. Verify that the connection was successful:

    grep -rnw '/opt/arcsight/var/logs/manager/' -e 'Transformation Hub service is initialized' -e 'Started kafka readers'

    The output should be similar to the following:

    /opt/arcsight/var/logs/manager/default/server.std.log:5036:2021-07-13 09:51:36 =====> Transformation Hub service is initialized (49 s) <=====

    /opt/arcsight/var/logs/manager/default/server.log:11664:[2021-07-13 09:51:36,656][INFO ][default.com.arcsight.common.messaging.events.aa] Started kafka readers in PT0.115S

    /opt/arcsight/var/logs/manager/default/server.log:11665:[2021-07-13 09:51:36,657][INFO ][default.com.arcsight.server.NGServer] Transformation Hub service is initialized (49 s)