Tuning Your Deployment for Recon or Intelligence

This section describes tuning your deployment for Recon or Intelligence. Skip this section if you have not deployed Recon or Intelligence.

Verifying Recon cron Jobs

After deployment, check Recon to verify that the corresponding cron jobs are running, as follows:

  1. In Recon, browse to INSIGHT > Data Timeseries and Source Agents and Hourly Event Volume. If no information is displayed after an hour, the cron job events_quality.sh is not running.

  2. Go to DASHBOARD > Data Processing monitoring and Health and Performance Monitoring. If no information is displayed after an hour, the cron job events_hourly_rate.sh is not running.

If either of these cron jobs is not running, restart fusion-db-adm-schema-mgmt as follows:

  1. Connect to the first master node.

  2. Run the following commands:

PODS=$(kubectl get pods -A | grep fusion-db-adm-schema-mgmt | awk '{print $1, $2}')
kubectl delete pods -n $PODS
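Note that $PODS holds both the namespace and the pod name, so the unquoted expansion turns the second command into `kubectl delete pods -n <namespace> <pod-name>`. A minimal sketch of the awk step on a simulated line of `kubectl get pods -A` output (the namespace and pod name suffix here are hypothetical):

```shell
# Simulated line of `kubectl get pods -A` output (hypothetical names).
SAMPLE='arcsight-installer-abc   fusion-db-adm-schema-mgmt-7d9f   1/1   Running   0   2d'
# awk prints the first two whitespace-separated fields: namespace and pod name.
PODS=$(printf '%s\n' "$SAMPLE" | awk '{print $1, $2}')
echo "$PODS"
```

Because the pod is managed by a controller, Kubernetes recreates it automatically after the delete.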

Updating Event Topic Partition Number

Refer to the Technical Requirements for ArcSight Platform, section entitled System Hardware Sizing and Tuning Guidelines to determine an appropriate event topic partition number for your workload.

To update the topic partition number from master node 1:

Select one of the following commands, based on your encryption and authentication configuration:

Note: The commands contain two variables that must be replaced before execution:

  • {topic_to_update}: replace with th-arcsight-avro, mf-event-avro-esmfiltered, th-cef, or mf-event-avro-enriched, one topic per iteration of the command

  • {number_of_partitions}: for Recon, database node count * 12

For example, for a 3-node database cluster, the partition number would be 3 * 12 = 36.
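The sizing rule above can be computed directly in the shell; this sketch assumes a 3-node database cluster:

```shell
DB_NODE_COUNT=3                      # assumed 3-node database cluster
PARTITIONS=$((DB_NODE_COUNT * 12))   # Recon sizing rule: node count * 12
echo "$PARTITIONS"                   # prints 36
```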

For FIPS (or non-FIPS) Encryption with Client Authentication:

kubectl exec th-kafka-0 -n $(kubectl get ns|awk '/arcsight/ {print $1}') -- sh -c 'sed -ir "s/^[#]*\s*ssl.truststore.password=.*/ssl.truststore.password=$STORES_SECRET/" /etc/kafka/client.properties && \
sed -ir "s/^[#]*\s*ssl.keystore.password=.*/ssl.keystore.password=$STORES_SECRET/" /etc/kafka/client.properties && \
sed -ir "s/^[#]*\s*ssl.key.password=.*/ssl.key.password=$STORES_SECRET/" /etc/kafka/client.properties && \
kafka-topics --bootstrap-server th-kafka-svc:9093 --alter --topic {topic_to_update} --partitions {number_of_partitions} --command-config /etc/kafka/client.properties'

After executing the above, copy and paste the following command block, which blanks the passwords out of client.properties again:

kubectl exec th-kafka-0 -n $(kubectl get ns|awk '/arcsight/ {print $1}') -- sh -c 'sed -ir "s/^[#]*\s*ssl.truststore.password=.*/ssl.truststore.password=/" /etc/kafka/client.properties && \
sed -ir "s/^[#]*\s*ssl.keystore.password=.*/ssl.keystore.password=/" /etc/kafka/client.properties && \
sed -ir "s/^[#]*\s*ssl.key.password=.*/ssl.key.password=/" /etc/kafka/client.properties'
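The sed pattern used in both blocks uncomments the property line (if needed) and rewrites its value. A minimal demonstration of that substitution on a throwaway file (the file path and secret value here are placeholders for the demo only):

```shell
# Write a commented-out property line to a throwaway file (hypothetical path).
printf '#ssl.truststore.password=\n' > /tmp/client-demo.properties
STORES_SECRET=changeit   # placeholder secret for the demonstration
# Same substitution pattern as above: drop the leading '#' and set the value.
sed -i "s/^[#]*\s*ssl.truststore.password=.*/ssl.truststore.password=$STORES_SECRET/" /tmp/client-demo.properties
cat /tmp/client-demo.properties   # prints: ssl.truststore.password=changeit
```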

For FIPS Encryption Without Client Authentication:

kubectl exec th-kafka-0 -n $(kubectl get ns|awk '/arcsight/ {print $1}') -- sh -c 'KAFKA_OPTS+=" -Djavax.net.ssl.trustStore=/etc/kafka/secrets/th-kafka.truststore " && \
KAFKA_OPTS+="-Djavax.net.ssl.trustStorePassword=$STORES_SECRET " && \
KAFKA_OPTS+="-Djavax.net.ssl.trustStoreProvider=BCFIPS " && \
KAFKA_OPTS+="-Djavax.net.ssl.trustStoreType=BCFKS " && \
kafka-topics --bootstrap-server th-kafka-svc:9093 --alter --topic {topic_to_update} --partitions {number_of_partitions} --command-config /etc/kafka/client2.properties'

For non-FIPS Encryption Without Client Authentication:

kubectl exec th-kafka-0 -n $(kubectl get ns|awk '/arcsight/ {print $1}') -- sh -c 'KAFKA_OPTS+=" -Djavax.net.ssl.trustStore=/etc/kafka/secrets/th-kafka.truststore " && \
KAFKA_OPTS+="-Djavax.net.ssl.trustStorePassword=$STORES_SECRET " && \
kafka-topics --bootstrap-server th-kafka-svc:9093 --alter --topic {topic_to_update} --partitions {number_of_partitions} --command-config /etc/kafka/client2.properties'
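In the two variants above, the KAFKA_OPTS+= lines accumulate JVM trust-store flags into a single variable that the kafka-topics launcher passes to the JVM. A sketch of the append behavior in isolation (the password here is a placeholder):

```shell
# bash string append (+=) accumulates JVM flags into one variable.
KAFKA_OPTS=""
KAFKA_OPTS+=" -Djavax.net.ssl.trustStore=/etc/kafka/secrets/th-kafka.truststore"
KAFKA_OPTS+=" -Djavax.net.ssl.trustStorePassword=example"   # placeholder value
echo "$KAFKA_OPTS"
```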

Copy the selected command (or both commands, in the case of FIPS or non-FIPS encryption with client authentication) and execute it four times according to the following table:

Command Execution   Replace {topic_to_update} with:     Replace {number_of_partitions} with:
First               th-arcsight-avro                    A number of partitions that complies with your Recon requirements
Second              mf-event-avro-esmfiltered           The same number as in the first execution
Third               th-cef                              The same number as in the first execution
Fourth              mf-event-avro-enriched              The same number as in the first execution
Note: Standard Kafka topic settings only permit increasing the number of partitions, not decreasing them.

Finally, use the Kafka manager to verify that the partition number for the th-cef, th-arcsight-avro, mf-event-avro-enriched, and mf-event-avro-esmfiltered topics has been updated to the desired value.
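The four executions above can also be sketched as a loop. This dry run prints an abbreviated form of each kafka-topics invocation instead of executing it, and assumes a 3-node database cluster:

```shell
# Dry run: print the four alter invocations instead of executing them.
PARTITIONS=$((3 * 12))   # assumed 3-node database cluster
for TOPIC in th-arcsight-avro mf-event-avro-esmfiltered th-cef mf-event-avro-enriched; do
  echo "kafka-topics --alter --topic $TOPIC --partitions $PARTITIONS"
done
```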


Updating the OMT Hard Eviction Policy

You need to update the Kubernetes hard eviction policy from the default of 15% to 100 GB to maximize disk usage.

To update the OMT Hard Eviction Policy, perform the following steps on each worker node after deployment has completed successfully. Verify that the operation succeeds on one worker node before proceeding to the next.

The eviction-hard threshold can be defined as either a percentage or a specific amount; which form to use depends on the size of the volume storage.
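To compare the two forms, it can help to express the 100 GB threshold as a percentage of the node's disk. A minimal sketch assuming a 1 TB nodefs volume:

```shell
DISK_GB=1000       # assumed nodefs volume size: 1 TB
THRESHOLD_GB=100   # hard eviction threshold used in this procedure
PCT=$((100 * THRESHOLD_GB / DISK_GB))
echo "${PCT}%"     # prints 10%
```

On this assumed disk, the 100 GB absolute threshold is stricter than the 15% default would be.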

To update the policy:

  1. Run the following commands:
    cp /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kubelet.service.orig
    vim /usr/lib/systemd/system/kubelet.service
  2. In the file, after ExecStart=/usr/bin/kubelet \, add the following line:
    --eviction-hard=memory.available<100Mi,nodefs.available<100Gi,imagefs.available<2Gi \
  3. Save your change to the file.

  4. To activate the change, run the following command:

    systemctl daemon-reload ; systemctl restart kubelet
  5. To verify the change, run:

    systemctl status kubelet

    No error should be reported.