Labeling On-premises Worker Nodes

Labeling identifies the application workloads that can run on a specific node and qualifies that node as a candidate to host them. For example, labeling a node with kafka=yes indicates that a Kafka instance will run on that node. The labels tell Kubernetes the types of workloads that can run on a specific host system.
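As an illustration, after labeling is complete you can list the nodes that are eligible to run Kafka by selecting on that label. This is an optional check, not part of the required procedure, and assumes kubectl access to the cluster:

    kubectl get nodes -l kafka=yes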

Immediately following deployment of your chosen capabilities, many of their associated pods will remain in a Pending state until you complete the labeling process. For example, the following Transformation Hub pods will be pending: th-kafka, th-zookeeper, th-kafka-manager, th-web-service, and th-schemaregistry.
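If you want to see which pods are still waiting to be scheduled, one way is to filter on the Pending phase. This is a general sketch; the exact pod names and namespaces depend on your deployment:

    kubectl get pods --all-namespaces --field-selector=status.phase=Pending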

When you finish labeling the nodes, Kubernetes immediately schedules and starts the label-dependent containers on the labeled nodes. Starting the services might take 15 minutes or more to complete. For more information about labeling, see Understanding Labels and Pods.
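Because startup can take some time, you can watch the pods transition out of the Pending state rather than re-running commands manually. This is an optional check, not part of the labeling procedure:

    kubectl get pods --all-namespaces --watch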

Labels required for worker nodes include the following:

Capability                     Required Labels
ArcSight ESM Command Center    fusion=yes
ArcSight Layered Analytics     fusion=yes
ArcSight Recon                 fusion=yes
Fusion                         fusion=yes
Intelligence                   fusion=yes, intelligence=yes, intelligence-datanode=yes,
                               intelligence-spark=yes, intelligence-namenode=yes
Transformation Hub             kafka=yes, zk=yes, th-processing=yes, th-platform=yes, fusion=yes

Apply the intelligence-namenode=yes label to one node only. That node must match the hostname or IP address entered in the HDFS NameNode field on the Intelligence tab of the OMT Management Portal.
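Because intelligence-namenode=yes must be applied to exactly one node, a labeling command for that node might look like the following sketch, where <intelligence_node> is a placeholder for the node that matches the HDFS NameNode field:

    kubectl label node <intelligence_node> fusion=yes intelligence=yes intelligence-datanode=yes intelligence-spark=yes intelligence-namenode=yes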

Perform the following steps to label your worker nodes:

  1. Retrieve a list of worker nodes by running the following command:

    kubectl get nodes
  2. Label the first worker node by running the following command, supplying the labels required for the capabilities that node will host:

    kubectl label node <node_name> <label_1> <label_2> <label_3> ... <label_n>

    For example, to label a node that will host Transformation Hub:

    kubectl label node <node_name> zk=yes kafka=yes th-processing=yes th-platform=yes fusion=yes
  3. Repeat Step 2 for each worker node.
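
After labeling all nodes, you can confirm that the labels were applied as intended by listing the labels on each node. Pods that were Pending should then begin to start on the matching nodes:

    kubectl get nodes --show-labels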