Understanding Labels and Pods
During installation, you apply labels, which are associated with the deployed capabilities, to the worker nodes in the Kubernetes cluster. The labels indicate to Kubernetes the various types of workloads that can run on a specific host system. Based on the labels, Kubernetes then assigns pods to the nodes to provide functions, tasks, and services. Each pod belongs to a specific namespace in the OMT Management Portal. On occasion, you might need to restart pods or reconfigure the environment by moving labels to different nodes, thus reassigning the workload of the pods.
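For example, a minimal way to restart a pod is to delete it and let Kubernetes recreate it from its controller; the following sketch assumes a hypothetical pod name and uses only standard kubectl commands:

```
# List the pods in the arcsight-installer namespace to find the one to restart
kubectl get pods -n arcsight-installer

# Deleting a pod that is managed by a Deployment or StatefulSet causes
# Kubernetes to recreate it, which effectively restarts it
# (<pod-name> is a placeholder)
kubectl delete pod <pod-name> -n arcsight-installer
```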
NOTE: In this documentation and in the OMT Management Portal, the label format is <label name>:yes. However, when using the kubectl command line, the label format is <label name>=yes.
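For example, assuming a worker node named worker-1 (a placeholder), adding, viewing, and removing a label from the kubectl command line looks like this:

```
# Add a label to a worker node; note the = rather than the : used in the portal
kubectl label node worker-1 fusion=yes

# View the labels currently assigned to the node
kubectl get node worker-1 --show-labels

# Remove the label (the trailing hyphen deletes it), for example when
# moving a workload to a different worker node
kubectl label node worker-1 fusion-
```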
- Adding Labels to Worker Nodes
- Understanding the Pods that Do Not Have Labels
- Understanding Pods that Run on Master Nodes
Adding Labels to Worker Nodes
Depending on the capabilities that you deploy, you must assign a certain set of labels to the Worker Nodes. Each of the following sections describes the pods and their associated capabilities that are installed for each assigned label.
To avoid issues caused by conflicting label assignments, review the following considerations.
- Labeling for the Intelligence capability:
  - The HDFS NameNode, which corresponds with the intelligence-namenode:yes label, should run on one worker node only. The worker node must match the hostname or IP address that you provided in the HDFS NameNode field in the OMT Management Portal > Configure/Deploy page > Intelligence.
  - Assign the label for Spark2, intelligence-spark:yes, to the same worker nodes where you placed the intelligence-datanode:yes label, as shown in the kubectl sketch after this list.
- For Transformation Hub's Kafka and ZooKeeper, make sure that the number of nodes you have labeled corresponds to the number of worker nodes in the Kafka cluster and the number of worker nodes running ZooKeeper in the Kafka cluster properties on the pre-deployment configuration page. The default number is 3 for a Multiple Worker deployment.
- Although ESM Command Center, Recon, Intelligence, and SOAR all require Fusion, you do not need to assign the label for Fusion to more than one worker node.
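The following sketch applies the Intelligence labeling considerations from the kubectl command line; the node names are placeholders, and the -l selector applies the Spark label to every node that already carries the DataNode label:

```
# Label exactly one worker node as the HDFS NameNode; the node must match
# the hostname or IP address entered in the HDFS NameNode field
kubectl label node worker-1.example.com intelligence-namenode=yes

# Label the worker nodes that run the HDFS DataNodes
kubectl label nodes worker-2.example.com worker-3.example.com intelligence-datanode=yes

# Co-locate Spark2 with the DataNodes by selecting on the existing label
kubectl label nodes -l intelligence-datanode=yes intelligence-spark=yes
```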
fusion:yes
The Fusion capability includes many of the core services needed for your deployed products, including the Dashboard and user management; all deployed capabilities require Fusion. Add the fusion:yes
label to the Worker Nodes where you want to run the associated pods. For high availability, add this label to multiple worker nodes.
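For example, to label several worker nodes for high availability and confirm the assignment (the node names are placeholders):

```
# Label multiple worker nodes in a single command
kubectl label nodes worker-1 worker-2 worker-3 fusion=yes

# Confirm which nodes carry the label; -L adds a column showing its value
kubectl get nodes -L fusion
```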
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
esm-acc-web-app | Manages the user interface for ESM Command Center. The interface connects to an ESM Manager server running outside the Kubernetes cluster. | arcsight-installer | ESM Command Center |
esm-web-app | Manages how ESM Command Center links to main navigation of the Platform user interface. | arcsight-installer | ESM Command Center |
esm-widgets | Manages the dashboards and widgets that are designed to incorporate data from ESM. The widgets connect to an ESM Manager server running outside of the Kubernetes cluster. For example, when you start this pod, it installs the provided How is my SOC running? dashboard. | arcsight-installer | ESM Command Center |
fusion-arcmc-web-app | Manages the user interface for ArcSight Management Center. | arcsight-installer | Fusion |
fusion-common-doc-web-app | Provides the context-sensitive user guides for Fusion (the Platform), Recon, and Reporting. | arcsight-installer | Fusion |
fusion-metadata-web-app | Manages the REST API for the metadata of the Dashboard feature. | arcsight-installer | Fusion |
fusion-dashboard-web-app | Manages the framework, including the user interface, for the Dashboard feature. | arcsight-installer | Fusion |
fusion-db-monitoring-web-app | Manages the REST API for the database monitoring function. | arcsight-installer | Fusion |
fusion-db-search-engine | Provides APIs to access data in the ArcSight Database. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Fusion |
fusion-metadata-rethinkdb | Manages the RethinkDB database, which stores information about a user's preferences and configurations. | arcsight-installer | Fusion |
fusion-single-sign-on | Manages the SSO service that enables users to log in to any of the deployed capabilities and the consoles for ArcSight Intelligence, SOAR, and ESM Command Center. | arcsight-installer | Fusion |
fusion-ui-services | Manages the framework, including the user interface, for the primary navigation functions in the user interface. | arcsight-installer | Fusion |
fusion-user-management | Manages the framework, including the user interface, for the user management function. | arcsight-installer | Fusion |
interset-widgets | Manages the widgets that are designed to incorporate data from ArcSight Intelligence. The widgets connect to an Intelligence server running outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
layered-analytics-widgets | Manages and installs the widgets that can incorporate data from multiple capabilities. For example, the provided Entity Priority widget connects to ESM Command Center and Intelligence servers outside the Kubernetes cluster to display entity data. | arcsight-installer | Layered Analytics |
recon-analytics | Manages the backend of Outlier Analytics; the user interface for Outlier Analytics is managed by the fusion-search-web-app pod. | arcsight-installer | Recon |
fusion-search-web-app | Manages the lookup lists, Data Quality Dashboard, and Outlier UI capabilities. Also hosts the APIs used for search. | arcsight-installer | Fusion |
fusion-reporting-web-app | Manages the REST API and user interface for the Reporting feature. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Fusion |
fusion-search-and-storage-web-app | Manages the Search and Storage Groups capabilities. | arcsight-installer | Fusion |
fusion-db-adm-schema-mgmt | Manages installation, upgrade, and maintenance of the <tenant>_secops_adm schema and data | arcsight-installer | Fusion |
fusion-arcsight-configuration-service | A secure, shared configuration repository for ArcSight capabilities | arcsight-installer | Fusion |
soar-message-broker | Manages SOAR events | arcsight-installer | SOAR |
soar-web-app | Manages SOAR backend services | arcsight-installer | SOAR |
soar-db-init | Manages the SOAR DB schema lifecycle | arcsight-installer | SOAR |
soar-jms-migration | Manages SOAR JMS migration | arcsight-installer | SOAR |
soar-widgets | Manages SOAR widget deployment | arcsight-installer | SOAR |
soar-frontend | Manages SOAR user interface services | arcsight-installer | SOAR |
soar-gateway | Manages SOAR user requests | arcsight-installer | SOAR |
th-enrichment-processor-group | Manages the instances that process events coming from the selected source topic (by default, th-arcsight-avro) by executing enrichment tasks, which include generating a Global ID. Events are then routed to the topic mf-event-avro-enriched. The default is 2 instances. | arcsight-installer | Transformation Hub |
intelligence:yes
Add the intelligence:yes
label to Worker Nodes where you want to run the pods that manage functions and services for the ArcSight Intelligence capability. For high availability, add this label to multiple worker nodes.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
elasticsearch-data | Manages the Elasticsearch functions that store all raw events for Interset Analytics and provide all data that drives the user interface. | arcsight-installer | Intelligence |
elasticsearch-master | Manages the Elasticsearch services. | arcsight-installer | Intelligence |
h2 | Stores user identities required to authenticate and authorize users. | arcsight-installer | Intelligence |
interset-analytics | Determines the individual baselines, then discovers and ranks deviations from those baselines for the Intelligence Analytics feature. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
interset-api | Manages the REST API that the Intelligence user interface uses to gather the Intelligence Analytics results. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
interset-exports | Generates the PDF reports of organization risks and the users involved in risky behaviors. | arcsight-installer | Intelligence |
interset-logstash | Manages Logstash, which collects raw events from Transformation Hub and sends them to Elasticsearch for indexing. | arcsight-installer | Intelligence |
interset-spark-config-file-server | Hosts a file server to provide configuration files for Spark3 to consume. | arcsight-installer | Intelligence |
interset-ui | Manages the user interface that displays the Intelligence Analytics results and the raw data in the Intelligence dashboard. | arcsight-installer | Intelligence |
intelligence-arcsightconnector-api | Manages APIs related to licensing support and provides Fusion menu registration for Intelligence. | arcsight-installer | Intelligence |
intelligence-tuning-api | Manages APIs that tune the Intelligence Analytics metadata that can change the Intelligence Analytics results. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
intelligence-tenant-control | Manages tenant configurations and secrets for Intelligence. | arcsight-installer | Intelligence |
searchmanager-api | Manages APIs that provide administrative tools related to Elasticsearch and the search capability in general. | arcsight-installer | Intelligence |
searchmanager-engine | Manages jobs that provide administrative tools related to Elasticsearch and the search capability in general. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
intelligence-datanode:yes
Add the intelligence-datanode:yes
label to Worker Nodes where you want to run the pods that manage HDFS services for the ArcSight Intelligence capability.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
hdfs-datanode | Manages how HDFS stores the results of Intelligence Analytics searches before transferring them to the ArcSight database. The HDFS Datanodes contain blocks of HDFS files. | arcsight-installer | Intelligence |
intelligence-namenode:yes
Add the intelligence-namenode:yes
label to a Worker Node for the HDFS NameNode.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
hdfs-namenode | Manages how the HDFS NameNode stores the location of all HDFS files distributed across the cluster. | arcsight-installer | Intelligence |
intelligence-spark:yes
Add the intelligence-spark:yes
label to Worker Nodes where you want to run the Analytics services for the ArcSight Intelligence capability. For high availability, add this label to multiple worker nodes. To reduce network traffic, add the label to the same worker nodes where you placed the intelligence-datanode:yes
label.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
Spark2 | Launches when users run the Intelligence Analytics feature. Spark2 generates multiple pods, changing the names of the pods according to the different phases of the analytics tasks. | arcsight-installer | Intelligence |
kafka:yes
Add the kafka:yes
label to Worker Nodes where you want to run the Kafka Broker functions and services for the Transformation Hub capability.
The number of worker nodes that you label must match the # of Kafka broker nodes in the Kafka cluster setting in the OMT Management Portal > Configure/Deploy > Transformation Hub > Kafka and Zookeeper Configuration. The default number is 3; a command for verifying the count of labeled nodes follows the table.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-kafka | Manages the Kafka Broker, to which publishers and consumers connect so they can exchange messages over Kafka. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Transformation Hub |
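To confirm that the number of labeled nodes matches the configured broker count, you can count them with a label selector, as in this sketch:

```
# Count the worker nodes labeled for Kafka; the result should equal the
# "# of Kafka broker nodes in the Kafka cluster" setting (default: 3)
kubectl get nodes -l kafka=yes --no-headers | wc -l
```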
th-platform:yes
Add the th-platform:yes
label to Worker Nodes where you want to run the Kafka Manager, schema registry, and WebServices for the Transformation Hub capability. For high availability, add this label to multiple worker nodes.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-kafka-manager | Provides the user interface that allows the Kafka Manager to manage the Kafka Brokers. | arcsight-installer | Transformation Hub |
th-schemaregistry | Provides the schema registry that is used for managing the schema of data in Avro format. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Transformation Hub |
th-web-service | Manages the WebServices module of Transformation Hub. WebServices provides the API that ArcMC uses to retrieve data. NOTE: This pod requires communication outside of the Kubernetes cluster to receive client requests from and initiate connections to ArcMC. | arcsight-installer | Transformation Hub |
th-processing:yes
Add the th-processing:yes
label to Worker Nodes where you want to run services that manage processing for the Transformation Hub capability. For high availability, add this label to multiple worker nodes.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-c2av-processor | Manages the instances that convert CEF messages on the topic th-cef to Avro on the topic th-arcsight-avro. The quantity of instances depends on the number of partitions in the th-cef topic and the load. The default is 0 instances. | arcsight-installer | Transformation Hub |
th-cth | Manages up to 50 instances of connectors in Transformation Hub that distribute the load of data received from collectors by creating a consumer group that is based on the source and destination topic names. | arcsight-installer | Transformation Hub |
th-c2av-processor-esm | Manages the instances that convert CEF messages on the topic mf-event-cef-esm-filtered to Avro on the topic mf-event-avro-esmfiltered. The quantity of instances depends on the number of partitions in the source topic and the load. The default is 0 instances. | arcsight-installer | Transformation Hub |
th-routing-processor-group | Manages the routing rules for topics. Use ArcMC to configure the rules. | arcsight-installer | Transformation Hub |
zk:yes
Add the zk:yes label to Worker Nodes where you want to run Kafka ZooKeeper for the Transformation Hub capability.
The number of worker nodes that you label must match the # of Zookeeper nodes in the Zookeeper cluster setting in the OMT Management Portal > Configure/Deploy > Transformation Hub > Kafka and Zookeeper Configuration. The default number is 3.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-zookeeper | Manages Kafka Zookeeper, which stores metadata about partitions and brokers. | arcsight-installer | Transformation Hub |
Understanding the Pods that Do Not Have Labels
The Platform includes several pods that are not associated with a deployed capability and thus do not require a label. The installation process automatically creates these pods.
Pod | Description | Namespace |
---|---|---|
autopass-lm | Manages the Autopass service, which tracks license keys. | arcsight-installer |
itom-pg-backup | Performs backup of the PostgreSQL database. | arcsight-installer |
suite-reconf-pod-arcsight-installer | Manages the Reconfiguration features in the OMT Management Portal. | arcsight-installer |
Understanding Pods that Run on Master Nodes
The Platform includes pods that run on the master nodes.
Pod | Description | Namespace |
---|---|---|
itom-postgresql-default | Manages the PostgreSQL database, which stores information for SOAR, ArcMC, OMT status, and license keys. | |
idm | Manages user authentication and authorization for the OMT Management Portal. | core |
nginx-ingress-controller | Provides the proxy web server that end users need to connect to the deployed capabilities. By default, the server uses HTTPS on port 443. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer |