Understanding Labels and Pods
During installation, you apply labels, which are associated with the deployed capabilities, to the worker nodes in the Kubernetes cluster. The labels indicate to Kubernetes the various types of workloads that can run on a specific host system. Based on the labels, Kubernetes then assigns pods to the nodes to provide functions, tasks, and services. Each pod belongs to a specific namespace in the CDF Management Portal. On occasion, you might need to restart pods or reconfigure the environment by moving labels to different nodes, thus reassigning the workload of the pods.
NOTE: In the CDF Management Portal, the label format is <label name>:yes. However, when using the kubectl command line, the label format is <label name>=yes.
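For example, a minimal kubectl session for assigning, moving, and removing a label (the node hostnames are placeholders for your own worker nodes):

```
# Assign the fusion label to a worker node (kubectl uses the = format)
kubectl label node worker-1.example.com fusion=yes

# Move the label to a different node: remove it (trailing hyphen), then reapply
kubectl label node worker-1.example.com fusion-
kubectl label node worker-2.example.com fusion=yes
```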
- Adding Labels to Worker Nodes
- Understanding the Pods that Do Not Have Labels
- Understanding Pods that Run on Master Nodes
Adding Labels to Worker Nodes
Depending on the capabilities that you deploy, you must assign a set of labels to the Worker Nodes. Each of the following sections defines the pods and their associated capabilities that are installed for an assigned label.
To avoid issues caused by conflicting label assignments, review the following considerations.
- Labeling for the Intelligence capability:
  - The HDFS NameNode, which corresponds with the intelligence-namenode:yes label, should run on one worker node only. The worker node must match the hostname or IP address that you provided in the HDFS NameNode field in the CDF Management Portal > Configure/Deploy page > Intelligence.
  - Assign the label for Spark2, intelligence-spark:yes, to the same worker nodes where you placed the intelligence-datanode:yes label.
- For Transformation Hub's Kafka and ZooKeeper, make sure that the number of nodes you have labeled corresponds to the number of worker nodes in the Kafka cluster and the number of worker nodes running ZooKeeper in the Kafka cluster properties on the pre-deployment configuration page. The default number is 3 for a Multiple Worker deployment. (You can review label assignments with kubectl, as shown after this list.)
- Although ESM Command Center, Recon, Intelligence, and SOAR all require Fusion, you do not need to assign the label for Fusion to more than one worker node.
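To review how labels are currently distributed before you deploy, you can list the label values as columns across all nodes, as in this sketch:

```
# Show which nodes carry which deployment labels (blank cells mean unlabeled)
kubectl get nodes -L fusion,kafka,zk,intelligence,th-platform,th-processing

# List only the nodes labeled for Kafka
kubectl get nodes -l kafka=yes
```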
fusion:yes
The Fusion capability includes many of the core services needed for your deployed products, including the Dashboard and user management; all deployed capabilities require Fusion. Add the fusion:yes label to the Worker Nodes where you want to run the associated pods. For high availability, add this label to multiple worker nodes.
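For example, to spread the Fusion pods across several workers for high availability, you can label each node in turn (a sketch; the worker hostnames are placeholders):

```
# Label three worker nodes so Fusion pods can be scheduled on any of them
for node in worker-1.example.com worker-2.example.com worker-3.example.com; do
  kubectl label node "$node" fusion=yes
done
```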
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
esm-acc-web-app | Manages the user interface for ESM Command Center. The interface connects to an ESM Manager server running outside the Kubernetes cluster. | arcsight-installer | ESM Command Center |
esm-web-app | Manages how ESM Command Center links to main navigation of the Platform user interface. | arcsight-installer | ESM Command Center |
esm-widgets | Manages the dashboards and widgets that are designed to incorporate data from ESM. The widgets connect to an ESM Manager server running outside of the Kubernetes cluster. For example, when you start this pod, it installs the provided How is my SOC running? dashboard. | arcsight-installer | ESM Command Center |
fusion-arcmc-web-app | Manages the user interface for ArcSight Management Center. | arcsight-installer | Fusion |
fusion-common-doc-web-app | Provides the context-sensitive user guides for Fusion (the Platform), Recon, and Reporting. | arcsight-installer | Fusion |
fusion-metadata-web-app | Manages the REST API for the metadata of the Dashboard feature. | arcsight-installer | Fusion |
fusion-dashboard-web-app | Manages the framework, including the user interface, for the Dashboard feature. | arcsight-installer | Fusion |
fusion-db-monitoring-web-app | Manages the REST API for the database monitoring function. | arcsight-installer | Fusion |
fusion-db-search-engine | Provides APIs to access data in the ArcSight Database. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Fusion |
fusion-metadata-rethinkdb | Manages the RethinkDB database, which stores information about a user's preferences and configuration. | arcsight-installer | Fusion |
fusion-single-sign-on | Manages the SSO service that enables users to log in to any of the deployed capabilities and the consoles for ArcSight Intelligence, SOAR, and ESM Command Center. | arcsight-installer | Fusion |
fusion-ui-services | Manages the framework, including the user interface, for the primary navigation functions in the user interface. | arcsight-installer | Fusion |
fusion-user-management | Manages the framework, including the user interface, for the user management function. | arcsight-installer | Fusion |
soar-message-broker | Manages SOAR JMS messages. | arcsight-installer | Fusion |
soar-web-app | Manages SOAR services and capabilities. | arcsight-installer | Fusion |
soar-db-init | Manages the SOAR DB schema and creates associated structures. | arcsight-installer | Fusion |
soar-jms-migration | Manages the migration of SOAR JMS messages to the next release. | arcsight-installer | Fusion |
soar-frontend | Manages the SOAR user interface. | arcsight-installer | Fusion |
soar-widgets | Deploys SOAR Fusion widgets. | arcsight-installer | Fusion |
interset-widgets | Manages the widgets that are designed to incorporate data from ArcSight Intelligence. | arcsight-installer | Intelligence |
layered-analytics-widgets | Manages and installs the widgets that can incorporate data from multiple capabilities. For example, the provided Entity Priority widget connects to Intelligence and to an ESM Command Center server outside the Kubernetes cluster to display entity data. | arcsight-installer | Layered Analytics |
recon-analytics | Manages the backend of Outlier Analytics; the user interface for Outlier Analytics is managed by the recon-search-web-app pod. | arcsight-installer | Recon |
recon-search-web-app | Manages the Search, Lookup lists, and Data Quality Dashboard functions, as well as the user interface for Outlier Analytics. | arcsight-installer | Recon |
reporting-web-app | Manages the REST API and user interface for the Reporting feature. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Recon |
recon-search-and-storage-web-app | Manages the configuration of and sends events to storage groups. | arcsight-installer | Recon |
intelligence:yes
Add the intelligence:yes label to Worker Nodes where you want to run the pods that manage functions and services for the ArcSight Intelligence capability. For high availability, add this label to multiple worker nodes.
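After labeling, you can confirm that the Intelligence pods were scheduled onto the labeled nodes (a sketch; the pod name patterns follow the table below):

```
# Show Intelligence pods together with the node each one landed on
kubectl get pods -n arcsight-installer -o wide | grep -E 'interset|elasticsearch|intelligence'
```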
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
elasticsearch-data | Manages the Elasticsearch functions that store all raw events for Interset Analytics and provide all data that drives the user interface. | arcsight-installer | Intelligence |
elasticsearch-master | Manages the Elasticsearch services. | arcsight-installer | Intelligence |
h2 | Stores user identities required to authenticate and authorize users. | arcsight-installer | Intelligence |
interset-analytics | Determines the individual baselines, then discovers and ranks deviations from those baselines for the Intelligence Analytics feature. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
interset-api | Manages the REST API that the Intelligence user interface uses to gather the Intelligence Analytics results. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
interset-exports | Generates the PDF reports of organization risks and the users involved in risky behaviors. | arcsight-installer | Intelligence |
interset-logstash | Manages Logstash, which collects raw events from Transformation Hub and sends them to Elasticsearch for indexing. | arcsight-installer | Intelligence |
interset-spark-config-file-server | Hosts a file server to provide configuration files for Spark3 to consume. | arcsight-installer | Intelligence |
interset-ui | Manages the user interface that displays the Intelligence Analytics results and the raw data in the Intelligence dashboard. | arcsight-installer | Intelligence |
intelligence-arcsightconnector-api | Manages APIs related to licensing support and provides Fusion menu registration for Intelligence. | arcsight-installer | Intelligence |
intelligence-tuning-api | Manages APIs that tune the Intelligence Analytics metadata that can change the Intelligence Analytics results. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
searchmanager-api | Manages APIs that provide administrative tools related to Elasticsearch and the search capability in general. | arcsight-installer | Intelligence |
searchmanager-engine | Manages jobs that provide administrative tools related to Elasticsearch and the search capability in general. This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Intelligence |
intelligence-datanode:yes
Add the intelligence-datanode:yes label to Worker Nodes where you want to run the pods that manage HDFS services for the ArcSight Intelligence capability.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
hdfs-datanode | Manages how HDFS stores the results of Intelligence Analytics searches before transferring them to the ArcSight database. The HDFS Datanodes contain blocks of HDFS files. | arcsight-installer | Intelligence |
intelligence-namenode:yes
Add the intelligence-namenode:yes label to a Worker Node for the HDFS NameNode.
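Because the NameNode must run on exactly one worker node, matching the host you entered in the CDF Management Portal, a labeling sketch looks like this (the hostname is a placeholder):

```
# Label only the node you specified in the HDFS NameNode field
kubectl label node hdfs-namenode-host.example.com intelligence-namenode=yes

# Verify that exactly one node carries the label
kubectl get nodes -l intelligence-namenode=yes --no-headers | wc -l
```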
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
hdfs-namenode | Manages how the HDFS NameNode stores the location of all HDFS files distributed across the cluster. | arcsight-installer | Intelligence |
intelligence-spark:yes
Add the intelligence-spark:yes label to Worker Nodes where you want to run the Analytics services for the ArcSight Intelligence capability. For high availability, add this label to multiple worker nodes. To reduce network traffic, add the label to the same worker nodes where you placed the intelligence-datanode:yes label.
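One way to guarantee the co-location is to apply the Spark label through a selector that matches the nodes already labeled for the datanode, as in this sketch:

```
# Add intelligence-spark=yes to every node that already has intelligence-datanode=yes
kubectl label nodes -l intelligence-datanode=yes intelligence-spark=yes
```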
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
Spark2 | Launches when users run the Intelligence Analytics feature. Spark2 generates multiple pods, changing the names of the pods according to the different phases of the analytics tasks. | arcsight-installer | Intelligence |
kafka:yes
Add the kafka:yes label to Worker Nodes where you want to run the Kafka Broker functions and services for the Transformation Hub capability.
Make sure that the number of nodes that you label matches the # of Kafka broker nodes in the Kafka cluster setting in the CDF Management Portal > Configure/Deploy > Transformation Hub > Kafka and Zookeeper Configuration. The default number is 3.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-kafka | Manages the Kafka Broker, to which publishers and consumers connect so they can exchange messages over Kafka. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Transformation Hub |
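You can check that the labeled node counts match the broker and ZooKeeper counts you configured (a sketch; 3 is the default for a Multiple Worker deployment):

```
# Count the nodes labeled for Kafka brokers; compare with the broker setting (default 3)
kubectl get nodes -l kafka=yes --no-headers | wc -l

# The same check applies to the ZooKeeper label
kubectl get nodes -l zk=yes --no-headers | wc -l
```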
th-platform:yes
Add the th-platform:yes label to Worker Nodes where you want to run the Kafka Manager, schema registry, and WebServices for the Transformation Hub capability. For high availability, add this label to multiple worker nodes.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-kafka-manager | Provides the user interface that allows the Kafka Manager to manage the Kafka Brokers. | arcsight-installer | Transformation Hub |
th-schemaregistry | Provides the schema registry that is used for managing the schema of data in Avro format. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer | Transformation Hub |
th-web-service | Manages the WebServices module of Transformation Hub. WebServices provides the API that ArcMC uses to retrieve data. NOTE: This pod requires communication outside of the Kubernetes cluster to receive client requests from and initiate connections to ArcMC. | arcsight-installer | Transformation Hub |
th-processing:yes
Add the th-processing:yes label to Worker Nodes where you want to run services that manage processing for the Transformation Hub capability. For high availability, add this label to multiple worker nodes.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-c2av-processor | Manages the instances that convert CEF messages on the topic th-cef to Avro on the topic th-arcsight-avro. The quantity of instances depends on the number of partitions in the th-cef topic and the load. The default is 0 instances. | arcsight-installer | Transformation Hub |
th-cth | Manages up to 50 instances of connectors in Transformation Hub that distribute the load of data received from collectors by creating a consumer group that is based on the source and destination topic names. | arcsight-installer | Transformation Hub |
th-c2av-processor-esm | Manages the instances that convert CEF messages on the topic mf-event-cef-esm-filtered to Avro on the topic mf-event-avro-esmfiltered. The quantity of instances depends on the number of partitions in the mf-event-cef-esm-filtered topic and the load. The default is 0 instances. | arcsight-installer | Transformation Hub |
th-routing-processor-group | Manages the routing rules for topics. Use ArcMC to configure the rules. | arcsight-installer | Transformation Hub |
zk:yes
Add the zk:yes label to Worker Nodes where you want to run Kafka ZooKeeper for the Transformation Hub capability.
Make sure that the number of nodes that you label matches the # of Zookeeper nodes in the Zookeeper cluster setting in the CDF Management Portal > Configure/Deploy > Transformation Hub > Kafka and Zookeeper Configuration. The default number is 3.
Pod | Description | Namespace | Associated Capability |
---|---|---|---|
th-zookeeper | Manages Kafka Zookeeper, which stores metadata about partitions and brokers. | arcsight-installer | Transformation Hub |
Understanding the Pods that Do Not Have Labels
The Platform includes several pods that are not associated with a deployed capability and thus do not require a label. The installation process automatically creates these pods.
Pod | Description | Namespace |
---|---|---|
autopass-lm | Manages the AutoPass service, which tracks license keys. | arcsight-installer |
itom-pg-backup | Performs backup of the PostgreSQL database. | arcsight-installer |
suite-reconf-pod-arcsight-installer | Manages the Reconfiguration features in the CDF Management Portal. | arcsight-installer |
Understanding Pods that Run on Master Nodes
The Platform includes pods that run on the master nodes.
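To see which pods are running on a given master node, you can filter by node name (the hostname is a placeholder):

```
# List all pods scheduled on a specific master node, across all namespaces
kubectl get pods -A -o wide --field-selector spec.nodeName=master-1.example.com
```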
Pod | Description | Namespace |
---|---|---|
itom-postgresql-default | Manages the PostgreSQL database, which stores information for SOAR, ArcMC, CDF status, and license keys. | |
idm | Manages user authentication and authorization for the CDF Management Portal. | core |
nginx-ingress-controller | Provides the proxy web server that end users need to connect to the deployed capabilities. By default, the server uses HTTPS and port 443. NOTE: This pod requires communication outside of the Kubernetes cluster. | arcsight-installer |