Understanding Firewall Ports for the ArcSight Platform

This section lists the ports that must be open for the elements that make up the ArcSight Platform:

Firewall Ports for OMT Infrastructure Components

The following tables list the ports that must be open for the OMT infrastructure components:

In most cases, the firewalls for these components are host-based. These components are not likely to have network-based firewalls between them.

In most cases, you do not need to take action to configure the firewalls for these ports.
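If you do need to confirm that one of the ports in the following tables is reachable from a given node, a simple TCP connect test is usually enough. The sketch below is a minimal Python 3 example; the target host and port list are illustrative placeholders rather than values prescribed by this guide, and it covers only TCP ports (UDP ports such as 8472 and 111 cannot be verified reliably with a connect test).

```python
import socket

# Illustrative values only: replace with the target node you need to reach
# and the ports you want to verify from the node where you run this check.
TARGET_HOST = "controlplane.example.com"   # hypothetical host name
PORTS_TO_CHECK = [8200, 8201, 5443]        # example OMT ports from the tables below

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in PORTS_TO_CHECK:
        state = "open" if tcp_port_open(TARGET_HOST, port) else "blocked or closed"
        print(f"{TARGET_HOST}:{port} -> {state}")
```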

OMT Vault

Ports Protocol Source Server Target Server Description
8200 TCP Control plane and worker Control plane

Used by the itom-vault service, which provides a secured configuration store

All cluster nodes should be able to access this port for the client connection.

8201 TCP Control plane and worker Control plane

Used by the itom-vault service, which provides a secured configuration store

All cluster nodes must be able to access this port for peer member connections.

OMT Management Portal

Ports Protocol Source Server Target Server Description
3000 TCP All clients Control plane

The port is exposed on the ingress node. All clients should be able to access this port. Used only for accessing the OMT Management Portal from a web browser during OMT installation

Web clients must be able to access this port during the OMT installation. Post-installation, this port can be blocked, and re-opened only if re-installation is required.

After installation, web clients use port 5443 to access the OMT Management Portal.

5443 TCP All clients Control plane

The port is exposed on the ingress node. All clients should be able to access this port. Used for accessing the OMT Management Portal from a web browser after OMT deployment

Web clients must be able to access this port for OMT administration and management.

5444 TCP All clients Control plane

The port is exposed on the ingress node. All nodes should be able to access this port when using two-way (mutual) certificate authentication. Used for accessing the OMT Management Portal from a web browser after OMT deployment, when using two-way (mutual) TLS authentication

Web clients must be able to access this port for OMT administration and management, when using two-way (mutual) TLS authentication.
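To verify that a web client can reach the portal after deployment, you can attempt a TLS handshake against port 5443, and present a client certificate when testing port 5444. The following is a minimal sketch using Python's standard ssl module; the host name and certificate paths are illustrative assumptions, and certificate verification is disabled because this is purely a reachability check.

```python
import socket
import ssl
from typing import Optional

PORTAL_HOST = "omt-portal.example.com"   # hypothetical ingress node name
CLIENT_CERT = "client.crt"               # illustrative paths, needed only for port 5444
CLIENT_KEY = "client.key"

def tls_handshake(host: str, port: int, client_cert: Optional[str] = None,
                  client_key: Optional[str] = None) -> str:
    """Attempt a TLS handshake and report the negotiated protocol version."""
    context = ssl.create_default_context()
    # Reachability test only: do not verify the portal certificate chain here.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    if client_cert and client_key:
        # Port 5444 expects the client to present a certificate (mutual TLS).
        context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() or "unknown"

if __name__ == "__main__":
    print("5443:", tls_handshake(PORTAL_HOST, 5443))
    print("5444:", tls_handshake(PORTAL_HOST, 5444, CLIENT_CERT, CLIENT_KEY))
```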

Kubernetes

Ports Protocol Source Server Target Server Description
2380 TCP Control plane Control plane

Used by the etcd component, which provides a distributed configuration database

All the master nodes should be able to access this port for the etcd cluster communication.

This port will need to be opened only in multi-master deployments.

4001 TCP Control plane and Worker Control plane

Used by the etcd component, which provides a distributed configuration database

All cluster nodes should be able to access this port for the client connection.

This port will need to be opened only in multi-master deployments, or if worker nodes require access to this port.

7443 TCP Control plane and Worker Control plane

(Conditional) Used by the Kubernetes API server when performing one of the following methods of installation:

  • Using the provided scripts

  • Installing manually and on the same node as ESM

All cluster nodes should be able to access this port for internal communication.

8443 TCP Control plane and Worker Control plane

(Conditional) Used by the Kubernetes API server when manually installing on a different node from ESM.

All cluster nodes should be able to access this port for internal communication.

8472 UDP Control plane and Worker Control plane and Worker

Uses UDP protocol

Used by the Flannel service component, which manages the internal cluster networking

All cluster nodes should be able to access this port for internal communication.

10250 TCP Control plane and Worker Control plane and Worker

Used by the Kubelet service, which functions as a local node agent that watches pod specifications through the Kubernetes API server

All cluster nodes should be able to access this port for internal communication, and the control plane uses the Kubelet API on worker nodes for exec and logs.

10259 TCP Access by localhost only Control plane

Used by the kube-scheduler component that watches for any new pod with no assigned node and assigns a node to the pod

All cluster nodes should be able to access this port for internal communication.

This port will need to be opened only in multi-master deployments.

10257 TCP Control plane and Worker nodes Control plane

Used by the kube-controller-manager component that runs controller processes which regulate the state of the cluster.

All cluster nodes should be able to access this port for internal communication.

This port will need to be opened only in multi-master deployments.

10256 TCP Control plane and worker Control plane and Worker

Used by the Kube-proxy component, which is a network proxy that runs on each node, for exposing the services on each node

All cluster nodes should be able to access this port for internal communication.

Network File System (NFS)

Ports Protocol Source Server Target Server Description
111 TCP/NFS, UDP/NFS Control plane and worker NFS

NFS server port. Used by the portmapper service

All cluster nodes should be able to access this port.

This port must be opened if NFS is running on a cluster node.

2049 TCP/NFS Control plane and worker NFS

Used by the nfsd daemon

All cluster nodes should be able to access this port.

This port must be opened if NFS is running on a cluster node.

Note: This port must be open even during a single-node deployment.

20048 TCP/NFS Control plane and worker NFS

Used by the mountd daemon

All cluster nodes should be able to access this port.

This port must be opened if NFS is running on a cluster node.

Firewall Ports for Deployed Capabilities

The following tables list the ports that must be available when you deploy the associated capability into the OMT infrastructure:

In most cases, you do not need to take action to configure the firewalls for these ports.

ArcMC

Ports Protocol Description
32080, 9000 TCP Used for Transformation Hub and ArcMC communication

Intelligence

Ports Node Direction Description
30820/TCP Worker (HDFS Namenode) Inbound Used for the database to connect to HDFS during Analytics processing
30070/TCP Worker (HDFS Namenode) Inbound Used for the Hadoop Monitoring Dashboard (optional)
30010/TCP Worker (HDFS Datanodes) Inbound Used for communication between the HDFS Namenode and the HDFS Datanodes
30210/TCP Worker (HDFS Datanodes) Inbound Used by the database to establish secure communication with HDFS during Analytics processing
30110/TCP Worker (HDFS Datanodes and Namenode) Inbound Used for communication between the ArcSight Database and HDFS worker nodes
30071/TCP Worker (HDFS Namenode) Inbound Used for Secure Data Transfer with the HDFS cluster

SOAR

The SOAR cluster listens on the following ports on all Kubernetes master and worker nodes, but Micro Focus recommends that you use only the ports on the master virtual IP.

Port Description
32200 Data from ESM

Transformation Hub

Ports Protocol Source Server Target Server Description
2181, 2182 TCP Worker Node Worker Node

Used by ZooKeeper as internal communication ports for client requests (for example, from Kafka).

All cluster nodes should be able to access this port for internal communication.

9092 TCP Client machine, Worker node Worker Node

Port 9092 needs to be opened only if Transformation Hub is configured to accept client connections over a non-encrypted (clear text) channel.

Micro Focus does not recommend this setup, but it is an option when performance is prioritized over security.

9093 TCP Client machine, Worker node Worker Node Required for secure communications with clients.
32092 TCP Client machine, Worker node Worker Node

Port 32092 needs to be opened only if Transformation Hub is configured to accept client connections over a non-encrypted (clear text) channel.

Micro Focus does not recommend this setup, but it is an option when performance is prioritized over security.

32093 TCP Client machine, Worker node Worker Node Required for secure communications with clients.
32080 HTTPS Client machine, Worker node Worker Node Used by Transformation Hub (TH) WebServices as external communication port to serve HTTP requests from ArcMC (externally)
32081 HTTPS Client machine, Worker node Worker Node Used by Schema Registry as external communication port to serve HTTP requests for providing Schemas information for external Avro consumers.
443 HTTPS Client machine   Used by Transformation Hub, ArcMC, Fusion, etc., for UI access
9000 HTTPS Worker Node Worker Node Used by Kafka Manager as internal communication port to provision the Kafka Manager UI access in Transformation Hub. All cluster nodes should be able to access this port for internal communication.
9999 JMX Worker Node Worker Node Used by Kafka as internal communication port to provide monitoring information to Kafka Manager and WebServices (for monitoring purposes). All cluster nodes should be able to access this port for internal communication.
10000 JMXRMI Worker Node Worker Node Used by Kafka as internal communication port to provide extra monitoring information (for monitoring purposes). All cluster nodes should be able to access this port for internal communication.
32101 - 32150 TCP Client machine, Worker node Worker Node

Used by Transformation Hub (TH) as external communication ports to allow ArcMC to communicate with and manage Connectors in Transformation Hub (CTH)

These ports are needed only if you plan to deploy Connectors in Transformation Hub.
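For the TLS-enabled Kafka ports (9093 internally, 32093 when exposed externally), a small test producer is a quick way to confirm that clients can reach Transformation Hub through any intervening firewalls. The sketch below assumes the third-party kafka-python package; the broker names, certificate paths, and topic name are illustrative placeholders, and this is a connectivity check rather than a production producer.

```python
from kafka import KafkaProducer   # third-party package: kafka-python (assumed installed)

# Illustrative values: replace with your worker node FQDNs and the certificates
# issued for your Transformation Hub deployment.
BROKERS = ["th-worker1.example.com:9093", "th-worker2.example.com:9093"]

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    security_protocol="SSL",          # ports 9093/32093 require TLS
    ssl_cafile="ca.cert.pem",         # CA that signed the Transformation Hub certificates
    ssl_certfile="client.cert.pem",   # client certificate, if client authentication is enforced
    ssl_keyfile="client.key.pem",
)

# Send a single test event to an illustrative topic and wait for the broker acknowledgment.
future = producer.send("test-topic", b"firewall connectivity test")
record_metadata = future.get(timeout=10)
print("Delivered to partition", record_metadata.partition, "at offset", record_metadata.offset)
producer.close()
```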

Firewall Ports for Supporting Components

The following tables list the ports that must be available for supporting components:

Database

The database requires several ports to be open on the local network. Micro Focus does not recommend placing a firewall between nodes (all nodes should be behind a firewall), but if you must use a firewall between nodes, ensure that the following ports are available:

Ports Description
TCP 22 Required for the Administration Tools and Management Console Cluster installation wizard
TCP 5433 Used by database clients, such as vsql, ODBC, JDBC, and so on
TCP 5434 Used for intra-cluster and inter-cluster communication
UDP 5433 Used for database spread monitoring
TCP 5438 Used as Management Console-to-node and node-to-node (agent) communication port
TCP 5450 Used to connect to Management Console from a web browser and allows communication from nodes to the Management Console application/web server
TCP 4803 Used for client connections
UDP 4803 Used for daemon to daemon connections
UDP 4804 Used for daemon to daemon connections
UDP 6543 Used to monitor daemon connections
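If a firewall does sit between a client and the database nodes, the most direct check for the client port (TCP 5433) is to open a short-lived connection with a database client; vsql or JDBC would work equally well. The sketch below assumes the third-party vertica-python package and uses placeholder host, credentials, and database name.

```python
import vertica_python   # third-party package (assumed installed): pip install vertica-python

# Placeholder connection details: substitute your database node, user, and database name.
conn_info = {
    "host": "db-node1.example.com",
    "port": 5433,          # client port from the table above
    "user": "dbadmin",
    "password": "changeme",
    "database": "arcsight",
}

# A successful query confirms that TCP 5433 is open end to end and the node accepts clients.
with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT version()")
    print(cursor.fetchone()[0])
```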

SmartConnectors

If you have SmartConnectors that are deployed logically far away in the network with firewalls in between, those intermediate firewalls must permit traffic on ports 9092 (for non-TLS traffic) and 9093 (for TLS traffic).

Port Direction Description
1515 (Raw TCP), 1999 (TLS) Inbound Used by SmartConnector to receive events

9092 (Non-TLS), 9093 (TLS) Outbound Used by SmartConnector to send data to Transformation Hub

Port 9092 needs to be opened only if your SmartConnectors are configured to communicate with Transformation Hub over a non-encrypted communication channel.

Micro Focus does not recommend this setup, but it is an option when performance is prioritized over security.