Host Systems
- Your host systems must meet or exceed the technical requirements for CPU cores, memory, and disk storage capacity, as well as the anticipated end-to-end event processing throughput. If insufficient resources are available on a host, the installation process may fail. Consult the Technical Requirements for ArcSight Platform 23.3 for guidance. A quick pre-flight check is sketched after this list.
- Provision cluster (master and worker node) host systems and operating environments, including OS, storage, network, and Virtual IP (VIP) if needed for high availability (HA). Note the IP addresses and FQDNs of these systems for use during product deployment.
- You can install the cluster using a sudo user with sufficient privileges, or using the root user.
- Master and worker nodes can be deployed on virtual machines. However, since most of the processing occurs on worker nodes, deploy worker nodes on physical servers if possible.
- When using virtual environments, ensure that all master and worker nodes are installed in the same subnet.
- If a master and worker are sharing a node, follow the higher-capacity worker node sizing guidelines. OpenText does not recommend this configuration for production Transformation Hub environments.
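Before you begin, a pre-flight check along these lines can confirm that a host meets the sizing guidance and record its identity for the deployment worksheet. This is a minimal sketch using standard Linux tools; the thresholds are placeholders, not official requirements, so substitute the values from the Technical Requirements for your workload.

```
# Pre-flight host check (placeholder thresholds; adjust to your sizing tier)
MIN_CORES=8     # example value only, not an official requirement
MIN_MEM_GB=16   # example value only, not an official requirement

echo "CPU cores: $(nproc) (need >= $MIN_CORES)"
echo "Memory GB: $(free -g | awk '/^Mem:/{print $2}') (need >= $MIN_MEM_GB)"
echo "Disk space on common install paths:"
df -h /opt /var

# Record identity details for use during product deployment
echo "FQDN: $(hostname -f)"
ip -4 addr show | awk '/inet /{print $2, $NF}'
```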
High Availability
- For high availability (HA) of master nodes in a multi-master installation, you must create a Virtual IP (VIP) that is shared by all master nodes. Prior to installation, the VIP must not respond when pinged (a check is sketched after this list).
- All master nodes should use the same hardware configuration, and all worker nodes should use the same hardware configuration (which is likely to be different from that of the master nodes).
- For HA, use exactly three master nodes, at least three worker nodes, and at least three database nodes, so that if one node of each type fails, the remaining nodes can continue to operate the system without downtime. This is the configuration illustrated in the diagram. You can use fewer nodes of a given type, but that node type will then not be HA.
- For HA, use an NFS server that is separate from the Kubernetes cluster nodes and has HA capabilities, so that there is no single point of failure. For example, this could be two NFS servers (active/passive) configured with replication and a Virtual IP managed between them, which OMT is configured to use to connect to the NFS server. An example of configuring the two NFS servers in replication mode is described here. Note: Link opens an external site.
- Only one or three master nodes are allowed.
- If you deploy a single master node, its failure could cause you to lose the ability to manage the entire cluster until you recover it. In some extreme scenarios, failure of the single master node could render the entire cluster unrecoverable, requiring a complete reinstall and reconfiguration. When using only a single master node, the system will be more reliable if you also host the NFS server on that master node.
- When the installer is configured to create more than one database node, the database fault tolerance is set to one: data in the database is replicated so that one database node can fail and the system continues to operate properly. Database storage utilization doubles as a result of the replication (for example, 10 TB of event data would consume 20 TB of database storage). In a failure scenario, restore the failed node urgently; if a second node fails before recovery, the database shuts down to avoid additional problems.
- If you configure the installer to create only a single database node, the database fault tolerance is set to zero: there is no data replication, and the database cannot continue to operate if that node fails.
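As a pre-installation sanity check, you can confirm that the address you plan to use as the VIP does not already respond. A minimal sketch, assuming a placeholder address of 192.0.2.50; substitute your planned VIP.

```
VIP=192.0.2.50   # placeholder; replace with your planned Virtual IP

# The VIP must NOT respond before installation; a reply means it is in use.
if ping -c 3 -W 2 "$VIP" > /dev/null 2>&1; then
    echo "ERROR: $VIP already responds; choose an unused address."
else
    echo "OK: $VIP does not respond and can be used as the cluster VIP."
fi
```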
Storage
- Create or use an existing NFS storage environment with sufficient capacity for the throughput needed; guidelines are provided below. Determine the size and total throughput requirements of your environment using total EPS. For example, if there are 50K EPS inbound and 100K EPS consumed, then the size would be 150K EPS. A check that the NFS export is reachable is sketched after this list.
- Data compression is performed on the producer side (for example, in a SmartConnector).
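Once the NFS environment is provisioned, you can verify from a prospective cluster node that the export is visible and mountable. A minimal sketch, assuming the showmount client from nfs-utils is installed and using the placeholder values nfs.example.com and /opt/arcsight-nfs; substitute your own server (or HA VIP) and export path.

```
NFS_SERVER=nfs.example.com     # placeholder; use your NFS server or its HA VIP
NFS_EXPORT=/opt/arcsight-nfs   # placeholder; use your actual export path

# List the exports the server offers
showmount -e "$NFS_SERVER"

# Test-mount the export, report available capacity, then clean up
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs "$NFS_SERVER:$NFS_EXPORT" /mnt/nfs-test
df -h /mnt/nfs-test
sudo umount /mnt/nfs-test
```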
Scaling
- Adding more worker nodes is typically more effective than installing bigger and faster hardware, because individual workloads on worker nodes are usually relatively small and some of them work better when fewer different workloads share the same node. Using more worker nodes also enables you to perform maintenance on your cluster nodes with minimal impact to your production environment, and makes the cost of new hardware easier to predict.
- Unlike worker nodes, for the database it is typically more effective to use bigger and faster hardware than to increase the number of database nodes, because the database technology can fully utilize larger hardware, which decreases the need for coordination between database nodes. That said, for HA it is important to deploy enough database nodes to remain resilient during a database node failure or individual node downtime for maintenance.
Network
- Although event data containing IPv6 content is supported, the cluster infrastructure is not supported on IPv6-only systems.
Cloud
- All SmartConnector or Collector remote connections depend on the operating system's random number pool (entropy pool) to generate private keys for secure communication. In a cloud environment, you might need to increase the entropy pool beyond the lower limit of 3290 to ensure uninterrupted communication. For more information, see "SmartConnector or Collector Remote Connections Failing Due to Low Entropy" in the Installation Guide for ArcSight SmartConnectors (ArcSight SmartConnectors documentation). A quick way to inspect available entropy is sketched below.
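On Linux hosts, you can read the kernel's available entropy directly. A minimal sketch; the 3290 threshold comes from the guidance above, while haveged and rng-tools are common entropy daemons offered as possibilities, not tools mandated by the ArcSight documentation.

```
# Report the kernel's currently available entropy
ENTROPY=$(cat /proc/sys/kernel/random/entropy_avail)
echo "Available entropy: $ENTROPY"

# Warn if it is at or below the documented lower limit
if [ "$ENTROPY" -le 3290 ]; then
    echo "Low entropy; consider an entropy daemon such as haveged or rng-tools."
fi
```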
Security
Performance
- If SmartConnector is configured to send events to Transformation Hub in CEF format and the events are being stored in the ArcSight Database, consider the potential performance effect of the CEF-to-Avro data transformation, and allow for a 20% increase in CPU utilization. This generally has a large impact only at very high EPS rates (250K+). Consider configuring the SmartConnector to use the Avro event format instead, which avoids the need for this transformation. A worked example of the CPU allowance is sketched below.
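To make the 20% allowance concrete, a small sketch of the headroom arithmetic; the 40-core baseline is a hypothetical figure for illustration, not a sizing recommendation.

```
# Hypothetical baseline CPU cores budgeted for event processing
BASELINE_CORES=40

# Allow a 20% increase for the CEF-to-Avro transformation
ADJUSTED=$(( BASELINE_CORES * 120 / 100 ))
echo "Plan for $ADJUSTED cores instead of $BASELINE_CORES when ingesting CEF."
# -> Plan for 48 cores instead of 40 when ingesting CEF.
```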
Downloads and Licensing
Installing with Enterprise Security Manager
- If you want to install the Platform and the ESM server in the same environment, specify an OMT API Server Port during the Platform installation that does not conflict with the ESM server's port (default 8443). For example, when using the Platform Install tool, example-install-config-esm_cmd_center-single-node.yaml sets master-api-ssl-port to port 7443. A quick port-conflict check is sketched below.
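Before installing, you can confirm that the port you plan to assign to the OMT API Server is free while ESM holds 8443. A minimal sketch, assuming 7443 as the planned port and that the example install config is in the current directory.

```
OMT_PORT=7443   # planned OMT API Server port; must differ from ESM's 8443

# Show anything listening on the ESM port or the planned OMT port
ss -tlnp | grep -E ":(8443|$OMT_PORT)\b" || echo "Neither port is in use yet."

# Confirm the install config sets the non-conflicting port
grep master-api-ssl-port example-install-config-esm_cmd_center-single-node.yaml
```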