This section provides sizing information based on the testing performed at OpenText with the hardware available to us at the time of testing. Your results may vary based on details of the hardware available, the specific environment, the specific type of data processed, and other factors. It is likely that larger, more powerful hardware configurations exist that can handle a greater load, and for even greater scalability Sentinel is explicitly designed to support distributed processing across multiple systems. If your environment is at all complex, contact OpenText Consulting Services or any of the Sentinel partners prior to finalizing your Sentinel architecture as they have additional spreadsheets and tools to calculate architectural constraints.
NOTE:
All-in-one configurations put all the varied processing loads (data collection, processing, analysis, user interface, search, and so on) onto one server rather than distributing them across multiple servers. While an all-in-one configuration can work well for a smaller-scale environment that does not make heavy simultaneous use of all system features, the competing loads can cause issues when the system is under stress (which is often exactly when you need it most). Sentinel prioritizes critical functions such as data collection and storage, but UI performance, for example, may suffer. For this reason, you should deploy remote Collector Managers and/or Correlation Engines in most environments.
You can use Intel Hyper-Threading Technology (Intel HT Technology) on the Sentinel server to increase the load the system can handle. The following table specifies the scenarios in which Intel HT Technology was used in testing.
Similarly, you should enable multithreading on Collector Managers. You can configure a Collector instance to use multiple threads, which allows the Collector to process a higher number of events per second. To configure the number of threads, click the Configure Collector tab in the Edit Collector dialog box, and then set Number of Threads to the number of threads you want to use. With this feature, a single 8-core Collector Manager can process 10K EPS. However, the test results listed below do not include multithreading on the Collector Manager.
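The throughput gain from Collector threading follows the standard worker-pool pattern: one inbound queue feeding several parser threads. The sketch below is a generic Python illustration of that pattern, not Sentinel code; every name in it is hypothetical.

```python
# Illustrative only: a generic worker-pool pattern similar in spirit to
# Collector multithreading. None of these names are Sentinel APIs.
import queue
import threading

def parse_event(raw):
    # Stand-in for Collector parsing logic: split "key=value" pairs.
    return dict(field.split("=", 1) for field in raw.split())

def run_parser_pool(raw_events, num_threads=8):
    """Parse events with a pool of worker threads and return the results."""
    inbox = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            raw = inbox.get()
            if raw is None:          # shutdown signal for this worker
                return
            parsed = parse_event(raw)
            with lock:               # results list is shared across workers
                results.append(parsed)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for raw in raw_events:
        inbox.put(raw)
    for _ in threads:
        inbox.put(None)              # one shutdown signal per worker
    for t in threads:
        t.join()
    return results
```

In CPython the global interpreter lock limits the speedup for CPU-bound parsing, so treat this as a structural sketch rather than a performance demonstration.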
NOTE: The CPU and memory requirements of a Collector Manager vary with the EPS rate and the number of Collectors it runs. Therefore, you should run Collector Managers on virtual machines, where resources can be adjusted as the load changes.
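Before reading the tables, it helps to remember that event storage grows linearly with EPS and retention. Below is a back-of-envelope estimator, assuming an average stored event size of 600 bytes; that figure and the function are illustrative only, not published Sentinel numbers.

```python
def required_storage_gb(eps, retention_days, avg_event_bytes=600,
                        compression_ratio=1.0):
    """Rough storage estimate: events/day x days x bytes/event.

    avg_event_bytes and compression_ratio are assumptions for
    illustration, not published Sentinel figures.
    """
    events = eps * 86_400 * retention_days      # 86,400 seconds per day
    raw_bytes = events * avg_event_bytes / compression_ratio
    return raw_bytes / 1024**3                  # bytes -> GiB
```

Real deployments also need headroom for indexes, raw-data copies, and RAID overhead, so treat the result as a floor, not a target.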
Category |
Demo All-in-One (Not intended for production) |
Medium Distributed Agentless Data Collection |
Medium Distributed Agent-based Data Collection |
Large Distributed Agentless Data Collection |
Extra Large |
---|---|---|---|---|---|
Total System Capacity |
|||||
Retained EPS Capability: The events per second rate processed by real-time components and retained in storage by the system. |
100 EPS |
3000 EPS |
2500 EPS |
21000 EPS |
21000+ EPS |
Operational EPS Capability: The total events per second rate received by the system from event sources. This includes data dropped by the system's intelligent filtering capability before being stored and is the number used for the purposes of EPS-based license compliance. |
100 EPS |
3000+ EPS |
2500+ EPS |
21000+ EPS |
25000+ EPS |
Sentinel Server Hardware |
|||||
CPU |
Intel(R) Xeon(R) CPU E5420 @ 2.50GHz (4 CPU cores), without Intel HT Technology |
Two Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (4 cores per CPU; 8 cores total), without Intel HT Technology |
Two Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (6 cores per CPU; 12 cores total) |
Two Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (12 cores per CPU; 24 cores total), with Intel HT Technology |
Contact OpenText Consulting Services. |
Primary Storage: Primary indexed event data optimized for fast retrieval. |
500 GB 7.2k RPM drive |
10 x 300 GB SAS 15k RPM (Hardware RAID 10) |
6 x 146 GB SAS 10K RPM (RAID 10, stripe size 128k) |
12 TB, 20 x 600 GB SAS 15k RPM (Hardware RAID 10, stripe size 128k) |
|
Secondary Storage: Secondary indexed event data optimized for storage efficiency. Includes a copy of the data in local storage but is only searched if the data is not found in primary storage. |
For information about configuring secondary storage, see Configuring Secondary Storage Locations in the Sentinel Administration Guide. |
||||
Memory |
4 GB (8 GB when Sentinel Agent Manager, NetIQ Secure Configuration Manager, or NetIQ Change Guardian is connected) |
24 GB |
128 GB |
|
|
Remote Collector Manager #1 Hardware |
|||||
CPU |
Not Applicable (Local Embedded CM Only) |
Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 4 cores (virtual machine) |
Two Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (4 cores per CPU; 8 cores total) |
Two Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (8 cores per CPU; 16 cores total) |
Contact OpenText Consulting Services. |
Storage |
100 GB |
250 GB |
|||
Memory |
4 GB |
8 GB |
24 GB |
||
Remote Collector Manager #2 Hardware |
|||||
CPU |
Not Applicable |
Two Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (8 cores per CPU; 16 cores total) |
Contact OpenText Consulting Services. |
||
Storage |
250 GB |
||||
Memory |
24 GB |
||||
Agent Manager Hardware |
|||||
CPU |
Not Applicable (agentless collection only) |
Two Intel Xeon 5140 @ 2.33 GHz (2 cores per CPU; 4 cores total) |
Not Applicable |
Contact OpenText Consulting Services. |
|
Storage |
4 x 300 GB SAS 10K RPM (RAID 10, stripe size 128k) |
||||
Memory |
16 GB |
||||
Remote Correlation Engine Hardware |
|||||
CPU |
Not Applicable (Local Embedded CE Only) |
Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 4 cores (virtual machine) |
Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz, 4 cores (virtual machine) |
Two Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz, 4 cores per CPU (8 cores total) |
Contact OpenText Consulting Services. |
Storage |
100 GB |
||||
Memory |
8 GB |
16 GB |
|||
Data Collection |
|||||
Collector Manager (CM) Distribution: The number of event sources and the events-per-second load placed on each Collector Manager. The filtered percentage indicates how many normalized events were filtered out immediately after collection, without being stored or passed to analytic engines. Note that filtering does not affect the non-normalized raw log data from which the normalized events are derived; raw data is always stored. The Local Embedded CM is located on the Sentinel server machine. |
Local Embedded CM
|
Local Embedded CM
Remote CM #1
|
Local Embedded CM
Remote CM #1
|
Local Embedded CM
Remote CM #1
Remote CM #2
|
Contact OpenText Consulting Services. |
Collectors Used |
Oracle Solaris 2011.1r2
Juniper Netscreen 2011.1r2
|
Each Collector had its own Syslog server:
Oracle Solaris 2011.1r2
Microsoft AD and Windows 2011.1r4
Sourcefire Snort 2011.1r1
Juniper Netscreen 2011.1r2
|
Agent Manager event source server 1
IBM i series 2011.1r5
NetIQ Agent Manager 2011.1r4
NetIQ Unix Agent 2011.1r4
Juniper Netscreen 2011.1r2
|
Each of the following Collectors had its own Syslog server, parsing at the following EPS rates
|
Contact OpenText Consulting Services. |
Total |
|
|
|
|
Contact OpenText Consulting Services. |
Data Storage |
|||||
How far into the past will users search for data on a regular basis? Impacts the amount of data cached locally for higher search performance. |
7 days |
Contact OpenText Consulting Services. |
|||
What percentage of searches will be over data older than the number of days above? Impacts the amount of input/output operations per second (IOPS) for local or network storage. |
10% |
||||
How far into the past must data be retained? Impacts how much disk space is required to retain all the data. If secondary storage is enabled, this impacts the size of secondary storage. Otherwise, it impacts the size of primary storage. |
14 days |
||||
Will a secondary storage device be available and connected? Impacts whether all data will be stored locally or if secondary storage is available for lower-cost, long-term online storage. Data in secondary storage remains online. |
No |
||||
How many reports will be optimized using summaries and other data synchronization policies? Impacts the number of data synchronization policies, which impacts the size and IOPS of primary storage. |
6 (out of the box), 3 (Event views dashboard) |
||||
User Activity |
|||||
How many users will be active at the same time, on average? Impacts the amount of IOPS for primary and secondary storage and other items. |
1 |
Contact OpenText Consulting Services. |
|||
How many searches will an active user be performing at the same time, on average? Impacts the amount of IOPS for primary and secondary storage. |
1 (100M events per search) |
1 (300M events per search) |
Not tested with search or reporting load |
1 (2B events per search) |
|
How many reports will an active user be running at the same time, on average? Impacts the amount of IOPS for primary and secondary storage. |
1 (200k events per report) |
1 (500k events per report) |
1 (600k events per report) |
||
How many real-time alert views will be running at the same time, on average? |
3 (whenever) |
||||
How many real-time event views will be running at the same time, on average? |
3 (last hour) |
||||
How many alert dashboards will be running at the same time, on average? |
1 (7 days) |
||||
How many users will be accessing the managed dashboards at the same time, on average? |
5 |
||||
How many managed dashboards will be running at the same time, on average? |
3 |
||||
How many Events Overview dashboards will be running at the same time, on average? |
3 |
2 |
|||
How many Security Health dashboards will be running at the same time, on average? |
3 |
2 |
|||
How many Threat Response dashboards will be running at the same time, on average? |
3 |
2 |
|||
How many Security Health widgets per dashboard will be running at the same time, on average? |
6 (last 24 hours) |
||||
How many IP Flow Overview dashboards will be running at the same time, on average? |
1 (last 15 min) |
||||
How many IP Flow Real time dashboards will be running at the same time, on average? |
1 (last 15 min) |
||||
How many Threat Hunting dashboards will be running at the same time, on average? |
1 (last 7 days) |
||||
How many User Activities dashboards will be running at the same time, on average? |
1 (last 7 days) |
||||
How many Events Overview widgets per dashboard will be running at the same time, on average? |
3 (last 24 hours) |
||||
How many alert widgets per dashboard will be running at the same time, on average? |
2 (whenever) |
||||
How many IP Flow overview widgets per dashboard will be running at the same time, on average? |
25 |
||||
How many IP Flow Real-time widgets per dashboard will be running at the same time, on average? |
3 |
||||
How many Threat Hunting widgets per dashboard will be running at the same time, on average? |
20 |
||||
How many User Activities widgets per dashboard will be running at the same time, on average? |
5 |
||||
Analytics |
|||||
What percentage of the event data is relevant to correlation rules? Amount of data the Correlation Engine will process. |
100% (out of the box) (3 correlations per second) |
100% (out of the box) (1 correlation per second) |
100% (out of the box) (10 correlations per second) |
Contact OpenText Consulting Services. |
|
What percentage of the event data is relevant to Event Visualization? (Data indexed to OpenSearch) |
100% (out of the box) |
||||
What percentage of the event data is relevant to IP Flows? (IP Flow events indexed to OpenSearch) |
3% (500 IP Flow events per second) |
5% (100 IP Flow events per second) |
10% (10 IP Flow events per second) |
||
How many source IPs or source host names are relevant to generic hostname resolution service? (Number of DNS lookups impacting the CPU utilization of the Collector Manager) |
200 |
100 |
|||
How many alerts are considered for potential incident recommendation calculation? |
100000 |
50000 |
|||
How many simple correlation rules (filter/trigger only) will be used? Impacts the CPU utilization of the Correlation Engine. |
105 (out of the box) |
||||
How many complex correlation rules will be used? Impacts the CPU and memory utilization of the Correlation Engine. |
1 (out of the box) |
||||
Correlation Engine (CE) Distribution |
Local Embedded CE (70 rules); Remote CE (35 rules) |
||||
How many sets of data will anomaly detection be performed on? The number of Security Intelligence dashboards, which impacts the CPU, primary storage size, and memory utilization. |
2 (100% of event stream each) |
1 (100% of event stream each) |
2 (100% of event stream each) |
||
How many alerts will be created? |
30 per minute |
||||
How many events are relevant to threat intelligence feeds? |
10 EPS |
|
|||
High Availability |
Not Used |
||||
Notes: Notable functionality that is disabled, or warnings about what happens when the system load described above is exceeded. |
|
Increasing Retained EPS will eventually cause instability in this system configuration. |
You must install and set up OpenSearch nodes in cluster mode if you want to use the Event Visualizations feature. For more information, see “Configuring the Visualization Data Store” in the Sentinel Installation and Configuration Guide.
You must set up OpenSearch as recommended in the following table:
Component | Recommendation |
---|---|
Indexing Node Data Storage | |
CPU | Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz |
EPS | OpenSearch Nodes | CPU per Node | Memory (GB) per Node | Disks per Node |
---|---|---|---|---|
100 EPS | 1 data node + 1 master node (OpenSearch node in Sentinel) | 4 | 4 | 2 |
3000 EPS | 2 data nodes + 1 master node (OpenSearch node in Sentinel) | 8 | 24 | 3 |
20000 EPS | 4 data nodes + 1 master node (OpenSearch node in Sentinel) | 8 | 32 | 4 |
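The OpenSearch sizing tiers above can be captured as a small lookup helper. This is a hypothetical convenience function summarizing the table, not an official sizing tool; EPS rates above the largest tested tier deliberately raise an error rather than guess.

```python
# Minimal lookup over the OpenSearch recommendations table above;
# purely illustrative, not an official Sentinel sizing tool.
OPENSEARCH_SIZING = [
    # (max_eps, data_nodes, cpu_per_node, memory_gb_per_node, disks_per_node)
    (100, 1, 4, 4, 2),
    (3000, 2, 8, 24, 3),
    (20000, 4, 8, 32, 4),
]

def recommend_nodes(eps):
    """Return the smallest tested tier that covers the given EPS rate.

    Beyond 20000 EPS the table offers no guidance, so raise instead
    of extrapolating.
    """
    for max_eps, data_nodes, cpu, mem_gb, disks in OPENSEARCH_SIZING:
        if eps <= max_eps:
            return {
                "data_nodes": data_nodes,
                "master_nodes": 1,   # one master node (OpenSearch node in Sentinel)
                "cpu_per_node": cpu,
                "memory_gb_per_node": mem_gb,
                "disks_per_node": disks,
            }
    raise ValueError("EPS exceeds tested configurations; contact consulting services")
```

For example, a 2500 EPS deployment falls into the 3000 EPS tier and gets two data nodes plus one master node.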