Monitor executions on the execution server generate result and measure data. This data passes through several stages before it is stored persistently in the repository (database). By default, these stages use only volatile storage (RAM), so data is lost if a server crashes or hangs, or if network problems cause a cache overflow.
The flow of result data starts with incoming results from the monitor execution. The ResultCache service stores these results in system memory until the ResultFetcher service pulls the data to the application server. As soon as a transmission completes successfully, the transmitted data is removed from the ResultCache. On the application server, the data is cached by the ProjectResultWriter service, which cycles through the projects and writes the data to the repository in portions (round-robin).
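The store-pull-acknowledge handshake described above can be sketched as follows. This is a minimal illustration, not the product's actual implementation; the class and method names (`ResultCache`, `fetch_batch`, `acknowledge`) are hypothetical. The key point is that cached data is removed only after the transfer is confirmed:

```python
from collections import deque

class ResultCache:
    """Hypothetical sketch of the execution-server cache: results stay
    in memory until the application server acknowledges receipt."""

    def __init__(self):
        self._pending = deque()

    def store(self, result):
        # Incoming result data from a monitor execution.
        self._pending.append(result)

    def fetch_batch(self, max_items):
        # Hand out a batch WITHOUT removing it; removal happens only
        # after the transmission completes successfully.
        return list(self._pending)[:max_items]

    def acknowledge(self, count):
        # Called once the application server confirms receipt.
        for _ in range(count):
            self._pending.popleft()

cache = ResultCache()
cache.store({"monitor": "m1", "value": 42})
cache.store({"monitor": "m2", "value": 7})
batch = cache.fetch_batch(10)   # the ResultFetcher pulls a batch
cache.acknowledge(len(batch))   # transfer succeeded: cache is drained
```

Because removal is deferred to the acknowledgement step, a failed transmission leaves the data in the cache to be retried on the next pull.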
The ResultCache service on the execution server stores incoming result data until it is collected by the application server. During a network outage, or if the application server is down for an extended period, the memory of the execution server limits how much data can be cached. Once the limit is reached, any incoming result data is dropped and lost.
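The overflow behavior amounts to a bounded buffer that rejects new entries when full. A minimal sketch, with hypothetical names and an item-count limit standing in for the real memory limit:

```python
class BoundedResultCache:
    """Sketch of a memory-limited cache that drops NEW results once the
    limit is reached, mirroring the overflow behavior described above.
    The limit here counts items; the real limit is based on memory."""

    def __init__(self, limit):
        self.limit = limit
        self.items = []
        self.dropped = 0   # results lost to overflow

    def store(self, result):
        if len(self.items) >= self.limit:
            self.dropped += 1   # incoming data is dropped and lost
            return False
        self.items.append(result)
        return True

cache = BoundedResultCache(limit=2)
for r in ("r1", "r2", "r3"):
    cache.store(r)
# "r3" arrives after the limit is reached and is lost.
```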
The application server pulls data from the execution servers and caches it in the ProjectResultWriter service, from where it is written to the repository in a round-robin cycle, project by project. If data arrives faster than the database can store it, the cache grows until the memory limit is reached, at which point the ProjectResultWriter stops pulling data from the execution servers, which ultimately causes cache overflows on the execution servers. If a system crashes while data is cached, those results are lost.
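The round-robin write cycle and the backpressure it exerts can be sketched like this (again with hypothetical names; `memory_limit` counts items for simplicity). Each cycle writes one portion per project, and the writer refuses new data once its limit is reached, which is what propagates the overflow back to the execution servers:

```python
from collections import deque

class ProjectResultWriter:
    """Sketch of round-robin writing with backpressure: data is written
    project by project; when the cache limit is reached, the writer
    stops accepting (pulling) new data."""

    def __init__(self, projects, memory_limit):
        self.queues = {p: deque() for p in projects}
        self.memory_limit = memory_limit

    def cached(self):
        return sum(len(q) for q in self.queues.values())

    def accepts_data(self):
        # Backpressure signal: when False, pulling from the
        # execution servers ceases.
        return self.cached() < self.memory_limit

    def cache(self, project, result):
        if not self.accepts_data():
            raise RuntimeError("cache full; stop pulling")
        self.queues[project].append(result)

    def write_cycle(self, repository):
        # One round-robin pass: write one portion per project.
        for project, queue in self.queues.items():
            if queue:
                repository.append((project, queue.popleft()))

writer = ProjectResultWriter(["A", "B"], memory_limit=4)
writer.cache("A", 1)
writer.cache("A", 2)
writer.cache("B", 3)
repo = []
writer.write_cycle(repo)   # writes one portion each for A and B
```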
To avoid data loss on the execution servers and on the application server, Performance Manager provides the option to enable transactional, file-based intermediate storage of result data.
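The core idea of transactional file-based buffering is that each result survives a crash between being received and being written to the repository. A minimal sketch under stated assumptions: the file layout, function names, and JSON encoding below are illustrative, not the product's actual on-disk format. Writing to a temporary file, syncing, and then atomically renaming it means a crash leaves either a complete spool file or nothing; the file is deleted only after the data is stored persistently:

```python
import json
import os
import tempfile

def spool_result(spool_dir, name, result):
    """Write a result to the spool transactionally: temp file + fsync +
    atomic rename, so a crash cannot leave a partial file behind."""
    fd, tmp = tempfile.mkstemp(dir=spool_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(result, f)
        f.flush()
        os.fsync(f.fileno())          # force the data to disk
    os.rename(tmp, os.path.join(spool_dir, name))  # atomic commit

def drain_spool(spool_dir, repository):
    """Replay spooled results into the repository, deleting each file
    only after its data has been stored persistently."""
    for name in sorted(os.listdir(spool_dir)):
        path = os.path.join(spool_dir, name)
        with open(path) as f:
            repository.append(json.load(f))
        os.remove(path)   # safe: the data is now in the repository

spool = tempfile.mkdtemp()
spool_result(spool, "r1.json", {"monitor": "m1", "value": 42})
repo = []
drain_spool(spool, repo)
```

If the process crashes before `drain_spool` completes, any not-yet-deleted spool files are simply replayed on the next start, trading some disk I/O for durability.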