Caching

Restriction: This topic applies to Windows environments only.

Caching is available for all database objects, including tables, indexes and data dictionary files. The cache size is specified at the server level using the Caching Options screen of the XDB Server Configuration Utility. Cache memory usage and other related information can be monitored on the XDB Cache Statistics screen while the server is running.

The major benefit of using a cache is to keep frequently accessed disk blocks in RAM, so that subsequent access to those blocks requires no additional disk I/O. The buffer manager uses a search algorithm to determine whether a block exists in the cache before deciding to retrieve that block from disk. If a block is not in the cache, the search time is pure overhead, because both the search and the disk I/O must be performed.
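
For illustration, here is a minimal sketch of that search step in Python. It is not XDB's implementation; the dictionary keyed by block number simply stands in for whatever lookup structure the buffer manager uses, and all names are invented for the example.

    cache = {}  # block_id -> cached block contents

    def read_block(block_id, read_from_disk):
        """Search the cache first; go to disk only on a miss."""
        block = cache.get(block_id)
        if block is not None:
            return block                    # cache hit: no disk I/O
        block = read_from_disk(block_id)    # cache miss: search cost plus disk I/O
        cache[block_id] = block             # keep the block for later reuse
        return block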

The buffer manager also has a tossing algorithm. If the cache is full, some current cache blocks must be discarded before new blocks can be added. Which blocks to discard is determined by a modified clock algorithm.
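
The classic clock (second-chance) algorithm keeps one reference bit per cached frame and sweeps a "hand" over the frames, giving recently used blocks a second chance before tossing them. The sketch below shows that classic form in Python; the specifics of XDB's modified variant are not documented here, so treat this as an illustration of the general technique, with invented names.

    class ClockCache:
        def __init__(self, capacity):
            assert capacity > 0
            self.capacity = capacity
            self.frames = []    # each frame is [block_id, data, ref_bit]
            self.index = {}     # block_id -> frame slot
            self.hand = 0       # current clock hand position

        def _toss(self):
            """Sweep the hand, clearing reference bits until a victim is found."""
            while True:
                frame = self.frames[self.hand]
                if frame[2]:                   # recently used: give a second chance
                    frame[2] = 0
                    self.hand = (self.hand + 1) % len(self.frames)
                else:                          # not recently used: toss it
                    del self.index[frame[0]]
                    return self.hand

        def put(self, block_id, data):
            if block_id in self.index:         # already cached: mark as referenced
                self.frames[self.index[block_id]][2] = 1
                return
            if len(self.frames) < self.capacity:
                self.index[block_id] = len(self.frames)
                self.frames.append([block_id, data, 1])
            else:
                slot = self._toss()
                self.frames[slot] = [block_id, data, 1]
                self.index[block_id] = slot
                self.hand = (self.hand + 1) % len(self.frames)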

The cache manager can provide a performance boost only if the reduction in I/Os outweighs the CPU cost required to search and toss. The more objects there are to cache and the more random the access, the less useful a cache manager becomes.
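
A back-of-the-envelope comparison makes the trade-off concrete. The costs below are purely hypothetical (a physical read measured in milliseconds, a cache search in microseconds); the point is that every access pays the search cost while only misses pay the disk cost, so a near-zero hit ratio makes the cache slightly worse than no cache at all.

    disk_read_ms = 8.0     # assumed cost of one physical disk read
    search_ms = 0.005      # assumed cost of one cache search (hit or miss)

    def avg_access_ms(hit_ratio):
        # Every access pays the search; only misses pay the disk read.
        return search_ms + (1 - hit_ratio) * disk_read_ms

    for hr in (0.0, 0.05, 0.50, 0.95):
        print(f"hit ratio {hr:4.0%}: {avg_access_ms(hr):.3f} ms per access")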

For example, consider a database that is significantly larger than available RAM, with a large number of indexes (to optimize query processing) and mostly random data access patterns. All of those indexes are eligible for caching, but the random-access nature of the queries means that there will probably be a high percentage of cache tossing and a low percentage of cache hits, indicating that each query is accessing either different disk blocks or blocks that have already been tossed from the cache. In this situation, a large cache is of limited benefit.

If your system shows a high percentage of tossing and a low percentage of cache hits, choose a cache size that does not assume I/O overlap across the query set, but does allow I/O reduction within an individual query. In this fashion, each query can still benefit from its own repeated disk block accesses, while keeping down the CPU time the cache manager spends searching and tossing.
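
As a hypothetical sizing exercise (the figures below are invented, not XDB defaults), that rule of thumb amounts to sizing the cache for one query's working set rather than for overlap across queries:

    blocks_per_query = 2_500   # assumed blocks a typical query revisits
    block_size_kb = 4          # assumed size of one cached block
    cache_kb = blocks_per_query * block_size_kb
    print(f"suggested cache size: {cache_kb} KB (~{cache_kb / 1024:.1f} MB)")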

It is important to check the cache hits and misses when all or most of your users are actively accessing data. Generally, the cache hit ratio increases as the number of users grows, especially when those users are performing similar tasks.
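
The hit ratio itself is simple arithmetic over the hit and miss counters. The counter values below are invented for illustration; read the real figures from the XDB Cache Statistics screen while the server is under normal load.

    hits, misses = 42_000, 8_000   # sample counters, for illustration only

    hit_ratio = hits / (hits + misses)
    print(f"cache hit ratio: {hit_ratio:.1%}")   # prints: cache hit ratio: 84.0%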
