Guidelines

Restriction: This topic applies only when a Database Connectors license has been installed via the Micro Focus License Management System.

The following sections describe some areas for you to consider as you prepare your database and your COBOL program.

Guideline 1 — Database Administrator

A site with a database management system needs a database administrator (DBA).

The administrator is critical in any large database installation. The DBA checks performance statistics and memory usage, performs general maintenance and backups for the database, initiates traces, executes database utilities, sets buffer sizes, determines how often buffers are flushed, and, in general, understands how database settings and events affect overall performance.

The DBA also performs periodic tuning of the database, including:

  • Using monitoring tools
  • Allocating table spaces
  • Examining output of the query optimizer to ensure that the most efficient paths are being used
  • Updating table statistics
  • Creating indexes

If a site experiences a performance problem, this usually means there is a bottleneck somewhere in the system. The DBA can help to isolate the bottleneck. Once the bottleneck is identified, the site can determine whether to apply more resources or correct the situation that is stressing the existing resources.

Guideline 2 — Understand COBOL Operations and Database Operations

Some COBOL operations are particularly stressful to a database. The more an application uses these operations, the more likely it is that RDBMS performance will suffer.

The more you understand about your RDBMS and how it operates, the more you can help your COBOL applications to work efficiently with it.

File Input and Output

Consider these standard COBOL I/O operations:

  • READ
  • REWRITE
  • WRITE
  • DELETE
  • OPEN

Each has an associated cost in terms of database performance; each asks the database to do something that takes time. So if your COBOL application performs I/O operations that are not strictly necessary, you may want to remove them.

For example, developers sometimes OPEN and CLOSE files unnecessarily, using the OPEN–CLOSE pairing as a way to demarcate each new procedure:

OPEN INPUT file-A
    *> ... procedural code ...
CLOSE file-A

But it's important to know that file OPENs are expensive in terms of performance. If you can eliminate non-essential OPENs from portions of your application, you may improve performance.
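
For example, if several procedures all work with the same file, a single OPEN at the start of processing and a single CLOSE at the end can replace many OPEN-CLOSE pairs. The fragment below is a minimal sketch of that pattern; the file name CUSTOMER-FILE and the paragraph names are hypothetical.

MAIN-LOGIC.
    OPEN I-O CUSTOMER-FILE            *> one OPEN for the entire run
    PERFORM PROCESS-ORDERS
    PERFORM PROCESS-RETURNS
    CLOSE CUSTOMER-FILE               *> one CLOSE when all work is done
    STOP RUN.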

READ operations can also affect performance. All COBOL I/O is based on key indexes. Examining the output of your query optimizer allows you to determine if the most efficient execution path is being used for READs. The execution path for a given query can change as your data changes and as the size of the tables changes. It is also affected by database initialization parameters and any statistical information that you may have gathered on your tables. It might be helpful to know that, typically, the index path is the most efficient for Database Connectors applications.
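
As an illustration, a keyed START followed by READ NEXT gives the interface a clear key range that can be translated into an indexed search. This is only a sketch: the file ORDER-FILE, its key ORDER-DATE, and the condition name END-OF-ORDERS are hypothetical.

MOVE 20240101 TO ORDER-DATE
START ORDER-FILE KEY IS NOT LESS THAN ORDER-DATE
    INVALID KEY DISPLAY "No orders on or after that date"
END-START
READ ORDER-FILE NEXT RECORD           *> retrieved via the key's index
    AT END SET END-OF-ORDERS TO TRUE
END-READ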

Transactions

Large transactions are also very expensive. The main problem here is that the database holds locks on indexes and rows throughout an entire transaction. Thus, the database is creating additional overhead for an extended period of time if the transaction is lengthy. In addition, complex information tracking must take place to ensure that transactions can be successfully rolled back.

Often application designers decide to err on the side of safety when applying transaction management to a mature application. Which operations should be included in a single transaction? The safe approach is to group everything that is related into one transaction. But this scheme is expensive — and even more so when a database is involved. The lengthier the transaction, the longer the locks are held and system resources are tied up. The extensive data verification in COBOL programs only prolongs this.

If performance is an issue, give some thought to dividing each transaction into smaller and more efficient subgroups.
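
As a sketch of what that division might look like, assuming a runtime that supports the COMMIT verb and a hypothetical paragraph UPDATE-ONE-RECORD:

*> Instead of committing once after 10,000 updates, commit every
*> 100 updates so locks are released in smaller batches.
PERFORM VARYING REC-NUM FROM 1 BY 1 UNTIL REC-NUM > 10000
    PERFORM UPDATE-ONE-RECORD
    IF FUNCTION MOD(REC-NUM, 100) = 0
        COMMIT                        *> release locks held so far
    END-IF
END-PERFORM
COMMIT                                *> commit any remaining updates

The right batch size depends on your recovery requirements: smaller batches release locks sooner, but each COMMIT has a cost of its own.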

Tables with Multiple Indexes

If you are using tables with multiple indexes, keep in mind that when a record is written or updated, locks are placed on all of the indexes, and each index is essentially rewritten during the course of the write or update. This is a costly process: there may be multiple columns per index and multiple indexes per table, and each rewrite implies a certain amount of wait time. Tables with a large number of indexes can therefore be slow on writes and updates, and can even mislead the database query optimizer.

There are two things you can do in this circumstance:

  • Restructure your data. The benefits of data restructuring may be significant. For example, if you have any situations in which two indexes start out with the same column or set of columns, you may be able to improve performance appreciably by changing your data definition.

    Suppose two indexes both start with MONTH, DAY, YEAR. These identical first three columns can cause the RDBMS's query optimizer to choose the wrong index, in which case you generate a significant amount of unnecessary and unproductive overhead. Restructuring one of the indexes can make a difference, as the sketch following this list illustrates.

  • Use the file system for some of your data. If you cannot restructure your data but are finding certain operations to be too expensive, you might want to consider moving some data into the indexed file system.
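
The following sketch shows the restructuring idea in terms of COBOL key definitions, which Database Connectors maps to database indexes. The file and key names are hypothetical, and the right structure for your data may differ.

SELECT SALES-FILE ASSIGN TO "SALES"
    ORGANIZATION IS INDEXED
    *> Before the change, both keys began with MONTH, DAY, YEAR,
    *> giving the two indexes an identical leading prefix.
    RECORD KEY IS DATE-REGION-KEY     *> MONTH, DAY, YEAR, REGION
    *> After the change, the alternate key leads with PRODUCT-CODE,
    *> so the optimizer can distinguish the two indexes immediately.
    ALTERNATE RECORD KEY IS PRODUCT-DATE-KEY
    FILE STATUS IS SALES-STATUS.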

Guiding the Data Searches

You can guide the data searches that result from database queries, and thus improve performance, by making use of an external variable called A4GL_WHERE_CONSTRAINT. This process is explained in The WHERE Constraint.
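
The general idea is sketched below, assuming a runtime that supports the SET ENVIRONMENT verb; the column name and condition are hypothetical, and The WHERE Constraint is the authoritative reference for the variable's exact syntax and scope.

SET ENVIRONMENT "A4GL_WHERE_CONSTRAINT"
    TO "ORDER_AMOUNT > 1000"          *> hypothetical SQL condition
READ ORDER-FILE NEXT RECORD           *> search narrowed by the constraint
    AT END CONTINUE
END-READ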

Guideline 3 — Program and Database Interaction

A database can interact with your COBOL program in surprising ways. When you introduce a database to a mature COBOL application, your application may experience errors you have not seen before. Being aware of this possibility helps you to prepare your response.

For example, your existing application without the database may rarely exceed the limit on the number of data files that can be open simultaneously. Adding a database significantly increases the likelihood of reaching that limit, because query processing often requires temporary tables. Also, the ORDER BY clause (used in SQL statements to sequence retrieved data) opens temporary work files, so these files may cause you to reach the limit sooner. (Note that proper tuning of the query optimizer can reduce the number of temporary files required.)

When upgrading to a new version of the RDBMS, new software components may interact with your applications differently than their predecessors did. It's important to rerun your pre-installation simulations (see Guideline 4 — Plan for Growth) to determine whether your system resources are still adequate. Investigate any differences in the two simulations. You may have to make adjustments to compensate for the differences in the RDBMS versions.

Upgrading or switching to a new database may also mean that you need to modify your makefile to coordinate with the settings of the new database. You can edit your makefile manually, or use one of the editing tools distributed with Database Connectors. For more information, see the Appendix specific to the Connector interface you are using.

If you notice a change in performance with a new release of your RDBMS, keep in mind that certain database settings can have a significant effect on speed. Fine-tuning your settings can make a difference.

Several other options to help you improve performance are listed below:

  • Try one of the many third-party monitoring tools available
  • Break up data onto several connected hard drives to free up space and resources
  • Limit the number of indexes you have assigned with each database table

Guideline 4 — Plan for Growth

Planning ahead for growth is essential. You need to be able to predict the system resources that your application will require at full load, both in terms of users and database size.

Before you choose and install hardware, it is best to run a simulation. Your hardware vendor or RDBMS vendor can help you to simulate a large number of users on a given platform with a given RDBMS.

Setting up and running a simulation that includes your own application lets you know what to expect when you move to a full installation, and can help you avoid costly surprises.

A potentially costly mistake is to test in your office with a small database and a small number of users. Databases respond differently to small amounts of data and a small number of users than they do to large amounts of data and a large number of users. Functionally, the database works differently as the load increases. You might think of this adjustment as switching into a different gear. Significant changes in the load on your database can lead to large increases in overhead. The behaviors and loads encountered as the database expands cannot be determined from a linear projection based on the smaller scenario.
