
Updating the Relational Database

This chapter covers updating the Databridge Client data tables in the relational database with changes made to the DMSII database. In addition, it explains DMSII reorganizations and how they affect the Databridge Client data tables.


Updating the Databridge Data Tables

Updating is the process of applying DMSII database changes to the Databridge data tables in the relational database. Only the changes are sent to the Databridge data tables, not all of the data.

You can update the Databridge data tables any time after they have been cloned.

You can update the Databridge Client data tables by running a process command each time you want the update to occur, or you can schedule a process command to run at fixed times or a fixed amount of time after the previous run finishes. How often you update the Databridge tables depends on the following:

  • How current you want the data to be. For time-critical information, you may want to update the Databridge data tables several times a day.

    Note

    If you do not update the relational database often, it might be more effective for you to re-clone the DMSII data sets rather than update them. For example, if you are interested in weekend information only, and several audit files have been closed (and possibly moved to tape) during that time, recloning may be faster than updating.

  • How often audit files are available on the host. When an audit file is not available, the Databridge Engine temporarily stops processing until it becomes available. The Databridge Engine can access the active DMSII audit file when the DBEngine control file parameter READ ACTIVE AUDIT is set to TRUE.

  • How often closed audit files are available on the host. In some cases, a closed audit file is not available because it has been copied to tape. In this instance, a host operator must mount the tape before the closed audit file can be made available to Databridge.


Performing Updates Without Using Stored Procedures

Version 6.6 of the Databridge software introduced a more efficient way of updating tables that does not use stored procedures. You can control this feature globally by setting the configuration parameter use_stored_procs to False (see use_stored_procs).

This parameter makes the process and clone commands generate the actual SQL statements instead of using stored procedure calls to perform updates. The Client still uses host variables, as it did with stored procedure calls. Executing the SQL directly eliminates some overhead and makes update processing faster. You can also control this setting on a data set by data set basis, as described in use_stored_procs.
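
For example, a minimal dbridge.cfg excerpt that enables this behavior globally might look like the following sketch, which assumes the parameter resides in the [params] section of your configuration file:

[params]
use_stored_procs = false   ; generate direct SQL instead of stored procedure calls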

Note

Setting the configuration parameter use_stored_procs to False has no effect unless you run a redefine command with the -R option (redefine all) to propagate the setting to the ds_options columns of the data sets. The redefine command will return an exit code of 2033 (Reorg command required). You must then run the reorganize command to complete the task. This command generates new scripts for the tables and refreshes all the data sets, dropping all of the stored procedures, which are no longer used. If you later revert to using stored procedures, the procedure is the same; in that case, the reorganize command recreates the stored procedures.
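
A sketch of this command sequence for a dbutility command-line Client follows, where datasource is a placeholder for your data source name. The redefine command returns exit code 2033, after which the reorganize command completes the switch:

dbutility redefine -R datasource
dbutility reorganize datasource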


Scheduling Updates

The dbutility process command has a built-in scheduling mechanism that allows the run to hibernate and resume at the next scheduled time. When the Databridge Server sends the Client an end-of-audit-reached status, dbutility normally terminates. However, if you enable scheduling, the Databridge Client disconnects from the server and the database and hibernates until the next scheduled process command, when it reconnects to both. This scheduling mechanism only works after you run dbutility for the initial clone of the DMSII database. If the Client crashes or the power fails, scheduling fails as well. Service-based scheduling has none of these shortcomings, as the scheduling task is taken over by the Client Manager service.

If you use the Client Manager service, it takes over this functionality. When a DBClient run terminates, the service determines when the next process command should run and starts it when that time arrives. The advantage of service-based scheduling is that it is immune to system failures, as the service is automatically restarted when the system is rebooted. For details about service-based scheduling, see the Databridge Client Console Help.

To schedule updates

  1. Uncomment the scheduling parameters in the Databridge Client configuration file. Scheduling parameters are listed under the [Scheduling] header in the configuration file.

  2. Select one of the following scheduling methods:

    • Daily scheduling

    • Fixed-delay scheduling

  3. Enter appropriate values for the following scheduling parameters, depending on whether you are using daily scheduling or fixed-delay scheduling. Each parameter is explained in [Scheduling].

    [Scheduling]
    ;
    ;dbutility process command only
    ;
    ;daily                  = 08:00, 12:00, 17:00, 24:00
    ;exit_on_error          = false
    ;sched_delay_secs       = 600
    ;sched_minwait_secs     = 3600
    ;sched_retry_secs       = 3600
    ;blackout_period        = 00:00, 02:00
    

    If the scheduled dbutility process command completes successfully, dbutility becomes inactive (sleeps) until the next scheduled time, and the following message appears:

    Next update for DataSource datasourcename will run at hh:mm (delay = nn secs)
    

    Scheduling of updates will continue until any of the following occurs:

    • You reboot the Databridge Client machine or end the Databridge Client session

    • You enter a SCHED OFF console command when dbutility is processing updates

    • A DMSII reorganization occurs (other than a garbage collection)

      Note

      If you must stop the dbutility program, we recommend that you use the QUIT command to exit at the next quiet point. If the Client is waiting for the server to send it updates when none are available and the use_dbwait configuration file parameter is set to True, you can use the QUIT NOW command, which resets the connection to the server and terminates the Client run. If needed, you can also press Ctrl+C to terminate a session while dbutility is processing updates; however, we do not recommend this approach.
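
      For example, while dbutility is processing updates, you can type any of the following commands at its console:

        SCHED OFF     turn off scheduling for this run
        QUIT          terminate at the next quiet point
        QUIT NOW      reset the server connection and terminate the run immediately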


Scheduling Examples

Daily Schedule Example

The following example uses the daily scheduling method. In this example, the Databridge Client runs only twice a day: once midway through the business day and once at the end of the business day. If the process command fails, the Databridge Client waits 10 minutes before retrying.

[scheduling]
daily            = 12:00, 17:00   ; run the process at noon and 5PM
sched_retry_secs = 600            ; retry in 10 minutes after a failure

Fixed-Delay Example

The following example uses the fixed-delay scheduling method. In this example, the Databridge Client runs the process command 4 hours (14400 seconds) after the previous run finishes. If the process command fails, the Databridge Client retries every 30 minutes.

[scheduling]
sched_delay_secs  = 14400
sched_retry_secs  = 1800


Scheduling Blackout Periods

You can schedule blackout periods during which the Client suspends all processing and updates to allow for routine maintenance. To use this feature with the service-controlled Client, you can set the Blackout Period value from the Client Console by clicking on the Configure item in the data source Settings menu.
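
For the command-line Client, you can use the blackout_period parameter from the [Scheduling] section shown earlier. A minimal sketch, using an arbitrary 11 PM to 1 AM maintenance window:

[scheduling]
blackout_period = 23:00, 01:00   ; suspend processing between 11 PM and 1 AM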


Unscheduled Updating

Use this procedure when you want to run dbutility process independently of scheduling.

  • Make sure that the Databridge Server is running. If it is not, the Databridge Client will try to connect to the server and eventually time out.

  • Make sure that your signon parameters are set appropriately.

  • If the [EbcdicToAscii] section of the configuration file (used to customize character translation) has changed since the initial clone, your data may not be consistent, and you might need to re-clone.

  • Make sure that the current directory is the one you created for this data source. This ensures that the Databridge Client can locate the scripts. (Scripts are only required during an update if there is a purged data set.)

  • Enter the following:

    dbutility [signon_options misc_options] process datasource

Option          Description

signon_options  For each Databridge Client type, the following command-line options specify the relational database signon parameters:

                Oracle:      [-U userid] [-P password] [-D database]

                SQL Server:  [-U userid] [-P password] [-W] [-O ODBCdatasource]

                PostgreSQL:  [-U userid] [-P password] [-O ODBCdatasource]

misc_options    See the table in the next section, "Process Command Options".

datasource      The name of the data source specified in the DBServer control file or by Enterprise Server.
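
For example, a run that supplies the signon options on the command line might look like the following, where dbuser, dbpasswd, and BANKDB are hypothetical placeholders for the user id, password, and data source name:

dbutility -U dbuser -P dbpasswd process BANKDB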

When you run a process command to update the Databridge tables in the relational database, the following occurs:

  • All modified records are overwritten with their new values. If the target record is not found in the table, the Databridge Client adds the record to the table instead.

  • All deleted records are deleted.

  • All added records are inserted into the data tables. If the target record is already in the table, the Databridge Client modifies the record in the table instead. (A sketch of this logic follows this list.)
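
Taken together, the first and last rules amount to an update-or-insert pattern. The following simplified T-SQL sketch illustrates the idea; the customer table, its cust_id key, and the balance column are hypothetical, and the SQL the Client actually generates will differ:

-- Hypothetical illustration of the Client's update-or-insert behavior.
-- The variables stand in for the host variables the Client binds at run time.
DECLARE @cust_id int = 1001, @balance money = 250.00;

-- If the target record exists, modify it; otherwise add it.
IF EXISTS (SELECT 1 FROM customer WHERE cust_id = @cust_id)
    UPDATE customer SET balance = @balance WHERE cust_id = @cust_id;
ELSE
    INSERT INTO customer (cust_id, balance) VALUES (@cust_id, @balance);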


Process Command Options

Command-line options related to the process command are as follows:

Option       Description
-d           Enables default tracing.
-f filename  Specifies a configuration file other than the default dbridge.cfg file in the working directory.
-t           Enables selective tracing.
-w           Toggles the use_dbwait parameter in dbridge.cfg.
-K           Inhibits the audit file removal WFL from being run on the host.
-L           Forces the Client to use a new log file for this run.
-T           Forces the Client to use a new trace file for this run, if tracing is enabled.
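
For example, the following hypothetical run toggles the use_dbwait setting and forces a new log file (BANKDB is again a placeholder data source name):

dbutility -w -L process BANKDB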

For information on the command-line options, see dbutility Command-Line Options.

Anomalies That Can Occur In Updates

When the Databridge Client updates the relational database, the following anomalies can occur:

Last quiet point in an audit file

When processing an update transaction group since the last quiet point, Databridge Engine does the following when it reaches the end of the last available audit file:

  • Aborts the current transaction group so that the updates are rolled back. These are not duplicate updates, but updates that could not be committed. These updates are reapplied the next time you run a process command.

  • Sends the Databridge Client a status indicating that the transaction group was rolled back. Upon receiving this status, the Databridge Client does not display any messages.
Host application rolls back changes

(This is a partial DMSII rollback, not to be confused with a total DMSII rollback.)

If a host application encounters an error condition while updating the DMSII database, it rolls back all of the changes it made. In this case, Databridge Engine aborts the updates when it finds the aborted transaction indication in the audit file. Databridge Engine handles the situation in one of two ways, based on the setting of the DBEngine control file parameter CONVERT REVERSALS TO UPDATES:

  1. If CONVERT REVERSALS TO UPDATES is FALSE (the default setting), Databridge Engine sends an abort transaction status to the Databridge Client and then reprocesses the transaction group, excluding any updates made by the program(s) that rolled back their updates. In this case, none of the updates in the aborted transaction are applied to the data tables.

  2. If CONVERT REVERSALS TO UPDATES is TRUE, Databridge Engine continues to process the audit file, converting the items marked as reversals to normal updates, in a manner similar to the method employed by DMSII. (DMSII aborts transactions by reversing the updates previously done; thus a CREATE is reversed to a DELETE, a DELETE to a CREATE, and a MODIFY to a MODIFY using the before image.) All updates, including those that were aborted and their reversals, are applied to the data tables.