DMSII Reorganization and Rollbacks
This section lists changes that can occur to the DMSII database, how those changes affect the Databridge Client control and data tables, and how to handle them on the Client database. For instructions on handling a DMSII reorganization on the host, see Prepare for a DMSII Reorganization in Chapter 10 of the Databridge Host Administrator's Guide.
Initializations
A DMSII initialization occurs when a DMSII data set is purged of its records. When a data set is initialized, Databridge Engine sends the Databridge Client a stateinfo record with a mode value of 4. The Databridge Client performs the actions described below after displaying the following message:
DataSet name[/rectype] has been purged
The Client drops all of the tables belonging to this data set and re-creates them, effectively purging the tables of all records. When Databridge Engine is done sending stateinfo records, it sends a status of DBM_PURGE(21), causing the Client to display the following message:
DataSets purged by Databridge Engine
Normal update processing will then repopulate these tables.
Note
An initialization does not change the data set format level in the DMSII database.
Reorganizations
Although there are three types of DMSII database reorganizations (record format conversions, file format conversions, and garbage collection reorganizations), the type of reorganization is less important than whether the reorganization changes record layouts or record locations, as follows:
- DMSII record format conversions change record layouts. When a data set is affected by a record format conversion, parallel changes must be applied to the Client database. See Managing DMSII Changes to Record Layout.
- DMSII file format conversions and garbage collection reorganizations change record locations. Only certain data sets require recloning in this case. See DMSII Changes to Record Locations.
Note
Filler substitutions are handled the same way as a record format reorganization. In a filler substitution, there is a change to the item_count column in the record for the data set in the DATASETS table.
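For instance, you can confirm such a change by inspecting the DATASETS Client control table with a query along the following lines (a sketch only; the data source name DEMODB is illustrative):

select dataset_name, item_count from DATASETS where data_source = 'DEMODB'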
Managing DMSII Changes to Record Layout
Use this procedure if a DMSII reorganization changes the layout of records. DMSII record layouts are changed in the following circumstances:
- Record format conversion (also called structural reorganization in this section)
- Filler substitutions
When Databridge Engine notifies the Databridge Client that the layout of records has changed, the Databridge Client returns a message for each reorganized data set and then prompts you to run a redefine command followed by a reorganize command. It then returns an exit_status value of 2 (DMSII reorganization). The redefine command can determine whether the layout of the data tables has been affected by the DMSII layout change and whether the affected data sets need to be re-cloned. (For more information about what the redefine command does, see About the redefine Command.)
To run the redefine command

1. If the DMSII changes are extensive or complex, we recommend that you back up the relational database before proceeding.

2. If you use the Administrative Console's Customize command to customize the table layouts, skip steps 3 through 5 and run the Customize command instead. It performs the same actions as the redefine command, but also allows you to make customizations for the data sets affected by the reorganization.

3. Modify user scripts as required by the DMSII layout changes.

4. Run the redefine command as follows:

   dbutility redefine datasource

   Important
   If any changes caused by the reorganization are not supported, the redefine command does not create the reorganization scripts. Instead, it sets the ds_mode column of the corresponding data set to 0, which forces the data set to be re-cloned. If the changes caused by the reorganization are allowed, the redefine command sets ds_mode to 31.
5. If the redefine command results in errors because the user scripts were improperly updated, run a reload command using the unload file automatically created by the redefine command. This file is named "datasource.reorg_nnn.cct", where datasource is the data source name (in uppercase) and nnn is the old update level of the database. Reloading this file restores the control tables to the state they were in before the redefine command was run. See The Reload Command. Correct the user scripts and rerun the redefine command until no errors result.
6. Examine the reorg scripts created by the redefine command (or the Customize command) to make sure they are reasonable before proceeding any further. These scripts are created in the working directory and have names of the form "script.reorg_nnn.tablename", where nnn is the old update level of the DMSII database. If you see SQL statements that are likely to take a very long time to execute, consider restoring the control tables and setting the use_internal_clone option or the corresponding ds_options bit for the data set. The internal clone uses a "select into" (CTAS in Oracle) to copy the old table (which is first renamed), re-creating the table with the needed changes while preserving the original data. The major time-consuming operations are setting the initial values for added columns and ALTER commands that change the data type of columns. Doing this for a table with several million records can take a long time. The internal clone operates at bulk loader speed and is subject to the same rules as the bulk loader.
7. Run the reorganize command as follows (this step automatically executes a generate command):

   dbutility reorganize datasource

   The reorganize command does the following:
   - It generates new Client scripts for all data tables whose layouts have changed, by running the same code that a generate command would.
   - For each data set affected by the reorganization, it runs the scripts created by the redefine command to reorganize the tables associated with the data set. If these scripts run successfully, it restores ds_mode to its value before the reorganization. Conversely, if a script fails, it sets ds_mode to 33 for the data set in question to indicate that the attempt to reorganize the table has failed, and it stops prematurely.

   If the command completes successfully, proceed to step 8. Otherwise, determine why the command failed and decide what to do about it. Your options include:
   - Give up and re-clone the data set that could not be reorganized by setting its ds_mode to 0. Rerun the reorganize command when you do this, as the command stops when it encounters an error; restarting it after taking care of the problem data set completes the task for any data set that still has a ds_mode of 31. If you run a process command while some data sets still have their ds_mode columns set to 31, you get an error telling you that you need to run a reorganize command.
   - Correct the script that failed, set its ds_mode back to 31, and rerun the reorganize command. (For an illustrative SQL statement, see the sketch after this procedure.)
   - If you are proficient in SQL, you can reorganize the table by means external to the Client, performing the action that the reorg scripts were attempting to do. If you succeed, set ds_mode back to its original value (most likely 2). You also need to run a refresh command for the problem data set to replace the old stored procedures, which are now out-of-date.

   The reorganize command is restartable after a failure. Data sets that were already processed successfully are not affected by rerunning the command, and the data set that caused the command to fail is skipped unless its ds_mode column is set to 31.
8. Run a process command to resume change tracking:

   dbutility process datasource
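As a sketch of the kind of SQL statement referred to in the options of step 7, the ds_mode column can be reset directly in the DATASETS Client control table. The data set and data source names below are illustrative placeholders:

update DATASETS set ds_mode = 31 where data_source = 'DEMODB' and dataset_name = 'CUSTOMER'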
About the redefine Command
You will be prompted to run the redefine command when a data set is reorganized or when the Support Library is recompiled. (A Support Library recompile indicates that either the layout has changed, such as changes to ALTER or FILTER, or the SUPPORT option in the SOURCE declaration changed.)
In all of the aforementioned cases, Databridge Engine treats the situation like a reorganization and requires that you run a redefine command.
When you run the redefine command, it does the following:
- Creates a backup of the Client control tables for the data source by silently performing an unload command. The unload file is created in the data source's working directory when the Client first detects the reorganization. The unload file is named "datasource.reorg_nnn.cct", where nnn is the value of the update level prior to running the redefine command (this value is saved in the old_update_level column of the DATASOURCES entry).
Re-creates the relational database layout for all data sets that are marked as needing to be redefined.
-
Runs user scripts (if you use them) to preserve changes to the Client control tables. If you are using the Administrative Console's Customize command, all changes are restored from the old controls tables.
-
Determines which data sets have tables whose layouts have changed, updates the
ds_mode
column in DATASETS accordingly, and creates reorganization scripts that will alter the relational database tables to match the changes in the reorganized DMSII data sets.
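To give a feel for these reorganization scripts, a script for a data set that gained a new column might contain statements roughly like the following. This is only an illustrative sketch (Oracle syntax, with a hypothetical table and column name); the actual generated scripts depend on the DMSII changes involved:

alter table customer add (credit_limit number(10))
update customer set credit_limit = 0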
Reorganized Data Sets
When a data set has been reorganized (status_bits = 8), the redefine command compares the layouts of tables mapped from the data set in the existing Client control tables with the new layouts and does the following:
- If no changes occur to the layouts of tables mapped from the data set, the redefine command sets the data set's ds_mode column to its value before the reorganization (1 or 2), indicating that the data set is ready to be updated.
For tables for which the layout has changed, the
redefine
command creates reorganization scripts that will modify the relational database tables to match the changes in the reorganized DMSII data sets. -
If the DMSII reorganization introduces one or more new columns, one of the following occurs based on the value of the parameter
suprress_new_columns
.Is Result True The active
column is set to 0 for new items in the DATAITEMS Client control table and for new tables in the DATATABLES Client control table.
The nextprocess
command does not re-clone the data set.False The new columns are added to the tables in the Client database. These columns will be set to the appropriate values based on their INITIALVALUE defined in the DASDL.
The nextprocess
command will continue to populate the table including the new column. If new tables appear, the data set will be re-cloned. -
If the reorganization introduces one or more new data sets, one of the following occurs base on the value of the parameter
suppress_new_datasets
.Is Result True Databridge Client sets the active
column in the corresponding entry in the DATASETS Client control table to 0, and the data set is not mapped.False Databridge Client sets the active
column in the corresponding entry in the DATASETS Client control table to 1 (unless the data set is a REMAP), and the layout of the corresponding relational database tables is defined in the DATATABLES and DATAITEMS Client control tables.
You must run areorganize
orgenerate
command to create the scripts for these new tables. These data sets are automatically cloned the next time you run aprocess
command. -
For any reorganized data set whose
active
column is 0, theredefine
command updates the corresponding Client control table entries, leaving theactive
column set to 0. This ensures that if you later decide to clone that data set, you only need to set theactive
column to 1 and execute aredefine
and agenerate
command.
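After a redefine command completes, you can review its outcome directly in the DATASETS Client control table with a query along these lines (a sketch; the data source name is illustrative):

select dataset_name, active, ds_mode, status_bits from DATASETS where data_source = 'DEMODB'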
Performing Reorganizations Using an Internal Clone
Version 6.6 of the Databridge software introduced a new way of reorganizing tables that does not use alter commands. In some cases, the process of reorganizing a table by using an alter command can be very expensive. For example, if you try to change a column that is an int to a dec(10) when using SQL Server, the alter command will cause every single change to be logged, which can have rather disastrous effects if the table is large. If you run out of space for the log, the alter command abends, leading to a massive rollback.
The use_internal_clone parameter allows you to select the default method of doing reorganizations. See use_internal_clone for more information. You can then override it (on a data set by data set basis) by using the Client Configurator to change the setting of the ds_options bit DSOPT_Internal_Clone (see DSOPT_Internal_Clone in the section DATASETS Client Control Table for a description of this bit).
The internal clone is comparable (in terms of speed) to using the bulk loader to copy the data from the old table to the new table. In the case of SQL Server, to make it run fast you must make sure that the database's recovery model is not set to "Full", as is the case for the bulk loader (temporarily change the recovery model to "Simple" or "Bulk-logged" when you run an internal clone).
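For example, on SQL Server the recovery model can be switched around the internal clone with statements like the following (the database name ClientDB is a placeholder; switch back to your normal recovery model afterwards):

ALTER DATABASE ClientDB SET RECOVERY BULK_LOGGED;
-- run the reorganize command that performs the internal clone here
ALTER DATABASE ClientDB SET RECOVERY FULL;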
DMSII Changes to Record Locations
DMSII record locations are changed in the following circumstances:
- Garbage collection reorganizations
- File format conversions
- Record format conversions
Garbage collection and file format conversion reorganizations only affect data sets that use AA Values as keys. Therefore, unless the data sets using AA Values as keys are small and garbage collection reorganizations at your site are infrequent, we recommend that you use RSNs. (If you're unable to use RSNs, composite keys are a viable alternative to AA Values. However, they are error prone and can result in false duplicate records.)
When a data set is affected by a garbage collection reorganization or a
file format conversion, the Databridge Client sets the ds_mode
column to 12 in the
DATASETS Client control table and displays the message:
WARNING: DMSII reorganization has occurred; AA Values for DataSet name [/rectype] are no longer valid
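To see which data sets have been flagged for re-cloning in this way, you can query the DATASETS Client control table; ds_mode values of 11 or 12 identify data sets that the -y option of the process command will re-clone. A sketch, with an illustrative data source name:

select dataset_name, ds_mode from DATASETS where data_source = 'DEMODB' and ds_mode in (11, 12)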
When a record format conversion affects a data set that uses AA Values,
the redefine
command forces that data set to be re-cloned, even if the
tables derived from the data set are not affected by the reorganization.
DMSII Reorganization When Using Merged Tables
The merged tables feature combined with multi-source databases allows a user to store data from multiple separate DMSII databases in a single relational database. The requirement is that all the DMSII databases have the same DASDL and always be kept in sync as far as reorganizations are concerned. This section documents how to handle such reorganizations. Everything said about the single data source case still applies here. The first thing you need to do is let the Clients catch up with all the updates until they reach the point in the audit trail where the reorganization occurred. Make sure that you let all the Clients catch up before doing anything else.
Once all the data sources are caught up, you will need to run redefine commands for all of the data sources. Once this is completed, you will then need to run a reorganize command for one of the data sources. Do not do this for more than one data source: there is only one set of tables in the relational database, and if any of the scripts alter a table, running the scripts a second time will usually result in SQL errors, as the ALTER commands will most likely no longer be valid. For example, if the ALTER command adds a column, an attempt to add it again will fail. We added the -n option to the reorganize command to make it work for the second and any subsequent data sources in a multi-source environment. This allows you to get all the data sources ready for processing updates by generating scripts for reorganized data sets and refreshing the stored procedures for the tables associated with such data sets. Finally, the command updates the ds_mode column in DATASETS, restoring it to the value it had before the redefine command was run. This command appears near the bottom of the Advanced menu for the data source in the Administrative Console's Customize command.
If the reorganization requires that a data set be re-cloned, you should add the -k option to the first process or clone command you use so the table gets dropped. In the absence of the -k option, the Client will run the cleanup script, which removes all the records associated with the current data source. The alternative is to manually drop the table for the first data source. Once the tables to be re-cloned have been dropped, the remaining data sources can operate normally.
Rollbacks
A DMSII "rollback" restores the DMSII database to an earlier point in time in order to correct a problem. While the DMSII database is being restored, replication stops. The Client must then be restarted for replication to resume.
If the Client has processed updates after the DMSII restore point, this replicated data will be wrong. Upon finding bad information in the stateinfo, the Databridge Engine typically returns a message stating that a rollback has occurred. To resolve this problem, the relational database must also be rolled back (to the DMSII restore point or earlier).
If the Client hasn't processed updates after the DMSII restore point, no action is required. This can often be the case as the Client tends to lag behind the DMSII database by several audit files during the processing of audit files generated by batch jobs.
Recovering from DMSII Rollbacks
You'll need to recover the relational database after a DMSII rollback in situations where the Client database is caught up with the DMSII database (that is, there is no lag time between the two). There are two preferred ways to do this:
Programmatic rollback | Undoes all transactions that occurred after the specified rollback point (typically a time prior to the DMSII restore point). This is only possible if the relational database is audited, which is rarely the case. |
Reload the database | Entails reloading the database from a backed-up copy. This requires that all of the audit files—from the point when the relational database was backed up forward—to be available. If the audit files aren't available, recloning is the only option. |
Recloning the database is usually very time-consuming and is recommended only as a last resort, in cases where the relational database contains little data, or when the required audit files are not available. For information about recloning, see Recloning.
Caution
Using shortcuts to recover a relational database after a DMSII rollback, such as updating the tables using scripts or resetting the State Info, is not only ineffective but problematic. These methods leave obsolete updates in the Client database and may cause valid updates to be skipped after the Databridge Client resumes tracking.
Recloning
Reasons for recloning include the following:
- DMSII reorganization
- DMSII rollback
- An update is not possible (for example, because a table does not have a unique key)
- One or more of the Databridge data tables in the relational database were removed
You can use either the process or clone command to re-clone data sets. The clone command lets you specify individual data sets on the command line. The process command automatically re-clones all data sets whose active column is 1 and whose ds_mode column is 0. Both commands perform fixups, tracking and processing updates as needed (unless the defer_fixup_phase or the stop_after_fixups parameter is set to True). See Recloning Individual Data Sets.
If you're recloning the entire database, the process is more involved. See Recloning a Database.
Recloning Individual Data Sets
Use one of the following procedures to re-clone data sets.

To reclone with a process command

1. Set the current directory to the one you created for the data source (the directory from which you ran a generate command for the data source). Make sure that the directory contains the scripts for this data source.

2. Set the ds_mode column (in the DATASETS Client control table) to 0 for the data sets you want to clone by running a SQL command (see the example after this procedure). If you are recloning all data sets, using the "-Y reclone_all" option eliminates the need to do this, as the Client will update the DATASETS table automatically when this option is used.

3. Run the process command with the -y option, as follows:

   dbutility process -y datasource

   The -y option forces any data sets whose ds_mode is set to 11 or 12 to be re-cloned, in addition to re-cloning data sets whose ds_mode is set to 0. After the data extraction process is complete for the data sets being re-cloned, Databridge data tables whose active columns are set to 1 in their corresponding Client control table (and whose ds_mode is set to 2) are updated.
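As an example of the SQL command mentioned in step 2, the following statement marks a single data set for re-cloning. The data set and data source names are placeholders:

update DATASETS set ds_mode = 0 where data_source = 'DEMODB' and dataset_name = 'CUSTOMER'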
To reclone with a clone command

1. Set the current directory to the one you created for the data source (the directory from which you ran a generate command for the data source). Make sure that the directory contains the scripts for this data source.

2. Set the parameter defer_fixup_phase to True to suspend audit file processing. If you don't do this, audit files will be processed twice, once for the data set you clone and once for all of the other data sets.

3. Run the clone command, specifying the data sets you want to re-clone:

   dbutility clone datasource datasetname

4. Synchronize the tables by running a process command. Synchronization occurs when all data sets reach the same point in the audit trail.
For clone command syntax, see dbutility Commands.
Recloning a Database
Recloning the relational database can be an efficient means of recovering it if it doesn't contain a lot of data. Otherwise, it can be time-consuming and costly, as recloning uses host resources. These reasons alone often make recloning a last resort when no backup is available. (These issues are among the reasons why we developed Enterprise Server, which makes processes like this one more efficient.)
We recommend that you use the following procedure instead of setting
ds_mode
to 0 for all data sets using a SQL query and running a process
command, because it ensures that you have the latest copy of the DMSII
layout.
1. Make sure that you have the latest copy of the DMSII layout.

2. Run a drop command to drop the data source.

3. Run a define command.

4. Run a generate command.

5. Run a process command.
Adding a Data Set
Use this procedure to add a data set after you clone the DMSII database. You don't need to reclone the entire database.
To add a data set
1. Run a relational database query tool and list the contents of the DATASETS Client control table with the following SQL command:

   select dataset_name, active, data_source from DATASETS

2. Set the active column for the data set you want to add to the Databridge data tables to 1 (on), as follows:

   update DATASETS set active=1 where dataset_name='datasetname'

3. Run a redefine command.

4. Run a generate command to create new scripts that populate the resulting table.

5. Run one of the following commands to populate the new tables that correspond to the new data set:

   dbutility process datasource
   --or--
   dbutility clone datasource datasetname
Note
If you run a process command, the Databridge data tables whose active columns are set to 1 in their corresponding Client control table are also updated at this time.
After you complete this procedure, update your data set selection script (script.user_datasets.datasource) so that you do not lose this change the next time you run a define command.
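For example, the data set selection script would typically carry a statement mirroring the one used in step 2 above (the data set name ORDERS is a placeholder):

update DATASETS set active=1 where dataset_name='ORDERS'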
Dropping a Table
Use this procedure when the Client no longer uses a Databridge data table in the relational database.
To drop a table from the Administrative Console, see the Databridge Client Console Help.
To drop a table
1. Update your data set global mapping customization and global data table customization scripts, depending on whether you are dropping a primary or secondary table, to reflect this change. See Customizing with User Scripts.

2. If you are dropping all of the tables derived from a data set, set the active column corresponding to the data set to 0 (in the DATASETS Client control table) and then run the data set selection script (script.user_datasets.datasource) using the dbutility redefine command.

3. If you are dropping a secondary table, set the active column corresponding to the table to 0 (in the DATATABLES Client control table) and then run the data table customization script (script.user_define.primary_tablename) for the primary table using the redefine command.

4. From a command line, set the current directory to the working directory for the data source, and then run a script such as the following (Windows):

   dbutility -n runscript dbscripts\script.drop.tablename
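The path shown above uses the Windows path separator. On UNIX or Linux clients, the same script would presumably be run with a forward slash in the path (an assumption based on platform path conventions, not a documented command form):

dbutility -n runscript dbscripts/script.drop.tablename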
Backing Up and Maintaining Client Control Tables
To help you maintain your Client control tables, Databridge provides three commands that allow you to back up, restore, and re-create copies of your Client control tables. Each of these commands is described in this section.
The Unload Command
The unload command creates a text file that contains a record for each of the entries in the various Client control tables. For best results, run an unload command before running a redefine command.
Format | The format of the unload command is as follows: dbutility [options] unload datasource filename |
Options | The list of options is the same as those for signon_options. Additional options include -t , -T , and -f . |
Data Source | If a datasource of "_ALL" is specified, the Databridge Client writes all data sources to the backup file (filename). If a specific data source is specified, the Databridge Client writes only the entries for that data source to the backup file. |
Sample Run
15:05:25 dbutility unload demodb demodb.cct
15:05:25 Databridge Client version 7.0.0.000 [OCI/Oracle]
15:05:25 Copyright (C) 2019 Micro Focus or one of its affiliates.
15:05:30 Loading control tables for DEMODB
15:05:32 Unloading control tables for DEMODB
15:05:32 Control tables for DataSource DEMODB written to file "demodb.cct"
15:05:32 Client exit code: 0 – Successful
The Reload Command
The reload command enables you to restore the Client control tables from a file that was created using the unload command.
Format | The format of the reload command is as follows: dbutility [signon options] reload datasource filename [dataset1, dataset2, ...] NOTE: Client control table changes made since the tables were unloaded will be lost. Depending on what has changed, data table records could also be affected, requiring recloning. |
Options | The list of options include -t , -T , -f , and -k . The -k option forces Databridge to keep the stateinfo in the control tables for data sets that are in normal mode (ds_mode = 2) and that have client_fmt_level and item_count columns that remain unchanged (there is no reorganization involved). |
Data Source | If a datasource of "_ALL" is specified, the Databridge Client restores all data sources contained in the backup file. If a specific data source is specified, the Databridge Client restores only the entries for that data source from the file. If this is further qualified by a data set list, the Databridge Client restores only the entries for the data sets specified. Note that all the data sets specified in the list must already exist. |
Sample Run
17:16:26 dbutility reload demodb demodb.cct
17:16:27 Databridge Client version 7.0.0.000 [OCI/Oracle]
17:16:27 Copyright (C) 2021 Micro Focus or one of its affiliates.
17:16:35 Reloading Control table entries for DataSource DEMODB from file "demodb.cct"
17:16:45 Control tables for DataSource DEMODB reloaded from file "demodb.cct"
17:16:45 Client exit code: 0 – Successful
The Refresh Command
The refresh command enables you to drop and recreate all of the stored procedures for the tables associated with the given data set in the specified data source. It is a variation of the runscripts command that is designed to run portions of the Databridge Client scripts (script.drop.tablename and script.create.tablename). This command is useful when you want to manually handle a data set that would otherwise be recloned after a DMSII reorganization.
Note
In the case of variable-format data sets, the tables for all the record types that have their active column set to 1 in the DATASETS Client control table are refreshed.
Format | The format of the refresh command is as follows: dbutility [options] refresh datasource dataset |
Options | The list of options is the same as those for signon_options. If "_ALL" is specified for dataset, Databridge Client refreshes the stored procedures for all active tables that correspond to data sets whose active columns are 1. If a specific data set is specified, the Databridge Client refreshes only the stored procedures for the tables mapped from that data set. All tables for the specified dataset must have been created. |
Sample Run
12:39:45 dbutility refresh DEMODB CUSTOMER
12:39:45 Databridge Client, Version 7.0.0.000 (64-bit) [OCI/Oracle]
12:39:45 Copyright 2021 Micro Focus or one of its affiliates.
12:39:45 Loading control tables for DEMODB
12:39:45 Stored procedures for all tables of DataSet CUSTOMER successfully refreshed
12:39:45 Client exit code: 0 - Successful
In this case, the data set CUSTOMER is mapped to a single table named customer. The refresh command executes the following SQL statements.
begin drop_proc('u_customer'); end;
begin drop_proc('i_customer'); end;
begin drop_proc('d_customer'); end;
create procedure u_customer (...) update customer set ... where ... ; end;
create procedure i_customer (...) insert into customer (...) values (...); end;
create procedure d_customer (...) delete from customer where ... ; end;
This effectively replaces all of the stored procedures with a fresh copy, while leaving the tables unchanged. This command is particularly useful when the index of the tables has changed. For example, if the data set CUSTOMER initially uses AA Values as keys, and a DMSII garbage collection occurs, you can avoid recloning this data set if it is mapped to a single table by creating a composite key.