Appendix C: Client Configuration
Client Configuration Files
The Databridge Client 6.1 and later versions use binary configuration files. Binary configuration files work with both service-initiated operations and command-line operations; however, if you use the service, you must use binary configuration files. (Command-line operations can use either binary or text configuration files. For information about creating text configuration files, see Export or Import a Configuration File.)
The Databridge Client software uses the following configuration files:
- The service configuration file (dbcontrol.cfg). This file contains settings for the service (Windows) or daemon (UNIX) that specify scheduling, passwords, logs, and more. For more information, see Appendix F: Service Configuration.
- Data source configuration files (dbridge.cfg) used by the Databridge Client programs (DBClient, DBClntCfgServer, and dbutility). Each data source has its own configuration file, which can be updated using the Administrative Console's Configure command in the data source's menu. The Client configuration file overrides any equivalent parameter settings on the host.
In a new installation, the service creates the service configuration file in the config directory of the service's working directory (also referred to as the global working directory) the first time the service is started. The service also creates the logs and scripts sub-directories of the service's working directory at that time. When you add a data source in the Administrative Console, the service creates a binary configuration file "dbridge.cfg" for the data source. Data source configuration files are stored in the config subdirectory of the data source's working directory.
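The resulting directory layout typically looks like the following sketch (the data source name MYSOURCE is illustrative, and data source working directories are commonly created under the global working directory):

<global working directory>/
    config/
        dbcontrol.cfg      ; service configuration file
    logs/
    scripts/
    MYSOURCE/
        config/
            dbridge.cfg    ; data source configuration file
        logs/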
In an upgrade, as long as you are upgrading from version 6.1 SP3 or newer, you should be able to use the same working directory. If this is not possible, rename the old working directory and use the Migrate utility to recreate a working directory under the old name. You will also need to use this utility if you are upgrading from software older than 6.1, or if you are switching from command-line operations to service-based operations; in both of these cases you must use a different working directory. The Migrate utility takes your existing configuration files and creates new data source configuration files from them. It also creates a new service configuration file and adds your preexisting data sources to it.
How Do I Edit the Configuration File?
Caution
You should never directly modify a binary configuration file. This will corrupt the file.
Each time you change your configuration settings in the Administrative Console, you
update the binary configuration files. If you need to change a parameter that is not supported
by the Administrative Console, you can export the binary configuration
file to a readable text file. After you edit the text file, you can
import the updated file to create a new binary configuration file. The
import
command performs all the necessary checks to ensure that your
changes are valid. If you don't like the idea of using binary files for
command-line operations, you can force the export
command to replace
the binary file with an equivalent text file.
Because passwords are encrypted in the configuration file, there is no way to read them. If a password is wrong, export the configuration file and reenter the password as plain text. Then, import the file and export it again to remove the clear text passwords from the text configuration file. Alternatively, you can use the Administrative Console to change passwords.
Export or Import a Configuration File
Use the export
command to create an editable text file from your
configuration file. If no configuration file exists, the export
command creates a text file with the default configuration settings.
After you make your changes, the import
command will convert the text
file to binary for use with the Client. Text configuration files can
only be used with the command-line Client.
The export
and import
commands are typically used from a command
line; the Administrative Console supports only the export
command (the Export Configuration
item in the data source's Advanced menu). When you export the configuration file, the Databridge
Client creates a text file that reflects the current values of the
configuration parameters. Any passwords in the file are automatically encrypted.
Caution
To change or encrypt a password that was manually entered in a text configuration file, use an import
command followed by an export
command to overwrite the text configuration file. See Change or Encrypt a Password.
To export the configuration file for a data source
- Open a command session and run the following command:

dbutility [options] export [filename]

where [filename] is an optional parameter to name the exported file something other than the default "dbridge.ini".

The exported text file is written to the config subdirectory of the data source's working directory.

| Option | Description |
|---|---|
| -u | Use this option if you export a text file named "dbridge.cfg". This allows the Client to overwrite the existing binary configuration file "dbridge.cfg" with a text configuration file of the same name. For example: dbutility -u export dbridge.cfg |
To import the configuration file for a data source
Use this procedure to create a binary configuration file from a text Client configuration file.
- Open a command session and run the following command:

dbutility [options] import [filename]

where [filename] is an optional parameter to specify a filename other than the default, "dbridge.ini". When no option or filename is specified, the import command processes the text file dbridge.ini in the config directory and creates an equivalent binary configuration file, "dbridge.cfg", in the same directory. If the file "dbridge.ini" does not exist in this location, the import command creates a binary configuration file with the default values. If the text file contains errors, the Client returns an error to help you identify the problem and no binary file is created.

| Option | Description |
|---|---|
| -f filename | Use this option to specify a filename or path other than the default. If this option is omitted, the Client tries to read the file "dbridge.cfg" in the config directory of the data source's working directory. To indicate a different location, type a backslash (Windows) or forward slash (UNIX) followed by the full path, including the filename. For example, /home/user/xyz/foo/myconfig.cfg |
| -u | This option is required to allow the existing configuration file to be overwritten with a new file of the same name. Otherwise, the Client will try to read (import) the configuration from a file named "dbridge.ini". For example, the command dbutility -u -f dbridge.cfg import imports (reads) a file named "dbridge.cfg" and creates the binary configuration file "dbridge.cfg", regardless of whether the imported file is a text or binary file. |
To export the service configuration file
- Open a command session and, from the Client's global working directory, run the following command:

dbctrlconfigure export
This command reads the binary configuration file "dbcontrol.cfg" in the config sub-directory of the global working directory and creates an editable text configuration file "dbcontrol.ini" in the same location.
To import the service configuration file
- Open a command session and run the following command:

dbctrlconfigure import
This command reads the text configuration file "dbcontrol.ini" in the config sub-directory of the global working directory and creates a binary configuration file named "dbcontrol.cfg" in the same location.
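For example, a complete service configuration round trip looks like this (run from the global working directory, using whatever text editor you prefer):

dbctrlconfigure export
(edit config/dbcontrol.ini with a text editor)
dbctrlconfigure import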
Change or Encrypt a Password
Use this procedure for any of the following situations:

- To change the password for the user ID that you use to sign on to the database in your text or binary configuration file
- When the KEY (host password) on the host has changed and you need to update and encrypt the hostpasswd value in the Client configuration file. The KEY can only be changed by editing the DBServer control file (DATA/SERVER/CONTROL) on the host.

Passwords in the Client configuration file are automatically encrypted when you use the export command to export the file (see Export or Import a Configuration File).
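For example, a command-line sequence for updating the host password might look like this (the value NEWKEY is illustrative):

dbutility -u export dbridge.cfg
(edit config/dbridge.cfg and set: hostpasswd = NEWKEY)
dbutility -u -f dbridge.cfg import
dbutility -u export dbridge.cfg

The final export rewrites the text configuration file with the password in encrypted form, so no clear-text password remains on disk.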
Command-Line Options
The following command-line options have no equivalent configuration parameter:
| Option | dbutility Command | Description |
|---|---|---|
| ? | | Short help. |
| -a | display, unload and reload | Toggles the setting of the config parameter display_active_only. Causes the unload and reload commands to include the AF_STATISTICS table entries. |
| -c | process and clone | Toggles the setting of the config parameter defer_fixup_phase. |
| -d | All | Default tracing. |
| -f filename | All | Specifies the configuration file name. |
| -h | | Long help. |
| -k | reload, process, clone and drop | Makes the command preserve the stateinfo of data sets that have a ds_mode of 2 and have not been reorganized. Allows tables to be dropped in a multi-source environment and for tables that preserve deleted records. |
| -l | process and clone | (SQL Server Client only) Forces the Client to use the bcp utility regardless of the settings in the data sets' ds_options column. |
| -m | All | Appends milliseconds to all timestamps in trace file output messages, except for messages that are also logged. |
| -n | runscript, createscripts, reorganize and rowcounts | Makes the runscript command treat its argument as a full file specification. Disables user script backups by the createscripts command. Allows the reorganize command to be run after it has been run for the first data source in a multi-source setup. Causes the rowcounts command to display the table row counts without refreshing them. |
| -r | redefine | Toggles the setting of the use_dbconfig parameter, which determines whether the command uses user scripts. |
| -s | process, clone and createscripts | Disables the use of bulk loading during data extraction. Makes the createscripts command add the data source name to "where" clauses. |
| -t mask | All | Tracing options. |
| -u | configure, define, redefine, generate, dropall and rowcounts | Unconditionally performs the requested command, overriding any warnings that would otherwise be displayed. Causes all table counts to be refreshed for the rowcounts command. |
| -v | All | Enables verbose log output that is normally not displayed. |
| -w | clone or process | Toggles the setting of the use_dbwait parameter. |
| -x | clone | Clones all active data sets except those specified on the command line. |
| -y | process | Instructs the Client to re-clone all data sets whose ds_mode has a value of 11 or 12. |
| -z | clone or process | Instructs dbutility not to update the relational database during a clone or process command. This option is useful in determining how much non-database time is required to extract data for a data set. |
| -A | clone or process | Prevents the Databridge Client from dropping a table during a clone (the Databridge Client drops only the index). |
| -B | display | Limits the display command to the DATASETS Client control table records. |
| -C | All | Toggles the setting of the dbutility config parameter inhibit_console. |
| -D database | All | Specifies the name that identifies the Oracle database or the TNS service that is being accessed. |
| -F afn | process | Makes the Client act as if a QUIT AFTER afn command had been executed. This option applies to process and clone commands only. The range of values allowed for afn is 1–9999. |
| -K | process | Prevents the audit file removal WFL from being run on the mainframe after the Engine finishes processing an audit file. |
| -L | All | Forces the Client to start using a new log file. |
| -M | All | Specifies that the configuration parameter use_nt_authen should be set to False (SQL Server Client only). |
| -N | process and clone | Toggles the setting of the config parameter enable_optimized_sql. |
| -O ODBCdatasource | All | Specifies the ODBC data source to connect to (SQL Server and PostgreSQL Clients only). |
| -P password | All | Sets the password associated with the user ID for the relational database. The password is limited to 30 characters. |
| -R | redefine | Forces all data sets to be redefined. |
| -T | All | Forces the Client to create a new trace file when tracing is enabled. |
| -U userid | All | Specifies the user ID for the relational database. The user ID must have the appropriate resource privileges for the designated relational database. |
| -V | unload | Specifies the control table version to be used by the unload command, so that the created file can be read by older Clients. |
| -W | All | Specifies that the configuration parameter use_nt_authen should be set to True (SQL Server Clients only). |
| -Y reclone_all | process | Causes all active data sets to be re-cloned. |
| -Z | clone or process | Forces the Client to drop and create the tables of all re-cloned data sets, regardless of the use of the deleted_record or expanded update_type columns. |
Syntax
Follow these conventions in the configuration file:

- For hexadecimal values, use the 0xnnnn format.
- A semicolon (;), except within double-quoted strings, indicates that the remainder of the current line is a comment.
- Section headers are enclosed in square brackets.
- Section headers and parameter names are not case-sensitive.
- Spaces and tabs between entries are ignored; however, spaces within double-quoted values (for example, password values) are read.
- If you are not using a parameter, either comment the parameter out or delete the corresponding line in the configuration file. Do not leave an uncommented parameter without a value after the equal sign (=). Doing so results in a syntax error.
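The following fragment applies these conventions (all values are illustrative):

[Signon]                        ; section header in square brackets
;user = USERID                  ; commented-out parameter
password = "my pass;word"       ; spaces and semicolons inside double quotes are read literally

[params]
default_user_columns = 0x2000   ; hexadecimal value in 0xnnnn format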
You can specify some of these parameters only in the Client configuration file. Other parameters have equivalent command-line options and environment variables. For a complete list of configuration file parameters, their equivalent command-line options, and their related Client command, see Reference Tables.
Sample SQL Server Client Configuration File
You can view the configuration file for SQL Server by using the Export command. See Export or Import a Configuration File.
To use a parameter that is commented out, delete the semicolon (;) and, after the equals sign (=), enter a value that is appropriate for your site. Boolean parameters can be represented by True or False.
In the example below, some of the commented-out parameters have a value of -1. These parameters include the Databridge Engine control file parameters that can be overridden by the Client (commit frequency parameters and engine workers). This value indicates that the corresponding parameter in the Databridge Engine (or Server) control file will not be overridden by the Client. Do not uncomment these lines, unless you want to supply an actual value. Otherwise, the Client will issue an error.
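For example, to have the Client override one of these Engine parameters, remove the semicolon and replace -1 with an actual value (the value shown here is purely illustrative):

;commit_txn_inc = -1

becomes

commit_txn_inc = 100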
;
; Databridge Client version 7.1 SQL Server configuration file -- generated programmatically
;
[Signon]
;user = USERID
;password = PASSWORD
;datasource = DATASOURCE
use_nt_authen = false
;hostpasswd = HOSTPASSWD
[Log_File]
file_name_prefix = "db"
;max_file_size = 0
logsw_on_size = false
logsw_on_newday = false
newfile_on_newday = true
single_line_log_msgs = false
[Trace_File]
file_name_prefix = "trace"
;max_file_size = 0
[Bulk_Loader]
bcp_batch_size = 100000
bcp_code_page = "ACP"
;bcp_copied_msg = "rows copied"
bcp_packet_size = 0
;bcp_tempfile_dir = "path\\"
;bulk_loader_path = "path\\"
max_bcp_failures = 5
max_errors = 10
max_temp_storage = 400M
use_bcp = true
verify_bulk_load = 1
[Params]
;
; (1) define/redefine command parameters
;
allow_nulls = true
auto_mask_columns = true
automate_virtuals = false
bracket_tabnames = false
clr_dup_extr_recs = true
convert_ctrl_char = false
default_user_columns = 0x00000000
dflt_history_columns = 0x00000000
enable_dms_links = false
enable_dynamic_hist = false
;external_column[n] = ["name"][,[sql_type][,[sql_length][,"default"]]]
extract_embedded = false
flatten_all_occurs = false
force_aa_value_only = 0
history_tables = 0
inhibit_required_opt = false
;maximum_columns = 0
min_varchar = 4
minimize_col_updates = false
miser_database = false
new_history_types = false
optimize_updates = false
read_null_records = true
reorg_batch_size = 50000
sec_tab_column_mask = 0x00000000
split_varfmt_dataset = false
strip_ds_prefixes = false
suppress_new_columns = false
suppress_new_datasets = true
use_bigint = false
use_binary_aa = false
use_clustered_index = false
use_column_prefixes = false
use_date = false
use_datetime2 = false
use_dbconfig = true
use_decimal_aa = false
use_internal_clone = false
use_nullable_dates = false
use_primary_key = false
use_stored_procs = false
use_time = false
use_varchar = true
;
; (2) process/clone command parameters
;
alpha_error_cutoff = 10
aux_stmts = 100
;batch_job_period = 00:00, 00:00
century_break = 50
;commit_absn_inc = -1
;commit_idle_database = -1
;commit_longtrans = -1
;commit_time_inc = -1
;commit_txn_inc = -1
;commit_update_inc = -1
controlled_execution = false
;convert_reversals = -1
correct_bad_days = 0
dbe_dflt_origin = direct
defer_fixup_phase = false
discard_data_errors = false
display_bad_data = false
enable_af_stats = false
enable_doc_records = false
enable_minimized_col = false
enable_optimized_sql = true
;engine_workers = -1
error_display_limits = 10,100
inhibit_8_bit_data = false
inhibit_console = false
inhibit_ctrl_chars = false
inhibit_drop_history = false
inhibit_init_values = false
keep_undigits = 0
lag_time_alert = 10
linc_century_base = 1957
;masking_parameter[n] = "str"
max_clone_count = 10000
max_discards = 0,100
max_retry_secs = 20
max_srv_idle_time = 0
max_wait_secs = 3600,60
min_check_time = 600
n_dmsii_buffers = 0
n_update_threads = 8
null_datetime_value = 19010101
null_datetime2_value = 19010101
null_digit_value = 9
numeric_date_format = 23
preserve_deletes = false
set_blanks_to_null = false
set_lincday0_to_null = false
show_perf_stats = true
show_statistics = true
show_table_stats = true
sql_exec_timeout = 180,0
sql_heart_beat = 0
statistics_increment = 100000,10000
stop_after_fixups = false
stop_after_gc_reorg = false
stop_after_given_afn = false
stop_on_dbe_mode_chg = false
suppress_delete_msgs = false
suppress_dup_warnings = false
track_vfds_nolinks = true
use_dbwait = false
use_latest_si = false
;
; (3) Server options
;
;shutdown {until | for} hh:mm after stop
;stop {before | after} task "name"
;stop {before | after} time hh:mm[:ss]
;
; (4) generate command parameters
;
;global_table_suffix = "str"
;create_table_suffix[n] = "str"
;global_index_suffix = "str"
;create_index_suffix[n] = "str"
;user_column_suffix[n] = "str"
;
; (5) miscellaneous command parameters
;
display_active_only = true
;
; (6) user scripts
;
user_script_bu_dir = ""
user_script_dir = "scripts"
;
; (7) external data translation parameters
;
use_ext_translation = false
eatran_dll_name = "DBEATRAN.DLL"
[Scheduling]
;
; dbutility process command only
;
;daily = 08:00, 12:00, 17:00, 24:00
;exit_on_error = false
;sched_delay_secs = 600
;sched_minwait_secs = 0
;sched_retry_secs = 60
;blackout_period = 22:00, 02:00
[EbcdicToAscii]
; e1 = a1
; e2 = a2
; ...
; en = an
;
[DBConfig]
default_date_fmt = 21
global_type0_changes = true
[Encryption]
ca_file = ""
ca_path = ""
certify_server_name = false
enable_encryption = false
tls_host_name = ""
Sample Oracle Client Configuration File
;
; Databridge Client version 7.1 Oracle configuration file -- generated programmatically
;
[Signon]
;user = USERID
;password = PASSWORD
;database = DATABASE
;hostpasswd = HOSTPASSWD
[Log_File]
file_name_prefix = "db"
;max_file_size = 0
logsw_on_size = false
logsw_on_newday = false
newfile_on_newday = true
single_line_log_msgs = false
[Trace_File]
file_name_prefix = "trace"
;max_file_size = 0
[Bulk_Loader]
;bcp_code_page = <code_page>
;bcp_decimal_char = -1
;bulk_loader_path = "path\\"
enable_parallel_mode = false
inhibit_direct_mode = false
max_bcp_failures = 5
max_errors = 10
max_temp_storage = 400M ; Windows only
sqlld_bindsize = 65536
sqlld_rows = 10000
verify_bulk_load = 1
[Params]
;
; (1) define/redefine command parameters
;
allow_nulls = true
automate_virtuals = false
clr_dup_extr_recs = true
convert_ctrl_char = false
default_user_columns = 0x00000000
dflt_history_columns = 0x00000000
enable_dms_links = false
enable_dynamic_hist = false
enable_extended_types = false
;external_column[n] = ["name"][,[sql_type][,[sql_length][,"default"]]]
extract_embedded = false
flatten_all_occurs = false
force_aa_value_only = 0
history_tables = 0
inhibit_required_opt = false
;maximum_columns = 0
min_varchar = 4
minimize_col_updates = false
miser_database = false
new_history_types = false
optimize_updates = false
read_null_records = true
reorg_batch_size = 50000
sec_tab_column_mask = 0x00000000
split_varfmt_dataset = false
strip_ds_prefixes = false
suppress_new_columns = false
suppress_new_datasets = true
use_binary_aa = false
use_dbconfig = true
use_clob = false
use_column_prefixes = false
use_decimal_aa = false
use_internal_clone = false
use_nullable_dates = false
use_primary_key = false
use_stored_procs = false
use_varchar = true
;
; (2) process/clone command parameters
;
alpha_error_cutoff = 10
aux_stmts = 100
;batch_job_period = 00:00, 00:00
century_break = 50
;commit_absn_inc = -1
;commit_idle_database = -1
;commit_longtrans = -1
;commit_time_inc = -1
;commit_txn_inc = -1
;commit_update_inc = -1
controlled_execution = false
;convert_reversals = false
correct_bad_days = 0
dbe_dflt_origin = direct
defer_fixup_phase = false
discard_data_errors = false
display_bad_data = false
enable_af_stats = false
enable_doc_records = false
enable_minimized_col = false
enable_optimized_sql = true
;engine_workers = -1
error_display_limits = 10,100
inhibit_8_bit_data = false
inhibit_console = false
inhibit_ctrl_chars = false
inhibit_drop_history = false
keep_undigits = 0
lag_time_alert = 10
linc_century_base = 1957
max_clone_count = 10000
max_discards = 0,100
max_retry_secs = 20
max_srv_idle_time = 0
max_wait_secs = 3600,60
min_check_time = 600
n_dmsii_buffers = 0
n_update_threads = 8
null_digit_value = 9
numeric_date_format = 23
preserve_deletes = false
;rollback_segment_name = ""
set_blanks_to_null = false
set_lincday0_to_null = false
show_perf_stats = true
show_statistics = true
show_table_stats = true
sql_exec_timeout = 180,0
sql_heart_beat = 0
statistics_increment = 100000,10000
stop_after_fixups = false
stop_after_gc_reorg = false
stop_after_given_afn = false
stop_on_dbe_mode_chg = false
suppress_delete_msgs = false
suppress_dup_warnings = false
track_vfds_nolinks = true
use_dbwait = false
use_latest_si = false
;
; (3) Server options
;
;shutdown {until | for} hh:mm after stop
;stop {before | after} task "name"
;stop {before | after} time hh:mm[:ss]
;
; (4) generate command parameters
;
purge_dropped_tabs = false
;global_table_suffix = "str"
;create_table_suffix[n] = "str"
;global_index_suffix = "str"
;create_index_suffix[n] = "str"
;user_column_suffix[n] = "str"
;
; (5) miscellaneous command parameters
;
display_active_only = true
;
; (6) user scripts
;
user_script_bu_dir = ""
user_script_dir = "scripts"
;
; (7) external data translation parameters
;
use_ext_translation = false
eatran_dll_name = "DBEATRAN.DLL"
[Scheduling]
;
; dbutility process command only
;
;daily = 08:00, 12:00, 17:00, 24:00
;exit_on_error = false
;sched_delay_secs = 600
;sched_minwait_secs = 3600
;sched_retry_secs = 3600
;blackout_period = 00:00, 02:00
[EbcdicToAscii]
; e1 = a1
; e2 = a2
; ...
; en = an
;
[DBConfig]
default_date_fmt = 21
[Encryption]
ca_file = ""
ca_path = ""
certify_server_name = false
enable_encryption = false
tls_host_name = ""
Sample PostgreSQL Client Configuration File
;
; Databridge Client, Version 7.1 PostgreSQL configuration file -- generated programmatically
;
[Signon]
;user = USERID
;password = PASSWORD
;datasource = DATASOURCE
;hostpasswd = HOSTPASSWD
[Log_File]
file_name_prefix = "db"
;max_file_size = 0
logsw_on_size = false
logsw_on_newday = false
newfile_on_newday = true
single_line_log_msgs = false
[Trace_File]
file_name_prefix = "trace"
;max_file_size = 0
[Bulk_Loader]
bcp_batch_size = 100000
bcp_code_page = "iso-8859-1"
max_bcp_failures = 5
max_errors = 10
max_temp_storage = 400M
verify_bulk_load = 1
[Params]
;
; (1) define/redefine command parameters
;
allow_nulls = true
automate_virtuals = false
clr_dup_extr_recs = true
convert_ctrl_char = false
default_user_columns = 0x00000000
dflt_history_columns = 0x00000000
enable_dms_links = false
enable_dynamic_hist = false
;external_column[n] = ["name"][,[sql_type][,[sql_length][,"default"]]]
extract_embedded = false
flatten_all_occurs = false
force_aa_value_only = 0
history_tables = 0
inhibit_required_opt = false
;maximum_columns = 0
min_varchar = 4
minimize_col_updates = false
miser_database = false
optimize_updates = false
read_null_records = true
sec_tab_column_mask = 0x00000000
split_varfmt_dataset = false
strip_ds_prefixes = false
suppress_new_columns = false
suppress_new_datasets = true
use_column_prefixes = false
use_decimal_aa = false
use_internal_clone = false
use_nullable_dates = false
use_primary_key = false
use_time = false
use_varchar = false
;
; (2) process/clone command parameters
;
alpha_error_cutoff = 10
aux_stmts = 100
;batch_job_period = 00:00, 00:00
century_break = 50
;commit_absn_inc = -1
;commit_idle_database = -1
;commit_longtrans = -1
;commit_time_inc = -1
;commit_txn_inc = -1
;commit_update_inc = -1
controlled_execution = false
;convert_reversals = -1
correct_bad_days = 0
dbe_dflt_origin = direct
defer_fixup_phase = false
discard_data_errors = false
display_bad_data = false
enable_af_stats = false
enable_doc_records = false
enable_minimized_col = true
enable_optimized_sql = true
;engine_workers = -1
error_display_limits = 10,100
inhibit_8_bit_data = false
inhibit_console = false
inhibit_ctrl_chars = false
inhibit_drop_history = false
inhibit_init_values = false
keep_undigits = 0
lag_time_alert = 10
linc_century_base = 1957
max_clone_count = 10000
max_discards = 0,100
max_retry_secs = 20
max_srv_idle_time = 0
max_wait_secs = 3600,60
min_check_time = 600
n_dmsii_buffers = 0
n_update_threads = 8
null_digit_value = 9
numeric_date_format = 23
preserve_deletes = false
set_blanks_to_null = false
set_lincday0_to_null = false
show_perf_stats = true
show_statistics = true
show_table_stats = true
sql_exec_timeout = 180,0
sql_heart_beat = 0
statistics_increment = 100000,10000
stop_after_fixups = false
stop_after_gc_reorg = false
stop_after_given_afn = false
stop_on_dbe_mode_chg = false
suppress_delete_msgs = false
suppress_dup_warnings = false
track_vfds_nolinks = true
use_dbwait = false
use_latest_si = false
;
; (3) Server options
;
;shutdown {until | for} hh:mm after stop
;stop {before | after} task "name"
;stop {before | after} time hh:mm[:ss]
;
; (4) generate command parameters
;
;global_table_suffix = "str"
;create_table_suffix[n] = "str"
;user_column_suffix[n] = "str"
;
; (5) miscellaneous command parameters
;
display_active_only = true
;
; (6) user scripts
;
user_script_bu_dir = ""
user_script_dir = "scripts\\"
;
; (7) external data translation parameters
;
use_ext_translation = false
eatran_dll_name = "DBEATRAN.DLL"
[Scheduling]
;
; dbutility process command only
;
;daily = 08:00, 12:00, 17:00, 24:00
;sched_delay_secs = 600
exit_on_error = false
sched_minwait_secs = 0
sched_retry_secs = 60
;blackout_period = 22:00, 02:00
[EbcdicToAscii]
; e1 = a1
; e2 = a2
; ...
; en = an
;
[DBConfig]
default_date_fmt = 21
global_type0_changes = true
[Encryption]
ca_file = ""
ca_path = ""
certify_server_name = false
enable_encryption = false
tls_host_name = ""
Parameter Descriptions
Accessing configuration parameters in the Administrative Console
The Administrative Console allows you to directly update all the parameters described in this section, except for those in the [signon] section.
The parameters in the [signon] section are specified when adding a new data source. When using the Administrative Console to update configuration parameters, take the following actions:

- Navigate to the Client Managers page, Databridge Servers > Client Managers.
- Click on the desired Client Manager. This displays the data sources page.
- From the Settings menu, click on Configure. This opens the Client Configuration parameter dialogs.
- For most parameters, the line that starts with "Console:" provides the navigation steps to the parameter in question. The first item is the page name in the left pane. Some of these items have an associated "+" icon which, when clicked, displays additional items. Clicking on any of these items opens the corresponding dialog page. Directions for items within the page are enclosed in parentheses and typically include a group name followed by the start of the item name within the group. For example: PROCESSING > DMSII Data Error Handling (Character data error > Change to space). In this case, click on either PROCESSING or the "+" after it, then click on DMSII Data Error Handling, go to the "Character data error" group, and look for "Change to space".
- Change the parameters you need to change. If you want to make changes in other property pages, you can navigate to them and continue making changes; when you are done, click Save, at which point all your changes are applied.
- If there are errors in your changes when you click Save, a pop-up tells you what is wrong. Go back into the Client Configuration parameter dialog pages, correct the errors, and click Save again.
Processing Order
Configuration file options override environment variables. Command-line options override both environment variables and configuration file options.
The parameter processing order is as follows:
- The operating system login name (user ID) is used as the lowest level default for the database user ID.
- Environment variables (DBUSERID, DBPASSWD, DBDATABASE, and DBHOSTPW).
- Command-line options -d (for full tracing), -v (for verbose messages), -t (for creating a Databridge Client trace file), -T (for forcing the Client to start a new trace file), and -f (for specifying a configuration file other than the default dbridge.cfg). These options are processed in the order in which they appear on the command line.
- Parameters specified in the configuration file. You can specify the configuration file via the -f option. If you do not specify a configuration file name via the -f option, dbutility tries to open the default configuration file (dbridge.cfg in the config subdirectory of the data source's working directory); if the file does not exist, the Databridge Client uses the default values for all the configuration file parameters. The absence of a configuration file is not treated as an error when running the command-line Client; if you use the service or daemon, however, the absence of a configuration file named "dbridge.cfg" is treated as an error.
- All remaining command-line options. In this final pass, a command-line option with a configuration file equivalent overrides the configuration file entry.
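As an illustration of this precedence, if the configuration file sets use_dbwait = false, the following command toggles that setting to True for this run, because command-line options with configuration file equivalents are processed after the configuration file is read:

dbutility -w process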
[signon]
Use the [signon] section of the "dbridge.cfg" file to enter information for signing on to the relational database and Databridge Server on the host.
The configuration file must include the data source (or database, if using Oracle), signon parameters to access the relational database, and a user and a password (unless you use the SQL Server Client with Integrated Windows authentication).
When using the Administrative Console, you need to supply these parameters at the time you create the data source. To do so, right-click on the service in the tree view and click Add Data Source from the pop-up menu to open the dialog and enter these parameters.
| Parameter | Description |
|---|---|
| database | Default: None. Command-line option: -D (Oracle). This parameter is the name that identifies the Oracle instance or the Oracle Net Services node that is being accessed. If the name contains non-alphanumeric characters, you must enclose it in double quotation marks, as follows: database = "orcl.cin.microfocus.com" |
| datasource | Default: None. Command-line option: -O (Microsoft SQL Server and PostgreSQL). This parameter is the name that identifies the ODBC data source used to access the SQL database. This ODBC data source is configured using the Control Panel during the Client installation. |
| hostpasswd | Default: None. Range: 17 alphanumeric characters. Command-line option: -X. Use the host password parameter to specify the password associated with Databridge Server on the host. This parameter must exactly match the KEY parameter defined in the Host Server control file. |
| password | Default: None. Command-line option: -P. Use the password parameter to specify the password associated with the user ID for the relational database. The password must be valid for the user ID or the connection to the relational database server will fail. Passwords are limited to 30 characters. If your password contains non-alphanumeric characters other than the underscore, you must enclose it in double quotes, as follows: password = "a$bb%". NOTE: Starting with Oracle 11g Release 2, passwords are case-sensitive. The password is always encrypted in both text and binary versions of the Client configuration file. For more information, see Export the Client Configuration to a File or Export Client Configuration in the Databridge Administrative Console Help. Passwords that are communicated between the Databridge Client and Databridge Server are not encrypted when the client/server communications are not encrypted. |
| user | Default: None. Command-line option: -U. Use the user parameter to specify the user ID for the relational database. The user ID must have the appropriate resource privileges for the designated relational database, as explained in Setting Up a User ID (Windows) in the Databridge Installation Guide. |
| use_nt_authen | Default: False. Range: True or False. Command-line option: -W. The use_nt_authen parameter applies to Microsoft SQL Server Clients only. Use the Windows ODBC Data Source Administrator to set the required ODBC data source authentication method. The SQL Server database must be installed with the proper authentication mode selected: SQL Server, Integrated Windows, or Mixed Mode (that is, using both methods). When using Integrated Windows authentication, Windows Administrators are automatically included in the SQL Server user list. The SYSTEM account is only included in versions of SQL Server older than 2012. For more information, see the Databridge Installation Guide. |
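Putting these together, a minimal [Signon] section for the SQL Server Client might look like the following (all values are illustrative):

[Signon]
user = dbuser
password = "a$bb%"
datasource = MYDATASOURCE
hostpasswd = SECRETKEY
use_nt_authen = false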
[signon] parameters with equivalent environment variables
| [signon] Parameter | Environment Variable | Option | dbutility Command |
|---|---|---|---|
| database | DBDATABASE | -D | All (only applies to Oracle) |
| datasource | | -O | All (does not apply to Oracle) |
| hostpasswd | DBHOSTPW | -X | define, redefine, process, clone, and switchaudit |
| password | DBPASSWD | -P | All |
| user | DBUSERID | -U | All |
[Log_File]
Use the [Log_File] section to control the various options for the log file that is created in the logs subdirectory of the working directory for a data source.
When using the service, two sets of Client log files are generated. The
DBClient program and the command-line Client dbutility use log files,
whose default names are of the form dbyyyymmdd.log
. The DBClntCfgServer program uses log files, whose names are of the
form db_cfgyyyymmdd.log
. The prefix "db" can be changed by specifying a file_name_prefix
in the log section of the Client configuration file.
When using the Administrative Console, click on the LOGGING item to get to the "Client Log parameters" page of the dialog, and enter the prefix in the edit box titled "File name prefix".
| Parameter | Description |
|---|---|
| file_name_prefix | Default: "db". Range: 1 to 20 characters. Recommended value: data source name. Use this parameter to change the prefix of the log files for this data source. We recommend using the name of the data source as the prefix, as this ensures that log files created on the same date but for different data sources have unique names. The log files have names of the form dbyyyymmdd.log or, when necessary, dbyyyymmdd_hhmiss.log. (This parameter allows you to replace the prefix "db" with any character string, provided that the result is a legal file name.) |
| logsw_on_newday | Default: False. Range: True or False. This parameter determines whether or not the Client uses a new log file when the date changes. You may want to set this parameter to False if your log files are small, and use the logsw_on_size parameter to manage the log files. |
| logsw_on_size | Default: False. Range: True or False. Recommended value: True (when running real-time). Use this parameter to control whether or not the Client should check the log file size to see if it has reached the size defined by the max_file_size parameter. If the size of the log file exceeds that limit, the log file is closed and a new one is opened. If the current date is different from the creation date of the old file (which is part of its name), the new log file has a name of the form dbyyyymmdd.log; otherwise, the time component is added to the file name to ensure that the name is unique. |
| max_file_size | Default: 0. Range: numeric value, optionally followed by K, M, or G. Recommended value: 1M. Use this parameter to limit the size of log files. The default value of 0 indicates that no limit is imposed on the size of the log file. The suffixes K, M, and G allow you to specify the maximum file size in kilobytes, megabytes, or gigabytes. A value on the order of 1 MB is reasonable. The file size is always checked when you start the Client, regardless of the setting of the logsw_on_size parameter. When the logsw_on_size parameter is set, the log file size is also checked when the Client starts processing a new audit file. |
| newfile_on_newday | Default: True. Range: True or False. This parameter forces the Client to create a new log file when it starts if the existing log file was created on an earlier date. You may want to set this parameter to False if your log files are small, and use the logsw_on_size parameter to manage the log files. |
| single_line_log_msgs | Default: False. Range: True or False. The single_line_log_msgs parameter tells the Client to make all of its log file output messages single-line messages. When this parameter is set to True, the end-of-line characters of all multi-line outputs are replaced by spaces. This parameter exists to assist log file analysis programs that fail to parse multi-line output messages. |
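For example, applying the recommended values above to a data source named MYSOURCE (the name is illustrative) gives a [Log_File] section like this:

[Log_File]
file_name_prefix = "MYSOURCE"
max_file_size = 1M
logsw_on_size = true
newfile_on_newday = true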
[Trace_File]
Use the [Trace_File] section to control the various options for the trace file, created in the trace subdirectory of the working directory for a data source.
When using the Administrative Console, click on LOGGING > Trace Log to get the Trace parameters page of the dialog.
| Parameter | Description |
|---|---|
| file_name_prefix | Specifies a string (up to 20 characters in length) to change the default prefix "trace". |
| max_file_size | Specifies the size limit of trace files. You can enter a number with a suffix of K, M, or G to indicate the unit of measure (kilobytes, megabytes, or gigabytes). |
[Bulk_Loader]
The bulk loader parameters apply to the bulk loader utility for your relational database -- SQL*Loader for Oracle and bcp for SQL Server.
When using the Administrative Console, click on BULK LOADER to get to the "Bulk loader parameters" page of the dialog.
| Parameter | Description |
|---|---|
| bcp_batch_size | Default: 100,000 rows per batch. Range: 0 or 1000–10000000 rows per batch. bcp, BCP API, and PGLoader: SQL Server and PostgreSQL. Specifies the batch size used during bulk loader operations. This parameter is supplied to the bcp utility to make it load the table in several batches of the given size. When using the BCP API, after bcp_batch_size rows are loaded the Client calls bcp_batch to commit these rows. A value of zero causes the bcp utility to load the entire group of records in the data file in one batch. Copying all of the rows of a very large table in one batch may require a high number of locks on the Microsoft SQL Server database. When you specify a nonzero value, the Databridge Client adds the -b batch_size option to the bcp command line; a value of 0 omits the -b option. |
| bcp_code_page | Default: "". Range: "String". Bulk Loader utility: SQL Server, Oracle, and PostgreSQL. Adds the line "CHARACTERSET <code_page>" to the SQL*Loader control file. Consult the Oracle documentation for the exact names of the code pages, as Oracle uses its own notation. The typical code page for 8-bit character data is "WE8ISO8859P1". You need to specify a bcp_code_page when dealing with a UTF8 database. |
| bcp_copied_msg | Default: NULL (omitted). Range: Any quoted string. Bulk Loader utility: SQL Server. Enables the bcp_auditor utility to determine whether or not a bulk load was successful in cases where the database language is not English. For example, in German this parameter is "Zeilen kopiert", while in English it is "rows copied". If this parameter is not set correctly, the bcp_auditor utility reports bulk loader failures even though the bulk loader worked correctly. The bcp_auditor program also accepts the bcp_copied_msg value in binary format, expressed as the text "HEX_" followed by a string of hexadecimal values representing the values of each byte. This allows you to circumvent code-page-related problems, which can sometimes corrupt the bcp_copied_msg value when it is passed to the bcp_auditor program as a command-line argument. For example, the string "HEX_6C69676E657320636F7069E965732E" can represent the French message "lignes copiées." (The character "é" does not cause any problems when expressed as "E9".) |
| bcp_decimal_char | Default: -1 (this parameter is commented out). Range: a period (.) or a comma (,). SQL*Loader: Oracle Clients only. This parameter is normally auto-detected by the Client, which gets its value by reading the Oracle database's NLS_NUMERIC_CHARACTERS parameter. This method works correctly when the Client and the database reside on the same machine. However, if the Client is run outside the database machine, there is no guarantee that the Oracle Client software that the Databridge Client uses will have the same NLS settings as the target database. For example, it is possible for US Oracle Client software on the Client machine to connect to a Brazilian database. In this rather unusual situation you would have to set bcp_decimal_char to '.', as it would otherwise default to ',', which would lead to SQL*Loader errors for all records that have numeric data with a decimal point. |
| bcp_delim | Default: Tab (SQL Server). bcp utility: SQL Server. This parameter works as follows with the various Clients. Oracle: the delimiter is not configurable; the Client always uses the vertical bar as the delimiter. (The Client sets the bcp decimal character by reading the database's NLS parameters.) SQL Server: the bcp_delim string can be longer than one character, which is useful if the data contains alpha fields with TAB characters that need to be preserved. (A possible delimiter value in this case would be "|" or "||"; see inhibit_ctrl_chars.) |
| bulk_loader_path | Default: "" (omitted). Range: Valid Windows path with backslashes entered as "\\". All Clients using a bulk loader. This parameter provides an alternative to including the directory where the bulk loader resides in the system path, which is not always acceptable. |
| bcp_packet_size | Default: 0 (which omits the -a option). Range: 0 or 512–65535 (decimal or hexadecimal). bcp utility: SQL Server (remote servers only). Defines the network packet size value for the bcp utility. Use this parameter when you have wide tables. For wide tables, setting this parameter to a packet size larger than the bcp default (4096) can speed up loading the data into the table. When you specify a nonzero value, the Databridge Client adds the "-a pkt_size" option to the bcp command line in the .CMD scripts. If you omit this parameter, or if you specify a value of 0, the Databridge Client omits the "-a pkt_size" option and the bcp utility uses the default network packet size of 4096. |
| bcp_tempfile_dir | Default: "" (omitted). Range: Valid Windows path with backslashes entered as "\\". bcp utility: SQL Server. When running the SQL Server Client in a clustered Windows Server environment, the Client working directory resides on a shared network drive, which is slower than the local SSD disks and results in much slower clones. This parameter mitigates the problem by making the Client use a directory on a different disk to hold the bcp temporary files. |
| enable_parallel_mode | Default: False. Range: True or False. SQL*Loader: Oracle. Related parameters: inhibit_direct_mode. This parameter, which is only meaningful when DIRECT mode is enabled, causes the generate command to add the specification "parallel = true" to the SQL*Loader command line. Parallel mode makes SQL*Loader run faster at the expense of additional system resources. |
| inhibit_direct_mode | Default: False. Range: True or False. SQL*Loader: Oracle. Related parameters: enable_parallel_mode, sqlld_rows, and sqlld_bindsize. Controls whether the generate command adds the specification "direct=true" to the SQL*Loader command line. If your Oracle database is on the same machine as the Databridge Client, let this parameter assume its default value of False, as DIRECT mode is much faster than conventional mode. Conversely, if your Databridge Client accesses a remote Oracle database using SQL*Net between two dissimilar architectures (for example, Windows and UNIX), you must use conventional mode by setting this parameter to True. Setting inhibit_direct_mode to True inhibits the use of the direct=true option when invoking SQL*Loader in the command files. It is provided for your convenience so that you do not have to remove the string "direct=true" from every call on SQL*Loader. When you enable inhibit_direct_mode, we recommend that you increase the size of sqlld_bindsize for better performance. |
| max_errors | Default: 10. Range: 0–1000. Bulk loader utility and BCP API: All. Controls the bulk loader's tolerance for records that are discarded due to data errors. Use this parameter when you have many bulk loader errors. Increasing the maximum error count allows you to gather all the errors in one run rather than finding 10 errors and then having to start over again. For example, if you are having problems cloning a table, you may want to increase the count to 1000 or more to get all the errors in one cloning or process session. Knowing the type of errors helps you to solve the problems. The default value of 10 means that the bulk loader aborts after encountering 10 bad records. These bad records are written to the discard file, and information about the errors is written to the bulk loader log file. For information about these files, see Files related to SQL*Loader and Files related to BCP. |
| max_bcp_failures | Default: 5. Range: 0–1000. This parameter is designed to prevent the Client from blindly extracting all the selected data sets when the number of tables with load failures reaches the specified threshold. A value of 0 indicates that the Client should proceed regardless of the number of failed loads. |
| max_temp_storage | Default: 400 MB. Range: 10 MB–3 GB (or 0). Bulk loader utility: bcp for SQL Server or SQL*Loader for Oracle. Applies to: Windows Clients. This parameter activates the segmented bulk load feature, which allows you to specify the maximum amount of storage that the Client should use for temporary files. Because the Client cannot stop in the middle of a record, you can expect it to use slightly more storage than the value you specify; therefore, select a value less than the total amount of free space available on the disk. We recommend that you keep this value low, as there is no real advantage to attempting to load large tables all at once. If you set the value too high, the Client can run out of storage while it is writing temporary files. You can specify the max_temp_storage value as an integer with any of the following suffixes: K (or KB) for kilobytes (the default), M (or MB) for megabytes, or G (or GB) for gigabytes. The space between the number and the suffix is optional. NOTE: The valid range for this parameter is 10 MB to 3 GB (0xC0000000). You must specify values greater than 0x7FFFFFFF without a suffix. The value you enter for max_temp_storage must be a whole number. |
| sqlld_bindsize | Default: 64K bytes. Range: 0x10000–0x400000 (decimal or hexadecimal). SQL*Loader: Oracle. Related parameters: inhibit_direct_mode, sqlld_rows. Defines the value to be used for the BINDSIZE parameter for SQL*Loader operations. Increasing this value can speed up SQL*Loader operations when using conventional mode (for example, running remote to a database on a UNIX system). Use sqlld_rows and sqlld_bindsize when you are running the Client against a remote Oracle database running on UNIX or Windows. A larger bind size and row count can increase the speed of the load across Oracle Network Services at the expense of using more memory. |
| sqlld_rows | Default: 100. Range: 10–100,000 rows. SQL*Loader: Oracle. Related parameters: inhibit_direct_mode, sqlld_bindsize. Defines the value to be used for the ROWS specification for SQL*Loader operations. Use sqlld_rows and sqlld_bindsize when you are running the Client against a remote Oracle database running on UNIX or Windows. A larger bind size and row count can increase the speed of the load across Oracle Network Services at the expense of using more memory. |
| use_bcp | Default: True. Range: True or False. Bulk Loader utility: SQL Server. The SQL Server Client can operate with the bcp utility or the BCP API. This parameter determines the default value used by the define and redefine commands when setting the ds_options for the various data sets. We recommend using the bcp utility, as it is more reliable than the BCP API. If you have variable-format data sets with many record types, you should definitely set them to use the bcp utility; when using the BCP API, each table being loaded requires its own database connection, so you could otherwise end up with a large number of database connections during data extraction. |
| verify_bulk_load | Default: 1. Range: 0, 1, or 2. Bulk loader utility: All. Determines how the Databridge Client handles the results of the bulk loader operations during data extraction. |
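For example, for a Client that accesses a remote Oracle database across dissimilar architectures, the recommendation above translates to a fragment like this (the bind size and row count shown are illustrative):

[Bulk_Loader]
inhibit_direct_mode = true
sqlld_bindsize = 0x100000
sqlld_rows = 10000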
[params]
The [params] section of the configuration file consists of the following groups of command parameters:

| For this group | See this topic |
|---|---|
| define and redefine | Define and Redefine Command Parameters |
| process and clone | Process and Clone Command Parameters |
| Server options | Server Option Parameters |
| generate | Generate Command Parameters |
| display | Display Command Parameter |
| User scripts | User Scripts Parameters |
| External data translation DLL support | [Bulk_Loader] |
Define and Redefine Command Parameters
The following parameters are included in the [params] section of the Databridge Client configuration file. The parameters listed in this section affect only the define and redefine commands.
allow_nulls
Default: False
Range: True or False
Console: CUSTOMIZING (General > Allow NULLs)
The allow_nulls
parameter specifies whether or not the define
and
redefine
commands should set the DAOPT_Nulls_Allowed bit (value 1) in
the da_options
column of the DATAITEMS Client control table. This means
that both DMSII null data and data items that contain bad values
(excluding keys) will be stored as relational database NULLs.
You can set this parameter in the Client Configuration dialog box of the Administrative Console or by using data table customization scripts. To avoid storing NULL data as values that are possibly legitimate (0 or 999), keep this parameter set to True.
auto_mask_columns
Default: True
Range: True or False
Console: CUSTOMIZING (DMSII related parameters > Preserve DMSII MASKING option)
The parameter auto_mask_columns
specifies whether the Databridge Client
should automatically mask columns whose corresponding items in DMSII have
DATAMASK specifications in the DASDL.
automate_virtuals
Default: False
Range: True or False
Console: PROCESSING > Advanced (General > Automated virtual data sets)
This parameter enables code that automatically handles virtual data sets
that must be linked with their parent data sets using the
virtual_ds_num
, real_ds_num
, and real_ds_rectype
columns in the DATASETS
Client control table. These links are currently set up via user scripts.
When this option is enabled, you simply issue a process
command. When
issuing a clone
command, the virtual data sets do not have to be
explicitly specified on the command line.
bracket_tabnames
Default: False
Range: True or False
Applies to: SQL Server Client only
Console: CUSTOMIZING (General > Use brackets ...)
The bracket_tabnames parameter specifies whether the Databridge Client should allow data set names that are Transact-SQL reserved words to be used as table names, or rename them. If this parameter is set to True, all such table names are enclosed in square brackets in all SQL statements used by the Client. If the parameter is set to False, the Client renames them by appending "_x" to the data set name.
clr_dup_extr_recs
Default: True
Range: True or False
Console: CUSTOMIZING > Advanced (Global data set options > Clear duplicate ...)
This parameter defines the initial value of the DATASETS table
ds_options
bit DSOPT_Clrdup_Recs. When this bit is set, the Databridge Client
runs a script to remove false duplicate records after the index creation fails.
These duplicate records are caused by long cloning of an active DMSII database
where the extraction process can see the same record more than once if it moves.
In addition to indirectly affecting the process
and clone
commands, this
parameter indirectly affects the generate
command.
The ds_options
bit DSOPT_Clrdup_Recs causes the following actions:
- When set to False, the Databridge Client ignores false duplicate records. If there are false duplicate records, the index creation will fail. In this case you must manually remove the false duplicate records and recreate the index before the fixup phase can continue.
- When set to True, the generate command creates a script (named "script.clrduprecs.tabname") that removes records with duplicate key values. This script runs only if the create index step fails. After the duplicate records are deleted, the index creation and fixup phases continue as normal.
convert_ctrl_char
Default: False
Range: True or False
Related parameters: alpha_error_cutoff
, discard_data_errors
, display_bad_data
Console: PROCESSING > DMSII Data Error Handling (Character data error > Control character)
The convert_ctrl_char
parameter applies to DMSII data items of type ALPHA.
Note
- Do not set the convert_ctrl_char parameter to True unless you are absolutely certain that eliminating control characters will have no adverse effects on the data. For example, eliminating control characters can cause some fields to be misinterpreted.
- This parameter and the parameter inhibit_ctrl_chars are mutually exclusive. If you attempt to set them both to True, the configuration file scanner will generate an error.

Use this parameter as follows:

- Set convert_ctrl_char to True if you want the Databridge Client to replace all control characters in ALPHA data with spaces. Control characters in ALPHA data are usually the result of host applications whose record layouts are out-of-sync with the DMSII database.
- Set convert_ctrl_char to False if you want the Databridge Client to leave control characters unchanged. Depending on your setting for alpha_error_cutoff, the column that contains control characters may be set to NULL, but at least the problem field will be identified. Then, you can decide whether to set this parameter to True and ignore the bad data.
In summary, before you set this option to True, set alpha_error_cutoff
to a low value and set display_bad_data
to True to determine whether
or not it is safe to ignore the control characters.
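In a text configuration file, that diagnostic combination looks like this (the cutoff value is illustrative):

convert_ctrl_char = false
display_bad_data = true
alpha_error_cutoff = 2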
When using the Administrative Console, these two parameters are presented as 3 radio buttons ("Change to space", "Change to ?" and "Translate if possible"). This prevents you from setting both mutually exclusive parameters to True.
default_user_columns
Default: 0
Range: 0 - 16383 (Some bit combinations are not allowed.)
Console: CUSTOMIZING > User Columns Section Two
The default_user_columns parameter adds non-DMSII columns (user columns) to all the tables mapped from data sets in the Client database. You can add additional user columns to history tables using the dflt_history_columns parameter (see dflt_history_columns). You can also exclude certain user columns from secondary tables using the sec_tab_column_mask parameter (see sec_tab_column_mask).
When using the Administrative Console, use the checkboxes in the Primary Tables column to set the corresponding bits for the default_user_columns parameter.
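For example, to add the update_time (0x0002) and audit_filenum (0x0010) user columns to all primary tables, combine the mask values listed in the external_column table later in this section (a sketch):
default_user_columns = 0x0012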
dflt_history_columns
Default: 0
Range: 0 - 16383 (Some bit combinations are not allowed.)
Console: CUSTOMIZING > User Columns Section Two
The dflt_history_columns parameter adds more non-DMSII columns (user columns) to all history tables in the Client database. By default, history tables are created with three non-DMSII columns. The dflt_history_columns parameter is intended to simplify user scripts at sites where the same non-DMSII columns (user columns) are added to all (or most) history tables. When you use this parameter to add user columns to history tables, the specified non-DMSII columns are added to all history tables. If you do not want to add all of these columns to all history tables, you must use a user script to set the active column to 0 for the unwanted columns in the DATAITEMS Client control table.
For more information about history tables, see history_tables and ds_options in the DATASETS Client control table.
When using the Administrative Console, use the checkboxes in the History Tables column to enable the corresponding bits for the dflt_history_columns parameter.
enable_dms_links
Default: False
Range: True or False
Console: CUSTOMIZING (DMSII related parameters > Enable DMSII links)
This parameter must be set to True when running a define, redefine or the Administrative Console's Customize commands if you want to replicate the links in the DMSII database. In addition to setting this parameter to True, you must also enable DMSII link support in the Databridge Engine control file.
enable_extended_types
Default: False
Range: True or False
Applies to: Oracle Client only
Console: CUSTOMIZING > SQL Data Types
This parameter makes the Oracle Client read the Oracle database's max_string_size parameter and check whether it is set to "extended" (as opposed to "standard"). If it finds the parameter set to "extended", it allows varchar2 and raw columns to have a maximum length of 32K.
Using this parameter to ensure that DMSII ALPHA columns longer than 4000 characters do not get split is much more efficient than setting the use_clob parameter to True.
enable_dynamic_hist
Default: None
Range: True or False
Console: CUSTOMIZING > History Tables (Options > Enable dynamic history)
This parameter allows the user to add history tables without having to re-clone all the affected data sets. To do this, specify the default history columns (if any) using the dflt_history_columns configuration file parameter. Then, set the DSOPT_SaveUpdates (8) bit for all data sets for which history tables are to be kept, and run a redefine command with the -R option, forcing all data sets to be remapped. Finally, run a reorganize command, which creates the history tables and their indexes, as sketched below. The new history tables will populate when audit files are processed.
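A minimal dbutility sequence for the last two steps might look like the following sketch, where "datasource" is a placeholder for your data source name:
dbutility redefine -R datasource
dbutility reorganize datasource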
external_column[n]
Default: N/A
Range: N/A
Console: CUSTOMIZING > User Columns Section One
This parameter allows you to globally change the item_name, sql_type, or sql_length of the non-DMSII columns described in Numeric Date and Time in Non-Contiguous Columns. The syntax is as follows:
external_column[n] = ["name"[,sql_type[,sql_length]]]
Where | Is |
---|---|
n | The corresponding bit number (dms_subtype value) for the non-DMSII column. NOTE: The brackets and value are required syntax. |
name | Custom column name |
sql_type | An integer value that represents the internal code for the SQL type that you want to use. The Client only accepts data types that make sense for a particular column. For instance, you cannot set the data type for the AFN to bit or char, but you can set it to int or dec(10). For details, see DMSII and Relational Database Data Types. |
sql_length | A value that represents the length of the data type. Specify this value only if the data type requires it. If the data type does not have a length specification, specifying a value may cause an error. |
The following table shows allowable sql_type values for external_column.
DMS Subtype | Mask Value (hex) | Default Column Name | Allowable SQL Types (SQL Server) | Allowable SQL Types (Oracle) |
---|---|---|---|---|
1 | 0x0001 | update_type | tinyint, smallint, int, bigint | number(n) |
2 | 0x0002 | update_time | datetime, smalldatetime, datetime2 | date |
3 | 0x0004 | update_ts | timestamp | N/A |
4 | 0x0008 | audit_ts | datetime, smalldatetime | date, timestamp |
5 | 0x0010 | audit_filenum | smallint, int, dec(n), bigint | number(n) |
6 | 0x0020 | audit_block | int, dec(n), bigint | number(n) |
7 | 0x0040 | source_name | varchar(n), char(n) | varchar(n), char(n) |
8 | 0x0080 | source_id | tinyint, smallint, int, bigint | number(n) (where n >=3) |
9 | 0x0100 | my_id | int, dec(n), bigint | N/A |
10 | 0x0200 | deleted_record | int, bigint | number(n) (where n >=9) |
11 | 0x0400 | source_name | varchar(n), char(n) | varchar(n), char(n) |
12 | 0x0800 | source_id | tinyint, smallint, int, bigint | number(n) (where n >=3) |
13 | 0x1000 | audit_ts | datetime, smalldatetime, datetime2 | date, timestamp |
14 | 0x2000 | user_column1 | char(n), varchar(n), tinyint, smallint, int, float, datetime, dec(n), smalldatetime, datetime2, date, time | char(n), varchar(n), number(n), float, date |
15 | 0x4000 | sequence_no | int, bigint | number(n) (where n >=9) |
16 | 0x8000 | delete_sqno | smallint, int, bigint | number(n) (where n >=5) |
17 | 0x10000 | create_time | datetime, smalldatetime, datetime2 | date, timestamp |
18 | 0x20000 | user_column2 | char(n), varchar(n), tinyint, smallint, int, float, datetime, dec(n), smalldatetime, datetime2, date, time | char(n), varchar(n), number(n), float, date |
19 | 0x40000 | user_column3 | char(n), varchar(n), tinyint, smallint, int, float, datetime, dec(n), smalldatetime, datetime2, date, time | char(n), varchar(n), number(n), float, date |
20 | 0x80000 | user_column4 | char(n), varchar(n), tinyint, smallint, int, float, datetime, dec(n), smalldatetime, datetime2, date, time | char(n), varchar(n), number(n), float, date |
Note
For Oracle, if you choose the tinyint value for sql_type you get number(3); if you choose the smallint value you get number(5), and so on, as the data types in question are not defined for Oracle. Oracle has only one data type for integer values, number(n).
For example, the entry below causes the audit_filenum column to be renamed AFN (the double quotation marks are optional since no special characters are involved); the sql_type and sql_length remain unchanged.
external_column[5] = "AFN"
default_user_columns = 0x0010
In the example below, the data type of the audit_block column is changed to dec(12).
external_column[6] = ,11,12
extract_embedded
Default: False
Range: True or False
Console: CUSTOMIZING (DMSII related parameters > Extract embedded data sets)
Use the extract_embedded parameter when the DMSII INDEPENDENTTRANS option is reset. If INDEPENDENTTRANS is set, the extract_embedded parameter is not needed because the Databridge Client can clone and update embedded data sets.
When INDEPENDENTTRANS is reset, use this parameter as follows:
- Set extract_embedded to True if you want the Databridge Client to extract embedded data sets during cloning when INDEPENDENTTRANS is reset. However, the Databridge Client cannot apply fixups or updates to these extracted embedded data sets.
- Set extract_embedded to False if you want the Databridge Client to ignore embedded data sets.
flatten_all_occurs
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Table layout > Flatten all OCCURS)
This parameter simplifies writing user scripts when you want to flatten a lot of OCCURS clauses. Setting this parameter to True causes the Client to initialize the DIOPT_Flatten_Occurs bit to 1 in the di_options column of the DMS_ITEMS Client control table for all items that have an OCCURS clause, which avoids having to set the bit in user scripts. If you do not want to flatten certain OCCURS clauses, you can set the corresponding bit to 0 for those specific items by using customization user scripts or by using the Administrative Console's Customize command (see the Databridge Administrative Console Help).
force_aa_value_only
Default: 0
Range: 0-2
Console: CUSTOMIZING > Advanced (Global data set options > Force AA Values...)
When set to 1, this parameter globally sets the DSOPT_Use_AA_Only bit in the ds_options column for the DATASETS table entries that have valid AA Values or RSNs. When set to 2, this action is only performed for data sets that have RSNs, because AA Values aren't preserved when a garbage collection or structural reorganization occurs.
If you want to exclude certain data sets, you can set the DSOPT_Use_AA_Only bit to 0 by using user scripts or the Administrative Console's Customize command (see the Databridge Administrative Console Help).
history_tables
Default: 0
Range: 0-2
Console: CUSTOMIZING > History tables (Data set history tables)
This parameter is designed to simplify script writing. It allows you to make the define command globally set the DSOPT_Save_Updates and DSOPT_History_Only bits. A value of 0 indicates that neither bit should be set for data sets. A value of 1 indicates that the DSOPT_Save_Updates bit should be set for all data sets. Finally, a value of 2 indicates that the DSOPT_Save_Updates and the DSOPT_History_Only bits should both be set for all data sets.
inhibit_required_opt
Default: False
Range: True or False
Console: CUSTOMIZING (General > Ignore REQUIRED attribute)
The 6.6 Client honors the REQUIRED attribute of DMS items, which indicates that an item cannot be NULL because it is a member of a SET. This option tells the Client not to pay attention to the REQUIRED attribute and to handle things the way older Clients did. We already have provisions in place that inhibit data sources created by older Clients from honoring the REQUIRED attribute. Without these provisions, columns that have the REQUIRED attribute would be changed to NOT NULL, which would make them incompatible with the table layouts used by older Clients. This parameter allows such columns to have the NULL attribute so that items that have bad values can be stored as NULL, rather than as an artificial value used in place of NULL.
This parameter has no effect on items that are keys, which always have the NOT NULL attribute.
maximum_columns
Default: Dependent on the database
Console: CUSTOMIZING > Advanced (Table layout > Maximum columns ...)
The maximum_columns parameter enables you to reduce the column count at which a table split occurs because of the maximum column limitation of the relational database. For example, if you want to add a column containing the audit file timestamp to the first table of a split table, you can set the maximum_columns parameter to 1023 instead of 1024. By doing so, you avoid moving an item from a full table to a secondary table to make room for the new column. The table below shows the maximum columns and ranges for different relational databases.
Database | Default | Range |
---|---|---|
Oracle | 1000 | 1-1000 |
SQL Server | 1024 | 1-1024 |
min_varchar
Default: 4
Range: 0 to 255
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use varchar ...)
This parameter supplements the use_varchar configuration file parameter by adding the condition that the length must be at least equal to the value of this parameter. Setting this parameter to 4 forces columns whose data types would have been VARCHAR(1), VARCHAR(2), or VARCHAR(3) to instead be CHAR(1), CHAR(2), and CHAR(3) when use_varchar is set to True.
When using the Administrative Console, the Use varchar slider must be enabled before you can change the value of this parameter.
Note
Setting this parameter to a non-zero value when use_varchar is set to False has no effect.
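For example, the following settings map ALPHA items shorter than 4 characters to CHAR and all longer ALPHA items to VARCHAR (a sketch):
use_varchar = True
min_varchar = 4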
minimize_col_updates
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Global data set options > Update changed columns only)
The minimize_col_updates parameter specifies whether the define, redefine and the Administrative Console's Customize commands should set the DSOPT_Optimize_4_CDC bit in the ds_options column of the DATASETS table. This bit, when set, causes the Client to create update statements that only assign values to columns whose values have changed. To do this, stored procedures are abandoned in favor of pure SQL without the use of host variables. This slows down the Client considerably, but the overall process may ultimately take less time because SQL Server or Oracle replication sends significantly less data to the remote database.
See also the enable_minimized_col parameter, which allows the user to disable this option without having to run a redefine command.
Caution
Using this parameter will significantly slow down update processing by the Client. If you are replicating your relational database, enabling this feature may provide some benefits if replication is very slow.
miser_database
Default: False
Range: True or False
Related parameters: automate_virtuals, use_nullable_dates
Console: CUSTOMIZING (DMSII related parameters > MISER database)
This parameter is for MISER database sites. When set to True, it sets the default date format to be a MISER date. It also sets the following parameters (required for MISER sites) to True, if they aren't already set to True:
- automate_virtuals
- flatten_all_occurs
- use_nullable_dates
new_history_types
Default: False
Range: True or False
In older Clients, history tables did not handle key changes correctly: a key change ended up as a DELETE followed by an INSERT with no link between the two records. This parameter allows sites to continue working the way they did until their applications are updated to handle the new update types used in history tables for records that undergo a key change. When this parameter is set to True, the Client uses MODIFY_BI (6) and MODIFY_AI (7) instead of DELETE (2) and INSERT (1). The MODIFY_AI record will always follow the MODIFY_BI record, so if you are using IDENTITY columns (SQL Server and Oracle) or SERIAL/BIGSERIAL columns (PostgreSQL) it is fairly simple to find the matching records. We recommend that you use these columns instead of the update_time and sequence_no columns, which are the default user columns for sequencing history table records.
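As an illustration, assuming a SQL Server history table named customers_h with an IDENTITY column named seq_id, the default update_type user column, and paired records that receive consecutive identity values (these names and the adjacency are assumptions, not product guarantees), a query like the following would match the before and after images:
select bi.seq_id, ai.seq_id
from customers_h bi join customers_h ai on ai.seq_id = bi.seq_id + 1
where bi.update_type = 6 and ai.update_type = 7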
optimize_updates
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Global data set options > Optimize SQL updates)
The optimize_updates parameter specifies whether the define, redefine and the Administrative Console's Customize commands should set the DSOPT_Use_bi_ai bit (1) (in the ds_options column of the DATASETS table) for data sets that have items with OCCURS clauses that are not flattened. The Client uses this bit, which you can modify using user scripts, to determine if it should request the Databridge Engine to send all updates for the data set as BI/AI pairs. The Databridge Client then compares the before and after images to determine if an update has any effect, and suppresses all redundant updates. Depending on the data, this can greatly increase performance when you do not flatten OCCURS clauses. See the enable_optimized_sql parameter, which allows the user to disable this option without having to run a redefine command.
read_null_records
Default: True
Range: True or False
Console: CUSTOMIZING (DMSII related parameters > Read NULL record values)
This parameter determines whether or not the Client should request the NULL VALUES for data set records from the Databridge Engine during the define, redefine and the Administrative Console's Customize commands. The NULL VALUES are then stored in the binary file "datasource_NullRec.dat" from which they are retrieved at the beginning of process or clone commands. When this parameter is enabled, the testing for NULL is more accurate; however, this feature generates a small amount of overhead, particularly with a large database where these records use more memory. Note that this parameter does not imply that NULLs are allowed in the relational database; this is still specified using the allow_nulls parameter.
reorg_batch_size
Default: 50000
Range: 5000 - 100000
Console: CUSTOMIZING > Advanced (Table reorganization options > Reorg command ...)
This parameter determines the size of the transactions that the Client uses during a reorganize command to set the value of newly-added columns to their initial value, as defined in the DASDL. The redefine command creates a reorg script that uses a stored procedure to do the updates in batches that are executed as transactions. For a large table, this process can take quite a long time, but it does not run the database out of log space. Consider using the internal clone option instead (see use_internal_clone).
sec_tab_column_mask
Default: 0
Range: 0 - 16383
Console: CUSTOMIZING > User Columns Section Two
The sec_tab_column_mask parameter eliminates a set of user columns from secondary tables without having to write extensive user scripts to set the active column in DATAITEMS to 0 for the unwanted columns. To remove those columns, the Client removes the bits you specified in sec_tab_column_mask from the value represented in external_columns and uses the resulting value to determine which user columns to add to secondary tables during define, redefine and the Administrative Console's Customize commands.
This parameter is intended to allow adding the audit timestamp, the audit file number, or the audit block to primary tables without adding them to secondary tables. The default value of this parameter is 0, which indicates that no user columns should be removed from secondary tables.
When using the Administrative Console, uncheck the checkboxes in the Secondary Tables column to set the corresponding bits in the sec_tab_column_mask parameter.
split_varfmt_dataset
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Global data set options > Split variable format data set)
This parameter makes the define, redefine and the Administrative Console's Customize commands globally set the DSOPT_Split_Varfmt_ds bit in the ds_options column of the DATASETS table. See Split Variable Format Data Sets Option.
strip_ds_prefixes
Default: False
Range: True or False
Console: CUSTOMIZING (General > Strip Data Set prefixes)
This parameter makes the define, redefine and the Administrative Console's Customize commands set the item_name_prefix column in the DATASETS table to the data set name. This is useful when all DMSII data item names use the data set name followed by a dash as a common prefix. The strip_ds_prefixes parameter provides a quick way of stripping those common prefixes without writing any user scripts or using the Administrative Console's Customize command (as renaming every column requires a lot of work).
If the prefix is an abbreviated form of the data set name (e.g. SVHIST instead of SV-HISTORY), use a user script or the Administrative Console's Customize command to set the item_name_prefix column in the DATASETS table to this value (do not include the trailing dash).
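A user script for the SV-HISTORY example might contain a line like the following (a sketch; verify the column names against your DATASETS Client control table layout):
update DATASETS set item_name_prefix = 'SVHIST' where dataset_name = 'SV-HISTORY'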
suppress_dup_warnings
Default: False
Range: True or False
Console: PROCESSING > DMSII Data Error Handling (General error handling > Suppress duplicate ...)
The suppress_dup_warnings parameter controls whether or not duplicate insert and failed update warnings are displayed during update processing. The bit DSOPT_Ignore_Dups (32) in the ds_options column of the DATASETS table can be used instead when you want to apply this only for certain data sets.
suppress_new_columns
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Global Data Set Options > Ignore new columns)
The suppress_new_columns parameter indicates that the redefine and the Administrative Console's Customize commands set the active columns to 0 in the DATAITEMS and DATATABLES entries resulting from DMSII reorganizations that add DMSII items. The suppress_new_columns parameter is useful when you want to keep your relational database tables intact after a DMSII reorganization, particularly if the added columns would cause existing applications to fail. If this is the case, set suppress_new_columns to True.
suppress_new_datasets
Default: True
Range: True or False
Console: CUSTOMIZING (General > Ignore new data sets)
This parameter indicates whether or not the Client maps new data sets created during a DMSII reorganization. If this parameter is set to True, new data sets get their active columns set to 0 in the DATASETS table, which causes them not to get mapped to the relational database.
If you decide that you want to replicate a new data set after running the redefine command, you must set the active column to 1 for the data set in the DATASETS Client control table. You can do this by updating the user script "script.user_datasets.datasource" in the scripts directory, as sketched below. You then need to run a second redefine command to get the data set mapped. When you run back-to-back redefine commands, the Client automatically reloads the control tables from the unload file it creates at the start of the first redefine command.
use_bigint
Default: False
Range: True or False
Recommended Value: True
Applies to: SQL Server and PostgreSQL Clients
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use bigint ...)
This parameter indicates that the Databridge Client should map DMSII numeric data that is too large to fit in the int data type (32-bit integer) to bigint (64-bit integer). If this parameter is set to False, such data items are mapped to decimal(n). Items that are too large to fit in a bigint are still mapped to decimal(n). This parameter makes it easy to standardize the Client to use bigint instead of decimal(n), without having to write user scripts. It also allows you to maintain backward compatibility with older databases that do not use bigint, by setting this parameter to False.
use_binary_aa
Default: False
Range: True or False
Applies to: SQL Server and Oracle Clients
Console: CUSTOMIZING (AA Values and RSNs > Use binary AA Values)
This parameter maps AA Values, Parent_AA Values, RSNs (including Visible RSNs) and DMSII Links to binary(6) or raw(6) instead of char(12) to reduce their storage requirements by half.
AA Values (and RSNs), which are 48-bit values, are stored in 6 bytes when using binary data, as opposed to 12 bytes when using character data.
The data types used for these columns depend on the value of the sql_type column in the DATAITEMS Client control table. The purpose of this parameter is to define how these items are to be mapped by default to avoid changing the sql_type of all such columns.
use_clob
Default: False
Range: True or False
Applies to: Oracle Client only
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use clob ...)
This parameter indicates that DMSII ALPHA data that is too large to fit in a varchar2 column, which is limited to 4000 characters, should be mapped to a data type of clob instead of being truncated or split into two columns.
use_clustered_index
Default: False for index. True for primary key.
Range: True or False
Applies to: SQL Server Client only
Console: CUSTOMIZING > Advanced (Indexes > Use clustered indexes)
The use_clustered_index parameter applies to all data tables. You can override its setting on a table-by-table basis via the DTOPT_Clustered_Index bit in the DATATABLES control table's dt_options column.
Use this parameter as follows:
- Set use_clustered_index to True if you want a clustered index for all or most tables.
  - For all tables, just set this parameter to True.
  - For most tables, set this parameter to True and then reset the DTOPT_Clustered_Index bit in dt_options for those tables for which you do not want a clustered index.
- Set use_clustered_index to False if you want no clustered indexes on any tables, or if you want clustered indexes on only a few tables.
  - For no clustered index on any tables, just set this parameter to False.
  - For clustered indexes on only a few tables, set this parameter to False and then set the DTOPT_Clustered_Index bit in dt_options for those tables for which you do want a clustered index.
To reset or set DTOPT_Clustered_Index, see dt_options in DATATABLES. Typically you would do this via user scripts.
use_column_prefixes
Default: False
Range: True or False
Console: CUSTOMIZING (General > Use column prefixes)
This parameter extends the tab_name_prefix specified in the DATASOURCES Client control table to the columns of the user tables. If the tab_name_prefix column of the data source is blank, this parameter has no effect. For more details, see DATASOURCES Client Control Table.
use_date
Default: False
Range: True or False
Related parameters: use_datetime2
Applies to: SQL Server and PostgreSQL Clients
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use date ...)
Use this parameter to make the define, redefine and the Administrative Console's Customize commands interpret the DIOPT_Clone_as_Date bit in the di_options column of the DMS_ITEMS table as a request to use a data type of date instead of smalldatetime. This eliminates the need to set the di_options bit DIOPT_Use_Date for every item that is to be mapped to a data type of date.
use_datetime2
Default: False
Range: True or False
Related parameters: use_date
Applies to: SQL Server and PostgreSQL Clients
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use datetime2 ...)
Use this parameter to make the define, redefine and the Administrative Console's Customize commands interpret the DIOPT_Use_LongDate bit in the di_options column of the DMS_ITEMS table as a request to use a data type of datetime2 instead of datetime. This eliminates the need to set the di_options bit DIOPT_Use_LongDate for every item that is to be mapped to a data type of datetime2.
use_dbconfig
Default: False (hidden when False)
Range: True or False
Applies to: Clients launched by the service
Console: Automatically set for a new data source when you use the Customize command
A successful run of the upgrade command of the dbscriptfixup program automatically sets this parameter to True. You can later set it to False if you want to revert to using user scripts by running a dbutility import command. You should first run a Create User Scripts command from the data source's Advanced menu, or run a dbutility createscripts command, to save all your customizations in user scripts. We recommend saving the customizations in user scripts in case you have to drop and recreate the data source. The define command always runs user scripts regardless of the setting of the use_dbconfig parameter, while the redefine command only does this when the use_dbconfig parameter is set to False, or if you use the -r option to toggle this parameter so it is treated as False when set.
Caution
Do not change the use_dbconfig parameter to True when the data source was not created using the Customize command. If you created the data source using the Define/Redefine command, you will be able to use the Customize command, unless user scripts were involved in the underlying define command. If you are not sure, run the dbscriptfixup program's upgrade command, which checks for user scripts, fixes up the control tables, and automatically sets this parameter to True when the command is successful.
use_decimal_aa
Default: False
Range: True or False
Console: CUSTOMIZING (AA Values and RSNs > Use decimal AA Values)
Note
This parameter is mutually exclusive with the use_binary_aa parameter.
This parameter maps AA Values, Parent_AA Values, RSNs (including Visible RSNs) and DMSII LINKS to a numeric data type instead of char(12). The data type varies from database to database. In the case of SQL Server, bigint is used and in the case of Oracle, number(15) is used.
use_internal_clone
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Table reorganization options > Use internal clone ... )
This parameter affects the redefine and reorganize commands. Instead of using ALTER commands to add, delete or modify columns of tables, the Client uses a combination of scripts and table renaming commands to create new copies of the tables with the new layouts. The Client copies the data using SELECT INTO in the case of SQL Server and CTAS (Create Table As Select) in the case of Oracle. This operation works like the bulk loader and is faster than using ALTER and UPDATE commands; more importantly, the operation is not logged. The only drawback of this method is that it requires sufficient free disk storage to hold a second copy of the table for the duration of the operation.
use_nullable_dates
Default: False
Range: True or False
Console: CUSTOMIZING (General > Allow NULL dates)
This parameter forces all MISER dates, including keys, to have the DAOPT_Nulls_Allowed bit (value 1) set in the da_options column of the DATAITEMS Client control table. This parameter should only be set to True if you are using a MISER database. Only one MISER date is allowed as a key. The Client generates custom stored procedures that handle the cases where the MISER date that is part of the index is NULL.
use_primary_key
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Indexes > Use Primary Keys)
This parameter tells the Databridge Client to create a primary key instead of a unique index for all tables. You can override its setting on a table-by-table basis via the DTOPT_Primary_Key bit in the DATATABLES Client control table's dt_options column.
- Set use_primary_key to True if you want a primary key for all or most tables.
  - For all tables, just set this parameter to True.
  - For most tables, set this parameter to True and then reset DTOPT_Primary_Key for those tables for which you do not want a primary key.
- Set use_primary_key to False if you want no primary keys on any tables, or if you want primary keys on only a few tables.
  - For no primary key on any tables, just set this parameter to False.
  - For primary keys on only a few tables, set this parameter to False and then set DTOPT_Primary_Key for those tables for which you do want a primary key.
To reset or set DTOPT_Primary_Key, see dt_options in DATATABLES. Typically you would do this via user scripts.
use_stored_procs
Default: False
Range: True or False
Applies to: SQL Server and Oracle Clients
Console: CUSTOMIZING > Advanced (Global Data Set Options > Use stored procedures in updates)
When set to False, this parameter makes the process and clone commands generate actual SQL statements instead of stored procedure calls to perform updates. The Client still uses host variables, as was the case with stored procedure calls. Executing the SQL directly eliminates some overhead and makes processing the updates faster. If you change this parameter, you must propagate the change to the ds_options columns of the DATASETS Client control table. The easiest and safest way to do this is to run a redefine command using the -R option (when using the Administrative Console, click Advanced > Redefine with Options for the data source and then enable the "All Data Sets" slider). The redefine command will ask you to run a reorganize command, which creates a new set of scripts for creating the tables. It also refreshes the stored procedures for all data sets by dropping them if they exist and then creating them if they are needed.
use_time
Default: False
Range: True or False
Applies to: SQL Server and PostgreSQL Clients
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use time ...)
Use this parameter to make the define, redefine and the Administrative Console's Customize commands interpret the DIOPT_Clone_as_Time bit in the di_options column of the DMS_ITEMS table as a request to use a data type of time instead of a numeric time. This eliminates the need to set the di_options bit DIOPT_Use_Time in the DMS_ITEMS Client control table for every item that is to be mapped to a data type of time.
use_varchar
Default: True
Range: True or False
Console: CUSTOMIZING > SQL Data Types (Default SQL data types > Use varchar ...)
Set use_varchar to True to cause the define, redefine and the Administrative Console's Customize commands to map DMSII ALPHA data to varchar (Microsoft SQL Server) or varchar2 (Oracle) instead of char.
Note
The Databridge Client suppresses trailing blanks from all character data constructed from DMSII ALPHA data.
Process and Clone Command Parameters
The following parameters are included in the [params] section of the Databridge Client configuration file. The parameters listed in this section affect only the process and clone commands.
alpha_error_cutoff
Default: 10
Range: 0 - 100
Related parameters: discard_data_errors, display_bad_data
Console: PROCESSING > DMSII Data Error Handling (Character data error > Set item to NULL ...)
This parameter specifies the percentage of data errors in any given ALPHA item that are tolerated before the field is declared bad and treated as NULL (or simulated NULL if the column does not allow NULLS). The default value for this parameter is 10 (10%); the allowable values are in the range 0 (fail on first error) to 100 (ignore all errors).
aux_stmts
Default: 100
Range: 0 - 200
Console: PROCESSING (General > Number of auxiliary (ODBC/OCI) statements ...)
Use the aux_stmts parameter to set the number of database API (that is, ODBC or OCI) STMT structures that can be assigned to individual SQL statements. Using multiple database API STMT (statement) structures allows SQL statements to be parsed once and executed multiple times, provided the STMT structure is not reassigned to hold another SQL statement. Increasing the number of database API statements significantly improves processing time, if your system has enough memory.
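For example, to allow up to 150 cached statements (a sketch; the appropriate value depends on available memory):
aux_stmts = 150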
Note
When using the Oracle Client, make sure that the open_cursors
parameter defined in the database initialization file for the Oracle instance ("init SID.ora", where SID is the name of the instance) is set to a high enough value.
batch_job_period
Default: 00:00, 00:00 (This parameter is commented out.)
Range: 00:00 to 24:00 (The two time values cannot be equal.)
Console: PROCESSING > Engine and Enterprise Server (Batch commit parameters > Batch job ...)
The batch_job_period parameter specifies the block of time during which batch jobs typically run. For example, "batch_job_period = 22:00, 01:00" indicates that batch jobs run between 10:00 pm and 1:00 am. The syntax for the four commit checkpoint parameters that can be overridden by the Client was modified to allow an optional second value to be specified. The second value represents the alternate value to be used during the batch period. The Client was then modified to implement the automatic switching of commit parameters between the two periods. The switching is based on the value of the audit timestamp rather than the time when the Client is run. Commit parameters that use low values do not work well for batch jobs, but they might work well for on-line periods to keep the lag time low. This enhancement, which was added to Databridge 7.1, allows you to optimize performance for both periods by picking appropriate commit parameters for each period.
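For example, the following sketch uses small commit increments during the on-line day and much larger ones for an overnight batch window (the values are illustrative):
batch_job_period = 22:00, 01:00
commit_update_inc = 1000, 20000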
century_break
Default: 50
Range: 0 - 99 or -1
Console: PROCESSING > Date and Time (Date parameters > Century break)
Values for the dms_subtype
column in the DMS_ITEMS Client control table that are in the 30s, 60s, and 90s have 2-digit years (yy) which represent dates in the 20th and 21st centuries.
The century_break parameter is used to determine the century for a 2-digit year. For example, if this parameter is set to 50, values < 50 are 21st century years (20yy); values >= 50 are 20th century years (19yy).
When the century_break value is set to -1, the century break value is dynamically calculated based on the current year, giving the two-digit years a range of "current_year - 49" to "current_year + 50". In the case of the year 2021, this range is 1972 to 2071.
You can find the DMSII date formats that are affected by the century_break parameter in Decoding DMSII Dates, Times, and Date/Times.
commit_absn_inc
Default: -1 (This parameter is commented out.)
Range: 0 - 200,000
Related parameters: commit_update_inc, commit_time_inc, commit_txn_inc
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters)
The commit_absn_inc parameter allows the Databridge Client to override the Databridge Engine CHECKPOINT CLIENT EVERY nnn AUDIT BLOCKS parameter setting. This parameter causes the Databridge Engine to generate a commit at the next quiet point after nnn audit blocks have been processed since the last commit. This parameter determines one of many conditions under which Databridge Engine generates a commit.
When the commit_absn_inc parameter is not included in the configuration file, or it is commented out, the Databridge Client uses the default value of -1. This value indicates that the Client won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is 100). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
A value of 0 disables the use of this parameter by Databridge Engine. A value that exceeds the value specified in Databridge Engine control file is ignored.
When the batch_job_period parameter is enabled, you need to add a second value, preceded by a comma, to specify the alternate value to be used when the Client is processing audit blocks that were created during the batch period. When using the Administrative Console, the second value is provided in the Batch commit parameters group, after enabling the Batch job slider and entering the time interval for the batch job period.
If commit_absn_inc, commit_update_inc, commit_time_inc, and commit_txn_inc are specified, Databridge Engine commits at the next quiet point after one or more of the conditions are satisfied.
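For example, with batch_job_period enabled, the following sketch commits every 50 audit blocks during on-line periods and every 500 during the batch window (the values are illustrative):
commit_absn_inc = 50, 500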
commit_idle_database
Default: -1 (This parameter is commented out.)
Range: True or False
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters > commit during idle ...)
This parameter allows the Client to override the COMMIT DURING IDLE DATABASE parameter in the Databridge Engine's control file. It makes the Databridge Engine commit at the next quiet point when it encounters an update while the database is idle.
commit_longtrans
Default: -1 (This parameter is commented out.)
Range: True or False
Related parameters: commit_absn_inc, commit_update_inc, commit_time_inc, commit_txn_inc
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters)
Warning
Setting this parameter to True (that is, overriding the Databridge Engine CHECKPOINT LONG TRANSACTIONS parameter) can result in problems and is therefore not recommended. By default, this parameter is commented out.
This parameter determines one of many conditions under which Databridge Engine should generate a commit. When this value is not specified, as in the case of the default setting, Databridge Client uses an internal value of -1. This value indicates that it won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is False). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
A value of 0 disables the use of this parameter by Databridge Engine. A value that exceeds the value specified in the Databridge Engine control file is ignored.
commit_time_inc
Default: -1 (This parameter is commented out.)
Range: 0 - 300 seconds
Related parameters: commit_absn_inc, commit_update_inc, commit_txn_inc
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters)
The commit_time_inc parameter allows the Databridge Client to override the Databridge Engine CHECKPOINT CLIENT EVERY n SECONDS parameter setting by causing Databridge Engine to generate a commit at the next quiet point after n seconds have elapsed in the current transaction. This parameter determines one of many conditions under which Databridge Engine should generate a commit.
When the commit_time_inc parameter is not included in the configuration file, or it is commented out, the Databridge Client uses the default value of -1. This value indicates that it won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is 0). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
A value of 0 disables the use of this parameter by Databridge Engine. A value that exceeds the value specified in the Databridge Engine control file is ignored.
When the batch_job_period parameter is enabled, you need to add a second value, preceded by a comma, to specify the alternate value to be used when the Client is processing audit blocks that were created during the batch period. When using the Administrative Console, the second value is provided in the "Batch commit parameters" group, after enabling the "Batch job" slider.
If commit_absn_inc, commit_update_inc, commit_time_inc, and commit_txn_inc are specified, Databridge Engine commits at the next quiet point after one or more of these conditions are satisfied.
commit_txn_inc
Default: -1 (This parameter is commented out.)
Range: 0 - 200,000
Related parameters: commit_absn_inc, commit_update_inc, commit_time_inc
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters)
The commit_txn_inc parameter allows the Databridge Client to override the Databridge Engine CHECKPOINT CLIENT EVERY n TRANSACTIONS parameter setting by causing Databridge Engine to generate a commit at the next quiet point after n transaction groups have been processed. This parameter determines one of many conditions under which Databridge Engine should generate a commit.
When the commit_txn_inc parameter is not included in the configuration file, or it is commented out, the Databridge Client uses the default value of -1. This value indicates that it won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is 0). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
A value of 0 disables the use of this parameter by Databridge Engine. A value that exceeds the value specified in the Databridge Engine control file is ignored.
When the batch_job_period parameter is enabled, you need to add a second value, preceded by a comma, to specify the alternate value to be used when the Client is processing audit blocks that were created during the batch period. When using the Administrative Console, the second value is provided in the "Batch commit parameters" group, after enabling the "Batch job" slider.
If commit_absn_inc, commit_update_inc, commit_time_inc, and commit_txn_inc are specified, Databridge Engine commits at the next quiet point after one or more of these conditions are satisfied.
commit_update_inc
Default: -1 (This parameter is commented out.)
Range: 0 - 200,000
Related parameters: commit_absn_inc, commit_time_inc, commit_txn_inc
Console: PROCESSING > Engine and Enterprise Server (COMMIT parameters)
The commit_update_inc parameter allows the Databridge Client to override the Databridge Engine CHECKPOINT CLIENT EVERY nnn UPDATE RECORDS parameter setting. It does this by causing Databridge Engine to generate a commit at the next quiet point after nnn updates have been sent to the Databridge Client. This parameter determines one of many conditions under which Databridge Engine should generate a commit.
When the commit_update_inc parameter is not included in the configuration file, or it is commented out, the Databridge Client uses the default value of -1. This value indicates that it won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is 1000). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
A value of 0 disables the use of this parameter by Databridge Engine. A value that exceeds the value specified in the Databridge Engine control file is ignored.
When the batch_job_period parameter is enabled, you need to add a second value, preceded by a comma, to specify the alternate value to be used when the Client is processing audit blocks that were created during the batch period. When using the Administrative Console, the second value is provided in the "Batch commit parameters" group, after enabling the "Batch job" slider.
If commit_absn_inc, commit_update_inc, commit_time_inc, and commit_txn_inc are specified, Databridge Engine commits at the next quiet point after one or more of these conditions are satisfied.
controlled_execution
Default: False
Range: True or False
Related command-line option: -o
Related parameters: min_check_time
Applies to: Command-line Client (dbutility) only
Console: N/A
Note
This parameter is only used by the command-line Client dbutility. The blackout_period parameter in the scheduling section of the configuration file renders this method obsolete.
The controlled_execution parameter forces the Client to check the values of the stop_time and end_stop_time columns of the DATASOURCES table. These columns enable an application external to Databridge to specify a block of time during which Databridge Client operations are disallowed. If the Databridge Client determines that this period of time exists, update processing is stopped. Any attempts you make to restart the Databridge Client also fail until the blackout period is over or the stop_time and end_stop_time columns are set to 0.
convert_reversals
Default: -1 (This parameter is commented out.)
Range: True or False
Console: PROCESSING > Engine and Enterprise Server (General > Convert reversals ...)
The convert_reversals parameter allows the Client to override the Databridge Engine control file parameter CONVERT REVERSALS. Refer to the Databridge Host Administrator Guide for more details on this parameter. When this value is not specified, as in the case of the default setting, Databridge Client uses an internal value of -1. This value indicates that it won't attempt to override the settings for the corresponding Databridge Engine parameter (whose default value is False). The value -1 is not a valid setting, per se, and will result in a "value out of range" error. Comment the line out instead by adding a ";" at the start of the line.
correct_bad_days
Default: 0
Range: -1 to 2
Console: PROCESSING > Date and Time (Date parameters > Correct invalid date values)
The correct_bad_days parameter specifies whether the Databridge Client should treat a DMSII date with a bad day (or month) value as an error or attempt to correct it by setting the value to the last day for the given month and year.
This parameter does not apply in the following circumstances:
- Dates whose day values are greater than 31 (unless the parameter is set to 2)
- DMSII Julian dates (dms_subtype values 17, 18, 27, 28, 37, 38)
- MISER dates, Linc dates, DMSII dates and DMSII timestamps
Set this parameter as follows:
- Set correct_bad_days to 1 if you want the Databridge Client to set bad DMSII dates to the last day for the given month. In this case, a bad date would be February 29, 2002, because 2002 is not a leap year. The Databridge Client would correct this date to February 28, 2002. Likewise, a date of September 31 would be corrected to September 30, regardless of the year, because September always has 30 days. A day value greater than 31 is not corrected in this case. However, a day value of 0 is always silently changed to 1, regardless of the setting of this parameter.
- Set correct_bad_days to 2 if you want the Databridge Client to perform the following corrections in addition to the ones for the case where correct_bad_days is set to 1: day values greater than 31 are set to the last legal day of the month, month values greater than 12 are set to 12, and a month value of 0 is set to 1.
- Set correct_bad_days to 0 if you want the Databridge Client to store bad dates as NULL. If the DAOPT_Nulls_Allowed bit in the da_options column of the corresponding DATAITEMS entry is not set, the bad date is stored as 1/1/1900 in the SQL Server Client and 1/1/0001 in the Oracle Client.
- Set correct_bad_days to -1 if you want the Databridge Client to store bad dates (including dates with a day value of 0, which normally gets changed to 1) as NULL. If the DAOPT_Nulls_Allowed bit in the da_options column of the corresponding DATAITEMS entry is not set, the bad date is stored as 1/1/1900 in the SQL Server Client and 1/1/0001 in the Oracle Client.
dbe_dflt_origin
Default: direct
Range: direct, indirect, cache
Console: PROCESSING > Engine and Enterprise Server (General)
The dbe_dflt_origin parameter specifies the expected origin for Enterprise Server audit files during normal operations. The Client issues a WARNING whenever it starts processing a new audit file if Enterprise Server reports a different origin.
defer_fixup_phase
Default: False
Range: True or False
Console: PROCESSING > Stop Conditions
The defer_fixup_phase parameter prevents the Databridge Client from entering the fixup phase, which is deferred to the next process command.
Note
Version 6.1 and later Databridge Clients do not support parallel clones, which was one of the reasons for the existence of this parameter.
discard_data_errors
Default: False
Range: True or False
Related parameters: alpha_error_cutoff, display_bad_data
Console: PROCESSING > DMSII Data Error Handling (General error handling > Discard records ...)
The discard_data_errors parameter instructs the Client to write all records with data errors to the discard file tablename.bad, located in the discards subdirectory of the working directory. If you set this parameter to False, the Client loads the record into the database with the affected column set to NULL or with the affected characters changed to question marks (?). Setting this parameter to True forces the alpha_error_cutoff parameter to 0 so that no errors are tolerated before the Client declares the field bad. For more information, see alpha_error_cutoff.
display_bad_data
Default: False
Range: True or False
Related parameters: alpha_error_cutoff, discard_data_errors
Console: PROCESSING > DMSII Data Error Handling (General error handling > Display data errors ...)
The display_bad_data parameter is a debugging aid for users that encounter many data errors. Enabling this parameter makes the Databridge Client display the raw DMSII data in a field that is found to have a data error. This output, which immediately follows the data error messages, is suppressed whenever the number of errors exceeds the maximum number of errors to be logged (as defined by the error_display_limits configuration file parameter).
enable_af_stats
Default: False
Range: True or False
Console: PROCESSING > Statistics (Audit file statistics)
This parameter enables the writing of audit file statistics to the AF_STATS Client control table. These statistics are the incremental statistics that are written to the log file every time the Client starts processing a new audit file. The only difference is that when the processing of an audit file spans multiple Client runs, the statistics are combined into a single record that spans the audit file. See the AF_STATS section in Chapter 8, "Databridge Control Tables," for a complete description of the columns of the AF_STATS Client control table.
enable_doc_records
Default: False
Range: True or False
Console: PROCESSING > Engine and Enterprise Server (General)
The enable_doc_records parameter requests DOC records from the Databridge Engine. Enable this parameter only when you are troubleshooting Databridge Engine problems. These records help diagnose the various block types that the Engine encounters while processing audit files.
Note
These records are recorded in the trace file only when full debugging is enabled (-d) or if you enable the DOC Record Tracing option as described in the section DOC Record Trace.
enable_ff_padding
Default: False
Range: True or False
Console: PROCESSING > DMSII Data Error Handling (Character Data > Enable High Value Padding )
This parameter enables an option that lets you mark items as padded with high values to achieve left justification. This parameter applies to ALPHA items and unsigned numeric items that are stored as ALPHA data. When set to False, this parameter does not appear in the exported configuration file.
enable_minimized_col
Default: True
Range: True or False
Console: PROCESSING > Advanced (General > Override changed columns only option)
When the minimize_col_updates parameter is applied during the define, redefine or the Administrative Console's Customize commands, the DSOPT_Optimize_4_CDC bit is set in all data sets. Set this parameter to False to override the DSOPT_Optimize_4_CDC bit during the change tracking phase and avoid having to run a redefine command to clear the DSOPT_Optimize_4_CDC bit.
Minimized SQL is not generally very useful, as it slows down update processing by not using host variables. It might be useful when the relational database is replicated to a remote secondary database. This option reduces the size of the changes by only updating columns that are changed, rather than using canned SQL that updates all columns regardless of whether they were changed or not.
enable_optimized_sql
Default: True
Range: True or False
Console: PROCESSING > Advanced (General > Override optimized SQL updates option)
When the optimize_updates parameter is applied during the define, redefine or the Administrative Console's Customize commands, the DSOPT_Use_bi_ai bit is set in all data sets containing secondary OCCURS tables. Set this parameter to False to override the DSOPT_Use_bi_ai bit during the change tracking phase and avoid having to run a redefine command to clear the DSOPT_Use_bi_ai bit.
The DSOPT_Use_bi_ai bit is documented under ds_options in the DATASETS Client control table.
engine_workers
Default: -1 (This parameter is commented out.)
Range: 1-10
Console: PROCESSING > Engine and Enterprise Server (General)
The engine_workers parameter allows the Databridge Client to override the Databridge Engine WORKERS = n parameter setting, which controls the number of extract workers Databridge Engine can use during the data extraction phase. This value can only be lower than the host parameter (in DATA/ENGINE/CONTROL), never higher. The default value of -1 indicates that the Client does not attempt to override the settings for the corresponding Databridge Engine parameter, whose default value is 1.
error_display_limits
Default: 10 errors for the display; 100 errors for the log file
Range: 0-1000, 0-10000
Console: PROCESSING > DMSII Data Error Handling (General error handling > Error display limits)
The error_display_limits parameter allows you to control the number of screen output messages and log file entries for data errors. All data error counts are maintained for individual tables. This parameter prevents Databridge from filling the disk with meaningless errors when a large number of the records in a data set are in error.
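For example, to raise both limits above their defaults (a sketch, assuming the two comma-separated values set the screen and log file limits, respectively):
error_display_limits = 20, 200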
inhibit_8_bit_data
Default: False
Range: True or False
Console: PROCESSING > DMSII Data Error Handling (Character data > Change 8-bit characters to ? ... )
Use the inhibit_8_bit_data parameter for data validation. Do not set this parameter if your data contains international characters.
For example, if your valid alpha data consists of 7-bit characters, set inhibit_8_bit_data to True. The Databridge Client then changes all 8-bit characters to a question mark (?) and issues a warning message on the first occurrence of the bad data. The message contains the keys of the record with the invalid data, as in the following:
WARNING: Item 'cm_addr_line_2' in table 'customers' has 8-bit characters in alpha data - Keys: cm_number=00101301
Note
If an item containing 8-bit characters or control characters happens to be a key, the record is discarded, because changing the bad characters to question marks (?) could result in duplicate records. All discarded records are written to the file "tablename.bad" in the discards subdirectory of the working directory for the data source.
inhibit_console
Default: False
Range: True or False
Related command-line parameter: -C
(toggle)
Applies to: Command-line Client (dbutility) only
Console: N/A
When set to True, this parameter disables the console commands for the command-line Clients (dbutility). The console commands are explained in Controlling and Monitoring dbutility.
inhibit_ctrl_chars
Default: False
Range: True or False
Console: PROCESSING > DMSII Data Error Handling (Character data > Control character)
When this parameter is set to True, the Databridge Client treats all control characters as errors and converts them to a question mark (?). When it is set to False, the Client supports all control characters except NUL, CR, LF, and (in some cases) TAB. The Client for Oracle accepts TAB, and the Microsoft SQL Server Client accepts TAB characters if the bcp delimiter is not the TAB character.
Note
This parameter and the parameter convert_ctrl_char
are mutually exclusive. If you attempt to set them both to True, the configuration file scanner will generate an error.
inhibit_drop_history
Default: False
Range: True or False
Console: CUSTOMIZING > History Tables (Options > Inhibit Drop)
Use this option to prevent the Databridge Client from inadvertently dropping history tables during a clone, process, or drop command, or to prevent the clean-up scripts from running.
This is a safeguard to prevent the user from making an unrecoverable error. If you are sure you want the tables dropped, you can change this setting and rerun the Client. However, make sure to set it back to True afterward.
-
If the data source is dropped, it cannot be reprocessed, because the Databridge Client attempts to drop the history tables and this option prevents that from happening.
-
Clean-up scripts deal with tables that are partially re-cloned. In the case of tables with multiple sources, the sources are re-cloned one data source at a time. In the case of tables that preserve deleted records, the deleted records are preserved during a clone. In the case of MISER data sets that hold history and resident records, the table is re-cloned without dropping the history records (which are different from Databridge Client history tables).
inhibit_init_values
Default: False
Range: True or False
Console: CUSTOMIZING > Advanced (Table reorganization options > Do not set initial values ...)
This parameter prevents new columns that are added after a DMSII reorganization from being set to their initial values. If the tables are large and your applications can deal with the new columns being NULL, setting this parameter to True will save time.
Note
If you ever re-clone such tables, these columns will no longer be NULL.
keep_undigits
Default: 0
Range: 0 - 2
Console: PROCESSING > DMSII Data Error Handling (Character data ... )
This parameter, when set to 1, allows you to keep the undigits in numeric fields that are stored as character data. These characters will have a value of 'A' through 'F' based on the value of the corresponding undigit.
Setting this parameter to 2 causes undigits in numeric fields to be treated as 9s instead.
lag_time_alert
Default: 10
Units: Minutes
Range: 1 - 60
Console: PROCESSING > DMSII Data Error Handling (Lag time alert threshold )
This parameter specifies the lag time threshold, in minutes, at which the Client issues a "Lag Time Threshold exceeded" alert.
linc_century_base
Default: 1957
Range: 1800 and up
Console: PROCESSING > Date and Time Parameters (Date parameters)
This parameter allows you to configure the base year for Linc dates, which is a site-specific parameter in Linc databases.
masking_parameter[n]
Default: N/A
Range: "string"
Applies to: SQL Server Client using SQL Server 2016 or newer
Console: CUSTOMIZING > SQL Suffixes (Data Masks)
This array of parameters is used to hold the parameters for
the random and partial masking functions. Data masking is defined using
the masking_info
column of DATAITEMS, which defines the masking function
and the index of the corresponding parameter string (which does not
include the parentheses). The format of the masking_info
column (which is
an int) is 0x00nn000m, where m is the masking function code and nn is
the index into the table of masking parameters.
The following masking codes are defined: 0 -- no masking, 1 -- default() masking function, 2 -- email() masking function, 3 -- random() masking function, 4 -- partial() masking function. The last two masking functions have 2 and 3 parameters, respectively. These parameters are represented in the left half of the masking_info by the index into the table of masking parameters (for example, 0x00010003 would be a random() masking function with its parameters represented by the masking_parameter[1] entry in the configuration file). This parameter could be "0,100", which would result in the masking function "random(0,100)" being used in defining the data mask for the column.
You can reuse masking_parameter entries as many times as needed. The index must be between 1 and 100. Refer to the SQL Server documentation for details on how data masking works.
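To make this concrete, here is a sketch of the corresponding entry in an exported text configuration file, using the example values above:
masking_parameter[1] = "0,100"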
The figure below shows how to set up masking parameters using the Administrative Console's Customize command. This example results in the accountno column having the following attributes:
accountno varchar(16) masked with (function='partial(0,"************",4)') NULL,
![](images/client-masking-parameter.png)
max_clone_count
Default: 10000
Range: 1000-100000 SQL insert statements before a commit
Related command-line option: -s
Console: N/A
In most cases you do not need to use this parameter. It is used only when you enter the -s option at the command line or set the DSOPT_No_Loader bit (2) in the ds_options column of the corresponding row in the DATASETS table.
The max_clone_count parameter applies to the dbutility process and clone commands for cloning only (not updates), when the command-line option -s is set. It defines the maximum number of rows that the Client can insert into a table before a commit is required.
The value of this parameter has no effect on the commit frequency during the processing of updates, which is controlled by Databridge Engine.
max_discards
Default: 0,100
Range: 0-10000, 0-1000
Console: PROCESSING > DMSII Data Error Handling (General error handling > Discard record ... )
This is a two-part parameter that controls how the Client handles discarded records. The first number is the total number of discards the Client tolerates before abending. The second number is the maximum number of discarded records per table that are written to the discard file; discards that exceed this number are ignored.
If either of these values is set to zero, no limit is imposed for the corresponding action, and the Client behaves the way it did before this parameter was implemented.
The first value must be greater than the second value, unless the first value is zero, which indicates that it is unlimited. Otherwise, the Client will always abend before the second value goes into effect.
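For example, a sketch in an exported text configuration file (values are illustrative) that abends the run after 1000 total discards and writes at most 100 discarded records per table:
max_discards = 1000,100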
max_retry_secs
Default: 20
Range: 1 - 36000 seconds
Related parameters: use_dbwait
, max_wait_secs
Console: PROCESSING (Audit unavailable action > Retry interval ...)
The max_retry_secs parameter works only when use_dbwait is enabled, so be sure to set max_wait_secs as well.
The max_retry_secs parameter applies when you use the process command to track changes. It defines the retry time (in seconds) for the DBWAIT API call for Databridge Engine, which is called when the use_dbwait parameter is set to True. This value defines the amount of time to wait before reading an audit file again.
For example, if you set max_wait_secs to 3600 seconds (1 hour) and max_retry_secs to 60 seconds, Databridge Engine checks for new updates in the audit file once a minute for an hour before giving up and returning an audit file unavailable status.
Note that when you supply a second value for the parameter max_wait_secs, the value of max_retry_secs must be less than that value, as the Client expects to get control back within the time specified by the second value of max_wait_secs. Ideally, the second value of max_wait_secs should be an exact multiple of max_retry_secs to ensure that the Client gets control back after the correct amount of time. For example, if using the default value of 60 for the second value of max_wait_secs, we recommend you set this parameter to 20 or 30 seconds, which ensures that the Client gets control back in 60 seconds.
max_srv_idle_time
Default: 0
Range: 15 -- 600 minutes
Console: PROCESSING > Advanced (Server inactivity timeout)
This parameter allows the timer thread to time out a server connection after several inactivity warnings. When this parameter is set to a non-zero value, which represents the timeout value in minutes, the Client stops if the length of an inactivity period exceeds this value.
The Client stops with an exit code of 2059. If you are using the service, this causes it to restart the Client after a brief delay. This parameter provides an alternative to the TCP keep-alive mechanism for detecting a dead connection. This situation is most likely to occur if the MCP is HALT LOADED.
When using the Administrative Console, enabling the slider labeled "Server inactivity timeout ..." activates the edit box for supplying the value for this parameter.
max_wait_secs
Default: 3600,60
Range: 0--36000 seconds for the first value, 0 or 60-300 seconds for the second value
Related parameters: use_dbwait
, max_retry_secs
Console: PROCESSING (Audit unavailable action > Maximum wait time ...)
The max_wait_secs parameter works only when you enable use_dbwait. When you set max_wait_secs, also set max_retry_secs.
The max_wait_secs parameter applies when you use the dbutility process command to track changes. It defines the maximum wait time (in seconds) for the DBWAIT API call for Databridge Engine, which is called when the use_dbwait parameter is set to True. This is the maximum amount of time that Databridge Engine waits before returning an audit file unavailable status.
The max_wait_secs value and the max_retry_secs value are the DBWAIT API input parameters. The maximum wait time (max_wait_secs) specifies the cutoff point for the retries (max_retry_secs). DBWAIT gives up when the total amount of time elapsed since the last successful attempt to read the audit file is greater than or equal to max_wait_secs.
The optional second value for this parameter is used to break up large wait times into smaller increments by making the Client repeatedly issue DBWAIT calls using this second value, which must be smaller than the first value (unless the first value is 0).
For example, setting max_wait_secs to 3600,60 will result in the Client issuing a DBWAIT remote procedure call with a max_wait_secs value of 60 seconds. Upon getting a "no more audit available" return status, the Client issues another DBWAIT call, and continues doing so until it has received no data for the amount of time indicated by the first value. This approach ensures that an idle line has some traffic on it, which makes it possible to detect situations where the network goes down and neither side knows about it.
Upon receiving data, the Client resets the timer that keeps track of the idle time during which no updates are received. A value of 0 for the second value makes the Databridge Engine handle the wait-and-retry loop without any involvement by the Client.
Note that when you supply a second value for the parameter max_wait_secs, the value of max_retry_secs must be less than that value, as the Client expects to get control back within the time specified by the second value of max_wait_secs. Ideally, the second value of max_wait_secs should be an exact multiple of max_retry_secs to ensure that the Client gets control back after the correct amount of time. For example, if using the default value of 60 for the second value of max_wait_secs, we recommend you set this parameter to 20 or 30 seconds, which ensures that the Client gets control back in 60 seconds.
Note
A value of 0 indicates that Databridge Engine continually waits.
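Putting the pieces together, here is a sketch of these related settings as they might appear in an exported text configuration file (the values follow the examples above):
use_dbwait = true
max_wait_secs = 3600,60
max_retry_secs = 20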
min_check_time
Default: 600 (expressed in units of seconds)
Range: 10--1200
Related parameters: controlled_execution
Applies to: Command-line Client (dbutility) only
Console: N/A
The min_check_time parameter is used in conjunction with the controlled_execution parameter to reduce the number of times the program reads the corresponding entry in the DATASOURCES table. After a quiet point, which ends a transaction group of updates, the Client only reads the DATASOURCES table if min_check_time has elapsed since the last read. If you set this parameter to 60 seconds, the Client reads the DATASOURCES table no more than once a minute, even if quiet points are only a few seconds apart.
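A minimal sketch of the two settings together in an exported text configuration file (values are illustrative):
controlled_execution = true
min_check_time = 60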
months
Default: JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC
Range: A list of exactly 12 three-character entries
Console: N/A
Use the months
parameter when you want to use month name abbreviations
that are not in English. This parameter applies only when you are using
DMSII date encoding methods that use three-character abbreviations for
months.
For more information on DMSII date encoding methods, see Decoding DMSII Dates, Times, and Date/Times.
To make an entry for the months
parameter, enter your three-character
month names in order and separated by commas.
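For example, a hypothetical entry using Spanish month abbreviations (shown for illustration only; any list of exactly 12 three-character names works):
months = ENE, FEB, MAR, ABR, MAY, JUN, JUL, AGO, SEP, OCT, NOV, DIC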
n_dmsii_buffers
Default: 0
Range: 0, 2 -- 64
Related parameters: n_update_threads
Console: PROCESSING > Advanced (Multi-threaded updates > Number of DMSII buffers ...)
Use this parameter to configure the number of RPC buffers to be used by
the Client. If you let this parameter default or set it to 0, the Client
uses 4 times n_update_threads
RPC buffers or 2 buffers when
n_update_threads
is 0. When you have DMSII links enabled, this
parameter is set to the number of extract workers unless the default
value is larger. Raising this value might improve performance by
ensuring that there are enough buffers queued to keep the update workers
busy at all times.
n_update_threads
Default: 8
Range: 0 -- 16
Applies to: SQL Server (see note) and Oracle Clients
Console: PROCESSING > Advanced (Multi-threaded updates > Number of update threads ...)
Use this parameter to specify the number of update threads to be used. The update threads are responsible for executing SQL to update the user tables and writing bulk loader temporary files. When using the BCP API in the SQL Server Client these threads are also responsible for making the BCP API calls to load the data. If you have multiple processors and disk arrays, setting this parameter to a high value will increase the update processing speed at the expense of additional memory. Avoid setting this parameter to 1, as this will effectively pass off all updates to the single worker thread, when executing them directly would be preferable.
We recommend that you always use multi-threaded updates, as they improve performance considerably.
Note
This parameter requires the use of SQL Native Client in ODBC. The SQL Server driver doesn't support MARS, which is required for multithreaded updates. If MARS cannot be enabled, the Client automatically reverts to using single-threaded updates.
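A sketch of the multi-threaded update settings in an exported text configuration file (values are illustrative; see n_dmsii_buffers above for how the two settings interact):
n_update_threads = 8
n_dmsii_buffers = 32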
null_datetime_value
Default: 19010101
Range: 17530101 to 99991231
Applies to: SQL Server Client
Console: PROCESSING > Date and Time (Null date values ... > Datetime)
Use this parameter to change the value used to represent a NULL date in a datetime column that does not allow nulls. For example, you could change the value to 18991231 if the default value of 19010101 is meaningful.
null_datetime2_value
Default: 19010101
Range: 00010101 to 99991231
Applies to: SQL Server Client
Console: PROCESSING > Date and Time (Null date values ... > Datetime2)
Use this parameter to change the value used to represent a NULL date in a datetime2 column that does not allow nulls. For example, you could change the value to 00010101 if the default value of 19010101 is meaningful.
null_digit_value
Default: 9
Range: 0 or 9
Related parameters: allow_nulls
Console: PROCESSING (Store NULL DMSII numbers as)
Use this parameter when your DMSII data contains NULL values that you do
not wish to store as NULL. This parameter applies only to items that
have the DAOPT_Nulls_Allowed bit reset in the da_options
column of the
corresponding DATAITEMS table entry.
-
If you set
null_digit_value
to 0, all NULL values encountered in DMSII NUMBER data types get stored as zeros. -
If you set
null_digit_value
to 9, all NULL values encountered in DMSII NUMBER data types get stored as high values (999 or 999.999).
numeric_date_format
Default: 23 (format mmddyyyy)
Range: Any legal numeric date format value (dms_subtype values 11--16, 21--26, 31--36)
Console: PROCESSING > Date and Time (Default formats > Numeric date)
The numeric_date_format
parameter enables you to store DMSII dates as
relational database numbers written out in the specified, allowable,
DMSII numeric date format. To configure the numeric_date_format
, you
need to set the DMS_ITEMS Client control table DIOPT_Clone_as_Date bit
and set the sql_type
to 13, which represents a numeric date. The date is
stored as an int data type in Microsoft SQL Server and a number(10) in Oracle.
This feature is useful for converting a DMSII MISER date or Linc date to a readable, numeric date. Note that using a relational database date data type is a much better alternative.
preserve_deletes
Default: False
Range: True or False
Console: PROCESSING > Advanced (General > Preserve deleted records ...)
Setting this parameter to True causes records that contain an extended
update_type
column (type or bit 11) whose value is 2 (DELETE) to survive
a re-clone of the data set. Instead of dropping the table, all
non-deleted records are removed from the table during the re-clone.
This parameter has no effect on the handling of tables that have a
non-DMSII column of type 10 (named deleted_record
by default). Deleted
records are unconditionally preserved when such tables are re-cloned.
rollback_segment
Default: NULL string
Range: rollback_segment_name
Applies to: Oracle Clients only
Console: PROCESSING (General)
This parameter makes the Client use the specified rollback segment by
executing the SQL "SET TRANSACTION USE ROLLBACK SEGMENT
Rollback_segment_name
" at the start of every transaction.
set_blanks_to_null
Default: False
Range: True or False
Console: CUSTOMIZING (General > Set blank columns to NULL)
This parameter causes the Client to store zero-length character data (that is, "") as NULL instead of a single space. This parameter only applies to columns that are not part of the index.
set_lincday0_to_null
Default: False
Range: True or False
Console: PROCESSING > Date and Time (Date parameters > Linc date base year ...)
This parameter causes the Client to treat a Linc date of 0 as NULL rather than 1/1 of the Linc base year.
show_perf_stats
Default: True
Range: True or False
Console: PROCESSING > Statistics (Logging options > Show performance statistics)
The show_perf_stats parameter enables the display and logging of performance statistics at the end of the data extraction phase, when the AFN value changes (for example, when the processing of an audit file is completed), and when the process or clone command terminates.
show_statistics
Default: True
Range: True or False
Related command-line option: -v
Related parameter: statistics_increment
Console: PROCESSING > Statistics (Logging options > Show statistics)
The show_statistics
parameter, when set to True, causes the Databridge Client to
display record count statistics at the intervals specified by the
statistics_increment
parameter. The statistics lines are useful in
monitoring the progress of lengthy operations.
The show_statistics
parameter applies to both the process
and
clone
commands.
show_table_stats
Default: True
Range: True or False
Console: PROCESSING > Statistics (Logging options > Show table statistics)
This parameter, when set to True, causes the Client to log the record counts for each cloned table at the end of the data extraction phase. During update processing, it causes the Client to log the update counts and average update times when the Client starts processing a new audit file. These statistics are reset after being logged, as they are incremental statistics that span the last audit file processed. Tables that have no updates are omitted from these reports.
sql_exec_timeout
Default: 180,0
Range: 15-1200 for the first value, 0 or 30-3600 for the second value
Console: PROCESSING > Advanced (SQL execution timeout values)
The sql_exec_timeout parameter applies to update processing only. The first value allows you to override the default setting of 180 seconds (3 minutes), which determines when the timer thread issues a WARNING about a query taking too long to complete. The optional second value, which defaults to 0 when omitted, sets the secondary timeout after which a long-running query is aborted; a value of 0 disables this timeout. The second value must be greater than the first value, unless it is 0.
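For example, a sketch in an exported text configuration file that warns after 3 minutes and aborts a long-running query after 10 minutes (the second value is illustrative):
sql_exec_timeout = 180,600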
sql_heart_beat
Default: False
Range: True or False
Console: PROCESSING > Advanced (Generate SQL heartbeats ...)
This parameter was implemented as a work-around for situations where long clones resulted in the Client's database connections being closed because of long periods of inactivity. When this parameter is enabled, the Client periodically executes a dummy SQL update on the Client connection to keep the connection alive during data extraction, where the only activity is on the bulk loader connection.
When using the Administrative Console, enabling the slider labeled "Generate SQL heartbeats ..." activates the edit box for the value for this parameter.
statistics_increment
Default: 100000,10000
Range: 1--10000000 (10 million) for the first value, 1--1000000 (1 million) for the second value
Related command-line option: -v
Related parameter: show_statistics
Console: PROCESSING > Statistics (Record count display intervals)
The statistics_increment
parameter applies when show_statistics
is
set to True or when the -v
option is in effect. The
statistics_increment
parameter lets you set the display interval for
record counts that occur during cloning and updating. For example, a setting of 1 indicates that the Databridge Client will display a line as every record is processed. A setting of 1000000 indicates that the Databridge Client will display a line after every one million records processed.
Setting the statistics_increment
parameter to a low number slows
processing time, especially during cloning.
Enter a value using the following syntax:
statistics_increment = ccc[,uuu]
Where | Is |
---|---|
ccc | The record count interval for displaying record statistics during cloning. |
uuu | The record count interval for displaying record statistics during updating. |
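For example, the default setting displays a count line every 100,000 records during cloning and every 10,000 records during update processing:
statistics_increment = 100000,10000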
stop_after_fixups
Default: False
Range: True or False
Console: PROCESSING > Stop Conditions (Stop after fixup phase)
Setting this parameter to True causes the Client to stop as soon as all the tables are synchronized. This is a useful stopping point in a data warehousing environment, as the warehouse can be loaded at this point. It is also helpful if you want to validate data before declaring things to be in working order.
stop_after_gc_reorg
Default: False
Range: True or False
Console: PROCESSING > Stop Conditions (Stop after garbage collection reorganization)
Setting this parameter to True causes the Client to stop at the first
quiet point after a garbage collection reorganization occurs. The
program acts as if the operator issued a console QUIT
command (or a
SIGTERM signal in the case of UNIX) at the point when the garbage
collection reorganization was detected. The Client exit status is 2034
if garbage collection or a file format reorganization is encountered in
the audit trail.
stop_after_given_afn
Default: False
Range: True or False
Applies to: Command-line Client (dbutility) only
Console: N/A
Note
This parameter is specific to replication and applies only to the command-line Client. The command-line -F
option, which allows you to specify the AFN after which to stop, overrides this parameter.
The stop_after_given_afn parameter enables you to stop processing after an externally specified audit file has been processed. Note that you must store the audit file number in the stop_afn column of the data source's DATASOURCES entry using tools external to dbutility.
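For example, a sketch of setting the stop AFN directly with SQL, assuming the DATASOURCES entry is identified by its data_source column (the data source name and AFN value are hypothetical):
UPDATE DATASOURCES SET stop_afn = 1234 WHERE data_source = 'MYSOURCE'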
The stop_after_given_afn parameter forces the Client to check the value of the stop_afn column of the DATASOURCES table. If a non-zero value is found in this column, the Client sets the stop AFN value and stops reading the DATASOURCES table.
To automate this functionality using a script launched by the service, see Automate Client Operations with the Service.
stop_on_dbe_mode_chg
Default: False
Range: True or False
Console: PROCESSING > Stop Conditions (Stop on Enterprise Server audit file origin change)
Setting this parameter to True causes the Client to stop as soon as it detects that the Databridge Enterprise Server access mode has changed from the value specified in the dbe_dflt_origin parameter. For example, if dbe_dflt_origin is set to "direct" and Enterprise Server switches to "indirect", the Client stops at the next quiet point.
suppress_delete_msgs
Default: False
Range: True or False
Console: PROCESSING > DMSII Data Error Handling (Suppress messages for delete operations with bad keys)
Setting this parameter to True causes the Client to stop reporting data errors for DELETE operations that result in discards. The Client ignores such discards entirely: because their keys contain data errors, the target records cannot exist in the relational database. Such discards are not included in the discard count and are not written to the table's discard file.
track_vfds_nolinks
Default: True
Range: True or False
Console: CUSTOMIZING (DMSII related parameters > Track Variable Format datasets ...)
When set to True, this parameter causes the Client to track variable-format data sets that contain links; however, the links themselves are not tracked. When a record is created in a variable-format data set, links are set to NULL. If the application later assigns the links to point to other records, the Client database will not contain these new link values until the variable-format data set is re-cloned. This parameter is enabled by default.
When this parameter is set to False, variable-format data sets are set to
have a ds_mode
value of 11 after initial cloning, which makes them ineligible for change tracking.
use_dbwait
Default: False
Range: True or False
Related parameters: max_wait_secs, max_retry_secs
Console: PROCESSING (Audit unavailable action > Wait and retry)
Use this parameter to select the Databridge RPC to use during update processing, as follows:
-
Set to False to use the DBREAD RPC. The DBREAD RPC returns an audit file unavailable status when all available audit files have been processed.
-
Set to True to use the DBWAIT RPC. The DBWAIT RPC waits for an audit file to become available. This is the required setting if the reading of the active audit file is enabled (
READ ACTIVE AUDIT
parameter in the Engine control file).
The difference between the DBWAIT RPC and the DBREAD RPC is that DBWAIT waits for updates to become available rather than returning an audit file unavailable status.
This parameter applies only to the process command for updates. The Databridge Client ignores it for a clone command, which always uses the DBREAD RPC.
Note
You can temporarily toggle this parameter by using the -w
command-line option.
use_latest_si
Default: False
Range: True or False
Console: PROCESSING > Engine and Enterprise Server (General > Include latest StateInfo ...)
If the use_latest_si
parameter is set to True, the Client will request
that the server include the latest StateInfo in all the data records
sent during audit file processing. The overhead of doing this is 24
bytes per record. This parameter is mainly intended as a debugging tool
when chasing audit file processing problems. In addition to making the
Client print up-to-date audit locations instead of the audit location of
the last quiet point that was used as a COMMIT, this option may be
useful when you use the audit timestamp as an external column for data
tables. Enabling this parameter will make the values used in such
columns much more accurate. Not all DMSII audit file records have an
associated timestamp, so the timestamp will still not be 100% accurate.
Server Option Parameters
The following parameters are included in the [params] section of the Databridge Client configuration file. The parameters listed in this section affect how the Databridge Client processes updates.
shutdown
Console: N/A
This parameter applies only to the command-line Client (dbutility). It inhibits update processing for a given period of time after a LIMIT_NAME or LIMIT_TIME condition (normally initiated by a STOP parameter) is encountered. The format of the shutdown parameter is as follows:
shutdown {until | for} hh:mm after stop
The first form specifies the time of day at which the shutdown period
ends, while the second form specifies the length of the shutdown period.
The command-line option -o can override this parameter.
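For example (the times are illustrative):
shutdown until 06:00 after stop
shutdown for 02:00 after stop
The first form resumes update processing at 6:00 a.m.; the second resumes it two hours after the stop condition was encountered.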
stop
Console: PROCESSING > Stop Conditions (Dynamic stop conditions)
This parameter allows you to specify a condition for the Databridge Engine to stop processing updates as follows:
stop {before | after} {task "name" | time hh:mm[:ss]}
For example, you would enter the following:
stop before task "name"
- or -
stop after time 12:30:15
Generally, you should include only one stop specification in the configuration, although using two stop specifications is legal. When more than one task or more than one time is specified in the configuration file, the program honors only the last one. However, when a task specification is coupled with a time specification, the Client honors the task specification only if it occurs on the date specified by the time specification.
Generate Command Parameters
The generate command parameters include decimal_aa_length and purge_dropped_tabs (Oracle Clients only) and a number of SQL statement suffixes (Oracle and SQL Server Clients).
Parameter | Description |
---|---|
decimal_aa_length |
Default: 15 Range: 15 – 38 Applies to: Oracle Client Use this parameter to control the size of the data type that represents a decimal AA Value—by default, this is NUMBER(15). If you set this parameter to 16, the Client will use NUMBER(16) for all decimal AA Values. |
purge_dropped_tabs |
Default: False Range: True or False Applies to: Oracle Client Console: CUSTOMIZING > SQL Suffixes (Enable purge option for dropped tables) This parameter makes the generate command add the PURGE option to the "drop table ..." SQL statement in the drop_table stored procedure, which is refreshed every time the Oracle Client runs this command. The PURGE option prevents dropped tables from being placed in the Oracle recycle bin. Since all the drop table scripts in the dbscripts folder call this stored procedure, changing this parameter takes effect as soon as you run a generate command, even if the command finds nothing to do. |
SQL Statement Suffixes
Console: CUSTOMIZING > SQL Suffixes ({Table | Index} SQL suffixes ...)
The following parameters determine which extra clauses are added to the create table and create index SQL statements in the scripts generated by the Databridge Client.
Suffixes must be entered on a single line and be enclosed in double quotation marks. Suffixes can be up to 256 characters in length.
Parameter | Description |
---|---|
create_index_suffix |
Default: None Range: “suffix” Applies to: Oracle and SQL Server Clients The create_index_suffix parameter enables you to define extra attributes (a suffix) for create index SQL statements that the Client generates for any given table. Each attribute list is defined with a number or index n so you can reference it. Up to 100 different suffixes can be defined. Individual indexes can select one of the suffixes by specifying this value in the index_suffix column of the corresponding DATATABLES Client control table entry. The index suffix is then concatenated to all create index SQL statements for this table.Here's an example suffix for a SQL Server database which specifies file groups for create index statements: create_index_suffix [1]="ON filegroup" Here's an example suffix for an Oracle database: create_index_suffix [1]="TABLESPACE name STORAGE MINEXTENTS 1 NEXT 10 MAXEXTENTS UNLIMITED" |
create_table_suffix |
Default: None Range: “suffix” The create_table_suffix parameter enables you to define a suffix for create table SQL statements that the Client generates for any given table and to assign a number to this suffix so you can reference it. The index n allows for up to 100 different suffixes to be defined. Individual tables can select one of the suffixes by specifying this value in the create_suffix column of the corresponding DATATABLES Client control table entry. The table suffix is then concatenated to all create table SQL statements that specify the given suffix number.Here's an example suffix for an SQL Server database which specifies filegroups for create table statements: create_table_suffix [1]="ON filegroup" Here's an example suffix for an Oracle database: create_table_suffix [1]="TABLESPACE tablename" |
global_index_suffix |
Default: None Range: “suffix” Applies to: Oracle and SQL Server Clients The global_index_suffix parameter enables you to add a filegroup (SQL Server) or a tablespace (Oracle) or any other SQL command specification to all create index SQL statements that the Client generates except those that have a suffix associated with the create_index_suffix parameter. |
global_table_suffix |
Default: None Range: “suffix” The global_table_suffix parameter allows you to add a filegroup (SQL Server) or a tablespace (Oracle) or any other SQL command specification to all the create table SQL statements that the Client generates, except for statements whose suffix is associated with the create_table_suffix parameter. |
user_column_suffix |
Default: None Range: “suffix” The user_column_suffix parameter allows you to add a suffix to the column definition created by the generate command for external columns of type user_column1 through user_column4. This is particularly useful for adding default clauses. |
In the case of index suffixes (both global and specific) for the Oracle Client, you can use the string $(INDEX_NAME) as an environment variable that the Client replaces with the actual index name for the table when using the suffix. You can also insert new line characters into the suffix by using "\n"; this is sometimes necessary when the suffix contains a SQL statement that must be executed separately after the index creation completes. An example of this is enabling parallel mode for index creations, which speeds up index creation significantly. You can use the following index suffix to do this:
"parallel (degree 8)\n/***/\nalter index $(INDEX_NAME) parallel 1"
Once the index is created, the alter index statement sets the parallel degree back to 1. It needs the index name to do this; using the $(INDEX_NAME) environment variable makes this possible without having to write separate scripts for each table. The /***/ is inserted into the SQL suffix to force the Client to execute the create index statement before executing the alter index statement (using a semicolon causes an OCI error). Inserting "\n/***/\n" makes the Client break the line into two separately executed SQL statements.
Data Masking Strings
Applies to: SQL Server Client (using SQL Server 2016 or newer version)
Console: CUSTOMIZING > SQL Suffixes (Data Masks)
These strings provide the data for the arguments of the random() and partial() data masking functions. They do not include the parentheses.
See the masking_parameter entry in the [params] section for details on how to set up data masking for columns with sensitive data.
Display Command Parameters
The following parameter is included in the [params] section of the
Databridge Client configuration file. It affects the display
command only.
When using the Administrative Console this parameter can be found in the PROCESSING page of the Client Configuration property pages.
display_active_only |
Default: True Range: True or False Related command-line option: -a Console: PROCESSING (General) Use the display_active_only parameter to affect the display command, as follows: when set to True, the display command lists only the Client control table entries associated with active data sets; when set to False, it lists all entries. You can temporarily override this parameter by using the -a command-line option. |
User Scripts Parameters
The following parameters are included in the [params] section of the Databridge Client configuration file. The parameters listed in this section affect what the Databridge Client does with user scripts.
When using the Administrative Console these parameters can be found in the CUSTOMIZING page of the Client Configuration property pages.
Note
We highly recommend that you set these parameters. As long as you have made sure that each user script includes all of the changes for the specified data set, the user scripts ensure that the Databridge Client can handle DMSII reorganization changes.
check_user_scripts |
Default: False Range: True or False Set this parameter to True to let the Databridge Client inform you if a user script for a table is missing. In this case, the Databridge Client returns the following message: ERROR: Unable to open script file filename This parameter is especially useful if you have created data table creation user scripts and index creation user scripts for every table in your relational database. The Databridge Client runs these scripts immediately after it completes its own scripts for creating tables and table indexes. NOTE: This parameter does not apply to data set selection user scripts and data table customization scripts. |
[Scheduling]
The Scheduling parameters section only applies to the dbutility
process
command. You must run the process
command once before the
scheduling takes effect. For more information, see Scheduling dbutility Updates.
To schedule Client runs that are initiated from the Administrative Console, click PROCESSING > Scheduling to open the Client Configuration property pages and set these parameters. For more information, see the Databridge Administrative Console Help.
Parameter | Description |
---|---|
blackout_period |
Default: 00:00, 00:00 Range: 00:00 to 24:00 (The two time values cannot be equal.) Use this parameter to specify a fixed block of time during which the Client cannot run. This parameter is useful for operations, such as database backups, that can only take place when the Client is inactive. For example, if you want to back up the database daily between 1:00 a.m. and 2:30 a.m., define a blackout period from 0:55 to 2:30. The extra 5 minutes ensures that the Client finishes any long transactions before the database backup begins. If the Client is running when the blackout period starts, the Client automatically stops. If the Client is waiting for an idle host to send it updates when the blackout period starts, the Client resets the TCP/IP connection and aborts the run if it hasn't received any updates after 15 seconds. If you try to run the Client during a blackout period, nothing happens, as the service will not start the Client during a blackout period. If the scheduler tries to schedule a DBClient run at a time that falls within a blackout period, the start of the run is delayed until the blackout period ends. When this parameter is updated using the Administrative Console, it is set to the same value in both the service and Client configuration files. |
daily |
Default: daily = 08:00, 12:00, 17:00, 24:00 Range: 12 entries in ascending order from 00:00 to 24:00 NOTE: The daily parameter is mutually exclusive with the sched_delay_secs parameter. If you specify both daily and sched_delay_secs in the [scheduling] section of the configuration file, sched_delay_secs overrides daily regardless of the order in which they are specified. Enter the times you want the dbutility process command to wake up and gather updates from the DMSII database. You must specify 24-hour time (for example, 5:00 for 5:00 a.m. and 17:00 for 5:00 p.m.). The range for minutes is 00–59. You can specify up to 12 times for the daily parameter, and you must specify the times in ascending order. |
exit_on_error |
Default: True Range: True or False The exit_on_error parameter indicates that the scheduling should be terminated if an error occurs. If this parameter is set to false, the process command is retried at the next scheduled time. |
sched_delay_secs |
Default: 0 Range: 0–86,400 seconds (24 hours) NOTE: The sched_delay_secs parameter is mutually exclusive with the daily parameter. If you specify both daily and sched_delay_secs in the [scheduling] section of the configuration file, sched_delay_secs overrides daily regardless of the order in which they are specified. Use the sched_delay_secs parameter to specify a time delay between successive executions of the process command. The sched_delay_secs parameter does use the sched_retry_secs parameter. To disable the sched_delay_secs parameter, comment it out or set its value to 0. |
sched_minwait_secs |
Default: 0 Range: 0–86,400 seconds (24 hours) This parameter ensures that the next scheduled process command is delayed by at least the specified interval, so that it doesn't occur too soon after the current scheduled time. |
sched_retry_secs |
Default: 3600 seconds (1 hour) Range: 0–86,400 seconds (24 hours) The sched_retry_secs parameter applies only after a failed process command. A value of 0 means that dbutility schedules the next run at the next regularly scheduled time without any retries. For example, if the mainframe is down when dbutility attempts to run a process command using the scheduling option, dbutility retries the operation after the specified amount of time has elapsed. If the retry time value is larger than the next scheduled time, dbutility retries at the next scheduled time. |
[EbcdictoAscii]
Use the [EbcdictoAscii] section of the configuration file to customize character translation tables.
When using the Administrative Console, you can customize the translation table by clicking CUSTOMIZING > Translations to open this section of the Client Configuration property pages.
Note
If you plan to customize character translation tables, you must modify the configuration file before you run dbutility process
or dbutility clone
to populate the Databridge data tables in the relational database. In addition, if you customize the character translation tables when you populate the data tables the first time, you must use them on all subsequent updates. If you don't, the data will be invalid.
Translation Table
The Databridge Client uses the ISO standard translation tables to translate EBCDIC data received from the host to ASCII data. You can adjust the translation tables to work with national character sets, which typically redefine characters such as { } [ ] | to represent national characters.
Redefining a Character
To redefine a character, alter the EBCDIC to ASCII translation table by entering the pair of numeric values representing the EBCDIC character code and the corresponding ASCII character code in the [EbcdictoAscii] section of the configuration file. You can use decimal or hexadecimal (for example, 124 for decimal or 0x7C for hexadecimal) to represent the EBCDIC and ASCII character codes.
The Databridge Client does not allow you to change the values of characters that are constant across national character sets, including the space, hyphen (-), single quote ('), digits 0–9, and the letters of the alphabet (A–Z and a–z). Changing any of these characters causes an error unless you set the restrict_translation parameter appropriately.
Example
The following example shows EBCDIC to ASCII translation using hexadecimal characters. Note that this file is for example only; it does not represent any national character set.
;hexadecimal format
[EbcdictoAscii]
0x7C = 0x7E ; remapping of @ to ~
0xE0 = 0x7C ; remapping of \ to |
0xC0 = 0x5B ; remapping of { to [
0xD0 = 0x5D ; remapping of } to ]
External Data Translation DLL Support
The following parameters are included in the [params] section of the Databridge Client configuration file.
When using the Administrative Console, you can change the translation DLL name by clicking CUSTOMIZING > Translations to open this section of the Client Configuration property pages.
Parameter | Description |
---|---|
eatran_dll_name |
Default: “DBEATRAN.DLL” Range: “dllname” NOTE: You must include quotation marks around the filename. The eatran_dll_name parameter allows you to specify a name other than DBEATRAN.DLL for the external translation DLL. |
use_ext_translation |
Default: False Range: True or False The use_ext_translation parameter enables you to translate 16-bit character sets from EBCDIC to ASCII. When this parameter is enabled, the Databridge Client accesses an alternate data translation routine that uses an external DLL, named DBEATRAN.DLL (dbeatran.so for UNIX), instead of the standard translation procedure (see the [EbcdictoAscii] section). The DBEATRAN.DLL contains the EBCDIC_to_ASCII entry point. This DLL is dynamically loaded when you enable the use_ext_translation option. |
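A sketch of enabling external translation in an exported text configuration file (the DLL name shown is one of the double-byte DLLs described below):
use_ext_translation = true
eatran_dll_name = "dbeatran_cp932.dll"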
Double-byte Translation DLLs
We provide translation DLLs that map the mixed multi-byte character sets used on Japanese and Traditional Chinese MCP systems to the corresponding Microsoft code pages 932 and 950, respectively. To install these DLLs, enable Double-byte Translation Support in the Features tab of the installer for SQL Server. These DLLs are named dbeatran_cp932.dll and dbeatran_cp950.dll; you need to specify the one you use in the Client configuration file via the eatran_dll_name parameter. These DLLs use configuration files that must be added to the config directory of each data source that uses them.
Setting up dbeatran_cp932.dll
This DLL is configured by copying the sample configuration files installed in the SQLServer folder in Program Files to the config folder in the Client's working directory using the appropriate names.
-
Depending on whether you are using JapanEBCDICJBIS8 or JapanV24JBIS8 on the MCP, copy the file dbtrans_jbis8.smp or dbtrans_v24jbis8.smp to the config folder as dbtrans.cfg.
-
Copy the file dbgaiji.smp to the config folder as dbgaiji.cfg.
The file dbtrans.cfg, which is an ini file, has five sections labeled [options]
, [ebcdic_to_ascii]
, [euc_to_jis_table1]
, [euc_to_jis_table2]
, and [euc_to_jis_table3]
.
Options section
This section supports the following parameters:
log_messages = {yes | no}
default: no
When this parameter is set to yes
, the DLL logs all translated data to the file translate.log in the client’s working directory. This file is overwritten on every client run.
IMPORTANT: Only set this parameter to yes
when testing, because the huge log file adds significant overhead to the Client run.
show_tables = {yes | no}
default: no
When this parameter is set to yes, the DLL writes the single- and double-byte translation tables that it uses to the file translate.log in the Client’s working directory.
sok_eok = {yes | no}
default: no
When this parameter is set to yes
, SOK (start of Kanji) and EOK (end of Kanji) characters are translated to blanks. Otherwise, they are not included in the translated data.
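As an illustration, an [options] section of dbtrans.cfg that dumps the translation tables for verification might look like this (a sketch; remember the logging caveats above):
[options]
log_messages = no
show_tables = yes
sok_eok = yes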
Translation table sections
The section ebcdic_to_ascii defines the values for the single-byte translation table for EBCDIC data that is mapped to a single-byte in Code Page 932. The values in the section euc_to_jis_table1 are used to translate the first byte of a double-byte character sequence, while the values in the sections euc_to_jis_table2 and euc_to_jis_table3 are used to translate the second byte of a double-byte sequence, depending on whether the first byte of the MCP data is odd or even.
This can be done because the mapping of the data follows a strictly consistent pattern. You can modify these tables; however, this will affect all translations. All three tables have a base value of 0xA1, which means that they are indexed by the MCP character's value minus 0xA1. The file dbgaiji.cfg defines the custom characters used on the MCP. These characters are in the range (0x41–0x9E, 0xA1–0xFE) on the MCP.
Setting up dbeatran_cp950.dll
This DLL is configured by copying the sample configuration file installed in the SQLServer folder in Program Files to the config folder using the appropriate name.
Copy the file dbtrans_cp950.smp to the config folder as dbtrans.cfg.
The file dbtrans.cfg, which is an ini file, has three sections labeled [options]
, [ebcdic_to_ascii]
and [euc_to_cp950_table]
.
Options section
This section supports the following parameters:
log_messages = {yes | no}
default: no
When this parameter is set to yes
, the DLL logs all translated data to the file translate.log in the client’s working directory. This file is overwritten on every client run.
IMPORTANT: Only set this parameter to yes
when testing, because the huge log file adds significant overhead to the Client run.
show_tables = {yes | no}
default: no
When this parameter is set to yes, the DLL writes the single- and double-byte translation tables that it uses to the file translate.log in the Client’s working directory.
sdo_edo = {yes | no}
default: no
When this parameter is set to yes
, SDO (start of double-byte) and EDO (end of double-byte) characters are translated to blanks. Otherwise, they are not included in the translated data.
illegal_ebcdic_char = 0xnn
default: 0x3F
This value is used to replace all entries in the ebcdic_to_ascii table that are 0x00. The default is 0x3F, which is a question mark.
illegal_dbcs_char = 0xnnnn
default: 0xA1B8
This value is used to replace all entries in the double-byte translation table that are 0x0000. The default value is 0xA1B8, which is the white star character.
Translation table alterations
The sections ebcdic_to_ascii and euc_to_cp950_table are used to alter entries in the tables in the DLL that are used for single-byte and double-byte character translations. The format of these entries is:
value1 = value2
Where value1 is the MCP character and value2 is the character to which it is translated. At DLL startup, these sections are processed and the tables are altered before they are written to the log file when the parameter show_tables is set to yes. This allows you to see the actual translation tables that are being used. Values can be in hex or decimal; hex values must be prefixed by 0x.
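For example, a hypothetical alteration of one double-byte entry (the values shown are illustrative, not taken from a real character set):
[euc_to_cp950_table]
0xA4B0 = 0xA4B1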
[DBConfig]
This section contains parameters that are related to the Administrative Console's Customize command.
default_date_fmt
Default: 21
Range: 1-296
Console: PROCESSING > Date and Time Parameters (Default date formats)
This parameter specifies the default date format for numeric columns that are cloned as dates. For a MISER database, this should be set to 1.
global_type0_changes
Default: True
Range: True or False
Console: CUSTOMIZING (Customizing General)
The version 7.0 DBClntCfgServer program, together with the Administrative Console, applies all customizations made to the fixed part of a variable-format data set to all the record types, as they all contain the exact same fixed part. This parameter is provided as a safeguard for the unlikely situation where users do not want this behavior. Some sites have variable-format data sets with a large number of record types; if you customize a date in the fixed part using the Administrative Console's Customize command, you only have to do it once and it is applied to all the record types. The only reason this parameter is not implemented in the Administrative Console is that it is highly unlikely that anyone will want to change it.
[Encryption]
This section contains parameters that are related to the data encryption in Client/Server communications using SSL/TLS.
enable_encryption
Default: False
Range: True or False
Console: Encryption (Enable ENCRYPTION ...)
Enabling this parameter is the first step toward using SSL/TLS encryption between the Client and DBServer. See the section on setting up encryption in the Databridge Installation Guide for more about how to do this. If you are using the Administrative Console, simply move the slider to the on position; this makes the next two parameters visible.
ca_file
Default: ""
Range: String
Console: ENCRYPTION (CA file)
This parameter is a full file specification for the file that contains the certificate to be used.
ca_path
Default: ""
Range: String
Console: ENCRYPTION (CA path)
This parameter is the path of a directory that contains the bundle of certificates to be used.
certify_server_name
Default: False
Range: True or False
Console: ENCRYPTION (Check server name in certificate)
This parameter indicates whether the server certificate is checked to determine that the server is the node we think we are connecting to.
tls_host_name
Default: ""
Range: String
Console: N/A
This parameter is intended to specify the server name to check for when the certify_server_name
parameter is set to True. The code does not currently use this parameter.
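As an illustration, the [Encryption] section of an exported text configuration file might look like this (the certificate file path is hypothetical):
[Encryption]
enable_encryption = true
ca_file = "C:\certs\dbserver_ca.pem"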
Reference Tables
The following reference tables show all of the configuration file parameters and, as applicable, their associated environment variables, command-line options, and dbutility commands. The tables also show relationships between configuration file parameters that work together.
Because these tables do not explain each configuration file parameter, environment variable, and command-line option in detail, we recommend that you use them for reference only.
Bulk Loader Parameters
The following parameters from the [Bulk_Loader] section
of the Databridge Client configuration file apply only to the dbutility
clone
and process
commands and have no associated command-line options.
[Bulk_Loader] Parameter | Bulk Loader Utility |
---|---|
bcp_batch_size | SQL Server |
bcp_code_page | Oracle and SQL Server (bcp only) |
bcp_copied_msg | SQL Server (bcp only) |
bcp_delim | SQL Server (bcp only) |
bcp_packet_size | SQL Server (bcp only) |
enable_parallel_mode | Oracle |
inhibit_direct_mode | Oracle |
max_bcp_failures | SQL Server and Oracle |
max_temp_storage | SQL Server (bcp only) and Oracle (Windows only) |
sqlld_bindsize | Oracle |
sqlld_rows | Oracle |
verify_bulk_load | All |
Scheduling Parameters
The following [Scheduling] parameters from the Databridge Client
configuration file have no associated command-line parameter, and they
apply to the process
command only when using the command-line Client
(dbutility):
-
daily
-
exit_on_error
-
sched_delay_secs
-
sched_retry_secs
EBCDIC to ASCII Parameters
EBCDIC to ASCII translation applies only to the clone
and process
commands and has no associated command-line options.
Params Parameters
The following parameters are from the [params] section of the Databridge Client configuration file:
[params] Parameter | Option | dbutility Command | Notes |
---|---|---|---|
allow_nulls | | define and redefine | |
alpha_error_cutoff | | clone and process | |
auto_mask_columns | | define, generate, process, redefine and clone | SQL Server 2016 and newer |
automate_virtuals | | clone and process | |
aux_stmts | | clone and process | This parameter applies to Oracle and SQL Server ODBC Clients only. |
batch_job_period | | clone and process | |
bracket_tabnames | | clone and process | SQL Server only |
century_break | | clone and process | |
check_user_scripts | | clone and process | |
clr_dup_extr_recs | | generate | |
commit_absn_inc | | clone and process | |
commit_idle_database | | clone and process | |
commit_longtrans | | clone and process | |
commit_time_inc | | clone and process | |
commit_txn_inc | | clone and process | |
commit_update_inc | | clone and process | |
controlled_execution | -o | clone and process | |
convert_ctrl_char | | clone and process | |
correct_bad_days | | clone and process | |
create_index_suffix [n] | | generate | |
create_table_suffix [n] | | generate | |
dbe_dflt_origin | | clone and process | |
decimal_aa_length | | define and redefine | Oracle only |
default_user_columns | | define and redefine | |
defer_fixup_phase | -c | clone | Toggle |
dflt_history_columns | | define and redefine | |
discard_data_errors | | clone and process | |
display_active_only | -a | display | Override |
display_bad_data | | clone and process | |
eatran_dll_name | | clone and process | |
enable_af_stats | | clone and process | |
enable_dms_links | | define, redefine, process and clone | |
enable_doc_records | | clone and process | |
enable_dynamic_hist | | redefine | |
enable_ff_padding | | clone and process | |
enable_minimized_col | | clone and process | |
enable_optimized_sql | -N | clone and process | Toggle |
engine_workers | | clone and process | |
error_display_limits | | clone and process | |
external_column [n] | | define and redefine | |
extract_embedded | | define, redefine, process and clone | |
flatten_all_occurs | | define and redefine | |
force_aa_only | | define and redefine | |
global_index_suffix | | generate | |
global_table_suffix | | generate | |
history_tables | | | |
inhibit_8_bit_data | | clone and process | |
inhibit_console | -C | clone and process | Toggle |
inhibit_ctrl_chars | | clone and process | |
inhibit_drop_history | | clone and process | |
inhibit_init_values | | redefine | |
inhibit_required_opt | | define and redefine | |
linc_century_base | | clone and process | |
masking_parameter[n] | | generate | |
max_clone_count | -s | clone and process | |
max_discards | | clone and process | |
max_retry_secs | | process | Requires use_dbwait and works with max_wait_secs |
max_srv_idle_time | | clone and process | |
max_wait_secs | | process | Requires use_dbwait and works with max_retry_secs |
maximum_columns | | define and redefine | |
min_check_time | | clone and process | |
min_varchar | | define and redefine | |
minimize_col_updates | | define and redefine | |
miser_database | | define, redefine, process and clone | |
months | | clone and process | |
null_datetime_value | | clone and process | |
null_datetime2_value | | clone and process | |
null_digit_value | | clone and process | |
numeric_date_format | | clone and process | |
n_dmsii_buffers | | clone and process | |
n_update_threads | | clone and process | |
optimize_updates | | define and redefine | |
preserve_deletes | | clone and process | |
read_null_records | | define and redefine | |
reorg_batch_size | | redefine and reorganize | |
rollback_segment | | All | Oracle only |
sec_tab_column_mask | | define and redefine | Requires default_user_columns |
set_blanks_to_null | | clone and process | |
set_lincday0_to_null | | clone and process | |
show_perf_stats | | clone and process | |
show_statistics | -v | clone and process | Works with statistics_increment |
show_table_stats | | clone and process | |
shutdown | -o | | Override |
split_varfmt_dataset | | define and redefine | |
sql_exec_timeout | | clone and process | |
sql_heart_beat | | clone and process | |
statistics_increment | -v | clone and process | Works with show_statistics |
stop | | | |
stop_after_fixups | | clone and process | |
stop_after_gc_reorg | | clone and process | |
stop_after_given_afn | | clone and process | |
strip_ds_prefixes | | define and redefine | |
suppress_dup_warnings | | clone and process | |
suppress_new_columns | | redefine | |
suppress_new_datasets | | redefine | |
use_binary_aa | | define and redefine | |
use_bigint | | define and redefine | SQL Server only |
use_clob | | define and redefine | Oracle only |
use_clustered_index | | define and redefine | This parameter applies to SQL Server. See use_decimal_aa. |
use_column_prefixes | | define and redefine | The tab_name_prefix column of the DATASOURCES Client control table must contain an entry. |
use_date | | define and redefine | SQL Server only |
use_datetime2 | | define and redefine | SQL Server only |
use_dbwait | -w | process | Toggle. Works with max_wait_secs and max_retry_secs |
use_decimal_aa | | define | |
use_ext_translation | | clone and process | This parameter applies to Windows. |
use_internal_clone | | redefine and reorg | |
use_latest_si |
clone and process |
||
use_nullable_dates |
define and redefine |
This parameter applies only to MISER databases. | |
use_primary_key |
define |
||
use_stored_procs |
define , redefine , process and clone |
||
use_time |
define and redefine |
SQL Server only | |
use_varchar |
define and redefine |
||
user_script_dir |
-n |
define , redefine , process and clone |
Override |
user_column_suffix[n] |
generate |
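To make the parameter relationships in this table concrete, here is a hypothetical excerpt of a [params] section from an exported text configuration file, pairing use_dbwait with the max_wait_secs and max_retry_secs parameters it requires. The values are illustrative assumptions only, not documented defaults.

```
[params]
use_dbwait     = true
max_wait_secs  = 3600
max_retry_secs = 60
```

The table marks use_dbwait as a Toggle for the -w option, which suggests that specifying -w on a dbutility process command line reverses the value configured here.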