Databridge 7.1 SP1 Release Notes
July 2024
What's New in Databridge Version 7.1 Service Pack 1
- Resolved a number of issues in the 7.1 base release and in 7.1 Updates 1 and 2. See the Patch Notes for details.
- Added a new command named Service Sessions to the Client Managers > Actions menu. This command displays the sessions that are active for the Client Manager service in a page formatted as a table. The page has an icon for terminating Customize command sessions, which can be orphaned if the browser is closed without properly terminating the command (be sure to push the Done button until the command terminates). The service terminates a Customize command session by sending the Client a request to save its state and exit. Before this command existed, you had to either use Task Manager (or a kill command on Linux/UNIX) to terminate the run or wait for the Administrative Console to time out the session after an hour of inactivity.
- Added a new alert for the lag time exceeding a configurable threshold. The configuration parameter lag_time_threshold controls when such an alert is generated. The parameter's range is 1 to 60 minutes and its default value is 10; a value of 0 disables the alert. To prevent constant alerts, the alert is not cancelled until the lag time drops below a computed value that is well below the threshold.
- The 7.1 SP1 Administrative Console was enhanced to operate with both the Databridge 7.0 and 7.1 services. This is done through protocol negotiation, where the two sides agree on which protocol level will be used. Menu items that are not supported in the older Clients are grayed out. The console code also handles any differences in the RPC data, taking the appropriate actions based on the protocol level.
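A minimal configuration sketch for the new lag-time alert, assuming the client's usual parameter = value configuration-file syntax (the value shown is the default of 10 minutes):

```
lag_time_threshold = 10
```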
- When the redefine command resulted in tables being dropped from the relational database (for example, if you decided to flatten OCCURS clauses), such tables were not dropped by the client. This task was delegated to the user, who then had to run the drop scripts for each of these tables. To resolve this issue, the 7.1 SP1 redefine command automatically creates a command file (a shell script on UNIX) that can be run to remove all of these tables. The script resides in the data source's working directory and is named src_drop_obsolete_tables_ddd.cmd (on Linux and UNIX, the file extension is .sh), where src is the data source name in upper case and ddd is the update level of the DMSII database.
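As an illustrative sketch of the naming convention (the data source name BANKDB and update level 27 are hypothetical values), the generated script name on Linux/UNIX can be derived as follows:

```shell
#!/bin/sh
# Hypothetical example values; substitute your own data source and update level
SRC=BANKDB   # data source name in upper case
DDD=27       # DMSII database update level
# The redefine command writes this script to the data source's working directory
echo "${SRC}_drop_obsolete_tables_${DDD}.sh"
```

Running the generated script from the working directory (for example, `sh BANKDB_drop_obsolete_tables_27.sh`) then removes the obsolete tables.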
Note
Newly added menu items use a previously unused bit in the security mask (a 64-bit quantity) that is maintained for each user in the service's configuration file. Use the Manage Users command in the Administrative Console's Client Managers page to update the menu security bits for users who need access to these menu items; otherwise the items are grayed out for those users.
Patch Notes
This section describes the various patches in Hot Fixes, Updates and Service Packs for version 7.1. Only relevant patches are included in the lists below.
The patches are grouped by component, are listed in chronological order, and specify the patch numbers that implement them. The patch number is in the last part of the version string. Within each list, all completed releases are marked with a line that specifies the release name (such as 7.1 Update 2). The lack of such a line indicates that work is still in progress.
DBEngine
Issues resolved in version 7.1 Updates:
003 - Preserve reversal bit in UI array when the Engine parameter Convert reversals to updates is true.
004 - It was possible for an aborted transaction to affect the valid update of another transaction on a different stack. This was caused by not heeding the LCW reversal bit on a reread following the aborted transaction.
DBEnterprise
Issues resolved in version 7.1 Updates:
001 - The internal mapping of base and filtered data sources could result in a duplicated source name. The data source could not be customized when in this state.
002 - DBEnterprise would fail to obtain the host database update level if the key was expiring. This issue resulted in filtered data sources being unable to be customized.
003 - Duplicate family names are no longer treated as errors if the LUNs are different.
004 - The host connection now uses KeepAlive for MCP connections, which keeps the connection open during long periods without host interaction. Previously, large initial extracts could lose host connectivity, resulting in a failed clone.
005 - NUMBER (1) STORED OPTIONALLY NULL 0 items were filled in using the digit 3 instead of 0 (which ends up being NULL).
009 - It was possible for a compact data set change that is split across the end of an audit block section to result in an error if preceded by a number of compact data set changes in the same block.
DBServer
Issue resolved in Databridge 7.1 Updates:
006 - Increase protocol level to 36 to support the convert reversals to updates change in DBEngine to preserve the reversal bit.
Issues resolved in Databridge 7.1 SP1:
007 - When NOTIFY was used to start a host-initiated client run, it was possible that neither a hostname nor an IP address was set. This resulted in an attribute error when setting the ON part of the task NAME. The ON part is now set to UNRESOLVED should this occur. This task is a temporary NOTIFY WORKER.
DBClient
Issues resolved in version 7.1 Updates:
001 - The cloning of FileXtract data sources hung at the end of the data extraction phase.
002 - The sequence_no column of history tables was incremented twice after every update instead of once.
003 - The Kafka client mishandled JSON output when the configuration parameter json_output_option was set to 1 or 2.
004 - The audit timestamp being sent to the Administrative Console was wrong. This was reflected in the Dashboard and the Run > Statistics output.
005 - OCCURS table filter generation failed to update the DATASETS control table, which caused the filter generation to fail.
006 - Data extractions in multi-threaded Windows clients sometimes ended up with a load error. The situations that lead to this are large values for the parameter max_temp_storage and data sets with multiple tables.
The cause of the problem is that the temporary storage threshold is reached while the last tables for a data set are not yet fully loaded. The client ends up queuing an extra request to load the file, which results in the loader trying to reload the last file that was loaded; bcp then gets a file-not-found error because the file in question no longer exists.
007 - The Kafka client did not suppress MODIFY records that have no changes when the configuration parameter json_output_option was set to 1 or 2.
008 - The stored procedure prefix table was corrupted in a recent correction, which caused calls for stored procedures with the prefix m_ to pick up a space after the m, resulting in a SQL error. Stored procedures with the prefix z_ also caused SQL errors.
010 - Passwords that appear in the bulk loader script files are now enclosed in double quotes to avoid syntax errors when the password contains non-alphanumeric characters. This allows passwords to contain any non-alphanumeric characters except double quote and NUL.
011 - Added a test to allow MISER sites to set the configuration parameter use_stored_procs to false, as the client would otherwise have tried to generate a SQL statement to update the table.
012 - Added code to log the ODBC driver name and version for the PostgreSQL Client.
013 - The PostgreSQL client failed to detect duplicate record errors during index creation.
014 - When duplicate keys were found during the index creation and the clear duplicates script was run, the row_count column in DATATABLES was not updated to reflect the resulting record count.
015 - During data extraction for a COMPACT data set, Databridge Enterprise sent the client deleted records. These records ended up being discarded after generating error messages about their keys being null. The client now detects such records and silently ignores them when using Databridge Enterprise.
017 - Enhanced the history table code to handle reversals by deleting the original record for a reversal from the history table. This makes the client produce the same results regardless of whether CONVERT REVERSALS TO UPDATES is enabled. This change requires the matching Engine, as the older Engines did not send the reversal bit in the updates’ update information data.
018 - Added the IDENTITY column for the Oracle client and modified history tables to use it, instead of the update_time and sequence_no user columns, as the default columns for history tables. The redefine command will not change history tables, while the define command will. To preserve compatibility with history tables generated by older clients, avoid using defaults for user columns in a define command. The default name for the identity column is my_id, and its default data type is NUMBER(10).
019 - The bit SRC_NewHistory (0x800000) was added to the status_bits column of the DATASOURCES control table. It is set by the define command to allow the Oracle client to use the IDENTITY column by default in history tables (in place of the update_time and sequence_no columns).
020 - Implemented the configuration parameter new_history_tables, which enables new code that sets the update types for history records involved in a key change to MODIFY_BI (6) and MODIFY_AI (7) instead of DELETE (2) and CREATE (1). These records are always sequential in the history table and allow the user to see that they are the same record in DMSII.
021 - Added the ability to specify a file name in the create table and create index DDL suffixes, excluding global suffixes. A suffix of #add filename causes the content of the named file in the user scripts directory (usually scripts) to be used as the suffix. This allows arbitrarily long suffixes, provided that the lines in the file are less than 256 characters in length.
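As a sketch of the mechanism (the file name and DDL text below are invented for illustration; the real suffix depends on your database), the suffix file simply holds the text to append, and the configured DDL suffix names it with #add:

```shell
#!/bin/sh
# Create a hypothetical suffix file in the user scripts directory (usually "scripts")
printf 'TABLESPACE users STORAGE (INITIAL 10M)\n' > my_table_suffix.txt
# The create table DDL suffix would then be specified as: #add my_table_suffix.txt
cat my_table_suffix.txt
```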
022 - Added code to the PostgreSQL client to get and log the PostgreSQL version.
023 - The Kafka client's length calculations were incorrect, causing the JSON data to have an extra NUL character appended to it and the keys to be missing the closing double-quote character.
024 - The client no longer displays error messages about records with null key items during data extraction with Databridge Enterprise. The count of such records (when not zero) is reported in the end of extraction statistics.
025 - Running back-to-back redefine commands did not work when there was an actual DMSII reorganization. If a data set ended up with a mode of 31, the second redefine command got an error claiming that a reorganize command should be run next.
This fix allows a new data set to be added to the client when suppress_new_datasets is true. The first redefine command will see the new data set and set its active column to 0. You can then use the console, navigate to Settings > Data Sets, and set the active column to 1 in the data set's properties. The second redefine command will pick up where the first one left off and complete the task.
027 - Reworked patch 24 not to assume that the keys are always the first items in the tables. Renumbering the keys resulted in all the records being ignored.
028 - The Postgres Client’s generate command failed with an IO error when there was a SERIAL column present.
029 - The Postgres Client got a SQL error when updating the DATASETS table after a client failure.
030 - The export command sometimes got an access violation after the text configuration file was written.
031 - Enhanced the table statistics to provide statistics by update type in addition to the cumulative update statistics.
032 - The DBClntCfgServer "verifysource" command, when doing an "Add > Existing" data source from the Administrative Console, caused the client to crash when DBClntCfgServer tracing was enabled in the service.
033 - Cloning a database where all the tables were empty using the BCP API caused the client to hang.
034 - Eliminated an error message in the redefine command when the old copy of the script file <source>_drop_obsolete_tables_<ul>.<ext> does not exist. This file provides an easy way of dropping obsolete tables that result from running the redefine command.
035 - A timing hole in the clients sometimes resulted in the index thread being created twice. The two index threads then interfered with each other, leading to numerous database errors. This happened only when there were many empty data sets at the start of the run.
036 - When using the BCP API the SQL Server client did not display the extraction statistics for empty tables.
037 - Enhanced the define command for history tables to make the my_id column a key and to make the update_time column a key only when the my_id column is not present.
038 - SQL Server and Postgres history tables did not work when the configuration parameter dflt_history_columns was set to include the update_type, update_time, and sequence_no user columns. The default user columns for history tables in a define command are update_type and my_id (an identity column for SQL Server and Oracle, and serial or bigserial for Postgres).
039 - The DBClntCfgServer verifysource command returned an exit code of 2056 when it detected that the control tables need upgrading using the dbfixup program. This caused the Administrative Console to disable the data source.
The situation was rectified by enhancing the command to verify the existence of the data source and return an exit code of 2109, which the service changed to 0 after scheduling a launch of the dbfixup utility for the data source. This allowed an Add > Existing operation to work correctly when the data source's control tables needed dbfixup to be run.
040 - The Administrative Console was being passed the wrong value for the ABSN when the first 4 bits of its leading byte were non-zero.
041 - The Administrative Console was being passed the wrong value for the audit_time6 column of the DATASETS control table.
042 - A new status bit was added to the DATASOURCES control table to allow the Administrative Console to gray out the "Data Set State Info" menu in the data source's Settings menu when the data source is not in change tracking mode.
043 - The client was not setting the Engine parameter for NoReversals to the correct value.
044 - The unload command was not generating files compatible with the older clients' unload files.
045 - When a preserved deleted record was not in the relational database, the client stopped with an exit code of 2097. The client now recovers from this situation by inserting the record and marking it as deleted.
046 - Prevented the verify_bulk_load code from causing a SQL error when the -z option is enabled, by bypassing the count verification, as nothing has been loaded into the table.
047 - Fixed the client's handling of history tables that use IDENTITY or SERIAL columns as keys so as not to include the my_id column in the key value pairs. These are used when displaying keys and when constructing WHERE clauses in SELECT statements used to handle reversals.
048 - The binding of host variables for update statements used to preserve deleted records did not work correctly when user columns were not at the end of the record. This caused the update not to find the target row in the table, which led to the client stopping with an internal error.
049 - Host Variable tracing resulted in a null pointer exception when a history table using an IDENTITY or SERIAL column was involved.
Issues resolved in version 7.1 SP1:
050 - Implemented the configuration file parameter "purge_dropped_tabs" for the Oracle client’s generate command that adds the PURGE option to the “drop table” SQL statement in the drop_table stored procedure.
051 - Corrected the table statistics value for the update SQL times.
052 - Included the statistics for bulk delete operations (also called Delete_all) on OCCURS tables in the table statistics. Reorganized the report to have a title line.
053 - Zeroed the lag time on receipt of a DBM_AUDIT_UNAVAILABLE status, which indicates that we are caught up with the audit trail.
054 - Enhanced the -m option for traces to make it more readable by adding the milliseconds to the timestamp as .mmm.
055 - Implemented the parameter bcp_tempfile_dir for the SQL Server client to allow the bcp temporary files to be placed in a directory other than the working directory. On a clustered system the working directory is on a network drive, which can slow the cloning because file I/O on a network drive is slower than on a local SSD.
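For example, assuming the client's usual parameter = value configuration syntax, a clustered SQL Server site might point the bcp temporary files at a local drive (the path shown is hypothetical):

```
bcp_tempfile_dir = "D:\dbtemp"
```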
056 - Resolved an error reported by the client when an insert into the table gl_history_tmp2 was turned into an m_gl_history_tmp2 stored procedure call at MISER sites. The error was caused by a failure to take update type 5 into account when updating the row_count column in the DATATABLES entry for gl_history.
057 - Modified the client to let the service handle the cancelling of connection-related alerts. The service uses the -J alert_code option to indicate that the client should send back a recovery-successful unsolicited message to the service. Upon receipt of this message, the service resets the retry count and cancels the corresponding alert.
058 - Fixed the Administrative Console's Customize command, which was not working correctly after patch 25. The Define/Redefine command was not affected by this bug.
060 - Updated the Oracle client to not use deprecated OCI functions.
061 - Implemented a Lag Time High alert using a configured value for the threshold, which defaults to 10 minutes.
062 - Added sequence numbers to log messages sent to the console to allow the console to detect duplicate messages when writing the log file, which is used to refresh the log windows. The failure to detect duplicate messages resulted in repeated lines in the data sources' log windows.
063 - Eliminated the configuration parameter use_odbc_reg and made the SQL Server client always retrieve the server name from the Windows Registry, because the ODBC SQLInfo call strips the domain suffix, which causes bcp to fail in some cases.
DBFixup
Issue resolved in 7.1 Updates:
001 - The dbfixup program was getting errors when the configuration file contained encrypted passwords.
Client Manager Service
Issues resolved in 7.1 Updates:
001 - The service failed to suppress the sending of a data-source-added message to the console that initiated the add. As a result, the console did not update the status of the added data source unless the user forced a refresh by disconnecting and reconnecting or by viewing the data source's read-only information.
002 - Implemented an RPC that allows the Administrative Console to display the active TCP/IP sessions in the service/daemon.
003 - Added log statements for failed signons to make it easier to determine the cause of the signon failures.
004 - The service was enhanced to handle the exit of 2109, which is mentioned in client patch 39.
005 - Made a few additional changes to auto_dbfixup runs during upgrades.
006 - Added code for displaying the service's sessions in the console.
007 - Modified the "Settings > Data Set State Info" menu item to be grayed out when the client is not in tracking mode.
008 - If a run started from the batch console ended, the service could sometimes get a null pointer exception if the alternate end_of_run script file did not exist.
009 - If the service encountered a blackout period when starting a scheduled process command, the run did not get started when the blackout period ended.
010 - Reinstated patch 008, which was not included in 7.1 Update 2.
011 - The service was not tracing IPC traffic because the session type was not initialized for IPC sessions.
012 - Made the service trace the data of a bogus message received at session initialization when it detects that the length is incorrect.
013 - When a start_of_run script file was present in the scripts directory, starting a run failed. Before the run was launched, the service checked the timer that is used to detect a launched run failing to connect back to the service. It ended up closing the IPC connection, which caused the launched run to terminate.
014 - Added debug code to the service to diagnose scheduling problems.
015 - Increased the constant MAXCONSOLES to 64. This number also controls the maximum IPC connections that the service supports.
016 - The scheduler initialization code failed when a data source had its run_at_startup parameter set to true. As a result, the expected process command run was not launched and the blackout_period times were corrupted.
017 - The service was not setting the command line switch -J code to indicate that the client should send back a recovery-successful unsolicited message when the recovery of a failed connection worked. The service uses this message to reset the retry count and to cancel the alert.
018 - Corrected an error in patch 17.
Issues resolved in version 7.1 SP1:
019 - Modified the service to preserve the sequence numbers of stdout and stderr log messages to allow the Administrative Console to detect duplicate messages when creating the client log file.
020 - Implemented a new RPC to allow the console administrator to gracefully terminate an orphaned Customize command and close the associated console session.
021 - Fixed the service to automatically remove deprecated menu security bits from the userids when reading the configuration file. Any new bits that were added after the base release need to be enabled by using the Manage Userids menu for the Client Manager.
022 - Implemented the RC_GetSQLServerODBC RPC, which allows the Add > New dialog to use a list box instead of requiring the user to type the SQL Server ODBC data source name.
023 - Raised the console protocol level to 5 to allow the console to distinguish 7.1 software from 7.1 SP1 software that has some new RPCs.
024 - Made the daemon return an empty ODBC data source list for the PostgreSQL Client. This RPC is not yet implemented on Linux.
Administrative Console
Issues resolved in 7.1 Updates:
001 - The Define/Redefine menu item in the data source Actions menu was not getting disabled when the user's privilege to run this command was revoked by the administrator.
002 - The Configure command was not displaying items in the SQL suffixes page and it was not letting you update them, as a result of a bookkeeping error.
003 - Added the menu item Service Sessions to the Client Managers page's Actions menu to display the list of active TCP/IP sessions.
004 - Fixed the console's handling of automated dbfixup runs by the service so it does not show a status of Fixup pending after the dbfixup run completes successfully.
005 - Fixed the service and Administrative Console to handle the automatic running of dbfixup during an upgrade. The console was showing a status of "Locked (Fixup pending)" that did not get updated on its own.
Issues resolved in version 7.1 Service Pack 1:
006 - Eliminated duplicate console log messages from the log file, which were caused by multiple console sessions receiving the same messages for a given data source. The log file is used when you return to a data source tab that was not in focus.
007 - Added the ability to gracefully terminate the DBClntCfgServer run when a Customize session is orphaned. In the past, the only way to do this was to kill the run.
008 - Replaced the edit box for the ODBC data source name in the Add > New dialog with a combo box that is populated with the configured data sources. If you use an older version of the service, the list box will be empty and you will need to type the name as before.
009 - The first time you sign in to the console, you are prompted to change the password for dbridge. We recommend that you do so. Note: If you dismiss the dialog, you will not be warned again.
010 - Added an alert for lag times that exceed the threshold configured in the Configure command's "Processing - DMSII Data Error Handling" page. If the switch is disabled, the client never generates this alert for the data source. This one-time alert is cancelled when the lag time drops below a cutoff value well under the alert threshold, which prevents the client from repeatedly sending alerts.
Version Information
The Databridge components and utilities are listed with their version numbers in the base release and current release of version 7.1. All host programs have been compiled with MCP Level 57.1 software.
Databridge Host | Base release | Current release |
---|---|---|
DBEngine | 7.1.0.002 | 7.1.3.004 |
DBServer | 7.1.0.005 | 7.1.3.006 |
DBSupport | 7.1.0.001 | |
DBGenFormat | 7.1.0.001 | |
DBSpan | 7.1.0.001 | |
DBSnapshot | 7.1.0.001 | |
DBTwin | 7.1.0.000 | |
DMSIIClient | 7.1.0.000 | |
DMSIISupport | 7.1.0.001 | |
DBInfo | 7.1.0.002 | |
DBLister | 7.1.0.001 | |
DBChangeUser | 7.1.0.000 | |
DBAuditTimer | 7.1.0.001 | |
DBAuditMirror | 7.1.0.000 | |
DBCobolSupport | 7.1.0.000 | |
DBLicenseManager | 7.1.0.000 | |
DBLicenseSupport | 7.1.0.001 |
FileXtract | Base release |
---|---|
Initialize | 7.1.0.000 |
PatchDASDL | 7.1.0.000 |
COBOLtoDASDL | 7.1.0.000 |
UserdatatoDASDL | 7.1.0.000 |
UserData Reader | 7.1.0.000 |
SUMLOG Reader | 7.1.0.000 |
COMS Reader | 7.1.0.000 |
Text Reader | 7.1.0.000 |
BICSS Reader | 7.1.0.000 |
TTrail Reader | 7.1.0.000 |
LINCLog Reader | 7.1.0.000 |
BankFile Reader | 7.1.0.000 |
DiskFile Reader | 7.1.0.000 |
PrintFile Reader | 7.1.0.000 |
Enterprise Server | Base release | Current release |
---|---|---|
DBEnterprise | 7.1.0.000 | 7.1.2.009 |
DBDirector | 7.1.0.000 | |
EnumerateDisks | 7.1.0.000 | |
LINCLog | 7.1.0.000 |
Databridge Client | Base release | Current release |
---|---|---|
bconsole | 7.1.0.000 | |
dbutility | 7.1.0.000 | 7.1.3.063 |
DBClient | 7.1.0.000 | 7.1.3.063 |
DBClntCfgServer | 7.1.0.000 | 7.1.3.063 |
dbscriptfixup | 7.1.0.000 | 7.1.3.063 |
DBClntControl | 7.1.0.000 | 7.1.3.024 |
dbctrlconfigure | 7.1.0.000 | 7.1.3.024 |
dbfixup | 7.1.0.000 | 7.1.2.001 |
migrate | 7.1.0.000 | |
dbpwenc | 7.1.0.000 | |
dbrebuild | 7.1.0.000 | 7.1.3.063 |
Databridge Administrative Console | Base release | Current release |
---|---|---|
Administrative Console | 7.1.0 | 7.1.3 |
System Requirements
Databridge 7.1 SP1 includes support for the following hardware and software requirements.
System Support Updates. Databridge will remove support for operating systems and target databases when their respective software companies end mainstream and extended support.
Supported Internet Browsers: Microsoft Edge, Mozilla Firefox, Google Chrome
Databridge 7.1 SP1 | Supported Systems |
---|---|
Databridge Host | Unisys mainframe system with an MCP level SSR 59.1 through 63.0 DMSII or DMSII XL software (including the DMALGOL compiler) DMSII database DESCRIPTION, CONTROL, DMSUPPORT library, and audit files |
Databridge Enterprise Server | ClearPath PC with Logical disks or MCP disks (VSS disks in MCP format) -or- Windows PC that meets the minimum requirements of its operating system: - Windows Server 2022 - Windows Server 2019 - Windows Server 2016 (CORE mode must be disabled for installation and configuration) - Windows Server 2012 R2 (CORE mode must be disabled for installation and configuration) - Windows Server 2012 Direct Disk replication (recommended) requires read-only access to MCP disks on a storage area network (SAN) TCP/IP transport. NOTE: To view and search the product Help, JavaScript must be enabled in the browser settings. |
Databridge Administrative Console | One of following platforms can be used for the Administrative Console server: - Windows Server 2012 or later - Windows 10 x64 - Intel X-64 with Red Hat Enterprise Linux Release 7 or later - Intel X-64 with SUSE Linux Enterprise Server 11 SP1 or later - Intel X-64 with UBUNTU Linux 14.4 or later - Sun Microsystems SPARCstation running Solaris 11 or later |
Databridge Client | We recommend running the Administrative Console on a different machine from the Client to avoid negatively impacting the client's performance. To access the Administrative Console, use a supported browser on the client machine. NOTE: - Disk space requirements for replicated DMSII data are not included here. For best results, use a RAID disk array and store the client files on a separate disk from the database storage. - Memory requirements do not include the database requirements when running the Client in the server that houses the relational database (consult your database documentation for these). The numbers are for a stand-alone client machine that connects to a remote database server. |
Client - Windows | Unisys ES7000 -or- Pentium PC processor 3 GHz or higher (multiple CPU configuration recommended) 2 GB of RAM (4 GB recommended) 100 GB of disk space in addition to disk space for the relational database built from DMSII data) TCP/IP transport One of the following operating systems: - Windows Server 2022 - Windows Server 2019 - Windows Server 2016 (CORE mode must be disabled for installation) - Windows Server 2012 R2 (CORE mode must be disabled for installation) - Windows Server 2012 - Windows 10 One of the following databases: - Microsoft SQL Server 2022 - Microsoft SQL Server 2019 - Microsoft SQL Server 2017 - Microsoft SQL Server 2016 - Microsoft SQL Server 2014 - Microsoft SQL Server 2012 - Oracle 12c, 18c, 19c, 21c |
Client - UNIX and Linux | One of the following systems: - Sun Microsystems SPARCstation running Solaris 11 or later - IBM pSeries running AIX 7.1 or later - Intel X-64 with Red Hat Enterprise Linux Release 8 or later - Intel X-64 with SUSE Linux Enterprise Server 11 SP1 or later - Intel X-64 with UBUNTU Linux 18.04 or later 2 GB of RAM (4 GB recommended) 100 GB of free disk space for installation (in addition to disk space for the relational database built from DMSII data) TCP/IP transport One of the following databases: Oracle 12c, 18c, 19c, 21c |
Obtaining Databridge 7.1 SP1
Maintained customers are eligible to download Databridge from the Software Downloads site.
Installing Databridge 7.1 SP1
- When installing Databridge 7.1 SP1 for the first time, download and install Databridge Host, Databridge Enterprise Server, and All Clients. ZIP format. (7.1).
- See the Databridge Installation Guide for detailed installation and upgrade instructions.
Installation Instructions for a Service Pack, Update, or Hotfix
Before you install Databridge 7.1 Service Pack 1, quit all Databridge applications including the Administrative Console, and then terminate the service (daemon on UNIX). After the installation is complete, restart the service/daemon manually.
Important
To avoid potential problems, we strongly recommend that you upgrade the Host and Enterprise Server software simultaneously.
Databridge Host
- On the MCP Server, upload the file DB71xxxxx using binary or image file transfer (where xxxxx is a string of characters that further identifies the individual Hot Fix, Update, or Service Pack):
ftp my_aseries_host
<login>
bin
put DB71xxxxx DB71xxxxx
- Log on to the MCP using the Databridge USERCODE and go to CANDE.
- Extract the install WFL from the above-mentioned file using the following command:
WFL UNWRAP *WFL/DATABRIDGE/INSTALL AS WFL/DATABRIDGE/INSTALL OUTOF DB71xxxxx
- Run the Databridge install file to apply the patch:
START WFL/DATABRIDGE/INSTALL
If you want to install the Databridge software to a different pack family than the primary pack of your FAMILY substitution statement (FAMILY DISK = primarypack OTHERWISE secondarypack), use the following command instead.
START WFL/DATABRIDGE/INSTALL ("DATABRIDGE", "otherpack")
Note
This procedure is identical to how you would install the base release. We still provide the container file DISKINSTALL for backward compatibility; however, you can also get it from the DB71xxxxx container file. Alternatively, you can upload DISKINSTALL and get the INSTALL WFL by unwrapping that file.
This will also work if you use the old way of installing a patch:
WFL UNWRAP *= AS = OUTOF DB71xxxxx TO DISK (RESTRICTED=FALSE)
The advantage of using the install WFL is that you will not get into trouble if you forget to add (RESTRICTED = FALSE). The new patches replace all the modules regardless of whether they have changed.
Databridge Client and Enterprise Server
On Windows, open the Windows folder of the Hot Fix, Update, or Service Pack and double-click the file "databridge-7.1.bbbb-xxxxx-W64.exe", where bbbb is the 4-digit build number and xxxxx is "hotfix", "update", or "servicepack". All installed components, such as the Client and Enterprise Server, will be updated.
On UNIX, upload the appropriate tar file for the Client from the hot fix, update or service pack to the directories where these components are installed. If you use Windows to process the extract of the tar file from the zip file, you must transfer the tar file to UNIX using binary FTP.
Then, use the following command:
tar -xvf <filename>
where <filename> is the name of the tar file you uploaded.
Note
To avoid accidentally deleting the Databridge applications, we recommend that you always keep the install directory and the working directory separate.
Administrative Console
Notes
- The Administrative Console must be installed after you install the Databridge Enterprise Server and the client. See Installing the Databridge Administrative Console in the Databridge Installation Guide for detailed steps to install the Administrative Console on Windows or UNIX machines.
- We recommend that you install the Administrative Console on a separate server from the client machine(s) because:
- The Administrative Console can use significant resources that may impact the client's performance.
- When the Administrative Console is installed with a Client machine, the Administrative Console cannot monitor activity when the client machine is down. By having the Administrative Console on a different machine, you can monitor the Client Manager(s) and receive alerts for address warnings and connectivity errors.
- The existing JRE subdirectory 'java' must be removed or renamed before installing the new software.
On Windows, open the Console\Windows folder of the Hot Fix, Update, or Service Pack and double-click "setup.exe". All installed components, including the private JRE that we provide, will be updated. Unlike the Databridge Client and Enterprise Server case, this process uses an MSI, which first removes the old software and then installs the new copy. Files that were not installed in the original install will be preserved. This means that any changes made to files such as "container.properties" in the conf folder will be preserved.
On Linux and UNIX, copy the updated databridge-container and JRE files from the install medium and follow the directions above for 'Installing the Administrative Console on Linux/UNIX.'
Contacting Customer Support
For specific product issues, contact Customer Support.
Legal Notice
© 1995-2024 Rocket Software, Inc. or its affiliates. All Rights Reserved.