Client Errors
Client errors appear onscreen and in the log file, unless otherwise noted.
These messages are prefixed by the timestamp hh:mm:ss (where hh is hours, mm is minutes, and ss is seconds). Frequently, errors are preceded by a relational database message. When that occurs, refer to your relational database documentation.
Note
Often, a primary problem will cause several secondary problems, resulting in additional errors. Try to find the earliest error and solve that problem first before proceeding. In many cases, solving this one problem resolves the other problems without any additional work.
Config file error: filename:line#: error_string
This error message, which is limited to the Kafka client, indicates that the specified line of the secondary Kafka configuration file filename is in error. The error_string specifies the nature of the error. The secondary Kafka configuration file is a text file whose name is specified by the kafka_config_file parameter in the client configuration file.
Config file error: filename:line#: expected name=value
This error message, which is limited to the Kafka client, indicates that the specified line of the secondary Kafka configuration file filename is not in the form "name = value". The secondary Kafka configuration file is a text file whose name is specified by the kafka_config_file parameter in the client configuration file.
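The expected format can be sketched with a simple check; the function below is illustrative only, as the Kafka client's actual parser is internal:

```python
def parse_config_line(line: str):
    """Split a "name = value" line from the secondary Kafka configuration
    file; raise ValueError for a malformed line, mirroring the
    "expected name=value" error. Illustrative sketch only."""
    name, sep, value = line.partition('=')
    if not sep or not name.strip() or not value.strip():
        raise ValueError('expected name=value: ' + line.rstrip())
    return name.strip(), value.strip()
```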
ERROR: A Kafka broker is required and is not configured!
This message indicates that you did not supply a value for the parameter kafka_broker
in the client configuration file or the secondary Kafka configuration file. The secondary Kafka configuration file is a text file whose name is specified by the kafka_config_file
parameter in the client configuration file.
ERROR: AA value 0xhhhhhhhhhhhh of LINK_AI image does not match that of previous CREATE image 0xhhhhhhhhhhhh for DataSet name[/rectype]
This message, which can occur during the data extraction phase of a process
or clone
command, indicates that the AA value in the after image of a DMSII link does not match the AA value of the previous CREATE image for the specified data set record. The client combines these two images to form a complete CREATE image before processing the data to generate the record that is loaded into the database. The two records are expected to have the same AA values. This situation, which should never occur under normal circumstances, indicates a Databridge Engine error.
ERROR: All Engine COMMIT parameters are zero; at least one of them needs to have a positive value
This message, which can occur at the start of a process
or clone
command, indicates the resulting commit parameters are in error, as they do not specify any situation under which the Engine should try to do a commit. Since this could result in the system running out of log space or the update process running extremely slowly, the client will exit with an exit code of 2060 and request that the situation be rectified by updating the configuration file.
ERROR: Ambiguous command cmd, matches both cmd1 and cmd2 commands
In response to command line console input, this message indicates that the text of the command cmd, which the operator typed, matches more than one command. Make sure that you type enough characters to make the command unique.
ERROR: Archive file contains a malformed record: record_text
This message can occur during a reload
command if the input scanner does not find the record to be in the expected format. If you edited the unload file created by the unload command, you are likely to get this message or one of the other reload command error messages. The second line of the message is the record that the program is having a problem with.
ERROR: Archive file "name" contains no data or cannot be read
This message can occur during a reload
command, if an unexpected end-of-file situation occurs. A possible reason for this is that the unload command that created the file did not complete successfully.
ERROR: Archive file does not start with a version record of the form "V,version"
This message can occur during a reload
command, if the file specified on the command line is not a control table unload
file. The first record of this file is always "V,version" where version is the version of the client control tables.
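Assuming the format just described, the version-record check amounts to something like the following sketch:

```python
def check_version_record(first_record: str) -> str:
    """Return the control table version from a "V,version" record;
    raise ValueError if the record is not in that form (sketch only)."""
    tag, sep, version = first_record.strip().partition(',')
    if tag != 'V' or not sep or not version:
        raise ValueError('archive file does not start with "V,version"')
    return version
```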
ERROR: Attempt to ALTER SESSION to set CLIENT_ENCODING failed
This message, which only applies to the PostgreSQL client, can occur when the client starts up. It indicates that the client's attempt to execute an ALTER SESSION SQL command to set CLIENT_ENCODING to "iso_8859_1", the default encoding for the client, failed. This message might indicate that the userid you are using does not have enough privileges.
ERROR: Attempt to ALTER SESSION to set NLS_LANGUAGE failed
This message, which only applies to the Oracle client, can occur when the client starts up. It indicates that the client's attempt to execute an ALTER SESSION SQL command to set NLS_LANGUAGE to AMERICAN, AMERICA failed. This setting is necessary because the SQL data that the client creates is formatted to use the period as the decimal character. Since this cannot be done for the SQL*Loader runs, the SQL*Loader data must be formatted using the proper decimal character. The client handles this automatically. This message might indicate that the userid you are using does not have enough privileges.
ERROR: Attempt to ALTER SESSION to set NLS_LENGTH_SEMANTICS failed
This message, which only applies to the Oracle client, can occur when the client starts up and detects an AL32UTF8 database. When this occurs, the client executes an ALTER SESSION SQL command to set NLS_LENGTH_SEMANTICS to CHAR, which prevents data truncation errors from occurring when 8-bit characters are translated to multi-byte characters. The client handles this automatically. This message might indicate that the userid you are using does not have enough privileges.
ERROR: Attempt to clear DAOPT_Nulls_Allowed option for key items failed
The client does not normally allow keys to have the NULL
attribute. This would invalidate the SQL that is used to perform update and delete operations, where the WHERE
clauses test the key items for equality. If the items are NULL
, you cannot test them for equality. The client attempts to reset the bit DAOPT_Nulls_Allowed (1)
that indicates the item allows nulls. This message is displayed in the unlikely situation where the update statement fails.
ERROR: Attempt to connect to a NULL data_source; add datasource specification to configuration file
This message indicates that there is no datasource
parameter specified in the [signon] section of a client that uses ODBC or CLI. You must specify a datasource
parameter so that the client can connect to the database using ODBC or CLI.
ERROR: Attempt to delete record from table 'name' after a duplicate deleted record was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message can occur during a process
or clone
command and only applies to Miser databases. It indicates that the client got a duplicate record error while trying to mark a record as deleted, and that the subsequent attempt to delete the record resulted in a SQL error. This message may appear after other related error messages that should be addressed first.
Setting the sql_type
of the deleted_record
column in DATAITEMS to 18 (BIGINT) makes the client combine the deleted_record
value with the delete_seqno
which eliminates the duplicate record problem caused by the 1 second granularity of the epoch time used to set the value of the deleted_record
column. In the case of Oracle, which does not have a data type of BIGINT, NUMBER(15) is used instead, as the combined value is a 48-bit quantity.
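One plausible sketch of how such a combined value might be formed, assuming a 32-bit epoch time in the high bits and a 16-bit sequence number in the low bits (the client's actual bit layout is not documented here):

```python
def combine_deleted_record(epoch_seconds: int, seqno: int) -> int:
    """Pack epoch time and a sequence number into one 48-bit value so
    that two deletes in the same second remain distinct. Hypothetical
    layout; the real client's encoding may differ."""
    if not 0 <= seqno < (1 << 16):
        raise ValueError('sequence number must fit in 16 bits')
    return (epoch_seconds << 16) | seqno
```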
ERROR: Attempt to delete record from table 'name' during a modify was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message can occur when processing a MODIFY record during change tracking. It indicates that, while handling this request as a delete/insert (typically done when the values of keys change), the client encountered an error during the delete operation. See the database API error that precedes this message to determine the reason why the delete statement failed.
ERROR: Attempt to delete record from table 'name' during a two-step modify was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or clone
command, indicates that the client was unable to delete the record from the specified table during a two-step modify, which is invoked when the value of the depends item for an item with an OCCURS DEPENDING ON clause changes. If the value of this item decreases, the client updates the rows that remain in the OCCURS table and deletes the rows that are no longer present. This error appears if the delete operation fails and may appear after other related error messages that need to be fixed first.
ERROR: Attempt to delete record from table 'name' was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or clone
command, indicates that the client was unable to delete the record from the specified table. If deleted records are being preserved the delete operation is actually an update. This message may appear after other related error messages that should be addressed first.
This message can also occur during an insert that results in a duplicate record and the attempt to recover by doing an update results in no rows being updated. The client attempts to recover from this situation by doing a delete/insert. You will get this message if the delete fails.
ERROR: Attempt to delete records from table 'name' during a modify was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or clone
command, indicates that the client was unable to update a record because it did not exist in the database. When a failed update involves an OCCURS table, the insert statement fails. The client attempts to recover from the situation by deleting all of the records and reinserting them in the table. This message, which rarely occurs, indicates that the delete failed. This message will be preceded by database error messages that might help determine what happened.
ERROR: Attempt to delete records from table 'name' was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or clone
command, indicates that the client was unable to insert a record into a table and is preceded by several other error messages, warnings, and database error messages that help you determine why this happened. The chain of events leading to this message is:
- The client is unable to insert a record into a table because doing so would result in a duplicate record.
- It attempts to do an update instead but finds no matching rows.
- When the item has an OCCURS clause, the client attempts to recover by doing a delete/insert. It tries to delete all occurrences of the item and reinsert them.
- If the delete fails, this message displays.
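The chain of events above can be sketched as follows, using SQLite and hypothetical table and column names purely for illustration:

```python
import sqlite3

# Illustration of the recovery chain: insert, fall back to update on a
# duplicate, and fall back to delete/insert when the update matches no rows.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (k INTEGER, idx INTEGER, val TEXT, PRIMARY KEY (k, idx))')

def apply_update(k, idx, val):
    try:
        con.execute('INSERT INTO t VALUES (?, ?, ?)', (k, idx, val))
        return 'insert'
    except sqlite3.IntegrityError:  # duplicate record
        cur = con.execute('UPDATE t SET val = ? WHERE k = ? AND idx = ?', (val, k, idx))
        if cur.rowcount > 0:
            return 'update'
        # No matching rows: delete all occurrences of the key and reinsert.
        # The error message above is reported if this delete fails.
        con.execute('DELETE FROM t WHERE k = ?', (k,))
        con.execute('INSERT INTO t VALUES (?, ?, ?)', (k, idx, val))
        return 'delete/insert'
```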
ERROR: Attempt to drop history table 'name' while "inhibit_drop_history" option is enabled
This message, which can occur during a process
or clone
command, indicates that the client attempted to delete a history table while the inhibit_drop_history
parameter was set to True. This parameter is designed to safeguard against accidentally dropping history tables.
ERROR: Attempt to drop index 'name' on table 'name' failed
This message can occur during a process
or clone
command when a table that has data that needs to be preserved is recloned. When the records need to be preserved in a nonstandard manner, the program drops the index of the table and runs a cleanup script to delete unwanted records instead of dropping the table and recreating it. The most common source of this error is that the table in question does not have an index, possibly because the index creation during the original clone failed. The program traps this error and continues execution after printing a WARNING. Look at the database error messages that precede this error for clues about why the drop index operation failed.
ERROR: Attempt to establish IPC connection with DBClntControl failed
This message, which applies to the DBClient and DBClntCfgServer programs, indicates that the IPC connection to the service/daemon could not be established. This connection is used to route output messages to the Administrative Console via the service/daemon and to allow the console to issue RPCs to the clients and get the results passed back.
This is an internal error that is an indication that something is seriously wrong with either the DBClntControl program or the system. First try to stop and restart the service. If the error persists, reboot the system to clear up the problem.
ERROR: Attempt to insert missing deleted record into table 'name' was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx)
Unlike earlier clients, the 7.1 clients handle the situation where the attempt to preserve a deleted record by doing an update fails to find the record in the table. The client recovers from this situation by inserting the record into the table after marking it as deleted. It does so by setting the update_type column to DELETE, if using the (expanded) update_type to preserve deleted records, or by storing the current time (and sequence number) into the deleted_record column.
ERROR: Attempt to insert record into table 'name' after delete was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or clone
command, indicates that the client unsuccessfully tried to recover from a failed insert by doing a modify. The client then tries to delete the record or all the occurrences of the given key and tries to reinsert them. This message is an indication that this last insert failed. At this point the client gives up and stops. See the preceding database API error messages for details on why the insert failed.
ERROR: Attempt to insert record into table 'name' after failed update was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message, which can occur during a process
or a clone
command, indicates that the program was unsuccessful in performing an update because the target record does not exist. The program then tried to do an insert, which also failed because a duplicate was found. A possible cause of this error is that one of the keys is NULL. If this error occurs, contact Customer Support.
ERROR: Attempt to insert record into table 'name' during a modify was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message can occur when processing a MODIFY record and the update statement fails to find the record (that is, the row count is 0). The client attempted to recover from this by doing an insert, which failed. Refer to the database API error messages for details on why the insert failed.
ERROR: Attempt to insert record in table 'name' during a two-step modify was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, …
This message can occur during a process
or clone
command and indicates that the client could not insert the record into the specified table during the two-step modify. A two-step modify is used when the value of the depends item changes for an item with an OCCURS DEPENDING ON clause. If the value of this item increases, the client updates the rows that were present in the OCCURS table and inserts the remaining rows. This error appears if the insert operation fails. This message may appear after other related error messages that should be addressed first.
ERROR: Attempt to insert record in table 'name' was unsuccessful
This message can occur when a data set is cloned without using the bulk loader. It indicates that an error occurred when the client tried to do a COMMIT after max_clone_create count records were inserted into the table. See the preceding database API error messages for details on why the commit failed.
ERROR: Attempt to insert record into table 'name' was unsuccessful - Keys: colname = value, ...
This message can occur when a data set is cloned without using the bulk loader. It indicates that an error occurred when the client tried to insert the record into the table. Refer to the database API error messages for details on why the insert failed.
ERROR: Attempt to insert record into table 'name' was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, ...
This message, which can occur during a process
or clone
command, indicates that the client was unable to insert the given record into the table. The client always attempts to recover from a failed insert that is caused by the record already being in the database. If we get a different error, then this message is displayed and the client stops. Refer to the database API error messages for details on why the insert failed.
ERROR: Attempt to load record into table 'name' was unsuccessful - Keys: colname = value, ...
This message, which is limited to the SQL Server client, indicates that an error occurred while setting up the host variables during data extraction or that the BCP API’s bcp_sendrow
procedure returned an error. In either case, look at the message that precedes this one in the log file to help identify the actual reason for this error.
ERROR: Attempt to mark record as deleted in table 'name' failed (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, ...
This message indicates that the process
or clone
command (used in conjunction with the delete record preservation feature) was unable to update a record and set the appropriate column to indicate that it has been deleted. The 7.1 clients recover from this error by inserting the record into the table after marking it as deleted, by updating the update_type
or the deleted_record
column depending on how the deleted records are being preserved.
ERROR: Attempt to reclone DataSet name[/rectype] without recloning DataSet name1[/rectype1]
The clone
command is attempting to reclone the specified data set and the following conditions exist:
- The configuration parameter automate_virtuals is set to True.
- The specified data set is the primary source for a virtual data set that gets its input from more than one DMSII data set.
- The data set that is the secondary source of data has been cloned previously and is not specified on the command line.

Recloning is only allowed for the data set that is the secondary source of the data. For example, assume SV-HISTORY and SAVINGS both provide data for the virtual data set SV-HISTORY-REMAP, and SV-HISTORY is the primary source of data for SV-HISTORY-REMAP, which must be cloned first. An attempt to reclone SV-HISTORY without recloning SAVINGS results in this error. If you want to reclone SV-HISTORY, you must also reclone SAVINGS. If you specify them both on the command line, the program performs the operations in the right order. However, you can reclone SAVINGS without recloning SV-HISTORY.
ERROR: Attempt to rename table 'name' failed
This message can occur during the execution of a reorganize
command. It indicates that an error occurred while attempting to rename a table. When the parameter use_internal_clone
is set to True, the redefine
command sets up the reorg scripts to use a SELECT INTO statement (CTAS in the case of Oracle) to create a new copy of the table that has the new layout. Added columns are assigned their initial values. Before running the script that does the copy, the reorganize
command executes a SQL statement that renames the old table. You will see this message when the rename fails. If the reorg script that does the copy fails, the new table is dropped and the old table is renamed back to its original name. You will also see this message if the rename fails in this case.
ERROR: Attempt to run user stored procedure 'm_tablename' failed
This message, which only applies to MISER databases during process
or clone
commands, indicates that an error occurred while running the stored procedure used to merge character data that is sent to the client in two separate messages for the data set GL-HISTORY and its corresponding resident history records in the data set GL.
ERROR: Attempt to treat duplicate insert record as an update for table 'name' failed (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, ...
The client traps all duplicate record errors that occur during the execution of insert SQL statements. It then changes the insert statements into update statements and re-executes them. This error message is displayed when this happens while processing an update as a delete/insert and the resulting update statement fails because it cannot find the target record to change. If this error occurs, contact Customer Support.
ERROR: Attempt to update res_flag for table 'name' failed
This message, which only applies to MISER databases during process
or clone
commands, indicates that the client could not update the res_flag
column of a virtual data set derived from a history data set and its associated data set that contains an array of resident history records. An example of two such data sets is SAVINGS and SV-HISTORY.
When history records that were previously resident in the main data set get inserted into the history data set, the array in the main data set is emptied out to make room for new history records. Rather than blindly trying to insert these records into the virtual data set table, the client recognizes that the record is already in the table and updates it instead, avoiding the duplicate record error that would result from an insert. The update effectively changes the res_flag column values from 1 to 0. If the record is not in the virtual data set table, the client recovers from the failed update by doing an insert, which almost never happens. This error indicates that the update failed. It is handled in the same way as any other update error.
ERROR: Attempt to update table 'name' was unsuccessful (AFN=afn, ABSN=absn, SEG=seg, INX=inx) - Keys: colname = value, ...
This message, which can occur during a process
or clone
command, indicates that the client could not update the specified table. This message may appear after other related error messages that need to be fixed first. This error does not apply to situations where the client encounters an update with no matching rows; the client automatically recovers from that situation.
ERROR: Bad bit (0xhhhhhhhh) in {DataItem | DataSet | DataSource | DataTable | DMSItem} column mask
This message occurs when DBClntCfgServer attempts to update a control table entry and gets an invalid bit mask. This is an internal error in the Administrative Console's Customize
command. Contact Customer Support.
ERROR: Bad data mask string index, column 'name' in table 'name' cannot be masked
This message, which is limited to the SQL Server client, indicates that the data mask specified for the given column is incorrect. Data masks, which are stored in the masking_info
column of the DATAITEMS client control table, are constructed as 32-bit integers, where the low 8 bits are the masking function code (0 indicates no masking, 1 indicates “default” masking, 2 indicates “email” masking, 3 indicates “random” masking and 4 indicates “partial” masking). The rest of the value is the index into the masking_parameter
array in the client configuration file, where these values must be specified.
The client uses this information to add a specification of the form "masked with (function = name([parameters]))". Only the random and partial functions have parameters. The random function can be applied to numeric columns, while the partial function can be applied to columns of char or varchar type. This error indicates that the specified string is in error.
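Based on that layout, decoding a masking_info value amounts to the following sketch (the function names mirror the codes listed above):

```python
# Hypothetical decoder for the masking_info layout described above.
MASKING_FUNCTIONS = {0: None, 1: 'default', 2: 'email', 3: 'random', 4: 'partial'}

def decode_masking_info(masking_info: int):
    """Return (function name, index into masking_parameter) for a 32-bit
    masking_info value: low 8 bits = function code, remaining bits = index."""
    func = MASKING_FUNCTIONS[masking_info & 0xFF]
    return func, masking_info >> 8
```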
ERROR: Bad {index | table} suffix value nn specified for table 'name' in DATATABLES
This message occurs during a generate
or reorganize
command if the index_suffix
or create_suffix
column of the corresponding DATATABLES entry has a value that is out of range, or there is no [n] specification in the configuration parameters create_table_suffix
or create_index_suffix
.
ERROR: Bad input line 'text' in globalprofile.ini
This message, which is limited to UNIX clients, indicates that file globalprofile.ini
contains a bad input line. Do not add comments or any other lines to the file, as this will most probably cause this error.
ERROR: Bad input line 'text' in topic configuration file "topic_config.ini"
If you edited the Kafka topic configuration file, you need to correct the line with the given text.
A sample topic configuration file is shown below. The first entry on each line is the table name, which is the data set name in lowercase with dashes replaced by underscores. The second entry is the topic name, which is typically enclosed in double quotes.
; Topic configuration file for update_level 450
[topics]
castm = "ELECTRA_castm"
clinm = "ELECTRA_clinm"
gnarr = "ELECTRA_gnarr"
gwbhb = "ELECTRA_gwbhb"
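The table-name derivation described above can be sketched as:

```python
def table_name_for_data_set(data_set: str) -> str:
    """Derive the topic configuration table name from a DMSII data set
    name: lowercase, with dashes replaced by underscores."""
    return data_set.lower().replace('-', '_')
```

For example, a data set named GL-HISTORY would appear in the file as gl_history.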
ERROR: Bad section header 'name' in globalprofile.ini
This message, which is limited to UNIX clients, indicates that the first line of file globalprofile.ini
has been modified. The first line of this file must contain the section header [dbridge].
ERROR: Bad section header 'sss' in topic configuration file "topic_config.ini"
This message, which is limited to the Kafka client, indicates that the section header [sss] is not "[topics]", which the Kafka client expects to find in the first and only section header of the topic configuration file.
ERROR: BCP format file entry for item 'name' (dms_item_type = dd) cannot be generated
This error indicates the dms_item_type
column in the DMS_ITEMS control table entry is not a legal DMSII data type. This should not happen unless you updated the control table using a bad user script. This error causes the generate
command to fail.
ERROR: bcp_bind failed for column 'name' in table 'tabname'
This error message, which only applies to the SQL Server client, indicates that the client was unable to bind the specified column to host variables while setting up the BCP API to load the table in question. Contact Customer Support if you get this error.
ERROR: bcp_init failed for "database..table"
This error message, which only applies to the SQL Server client, indicates that the call on the BCP API’s bcp_init procedure failed. Check the database name and the table name, and then contact Customer Support if you get this error.
ERROR: bcp_sendrow failed for table 'tabname'
This error message, which only applies to the SQL Server client, indicates that the call on the BCP API’s bcp_sendrow procedure failed. Check the ODBC error that is associated with this error, as there might be a simple explanation, such as the database being out of storage space.
ERROR: Begin_Transaction during data extraction for table 'name' not using the bulk loader failed
This error message is limited to data extractions that do not use the bulk loader (or the BCP API). Such extractions are handled by separate data connections that use transactions to limit the number of updates that are done before doing a COMMIT. Begin transaction should not fail, as it does little other than disabling auto-commits. The size of the transaction is controlled by the configuration parameter max_clone_count.
ERROR: Binary filter generation failed with exit code nnnn - see file "name" for details
This message indicates that the makefilter run that the client launched failed. The client launches makefilter at the end of a define or redefine command and when exiting the Administrative Console's Customize
command. You need to fix the error in the filter source and rerun makefilter to fix this situation. There is no need to rerun the command, as it completed successfully.
ERROR: Binding of variables failed for SQL statement sql_stmt
This message is only applicable when using host variables, which are enabled by setting the configuration parameter aux_stmts
to a non-zero value. This indicates that an internal error occurred while binding program variables to various columns of an SQL statement. In this mode, all SQL statements use bound variables. Before the SQL statements are actually executed, the values are copied into these variables by the program. This technique allows the SQL statement to be re-executed using different data since the SQL remains constant. Only the content of the host variables changes.
ERROR: {Build_HV_Parameters() | Build_Parameters()} encountered an illegal update type dd
These are internal errors, which can occur during process
or clone
commands, when processing an update. They indicate that the client encountered an undefined update type while setting host variables or generating the stored procedure call for an update. These errors are not expected to occur, as the client checks the validity of the update type received from the Databridge Engine before calling these procedures. When the check fails, the following error message is displayed: ERROR: Illegal update type dd for table 'name'. However, the client changes the update type under some conditions, which include:
- Row filtering of OCCURS tables where the update changes the filtering status of the row
- OCCURS DEPENDING ON clauses in COMPACT data sets where the value of the depends item changes.
- Key changes for data sets that use a SET with the KEYCHANGEOK attribute as the source of the indexes for tables.
These cases could conceivably lead to this message being displayed. Contact Customer Support if you encounter this error.
ERROR: Bulk copy record count (mmm) and actual record count (nnn) differ for table 'name'
This message, which only displays when the configuration parameter verify_bulk_load
is set to 2, indicates a mismatch between the number of rows in the table and the number of rows that the client loaded. Look at the bulk loader log file for the table in the working directory and also in the discards
directory where you will undoubtedly find a discard file for the given table. If the parameter verify_bulk_load
is set to 2, this situation causes the client to abend. However, if it is set to 1, you see a similar warning, and the client ignores the error.
ERROR: Bulk load failed (rc = dd) for table 'name' using bcp - see file "bcp.tablename.log" for more information
ERROR: Bulk load failed (rc = dd) for table 'name' using sql*loader - see file "sqlld.tablename.log" for more information
ERROR: Bulk load failed (rc = dd) for table 'name' using pgloader - see file "pgloader.tablename.log" for more information
These messages indicate that there was a problem loading the extract records into the corresponding database table using the bulk loader. Look at the bulk loader log file in the working directory. This file usually identifies the problem. The most common problems are:
- The maximum error threshold was exceeded causing the bulk load to abort the operation.
- The database’s bin directory is not in the PATH, or the config parameter bulk_loader_path is not set, causing the attempt to run the bulk loader to fail.
- The database or table space is out of space.
In the case of the Microsoft SQL Server client, the sequence (rc = d) specifies the exit code of the launched command file that does the bcp. An exit code of 1 indicates that bcp returned a nonzero exit code, and a return code of 2 indicates that the bcp_auditor utility determined that the bcp failed, even though bcp returned an exit code of 0.
ERROR: Call on dbkafka_begin_txn() failed, rc = dd
This message, which is limited to the Kafka client, indicates that call on the Kafka function in question failed.
ERROR: Call on dbkafka_deinit() failed, rc = dd
This message, which is limited to the Kafka client, indicates that call on the Kafka function in question failed.
ERROR: Call on dbkafka_end_txn({false | true}) failed, rc = dd
This message, which is limited to the Kafka client, indicates that a call on the Kafka function in question failed. An argument of false indicates a rollback, while an argument of true indicates a commit.
ERROR: Call on dbkafka_init() failed, rc = dd
This message, which is limited to the Kafka client, indicates that a call on the Kafka function in question failed.
ERROR: Call on dbkafka_produce() failed, rc = dd
This message, which is limited to the Kafka client, indicates that a call on the Kafka function in question failed. This function initiates the delivery of a message by enqueuing the message on the internal producer queue. The call does not wait for the message to be delivered; the actual delivery attempts to the broker are handled by background threads.
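The asynchronous produce pattern described above can be sketched in pure Python. This is a stand-in for illustration only: the real client delegates to the Kafka producer's internal queue, whereas this sketch uses a plain queue and a background thread.

```python
# Minimal sketch of the produce pattern described above: produce() only
# enqueues the message and returns; a background thread performs delivery.
# Pure-Python stand-in; names are illustrative, not part of the client.
import queue
import threading

class ProducerSketch:
    def __init__(self):
        self._q = queue.Queue()
        self.delivered = []      # stands in for messages sent to the broker
        worker = threading.Thread(target=self._deliver, daemon=True)
        worker.start()

    def produce(self, msg):
        self._q.put(msg)         # returns immediately; no waiting for delivery

    def _deliver(self):
        while True:
            msg = self._q.get()
            self.delivered.append(msg)   # real code would send to the broker here
            self._q.task_done()

    def flush(self):
        self._q.join()           # wait for background delivery to drain
```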
ERROR: Call on dbkafka_topic() failed, rc = dd
This message, which is limited to the Kafka client, indicates that a call on the Kafka function in question failed.
ERROR: Cannot access Databridge control tables - Make sure that a 'configure' command was executed previously
This message can occur for any client command except dbutility configure. It can indicate either of the following:
- You did not run the dbutility configure command and therefore the client control tables were not created.
-or-
- (more common) You are trying to execute a dbutility command with a relational database user ID that is different from the one that originally created the control tables. In this case, use a relational database query tool to sign on to the relational database with the same user ID that dbutility is using. Then, enter the following:
For Microsoft SQL Server:
select name, uid from sysobjects where type = 'U'
For Oracle:
select table_name from user_tables
The resulting display includes the names of all of the user tables owned by the user ID. If the client control tables do not all appear, the problem is the user ID you were using. For more information, see the Databridge Installation Guide.
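One way to check the query output is to compare it against the control table names used throughout this section. This is a hedged sketch: the exact set of control tables depends on the client version, and the names below are only those referenced in this document.

```python
# Hedged sketch: given the table names returned by the query above, report
# which control tables are missing. The table list here covers only the
# control tables referenced in this section; your version may have more.
CONTROL_TABLES = {"DATASOURCES", "DATASETS", "DATATABLES", "DATAITEMS", "DMS_ITEMS"}

def missing_control_tables(user_tables):
    """Return the control tables absent from the user's table list."""
    return sorted(CONTROL_TABLES - {t.upper() for t in user_tables})
```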
ERROR: Cannot access user# for user userid
(SQL Server client only) When you sign on to the relational database, the client reads the Microsoft SQL Server table sysusers to find the user index for your user ID. The user index is important because the relational database uses the user index (numeric), rather than the user ID (character string), to mark table ownership.
When this error occurs, it is typically preceded by messages that help to illustrate what went wrong. If this error persists, contact your relational database administrator.
ERROR: Cannot find entry point {"EBCDIC_to_ASCII" | "INITIALIZE_EATRAN" | "TERMINATE_EATRAN"} in extended translation {DLL | shared library} "filename"
The attempt to locate the entry point EBCDIC_to_ASCII, INITIALIZE_EATRAN, or TERMINATE_EATRAN in the data translation DLL failed. Make sure that there is not another DLL with the same name that Windows is finding instead. The best way to avoid this problem is to put the DLL in the Databridge client program directory and set up the PATH environment variable to include this directory. Because the client is executed from this directory and the DLL is present in the same directory, you should not run into this problem.
ERROR: Cannot open lock file for data source name, errno=number (errortext)
This message indicates that the client cannot open the lock file used to implement a file lock for the data source. A common cause of this error is that the locks directory does not exist. You must create the service's (global) working directory and the locks sub-directory in order to be able to run the client. For more information, see the Databridge Installation Guide.
ERROR: Cannot run a cmd_name command while the data source is locked
This message indicates that a Databridge run has the data source locked. When a run hangs, you must cancel the run to release the lock file. The process_id of the run that locked the file, and the command it is executing, are written into the lock file "source_database.lock" (where source is the name of the data source and database is the name of the relational database). This file is located in the locks subdirectory of the service's (global) working directory. You can open it with an editor to see why the data source is locked.
ERROR: Changing the keys for a DataSet when it is using a DMSII SET as the index is not allowed
The Administrative Console's Customize command currently does not support changing the values of item_key in DMS_ITEMS for data sets that are using a DMSII SET as the source for the index.
You can force the client to use AA Values or the RSN (if it exists) as the index by setting the bit DSOPT_Use_AA_Only (0x4000) in the ds_options column for the corresponding entry in the DATASETS control table and running a redefine command with the -R option.
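The bit manipulation above amounts to OR-ing 0x4000 into the existing ds_options value without disturbing the other option bits, as this small sketch shows. (The actual change is made against the DATASETS control table, for example with an UPDATE statement; the Python here only illustrates the arithmetic.)

```python
# Sketch of the ds_options change described above: OR the DSOPT_Use_AA_Only
# bit (0x4000) into a ds_options value, leaving all other bits intact.
DSOPT_Use_AA_Only = 0x4000

def set_use_aa_only(ds_options):
    return ds_options | DSOPT_Use_AA_Only
```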
ERROR: chdir failed for directory "path", errno=number (errortext)
This message can occur when processing the user_script_dir line of a text configuration file or when updating this parameter in a binary configuration file. It can also occur when running a user script. When file security is enabled, user scripts must reside inside the service's working directory, as they will then be protected by file security. When the client needs to verify that a user script or the specified user script directory complies with this rule, it issues a couple of _chdir and _getcwd calls. This error is an indication that a directory is missing or that the parameter user_script_dir is pointing to a non-existent directory.
ERROR: CheckTokenMembership() failed for group 'name', errno=number (errortext)
(Windows only) This message, which only applies when file security is enabled, indicates that the system call to verify the userid's membership in a group failed. The client uses this call to determine whether the user is allowed to run the client, that is, whether the user has write permissions for the files in the working directory.
ERROR: Cleanup of table 'name' failed
The client was unable to delete selected entries from the specified table during a drop or dropall command. The client typically drops tables. However, if tables contain non-DMSII data or are populated from more than one data source, the client uses the cleanup scripts to delete the records in question. Look at the relational database messages that precede this error for more information.
ERROR: Client aborting rpc_description call
This message is preceded by other messages such as relational database API errors. The rpc_description indicates the specific operation (such as connect, initialize, or switchaudit) that was in progress when the client error occurred. This message indicates that a fatal error occurred while processing the given DBServer RPC.
ERROR: Client Control Table version mismatch (found number, expected number) - Run the "dbfixup" program to correct this situation
(dbutility only) This message indicates that the client control tables have the wrong version. You must run the dbfixup program to correct this error. This message typically occurs if you upgrade from an earlier version of the client and do not run the dbfixup program before you run the client.
ERROR: Client Control Table version mismatch (found number, expected number) - The {service | daemon} will automatically run "dbfixup" to correct this situation
This message indicates that a client launched by the service has detected that the client control tables have the wrong version. The service automatically launches the dbfixup program when this happens. The Administrative Console is usable as soon as the dbfixup runs for all the data sources.
ERROR: Client Control Table version mismatch (found number, expected number) - You need to restore the old client control tables before processing can continue
This message indicates that a client launched by the service has detected that the client control tables have a version that is higher than that of the client. This can happen if you try to revert to using an older client after an upgrade. You must restore the client control tables from the unload file generated during the upgrade before you can continue. You should be able to use an unload command with the -V option indicating the version of the control tables in the old client.
ERROR: Client/host interface level mismatch: client = ver1, host = ver2
This message can occur when the client connects to DBServer, if the version of DBServer is too old to be compatible with the client. Specifically, this message indicates that DBServer returned an illegal value for the negotiated protocol level. We recommend that you always use matching host, Enterprise Server, client, and Administrative Console software.
ERROR: Close failed for binary configuration file "name", errno=number (errortext)
This is an indication that there was a problem writing the binary configuration file to disk. The errortext explains the cause of the failure. The most common cause is that the config directory has not been created, or that you do not have write access to the config directory or the file dbridge.cfg.
ERROR: Close failed for Null Record file, errno=number (errortext)
An error occurred while closing the NULL record file during a define or redefine command. The most likely problem is a lack of disk space. The system error included in the message should explain why the error occurred.
ERROR: Close failed for text configuration file "name", errno=number (errortext)
This is an indication that there was a problem writing the text configuration file to disk. The errortext usually indicates the cause of the failure. The most common cause is that the config directory has not been created.
ERROR: Column 'name' in table 'name' is not an AA Value -- this is not supported
This message only occurs when you clone embedded subsets and the client tries to execute a bulk delete operation, which deletes all the child records that belong to a given parent in the virtual data set table that implements the embedded subset. This message is indicative of a configuration error, as the parent record must be using AA Values as the index; without that, the parent/child relationship cannot be implemented.
ERROR: Command aborted due to error in cross checking DMSII Links
The define or redefine command is being aborted because the client encountered errors when checking the integrity of DMSII links. This message is preceded by one or more error messages indicating which tables have links to nonexistent or inactive tables. The most likely cause of this error is that you are attempting to do a partial redefine of the data source, with a data set that is the target of a link not included in the redefine command.
ERROR: command_name command failed to complete
This message indicates that the client command could not complete successfully, where command_name is a client command such as process, clone, or redefine. Typically, this message is preceded by an explanatory error.
ERROR: COMMIT {ABSN | TIME | TRANS | UPDATES} command requires a numeric argument
ERROR: COMMIT {ABSN | TIME | TRANS | UPDATES} value out of range
These messages are a result of invalid user input for a dbutility console COMMIT ABSN, COMMIT TIME, COMMIT TRANS, or COMMIT UPDATES command.
ERROR: COMMIT command must be followed by {Absn <n> | Update <n> | TIme <n> | TRans <n> | Stats}
This message is a result of invalid user input for a dbutility console COMMIT command.
ERROR: Commit_Transaction failed during data extraction for table 'name' not using the bulk loader aborting clone
This error message is limited to data extractions that do not use the bulk loader (or the BCP API). Such extractions are handled by separate data connections that use transactions to limit the number of updates that are done before doing a COMMIT. The size of the transaction is controlled by the configuration parameter max_clone_count. The failure of the COMMIT is most likely a database resource issue. The preceding database API error messages should help in determining the source of the problem.
ERROR: Configuration file "name" contains no valid information
This message indicates that the specified configuration file contains no data or valid information (for example, there are no section headers in the file).
ERROR: Configuration file [name] section: error_message - Input Line: input_text
This message is associated with a large number of errors that can appear when processing a text configuration file. If you use binary configuration files, these messages are confined to the import command.
[name] is the relevant section in the configuration file. input_text is the actual line of text in which the error occurred. For information about configuration file syntax, see Appendix C in the Databridge Client Administrator's Guide.
ERROR: Configuration file contains invalid line: input_text
This message typically appears when you have omitted the semicolon (;) from a comment line that precedes the first section header in the configuration file.
ERROR: Configured numeric date format (number) for item 'name' in table 'name' is not supported, {record will be discarded | date set to NULL} -Keys: colname = value, ...
This message can occur during a process or a clone command when processing a data item whose sql_type column contains a value of 13 (numeric_date). The configuration parameter numeric_date_format is used to define the format for numeric dates. This error indicates that the format specified in the configuration file is not supported.
ERROR: CreateDirectory failed for directory "name", error=number (errortext)
(Windows only) This message indicates that the attempt to create a directory failed. The errortext should help identify the cause of the error.
You should not try to change file security, except by running the setfilesecurity program. This program is located in the root of the install directory.
ERROR: CreateFile failed for 'Console_Reader_Thread'
(Windows only) This message is an internal error, which indicates that the console thread’s attempt to read input from the keyboard failed. As a result, the console is inoperative. Except for this, the run proceeds as if the console were not enabled.
ERROR: CreateFile failed for data file "name", error=number (errortext)
(Windows only) This message can occur during a process or clone command. It indicates that the client cannot create the specified data file for holding bulk loader data. The indicated system error should explain the cause of the problem.
ERROR: CreateFile failed for file "name", error=number (errortext)
(Windows only) This message can occur when opening a new file for write and file security is enabled. The client creates the new file using the Windows library CreateFile procedure, which uses a DACL that is constructed to reflect the access rights based on the security defined at install time. The error will most likely indicate that the user does not have the proper privileges to create the file. Another advantage of using file security is that it allows the service and the command line to share files when the service is run using the built-in SYSTEM account.
ERROR: CreateMutex failed for 'name', error=number (errortext)
(Windows only) This message, which can occur during a process or clone command, indicates that a Windows internal error has occurred while attempting to create a mutex resource. If this error occurs, contact Customer Support.
ERROR: CreateSemaphore failed for 'name', error=number (errortext)
(Windows only) This message, which can occur during a process or clone command, indicates that a Windows internal error occurred while attempting to create a semaphore resource used by the table creation thread. If this error occurs, contact Customer Support.
ERROR: CreateThread failed for 'name', error=number (errortext)
(Windows only) This message, which can occur during a process or clone command, indicates that an internal system error occurred while attempting to create a thread (bulk_loader, index, console, or update_worker thread). If this error occurs, contact Customer Support.
ERROR: Creation of control table name failed
This message can occur during a dbutility configure command or when the control tables are created by the Administrative Console's Customize or Define/Redefine commands. It indicates that an error occurred while creating the specified client control table. See the relational database API message that precedes this message (on the screen or in the log file) for more information. For more information, see OCI Errors or ODBC Errors.
ERROR: Creation of history table 'name' failed
This message, which can occur during a reorganize command, indicates that the command was unable to create the given history table. The configuration parameter enable_dynamic_hist allows the client to dynamically create history tables without having to re-clone the data set in question. See the database API error messages that precede this error message to determine why the creation of the table failed.
ERROR: Creation of keys failed for primary table 'name'
This message can occur during a define or redefine command. It indicates that an error occurred while the client attempted to define the keys for a primary table. This message is preceded by another error message that explains the actual cause of this error.
ERROR: Creation of keys for OCCURS failed for table 'name'
This message, which can occur during a define or redefine command, indicates that an error occurred while trying to insert the keys for an OCCURS table into the DATAITEMS client control table. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: Creation of keys within split failed for table 'name'
This message can occur during a define or redefine command. It indicates the following:
- A DMSII data set has more columns than the relational database limit and, therefore, must be split into two or more relational database tables.
- An error occurred while creating the keys for a secondary table.
See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: Creation of table 'name' and its procedures failed
This message can occur during the cloning of a data set using a process or clone command if the client is unable to create a table and its stored procedures. See the relational database message that precedes this error for more information about what went wrong. If you used create table suffixes, verify them, as they could be the cause of the error.
ERROR: Critical columns missing from table 'name', cannot create cleanup script
This message indicates that the generate command could not find some of the columns that are created by setting bits in the external_columns column of the DATASETS control table entry for the table's parent data set. The most common cause of this message is that the external_columns column was assigned a value without running a define or a redefine command, or that the active column for the item in question was set to zero in DATAITEMS. It is recommended that you use user scripts to perform all such actions that the redefine command runs, and that you never change the value of the active column in DATAITEMS.
ERROR: Data item number nnn referenced in filter file does not exist for table 'name'
This message is an internal error that indicates the binary filter file is not in sync with the client control tables. Try recreating the filter by either running the makefilter utility's import command or running a redefine command with the -R option (Redefine All) to remedy this situation. This situation should never occur, because whenever you run a redefine command or the Administrative Console's Customize command, makefilter is automatically launched. Pay attention to the cases where the program is unable to compile the filter due to errors in the filter source file. Look in the makefilter log file db_flt_yyyymmdd.log to determine why the compile failed.
ERROR: Data source does not support transactions (SQL_TXN_CAPABLE = NONE)
This message, which is applicable to ODBC and CLI clients, indicates that the database does not support transactions. You cannot run the Databridge client with a database that does not support transactions. This error indicates that your relational database is not properly set up.
ERROR: Database {name | NULL} failed to open
The requested relational database failed to open. (If you do not specify a database name and you are connecting to the default database, the client displays "NULL" as the database name.)
In this case, check the following:
- Is the relational database server running?
- Did you enter the correct relational database server name?
- Did you enter the correct relational database name?
- Did you enter the correct ODBC data source name?
- Did you use the correct user ID for the relational database?
- Did you use the correct password?
Check the settings in the client configuration file and the command-line options.
Unlike earlier clients, the 7.1 clients establish a single connection to the relational database server, except during data extraction. The index thread uses a second database connection during data extraction, which gets terminated when the data extraction ends.
ERROR: Database password cannot be decoded
This message can occur when the client reads a text configuration file that contains an encrypted (or encoded) password that is corrupt. This error happens most often when the file has been edited. To resolve the problem, replace the obfuscated password with its clear text form. Make sure you enclose it in double quotes; then use the import command to update the binary configuration file, followed by an export command to encrypt the clear text password in the text configuration file.
The 7.1 clients encrypt passwords in both text and binary configuration files. During upgrades from older versions, encoded passwords are accepted; however, import and export commands will encrypt these passwords. In text configuration files, encrypted passwords contain the letter "x" followed by a string of hex values, while encoded passwords are represented by a string of hex values.
ERROR: Databridge call failed for rpc_description (Transport or RPC error)
This message is preceded by a SOCKETS ERROR or other message that occurs during the processing of a remote procedure call. The rpc_description identifies the specific operation that was in progress when the client error occurred (for example, connect, primary_set, set_option, data sets, initialize, switchaudit). This message can occur in any command that establishes communications with DBServer or Enterprise Server.
ERROR: Databridge control tables are not empty, use dropall command first - To bypass this check, use the 'u' option for the configure command
This message indicates that you are attempting to run a dbutility configure command that overwrites the existing client control tables. This message is intended as a safeguard so that you do not accidentally overwrite the existing client control tables. For information about -u and other dbutility command options, see Appendix B in the Databridge Client Administrator's Guide.
ERROR: Databridge control tables from a prior release exist run dbfixup to upgrade them - To bypass this check, use the 'u' option for the configure command
This message indicates that you are attempting to run a dbutility configure command that found client control tables with an old version. Using the -u option will result in the control tables being dropped and re-created. If you want to preserve the old control tables, you can run dbfixup to upgrade them.
ERROR: DataSet List specification not allowed for a DataSource that is not defined
This message occurs during a reload command when you specify a list of data sets to reload for a data source that is not found in the control tables. The partial reload of only a few specific data sets is only supported if the data source exists and contains entries for the specified data sets.
ERROR: DataSet List specification not allowed when loading all DataSources
This message occurs during a reload command when you specify a data source name of _all and a list of data sets to reload. The partial reload of only a few specific data sets is only supported if the data source is explicitly named, exists, and contains entries for the specified data sets.
ERROR: DataSet name not found
This message appears during a clone or a refresh command when one of the following occurs:
- You did not enter the correct DMSII data set name on the command line.
- The active column for this data set is set to 0 in the DATASETS control table.
ERROR: DataSet name[/rectype] does not have history tables; DSOPT_HistoryOnly bit must be 0 in ds_options
This message, which can occur during a define or redefine command, indicates that the ds_options column of the DATASETS table entry for the data set is incorrect. The bit DSOPT_HistoryOnly (0x2000) can only be set when the bit DSOPT_Save_Updates (8) is also set. This is an indication that your user scripts are incorrect.
ERROR: DataSet name[/rectype] failed reorganization, correct the error or reclone – mode dd
This message can occur at the start of a process or clone command if the client finds a data set whose mode is 33. This situation can occur if you attempt to run a process or clone command after running a reorganize command where the reorganization of a table fails for the data set in question.
To resolve this problem, do one of the following:
- Set the ds_mode to 0 and re-clone the data set.
-or-
- Fix the reorganization script and rerun the reorganize command after setting the ds_mode to 31 for the data set in question.
ERROR: DataSet name[/rectype] has an invalid mode dd
This message occurs during a process command (at data set selection time) if the ds_mode column of the DATASETS control table contains an illegal value dd. Use a relational database query tool to enter a valid dd value. Refer to Chapter 5 in the Databridge Client Administrator's Guide for a list of valid mode values for the ds_mode column of the DATASETS control table.
ERROR: DataSet name[/rectype] has an invalid value (dd) in the status_bits field
This message occurs at the start of a process or a clone command (at data set selection time) if the status_bits column of the DATASETS control table contains an illegal value dd. Use the Administrative Console or a relational database query tool to enter a valid dd value.
ERROR: DataSet name[/rectype] has been reorganized (mode = dd); you must first run a reorganize command
This message can occur at the start of a process or clone command (at data set selection time) if the client finds a data set whose mode is 31 or 34. This situation can occur if you attempt to run a process or clone command after running a redefine command that needs to be followed by a reorganize command.
After you inspect the reorg scripts to make sure that the actions they are about to perform are reasonable, run a reorganize command. Reorg scripts ALTER tables, which is nearly impossible to reverse.
ERROR: DataSet name[/rectype] has been reorganized; you must first run a redefine command
This message can occur during a process or clone command (at data set selection time) if the DBSelect RPC call returns a status indicating that the data set has been reorganized. This situation can occur if you attempt to clone a data set that was defined before the data set was reorganized in DMSII. Simply run a redefine command and (if necessary) a generate command before attempting the clone again.
ERROR: DataSet name's {real_ds_num | virtual_ds_num} column points to an inactive or non-existent structure number
This message only applies when the automate_virtuals parameter is set to True. It indicates that the real_ds_num or virtual_ds_num column for the specified data set points to an inactive or non-existent structure. The proper handling of virtual data sets that get input from more than one real data set requires that all data sets involved have their active columns set to 1. Use the Administrative Console or a relational database query tool to correct this situation.
ERROR: DataSource name already defined, use redefine command instead - To bypass this check, use the 'u' option for the define command
This message occurs if you attempt to run a define command for a data source that is already defined. It is a protection against inadvertently running a define command when you meant to run a redefine command. If you intended to use a define command, use the -u option on the command line. When using the Administrative Console, the Define/Redefine command automatically figures out which command to use.
ERROR: Day of year value val out of range for item 'name' in table 'name'. {record will be discarded | date set to NULL} - Keys: colname = value,...
This message can occur during a process or clone command. It indicates that the specified day value is incorrect in a MISER or LINC database date or in a Julian date.
This is not a fatal error. The date is stored as NULL or, if the item is a key, the record is discarded.
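The range check implied by this error can be sketched as follows: a Julian-style day of year must fall between 1 and 365, or 366 in a leap year. The helper name is illustrative.

```python
# Sketch of the day-of-year range check implied above: valid values run
# from 1 to 365, or 366 in a leap year. Illustration only.
import calendar

def day_of_year_valid(year, day):
    limit = 366 if calendar.isleap(year) else 365
    return 1 <= day <= limit
```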
ERROR: Definition of data items for table 'name' failed
This message can occur during a define or redefine command. It indicates that a failure occurred while defining the data items for the specified table. For more information on the cause, refer to the messages that precede this message (onscreen or in the log file).
ERROR: Delete of control_tablename entries for datasource failed
This message can occur during a define, redefine, drop, or dropall command. It indicates that the records in the given control table could not be deleted. See the relational database API message that precedes this message (onscreen or in the log file) for more information.
ERROR: Delete_all for table 'name' failed
This message indicates that the processing of a DELETE_ALL request failed. This request is used while updating embedded subsets. See the relational database API message that precedes this message (onscreen or in the log file) for more information about why the delete statement failed.
ERROR: Deletion of records from Client control tables failed for DataSource name
This message can occur during the define command; it is always preceded by more specific error messages. It is an indication that the define command could not delete the entries that existed in the control tables. For more information, see the database API message that precedes this message.
ERROR: DeSelect for DataSet name[/rectype] ignored, it has no structure index assigned
This message can occur during a process or clone command if the program encounters an internal error when attempting to deselect a data set. This operation is used when the AA values of a data set are invalidated by a DMSII garbage collection reorganization. When a search of the table that contains the selected data sets fails for the data set to be deselected, this message is displayed.
ERROR: DIOPT_Store_as_GUID option only valid for ALPHA(36) items -- option ignored
(SQL Server client only) This message occurs when you try to set the bit DIOPT_Store_as_GUID (0x8000000) for a DMS item that is not an ALPHA(36). Setting this bit causes the client to treat the data as a GUID and set up the column to have a data type of uniqueidentifier.
ERROR: DMSII database timestamp does not match value in control tables, further processing is not possible
The column db_timestamp
in the DATASOURCES control table holds the DMSII database’s timestamp. This is normally filled in when the data source is defined and is used to make sure that client is using the same database as when the data source was created. If the column is 0, the test is not preformed. The client will update this column in this situation. If you need to bypass this error, set the db_timestamp
column to 0.
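A hedged sketch of that bypass, run from your relational database query tool (the data source key column and its value are assumptions; check your DATASOURCES table for the actual names):

```sql
-- Sketch only: disable the DMSII timestamp check for one data source.
-- 'data_source' and 'MYSOURCE' are assumed names; verify them first.
UPDATE DATASOURCES
   SET db_timestamp = 0
 WHERE data_source = 'MYSOURCE';
```

As noted above, the client then repopulates the column on its next run.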
ERROR: DMSII date contains an illegal numeric value val for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value,...
This message indicates that the client encountered an illegal numeric value while processing an item that is being interpreted as a DMSII date. This error message applies to both numeric and alpha dates. It can be caused by a number that is longer than 8 digits; a value that contains illegal digits; an incorrect value in the dms_subtype
column; or bad DMSII data. This is not a fatal error. The date is stored as NULL or, if the item is a key, the record is discarded.
ERROR: DMSII date/time contains an illegal time value tval for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: columnname = value, ...
This message indicates that the client encountered an illegal numeric value while processing an item that is being interpreted as a DMSII date/time. This error message applies to both numeric and alpha date/time values. It can be caused by a number that is longer than 14 digits; a value that contains illegal digits; an incorrect value in the dms_subtype
column; or bad DMSII data. This is not a fatal error. The date/time is stored as NULL or, if the item is a key, the record is discarded.
ERROR: Drop of control table name failed
This message can appear during dbutility configure
or dropall
commands. It indicates that the drop of the specified control table has failed. See the relational database API message that precedes this message (onscreen or in the log file) for more information.
ERROR: Drop of table 'name' failed
This message can appear during a drop
or a dropall
command. It indicates that the drop of the specified data table, or data table’s associated stored procedures, has failed. For more information, see the relational database API message that precedes this message (onscreen or in the log file).
ERROR: Drop of table 'x_name' failed
This error can occur during a reorganize
command when the parameter use_internal_clone
is set to True. It indicates that after the reorganize
command created a new copy of the table, the attempt to drop the old table, which was previously renamed, failed. You will need to use the relational database query tool to drop this table. The reorganize
command does not stop when this error occurs.
ERROR: email() masking function is illegal for item 'name' in table 'name' data type = dtype
This error message, which is limited to the SQL Server client, indicates that an attempt was made to mask the specified column using a masking function of "email". This function is only valid for columns whose data type is char
or varchar
.
ERROR: Embedded DataSet name[/rectype] cannot be selected because its parent structure is not active
This message, which can occur during a process
or clone
command, indicates that the client encountered an embedded data set whose parent structure is not selected. Set the active
column of the parent structure to 1 in the DATASETS control table.
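A hedged sketch of that change (the name column and data set name are assumptions; verify them against your DATASETS table):

```sql
-- Sketch only: reactivate the parent structure of an embedded data set.
-- 'dataset_name' and 'PARENTDS' are assumed names; verify them first.
UPDATE DATASETS
   SET active = 1
 WHERE dataset_name = 'PARENTDS';
```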
ERROR: Engine did not send BI/AI pair for update to DataSet name[/rectype] which allows key changes - Clear bit 0x1000 in ds_options of DATASETS entry if you wish to ignore this error
This message indicates that the Databridge Engine did not send updates as BI/AI pairs, as the client requested. To ignore the error, clear the bit 0x1000 in the ds_options
column, as the message suggests. This situation can occur if you have bad user scripts that force the data set to use AA Values or RSNs by improper methods; the proper method is to set the bit DSOPT_Use_AA_Only (0x800)
in ds_options
. If you use the deprecated bit DSOPT_Include_AA (16)
and clear the item_key
columns, you will get this error.
Caution
Clearing this bit is inadvisable when the DMSII SET being used as the source of the index has the KEYCHANGEOK attribute.
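As a hedged illustration, the proper method described above might look like this in a SQL Server query tool (the name column and data set name are assumptions; the Oracle client would use its BITOR function instead of the | operator):

```sql
-- Sketch only: set DSOPT_Use_AA_Only (0x800) in ds_options.
-- 'dataset_name' and 'MYDS' are assumed names; verify them first.
UPDATE DATASETS
   SET ds_options = ds_options | 0x800
 WHERE dataset_name = 'MYDS';
```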
ERROR: Execution of script "filename" failed
This message can occur during a reorganize
command and indicates that the execution of a reorganize
script failed. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
Look at the script to see if you can determine why it failed. If you find the error, correct it and set the data set's ds_mode
to 31. You can then rerun the reorganize
command, as it only affects data sets whose ds_mode is 31.
ERROR: Expected LINK_AI record for DataSet name[/rectype] not received from Server
This message indicates that a Databridge Engine error occurred during the data extraction of a data set that has DMSII links. The data extraction for such data sets consists of CREATE
images followed by LINK_AI
records. The client combines these before writing the resulting record to the bulk loader file (or pipe, on UNIX). If a CREATE
record is followed by another CREATE
record this error is issued. Contact Customer Support in the unlikely event that you get this error.
ERROR: External column 'name' in table 'name' has an unsupported data type "dtype"
(SQL Server only) This message, which only occurs when using the BCP API, indicates that the column name, which was added to the table using a create table user script, has an unsupported data type. If you get this error, set the bit DSOPT_Use_BCP
(0x1000000) in ds_options
to make the client use bcp for this table.
ERROR: Failed to get SID for 'name', error=number (errortext)
(Windows only) This message, which only applies when file security is enabled, indicates that the system call to get the security ID for the given user name failed. Contact Customer Support if you get this error. The client will revert to using default security when this happens.
ERROR: Fetch of DATATABLES failed
This message can occur during a define
or redefine
command. It indicates that the FETCH of the external table names following the SQL select statement for the DATATABLES client control table failed. For details, see the relational database API messages that precede this message (onscreen or in the log file).
ERROR: fgets failed for console input, errno=number (errortext)
(UNIX only) The dbutility console reads the keyboard to get a console command. If you run dbutility as a background run, an attempt to read from the keyboard will result in this error and cause the console thread to terminate. To prevent this error, set the parameter inhibit_console
to True in the client's configuration file. On UNIX, you can use a couple of the kill signals to communicate with the background run.
ERROR: File name in create {table | index} suffix[nn] "filename" for table 'name' is missing
This message can occur when the suffix in question starts with "#add ". This is used to indicate that the suffix should be retrieved from the specified file in the scripts
directory. This error indicates that the suffix does not contain a file name after "#add ". Correct the suffix in the configuration file and rerun the generate
command.
The 7.1 clients treat a create table or create index suffix that starts with "#add " as a request to import the content of the specified file as the SQL suffix. This allows you to use long suffixes that are not limited by the maximum line length of text configuration files.
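As a hedged illustration, such a suffix entry in the text configuration file might look like the following (the parameter name and file name are purely illustrative; consult the Administrator's Guide for the actual parameter syntax):

```cfg
; Sketch only: import a long CREATE TABLE suffix from the scripts directory.
; "create_table_suffix" and the file name are illustrative assumptions.
create_table_suffix[1] = "#add table1_suffix.txt"
```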
ERROR: File "name" specified in -f option does not exist
This message, which is limited to dbutility, indicates the argument of the –f
option is not a valid file name. If you just specify a file name, the program looks for the file in the config
directory. If you want to point to a file in any other location, you need to specify the fully qualified name of the file.
ERROR: fseek failed for Null Record file, errno=number (errortext)
This error can occur when reading or writing the NULL record files. If the error occurs during a process
or clone
when trying to read the file, you can try running a redefine
command with a -R
option to rebuild the file. If you get this error during a define
or a redefine
command, the only recourse you have is to set the parameter read_null_records
to False.
ERROR: func masking function parameter contains a decimal point for item 'name' in table 'name' data type = dtype
This error message, which is limited to the SQL Server client, indicates that the validation of the masking parameter string specified in the configuration file failed for the column in question. This could happen for a masking type of “random” applied to a column that has an integer data type, or for a masking type of “partial” that has three arguments, two of which must be integers.
ERROR: func masking function parameter contains non-numeric characters for item 'name' in table 'name' data type = dtype
This error message, which is limited to the SQL Server client, indicates the validation of the masking parameter string specified in the configuration file failed for the column in question. This is limited to a masking type of “random” which has two numeric parameters.
ERROR: Generation of AA item 'name' in DATAITEMS table failed for 'tabname'
This message can occur during a define
or redefine
command. It indicates that the specified item could not be placed in the DATAITEMS table. The AA Value is the offset of the record in the DMSII data set (that is, its absolute address). The column, which is named my_aa
or my_rsn
, contains the AA Value or RSN (record serial number) of the record. For more information about the cause of the failure, see the relational database messages that precede this error.
ERROR: Generation of BITOR function failed
The Oracle client uses a BITOR function to perform logical OR operations on the various options columns in the control tables. The generate
command creates this function when it does not exist. Oracle has a native BITAND function that performs logical AND operations. This message indicates that the creation of the BITOR function failed. For more information about the cause of the failure, see the relational database messages that precede this error.
ERROR: Generation of Client Control Table version entry in DATASOURCES failed
This message can occur during a dbutility configure
command or when the control tables are created by the Administrative Console's Customize
or Define/Redefine
command. It indicates that the client cannot generate the DATASOURCES table entry that holds the version of the control tables. For details about this problem, see the relational database messages that precede this error. One possible cause is that the userid does not have the required privileges.
ERROR: Generation of common scripts failed
This message applies to the Databridge client for Oracle only. It can occur during a generate
command, and it indicates that an error occurred while running one of the three common scripts that create or replace stored procedures used by the scripts the program uses to create or drop tables and stored procedures. The stored procedures are named exec_DDL
, drop_proc
, and drop_table
. Try dropping these procedures using SQL*Plus and then rerun the generate
command. The OCI messages that precede this error should provide further clues about why this error occurred. The most likely cause for this error is that you do not have the appropriate privileges to perform this operation.
ERROR: Generation of DATASETS entry failed for DataSet name[/rectype]
This message can occur during the define
or redefine
command. It indicates that an error occurred while inserting an entry into the DATASETS control table for the specified data set name. For more information about the cause of the failure, see the relational database message that precedes this message (onscreen or in the log file).
ERROR: Generation of DATASETS entry failed for Global_DataSet
This message can occur during a define
or redefine
command. It indicates that an error occurred while inserting an entry for Global_DataSet
into the DATASETS control table. This entry is used for holding the global State Information during update processing. For more information about the cause of the failure, see the relational database message that precedes this message (onscreen or in the log file).
ERROR: Generation of DATASOURCES entry failed for name
This message can occur during a define
or a redefine
command. It indicates that the client could not insert a record into the DATASOURCES control table. For more information about the cause of the failure, see the relational database message that precedes this message (onscreen or in the log file).
ERROR: Generation of DATATABLES entry failed for 'tabname'
This message can occur during a define
or a redefine
command. It indicates that the client could not insert a record into the DATATABLES control table. For more information about the cause of the failure, see the relational database messages that precede this message (onscreen or in the log file).
ERROR: Generation of external column item 'name' in DATAITEMS table failed for 'tabname'
This message, which can occur during a define
or a redefine
command, indicates that the client could not generate the entry for the external column item in the DATAITEMS control table. For details, see the relational database message that precedes this message (onscreen or in the log file).
ERROR: Generation of item name in DMS_ITEMS table failed for DataSet name[/rectype]
This message can occur during the define
or redefine
command. It indicates that an error occurred while inserting an entry into the DMS_ITEMS control table for the specified item and data set. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: Generation of [KEY] item 'name' in DATAITEMS table failed for 'tabname'
This message, which can occur during a define
or a redefine
command, indicates that the client could not insert an entry into the DATAITEMS control table for the specified item, which is a key in the index for the specified table. See the relational database API message that precedes this message (onscreen or in the log file) for more information.
ERROR: getcwd failed, errno=number (errortext)
See message "ERROR: chdir failed for directory "*path*", errno=*number* (*errortext*)"
, as these errors happen under very similar conditions.
ERROR: History table definition failed for DataSet name[/rectype]
The define
or redefine
command could not create the entry for the history table for the specified data set in the DATATABLES control table, or it could not create the entries for its columns in the DATAITEMS control tables. See the relational database messages that precede this error for more information about what went wrong.
ERROR: History table 'name' does not include {an identity | a serial} or update_time column
In order to use history tables, the tables must have an identity column or an update_time
column and a sequence_no
column to determine the order in which to apply the changes. We recently added support for an IDENTITY column to the Oracle client. Identity columns (serial
or bigserial
in the case of PostgreSQL) are well suited for this purpose; however, the combination of an update_time
and a sequence_no
column also works. If the client finds no such column during a generate
command, it displays this error, which causes the generate
command to fail. Make sure that you have not set active=0 in DATAITEMS for the columns that the define
command automatically creates for history tables.
ERROR: History table 'name' does not include an 'update_type' column
In order to use history tables, the table must have an update_type
column, which specifies the type of update involved (insert, delete, or update). If the client finds no such column during a generate
command, it displays this error, which causes the generate
command to fail. Make sure that you have not set active=0 in DATAITEMS for the columns that the define
command automatically creates for history tables.
The 7.1 clients also use update types of MODIFY_BI (6) and MODIFY_AI (7) for records that are subjected to a key change. For backwards compatibility this is only done when the config parameter new_history_types
is set to True.
ERROR: Host password cannot be decoded
This message can occur when the client reads a text configuration file that contains an encrypted (or encoded) password that is corrupted. This error happens most often when the file has been edited. To resolve the problem, replace the password, making sure you enclose it in double quotes, then use the import
command to update the binary configuration file, followed by an export
command to replace the clear text password in the text configuration file.
The 7.1 clients encrypt passwords in both text and binary configuration files. During upgrades from older versions encoded passwords are accepted, however, import
and export
commands will encrypt these passwords. In text configuration files encrypted passwords contain the letter "x" followed by a string of hex values, while encoded passwords are represented by a string of hex values.
ERROR: Illegal argument for 'g' option
This option is used to pass the port number on which the Client Manager Service (or daemon, on UNIX) listens for connect requests. It isn't applicable to dbutility. If you get this error, call Customer Support.
ERROR: Illegal concatenation for items 'name1' and 'name2', resulting column is too large
This error indicates that concatenation is illegal because the resulting column would exceed the maximum length for the corresponding data type.
ERROR: Illegal date value val for item 'name' in table 'name', day set to newday - Keys: colname = value,...
This message only occurs when the correct_bad_days
parameter is set to 1 or 2. It indicates that the DMSII date item contains an invalid day value, which the client changes to make the date valid. A day value of 0 is changed to 1, with no warning, regardless of the value of the correct_bad_days
parameter.
ERROR: Illegal date value val for item 'name' in table 'name', month set to newmonth - Keys: colname = value, ...
This message only occurs when the correct_bad_days
parameter is set to 2. It indicates that the DMSII date item contains an invalid month value, which the client changes to make the date valid.
ERROR: Illegal date value val for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message can occur during the process
and clone
commands. It indicates that the date extracted from the DMSII data is in error. The most likely causes of this error are bad DMSII data or an incorrect value in the dms_subtype
column. This is not a fatal error. The date is stored as NULL or, if the item is a key, the record is discarded.
ERROR: Illegal date value val for item 'name' in table 'name', value of year out of range, {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message can occur during a process
or a clone
command. It indicates that the year portion of the date is invalid for the relational database data type. In the case of SQL Server the year portion of an item with a SQL type of smalldatetime
is limited to the range 1900–2079. Similarly, the year portion of an item of SQL type of datetime
is limited to the range 1753–9999. The data types of date
and datetime2
have ranges of 0001-9999. If you are dealing with a SQL type of smalldatetime
, consider changing it to date
or datetime2
. To make the client use these data types (instead of smalldatetime
and datetime
) in SQL Server, set the parameters use_date
and use_datetime2
to True in the client configuration file. No additional customization is needed.
The program recognizes a MISER date of 99999 as a special date used by MISER systems and stores it as 6/6/2079 when the SQL type is smalldatetime
and 12/31/9999 otherwise. This is not a fatal error. The date is stored as NULL or, if the item is a key, the record is discarded.
For the Oracle and PostgreSQL databases, dates have ranges that include the years 0001 through 9999.
ERROR: Illegal dms_subtype number for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message indicates that a member of a DMSII date GROUP has a dms_subtype
value that is not 1, 2, 3, or 4. Note that the only acceptable DMSII GROUPs are those with 2 or 3 numeric items. DMSII date groups are set up by setting the DIOPT_Clone_as_Date (2)
option in the di_options
column of the DMS_ITEMS
entry for the group, followed by the setting of the dms_subtype
columns for the members of the group.
This message can also indicate that a DMSII item with a data type of REAL that is marked to be cloned as a date has an illegal dms_subtype
value. For a complete list of the valid dms_subtype
values for dates refer to the Databridge Client Administrator's Guide.
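A hedged sketch of the date GROUP setup described above (all column names and values here are assumptions; verify them against your control tables and the Administrator's Guide):

```sql
-- Sketch only: mark a GROUP to be cloned as a date and set a member's
-- subtype. DIOPT_Clone_as_Date = 2; names and subtype values are assumptions.
UPDATE DMS_ITEMS
   SET di_options = di_options + 2      -- only if the bit is not already set
 WHERE dataset_name = 'CUSTOMER' AND dms_item_name = 'BIRTH-DATE';

UPDATE DMS_ITEMS
   SET dms_subtype = 1                  -- valid values are documented in the Admin Guide
 WHERE dataset_name = 'CUSTOMER' AND dms_item_name = 'BIRTH-YEAR';
```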
ERROR: Illegal hex character 'char' found in encoded string
This message can occur when the client tries to decrypt or decode passwords when reading the configuration file at the start of a run. Passwords are always encrypted in binary configuration files. Password encryption is optional in text-based configuration files and can be done with the export
command.
ERROR: Illegal month name mmm for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message indicates that the client encountered an illegal month name while processing a DMSII ALPHA date. If the month names are not in English, you need to use the months
specification in the configuration file. This is not a fatal error. The date is stored as NULL or, if the item is a key, the record is discarded after it is written to the corresponding file in the discards subdirectory.
ERROR: Illegal numeric data (value) for field name[number] in archive file
This error can occur during a reload
command. If you did not modify the file, report the error to Customer Support.
ERROR: Illegal [numeric] time value number for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message indicates that the client encountered an illegal numeric time value while processing the item which is being interpreted as a TIME(1). This is not a fatal error and is most likely caused by bad DMSII data. The date is stored as NULL or, if the item is a key, the record is discarded. Make sure that the dms_subtype
value you specified in the DMS_ITEMS table is correct.
ERROR: Illegal numeric value specified for {count | length}
This error can occur during a tcptest
command if an illegal numeric value is specified for the count or length parameter of the command. A value that causes the count to go negative also results in this error. Do not use extremely large values for the count, as this would make the test run for a very long time using up a lot of mainframe CPU time.
ERROR: Illegal numeric value specified for 'F' option argument
The argument of the -F
option for the Client is an audit file number, which must be in the range of 1 to 9999.
ERROR: Illegal numeric value specified for port
This message can occur during the scanning of the port number from the command line argument for a define
command. It indicates that the specified port number is not syntactically correct. Port numbers must be in the range 1 to 65535.
ERROR: Illegal numeric value specified for -t option argument
This message can occur during the scanning of the -t
command line option, which has a numeric argument. It indicates that the specified argument is not syntactically correct. You can specify the trace mask as a decimal number or a hexadecimal number which must be prefixed with “0x”.
ERROR: Illegal numeric value specified for 'V' option argument
The -V
option is used with the dbutility unload
command to specify the control table version. When you upgrade, the dbfixup program creates control tables that are readable by the client from which you are upgrading. This allows you to safely reload these control tables and use the previous version if you experience a problem.
ERROR: Illegal operator nnn in filter for table 'name'
This message only occurs when there is an OCCURS table filter present for the given table. It indicates that the binary filter file dbfilter.cfg
is malformed. Try recompiling the binary filter by using the import command of the makefilter utility to get a fresh copy of the binary filter. If the problem persists, contact Customer Support.
ERROR: Illegal string
This message can occur while processing text configuration files or the UNIX globalprofile.ini
file. It indicates that an illegal string value was entered into the configuration file for a parameter whose argument is a quoted string. This message is always followed by a second error message that lists the input record.
In the case of Windows, the most common cause of this error involves the use of backslashes in file names. You must enter a backslash as two backslashes because the first one is interpreted as an escape character. Failure to do so results in this error.
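For example (the parameter name is illustrative only; the point is the doubled backslashes):

```cfg
; Wrong - single backslashes are consumed by the parser:
;   user_script_dir = "C:\DBRIDGE\scripts"
; Right - double each backslash:
user_script_dir = "C:\\DBRIDGE\\scripts"
```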
ERROR: Illegal update type dd for table 'name'
This internal message only occurs when the aux_stmts
configuration parameter is set to a nonzero value. It can occur during the process
or clone
command, and indicates that the section of code that generates the SQL statement for an update encountered an undefined update type. Contact Customer Support if you encounter this error.
ERROR: Improper section header in configuration file line: input_text
This message, which can occur when a text configuration file is being processed, indicates that the section header is not defined or is not formatted using the following syntax:
[*SectionHeaderName*]
ERROR: Improper test with NULL only legal operations are "=" and "!="
This message only occurs when there is an OCCURS table filter present for the given table. It indicates that the binary filter file "dbfilter.cfg"
contains a bad test that has a second operand of NULL. You can only test for equality (=) or inequality (!= or <>) with NULL. Examine your filter source file, correct the error, and recompile the binary filter by using the import command of the makefilter utility to get a fresh copy. If the problem persists, contact Customer Support. Under normal circumstances, makefilter should flag the filter statement as being in error, so this situation is not expected to happen.
ERROR: Incomplete script file "name", missing '\/***\/'
This message can occur during a process
or a clone
operation, while running a script to create a data table. It indicates that the script file script.create.*tabname*
was not created correctly or is corrupt. First, check that an error did not occur during the last generate
command. If the script file is corrupt, run the generate command again to create a new script file. You will need to use the -u
option to force it to generate new scripts.
ERROR: Index creation failed for control table name
This message can occur during a dbutility configure
command or when the control tables are created by the Administrative Console's Customize
or Define/Redefine
commands. It indicates that an error occurred while creating the index for the specified control table. See the relational database message that precedes this message (on the screen or in the log file) for more information.
ERROR: Index creation failed for history table 'name'
This message, which can occur during a reorganize
command, indicates that the command was unable to create the index for the given history table. See the database API error messages that precede this error message to determine why the creation of the index for this empty table failed.
ERROR: Index creation failed for table 'name'
This message, which can occur during a reorganize
command, indicates that the command was unable to re-create the index for the given table. The reorganize
command typically will drop the index for a table before altering it, when a column that is a member of the index is involved in the alter
command. The command also drops and re-creates the index for a table whose index type has changed (for example, a unique index is changed to a primary key). See the database API error messages that precede this error message to determine why the creation of the index failed.
ERROR: Insufficient temporary value entries -- contact Customer Support
This message only occurs when there is an OCCURS table filter present for the given table. It indicates that the filter you are using is too complex for the program to handle. Contact Customer Support to get a new version of the makefilter utility that has a larger temporary value array. Alternatively, consider simplifying your filter statement.
ERROR: Internal error, undefined column type number for colname encountered in Process_Archive_Record()
This message indicates an internal error in the reload
command. Contact Customer Support unless you modified the archive file created by the client.
ERROR: Invalid boolean argument specified for {sched | verbose} command
The command entered in the command line console contains an invalid boolean argument. A boolean argument is of the form {yes | no} or {true | false}. This applies to the sched
and verbose
commands.
ERROR: Invalid concatenation for item 'colname' in table 'tabname'
The client only supports concatenation of two ALPHA items or two unsigned NUMBER items. You can also use a NUMBER item that is cloned as ALPHA in place of an ALPHA item, or an ALPHA item that is cloned as a NUMBER in place of a numeric item. Any other combination results in this error message being displayed by the define
and redefine
commands.
ERROR: Invalid database update type number received
This message can occur during a process
or a clone
command. It indicates that the Databridge host software returned an undefined update type (for example, an update type that is not CREATE
, DELETE
, MODIFY
, STATE
, MODIFY_BI
, MODIFY_AI
, LINK_AI
, DELETE_ALL
, DOC
, or STATE
). To get more information on this message, you must get a trace of DBServer communications (-t 0x45) and send the trace to Customer Support.
Note
An alternative is to turn off cloning (set the active
column to 0) for the offending data set.
ERROR: Invalid length specification for datatype 'name' in external column 'colname'
This message, which occurs when text-based configuration files are being processed, indicates that the sql_length
specification in the given external_column
parameter is invalid for the corresponding data type.
ERROR: Invalid SQL type 'name' for external column 'colname'
This message, which occurs when text configuration files are being processed, indicates that the parameter sql_type
in the given external_column
specification is invalid.
ERROR: Invalid stop_time or end_stop_time value dddd, dddd, values set to 0
This message can occur during a process
or clone
command if the controlled_execution
configuration parameter is enabled. It indicates that the program detected an error in the values of the stop_time
and end_stop_time
columns it read from the DATASOURCES control table. These values are integers that represent a time of day using 24-hour time (hh:mm format). If hh is not in the range 0–24 or mm is not in the range 0–59, both entries are set to 0 and this error appears.
Note
This method of stopping the client will not be supported in future releases of Databridge; use the blackout_period
parameter instead.
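A hedged sketch of setting a valid stop window, assuming the integers encode hh:mm as hhmm and that the data source key column is named data_source (verify both against your DATASOURCES table):

```sql
-- Sketch only: stop processing between 17:30 and 18:00.
UPDATE DATASOURCES
   SET stop_time = 1730,
       end_stop_time = 1800
 WHERE data_source = 'MYSOURCE';
```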
ERROR: Invalid structure index number received
This message indicates a problem on the host. In this case, the Databridge Engine is sending a DMSII structure index number that is invalid (for example, a structure index number less than 0). Structure indexes are assigned when the client sends DB_Select
requests to the Engine for each data set that is to receive updates or extracts at the start of a process
or clone
command. This index is used to associate DMSII records with the data set they belong to. A structure index of 0 is used only for STATE records; it indicates that the information applies to all selected data sets (which must all have a ds_mode
of 2). To get more information on this message, you must get a trace of DBServer communications (-t 0x45) and send it to Customer Support.
ERROR: Invalid value nnn for dms_concat_num column in DATAITEMS for table 'tabname' item 'colname'
This message indicates that the client detected a bad item number in the dms_concat_num
column. While this error can occur during any client command, it typically happens when loading the control tables. The most likely cause of this error is a non-existent item number in the dms_concat_num
column. After a DMSII reorganization or when changes occur in GenFormat, DMS item numbers can change. If you use hard-coded numbers in your user scripts, you may end up concatenating different columns than the ones you originally specified. Use subqueries in your user scripts instead of hard-coded numbers.
Fix your user script and force a redefine
command to make the change take effect. Set status_bits
to 8 for the data set in question or use the -R
option for the redefine
command.
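A hedged sketch of the recommended approach: instead of hard-coding an item number into dms_concat_num, resolve it by name with a subquery so the script survives item-number changes. The DMS_ITEMS table and dms_concat_num column come from the text; the dataset_name, dms_item_name, and dms_item_number column names and the data values are illustrative and may differ in your control-table layout:

```sql
-- Fragile: breaks if item numbers shift after a DMSII reorganization
-- UPDATE DMS_ITEMS SET dms_concat_num = 17 WHERE ...

-- Safer: look the item number up by name when the script runs
UPDATE DMS_ITEMS
   SET dms_concat_num = (SELECT dms_item_number      -- hypothetical column names
                           FROM DMS_ITEMS
                          WHERE dataset_name  = 'CUSTOMER'
                            AND dms_item_name = 'CUST-SUFFIX')
 WHERE dataset_name  = 'CUSTOMER'
   AND dms_item_name = 'CUST-ID';
```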
ERROR: IO errors in writing {bcp format | sqlldr control | pgloader control | script} file "name", errno=number (errortext)
This message can occur during a generate
or createscripts
command. It indicates that an error occurred while writing to the specified file. The system error message errortext should explain why this error occurred.
ERROR: IO errors in writing file "name", errno=number (errortext)
This error message, which can occur during an unload
command, indicates that an error occurred while writing a record to the archive file whose name appears in the message. The system error should explain the cause of the problem.
ERROR: Item 'name' (dms_item_type = number) cannot be cloned
This message, which typically should not occur, indicates that the specified dms_item_type
column contains a value that the program cannot handle. If you have a DATAITEMS entry whose type is GROUP (29), attempting to set the active column to 1 results in this error, unless the sql_type
of the item is set to date.
ERROR: Item 'name' (type nnn) cannot be tested for NULL as it is not nullable in DMSII
This message only occurs when there is an OCCURS table filter present for the given table. It indicates that the binary filter file "dbfilter.cfg
" contains a test for NULL for an item that is not nullable in DMSII. You need to rewrite the filter statement for this table, as what you are trying will not yield the results expected.
ERROR: Item 'name' in control table 'name' has an illegal sql_type of nnn
This message, which is not likely to be seen, indicates that when attempting to set up the host variables for updating a control table, a column with an invalid SQL data type code was encountered.
ERROR: Item name in DataSet name[/rectype] points to a non-existent or inactive DataSet (strnum=nnn)
This message only applies when the configuration parameter enable_dms_links
is enabled. It indicates that the specified item points to a non-existent or inactive data set. This situation is clearly an error, as the link must point to a valid table. You must either set the active column to 0 in the DMS_ITEMS table for the link in question, or set the active column to 1 in the DATASETS table for the target data set of the link.
ERROR: Item 'name' in table 'name' cannot be cloned as three booleans
The program clones DMSII NUMBER(1) items as a field of three Booleans when the bit DIOPT_Clone_as_Tribit
(16) in the di_options
column of DMS_ITEMS is set. If you try to use this option with a DMSII NUMBER whose length is not 1 you will get this error.
ERROR: Item 'name' in table 'name' cannot be cloned using a TIME data type, {record will be discarded | time set to NULL} - Keys: colname = value, ...
This message occurs if you try to clone a DMSII TIME(12), represented by a dms_subtype
value of 4, as a SQL Server time
data type. This is not supported; use a numeric time instead.
ERROR: Item 'name' in table 'name' cannot be flattened to a string, result is too long
This message only occurs when you try to flatten a single item with an OCCURS clause to a string. It indicates that the resulting column is too long for a char
or varchar
data type. You need to consider other ways of dealing with this particular OCCURS clause.
ERROR: Item 'name' in table 'name' contains an illegal numeric value val, {record will be discarded | date set to NULL | time set to NULL} - Keys: colname = value, ...
This message indicates that the client encountered an invalid number. The program treats numbers whose digits are all 0xF as NULL; any other digit whose value is not 0–9 (other than the sign) is treated as bad. A bad number is stored as NULL unless the item is a key, in which case the record is discarded. Note that if the bit DAOPT_Allow_Nulls (1) in the da_options column of DATAITEMS is not set, the number is stored as all nines (9) or all zeros (0), depending on the setting of the configuration parameter null_digit_value.
ERROR: Item 'name' in table 'name' contains an illegal numeric value val, {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message indicates that the client detected an invalid number while processing a DMSII item as a date. This error can occur when the DMSII data is bad or when the item is not an actual DMSII date. For details on interpreting DMSII items as date values, see Decoding DMSII Dates, Times, and Date/Times in the Databridge Client Administrator's Guide.
ERROR: Item 'name' in table 'name' contains an invalid time value hh:mm:ss, {record will be discarded | time set to NULL} - Keys: colname = value, ...
This message indicates that the client detected an invalid number while processing a DMSII item as a time value. This error can occur when the DMSII data is bad or when the item is not an actual DMSII time. For details on interpreting DMSII items as time values, see Decoding DMSII Dates, Times, and Date/Times in the Databridge Client Administrator's Guide.
ERROR: Item 'name' in table 'name' has an illegal {dms_item_type | dms_subtype} value of nnn
This message indicates that the createscripts
command encountered a DATAITEMS table entry that has a bad value in its dms_item_type
column or an item that is an external column (dms_item_type
= 258) whose dms_subtype
value is bad. This is the result of bad user scripts. You need to fix this before proceeding any further, as it will cause the client to fail.
ERROR: Item 'name' in table 'name' is not an unsigned NUMBER or an ALPHA item, flattening to a string is not supported
This message only occurs when you try to flatten a single item with an OCCURS clause to a string. It indicates that the item is not an unsigned NUMBER or an ALPHA item, which are the only two data types for which flattening to a string is supported.
ERROR: Item 'name' in table 'name' points to non-existent DMS item numbered nnn
This message, which can occur during a createscripts
command, indicates that the item in question is in error. This is the result of bad user scripts. The most likely cause of this error is that the item number in the dms_concat_num
column does not exist. You need to fix this before proceeding any further, as it will cause the client to fail. If you use hard-coded numbers in your user scripts, you may end up concatenating different columns than the ones you originally specified. Use subqueries in your user scripts instead of hard-coded numbers.
ERROR: Item 'name' in table 'name' which is a member of a date group is not an integer value, {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message appears when you have defined sql_type
to a date data type and dms_subtype
to 1, 2, 3, or 4, but there is a non-numeric member in the DMSII GROUP. The DMSII date GROUP can contain only numeric fields that must be appropriately identified as year, month, and day. For more information on setting the dms_subtype
column for dates, see Getting Started in the Databridge Client Administrator's Guide.
ERROR: Item name, which is a member of a SET used in a link, is not present in DataSet name[/rectype]
This message only applies when the configuration parameter enable_dms_links
is enabled. It indicates that the specified item which is a member of a SET used in a link is not present in the target data set. Make sure that the item's active
column is not set to 0 in the target data set.
ERROR: Last Database Error = number
This message is printed when a SQL error occurs while processing updates. This message is followed by the actual SQL statement that provoked the error.
ERROR: Length of 'letter' option argument exceeds maximum of nnn characters
The client performs length checking for the command line switches that have arguments that are strings. When the strings are longer than their assigned limits, the client displays this error. The command line switches involved (and their corresponding maximum length) are:
- -D (30 characters)
- -O (30 characters)
- -P (30 characters)
- -S (128 characters)
- -X (17 characters)
ERROR: Link offset 0xhhhhhhhhhhhh for DataSet name[/rectype] in LINK_AI image is less than the offset of the first link in DATAITEMS 0xhhhhhhhhhhhh
This error, which only occurs during the data extraction of data sets that have links, indicates that the LINK_AI data is malformed. The client combines the CREATE record and LINK_AI data to form a single data record that can be bulk loaded. However, since the offset of the link data is wrong, it would end up overwriting some of the non-link data. Contact Customer Support if you get this error.
ERROR: Load of Databridge control tables failed
This message can occur during any client command except for dbutility configure
, refresh
, runscript
, and tcptest
. It indicates that an error occurred while reading the control tables. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: Log file prefix is too long, maximum allowable length is 20, value truncated
You can specify a prefix for the client log files in the client configuration file; however, this prefix is limited to 20 characters. If you use a longer prefix, you get this error. The default prefix is “db”. We recommend using the data source name as the prefix when you have more than one data source.
ERROR: Logswitch command failed
The logswitch command, which can be issued from the Console, closes the current log file and opens a new one with a different name. See the error messages that precede this message in the log file to see why the command failed.
ERROR: Mask index value nn out of range
(SQL Server client only) The parameters for the random
and partial
masking functions are stored in the client configuration using the masking_parameter
array which can hold 100 entries. The index into this table is stored in the low half of the masking_info
column of the DATAITEMS control table entry for the item. If this value exceeds 100 you get this error, which is caused by a bad user script. Fix the user script and run a redefine
command with the -R
option followed by a generate
command with the -u
option to fix this problem.
ERROR: Mask type value nn out of range
(SQL Server client only) This message indicates that the masking type, which resides in the low 8-bits of the masking_info
column of the DATAITEMS table has a value that is not in the range 0 to 4. It is an indication that you have a bad user script. Fix the user script and run a redefine
command with the -R
option followed by a generate
command with the -u
option to fix this problem.
ERROR: Masking function 'func' does not support parameters, masking string ignored
(SQL Server client only) This message indicates that the masking function, whose type resides in the low 8 bits of the masking_info
column of the DATAITEMS table, does not take parameters. This includes the “default” and “email” masking functions. Fix the user script to zero the rest of the entry and run a redefine
command with the -R
option followed by a generate
command with the -u
option to fix this problem.
ERROR: Maximum bcp errors threshold exceeded for table 'tabname', load aborted
(SQL Server client only) This message indicates that the table in question has gotten more discards than the configured threshold while issuing bcp_sendrow
calls into the BCP API. At this point the client closes the bcp connection and silently discards any additional records received for this table, which will then need to be re-cloned.
ERROR: Merge of neighboring items only valid for ALPHA and unsigned NUMBER - unable to merge items 'name1' & 'name2'
The client merges two neighboring items of like type to form a bigger item when the di_options
bit DIOPT_MergeNeighbors
(0x1000000) is set in the DMS_ITEMS table entry for the first item. This feature is only supported for items of type ALPHA or unsigned NUMBER. If you try to merge any other type of items you will get this error.
ERROR: Mismatched AFN values for reorganized DataSets: 'name1' AFN = afn1 and 'name2' AFN = afn2
This message, which can occur during a redefine
command, indicates that not all the data sets to be reorganized have the same AFN value in their State Information. Most likely, the value of the active
column was changed for one of the data sets. If this is the case, you will also need to set the ds_mode
column to 0 for the data set whose active column was changed to 1.
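For example, if the active column for a data set was just changed to 1, its ds_mode can be reset with a statement along these lines. The DATASETS table and the active and ds_mode columns come from the text; the dataset_name column and its value are illustrative:

```sql
-- Reset the mode of the newly activated data set so it gets re-cloned
UPDATE DATASETS
   SET ds_mode = 0
 WHERE dataset_name = 'ORDERS';  -- hypothetical column name and value
```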
ERROR: Missing END operator in filter for table 'name'
This message only occurs when there is an OCCURS table filter present for the given table. It indicates that the binary filter file "dbfilter.cfg
" contains a filter that does not end in an END operator. This indicates that an internal error occurred in the makefilter utility that should never happen; contact Customer Support if you get this error.
ERROR: Missing entry points in DLL "name"
(SQL Server client only) When using the BCP API, the Databridge client loads the ODBC DLL and sets up a table of addresses through which it makes the BCP API calls. This allows the client to work with the version of the ODBC DLL that supports the features needed. Microsoft includes the SQL Server version number in the DLL name, so we cannot link to the DLL directly, as its name changes with every version of the ODBC driver.
This message indicates that one of the BCP API entry points that we need is not present in the ODBC driver being used. Use an ODBC driver version 17.4 or newer.
ERROR: Missing length specification for SQL type sql_type (nn) in external column 'colname'
This error, which can occur during text configuration file processing, indicates that the sql_length
specification for the given external_column
parameter is missing or 0.
ERROR: Missing section header "[topics]" in topic configuration file "name"
This message, which is limited to the Kafka client, indicates that the section header "[topics]" is not the first non-blank, non-comment line in the topic configuration file.
ERROR: name command can only be followed by an optional alphanumeric argument
ERROR: name command requires a {boolean | decimal numeric | valid | valid numeric | valid string} argument
These messages are responses to bad input from the dbutility command line console for a command, which requires the given type of argument. Boolean arguments can be True or False (or “T” or “F”). Decimal arguments cannot be entered as hexadecimal values. Numeric values can be decimal or hexadecimal numbers. Hexadecimal numbers must be prefixed by “0x”. String arguments are typically text and optionally enclosed in double quotation marks. You must use double quotation marks when the data contains a non-alphanumeric character.
ERROR: Name 'name' is not a valid {user | group} name
This message, which only applies when file security is enabled, indicates that the system call to get the security ID for a user ID or group name failed. The client uses this call to determine whether the user is allowed to run the client and when setting the ACL for a file or directory it creates. If you need to change the file security settings for the client, use the “setfilesecurity” program.
ERROR: No active structures were found
This message can occur during a process
or clone
command. It indicates that no data sets are selected for cloning or updating. In other words, the active columns in the DATASETS client control table are set to 0 (cloning off). This situation could occur when you use a SQL statement to change the value for the active column, but you do not use a WHERE clause.
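This typically happens when an UPDATE of the active column omits the WHERE clause and inadvertently turns off every data set. A hedged sketch of the difference (the dataset_name column and its value are illustrative):

```sql
-- Disables cloning for ALL data sets (leads to this error):
-- UPDATE DATASETS SET active = 0;

-- Disables cloning for a single data set only:
UPDATE DATASETS
   SET active = 0
 WHERE dataset_name = 'HISTORY';  -- hypothetical column name and value
```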
ERROR: No configuration file name specified as an argument to the 'f' option
This message indicates that the -f
option is not followed by a file specification. You cannot specify a null configuration file by omitting the argument for the -f
option.
ERROR: No data received from {DBServer | DBEnterprise Server} for nnn minutes, aborting Client
This message is displayed when the parameter max_srv_idle_time
is set to a non-zero value. It indicates that no data was received from the server for the specified amount of time and the client is about to stop. The client will exit with an exit code of 2059 after resetting the connection to the server. When using the service, the service will attempt to restart the client after a brief delay.
ERROR: No item with dms_subtype set to 254 was found in table 'name'
This message will only occur when you replicate DMSII embedded subsets using virtual data sets. It indicates that the virtual data set is not properly defined. A value of 254 in the dms_subtype column of the item is used to indicate that the item is a parent key (that is, it contains the AA Value of the parent item). The virtual data set in question thus implements the embedded subset.
ERROR: No usable link data found in LINK_AI record, table 'name'
This error indicates that, when processing a LINK_AI image containing links that have an OCCURS clause, none of the links had any usable link data, which is an indication that the link data is not valid. Contact Customer Support if you get this error.
ERROR: Non-link item 'name' in table 'tabname' cannot follow link items, command aborted
When using DMSII links, the client requires that all links be placed at the end of the table. Adding non-DMSII columns to the end of the table results in this error. To rectify this problem, you need to change the value of the item_number
columns of the links in DATAITEMS so that they land after all non-link items.
ERROR: Null Record file does not contain an entry for DataSet name[/rectype]
This message, which can occur during a process
or clone
command, indicates that the null record entry for the specified data set is missing from the file. The most common cause of this error is enabling the configuration parameter read_null_records
for a data source that was already replicated. To rectify this problem, run a redefine
command with the -R
option to rebuild the null record file.
ERROR: Null Record file "datasource_NullRec.dat" is corrupt
This message, which can occur during the define
and redefine
commands, indicates that the specified file is corrupt because the client could not locate a record that is supposed to be in the file. To rectify this problem, re-run the redefine
command with the -R
option to rebuild the null record file.
Note
If you reload the control tables from the unload file that the command creates and rerun the command, you will most likely get this message, which you can safely ignore.
ERROR: Open failed for archive file "name", errno=number (errortext)
This error can occur during a client reload
command or a DBClntCfgServer configure
command that drives the Administrative Console's Customize
command. It indicates that the program got a system error when trying to open the archive file.
ERROR: Open failed for create {table | index} suffix[nn] file "filename" for table 'name', errno=nnn (errortext)
This message can occur during a generate
command when the suffix in question starts with "#add ". This is used to indicate that the suffix should be retrieved from the specified file in the scripts
directory. The most likely cause of this error is that the provided file name is incorrect. Update the configuration file and rerun the generate
command.
ERROR: Open failed for file "name", errno=number (errortext)
This message can occur during any command that attempts to open a new file for writing. The included system error should explain the cause of the problem. To implement file security on Windows, all such file opens use common code that creates the file using the Windows CreateFile procedure, which allows a DACL that defines the file security to be supplied.
Once the file is created with the proper security, we close it and reopen it using ANSI C library procedures. You will get this error in the unlikely situation where the create or reopen call fails. In the case of UNIX clients, all such file open operations go through common code that displays this message in case of error.
ERROR: Open failed for filter file "name", errno=number (errortext)
This message only occurs when there is an OCCURS table filter file "dbfilter.cfg"
present in the config
subdirectory for the data source. It indicates that the attempt to open the filter file for reading failed. The provided system error number and its associated text string should give you some clues about why this error occurred. You can try deleting the binary filter file and rerunning the makefilter import
command to recreate the file.
ERROR: Open failed for global configuration file "/etc/MicroFocus/Databridge/7.1/globalprofile.ini"
To run UNIX clients, you must first create the file /etc/MicroFocus/Databridge/7.1/globalprofile.ini. This file serves the same purpose for the client as the Windows registry. It defines the directories where the software is installed and the global working directory where the client lock files are created in the locks subdirectory. This file also defines the userid under which the daemon runs. If you do not create this file, the client displays this error and exits.
ERROR: Open failed for Null Record file "name", errno=number (errortext)
This message indicates that the client failed to open the file datasource_NullRec.dat
. The most common cause of this error is enabling the configuration parameter read_null_records
for a data source that was already replicated. To rectify this problem, run a redefine
command with the -R
option to rebuild the null record file.
ERROR: Open failed for pipe for shell "filename", errno=number (errortext)
This message, which applies to UNIX, can occur during a process
or clone
command. It indicates that the program could not open the shell script file which launches the bulk loader (SQL*Loader in the case of Oracle and pgloader
in the case of PostgreSQL). The included system error should explain the cause of the problem.
ERROR: Open failed for script file "name", errno=number (errortext)
Example: "ERROR: Open failed for script file "script.user_define.customer", errno=2, (No such file or directory)"
This message can occur during a process
, clone
, generate
, refresh
, runscript
or createscripts
command. It indicates that the client cannot find or open the specified script file. This error typically occurs if the dbscripts
sub-directory does not contain the scripts or the user_scripts_dir
parameter is not properly set up. Make sure that your scripts are in the directories where they are expected to be (dbscripts
or scripts
) for this data source and that you have not inadvertently deleted any script files. If you ran a createscripts
command before the error occurred, the user_script_dir
parameter may point to a nonexistent directory.
ERROR: Open failed for topic configuration file "topic_config.ini", errno=number (errortext)
This message, which is limited to the Kafka client, indicates that the client cannot find or open the file in question in the config
directory for the data source. Make sure that you ran a generate
command to create this file.
ERROR: Open_Stmt failed for thread[nn]
This error, which is limited to multi-threaded updates, is usually caused by a SQL error while creating the STMT. An STMT is a data structure that is used to execute pre-parsed SQL statements using host variables to pass the data. It could also be the result of a memory allocation error. See the preceding error message for details about the problem.
ERROR: Operand stack {overflow | underflow}-- contact Customer Support
This message only occurs when there is an OCCURS table filter file "dbfilter.cfg"
present in the config
subdirectory for the data source. It indicates that an internal error has occurred while executing the filter pseudo-code. Try to recreate the filter using the makefilter utility and, if that fails, contact Customer Support.
ERROR: Operands for logical operator {AND | OR} are not both boolean values
This message is an internal error that only occurs when there is an OCCURS table filter file "dbfilter.cfg"
present in the config subdirectory for the data source. It indicates that the operands for an "AND" or "OR" operation are not both boolean values. Examine the filter source file "dbfilter.txt"
to see if you can spot anything wrong with the filter statement for the given table. If you can see the problem, fix the statement and recompile the filter using the makefilter utility. If that fails or you cannot see anything wrong with the filter command, contact Customer Support.
ERROR: Parent DataSet for table number nnn referenced in filter does not have the ds_options "DSOPT_FilteredTable" bit set - Run the makefilter utility to create a new filter file that is current
This message is an internal error that only occurs when there is an OCCURS table filter file "dbfilter.cfg"
present in the config subdirectory for the data source. It indicates that the binary filter and the client control tables are not in sync. Try recreating the filter by either running the makefilter utility's import
command or running a redefine
command with the -R
option (Redefine All) to remedy this situation. This situation should never occur, as makefilter is launched automatically whenever you run a redefine
command or the Administrative Console's Customize
command. However, you need to pay attention to cases where the program is unable to compile the filter due to errors in the filter's source file. Look in the makefilter log file to determine why the compile failed.
ERROR: Parser table in error
This is an internal error indicating that the parser table for the dbutility console command is in error. Contact Customer Support if you get this error.
ERROR: Partial load failed for table 'name' using bcp - see file "bcp.tablename.log" for more information
ERROR: Partial load failed for table 'name' using sql*loader - see file "sqlld.tablename.log" for more information
ERROR: Partial load failed for table 'name' using pgloader - see file "pgloader.tablename.log" for more information
These messages apply to Windows clients. They can occur when the bulk loader stops processing during a segmented load of a table. The size of the load segments is determined by your setting for max_temp_storage
.
ERROR: partial() masking function for item 'name' in table 'name' data type = dtype requires three parameters
(SQL Server client only) When you specify a partial masking function for a column, it must have three comma-separated parameters: a numeric prefix, a text padding pattern, and a numeric suffix. The prefix and the suffix specify the number of characters to expose at the start and the end of the string, while the padding replaces the remaining characters. This error indicates that the masking string you created does not meet these specifications.
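These three parameters mirror SQL Server's dynamic data masking partial() function. As a sketch, a masking specification of 2,"XXXX",2 on a character column would correspond to a masked column definition like the following (the table and column names are illustrative):

```sql
-- SQL Server dynamic data masking: expose the first 2 and the last 2
-- characters and replace everything in between with "XXXX"
CREATE TABLE customer (                       -- hypothetical table/columns
    cust_id    int NOT NULL,
    cust_phone varchar(20)
        MASKED WITH (FUNCTION = 'partial(2,"XXXX",2)')
);
```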
ERROR: partial() masking function is illegal for item 'name' in table 'name' data type = dtype
(SQL Server client only) This error indicates that you are trying to use the partial masking function for a column whose data type is not char
or varchar
.
ERROR: Pass1 of two pass modify failed (return code = ddd)
This message, which can occur during a process
or clone
command, indicates that an error occurred while processing an update for an item with an OCCURS DEPENDING ON clause. The value of the item pointed to by the OCCURS DEPENDING ON clause changed. The client first updates the rows that remain in the table. It then inserts new rows or deletes rows that no longer exist from the OCCURS table. The following values indicate the status of updating the tables, and the client handles them appropriately:
- 6 - rows to insert
- 7 - rows to delete
- 8 - one of the updates found no matching rows
If any other value is returned, it is probably an internal error or some other error that caused the client to return an unexpected status. Contact Customer Support if this error is not caused by another error during the process of updating the table.
ERROR: Prepare failed for SQL statement: sql_stmt
This message can occur during a process
or clone
command. It indicates that an error occurred while parsing a SQL statement for updating data or control tables. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: Processing of configuration file "name" failed
This message occurs any time the client finds an error in the text-based configuration file that causes it to terminate.
ERROR: Program terminating, due to bulk loader failure
This message can occur during a process
or clone
command. It indicates that there was a bulk loader failure during data extraction and that the client is terminating. When you set the verify_bulk_load
parameter to True, a bulk loader error causes the client to abort the clone. This avoids having to extract all the data for a data set, only to find out that the bulk loader count verification failed, which causes the client to eventually abort.
ERROR: Program terminating, error occurred in a worker thread
This message indicates that a fatal error occurred in an Update Worker thread causing the program to terminate. This message is displayed by the main thread once it realizes that such an error has occurred. To determine what's causing the problem in the Update Worker thread, look for the additional error messages in the log file.
ERROR: pthread_create failed for 'name', error=number (errortext)
(UNIX) This error indicates that the system could not create the specified thread. The Index Creator Thread creates indexes for tables whose data extraction completes successfully. The Watchdog Timer Thread performs periodic checks for things such as lack of response from the server. When using multi-threaded updates, several Update Worker Threads are responsible for executing SQL.
This is an internal error that should never occur. It is an indication that the system might be low on resources.
ERROR: pthread_mutex_init failed for 'name', errno=error (errortext)
(UNIX) This error indicates that the initialization of a mutex in question failed. It is an internal error that should never occur.
ERROR: QueryPerformanceCounter failed, error=number (errortext)
(Windows only) This is an internal error that indicates that the client was unable to retrieve the current value of the performance counter, which it uses for time-interval measurements. Contact Customer Support if you get this error.
ERROR: QueryPerformanceFrequency failed, error=number (errortext)
(Windows only) This is an internal error that indicates that the client was unable to get information about the granularity of the external clock that we use for time-interval measurements. Contact Customer support if you get this error.
ERROR: quit after command requires an audit file number in the range 1 to 9999
This message can occur during a process
or clone
command and indicates that the AFN for the QUIT
command is invalid.
ERROR: quit command not in the form: "QUIT {AT hh:mm | AFTER nnnn}"
This message indicates that there is a syntax error in the dbutility console QUIT
command issued by the operator.
ERROR: random() masking function for item 'name' in table 'name' data type = dtype requires two parameters
The random() masking function takes two numeric parameters that represent the low and high values for the range of values generated. Failure to supply two parameters in the corresponding entry in the client configuration file results in this error.
ERROR: random() masking function is illegal for item 'name' in table 'name' data type = dtype
(SQL Server client only) This error indicates that you are trying to use the random masking function for a column whose data type is not a numeric value (for example int or decimal).
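This guide does not document the client's internal random number generation, but a range-bounded random integer of the kind random(low, high) produces can be sketched in T-SQL (the range values below are illustrative):

```sql
-- Illustrative only: a common T-SQL idiom for a random integer in
-- the inclusive range [@low, @high], similar in spirit to what the
-- random() masking function generates for numeric columns.
DECLARE @low int = 1000, @high int = 9999;
SELECT ABS(CHECKSUM(NEWID())) % (@high - @low + 1) + @low AS masked_value;
```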
ERROR: Read failed for file "name", errno=number (errortext)
This message, which only displays on the screen, indicates that the log descriptor file "log.cfg" in the config
folder could not be read. This file is a tiny binary file used by the client to keep track of the log file name. If the error persists, simply delete this file and let the client create a new one. The most likely source of this error is a file ownership conflict between the command line client and the service. See the system error in this message for more information about why this error occurred.
ERROR: Read failed for Null Record file, errno=number (errortext)
This message, which can occur during a process
, clone
, or redefine
command, indicates that an I/O error occurred while reading the null record file. To rectify this problem, try running a redefine
command with the -R
option to rebuild the null record file.
ERROR: Read failed for trace descriptor file "name", errno=number (errortext)
The trace descriptor file trace.cfg
, which resides in the config
folder, is a tiny binary file used by the client to keep track of the trace file name. If the error persists, simply delete this file and let the client create a new one. The most likely source of this error is a file ownership conflict between the command line client and the service. See the system error in this message for more information about why this error occurred.
ERROR: ReadFile failed for console, error = number
This message indicates that the Console thread for the Windows client received a read error while reading keyboard input. You typically get this error message when the client is terminated
by pressing Ctrl+C
.
ERROR: Real data set link for data set name[/rectype] is NULL, make sure that automate_virtuals is true
In a MISER database, user scripts create a pointer that links the virtual data sets and the real data sets from which they are derived using the virtual_ds_num
, real_ds_num
and real_ds_rectype
columns in the DATASETS control table. If the configuration parameter automate_virtuals
is not enabled, this pointer is not set up and executing a createscripts
command returns this message.
ERROR: Received an invalid structure index for DataSet name[/rectype]
This message can occur during a process
or clone
command. It indicates that DBServer or Enterprise Server returned a negative structure index in the response packet for a DB_Select RPC call. When this occurs, set the active
column to 0 in the DATASETS table for the specified data set and try again. If the error persists, contact Customer Support.
ERROR: Refresh command failed
This message indicates that dbfixup set a bit in the data source's status_bits
column to indicate that there are OCCURS tables present. Upon seeing this bit a process
or clone
command initiates a refresh
command to get the stored procedures z_tablename created. These stored procedures are used to speed up delete operations for such tables. Rather than deleting the rows of secondary table for a given key one by one, we delete them all in a single SQL statement using this stored procedure. This message indicates that the launched command failed, look at the client log file for clues for clues about why the command failed.
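The exact procedure body the client generates is not shown in this guide, but the idea behind z_tablename can be sketched as follows (all names are hypothetical):

```sql
-- Hypothetical sketch: delete all OCCURS rows of a secondary table
-- for one parent key in a single statement, instead of issuing a
-- separate DELETE for each occurrence.
CREATE PROCEDURE z_customer_phones @parent_key int
AS
    DELETE FROM customer_phones WHERE parent_key = @parent_key;
```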
ERROR: Refresh of stored procedures failed for DataSet name[/rectype]
This message, which can occur during a reorganize
command, indicates that the client was unable to drop and recreate the stored update procedures that are associated with the tables for the data set. See the preceding SQL error to determine what caused the error.
ERROR: ReleaseSemaphore failed for 'name', error=number (errortext)
This message, which can occur during a process
or clone
command for Windows clients, indicates that an error occurred while attempting to post the semaphore (either the bcp_work_semaphore
or the index_work_semaphore
, which pass work items to the corresponding threads during the data extraction phase). This is a system error which should not occur under normal circumstances. Reboot Windows.
ERROR: Resequencing DATAITEMS table entries failed for 'itemname' of table 'tabname'
The define
and redefine
commands resequence DMSII links to always appear at the end of data tables. This happens because, during data extraction, data for the links is received as a separate record and must be added to the previous record that contained the data part of the data set record. This message indicates that a SQL error occurred during the resequencing. See the relational database API message that precedes this message (onscreen or in the log file) for more information.
ERROR: RPC response length of dddddd (0xhhhhhhhh) is too large
This message, which is very unlikely to occur, indicates that an RPC response packet has a bad length word. All RPC responses are preceded by a 4-byte length. This error indicates that the message is too long to be valid. In the unlikely event that you get this error, simply restart the client. If you still get the error, contact Customer Support.
ERROR: Schema generation failed
This message, which is limited to the Kafka client running a generate
command, indicates that the generation of the files that define the schema for the tables managed by the client failed. Check the error messages that precede this one. The only possible cause is that the client got an error opening or writing a file.
ERROR: Script generation failed
This message can occur during a generate
command. It indicates that the scripts could not be generated. Typically, this message is preceded by other more explanatory messages.
ERROR: Scripts for DataSet name[/rectype] are not current; you must first run a generate command
This message, which can occur at the start of a process
or clone
command, indicates that the program believes that you need to run a generate
command. The DS_Needs_Generating
bit of the status_bits
column of the DATASETS entry is used to keep track of this. As stated, run a generate
command before going any further.
ERROR: Scripts must reside within the global Databridge Client working directory
This error, which can only occur when file security is enabled, indicates that the user scripts are not in a subdirectory of the client working directory. You need to place your scripts either in the scripts
subdirectory of the data source’s working directory or in a subdirectory of the client working directory when these scripts are shared among various data sources (in this case you could name this directory “userscripts” and set the user_scripts_dir
parameter in the client configuration file to point to it).
ERROR: Select of DATATABLES failed
This message can occur during a define
or redefine
command. It indicates that the SQL SELECT statement used to get the external table names from the control table DATATABLES failed. See the relational database API message that precedes this message (on the screen or in the log file) for more information.
ERROR: sem_init failed for 'name', error=number (errortext)
(UNIX) This error indicates that the initialization of a semaphore failed. The client uses several semaphores to synchronize activities between the various threads. This is an internal error that should never occur, unless the system is low on resources.
ERROR: Send_DS_Added_Msg() failed
This is an internal error that indicates that the client was unable to send an IPC message (about a data set that was added) to the service for forwarding to the Administrative Console. The most common cause of the error would be a network error.
ERROR: Send_DS_Deleted_Msg() failed
This is an internal error that indicates that the client was unable to send an IPC message (about a data set that was deleted) to the service for forwarding to the Administrative Console. The most common cause of this error would be a network error.
ERROR: Send_DS_Mode_Chg_Msg() failed
This is an internal error that indicates that the client was unable to send an IPC message (about a data set whose mode was changed) to the service for forwarding to the Administrative Console. The most common cause of this error would be a network error.
ERROR: Send_IPC_Alert() failed
This is an internal error that indicates that the client was unable to send an IPC message, containing an alert, to the service for forwarding to the Administrative Console. The most common cause of this error would be a network error.
These IPC messages are used by the Administrative Console to send e-mails informing the DBA that there is a problem with the client operations.
ERROR: Send_IPC_Message() failed
This is an internal error that indicates that the client was unable to send an IPC message to the service for forwarding to the Administrative Console. The most common cause of this error would be a network error.
ERROR: Send_IPC_Response() failed
This is an internal error that indicates that the client was unable to send a response to an IPC message to the service for forwarding to the Administrative Console. The most common cause of this error would be a network error.
ERROR: Set of DATEFORMAT failed
This message applies to the client for Microsoft SQL Server. It indicates that the attempt to override the database server's default date format was not successful. For more information, see the ODBC message that precedes this message (onscreen or in the log file).
ERROR: Set of nocount off failed
This message applies to the client for Microsoft SQL Server. It can occur when you first start the client and it indicates that the attempt to enable row counts by executing the SQL statement SET NOCOUNT OFF
failed.
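With NOCOUNT off (the SQL Server default), each DML statement reports how many rows it affected; the client depends on these counts. A minimal illustration (the table name is hypothetical):

```sql
-- SET NOCOUNT OFF makes the server return the "rows affected" count
-- for each statement, e.g. "(1 row affected)" in a query tool.
SET NOCOUNT OFF;
UPDATE mytable SET amount = 0 WHERE id = 1;
```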
ERROR: SetEntriesInAcl() returned number (errortext)
(Windows only) This message can occur when the client tries to create an ACL that is used when creating a file or a directory. It indicates that the Windows procedure, which converts an array of security entries into an ACL, failed. The accompanying error number and error text should help determine what is causing this problem. The most likely source of this error is that the userid under which the client is running does not have the proper permissions to be able to create an ACL. You may want to temporarily revert to using the default security, until you get this problem resolved.
ERROR: SHCreateDirectoryEx failed for file "name", errno=number (errortext)
(Windows only) This message can occur when the command line client tries to create the working directory and the operation fails. The accompanying error number and error text should help determine what is causing this problem. The most likely source of this error is that the user id under which dbutility is running does not have the proper permissions to be able to create the working directory.
ERROR: Source record missing in unload file "name"
This message, which can occur during a DBClntCfgServer configure
command (which is not the same as a dbutility configure
command), indicates that the unload file that is being used to hold the backup copy of the control tables does not have a source record (S, …) immediately following the version record (V, …). It is an indication that the unload file is corrupt. If you haven't modified this file, contact Customer Support.
ERROR: sp_recompile failed for table 'name'
This message applies to Microsoft SQL Server. It can occur during a process
or clone
command after an index for a table is created. The sp_recompile
stored procedure informs the relational database that all the procedures associated with the table should be recompiled at the next execution. This ensures that the query plans associated with the tables use the index that was just created.
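On SQL Server, the equivalent call can be issued manually from a query tool (the table name below is illustrative):

```sql
-- Mark every stored procedure that references the table for
-- recompilation at its next execution, so new query plans can
-- take advantage of the freshly created index.
EXEC sp_recompile N'dbo.mytable';
```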
ERROR: SQL operation [for table 'name'] timed out (elapsed time eee), aborting query
This message is returned by the watchdog timer thread when the client's wait time for a SQL operation to complete reaches the secondary threshold specified by the parameter sql_exec_timeout
. (A value of 0 disables this timeout.)
If the table name is known to the client, it is included in the message. The value eee is expressed in the appropriate units based on its value (e.g., 15 minutes). When this situation occurs, the client stops with an exit code of 2058.
ERROR: SQLAllocHandle(SQL_HANDLE_ENV) failed
This error, which can occur with any ODBC or CLI clients, indicates that the ODBC SQLAllocHandle call for the environment handle failed. Check the preceding ODBC error message for more information about the reason for the failure. This error should only occur if the system is totally out of memory.
ERROR: SQLColumns for table 'name' returned a column name of NULL for the n'th column
(SQL Server only) This message can occur when you are using the BCP API and you have user columns that are added by means external to the client (such as table creation user scripts that alter the table to add the columns). The BCP API needs to bind all columns, even if the client is not using them, as they would otherwise be marked as being NULL.
Before starting the BCP API session to load such a table, the client uses this ODBC call to get the list of columns for the table and checks them. It adds column descriptors for the external columns to the end of the column list so that the bcp_bind
operation can bind all columns. This error indicates that the client was unable to get the name of the n'th column in the table. If you get this error, contact Customer Support.
ERROR: SQLGetDiagRec returned error_name
This message, which can occur in ODBC and CLI clients, indicates that an error occurred while attempting to retrieve an ODBC error. The string error_name is one of the following: "SQL_INVALID_HANDLE", "SQL_STILL_EXECUTING", "SQL_NEED_DATA", "SQL_ERROR", "SQL_SUCCESS_WITH_INFO", "SQL_NO_DATA_FOUND" or "Unknown Error(nnn)". Contact Customer Support if you get this error.
ERROR: SQL*Loader control file entry for item 'name' (dms_item_type = tt) cannot be generated
This message, which applies to the client for Oracle, can occur during the generate
command when creating the SQL*Loader control file. If an item whose dms_item_type
column contains an illegal value is encountered, the program displays this error. This message originates from exactly the same conditions as the message "ERROR: Item 'name' (dms_item_type = number) cannot be cloned"
, which can occur during a process
or clone
command.
ERROR: Stmt allocation failed for table 'name'
This message can occur during a process
or clone
command, and it indicates that the client was unable to create a STMT for processing an update to the specified table. See the database API error that precedes this message to determine the cause. The parameter aux_stmts
may be set too high for your hardware configuration; try reducing it. This is most likely an indication that you do not have enough memory on your system, or that there is a memory leak.
ERROR: system command failed for file "name", errno=number (errortext)
This message, which only applies to Windows clients, can occur during a process
or clone
command. It indicates that an error occurred while spawning a command prompt session to run the bulk loader utility ("SQL*Loader" for Oracle, "bcp" for Microsoft SQL Server and "pgloader" for PostgreSQL).
ERROR: Table definition failed for DataSet name[/rectype]
This message can occur when you run define
or redefine
command. It indicates that the control tables were not populated. For details on why this occurred, see the error messages that occurred during processing of the data set specified by name.
ERROR: Table 'name' does not contain a res_flag column
This error indicates that while processing a LINK_AI
record the client could not find any link in the table. Contact Customer Support if you get this error.
ERROR: Table name prefix for DataSource name is too long
This message is an internal error, which indicates that the tab_name_prefix
value, read from the DATASOURCES control table, is longer than 8 characters. The only way this can happen is if you alter the DATASOURCES table and increase the length of this column.
ERROR: Table 'name', which has an index defined has no key items, index script generation failed
This error occurs if you disable cloning for all the key fields in the index for the specified table. When the generate
command tries to generate the index creation script, it displays this message instead of generating a bad create index SQL statement.
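A generated index script is essentially a CREATE INDEX statement over the key columns; with cloning disabled for every key field, there would be no columns to list, as this hypothetical sketch shows:

```sql
-- Hypothetical shape of a generated index creation script; if all key
-- items were deactivated, the column list below would be empty and the
-- statement would be invalid SQL.
CREATE UNIQUE INDEX mytable_idx ON mytable (key1, key2);
```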
ERROR: Table 'name' which has links is not using my_aa as the primary_key
This error can occur during a process
or clone
command and indicates that the specified table contains link items but is not using the my_aa
column as the key. You cannot set active
=0 or item_key
=0 for the my_aa
column for a table that contains links. The AA Value is what allows the client to associate a LINK_AI
record with the corresponding record in the table, whose link needs to be updated.
ERROR: Table number nnn referenced in filter file does not exist
This message only occurs when there is an OCCURS table filter file "dbfilter.cfg"
present in the config subdirectory for the data source. It indicates that the filter contains a reference to a table number that no longer exists. The most likely cause of this error is that the filter is not current. You should delete the binary filter file "dbfilter.cfg"
from the config
sub-directory and rerun the makefilter import
command to recreate the file. If this does not work, contact Customer Support.
ERROR: Tables for DataSet name[/rectype] are not current; you must first run a redefine command
This message, which can occur at the start of a process
or clone
command, indicates that the layouts of the tables mapped from the data set are not current; therefore, they need to be updated via the redefine
command. The DS_Needs_Redefining (8) bit of the status_bits
column of the DATASETS control table keeps track of this.
ERROR: Tables for DataSet name[/rectype] need to be mapped; you must first run a redefine command
This message, which can occur at the start of a process
or clone
command, indicates that the mapping of the data set to relational database tables was not performed; therefore, you need to run a redefine
command. The DS_Needs_Mapping (1) bit of the status_bits
column of the DATASETS control table is used to keep track of this. This situation typically occurs if you try to run a process
or clone
command after setting the active
column to 1 for a data set that was not previously mapped.
ERROR: The binary configuration file is not compatible with the Client being used
The binary configuration file now has an additional parameter in the [signon] section that identifies the client type for which it was created. If you try to use this configuration with a different client type, you will get this error (for example, if you try to use a Flat File client configuration file with a SQL Server client).
ERROR: The configured number of stmts (mmm) is insufficient for nn threads
Multi-threaded updates need a greater number of configured database API statements (STMTs) because of the increased number of concurrently executing SQL operations. The minimum allowed value is 20 plus the number of threads.
We recommend setting the parameter aux_stmts to a value of at least 100 when using multi-threaded updates.
ERROR: The dms_link_num nn of item name in DataSet name[/rectype] does not exist
This error, which can occur during a define
or redefine
command, indicates that the dms_link_num
column of the DMS_ITEMS control table entry for the item points to a data set that does not exist. The only conditions under which this could happen are if the data set in question has its active
column set to 0 or it is filtered out in the GenFormat file on the mainframe.
ERROR: The "kafka_broker" parameter in the [Kafka] section must be specified
The Kafka client cannot operate unless the kafka_broker
parameter is specified in the configuration file. This message indicates that you did not supply a value for the parameter kafka_broker
in the client configuration file or the secondary Kafka configuration file.
ERROR: The line 'text' topic name in configuration file "name" is not of the form "table" = "topic"
This message, which is limited to the Kafka client, indicates that the topic configuration file has an error.
ERROR: The SET with strnum = nn of DataSet name[/rectype] pointed to by link item name in DataSet name[/rectype] contains more than 1 item
This error indicates that a self-correcting link item in the first data set points to a SET for the second data set that has more than one column. This is not supported by Databridge.
ERROR: Time must be specified as 'hh:mm', legal ranges are 0 to 23 for hh and 0 to 59 for mm
This message can occur in response to a command from the command line console and indicates that the time specification values are invalid.
ERROR: Topic configuration file "name" has more than one section header 'sss'
This message, which is limited to the Kafka client, indicates that the section header [sss] is not allowed because a section with that header is already present.
ERROR: Topic name 'name' in topic configuration file "topic_config.ini" exceeds maximum size of nnn
This message, which is limited to the Kafka client, indicates that the topic name in question exceeds the maximum size of 192 characters.
ERROR: Trace file prefix is too long, maximum allowable length is 20, value truncated
You can specify a prefix for the client trace files in the client configuration file; however, this prefix is limited to 20 characters. If you use a longer prefix, you get this error. The default prefix is “trace”.
ERROR: Tswitch command failed
The TSwitch
(trace switch) command is a dbutility console command that closes the current trace file and opens a new file. If an IO error occurs during this operation this error is displayed. For more information, see Log and Trace Files in Appendix A of the Databridge Client Administrator's Guide.
ERROR: Unable to access or update AF_STATISTICS tables, disabling audit file statistics
Version 7.1 of the Databridge client has a control table named AF_STATISTICS that holds the incremental statistics for the last 9999 audit files. When you upgrade the client software, the dbfixup program creates this table. You need to enable the parameter enable_af_stats
in the client configuration file to make the client update this table when it starts processing a new audit file.
This error indicates that the table does not exist or that the userid the client is using does not have access to the table. If the table does not exist, unload all the data sources using the unload
command. (Make sure you set the data source name to "_all" on the command line.) Then run a configure
command with the -u
option, which will recreate the tables. Finally, use a reload
command to restore the control tables (again set the data source name to "_all" on the command line).
ERROR: Unable to access registry key SOFTWARE\Micro Focus\Databridge\Client\7.1
The client needs to access the Windows Registry key created by the installer in order to run. If you copied the files manually instead of running the installer, this key will not exist and the client will not run. Do not attempt to change the registry keys created by the installer, as doing so might leave the client unable to operate.
ERROR: Unable to access registry key 'SOFTWARE\ODBC\ODBC.INI'
The Microsoft SQL Server client gets the server name from the Windows Registry instead of querying the ODBC data source when the configuration parameter use_odbc_reg
is set to True. This error indicates that the client is unable to access the key in question. You should not set this parameter to True, unless your server name has dots in it. If you see this error, contact Customer Support.
ERROR: Unable to allocate nnnn bytes of memory
This message can occur during almost all client commands. It indicates that the operating system does not have enough memory for various client structures. The most common occurrence of this message is while loading the control tables. If this error occurs, do the following:
- Make sure that your system meets the minimum memory requirements for the hardware and software.
- Check the size of your swap file. The swap file could be too small or you could be running out of disk space on the volume where the swap file is located.
- Try again after quitting all other applications.
- Reboot the server if all else fails.
ERROR: Unable to convert DMSII type number to a date for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: colname = value, ...
This message can occur during a process
or clone
command when you map a DMSII item to a relational database integer data type (or in the case of SQL Server, a time
data type). The dms_subtype
specified must be one of the values defined in Decoding DMSII Dates, Times, and Date/Times in the Databridge Client Administrator's Guide.
ERROR: Unable to convert DMSII type number to a [numeric] time for item 'name' in table 'name', {record will be discarded | time set to NULL} - Keys: colname = value, ...
This message can occur during a process
or clone
command when you map a DMSII item to a relational database integer data type (or in the case of SQL Server and PostgreSQL, a time
data type). The dms_subtype
specified must be one of the values defined in Decoding DMSII Dates, Times, and Date/Times in the Databridge Client Administrator's Guide.
ERROR: Unable to create "name" directory, errno=number (errortext)
This message, which can occur during a dbutility configure
or define
command or when customizing a new data source using the Administrative Console's Customize
command, indicates that the client was unable to create the specified directory. (These directories include the config
, logs
, dbscripts
, discards
, and scripts
subdirectories.) You also get this message when the client tries to create the locks
subdirectory in the service's working directory and the operation fails.
ERROR: Unable to create backup user script directory "path", errno=number (errortext)
This message, which can occur during a createscripts
command, indicates that the client was unable to create the backup user script directory. In some cases this is simply a configuration error; check the configuration parameter user_script_bu_dir
to see if it is mistyped.
ERROR: Unable to create connection to database name for index thread, aborting program
Unlike older clients, the 7.1 client normally uses a single database connection (ODBC in the case of SQL Server and PostgreSQL, and OCI in the case of Oracle) during change tracking. During data extraction the index thread uses a second database connection for creating indexes and updating the DATASETS control table; once the data extraction is completed, this connection is closed. This error indicates that the client was unable to open that second connection, causing the program to abort.
ERROR: Unable to create connection to database name for loading table 'tabname'
This error indicates that the client was unable to create an ODBC connection for loading the given table using the BCP API (SQL Server only) or without using the bulk loader. The only possible cause for this error is that the system does not have the resources. You should not encounter this message.
ERROR: Unable to create DACL, using default security
(Windows only) This message, which is only applicable when file security is enabled, indicates that the client was unable to create an ACL to set up the security for a file or directory that it is trying to create. When this operation fails, the client reverts to using default security. The client sets up the Working Directory and its subdirectory with inheritance enabled so that if a file or sub-directory is created using default security, it inherits the security from its parent directory. This ensures that the files the user copies into the working directory (files created by the bulk loader) also have security enabled.
ERROR: Unable to create working directory "path", errno=number (errortext)
This message, which can occur during a dbutility configure
or define
command, indicates that the client was unable to create the global working directory. The client requires that the locks
subdirectory reside in this location.
ERROR: Unable to detect database name; specify it in the configuration file
This error message is limited to Oracle Express when using the default database. You normally should be able to use the client without specifying the database in the configuration file. You will get this message if the attempt to read the default database name fails.
ERROR: Unable to drop Databridge Client control tables
This message can occur during a dbutility dropall
command. It indicates that an error occurred while dropping the control tables. In this case, check the following:
- See the ODBC (or OCI) message that precedes this message (on the screen or in the log file) for more information.
- Look at the previous output messages to see how far the client progressed before encountering the error.
Note
After this message appears, you cannot rerun the dbutility dropall
command if some of the Databridge control tables were dropped. You might need to use a relational database query tool to drop the remaining tables.
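As an illustration on SQL Server, you could first list which control tables survived the failed dropall before dropping them by hand (the name list covers only the control tables mentioned in this guide; verify it against your installation):

```sql
-- List Databridge control tables that still exist (SQL Server).
SELECT name FROM sys.tables
WHERE name IN ('DATASOURCES', 'DATASETS', 'DATATABLES',
               'DATAITEMS', 'DMS_ITEMS', 'AF_STATISTICS');

-- Then drop each remaining table manually, for example:
-- DROP TABLE DATASETS;
```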
ERROR: Unable to expand block to number bytes
This message can occur when the client tries to expand a previously allocated memory block and the operation fails. It indicates that the operating system does not have enough memory for various client structures. In this case, do the following:
- Quit all other applications and try again.
- Check the hardware and software requirements to make sure your system meets at least the minimum memory requirements.
- Check the size of your swap file. The swap file could be too small or you could be running out of disk space on the volume where the swap file is located.
- Reboot the server if all else fails.
ERROR: Unable to extract data for variable format date for item 'name' in table 'name', {record will be discarded | date set to NULL} - Keys: column_name = value, ...
This message can occur during a process
or clone
command. It indicates that the client was unable to extract the various components of the variable format date whose format is specified by the dms_subtype
column in the DMS_ITEM control table. The most likely cause of this error is that the number you entered is incorrect. Refer to Decoding DMSII Dates represented as ALPHA or NUMBER in Chapter 2 of the Databridge Client Administrator's Guide.
ERROR: Unable to find base data set with strnum = nnn for virtual data set name[/rectype]
This message can occur during a define or redefine command when you have virtual data sets and the parameter automate_virtuals
is set to True and you are not using a MISER database. It indicates that the remote procedure call to get the base structure index for the virtual data set failed. The most likely cause of this error is that the virtual data set is not properly defined.
ERROR: Unable to find control tables for DataSource name in file "name"
This error, which can occur during a reload
command, indicates that the archive file does not contain any entries for the data source on the command line. You either mistyped the data source name on the command line, you are not using the correct archive file, or you did not back up the data source you are trying to reload.
ERROR: Unable to find DataSource name
This message can occur during all client commands except configure
, redefine
, and dropall
. It indicates that you entered a data source name that is not in the DATASOURCES control table. This can occur if the data source name is misspelled or you have not created the data source yet.
ERROR: Unable to find key for link to DataSet name in DataSet name
This error, which can occur during define
and redefine
commands, indicates that the SET pointed to by the self-correcting link has no keys. Make sure that you did not accidentally set the active
column to zero for a column that is a member of this SET.
ERROR: Unable to find matching concat data item (nnn) record for item 'name' in table 'name'
This message indicates that the dms_concat_num
column in DMS_ITEMS contains an incorrect value. This value may refer to a non-existent or inactive column. The user script involved is most likely the cause of the error and should be examined. Avoid using hard-coded numbers in user scripts; instead, use sub-queries.
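Following that advice, here is a hedged sketch (the data set and item names are placeholders, and the DMS_ITEMS column names dataset_name and dms_item_name are assumed):

```sql
-- Fragile: hard-codes an item number that can change after a reorganization.
UPDATE DMS_ITEMS
   SET dms_concat_num = 117
 WHERE dms_item_name = 'PHONE-NUMBER' AND dataset_name = 'CUSTOMER';

-- Preferred: look the number up with a sub-query so the script keeps
-- working when item numbers shift.
UPDATE DMS_ITEMS
   SET dms_concat_num = (SELECT dms_item_number
                           FROM DMS_ITEMS
                          WHERE dms_item_name = 'AREA-CODE'
                            AND dataset_name  = 'CUSTOMER')
 WHERE dms_item_name = 'PHONE-NUMBER'
   AND dataset_name  = 'CUSTOMER';
```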
ERROR: Unable to find matching data item record for DMS Item Number nnn in table 'name'
This error can occur during any client command that loads the client control tables when the data source contains an active data set that has an active item with an OCCURS DEPENDING ON clause. The loading of the client control tables dynamically sets up the links between the given item and the item on which the OCCURS clause depends. This link may go back to the primary table if the OCCURS item is in a secondary table. The dms_item_number
column is used as a foreign key. If the load cannot find such an item in the DATAITEMS table, this message is displayed.
The cause of this error is that the active
column in the DMS_ITEMS table for the depends item is set to 0, causing it to vanish from the DATAITEMS table. To resolve this error, set the active
column to 1, set the bit DS_Needs_Remapping
(1) in the ds_options
column of the corresponding DATASETS control table, and run a redefine
command followed by a generate
command.
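The recovery steps above might look like this in SQL (a sketch only; the item and data set names are placeholders, and the bitwise OR syntax shown is SQL Server's):

```sql
-- Re-activate the item that the OCCURS DEPENDING ON clause depends on:
UPDATE DMS_ITEMS
   SET active = 1
 WHERE dms_item_name = 'LINE-COUNT'
   AND dataset_name  = 'ORDERS';

-- Set the DS_Needs_Remapping bit (value 1) in ds_options; OR-ing the bit
-- in avoids changing the column if the bit is already set:
UPDATE DATASETS
   SET ds_options = ds_options | 1
 WHERE dataset_name = 'ORDERS';
```

Then run a redefine command followed by a generate command, as described above.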
ERROR: Unable to get stmt for {insert | delete | update | delete all} statement for table 'name'
This message, which can occur during a process
or a clone
command, indicates that the client was unable to get a statement (OCI, ODBC or CLI) for executing the specified SQL statement. This message is typically preceded by a database API error message. This message will only occur when the configuration parameter use_stored_procs
is set to False. In this case the client generates INSERT, UPDATE and DELETE statements instead of calling stored procedures to execute these statements. Both methods use host variables; not using stored procedures is more efficient, but it uses a bit more memory to hold the SQL statement, which is quite a bit longer. The "delete_all" case applies to OCCURS tables where all the rows for a given key are deleted in one "delete from" SQL statement that does not specify the value for the index1
(or index2
) column in the where
clause. If you are using stored procedures, this is done by calling the stored procedure z_tablename.
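For example, for a hypothetical OCCURS table customer_phones keyed on cust_id, the two delete styles differ only in whether the occurrence column appears in the where clause:

```sql
-- DELETE_ALL: one statement removes every occurrence row for the key;
-- index1 is deliberately omitted from the WHERE clause.
DELETE FROM customer_phones WHERE cust_id = ?;

-- Ordinary delete: one statement per occurrence, qualified by index1.
DELETE FROM customer_phones WHERE cust_id = ? AND index1 = ?;
```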
ERROR: Unable to get stmt for stored procedure '{i|u|d|z}_name'
This message, which can occur during a process
or a clone
command, indicates that the client was unable to get a statement (OCI, ODBC or CLI) for executing the specified stored procedure. This message is typically preceded by a database API error message.
The prefix "z" indicates that the corresponding procedure "z_tabname" is used to delete all the rows for a given key in one stored procedure call, rather than doing this for every occurrence of the item.
ERROR: Unable to get stmt for updating control table name
This message, which can occur during most commands, indicates that the client was unable to get a statement (OCI, ODBC or CLI) for executing an update statement for the corresponding control table. This message is typically preceded by a database API error message.
ERROR: Unable to handle sql_type dd for external column 'name' in table 'name'
This message occurs during a process
or clone
command and indicates that the client cannot handle the sql_type
for the external column being added to the specified data set. The control tables are most likely corrupt.
You should routinely create backup copies of the control tables before making changes to the user scripts. Using the Administrative Console's Customize
command makes this a lot easier and avoids such problems.
ERROR: Unable to load backup copy of control tables for DataSource name
During client Configuration operations, the initial state of the control tables is automatically saved in the unload file srcreorgddd.cct
, where src is the data source name and ddd is the update level of the database. Client Configuration operations run much like redefine
commands. When the Administrative Console's Customize
command needs to compare the old and the new layout it reloads the old control tables from this file.
Caution
Deleting the unload file is not recommended until you complete all customization tasks in the Administrative Console's Customize
command. The client automatically deletes the unload file when it no longer needs it. If you run the Administrative Console's Customize
command multiple times before running a process
command, subsequent executions will not back up the control tables. The Administrative Console's Customize
command needs the unload file (that is, the original backup) to determine which changes have been made.
Unlike earlier clients, the 7.1 clients handle back-to-back redefine
commands in exactly the same way as the Administrative Console's Customize
command. This eliminates the problem that earlier clients had with back-to-back redefine
commands.
The -u
option was added to the redefine
command to force it to start over and use the control tables that were captured in the unload file. This is particularly useful if you are using user scripts and need to re-run the command after correcting a bad user script.
ERROR: Unable to load data source list
This message occurs during a define
or redefine
command and indicates that the client cannot load the data source list. When you have more than one data source in the same relational database, the client uses the data source list to find the table names used by other data sources and prevent naming conflicts.
ERROR: Unable to locate DLL "name"
(SQL Server client only) When using the BCP API, the Databridge client loads the ODBC DLL and sets up a table of addresses through which it makes the BCP API calls. This allows the client to work with the version of the ODBC DLL that supports the features needed. This error indicates the client cannot find the specified DLL. Make sure that the directory where the DLL resides is in the system PATH.
ERROR: Unable to locate the extended translation {DLL | shared library} "filename"
The attempt to locate the external data translation DLL, DBEATRAN.DLL, using a LoadLibrary call failed. Windows looks for a DLL in several places, the first of which is the directory where the program being executed resides. Under normal circumstances, this is the program directory created by the installer (c:\Program Files\Micro Focus\Databridge\7.1dbase_type).
If you use one of the two double-byte translation DLLs that we provide (Code pages 932 and 950), select the "Double-byte Translation Support" feature, in the Feature Selection tab of the installer to copy the DLL and sample configuration files to this directory.
If you use a different DLL, we recommend that you move the DLL to this directory. Windows also looks for the DLL in the current directory, the Windows system directory, the Windows directory, and the directories listed in the PATH environment variable.
If the DBEATRAN DLL is in none of these places, this error message is displayed when you set the configuration parameter use_ext_translation
to True.
On UNIX, the environment variable LD_LIBRARY_PATH must contain the directory where the shared library resides.
ERROR: Unable to read first source record from archive file "name", errno= number (errortext)
This error, which can occur during a reload
command, indicates that after successfully reading the version record, the client got an I/O error when it tried to read the next record, which should be a data source record. The most likely cause of this error is a corrupt unload file.
ERROR: Unable to read translation configuration files and initialize tables
This message, which only occurs at the start of process
or clone
command when using an external translation DLL (or shared library), indicates that the translation DLL initialization was unsuccessful. In most cases, the DLL cannot find the translation configuration file. The DLL expects these configuration files to be located in the config
directory where the client configuration files reside.
ERROR: Unable to read version record from archive file "name", errno=number (errortext)
This message can appear if a file I/O error occurs during a reload
command when trying to read the first record of the file (which is always V,version). The included system error should give you more information about why this error occurred.
ERROR: Unable to retrieve database/server names from data source
The SQL Server client uses SQLGetInfo
calls to programmatically retrieve the database and server name for the ODBC data source, which eliminates the need to specify the names in the configuration files. This message indicates that there was an error while retrieving these names. Check the preceding ODBC error message for more information about the reason for the failure.
ERROR: Unable to retrieve ODBC driver name/version from data source
(SQL Server and PostgreSQL) When starting up, the client first makes a couple of ODBC calls to get the ODBC driver name and version.
This information is critical for using the BCP API with the SQL Server client. If you get this error, make sure that you are using one of the recommended ODBC drivers, whose name is of the form MSODBCDLL1x.dll, with a version of 17.4 or newer. If you are using an older ODBC driver, try upgrading it; if the problem persists, contact Customer Support.
ERROR: Unable to retrieve value for 'FileSecurity' from registry (result = nnn)
This error indicates that the Windows registry keys for the client are corrupt. The client expects to find the string FileSecurity
in the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Micro Focus\Databridge\Client\7.1
where the installer saves several values including the name of the directory in which the software was installed. The name FileSecurity
is a REG_DWORD
, whose value is 1 or 0. This value determines whether file security is enabled or not. This error indicates that there is no such entry in the list of values for the above-mentioned registry key.
To resolve this issue, change the setting for FileSecurity using the setfilesecurity program included with Databridge. Do not edit the registry key with regedit
.
ERROR: Unable to retrieve value for 'INSTALLDIR' from registry (result = nnn)
This error indicates that the Windows Registry keys for the client are corrupt. The client expects to find the string INSTALLDIR (the name of the directory in which the software was installed) in the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Micro Focus\Databridge Client\7.1
. The installer saves several values to this key, including the name of the directory in which the software was installed. This error indicates that there is no such entry in the list of values for the above-mentioned Registry key.
ERROR: Unable to retrieve value for 'Server' from registry (result = nnn)
This error indicates that the SQL Server client was unable to get the server name from the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI
. (This Registry key provides the server name only if the configuration parameter use_odbc_reg
value is True. Avoid using this value unless your server name contains periods.) If you see this error, contact Customer Support.
ERROR: Unable to retrieve value for 'UserID' from registry (result = nnn)
This error is an indication that the Windows Registry keys for the client are corrupt. The client expects to find the string UserID in the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Micro Focus\Databridge Client\7.1
. The installer saves the name of the directory where the software was installed and several other values in this Registry key.
ERROR: Unable to retrieve value for 'WORKINGDIR' from registry (result = nnn)
This error indicates that the Windows Registry keys for the client are corrupt. The client expects to find the string WORKINGDIR (the working directory for the service and location of the locks subdirectory) in the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Micro Focus\Databridge Client\7.1
. For more information about the working directory, see the Databridge Client Administrator's Guide.
ERROR: Unable to update specified external column nn
This error can only occur when processing a binary configuration file that contains external_column
specifications and indicates that the client was unable to update its internal table for external column definition. If using a text configuration file, check the syntax of the external_column
[nn] line. If using a binary file, export the configuration file, verify the syntax (make any corrections), and then import the file. The import
command also checks the syntax for you.
ERROR: Undefined date format type (number) for item 'name' in table 'name', {record will be discarded | date set to NULL} – Keys: colname = value, ...
This message indicates an invalid value in the dms_subtype
column in the DATAITEMS control table. For a list of date formats refer to Decoding DMSII Dates, Times, and Date/Times in the Databridge Client Administrator's Guide.
ERROR: Undefined section header in configuration file line: text
See Sample SQL Server Client Configuration File in Appendix C of the Databridge Client Administrator's Guide for valid section headers.
ERROR: Unimplemented command cmd_number
This internal DBClntCfgServer error indicates that the Administrative Console or its Customize
command attempted to execute an unimplemented RPC. This can only happen if you try to run a newer version of the Administrative Console with an old client. We recommend upgrading the Databridge client, Databridge host software, and the Administrative Console to compatible versions.
ERROR: Unknown command: command
This message can appear when you misspell a dbutility command, or you enter a command that does not apply to dbutility.
ERROR: Unknown console command; type "help" to get a list of commands
This message indicates that the operator entered an invalid command in the command line console.
ERROR: Update of colname column in DATASETS table failed for name[/rectype]
This message, which can occur during a process
, clone
, define
, or redefine
command, indicates that an attempt to update the specified column of the DATASETS control table
failed. The columns in question include the following:
- active
- ds_mode
- misc_flags
- status_bits
- ds_options
Check the preceding SQL error message to determine why the error occurred.
ERROR: Update of DATAITEMS entries failed for table 'name' [for DataSet name[/rectype]]
This error can occur during a redefine
command or when running the Administrative Console's Customize
command. It indicates that the update of the DATAITEMS control table failed when trying to restore customizations. See the SQL errors that precede this message to find out why this error occurred.
ERROR: Update of DATAITEMS table failed for 'name' of table 'name'
This error can occur when running the Administrative Console's Customize
command or when running a redefine
command. If it occurs when running the Administrative Console's Customize
command, it indicates that DBClntCfgServer was unable to update the DATAITEMS control table for the specified item of the given table, while applying customization changes made by the user. The redefine
command can get this error whenever it tries to update the DATAITEMS table and something goes wrong. See the preceding SQL errors to determine why this error occurred.
ERROR: Update of DATASETS table failed for Global_DataSet
This message can occur during a redefine
, clone
, or process
command. It indicates that an error occurred when updating the DATASETS control table for the Global_DataSet
. See preceding API error messages (onscreen or in the log file) to determine the reason for this error.
ERROR: Update of DATASETS table failed for name[/rectype]
This message can occur during a process
or clone
command when the DATASETS control table is updated for a given data set (/rectype is added for variable-format data sets except type 0 records). This error can occur during the cloning of a data set when the State Information is changed multiple times and at the end of a process
command when the global State Information is being propagated to all data sets whose in_sync
column has a value of 1. This message can occur when running the Administrative Console's Customize
command. See the database API messages that precede this error for more information.
ERROR: Update of DATASOURCES table failed for name
This message can occur whenever the DATASOURCES control table is updated for a given data source. This can happen at the beginning and end of a process
or clone
command when the status_bits
column is updated. See the database API messages that precede this error for more information.
ERROR: Update of DATATABLES and DATAITEMS tables to preserve pass2 changes failed for DataSet name[/rectype]
This message, which can occur during a redefine
command or when running the Administrative Console's Customize
command, indicates that the client could not update the DATATABLES and DATAITEMS entries while attempting to preserve changes. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update of DATATABLES failed for table 'name' [for DataSet name[/rectype]]
This message, which can occur during most commands, indicates that the client could not update the DATATABLES entry for the specified table. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update of DMS_ITEMS table failed for item name in DataSet name[/rectype]
This message, which can occur during a redefine
command or when running the Administrative Console's Customize
command, indicates that the client could not update the DMS_ITEMS
entries. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update of DMS_ITEMS table failed for name (item_number nnn) in DataSet name[/rectype]
This message indicates that the DBClntCfgServer was unable to update the DMS_ITEMS control table when processing an update request from the Administrative Console's Customize
command. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update of DMS_ITEMS table to preserve pass1 changes failed for DataSet name[/rectype]
This message, which can occur during a redefine
command or when running the Administrative Console's Customize
command, indicates that the client could not update the DMS_ITEMS entries while preserving previous customizations. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update of DMS_ITEMS to fixup LINKS failed
This error can occur at the start of pass2 of define
and redefine
commands for data sets that have a self-correcting link. It indicates that the attempt to update the DMS_ITEMS control table failed. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update statistics failed for table 'name'
(SQL Server client only) This message can occur during a process
or clone
command after an index for a table is created. The update statistics SQL Server command causes the software to update the statistics on the current table. This message indicates that the update statistics
command has failed. Check the SQL error message that precedes this message for clues about why this error occurred.
ERROR: Update Worker thread[nn]: bcp_done() failed for table 'name'
(SQL Server client only) The SQL Server client issues a bcp_done
call into the BCP API when it receives a State Information record from the Databridge Engine that has a ds_mode
of 1 at the end of data extraction. This call commits the current batch of rows and instructs the BCP API that we have reached the end of the data for the load. This error usually means that the database is out of resources. Check for any ODBC errors that precede this error for clues about what happened.
ERROR: Update Worker thread[nn]: FinishDataExtraction failed for table 'name'
(All clients using bulk loaders) When processing multi-threaded extracts the client queues a working storage block which marks the end of extraction. The thread calls the procedure FinishDataExtraction.
In the case of Windows, where the client uses temporary files, this involves closing the temporary file and queuing it on the bulk-loader thread’s work queue. This error usually indicates that the close of the temporary file has failed. In the case of UNIX clients, which use a pipe to run the bulk loader, this error usually indicates a problem with the actual load. Refer to the bulk loader log file in the data source's working directory for details on why this error occurred.
ERROR: Update Worker thread[nn]: Illegal response type rr encountered
This is an internal error that can only occur when using multi-threaded updates. It indicates that an Update Worker encountered a DMS buffer whose record type field is illegal. The only record types that are expected by the Update Worker are CREATE
, DELETE
, MODIFY
, MODIFY_AI
, LINK_AI
, or DELETE_ALL
. Any other record types are handled in the main thread. Report this error to Customer Support.
ERROR: Update Worker thread[nn]: Insert failed during data extraction for table 'name'
This error indicates that the update worker thread got an error when it tried to execute an insert
SQL statement to load a record into a table that is configured not to use the bulk loader. Check the database API errors that precede this error for clues about what happened. Loads that do not use the bulk loader use a separate connection; the rows are loaded in batches whose size is controlled by the parameter max_clone_count
. You could try reducing the value of this parameter to see if it has any effect on the error.
ERROR: Update Worker thread[nn]: {Read_CB_CREATE() | Read_CB_DELETE() | Read_CB_DELETE_ALL() | Read_CB_MODIFY() | Read_CB_LINKS()} for table 'name' failed
This error can occur when you use multi-threaded updates. It indicates that the operation in question failed. Look at the SQL errors in the log file to see if you can find any clues about why this error occurred. If the error persists, try setting the parameter n_update_threads
to 0.
ERROR: User columns of dms_subtype n1 and n2 are mutually exclusive
This message can occur when processing text configuration files. It indicates that two of the user columns you specified are mutually exclusive. For example, update_type
(1) and expanded update_type
(11) or deleted_record
(10) and expanded update_type
(11).
ERROR: User columns of dms_subtype mmm can only be used in conjunction with dms_subtype mmm
This message can occur when processing text configuration files. It indicates that you are attempting to use the external column delete_seqno
when the column deleted_record
is not present. The column delete_seqno
, which allows more than one delete to be performed for the same record when the second clock remains unchanged, is only meaningful when the column deleted_record
is present. In the absence of the delete_seqno
column the client cannot make the update and stalls until the clock changes, thereby degrading performance.
Setting the sql_type
column of the deleted_record
column in DATAITEMS to 18 (BIGINT) makes the client combine the deleted_record
value with the delete_seqno
which eliminates the duplicate record problem caused by the 1 second granularity of the epoch time used to set the value of the deleted_record
column. In the case of Oracle, which does not have a data type of BIGINT, NUMBER(15) is used instead, as the combined value is a 48-bit quantity.
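As a sketch, assuming the DATAITEMS key columns item_name and table_name, a user script could enable the combined value like this (the table name is a placeholder):

```sql
-- Set sql_type to 18 (BIGINT) so the client combines deleted_record
-- with delete_seqno into a single 48-bit value, avoiding duplicate
-- deletes within the same one-second clock tick.
UPDATE DATAITEMS
   SET sql_type = 18
 WHERE item_name  = 'deleted_record'
   AND table_name = 'customer';
```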
ERROR: User DataSet {define | layout} script "filename" failed
This message can appear during a define
or redefine
command. These scripts are named script.user_define.dataset
and script.user_layout.dataset
where dataset is the dataset name in lower case with dashes replaced by underscores. This error indicates that the specified user script failed. Check the SQL errors that precede this message to see why the script failed. If you cannot figure out why it is failing, use the -t 0x800 option for the command or use the runscript
command to test the script.
ERROR: User DataSets global layout script "filename" failed
This message can appear during a define
or redefine
command. This script is named script.user_datasets.src
, where src is the data source name. It is run before the layout user scripts are run. It indicates that specified user script failed. Check the SQL errors that precede this message to see why the script failed.
ERROR: User DataTables global define script "filename" failed
This message can appear during a define
or a redefine
command and indicates that the specified user script failed. This script is named script.user_datatables.src
, where src is the data source name. It is run before the define user scripts are run. Check the SQL errors that precede this message to see why the script failed.
ERROR: User global layout script "filename" failed
A new type of user script that applies to all data sources was added to the 6.5 clients for Miser customers that share the same scripts across data sources with different names. This script, named "script.user_all_sources", is run before the global data sets scripts are run. This error indicates that the specified script failed. Check API or SQL errors that precede this message to see why the script failed.
ERROR: User script "filename" failed
This message can appear during a runscript
command and indicates that the specified script failed. Check API or SQL errors that precede this message to see why the script failed.
ERROR: User stored procedure creation script "name" failed
This message indicates that the client got an error running the user script script.user_create_sp.name
, which is used to split up the actions that would normally be executed in the user script script.user_create.name
. This last user script is executed at the time a table is created at the beginning of the data extraction phase. The stored procedure user script is also executed when a refresh
command is executed. This command gets automatically run during the execution of a reorganize
command. This allows user-written stored procedures to be kept current after a DMSII reorganization, rather than just creating them once and finding out that they no longer work after a DMSII reorganization.
ERROR: User table creation script "filename" failed
This message can appear during a process
or clone
command and indicates that the specified data table creation user script failed. Check API or SQL errors that precede this message to see why the script failed.
ERROR: Value out of range for 'F' option argument
The -F
option is used to pass dbutility an AFN after which it stops. If you use a value that is not in the range 1 to 9999, this error is displayed.
ERROR: Value out of range for 'V' option argument – using nn instead
The -V
option is used to pass dbutility a control table version to use in the unload
command. The value being passed must be greater than 0 and less than or equal to the present version of the control tables. The following table documents the control table version for recent releases of Databridge.
| Client Version | Control Table Version |
|---|---|
| 7.1 | 36 |
| 7.0 | 33 |
| 6.6 | 31 |
| 6.5 | 26 |
ERROR: Virtual data set link for data set name /rectype is NULL, make sure that automate_virtuals is true
In a MISER database, user scripts link the virtual data sets and the real data sets from which they are derived by using the virtual_ds_num
, real_ds_num
and real_ds_rectype
columns in the DATASETS control table. If the configuration parameter automate_virtuals
is not enabled, this pointer is not set up and executing a createscripts
command returns this message.
For non-MISER databases, the client gets the information from the Engine when the parameter automate_virtuals
is set to True. This makes the client handle virtual data sets in a much more coherent manner by paying attention to the relation between virtual data sets and the actual data sets they are derived from.
ERROR: Work_desc pool empty
This error, which only occurs with multi-threaded updates, indicates that there is a bug in the client. The work descriptors are tiny records that are used to queue the same DMS buffer on multiple update worker work queues. Running out of these structures is an indication that the client is failing to return some of these to the pool when they are no longer needed.
ERROR: Write failed for binary configuration file "name", errno=number (errortext)
This message can occur during a dbutility import
command or when DBClntCfgServer updates a client configuration file. It indicates that an I/O error occurred while writing to the specified binary configuration file. The system error included in this message should explain why this error occurred.
ERROR: Write failed for bulk load file for table 'name', errno=number (errortext),
Record: recordtext
This message applies only to Windows clients. This message can occur during a process
or clone
command and indicates that an I/O error occurred while writing to the specified temporary data file. This message typically occurs when you run out of disk space (resulting in the errortext “Out of Disk Space”). The system error included in this message should explain why this error occurred.
ERROR: Write failed for discard file for table 'name', errno=number (errortext)
This message, which can occur during a process
or a clone
command, indicates that an error occurred while writing a record to a discard file, whose name appears in the message. The system error included in this message should explain why this error occurred. The most common cause of this error is running out of disk space. Make sure you take advantage of the recently added configuration parameter max_discards
that allows you to prevent this situation from occurring. You can do this in one of two ways: (1) make the client abort after a certain number of discards, regardless of which table they belong to, or (2) limit the number of discard records written to the discard file for any given table.
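A minimal configuration sketch, assuming max_discards accepts two comma-separated limits (the overall abort threshold and the per-table discard-file limit; the exact syntax and the values shown are assumptions to be checked against your client configuration file reference):

```
max_discards = 1000, 100
```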
ERROR: Write failed for log descriptor file "name", errno=number (errortext)
The client uses the binary file log.cfg
to keep track of the log file name. If the error persists, delete this file and the client will create a new one. The most likely source of this error is file ownership conflicts between the command line client, dbutility, and the service. The system error message included in this message should provide information about why this error occurred.
ERROR: Write failed for Null Record file, errno=number (errortext)
This message, which can occur during a define
or redefine
command, indicates that an I/O error occurred while writing to the null record file. The system error included in this message should explain why this error occurred.
ERROR: Write failed for pipe for table 'name', errno=number (errortext), Record: recordtext
This message, which is limited to UNIX, can occur during a process
or a clone
command when the main process is writing data to a UNIX pipe used to pass data records to the process that runs the bulk loader (SQL*Loader for Oracle and pgloader for PostgreSQL). The text errortext provides clues about why the error occurred. The most common cause of this error is that the bulk loader exceeded the maximum discards threshold and aborted the run. In this case, the errortext will be "Broken Pipe".
ERROR: Write failed for trace descriptor file "name", errno=number (errortext)
The client uses the binary file trace.cfg
to track the trace filename. If this error persists, delete this file and the client will create a new one. The most likely source of this error is file ownership conflicts between the command line client, dbutility, and the service. The system error message included in this message should provide information about why this error occurred.
ERROR: You need to run a reorganize command for DataSource name
This message indicates that you are trying to run a redefine command after a successful run of that command determined that a reorganize command needs to be run next. If you do not want to run the reorganize command, you must reload the control tables from the backup file created by the redefine command.
ERROR: 'Y' option must be followed by the text 'reclone_all'
When you use the -Y option with the command-line client dbutility, you must specify the argument reclone_all after -Y. Failure to do so results in this error, which is meant to prevent accidental use of -Y when you meant to type -y.
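For example, a correct invocation might look like the following, where the data source name BANKDB and the placement of the option before the command are illustrative assumptions:

```
dbutility -Y reclone_all process BANKDB
```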
FATAL ERROR: error_name: error_string
This error message only applies to the Kafka Client operating in transaction mode.
Kafka configuration error
This error message indicates that a configuration error caused the Kafka initialization procedure dbkafka_init() to fail. It will be followed by the message ERROR: Call on dbkafka_init() failed, rc = 1.
Kafka configuration error: error_text
This error message indicates that an error occurred while processing the secondary Kafka configuration file. The error_text should provide more information about the nature of the error.
Kafka error: Failed to initialize transaction mode: error_string
Starting in 7.1, the Kafka Client can optionally operate in transaction mode. This error indicates that transaction-mode initialization failed. One possible cause is that the maximum number of in-flight messages is configured to a value greater than 5.
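As a sketch, assuming the secondary Kafka configuration file accepts standard librdkafka property names in name = value form, the in-flight limit might be kept within the allowed range like this (values illustrative):

```
enable.idempotence = true
max.in.flight.requests.per.connection = 5
```

Kafka's idempotent/transactional producer requires at most 5 in-flight requests per connection, which is why larger values cause this initialization error.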
Kafka error: Failed to instantiate producer: name
The producer is the thread that writes to the Kafka queue. This message means that the client could not start the producer. This is probably caused by an incorrect Kafka configuration.
Kafka error: Failed to instantiate topic name: error_msg
A topic is a queue to which the Kafka Client writes messages. This error occurs for one of two reasons: either the Kafka Client cannot create the topic, or the topic does not exist.
Kafka error: Message delivery failed: error_msg
This message indicates that there is an internal Kafka problem.
Kafka error: Produce to topic name: error_msg
This message indicates that there is an internal Kafka problem.
RPC PROTOCOL ERROR: Type = responsetype StrNum=number
This message can occur during any client command that involves communications with DBServer or Enterprise Server. It indicates that an RPC protocol error occurred while trying to read a response packet to an RPC. In this case, try again. If this error occurs persistently and is reproducible, contact Customer Support.