Striim Platform 5.0 documentation

Database Writer

Writes to one of the following:

  • Db2 for z/OS versions 12 and 13

  • HP NonStop SQL/MX (and SQL/MP via aliases in SQL/MX)

  • MariaDB (including Amazon RDS for MariaDB, MariaDB Galera Cluster, MariaDB Xpand, and SkySQL)

  • MySQL (including Amazon Aurora for MySQL, Amazon RDS for MySQL, Azure Database for MySQL, and Google Cloud SQL for MySQL)

  • Oracle (including Amazon RDS for Oracle)

  • PostgreSQL (including Amazon Aurora with PostgreSQL compatibility, Amazon RDS for PostgreSQL, Azure Database for PostgreSQL, Azure Database for PostgreSQL - Flexible Server, Google AlloyDB for PostgreSQL, and Google Cloud SQL for PostgreSQL)

  • SAP HANA

  • SingleStore (formerly MemSQL)

  • SQL Server (including Amazon RDS for SQL Server and Azure SQL Database)

  • Sybase

  • Yellowbrick

Note

The drivers for MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server are bundled with Striim Platform. For all other databases, the appropriate JDBC driver must be installed as described in Installing third-party drivers in Striim Platform.

Target database setup for Database Writer

For Database Writer to function correctly, the following target database privileges / permissions must be granted to the user specified in the Username property:

  • Db2 for z/OS:

    • for DML: SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to

    • for schema evolution: CREATETAB on the target database, ALTER, DROP on the tables to be written to

  • MariaDB and MySQL:

    • for DML: SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to

    • for creating the target schema when the source is Database Reader with Create Schema enabled (see Performing initial loads with Database Reader): CREATE ON *.*

    • for schema evolution (CDDL): CREATE, ALTER, DROP on the target schema

  • Oracle:

    • for DML: SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to, CREATE SESSION to the user specified in Username

    • for creating the target schema when the source is Database Reader with Create Schema enabled (see Performing initial loads with Database Reader): CREATE SESSION, CREATE USER, CREATE ANY TABLE, CREATE ANY INDEX, UNLIMITED TABLESPACE

    • for schema evolution (CDDL): CREATE TABLE, CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE on the target schema, CREATE SESSION to the user specified in Username

  • PostgreSQL:

    • for DML: CONNECT, SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to

    • for creating the target schema when the source is Database Reader with Create Schema enabled (see Performing initial loads with Database Reader): CONNECT ON DATABASE and CREATE ON DATABASE on the target database

    • for schema evolution (CDDL): CONNECT on the target database, CREATE on the target schema, and must own the tables (to allow ALTER and DROP)

  • SQL Server:

    • for DML: SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to (unnecessary if the user has the db_owner role)

    • for creating the target schema when the source is Database Reader with Create Schema enabled (see Performing initial loads with Database Reader): enter the following command in a SQL Server client, replacing <user name> with the name specified in the Username property: ALTER ROLE db_owner ADD MEMBER <user name>;

    • for schema evolution (CDDL): CREATE, ALTER, CONTROL on the target schema (unnecessary if the user has the db_owner role)

  • Sybase:

    • for DML: SELECT, INSERT, UPDATE, DELETE on the tables to be written to; to perform any operation on the IDENTITY column, the user must either own the table or have the System Administrator role (sa_role)

  • Yellowbrick:

    • for all operations: CONNECT on database to be written to

    • for DML: CONNECT, SELECT, INSERT, UPDATE, DELETE, REFERENCES on the tables to be written to

    • for creating the target schema when the source is Database Reader with Create Schema enabled (see Performing initial loads with Database Reader): CONNECT ON DATABASE and CREATE ON DATABASE on the target database

    • for schema evolution (CDDL): CONNECT on the target database, CREATE and USAGE on the target schema, and must own the tables (to allow ALTER and DROP)
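As an illustration, the PostgreSQL DML privileges listed above could be granted with statements along these lines (the user name striim, database mydb, and table public.testtable are placeholders; substitute your own names):

```sql
-- Hypothetical names: adjust user, database, schema, and table to your environment
GRANT CONNECT ON DATABASE mydb TO striim;
GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES ON public.testtable TO striim;
```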

Database Writer properties

Each property below is listed with its type, its default value (if any), and notes.

Batch Policy

String

eventcount:1000, interval:60

The batch policy includes eventcount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to BatchPolicy:'-1'.

With the default setting, events will be sent every 60 seconds or sooner if the buffer accumulates 1,000 events.

Bidirectional Marker Table

String

When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.

CDDL Action

String

Process

See Handling schema evolution.

Checkpoint Table

String

CHKPOINT

The table where DatabaseWriter will store recovery information when recovery is enabled. See Creating the checkpoint table below for DDL to create the table. Multiple instances of DatabaseWriter may share the same table. If the table is not in the Oracle or SQL/MX schema being written to, or the same MySQL or SQL Server database specified in the connection URL, specify a fully qualified name.

Column Name Escape Sequence

String

When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, you may use this property to specify which characters Striim will use to escape column names that are on the List of reserved keywords. You may specify two characters to be added at the start and end of the name (for example, [] ), or one character to be added at both the start and end.

If this value is blank, Striim will use the following escape characters for the specified target databases:

  • Oracle: " (ASCII/UTF-8 0x22)

  • MySQL: ` (ASCII/UTF-8 0x60)

  • PostgreSQL: " (ASCII/UTF-8 0x22)

  • SQL Server: []

Commit Policy

String

eventcount:1000, interval:60

The commit policy controls how often transactions are committed in the target database. The syntax is the same as for BatchPolicy. CommitPolicy values must always be equal to or greater than BatchPolicy values. To disable CommitPolicy, set to CommitPolicy:'-1'.

If BatchPolicy is disabled, each event is sent to the target database immediately and the transactions are committed as specified by CommitPolicy.

If BatchPolicy is enabled and CommitPolicy is disabled, each batch is committed as soon as it is received by the target database.

If BatchPolicy and CommitPolicy are both disabled, each event received by DatabaseWriter will be committed immediately. This may be useful in development and testing, but is inappropriate for a production environment.
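For example, a DatabaseWriter target that batches more aggressively than the defaults might be defined as follows (the connection details and table name are placeholders; note that the CommitPolicy values are equal to or greater than the BatchPolicy values, as required):

```tql
CREATE TARGET TunedWriter USING DatabaseWriter (
  ConnectionURL: 'jdbc:mysql://192.168.1.75:3306/mydb',
  Username: 'striim',
  Password: '******',
  Tables: 'mydb.testtable',
  BatchPolicy: 'eventcount:5000, interval:30',
  CommitPolicy: 'eventcount:10000, interval:60'
) INPUT FROM PosDataStream;
```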

Connection Retry Policy

String

retryInterval=30, maxRetries=3

With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval). If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

Connection URL

String

  • for Db2 for z/OS: jdbc:db2://<IP address>:<port>/<database name>

  • for HP NonStop SQL/MX: jdbc:t4sqlmx://<IP address>:<port> or jdbc:t4sqlmx://<IP address>:<port>/catalog=<catalog name>;schema=<schema name>

  • for MariaDB: jdbc:mariadb://<ip address>:<port>/<database name>; for additional options, see Failover and High availability with MariaDB Connector/J

  • for MariaDB Galera Cluster: specify the IP address and port for each server in the cluster, separated by commas: jdbc:mariadb://<IP address>:<port>,<IP address>:<port>,...; optionally, append /<database name>; for additional options, see Failover and High availability with MariaDB Connector/J

  • for MySQL: jdbc:mysql://<ip address>:<port>

    When writing to MySQL, performance may be improved by appending ?rewriteBatchedStatements=true to the connection URL (see Configuration Properties and MySQL and JDBC with rewriteBatchedStatements=true).

  • for Oracle: jdbc:oracle:thin:@<hostname>:<port>:<SID> or jdbc:oracle:thin:@<hostname>:<port>/<service name>. For RAC or PDB environments, use the service name, not the SID. For Oracle Cloud DBaaS, use jdbc:oracle:thin:@//<public IP address>:1521/<PDB name>.<identitydomain>.oraclecloud.internal. To use Oracle native network encryption, append ?encryption_client=required&encryption_types_client= followed by a comma-delimited list of the encryption types to support (3DES112, 3DES168, AES128, AES192, AES256, RC4_128, RC4_256, RC4_40, or RC4_56), for example, ?encryption_client=required&encryption_types_client=AES192,AES256. If one or more source tables contain LONG or LONG RAW columns, append ?useFetchSizeWithLongColumn=true. If appending more than one element, begin all but the first with & rather than ?.

  • for PostgreSQL, jdbc:postgresql://<ip address>:<port>/<database name>

  • for SAP HANA: jdbc:sap://<ip address>:<port>/?databaseName=<database name>&currentSchema=<schema name>

  • for SingleStore (formerly MemSQL): same as MySQL

  • for SQL Server: jdbc:sqlserver://<ip address>:<port>;DatabaseName=<database name>

    You may use Active Directory authentication with Azure SQL Database (see Supporting Active Directory authentication for Azure) or, when Striim is running in Windows, with SQL Server (see Supporting Active Directory authentication for SQL Server).

  • for Sybase: jdbc:jtds:sybase://<ip address>:<port>/<database name>

  • for Yellowbrick: jdbc:postgresql://<network name>:<port>/<database name>
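Pulling the Oracle rules together, a URL that combines native network encryption with LONG-column support could look like this (the host, port, and service name are placeholders). The first appended element begins with ? and each subsequent one with &:

```
jdbc:oracle:thin:@myhost:1521/orclservice?encryption_client=required&encryption_types_client=AES192,AES256&useFetchSizeWithLongColumn=true
```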

Excluded Tables

String

If Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names (separated by semicolons) and wildcards may be used exactly as for Tables.

Ignorable Exception Code

String

By default, if the target DBMS returns an error, DatabaseWriter terminates the application. Use this property to specify errors to ignore, separated by commas. For example, to ignore Oracle ORA-00001 and ORA-00002, you would specify:

IgnorableExceptionCode: '1,2'

Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).

When replicating from MySQL/MariaDB, Oracle 12c, PostgreSQL, and SQL Server CDC readers, the following generic (that is, not corresponding to any database-specific error code) exceptions can be specified:

  • DUPLICATE_ROW_EXISTS: could not insert a row in the target because an identical row already exists

  • NO_OP_UPDATE: could not update a row in the target (typically because there was no corresponding primary key)

  • NO_OP_PKUPDATE: could not update the primary key of a row in the target (typically because the "before" primary key could not be found); not supported when source is PostgreSQLReader

  • NO_OP_DELETE: could not delete a row in the target (typically because there was no corresponding primary key)

These exceptions typically occur when other applications besides Striim are writing to the target database. The unwritten events will be captured to the application's exception store, if one exists (see CREATE EXCEPTIONSTORE).

See also Switching from initial load to continuous replication.
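For example, a target replicating CDC data could ignore duplicate inserts and no-op updates (rather than terminating) with a definition along these lines (the connection details and table mapping are placeholders):

```tql
CREATE TARGET TolerantWriter USING DatabaseWriter (
  ConnectionURL: 'jdbc:oracle:thin:@myhost:1521/orclservice',
  Username: 'striim',
  Password: '******',
  Tables: 'SRC.EMP,TGT.EMP',
  IgnorableExceptionCode: 'DUPLICATE_ROW_EXISTS,NO_OP_UPDATE'
) INPUT FROM OracleCDCStream;
```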

JAAS Configuration

String

If you Enable Kerberos authentication, use this property to configure it.

To authenticate Oracle using a Kerberos principal, specify java.security.krb5.conf=/<path>/krb5.conf; doNotPrompt=true; principal=oracleclient@<Kerberos realm>.

For example, java.security.krb5.conf=/etc/krb5.conf; doNotPrompt=true; principal=oracleclient@MYDOMAIN.COM

To authenticate Oracle using a Kerberos credential cache, specify java.security.krb5.conf=/<path>/krb5.conf; doNotPrompt=true; useTicketCache=true; ticketCache=<fully qualified credential cache file name>.

For example, java.security.krb5.conf=/etc/krb5.conf; doNotPrompt=true; useTicketCache=true; ticketCache=/home/Jdbc/Security/krbcache/krb5cc

To authenticate PostgreSQL, specify java.security.krb5.realm=<Kerberos realm>; java.security.krb5.kdc=<Kerberos server IP address or network name>; javax.security.auth.useSubjectCredsOnly=false; java.security.auth.login.config=/<path>/login.conf.

For example, java.security.krb5.realm=MYDOMAIN.COM; java.security.krb5.kdc=kdc.mydomain.com; javax.security.auth.useSubjectCredsOnly=false; java.security.auth.login.config=/home/Jdbc/Security/login.conf.

You must also append the appropriate application name from the specified login.conf (see Enable Kerberos authentication) to the connection URL. For example, jdbc:postgresql://<ip address>:<port>/<database name>?jaasApplicationName=<application name>.

Parallel Threads

Integer

See Creating multiple writer instances (parallel threads).

Enabling recovery for the application disables parallel threads.

Password

encrypted password

The password for the specified user. See Encrypted passwords.

Preserve Source Transaction Boundary

Boolean

False

Set to True to ensure that all operations in each transaction are committed together.

When the target's input stream is the output of an HP NonStop source or when writing to an HP NonStop database, this setting must be False.

This setting interacts with CommitPolicy as follows:

  • When PreserveSourceTransactionBoundary is True and CommitPolicy is disabled, each transaction will be committed when all of its operations have been received. For example, if you have a series of three transactions containing 300, 400, and 700 operations, there will be three commits.

  • When PreserveSourceTransactionBoundary is True and CommitPolicy has a positive EventCount value, that value is the minimum number of operations included in each commit. For example, if CommitPolicy includes EventCount=1000 and you have a series of three transactions containing 300, 400, and 700 operations, there will be one commit, after the third transaction (because the first two transactions had a total of only 700 operations, less than the EventCount value).

SSL Config

String

When the target is Oracle and it uses SSL, specify the required SSL properties (see the notes on SSL Config in Oracle Reader properties).

Statement Cache Size

Integer

50

The number of prepared statements that Database Writer can cache. When the number of cached statements exceeds this number, the least recently used statement is dropped. If a DatabaseWriter target writing to Oracle fails with the error "ORA-01000: maximum open cursors exceeded," increasing this value may resolve the problem.

Tables

String

Specify the name(s) of the table(s) to write to.

If the source is Database Reader, its Create Schema property is True, and the target is MariaDB, MariaDB Xpand, MySQL, Oracle, PostgreSQL, or SQL Server, the specified schema(s) and table(s) will be created in the target (see the discussion of Create Schema in Database Reader properties for more details). Otherwise, the table(s) must exist in the target when the application is started, and if a specified target table does not exist, the application will terminate with an error. To skip writes to missing tables without terminating, specify TABLE_NOT_FOUND as an Ignorable Exception Code.

MySQL, Oracle, and PostgreSQL names are case-sensitive; SQL Server names are not. Specify names as <database name>.<table name> for MySQL, or <schema name>.<table name> for Db2 for z/OS, Oracle, PostgreSQL, and SQL Server (but see the note below about SQL Server source table names).

When the target's input stream is a user-defined event, specify a single table.

When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch Reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:

source.emp,target.emp
source.db1,target.db1;source.db2,target.db2
source.%,target.%
source.mydatabase.emp%,target.mydb.%
source1.%,target1.%;source2.%,target2.%

Starting in Striim 4.2.0.4, you may use the KeyColumns option to specify a column in the target table that will contain a unique identifier for each row: for example, Tables:'sourcedb.emp,mydb.mySchema.emp KeyColumns(emp)'. If necessary to ensure uniqueness, specify multiple columns with the syntax KeyColumns(<column 1>,<column 2>,...). You may use wildcards for the source table, provided all the tables have the key columns: for example, Tables:'sourcedb.%,mydb.myschema.% KeyColumns(...)'. The KeyColumns option can be useful in various situations, for example when the source or target table has no primary key, or when the source and target have different primary keys.

If some of the source table names are mixed-case and the target database's table names are case-sensitive, put the wildcard for the target in double quotes, for example, source.*,target."*".

In Db2 for z/OS, table and column names may not exceed 30 characters.

See Mapping columns for additional options.
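Combining the points above, a target that maps a source table to a target table without a primary key could designate a key column explicitly (the connection details, stream name, and column name emp_id are placeholders):

```tql
CREATE TARGET ReplicateEmp USING DatabaseWriter (
  ConnectionURL: 'jdbc:postgresql://192.168.1.80:5432/mydb',
  Username: 'striim',
  Password: '******',
  Tables: 'sourcedb.emp,myschema.emp KeyColumns(emp_id)'
) INPUT FROM EmpCDCStream;
```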

Username

String

Specify the DBMS user name the adapter will use to log in to the server specified in ConnectionURL.

Vendor Configuration

String

When the target is SQL Server, the following configuration options are supported. If the target table contains an identity, rowversion, or timestamp column and you do not specify the relevant option(s), the application will terminate.

  • enableidentityInsert=true: for insert operations only (not updates), replicate identity column values from the source to the target using identity inserts

  • excludeColTypes={identity|rowversion|timestamp}: ignore any identity, rowversion, or timestamp values in the source and have the target database supply values; to specify multiple options, separate them with a comma, for example, excludeColTypes=identity,rowversion

  • To combine both options, separate them with a semicolon. For example, enableidentityInsert=true; excludeColTypes=timestamp would replicate identity column values and have the target database supply timestamp values.
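For example, a SQL Server target that replicates identity values and lets the target supply timestamp values could be defined as follows (the connection details and table mapping are placeholders):

```tql
CREATE TARGET WriteSQLServer USING DatabaseWriter (
  ConnectionURL: 'jdbc:sqlserver://192.168.1.90:1433;DatabaseName=mydb',
  Username: 'striim',
  Password: '******',
  Tables: 'dbo.orders,dbo.orders',
  VendorConfiguration: 'enableidentityInsert=true; excludeColTypes=timestamp'
) INPUT FROM OrdersStream;
```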

Note

PostgreSQL does not allow NULL (\0x00) character values (not to be confused with database NULLs) in text columns. If writing to PostgreSQL from a source that contains such values, contact Striim support for a workaround.

Database Writer sample application

The following example uses an input stream of a user-defined type. When the input is the output of a CDC or DatabaseReader source, see Replicating data from one Oracle instance to another.

The following TQL will write to a MySQL table created as follows in MySQL database mydb:

CREATE TABLE mydb.testtable (merchantId char(36), dateTime datetime, amount decimal(10,2), zip char(5));

The striim user must have insert permission on mydb.testtable.

CREATE SOURCE PosSource USING FileReader (
  directory:'Samples/PosApp/AppData',
  wildcard:'PosDataPreview.csv',
  positionByEOF:false
)
PARSE USING DSVParser (
  header:yes
)
OUTPUT TO RawStream;
 
CREATE CQ CsvToPosData
INSERT INTO PosDataStream partition by merchantId
SELECT TO_STRING(data[1]) as merchantId,
       TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,
       TO_DOUBLE(data[7]) as amount,
       TO_STRING(data[9]) as zip
FROM RawStream;

CREATE TARGET WriteMySQL USING DatabaseWriter (
  connectionurl: 'jdbc:mysql://192.168.1.75:3306/mydb',
  Username:'striim',
  Password:'******',
  Tables: 'mydb.testtable'
) INPUT FROM PosDataStream;

Creating the checkpoint table

When recovery is not enabled, there is no need to create the checkpoint table.

When recovery is enabled, DatabaseWriter uses the table specified by the CheckpointTable property to store information used to ensure that there are no missing or duplicate events after recovery (see Recovering applications). Before starting DatabaseWriter with recovery enabled, use the following DDL to create the table, and grant insert, update, and delete privileges to the user specified in the Username property. The table and column names are case-sensitive; do not change them.
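For example, on Oracle the grants might look like this (the user name striim is a placeholder; adjust the statement syntax to your target database):

```sql
-- Hypothetical user name; run after creating the CHKPOINT table
GRANT INSERT, UPDATE, DELETE ON CHKPOINT TO striim;
```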

Db2 for z/OS:

CREATE TABLE CHKPOINT (
  ID VARCHAR(100) PRIMARY KEY NOT NULL,
  SOURCEPOSITION BLOB,
  PENDINGDDL DECIMAL(1),
  DDL CLOB);

HP NonStop SQL/MX (replace <catalog>.<schema> with the catalog and schema in which to create the table):

CREATE TABLE <catalog>.<schema>.CHKPOINT (
  ID VARCHAR(100) NOT NULL NOT DROPPABLE PRIMARY KEY,
  SOURCEPOSITION VARCHAR(30400),
  PENDINGDDL NUMERIC(1),
  DDL VARCHAR(2000)
) ATTRIBUTES BLOCKSIZE 32768;

MySQL:

CREATE TABLE CHKPOINT (
  id VARCHAR(100) PRIMARY KEY, 
  sourceposition BLOB, 
  pendingddl BIT(1), 
  ddl LONGTEXT);

Oracle:

CREATE TABLE CHKPOINT (
  ID VARCHAR2(100) PRIMARY KEY, 
  SOURCEPOSITION BLOB, 
  PENDINGDDL NUMBER(1), 
  DDL CLOB);

PostgreSQL:

create table chkpoint (
  id character varying(100) primary key,
  sourceposition bytea,
  pendingddl numeric(1), 
  ddl text);

SQL Server:

CREATE TABLE CHKPOINT (
  id NVARCHAR(100) PRIMARY KEY,
  sourceposition VARBINARY(MAX), 
  pendingddl BIT, 
  ddl VARCHAR(MAX));

Sybase

CREATE TABLE CHKPOINT (
  id VARCHAR(100) PRIMARY KEY NOT NULL,
  sourceposition IMAGE, 
  pendingddl NUMERIC, 
  ddl TEXT);

Database Writer data type support and correspondence

Use the following when the input stream is of a user-defined type. (See the Change Data Capture Guide when the input is the output of a CDC or DatabaseReader source.)

Most Striim data types can map to any one of several column types in the target DBMS.

See also Data type support & mapping for schema conversion & evolution.

java.lang.Byte

  • Cassandra: blob
  • Db2 for z/OS: BINARY, BLOB, VARBINARY
  • MariaDB / MySQL: BIGINT, LONGTEXT, MEDIUMINT, MEDIUMTEXT, SMALLINT, TEXT, TINYINT
  • Oracle: INT, NUMBER

java.lang.Double

  • Cassandra: double
  • Db2 for z/OS: DOUBLE
  • MariaDB / MySQL: DOUBLE, REAL
  • Oracle: BINARY_DOUBLE, BINARY_FLOAT, FLOAT, NUMBER

java.lang.Float

  • Cassandra: float
  • Db2 for z/OS: REAL
  • MariaDB / MySQL: FLOAT
  • Oracle: BINARY_DOUBLE, BINARY_FLOAT, FLOAT, NUMBER

java.lang.Integer

  • Cassandra: int
  • Db2 for z/OS: INTEGER
  • MariaDB / MySQL: BIGINT, INT, MEDIUMINT, SMALLINT, TINYINT
  • Oracle: INT, NUMBER

java.lang.Long

  • Cassandra: bigint
  • Db2 for z/OS: BIGINT
  • MariaDB / MySQL: BIGINT, SMALLINT, TINYINT
  • Oracle: INT, NUMBER

java.lang.Short

  • Cassandra: int
  • Db2 for z/OS: SMALLINT
  • MariaDB / MySQL: BIGINT, SMALLINT, TINYINT
  • Oracle: INT, NUMBER

java.lang.String

  • Cassandra: varchar
  • Db2 for z/OS: CHAR, CLOB, DBCLOB, GRAPHIC, VARCHAR, VARGRAPHIC
  • MariaDB / MySQL: CHAR, TINYTEXT, VARCHAR
  • Oracle: CHAR, NCHAR, NVARCHAR, VARCHAR, VARCHAR2, VARRAY

org.joda.time.DateTime

  • Cassandra: timestamp
  • Db2 for z/OS: DATE, TIME, TIMESTAMP, TIMESTAMP WITH TIMEZONE
  • MariaDB / MySQL: DATE, DATETIME, TIMESTAMP, YEAR
  • Oracle: DATE, TIMESTAMP

java.lang.Byte

  • PostgreSQL: not supported
  • SAP HANA: BLOB, VARBINARY
  • SQL Server: BIGINT, SMALLINT, TEXT, TINYINT

java.lang.Double

  • PostgreSQL: double precision
  • SAP HANA: DOUBLE, FLOAT
  • SQL Server: FLOAT

java.lang.Float

  • PostgreSQL: float
  • SAP HANA: FLOAT, REAL
  • SQL Server: FLOAT, REAL

java.lang.Integer

  • PostgreSQL: integer, serial
  • SAP HANA: INTEGER
  • SQL Server: BIGINT, NUMERIC, SMALLINT, TINYINT

java.lang.Long

  • PostgreSQL: bigint, bigserial
  • SAP HANA: BIGINTEGER
  • SQL Server: BIGINT, SMALLINT, TINYINT

java.lang.Short

  • PostgreSQL: smallint, smallserial
  • SAP HANA: SMALLINT
  • SQL Server: BIGINT, SMALLINT, TINYINT

java.lang.String

  • PostgreSQL: character, character varying, date, numeric, text, timestamp with timezone, timestamp without timezone
  • SAP HANA: ALPHANUM, NVARCHAR, VARCHAR
  • SQL Server: CHAR, NCHAR, NVARCHAR, TEXT, UNIQUEIDENTIFIER, VARCHAR, XML

org.joda.time.DateTime

  • PostgreSQL: timestamp with timezone, timestamp without timezone
  • SAP HANA: DATE, SECONDDATE, TIME, TIMESTAMP
  • SQL Server: DATE, DATETIME, DATETIME2, TIME

Runtime considerations for Database Writer

In Db2 for z/OS, table and column names may not exceed 30 characters.