Striim Platform 5.0 documentation

Writers overview

The following is a summary of writer capabilities. For additional information, see the documentation topic for each individual writer.

Note

Unless otherwise specified, Striim does not support writing to sources or cloud services whose endpoints are located in AWS GovCloud, Azure Government, or Google for Government.

Writer capabilities

| writer | input stream type(s) | supports replication[a] | supports Database Reader auto-quiesce | output(s) | DDL support | parallel threads | recovery[b] |
|---|---|---|---|---|---|---|---|
| ADLS Gen2 Writer | JSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | optional rollover on schema evolution | no | A1P |
| Azure Blob Writer | JSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | - | no | A1P |
| Azure Event Hub Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | - | no | A1P (default) or E1P[d] |
| Azure Synapse Writer | user-defined, WAEvent[e] | yes | yes | Azure Synapse table(s) | schema evolution | yes | A1P |
| BigQuery Writer | user-defined, WAEvent[e] | yes | yes | BigQuery table(s)[f] | schema evolution | yes | A1P |
| Cassandra Cosmos DB Writer | user-defined, WAEvent[e] | yes | yes | Cosmos DB Cassandra API tables | - | yes | E1P[d][g] |
| Cassandra Writer | user-defined, WAEvent[e] | yes | yes | Cassandra tables | - | yes | E1P[d][h] |
| Cloudera Hive Writer | user-defined, WAEvent[e] | no | no | Hive table(s)[f] | - | no | A1P |
| Cosmos DB Writer | user-defined, JSONNodeEvent, WAEvent[e] | yes | yes | Cosmos DB documents | - | yes | A1P[i] |
| Database Writer | user-defined, WAEvent[e] | yes | yes | JDBC to table(s) in a supported DBMS[f] | schema evolution | yes | E1P[d] |
| Databricks Writer | user-defined, WAEvent[e] | yes | yes | Delta Lake tables in Databricks | schema evolution | yes | A1P |
| Db2 for z/OS | see Database Writer, above | | | | | | |
| Fabric Data Warehouse Writer | user-defined, WAEvent[j] | yes | yes | data warehouse tables in Fabric | schema evolution | yes | A1P |
| Fabric Lakehouse File Writer | JSONNodeEvent, ParquetEvent[k], user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | optional rollover on schema evolution | no | A1P |
| Galera | see Database Writer, above | | | | | | |
| GCS Writer | JSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | optional rollover on schema evolution | yes | A1P |
| Google PubSub Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | - | no | A1P |
| Hazelcast Writer | user-defined, WAEvent[e] | yes | no | Hazelcast map(s)[f] | - | no | A1P |
| HBase Writer | user-defined, WAEvent[e] | yes | no | HBase table(s)[f] | - | yes | A1P |
| HDFS Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | no | Avro, delimited text, JSON, XML | optional rollover on schema evolution | no | A1P |
| Hive Writer | user-defined, WAEvent[e] | yes (when using SQL MERGE) | yes | Hive table(s)[f] | - | yes | E1P (when using MERGE) or A1P |
| Hortonworks Hive Writer | user-defined, WAEvent[e] | yes (when using SQL MERGE) | no | Hive table(s)[f] | - | no | E1P (when using MERGE) or A1P |
| HP NonStop SQL/MP & SQL/MX | see Database Writer, above | | | | | | |
| JMS Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | no | delimited text, JSON, XML | - | no | A1P |
| Kafka Writer | user-defined, JSONNodeEvent, WAEvent[e] | no, but see Using the Confluent or Hortonworks schema registry | yes | Avro, delimited text, JSON, XML | can track schema evolution using schema registry | yes | E1P[d] |
| Kinesis Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | - | no | E1P[d] |
| Kudu Writer | user-defined, WAEvent[e] | yes | yes | Kudu table(s)[f] | - | yes | A1P |
| MapR DB Writer | user-defined | no | no | MapR DB table | - | yes | A1P |
| MapR FS Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | no | Avro, delimited text, JSON, XML | | no | A1P |
| MapR Stream Writer | JSONNodeEvent, user-defined, WAEvent, XMLNodeEvent | no | no | Avro, delimited text, JSON, XML | | no | A1P |
| MariaDB | see Database Writer, above | | | | | | |
| MemSQL | see Database Writer, above | | | | | | |
| Microsoft Dataverse Writer | user-defined, WAEvent | yes | no | Dataverse entities | schema evolution | no | A1P |
| MongoDB Cosmos DB Writer | JSONNodeEvent, user-defined, WAEvent[e] | yes | yes | Cosmos DB documents | | no | A1P[i] |
| MongoDB Writer | JSONNodeEvent, user-defined, WAEvent[e] | yes | yes | MongoDB documents | - | yes | A1P or E1P[l] |
| MQTT Writer | user-defined | no | no | Avro, delimited text, JSON, XML | | no | A1P |
| MySQL | see Database Writer, above | | | | | | |
| Oracle Database | see Database Writer, above | | | | | | |
| PostgreSQL | see Database Writer, above | | | | | | |
| Redshift Writer | user-defined, WAEvent[e] | yes | yes | Redshift table(s)[f] | | yes | A1P |
| S3 Writer | JSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEvent | no | yes | Avro, delimited text, JSON, XML | optional rollover on schema evolution | yes | A1P |
| Salesforce Writer | user-defined (in APPENDONLY mode), WAEvent[e] | yes (in MERGE mode) | yes | Salesforce objects | no | yes | A1P |
| Salesforce Marketing Cloud Writer | user-defined, WAEvent | no | no | Salesforce Marketing Cloud tables | no | yes | A1P |
| SAP HANA | see Database Writer, above | | | | | | |
| ServiceNow Writer | user-defined, WAEvent[e] | yes (in MERGE mode) | | ServiceNow table(s) | | | A1P |
| Singlestore | see Database Writer, above | | | | | | |
| Snowflake Writer | user-defined, WAEvent[e] | yes | yes | Snowflake table(s)[f] | schema evolution | yes | A1P |
| Spanner Writer | user-defined, WAEvent[e] | yes | yes | Spanner table(s)[f] | schema evolution | yes | E1P[d] |
| SQL Server | see Database Writer, above | | | | | | |
| SysOut | any except Avro | n/a | no | log file or terminal | all input is written | no | A1P |
| Yellowbrick | see Database Writer, above | | | | | | |

[a] Supporting replication means that the target can replicate insert, update, and delete events from the source.

[b] A1P ("at-least-once processing") means that after recovery there may be some duplicate events written to the target. E1P ("exactly-once processing") means there will be no duplicate events.

[c] When the input stream is of type ParquetEvent, the writer must use Avro Formatter or Parquet Formatter.

[d] If the source is WAEvent from Incremental Batch Reader, recovery is A1P.

[e] WAEvent must be the output of a Database Reader, Incremental Batch Reader, or SQL CDC source.

[f] With an input stream of a user-defined type, output is to a single table or map. Output to multiple tables or maps requires source database metadata included in WAEvent.

[g] Primary key updates to source rows cannot be replicated.

[h] Primary key updates to source rows cannot be replicated.

[i] Not supported when the writer's input stream is the output of Cosmos DB Reader or Mongo Cosmos DB Reader in incremental mode.

[j] WAEvent must be the output of a Database Reader, Incremental Batch Reader, or SQL CDC source.

[k] When the input stream is of type ParquetEvent, the writer must use Avro Formatter or Parquet Formatter.

[l] See notes for the Checkpoint Collection property.
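To illustrate the combination that footnotes [a] and [e] describe, the sketch below pairs a SQL CDC source with Database Writer: the WAEvent stream produced by the CDC reader carries the source metadata that lets the writer replicate inserts, updates, and deletes to the target tables. All names, hosts, and property values here are hypothetical placeholders, not a complete or definitive configuration; see the Database Writer documentation for the full property reference.

```sql
-- Sketch only: component names, hosts, and credentials are placeholders.
-- A SQL CDC source emits WAEvents (footnote [e]) ...
CREATE SOURCE OrdersCDC USING MySQLReader (
  ConnectionURL: 'jdbc:mysql://source-host:3306/sales',
  Username: 'striim',
  Password: '********',
  Tables: 'sales.orders'
)
OUTPUT TO OrdersStream;

-- ... which Database Writer can replicate (footnote [a]) to target tables.
CREATE TARGET OrdersTarget USING DatabaseWriter (
  ConnectionURL: 'jdbc:postgresql://target-host:5432/sales',
  Username: 'striim',
  Password: '********',
  Tables: 'sales.orders,public.orders'
)
INPUT FROM OrdersStream;
```

With a WAEvent input stream like this, the Tables property can map multiple source tables to target tables (footnote [f]); with a user-defined input type, output would be limited to a single table.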