Striim Platform 5.0 documentation

Release notes

The following are the release notes for Striim Platform 5.0.6.

Requirements

CentOS 7.9 and RHEL versions prior to 7.9, which were certified for Striim 4.2.0, are not certified for 5.0. We strongly recommend that you upgrade to a certified operating system before upgrading to 5.0 (see System requirements).

Changes that may require modification of your TQL code, workflow, or environment

  • Starting with release 5.0, Striim Platform requires JDK 11. Due to this change, once you upgrade to Striim 5.0, it will not be possible to downgrade to a previous release.

  • Starting with Striim 4.2.0.22, the internal Kafka version is 3.6.2. When you upgrade from an earlier release, the internal Kafka instance will be upgraded. If you have created any Kafka property sets for the internal Kafka instance, you will need to edit their kafkaversion property accordingly.
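
    For example, a minimal sketch of a Kafka property set with the updated kafkaversion value (the property set name and broker addresses are illustrative, not values from your environment):

      -- InternalKafkaProps and the addresses below are illustrative
      CREATE PROPERTYSET InternalKafkaProps (
        zk.address: 'localhost:2181',
        bootstrap.brokers: 'localhost:9092',
        kafkaversion: '3.6.2'
      );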

  • Kafka Reader and Kafka Writer versions 0.8, 0.9, and 0.10 are no longer supported. When you upgrade to Striim 5.x, any applications using those versions will automatically be updated to use Kafka Reader 2.1 or Kafka Writer 2.1 (which are backward-compatible with the old versions).

  • As part of upgrading to 5.0, a Snowflake Writer with Streaming Upload enabled and Authentication Type set to OAuth or Password will have that property value switched to Key Pair.

  • Starting with release 4.2.0, TRUNCATE commands are supported by schema evolution (see Handling schema evolution). If you do not want to delete events in the target (for example, because you are writing to a data warehouse in Append Only mode), precede the writer with a CQ whose select statement is SELECT * FROM <input stream name> WHERE META(x, OperationName) != 'Truncate'; (replacing <input stream name> with the name of the writer's input stream). Note that there will be no record in the target that the affected events were deleted.
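
    A minimal TQL sketch of such a CQ (the CQ and stream names are illustrative):

      -- FilterTruncates, FilteredStream, and SourceStream are illustrative names
      CREATE CQ FilterTruncates
      INSERT INTO FilteredStream
      SELECT * FROM SourceStream x
      WHERE META(x, OperationName) != 'Truncate';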

  • The Fetch Size property has been removed from MSJet.

  • OJet requires Oracle Instant Client version 21.6. See Install the Oracle Instant Client in a Striim server or Install the Oracle Instant Client in a Forwarding Agent.

  • MongoDB Reader no longer supports MongoDB versions prior to 3.6.

  • MongoDB Reader reads from MongoDB change streams rather than the oplog. Applications created in releases prior to 4.2.0 will continue to read from the oplog after you upgrade. To switch to change streams:

    1. Export the application to a TQL file.

    2. Drop the application.

    3. Revise the TQL as necessary to support new features, for example, changing the Connection URL to read from multiple shards (see the sketch after this list).

    4. Import the TQL to recreate the application.
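
    For example, a hedged sketch of step 3, revising the Connection URL to read from multiple shards (the host names, credentials, and collection name are illustrative):

      -- host names, credentials, and the collection below are illustrative
      CREATE SOURCE MongoShardedSource USING MongoDBReader (
        ConnectionURL: 'mongodb://shard1.example.com:27017,shard2.example.com:27017',
        Username: 'striim',
        Password: '********',
        Collections: 'mydb.orders'
      )
      OUTPUT TO MongoStream;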

  • Databricks Writer's Upload Policy default eventcount value has been increased from 10000 to 100000, and the Hostname property has been removed since the host name can be retrieved from the connection URL. (See Databricks Writer properties.)
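
    With the new default, an explicit Upload Policy value would look like the following sketch (the interval shown is illustrative):

      -- the interval value is illustrative
      uploadpolicy: 'eventcount:100000,interval:60s'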

  • After upgrading to 5.0.6, a vector embeddings app may terminate with a "java.lang.ClassCastException: class java.util.HashMap cannot be cast to class com.webaction.security.PasswordPlatform" error (known issue DEV-51079). To resolve this, delete and re-create the vector embeddings generator (see Vector Embeddings).

Customer-reported issues fixed in release 5.0.6

  • DEV-27262: path /opt/striim in striim-node package is not relocatable

  • DEV-37704: security scanners sending ZMQ vulnerability packets stop Streams from processing events

  • DEV-48420: web UI loading slow

  • DEV-49083: GG Trail Reader halts due to daylight savings time 'gap'

  • DEV-49368: issues when upgrading 4.2.1.1 to 5.0.2

  • DEV-49407: Database Writer: slow performance on benchmark test

  • DEV-49533: MSJet error "Cause: 2505 : Failure in Executing Queries External Exception invalid vector subscript"

  • DEV-49715: MSJet error "java.lang.ClassCastException: class java.lang.Integer cannot be cast to class java.lang.String"

  • DEV-49796: MSJET error "Terminated due to Exception with Sequence number ... Error while processing a LogRecord"

  • DEV-50583: restrict external link access in Striim

  • DEV-50767: Kafka Reader with Avro parser: performance degrades over time

  • DEV-50791: restrict external link access in Striim

  • DEV-51046: can't disable Striim Copilot

Customer-reported issues fixed in release 5.0.2.2

  • DEV-49364: Unable to view web UI page due to timezone using Korean characters

  • DEV-49114: PostgreSQL Reader fails with "Failed to create Type for the table - null.null"

Resolved issues

The following previously reported known issues were fixed in release 5.0.2:

  • DEV-12638: Oracle Reader "Last Observed Timestamp" has incorrect time zone

  • DEV-35539: MySQL ALTER TABLE ... ADD COLUMN with AFTER clause issue

  • DEV-35681: MySQL ALTER TABLE ... ADD COLUMN with INSTANT clause issue

  • DEV-44412: Enabling persistence for a stream in Flow Designer can cause errors, such as the source disappearing from the app.

Known issues from past releases

  • DEV-5701: Dashboard queries not dropped with the dashboard or overwritten on import

    When you drop a dashboard, its queries are not dropped. If you drop and re-import a dashboard, the queries in the JSON file do not overwrite those already in Striim.

    Workaround: drop the namespace, or run LIST NAMEDQUERIES and manually drop each query it returns, as in the sketch below.
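
    A minimal console sketch of the first option (myDashboardNS is an illustrative namespace name, and CASCADE, to drop the objects the namespace contains, is an assumption about the syntax):

      -- myDashboardNS is illustrative; CASCADE is an assumed option
      DROP NAMESPACE myDashboardNS CASCADE;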

  • DEV-8142: SORTER objects do not appear in the UI

  • DEV-8933: DatabaseWriter shows no error in UI when MySQL credentials are incorrect

    If your DatabaseWriter Username or Password values are incorrect, you will see no error in the UI, but no data will be written to MySQL. You will see errors in webaction.server.log regarding DatabaseWriter containing "Failure in Processing query" and "command denied to user."

  • DEV-11305: DatabaseWriter needs separate checkpoint table for each node when deployed on multiple nodes

  • DEV-17653: Import of custom Java function fails

    IMPORT STATIC may fail. Workaround: use lowercase import static.
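
    For example (com.example.MyFunctions is a hypothetical class; note the lowercase keywords):

      -- com.example.MyFunctions is a hypothetical class
      import static com.example.MyFunctions.*;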

  • DEV-19903: When DatabaseReader Tables property uses wildcard, views are also read

    Workaround: use Excluded Tables to exclude the views.
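
    For example, a sketch of the relevant DatabaseReader properties (the schema and view names are illustrative):

      -- HR.% and the view names are illustrative
      Tables: 'HR.%',
      ExcludedTables: 'HR.EMP_V;HR.DEPT_V'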

Third-party APIs, clients, and drivers used by readers and writers

  • Azure Event Hub Writer uses the azure-eventhubs API version 3.0.2.

  • Azure Synapse Writer uses the bundled SQL Server JDBC driver.

  • BigQuery Writer uses google-cloud-bigquery version 2.42.3 and google-cloud-bigquerystorage version 3.9.1.

  • Cassandra Cosmos DB Writer uses cassandra-jdbc-wrapper version 3.1.0.

  • Cassandra Writer uses cassandra-java-driver version 3.6.0.

  • Cloudera Hive Writer uses hive-jdbc version 3.1.3.

  • CosmosDB Reader uses Microsoft Azure Cosmos SDK for Azure Cosmos DB SQL API 4.54.0.

  • CosmosDB Writer uses documentdb-bulkexecutor version 2.3.0.

  • Databricks Writer uses Databricks JDBC driver version 2.6.29. It also uses the following:

    • for authentication using Azure Active Directory and staging in ADLS Gen2: azure-identity version 1.5.3

    • for staging in ADLS Gen2: azure-storage-blob version 12.18.0

    • for staging in DBFS: databricks-rest-client version 3.2.2

    • for staging in S3: aws-java-sdk-s3 version 1.12.589 and aws-java-sdk-sts version 1.11.320

  • Derby: the internal Derby instance is version 10.9.1.0.

  • Elasticsearch: the internal Elasticsearch cluster is version 5.6.4.

  • Fabric Data Warehouse Writer uses mssql-jdbc version 12.8.1.jre8, msal4j version 1.17.1, and azure-storage version 4.4.0.

  • Fabric Lakehouse File Writer uses httpclient version 4.5.13.

  • GCS Writer uses the google-cloud-storage client API version 1.106.0.

  • Google PubSub Writer uses the google-cloud-pubsub client API version 1.110.0.

  • Hazelcast is version 5.3.5.

  • HBase Writer uses HBase-client version 2.4.13.

  • Hive Writer and Hortonworks Hive Writer use hive-jdbc version 3.1.3.

  • The HP NonStop readers use OpenSSL 1.0.2n.

  • JMS Reader and JMS Writer use the JMS API 1.1.

  • Kafka: the internal Kafka cluster is version 3.6.2.

  • Kudu: the bundled Kudu Java client is version 1.13.0.

  • Kinesis Writer uses aws-java-sdk-kinesis version 1.11.240.

  • MapR DB Writer uses hbase-client version 2.4.10.

  • MapR FS Reader and MapR FS Writer use Hadoop-client version 3.3.4.

  • MariaDB uses maria-binlog-connector-java-0.2.3-WA1.jar and mariadb-java-client-2.4.3.jar.

  • MariaDB Xpand uses mysql-binlog-connector-java-0.21.0.jar and mysql-connector-java-8.0.30.jar.

  • Mongo Cosmos DB Reader, MongoDB Reader, and MongoDB Writer use mongodb-driver-sync version 4.8.2.

  • MySQL uses mysql-binlog-connector-java-0.21.0.jar and mysql-connector-java version 8.0.27.

  • Oracle: the bundled Oracle JDBC driver is ojdbc-21.6.jar.

  • PostgreSQL: the bundled PostgreSQL JDBC 4.2 driver is postgresql-42.4.0.jar.

  • Redshift Writer uses aws-java-sdk-s3 1.11.320.

  • S3 Reader and S3 Writer use aws-java-sdk-s3 1.11.320.

  • Salesforce Reader uses the Force.com REST API version 53.1.0.

  • Salesforce Writer: when Use Bulk Mode is True, uses Bulk API 2.0 Ingest; when Use Bulk Mode is False, uses the Force.com REST API version 53.1.0.

  • Snowflake Reader: the bundled Snowflake JDBC driver is snowflake-jdbc-3.18.0.jar.

  • Snowflake Writer: when Streaming Upload is False, uses snowflake-jdbc-3.18.0.jar; when Streaming Upload is True, uses snowflake-ingest-sdk version 2.2.2.

  • Spanner Writer uses the google-cloud-spanner client API version 1.28.0 and the bundled JDBC driver is google-cloud-spanner-jdbc version 1.1.0.

  • SQL Server: the bundled Microsoft SQL Server JDBC driver is mssql-jdbc-7.2.2.jre8.jar.

  • Yugabyte: uses the bundled PostgreSQL JDBC driver.