Incremental Update 18

Gerd Zellweger, Head of Engineering / Co-Founder
February 26, 2025

We've just shipped Feldera v0.38. As always, the release is packed with new and useful improvements; highlights include a new Redis output connector and support for input connector orchestration. We'll discuss both additions in more detail in this post.

Redis

v0.38 adds a new output connector that sends data to Redis, a popular key-value store. Here is a simple example from the docs that sets up a Redis sink populated from a Feldera view:

create table t0 (c0 int, c1 int, c2 varchar);

create materialized view v0 with (
'connectors' = '[
  {
    "transport": {
      "name": "redis_output",
      "config": {
        "connection_string": "redis://localhost:6379/0",
        "key_separator": ":"
      }
    },
    "format": {
        "name": "json",
        "config": {
          "key_fields": ["c0","c2"]
        }
    }
  }
]'
) as select * from t0;

With this pipeline, any insert into table t0 results in a key-value entry being written to Redis. For example, executing the following ad-hoc query in Feldera:

INSERT INTO t0 VALUES (1, 1, 'first')

will add the following key-value pair to the Redis instance:

Key: 1:first

Value: "{\"c0\":1,\"c1\":1,\"c2\":\"first\"}\n"

Input Connector Orchestration

A frequently requested feature from our customers is the ability to orchestrate input connectors in Feldera when dealing with many different data sources. With v0.38 this has become much easier, thanks to the newly supported start_after and labels JSON attributes. Here is a simple example from our updated docs:

create table price (
    part bigint not null,
    vendor bigint not null,
    price integer
) WITH ('connectors' = '[{
    "labels": "price.backfill",
    "transport": {
        "name": "url_input", "config": {"path": "https://feldera-basics-tutorial.s3.amazonaws.com/price.json"  }
    },
    "format": { "name": "json" }
},
{
    "start_after": ["price.backfill"],
    "format": {"name": "json"},
    "transport": {
        "name": "kafka_input",
        "config": {
            "topics": ["price"],
            "bootstrap.servers": "redpanda:9092",
            "auto.offset.reset": "earliest"
        }
    }
}]');

In this case, the price table has two connectors:

  • A JSON file fetched from a URL, labeled price.backfill. Feldera starts ingesting from this connector immediately once the pipeline starts.
  • A Kafka topic that we configure (using the start_after attribute) to start ingesting only after the first connector has been fully ingested.

This feature allows you to easily manage complex backfill scenarios with data arriving from multiple sources.
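Once the URL backfill has completed, the Kafka connector takes over. Below is a minimal sketch of feeding incremental updates into the price topic using the kafka-python client; it assumes the Redpanda broker configured as redpanda:9092 inside the pipeline is also reachable from the host as localhost:9092, and that the connector uses the default insert/delete JSON update format.

import json
from kafka import KafkaProducer

# Assumes the broker "redpanda:9092" from the connector config is exposed
# to the host as localhost:9092.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each message wraps a row matching the price table schema in an "insert"
# envelope (the default JSON update format). The Kafka connector only starts
# consuming these after the price.backfill connector has finished.
producer.send("price", {"insert": {"part": 1, "vendor": 2, "price": 30}})
producer.flush()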

As always, let us know your thoughts about the new release in our Slack or Discord channels, and stay tuned for more updates.
