The Feldera Blog

An unbounded stream of technical articles from the Feldera team

Why incremental aggregates are difficult - part 1

Many traditional query engines can handle only some kinds of queries, or only some kinds of input updates, incrementally, reverting to full recomputation for unsupported operations. Feldera uniformly handles insertions, deletions, and stacked views with arbitrary monotone and non-monotone queries.
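To make the teaser concrete, here is a minimal sketch of the idea behind handling insertions and deletions uniformly: represent each change as a record with a weight (+1 for an insertion, -1 for a deletion) and update the aggregate from the delta alone. This toy per-group counter is illustrative only, not Feldera's actual implementation; the group names are made up.

```rust
use std::collections::HashMap;

// Apply one weighted delta to a maintained per-group count.
// Insertions carry weight +1, deletions weight -1, so both are
// handled by the same code path, in time independent of table size.
fn apply_delta(counts: &mut HashMap<String, i64>, key: &str, weight: i64) {
    let entry = counts.entry(key.to_string()).or_insert(0);
    *entry += weight;
    if *entry == 0 {
        // A fully retracted group disappears from the view.
        counts.remove(key);
    }
}

fn main() {
    let mut per_group_count: HashMap<String, i64> = HashMap::new();
    apply_delta(&mut per_group_count, "eu", 1);
    apply_delta(&mut per_group_count, "eu", 1);
    apply_delta(&mut per_group_count, "us", 1);
    apply_delta(&mut per_group_count, "us", -1); // deletion retracts the row
    assert_eq!(per_group_count.get("eu"), Some(&2));
    assert!(per_group_count.get("us").is_none());
}
```

The point of the weight-based representation is that a deletion is not a special case: it flows through the same operator as an insertion, just with the opposite sign.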

Nobody ever got fired for using a struct

Rust structs are usually the obvious way to represent data. But when you serialize wide SQL tables with hundreds of nullable columns, that "obvious" layout can quietly double your storage cost. Fixing it turns out to be a surprisingly simple trick.
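One way the cost can show up, sketched below under assumptions of my own (the column count, null ratio, and the bitmap alternative are hypothetical, not necessarily the trick the post describes): in a fixed, zero-copy style layout, every `Option<i64>` field occupies its full in-memory size even when it is `None`, whereas a presence bitmap pays one bit per column plus space only for populated values.

```rust
use std::mem::size_of;

// Bytes per row if each nullable column is stored as a full Option<i64>
// slot, populated or not (the fixed-layout case).
fn fixed_row_bytes(columns: usize) -> usize {
    columns * size_of::<Option<i64>>()
}

// Bytes per row with a presence bitmap: one bit per column, plus
// 8 bytes for each column that actually holds a value.
fn bitmap_row_bytes(columns: usize, non_null: usize) -> usize {
    columns.div_ceil(8) + non_null * size_of::<i64>()
}

fn main() {
    const COLUMNS: usize = 300; // hypothetical wide table
    const NON_NULL: usize = 30; // suppose 10% of columns are populated

    let fixed = fixed_row_bytes(COLUMNS);
    let bitmap = bitmap_row_bytes(COLUMNS, NON_NULL);
    println!("fixed layout:  {fixed} bytes per row");
    println!("bitmap layout: {bitmap} bytes per row");
    // For mostly-null rows the bitmap layout is several times smaller.
    assert!(bitmap * 2 < fixed);
}
```

With mostly-null rows, the empty `Option` slots dominate the fixed layout, which is where the "quietly double your storage cost" effect comes from.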

Can your incremental compute engine do this?

Handle 217 joins, 27 aggregations, and 287 linear operators on a single 16-core machine using 15GB RAM at steady state. Here's the proof.

Introducing Feldera Health

Introducing Feldera Health: a lightweight dashboard that shows your infrastructure status without Kubernetes access. Get quick answers when pipelines fail.

Feldera in 2025: Building the Future of Incremental Compute

Feldera breaks a 50-year barrier with incremental computation: a better way to build sophisticated analytics at the speed your data actually changes, in real time.

Introducing Feldera's Visual Profiler

We built a browser-based visualizer to dig into a Feldera pipeline's performance metrics. This tool can help users troubleshoot performance problems and diagnose bottlenecks.

Constant folding in Calcite

In this article we describe in detail how the Calcite SQL compiler framework optimizes constant expressions during compilation.
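As a taste of what the article covers, here is a minimal constant folder over a toy expression tree, in the spirit of what a SQL optimizer does at the expression level. This sketch is illustrative only and does not use Calcite's APIs; the `Expr` type and `fold` function are inventions for this example.

```rust
#[derive(Debug, PartialEq)]
enum Expr {
    Const(i64),
    Column(String),
    Add(Box<Expr>, Box<Expr>),
}

// Recursively fold constants: Add(Const(a), Const(b)) becomes
// Const(a + b); subtrees that mention a column are left alone.
fn fold(e: Expr) -> Expr {
    match e {
        Expr::Add(l, r) => match (fold(*l), fold(*r)) {
            (Expr::Const(a), Expr::Const(b)) => Expr::Const(a + b),
            (l, r) => Expr::Add(Box::new(l), Box::new(r)),
        },
        other => other,
    }
}

fn main() {
    // The constant subexpression in `x + (1 + 2)` folds to 3 at
    // compile time; the column reference survives untouched.
    let e = Expr::Add(
        Box::new(Expr::Column("x".into())),
        Box::new(Expr::Add(Box::new(Expr::Const(1)), Box::new(Expr::Const(2)))),
    );
    assert_eq!(
        fold(e),
        Expr::Add(Box::new(Expr::Column("x".into())), Box::new(Expr::Const(3)))
    );
}
```

A real compiler like Calcite does this over relational expressions with type- and nullability-aware evaluation, which is exactly the detail the article digs into.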

The Dirty Secret of Incremental View Maintenance: You Still Need Batch

Using a simple example, we show why even a powerful IVM system like Feldera requires special care to make backfill efficient.

How Feldera Customers Slash Cloud Spend (10x and beyond)

By only needing compute resources proportional to the size of the change, instead of the size of the whole dataset, businesses can dramatically slash compute spend for their analytics.