Meet the Feldera storage engine

Ben Pfaff, Chief Engineer / Co-Founder | August 6, 2024

By popular demand, here at last is a blog post about Feldera's storage design!

Incremental computation is fundamentally a time-space tradeoff. That is, Feldera maintains state that allows it to update views by looking only at new changes, entirely avoiding recomputation over historical data. Because of operations like joins, certain aggregates, and the DISTINCT operator, Feldera may need to keep a large amount of state around for later use. In many cases, that state can grow beyond the amount of memory that users can affordably provision in their cloud deployments. Therefore, Feldera needs to efficiently support using disks and SSDs for the data that it uses for incremental computation. In the context of Feldera, we call this feature simply "storage".
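
To make the tradeoff concrete, here is a toy sketch in Rust (with made-up types, not anything from Feldera's code base) of incrementally maintaining a per-key count from a stream of weighted changes. The map of counts is exactly the kind of state that can grow large and may need to spill to storage:

```rust
use std::collections::HashMap;

/// Illustrative only: apply a batch of weighted changes (+1 for an
/// insert, -1 for a delete) to a per-key count, instead of recomputing
/// the counts from all historical data.
fn apply_changes(counts: &mut HashMap<String, i64>, changes: &[(String, i64)]) {
    for (key, weight) in changes {
        *counts.entry(key.clone()).or_insert(0) += *weight;
    }
    // Keys whose counts return to zero carry no information and can be dropped.
    counts.retain(|_, count| *count != 0);
}
```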

Requirements

We needed some features beyond just the ability to store data:

  1. Efficient lookup for keys and ranges of keys. Many of our internal operators work on ranges of keys and expect to be able to access sequential keys quickly.
  2. Fast checkpointing. For fault tolerance, Feldera needs to be able to quickly persist all of the tables in a pipeline. We also needed a simple form of versioning for tables, either built-in or layered over storage.
  3. Integration with Rust, the language that Feldera is implemented in, including the ability to store data aligned for in-place access.
  4. Confidence in design and implementation maturity.
  5. Performance.

With these requirements in mind, we considered existing storage solutions, primarily key-value embedded databases:

  • RocksDB. We experimented extensively with RocksDB, integrating it into our code base, but we rejected it for reasons already explained in a prior blog post.
  • FoundationDB. We were eager for FoundationDB to work, since as a distributed database it would have given Feldera fault tolerance against hardware failure "for free", but we rejected it for some of the same reasons as RocksDB, including data alignment difficulties.
  • Other solutions, including some Rust crates that we decided were not mature or performant enough.

Rolling our own

Since we could not find a suitable pre-existing storage system, the remaining option was to build our own. Building a database, even a key-value embedded database, is a substantial endeavor, so we did not make this choice lightly. However, Feldera's internal architecture made some aspects of the job easier:

  • Log-structured tables. Batches of changes, including insertions, deletions, and updates, are incrementally added to tables, and later merged in background threads.
  • Shared-nothing architecture. Each Feldera worker thread processes an independent stream of data, so the storage layer can be per-thread as well.

For a choice of basic data structures, our requirement for reading ranges of keys led us to a tree-based storage approach. In particular, we adopted a design based on log-structured merge-trees (LSM trees), the data structure that underpins RocksDB and many other databases.

An LSM tree is particularly appropriate for Feldera because it so closely resembles the data structure that it already uses for in-memory storage, called a "spine". A spine, like an LSM tree, is a collection of sorted runs, also called "batches". Adding data to either data structure adds another run. Thus, we were able to convert our in-memory spines into on-storage LSM-like trees by implementing a batch type that was stored on disk rather than in memory.
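
As a rough sketch (using hypothetical types, not Feldera's actual API), a spine can be pictured as a list of immutable sorted runs behind a common batch interface, so that an on-disk batch implementation can slot in next to the in-memory one:

```rust
/// A minimal sketch of a batch: an immutable sorted run of key/value
/// data, regardless of whether it lives in memory or in a file.
trait Batch {
    type Key: Ord;
    type Val;

    /// Look up a single key within this sorted run.
    fn get(&self, key: &Self::Key) -> Option<&Self::Val>;
    /// Number of entries in the run.
    fn len(&self) -> usize;
}

/// A spine is just a collection of runs; adding data appends a run.
struct Spine<B: Batch> {
    runs: Vec<B>,
}

impl<B: Batch> Spine<B> {
    /// Existing runs are never modified; new data becomes a new run.
    fn push(&mut self, batch: B) {
        self.runs.push(batch);
    }

    /// A lookup may have to consult every run (newest first here),
    /// which is why merging runs matters for search performance.
    fn get<'a>(&'a self, key: &B::Key) -> Option<&'a B::Val> {
        self.runs.iter().rev().find_map(|run| run.get(key))
    }
}
```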

For this purpose, we implemented our own B-tree-like file format. The Feldera spine writes each batch of substantial size to an individual file. Batches inside Feldera are always created in sorted order, which allowed us to write these files sequentially, without any seeks and with minimal in-memory buffering. Because batches are never modified in place, the file format and the code that implements it do not need to make allowances for adding, removing, or modifying data.
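
The write path can be illustrated with a simplified sketch. This is not the actual Feldera file format (which also carries a B-tree-like index); it only shows the sequential, append-only pattern that already-sorted, never-modified batches make possible, with fixed-width u64 keys and i64 weights standing in for real records:

```rust
use std::fs::File;
use std::io::{self, BufWriter, Write};
use std::path::Path;

/// Write an already-sorted batch of (key, weight) records to a file in
/// one sequential pass, with no seeks and only a small write buffer.
fn write_batch(path: &Path, batch: &[(u64, i64)]) -> io::Result<()> {
    let mut out = BufWriter::new(File::create(path)?);
    for (key, weight) in batch {
        // Fixed-width records keep the layout simple and predictable.
        out.write_all(&key.to_le_bytes())?;
        out.write_all(&weight.to_le_bytes())?;
    }
    out.flush()
}
```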

Searching a spine, or an LSM tree, requires searching all the runs, which means that the number of runs is key to performance. Thus, these data structures support ways to reduce the number of runs, by merging existing runs. Merging runs requires only sequentially reading each of them, which is efficient both in memory and on storage.

Merging can benefit size as well as search performance, because with the Z-sets that Feldera uses, identical records merge by adding their weights instead of producing duplicates, or even "cancel out" entirely if their weights add to zero. In addition, if the associated query only works with recent data, a merge can delete older data that can no longer influence the results.
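
As an illustration (again with plain u64 keys and i64 weights standing in for real records), merging two sorted runs of a Z-set is a single sequential pass in which equal keys combine by adding their weights and entries whose weights sum to zero disappear:

```rust
use std::cmp::Ordering;

/// Merge two sorted runs of (key, weight) pairs. Equal keys add their
/// weights; pairs whose weights cancel to zero are dropped entirely.
fn merge_runs(a: &[(u64, i64)], b: &[(u64, i64)]) -> Vec<(u64, i64)> {
    let (mut i, mut j) = (0, 0);
    let mut out = Vec::with_capacity(a.len() + b.len());
    while i < a.len() && j < b.len() {
        match a[i].0.cmp(&b[j].0) {
            Ordering::Less => {
                out.push(a[i]);
                i += 1;
            }
            Ordering::Greater => {
                out.push(b[j]);
                j += 1;
            }
            Ordering::Equal => {
                let weight = a[i].1 + b[j].1;
                if weight != 0 {
                    out.push((a[i].0, weight)); // identical records consolidate
                }
                i += 1;
                j += 1;
            }
        }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}
```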

Development experience

We started prototyping our storage system in December. The first version was not ready for testing until past the end of January. After that, we had to revise the approach a number of times along a few different axes.

One axis was flexibility inside Feldera in terms of which batches were kept in memory versus on storage. In Feldera, operators pass batches from one to another as streams, with the batch type determined at compile time. We initially implemented new on-storage "file" batches alongside the existing memory-based "vector" batches, which fixed the location of each batch based on a choice made at compile time. This proved too inflexible, so we then implemented "fallback" batches that internally encapsulate either a "file" or a "vector" batch, chosen at runtime. This allowed us to easily adjust policies on where to store data.
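
The idea looks roughly like the following sketch, with hypothetical names and an invented spill threshold rather than Feldera's real policy:

```rust
use std::path::PathBuf;

/// Hypothetical sketch of a "fallback" batch: one type that wraps
/// either an in-memory vector batch or an on-storage file batch, so
/// the choice can be made at runtime instead of at compile time.
enum FallbackBatch<K: Ord, V> {
    Vector(Vec<(K, V)>), // small batches stay in memory
    File(PathBuf),       // large batches are written to a file
}

impl<K: Ord, V> FallbackBatch<K, V> {
    /// An invented policy: spill any batch with at least `threshold`
    /// entries. In a real system the data would be written out here;
    /// this sketch only records where it would go.
    fn new(data: Vec<(K, V)>, threshold: usize, path: PathBuf) -> Self {
        if data.len() >= threshold {
            FallbackBatch::File(path)
        } else {
            FallbackBatch::Vector(data)
        }
    }
}
```

Because the wrapper, rather than the compile-time batch type, decides where each batch lives, spill policies can be tuned without recompiling the pipeline.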

Another axis was performance. Our storage layer initially had poor performance, with some queries running 100× slower when storage was enabled. Some of this was due to too much movement between disk and memory because of the inflexible location, which we fixed as previously described. A lot of it was due to a poor caching layer, which we refined and improved over time. Another reason was that we had little intuition on which parts of the storage API were likely to be the most performance-sensitive, which we started to optimize by profiling queries that we cared about. We also found some unexpected performance costs, such as high locking overhead in the Rust crate for tracking metrics! With our improvements, some queries now run almost as fast with storage as without.

Correctness, on the other hand, has not so far proven particularly difficult to achieve. The Feldera storage format, and the reader and writer code for it, are quite simple, much simpler than a general-purpose key-value database. This made it straightforward to write good tests.

Results

As a result of our work on storage, Feldera can now run the queries that use the most memory with 25% to 75% less memory. Queries that make substantial use of storage do run 2× to 3× slower. We're making improvements to storage every week, so users can expect this performance penalty to decrease over time.

We often use queries from the Nexmark benchmark as part of our optimization work. These queries are good examples because they are engine-neutral and have implementations for several streaming engines. The following table shows how, for 100 million events, Feldera's performance and memory use compare with and without storage. The results fall into three groups:

  • Group 1, the bulk of our queries. These queries use substantial amounts of memory (10 GiB or more) without storage. For most of these queries, enabling storage reduces memory use by over 50% (the outlier is q17, which benefits only 19%). Enabling storage typically has a performance cost, from 2% up to 56%.
  • Group 2, containing q0, q1, and q2. These queries use hardly any memory (under 1 GiB) whether or not storage is enabled, because they do not store any state, and as a result they also run in about the same amount of time either way. Some of them do use more memory with storage enabled, by a factor greater than 2× in some cases, but the total memory use is still 1 GiB or less, so the difference is insignificant.

  • Group 3, containing q3, q8, and q10. These queries use small amounts of memory, which increase modestly when storage is enabled. The current Feldera policies for spilling data from memory to disk do not help these queries use less memory, but this may not be an issue since they use little anyhow.
[Table: storage results, 100 million events]

It's also worth looking at how performance and memory scale when we go farther, to 200 million events:

  • For group 1, with storage disabled, we would expect to see memory usage double and, with storage enabled, remain flat. This is approximately the case for q15, q17, q18, and q19.

Queries q4 and q7 did better than that, using about the same absolute amount of peak memory (even a little less) with 200 million events as with 100 million. This suggests that Feldera's SQL engine was able to make use of "lateness" with these queries to discard intermediate data when it was too old to influence future results. To test that, we reran these queries with the lateness optimization disabled. The runtime increased significantly in all cases. With storage disabled, peak memory approximately tripled, whereas with storage enabled it hardly changed. This evidence supports our hypothesis, since lateness allows less data to be processed, speeding up processing and reducing intermediate data volume.

Query q9 demonstrated unexpected behavior where memory usage increased with storage but not without. We haven't explored the reason for this yet.

  • Group 2 maintained its performance and minimal memory use, as one would expect.
  • In group 3, memory use and runtime roughly scaled with data volume, which suggests that the Feldera storage policies don't apply differently for 200 million events than for 100 million in these cases.
[Table: storage results, 200 million events]

Conclusion

With storage enabled, the Feldera stream processing engine can run queries that store large amounts of data using far less memory than otherwise needed. Storage comes with a performance penalty, but that penalty will continue to decline as optimization work continues.
