As 2023 draws to a close and 2024 begins, here is a summary of Speedb's innovations and activities in 2023.

During the year, Speedb released seven open-source versions, each including new features, performance improvements, and bug fixes.

Looking back at 2023


One of our main goals is to improve the user experience and simplicity of the Speedb storage engine, so that non-expert users can adopt it easily.

In this area, we introduced several improvements:

  • A new log parser tool that provides a fast and easy way to get insights about the storage engine. 
  • A new method to automatically configure memory options based on resources that the user allocates to the storage engine.
  • Live configuration changes that allow users to modify mutable options on the fly.
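Conceptually, live configuration changes mean that mutable options can be updated atomically while the engine is serving traffic, in the spirit of the RocksDB-compatible `DB::SetOptions()` call. The minimal sketch below illustrates the idea with a mutex-guarded options holder; the class and option names are illustrative assumptions, not Speedb internals.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Sketch: a holder for mutable options, updated on the fly from string
// key/value pairs while readers keep seeing a consistent snapshot.
class MutableOptions {
 public:
  // Apply a batch of option changes atomically; returns false on an
  // unknown (or immutable) option name.
  bool SetOptions(const std::map<std::string, std::string>& opts) {
    std::lock_guard<std::mutex> lock(mu_);
    for (const auto& [key, value] : opts) {
      if (key == "write_buffer_size") {
        write_buffer_size_ = std::stoull(value);
      } else if (key == "max_write_buffer_number") {
        max_write_buffer_number_ = std::stoi(value);
      } else {
        return false;  // Not a recognized mutable option.
      }
    }
    return true;
  }

  uint64_t write_buffer_size() {
    std::lock_guard<std::mutex> lock(mu_);
    return write_buffer_size_;
  }

 private:
  std::mutex mu_;
  uint64_t write_buffer_size_ = 64 << 20;  // 64 MiB default
  int max_write_buffer_number_ = 2;
};
```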

Performance Stability

We introduced a dramatic improvement in performance stability. You can take a deep dive into this blog post to find out how we stabilized performance and eliminated the hiccups.

Streaming applications

2023 marked a pivotal moment as Speedb forged a groundbreaking partnership with Confluent, entering a new era for streaming applications. This strategic alliance signifies a shift in the industry, combining the strengths of Speedb with Confluent's prowess in crafting tailored solutions for Flink and Kafka Streams users. 


Pinning policy

Speedb added a ‘safety belt’ to the pinning policy, so you can enjoy the tremendous benefits of pinning index and filter blocks to memory without risking application failures from out-of-memory events.

The new static pinning policy is a great milestone towards the memory manager project planned for 2024. 
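The ‘safety belt’ idea can be sketched as a byte budget that pinning requests must fit within: blocks are pinned only while the total stays under the cap, so pinning can never exhaust memory. The policy and names below are illustrative assumptions, not Speedb's implementation.

```cpp
#include <cassert>
#include <cstddef>

// Sketch: index/filter blocks are pinned in memory only while total pinned
// bytes stay under a configured budget -- the "safety belt" against OOM.
class PinningPolicy {
 public:
  explicit PinningPolicy(size_t budget_bytes) : budget_(budget_bytes) {}

  // Called when the block cache considers pinning a block.
  bool MayPin(size_t block_bytes) {
    if (pinned_ + block_bytes > budget_) return false;  // over budget: refuse
    pinned_ += block_bytes;
    return true;
  }

  void Unpin(size_t block_bytes) { pinned_ -= block_bytes; }

  size_t pinned_bytes() const { return pinned_; }

 private:
  size_t budget_;
  size_t pinned_ = 0;
};
```

A block that would push usage past the budget simply stays unpinned and is served through the regular cache path instead of failing the application.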


Speedb Cloud

We launched the early adopters program for Speedb Cloud, which lets you scale dynamically in seconds and leverage cloud benefits while using Speedb enterprise technology. With Speedb Cloud you can easily create multiple readers of the same dataset on demand.


Community

Speedb runs the most vibrant community of RocksDB and Speedb users in the world.

The Speedb Discord server buzzes with questions and collaborations from more than 500 developers across different companies and industries.

Don't be left behind! Join the community today and enjoy this great knowledge base and talented people.


New memtable

We introduced a new memtable type that combines two data structures to overcome the performance limitations of parallel writes to the SkipList. The new memtable delivers tremendous performance improvements in read, write, and seek operations.
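One way to picture a two-structure memtable is a hash table serving point reads and writes without sorted-structure contention, paired with an ordered map serving Seek() and range scans. The sketch below illustrates that general shape only; it is an assumption for exposition, not Speedb's actual design.

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <unordered_map>

// Sketch: a memtable built from two data structures -- a hash table for
// O(1) point operations, plus an ordered map for Seek()/range iteration.
class TwoStructureMemtable {
 public:
  void Put(const std::string& key, const std::string& value) {
    hash_[key] = value;    // fast point-write path
    sorted_[key] = value;  // ordered path for iteration
  }

  std::optional<std::string> Get(const std::string& key) const {
    auto it = hash_.find(key);
    if (it == hash_.end()) return std::nullopt;
    return it->second;
  }

  // Returns the first key >= target, as a Seek() would.
  std::optional<std::string> Seek(const std::string& target) const {
    auto it = sorted_.lower_bound(target);
    if (it == sorted_.end()) return std::nullopt;
    return it->first;
  }

 private:
  std::unordered_map<std::string, std::string> hash_;
  std::map<std::string, std::string> sorted_;
};
```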

Dynamic Load Distribution 

Compaction operations combined with heavy read/write workloads caused "rush hour" congestion and unstable performance.

To address this, Speedb implemented dynamic adjustments to static controls, such as rate limiter and background I/O size limits. This approach distributes the load more evenly, mitigating the rush-hour problem and resulting in response times approaching theoretical expectations. The outcomes include a 50% reduction in average response time in problematic scenarios and a nearly 10-fold decrease in the P95 response time.
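The adjustment loop can be sketched as a feedback controller: a background rate limit is lowered quickly when user-facing latency degrades and probed upward when latency is healthy, spreading compaction I/O instead of letting it burst. The thresholds, step sizes, and names below are illustrative assumptions.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: dynamically tuning a static control (the background-I/O rate
// limit) based on observed user latency, to spread load evenly.
class DynamicRateLimiter {
 public:
  DynamicRateLimiter(uint64_t min_bps, uint64_t max_bps, uint64_t start_bps)
      : min_(min_bps), max_(max_bps), rate_(start_bps) {}

  // Called periodically with the observed P95 user read latency.
  void Tune(double p95_latency_ms, double target_latency_ms) {
    if (p95_latency_ms > target_latency_ms) {
      // Latency too high: halve background I/O, bounded below.
      rate_ = (rate_ / 2 > min_) ? rate_ / 2 : min_;
    } else {
      // Latency healthy: probe upward by 10%, bounded above.
      rate_ = (rate_ + rate_ / 10 < max_) ? rate_ + rate_ / 10 : max_;
    }
  }

  uint64_t rate_bytes_per_sec() const { return rate_; }

 private:
  uint64_t min_, max_, rate_;
};
```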

A look forward to 2024

Our plans for 2024 are very exciting!

Memory manager

The memory manager project aims to solve two main problems that exist today with RocksDB: the complexity of memory resource allocation, and the out-of-memory risk of a misconfiguration that leads to application failures.
It will let users run the storage engine without configuring dozens of parameters and without specialized tuning skills. By simplifying the usage of Speedb, which is an embedded library, users (developers) are free to focus on their application's challenges instead of spending time and effort configuring the storage engine. Dynamic memory allocation also allows ad-hoc performance improvements as the workload shifts.
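The core idea can be sketched as deriving component sizes from a single user-supplied budget instead of hand-tuning each one. The 50/40/10 split below is an assumed example for illustration only, not Speedb's planned policy.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: the user supplies one total memory budget; the engine derives
// per-component sizes from it. The split ratios here are assumptions.
struct MemoryBudget {
  uint64_t block_cache_bytes;
  uint64_t write_buffers_bytes;
  uint64_t index_filter_bytes;
};

MemoryBudget DeriveBudget(uint64_t total_bytes) {
  MemoryBudget b;
  b.block_cache_bytes = total_bytes / 2;        // 50% to block cache
  b.write_buffers_bytes = total_bytes * 2 / 5;  // 40% to memtables
  b.index_filter_bytes =                        // remainder to index/filter
      total_bytes - b.block_cache_bytes - b.write_buffers_bytes;
  return b;
}
```

Because every component is derived from one number, the engine can also re-derive the split at runtime as the workload shifts, which is what enables the ad-hoc improvements mentioned above.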

Secondary index

Our upcoming plans include enhancing search efficiency by introducing a secondary index. By maintaining this index during object writes, we aim to streamline searches for objects with specific attributes. The secondary index adds a searchable criterion, significantly reducing the number of iterations needed and delivering an optimized, faster search experience. Stay tuned for these exciting developments!
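The mechanism described above, updating an attribute-to-keys index on the write path so a later attribute search avoids a full scan, can be sketched as follows. This is an illustration of the general technique; the planned Speedb feature may differ in shape and API.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Sketch: a secondary index maintained during object writes. Searching by
// attribute consults the index instead of iterating over every object.
class IndexedStore {
 public:
  void Put(const std::string& key, const std::string& value,
           const std::string& attribute) {
    primary_[key] = value;
    secondary_[attribute].insert(key);  // index update on the write path
  }

  // Attribute search via the secondary index: touches only matching keys.
  std::set<std::string> KeysWithAttribute(const std::string& attribute) const {
    auto it = secondary_.find(attribute);
    return it == secondary_.end() ? std::set<std::string>{} : it->second;
  }

 private:
  std::map<std::string, std::string> primary_;                 // key -> value
  std::map<std::string, std::set<std::string>> secondary_;     // attr -> keys
};
```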


Iterators

Iterating over a range of data is a basic and fundamental function of a storage engine, but it is also very expensive from a performance perspective. Many users run with suboptimal settings because the iterator options are hard to understand, and suffer performance problems as a result.

One of our planned projects for 2024 is to rewrite the iterators, simplifying their usage and improving their performance.


Distributed key-value store

A distributed key-value store is crucial for scalable and efficient data management in modern computing environments. By distributing data across multiple nodes or servers, it enables seamless scalability, fault tolerance, and improved performance.

In 2024, Speedb plans to make a big step towards a fully distributed solution.

Join our GitHub repo to stay up to date on these innovations and many more!
