There are _so many_ bugs in this code.
One example among many:
https://github.com/DiceDB/dice/blob/0e241a9ca253f17b4d364cdf... defines func ExpandID, which reads from cycleMap without locking the package-global mutex, and func NextID, which writes to cycleMap under a lock of that mutex. So writes are synchronized with each other, but not with reads, and concurrent calls to ExpandID and NextID race.
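To make the failure mode concrete, here's a minimal sketch of the bug's shape as described above; this is not DiceDB's actual code, and the names are borrowed from the comment purely for illustration:

    // Sketch of the described pattern: writes locked, reads not.
    package ids

    import "sync"

    var (
        mu       sync.Mutex
        lastID   uint64
        cycleMap = map[uint64]string{}
    )

    // NextID writes to cycleMap under the package-global mutex, so
    // writes are synchronized with each other.
    func NextID(label string) uint64 {
        mu.Lock()
        defer mu.Unlock()
        lastID++
        cycleMap[lastID] = label
        return lastID
    }

    // ExpandID reads cycleMap without taking the mutex, so this read is
    // unsynchronized with the write above: a data race under the Go
    // memory model, and `go run -race` will flag it. The minimal fix is
    // to lock mu here as well.
    func ExpandID(id uint64) string {
        return cycleMap[id]
    }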
This is all fine as a hobby project or whatever, but very far from any kind of production-capable system.
https://github.com/DiceDB/dice/pull/1588
This PR attempted to fix the memory model violation I mentioned in the parent comment, but also added an extra change that swapped the sync.Mutex for a sync.RWMutex. The PR description claimed two benefits: "Eliminates the data race, ensuring thread safety" -- correct, at least to some level; and "Improves performance by allowing concurrent ExpandID calls, which is likely a common operation" -- which is totally unsubstantiated and very likely false, as RWMutex is only faster than a regular Mutex under very narrowly-defined load patterns.
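Continuing the sketch from the parent comment, the PR's fix presumably looks something like this (assumed from the description; I haven't reproduced the actual diff):

    // Assumed shape of the PR's change: swap the Mutex for an RWMutex
    // and take the read lock in ExpandID.
    var rw sync.RWMutex

    func ExpandID(id uint64) string {
        rw.RLock()
        defer rw.RUnlock()
        return cycleMap[id]
    }

That does eliminate the race. The performance claim is the shaky part: RLock/RUnlock still do an atomic read-modify-write on shared lock state, so concurrent readers contend on the same cache line, and RWMutex generally only wins when read critical sections are long and heavily parallel. For a single map lookup, a plain Mutex is typically as fast or faster.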
In any case, the PR included no test or benchmark to validate either of these claims, so not a great start by the author. But then a maintainer chimed in with a comment expressing concerns about edge-condition performance details, without any data or evidence, and apparently didn't care about (or know about?) the much more important fix the PR made re: data races.
https://github.com/DiceDB/dice/pull/1588#issuecomment-274521...
> I tried changing this, but I did not see any benefit in benchmark numbers.
No apparent understanding of the bugs in this code, nor how changes may or may not fix those bugs, nor really how performance is defined or can be meaningfully evaluated.
Again, hobby project or whatever, all good. But the authors and maintainers of this project are clearly, demonstrably, in over their heads on this one.
Haven't looked at the code, but enforcing mutual exclusion between writers but not readers can make sense for a single-writer lock-free algorithm.
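For the curious, one way such a scheme can look in Go (a sketch of the general pattern, not what the DiceDB code does): readers load an immutable snapshot atomically, and the mutex only serializes writers, who publish a fresh copy.

    package ids

    import (
        "sync"
        "sync/atomic"
    )

    var (
        writeMu sync.Mutex                        // serializes writers only
        counter uint64                            // only touched under writeMu
        snap    atomic.Pointer[map[uint64]string] // immutable snapshot
    )

    func init() {
        empty := map[uint64]string{}
        snap.Store(&empty)
    }

    // Readers are lock-free: they observe either the old snapshot or the
    // new one, never a map mid-mutation.
    func ExpandID(id uint64) string {
        return (*snap.Load())[id]
    }

    // The single writer copies, mutates, and publishes atomically. Each
    // write is O(len(map)), so this only pays off for read-heavy loads.
    func NextID(label string) uint64 {
        writeMu.Lock()
        defer writeMu.Unlock()
        counter++
        old := *snap.Load()
        next := make(map[uint64]string, len(old)+1)
        for k, v := range old {
            next[k] = v
        }
        next[counter] = label
        snap.Store(&next)
        return counter
    }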
Looking at the DiceDB code base, I have a few questions regarding its design. I'm asking to understand the project's goals and design rationale; anyone, feel free to help me understand this.
I could be wrong, but the primary in-memory storage appears to be a standard Go map with locking. Is this a temporary choice for iterative development, and is there a longer-term plan to adopt a more optimized or custom data structure?
I find DiceDB's reactivity mechanism very intriguing, particularly the "re-execution" of the entire watch command (i.e. re-running GET.WATCH mykey on key modification).
From what I understand, the Eval func executes client-side commands; this seems to lay the foundation for more complex watch commands that can be evaluated before sending notifications to clients.
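Roughly, here's what I picture happening on the server; this is my assumption from reading the code, with hypothetical helper names, not DiceDB's actual implementation:

    // On every write to a key, re-run each subscriber's whole command and
    // push the fresh resultset, instead of just saying "this key changed".
    package watchdemo

    import "net"

    type watcher struct {
        conn net.Conn
        cmd  []string // e.g. ["GET.WATCH", "mykey"]
    }

    var watchers = map[string][]watcher{} // key -> subscribers

    func onKeyModified(key string) {
        for _, w := range watchers[key] {
            result := eval(w.cmd) // hypothetical: full command re-execution
            push(w.conn, result)  // hypothetical: write resultset to client
        }
    }

    // Stubs standing in for the real engine; purely illustrative.
    func eval(cmd []string) []byte  { return nil }
    func push(c net.Conn, b []byte) {}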
But I have the following questions.
What is the primary motivation behind re-executing the entire command, as opposed to simply notifying clients of a key change (as in Redis Pub/Sub or streams)? Is the intent to simplify client-side logic by handling complex key dependencies on the server?
Given that re-execution seems computationally expensive, especially with multiple watchers or more complex (hypothetical) watch commands, how are potential performance bottlenecks addressed?
How does this "re-execution" approach compare in terms of scalability and consistency to more established methods like server-side logic (e.g., Lua scripts in Redis) or change data capture (CDC)?
Are there plans to support more complex watch commands beyond GET.WATCH (e.g. JSON.GET.WATCH), and how would re-execution scale in those cases?
I'm curious about the trade-offs considered in choosing this design and how it aligns with the project's overall goals. Any insights into these design decisions would help me understand its use-cases.
Thanks
I was hoping for a response, but no one bothered. I had noted the following when I made that comment and will just wrap up from my end so this could be used by others for reference later.
I'm skeptical that the re-execution approach can scale for complex queries; the latency and throughput improvements would be offset by the computational cost and bottlenecks introduced by its reactivity mechanism (query subscription). This might not work at scale and may only serve niche use cases.
There are various ways throughput and latency for KV stores can be improved, so the bar is really high here.
The messaging around Dice seems unclear and confusing when it comes to describing its purpose and use cases over alternatives, or how it achieves them; that could just be how it's marketed. But it seems to be a collection of ideas and a WIP project.
I think reducing data-fetching complexity and complex key dependencies for end clients could be appealing, and it would be great to have that at the KV-store level, but there is no reason this type of reactivity can't be implemented on top of various clients for existing KV stores (like Redis). Basic WATCH with transactions is even offered out of the box in them.
Deno KV seems nice, but it's vendor-locked. There are also many others like Dragonfly, Valkey, etc.; Redis could still work, and even something over SQLite can work: Deno has a self-hosted KV on top of SQLite - https://github.com/denoland/denokv
Also, DiceDB's creator gave this talk:
https://hasgeek.com/rootconf/2024/sub/how-we-made-dicedb-a-t...
From that and the thread so far, it seems they want to make a kind of super-cache by building a realtime multi-threaded KV store, improving latency and reducing read load via its reactivity mechanism, and thereby solving the problem of cache invalidation.
Not sure how this will be achieved, but there is no harm in trying. From what has been said and shared, the rationale behind this design and its trade-offs is not clear; the code can be fixed/improved, but providing clarity on this is essential for adoption.
Is there a single sentence anywhere that describes what it actually is?
I've seen this more and more with software landing pages: they are somehow so deep into developing/marketing that they totally forget to say what the thing actually is or does. That's why you show it to family and friends first, to get some fresh eyes before publishing the site.
Looks like a Redis clone. The benchmarks compare it to Redis.
Description from GitHub:
> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads.
Arpit here.
DiceDB is an in-memory database that is also reactive. So, instead of polling the database for changes, the database pushes the resultset if you subscribe to it.
We have a similar set of commands as Redis, but are not Redis-compliant.
No. I had the exact same problem.
Feels arrogant. "Of course you already know what this is, how could you not?"
A Redis-inspired server in Go
Even clicking through to the Github, after reading the "What is DiceDB?", I'm still not very clear. It feels more like marketing than information.
"What is DiceDB? DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads."
The docs do, the site is useless.
> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware.
A Redis-like database with a Redis-like interface. No info about drop-in compatibility, I assume no.
Seems like a key-value store, with the ability to watch/subscribe to monitor changes to values in real time.
Drop in replacement of Redis.
Using an instrument of chance to name a data store technology is pretty amusing to me.
No chance if we live in a deterministic universe.
This is essentially what all in-memory data stores have always been
Kinda refreshing to see someone own it and run with it
DiceDB sounds like the name of a joke database that returns random results.
No it doesn't.
From the benchmarks on 4 vCPUs and num_clients=4, the numbers don't look much different.
Reactive looks promising, but doesn't seem very useful in the real world for a cache. For example, if a client subscribes to something and the machine goes down, what happens to reactivity?
Something I still fail to understand is where you can actually spend 20 ms answering a GET request in a RAM key-value store (unless you implement it in Java).
I never gained much experience with existing open-source implementations, but when I was building proprietary solutions at my previous workplace, in-memory response times were measured in tens to hundreds of microseconds. The lower bound on latency is mostly defined by syscalls, so using io_uring should in theory give even better timings, though I never got to try it in production.
If you read from NVMe AND also do erasure recovery across 6 nodes (lrc-12-2-2), then yes, you get into tens of milliseconds. But seeing these numbers for a single-node RAM DB just doesn't make sense, and I'm surprised everyone treats them as normal.
Does anyone have experience with low-latency, high-throughput open-source key-value stores? Any specific implementations to recommend?
> Something I still fail to understand is where you can actually spend 20ms
Aren’t these numbers .2 ms, ie 200 microseconds?
I had the same reaction as you. And that's for 4 simultaneous clients, too; for a single client you get 3159 ops/s (from https://dicedb.io/benchmarks/). I'm not too familiar with in-memory databases in general, but I would have expected figures in the millions on modern hardware. Makes me feel there's some hidden bottleneck somewhere and the benchmarks are not purely measuring the performance of the software.
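Quick arithmetic on that figure, assuming the benchmark is closed-loop: one client at 3159 ops/s works out to 1/3159 s ≈ 0.32 ms per GET end to end, which is consistent with the ~0.2 ms latencies quoted elsewhere in the thread. At that scale the budget is dominated by the network and syscall round-trip, not by the hash-table lookup itself.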
They also sounded fishy to me. I'd expect closer to 10x as much throughput with Redis: https://redis.io/docs/latest/operate/oss_and_stack/managemen...
Looks like your units are in ms, so 0.20 ms.
In-memory caches (lacking persistence) shouldn't be called a database. It's not totally incorrect, but it's an abuse of terminology. Why is a Python dictionary not an in-memory key-value database?
Any reason to use this over Valkey, which is now faster than Redis and community driven? Genuinely interested.
DragonflyDB is also in that race, isn't it?
I didn't see it in the docs, but I'd want to know the delivery semantics of the pubsub before using this in production. I assume best effort / at most once? Any retries? In what scenarios will the messages be delivered or fail to be delivered?
This seems orders of magnitude slower than Nubmq which was posted yesterday: https://news.ycombinator.com/item?id=43371097
Different tool. The metrics I am optimizing for are different, hence I wrote a separate utility. It may not be the most optimized one, but I am using it to measure all things DiceDB and will be using it to optimize DiceDB further.
ref: https://github.com/DiceDB/membench
What are some example use cases where having the ability for the database to push updates to an application would be helpful (vs. the traditional polling approach)?
One example is when you want to display live data on a website. Could be a dashboard, a chat, or really the whole site. Polling is both slower and more resource hungry.
If it is built into your language/framework, you can completely ignore the problem of updating the client, as it happens automatically.
Hope that makes sense.
15,655 ops per second on a Hetzner CCX23 machine with 4 vCPUs and 16 GB RAM is rather slow for an in-memory database, I hate to say it. You can't blame that on network latency: supermassivedb.com, for example, is also written in Go and achieves magnitudes more, actually 20x, and it's persisted. I must investigate the bottlenecks with Dice.
- proudly open source. cool!
- join discord. YAY :(
FYI: Here is the creator and maintainer's profile: https://github.com/arpitbbhayani
Is there a plan to commercialise this product? (Offer commercial support, features, etc.) I could not find anything obvious from the home page.
Is Arpit the system design course guy?
Yes. I do run a sys design course on weekends.
I feel like this needs a ‘Why DiceDB instead of Redis or Valtio’ section prominently on the homepage.
Did you mean Valkey, or has the js community now managed to shoehorn an entire high-availability database server into a javascript object proxy?
I love the "Follow on twitter" link with the old logo and everything, they probably used a template that hasn't been updated recently but I'm choosing to believe it's actually a subtle sign of protest or resistance.
Just use Bluesky. It’s the better middle finger.
I prefer that over X icon.
Does this suffer from the same problems as Redis when trying to scale horizontally?
I guess yes.
> For Modern Hardware fully utilizes underlying core to get higgher throughput and better hardware utilization.
Would be great to disclose the details of this one. I'm interested in how DiceDB achieves higher throughput.
> fully utilizes underlying core to get higgher throughput and better hardware utilization
FYI this is a misspelling of "higher"
Who is this for? Can you help me understand why and when I'd want to use this in place of Redis/Dragonfly?
I think Postgres can do everything this does and better if you use LISTEN/NOTIFY.
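For anyone curious, a minimal sketch of that approach using github.com/lib/pq's listener (the connection string and channel name here are made up):

    package main

    import (
        "fmt"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        conninfo := "postgres://user:pass@localhost/db?sslmode=disable"
        listener := pq.NewListener(conninfo, time.Second, time.Minute, nil)
        if err := listener.Listen("key_changed"); err != nil {
            panic(err)
        }
        // Elsewhere, a trigger or the app runs: NOTIFY key_changed, 'mykey';
        for n := range listener.Notify {
            if n == nil {
                continue // a nil is delivered after a reconnect
            }
            // As with keyspace notifications, you get the payload you sent,
            // not a fresh resultset; the client still has to re-query.
            fmt.Println("changed:", n.Extra)
        }
    }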
I like it!
Any way to persist data in case of reboots?
That's the only thing missing here.
Is Go the only SDK?
Snapshot functionality is WIP, which can be utilised to persist and replay data between reboots. For now, the Golang SDK is the only one; more SDKs are to be added soon.
Why would I use this over keyspace notifications in redis?
Based on this thread, I'm not sure you would want to use this over keyspace notifications, but I will also say that there comes a point in the maturity of a system when keyspace notifications become a complicated, unreliable, resource-heavy nightmare. They work fine if your needs and scale are limited, but they're definitely not what you want when handling lots of frequent changes across craploads of keys, with complicated logic for who needs them and how they get routed to them, and where it matters whether the notification is successfully received.
But certainly you could build something to handle these and most other needs in this realm with mostly just redis, using streams for what needs to be more robust, in tandem with pub/sub, keyspace notifs, etc. in the areas they are suited to.
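For comparison, here is the minimal go-redis version of the keyspace-notification setup being discussed (a sketch; db 0 and the "all events" class are assumptions):

    package main

    import (
        "context"
        "fmt"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        // K = keyspace channel, E = keyevent channel, A = all event classes.
        rdb.ConfigSet(ctx, "notify-keyspace-events", "KEA")

        sub := rdb.PSubscribe(ctx, "__keyevent@0__:set")
        defer sub.Close()

        // Plain pub/sub is fire-and-forget: if this subscriber is
        // disconnected when the event fires, the notification is lost --
        // hence the "use streams for what needs to be more robust" advice.
        for msg := range sub.Channel() {
            fmt.Println("key was SET:", msg.Payload) // payload = key name
        }
    }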
Database as a transport?
DiceDB is an in-memory, multi-threaded key-value DBMS that supports the Redis protocol.
It’s written in Go.
nope. We do not support Redis protocol :)
[flagged]
I am not sure if this is satire or not...
I think the performance benchmark you have done for DiceDB is fake.
These are the real numbers - https://dzone.com/articles/performance-and-scalability-analy...
They do not match your benchmarks.
The benchmark tool is different. I mentioned the same on my benchmark page.
We had to write a small benchmark utility (membench) ourselves because the long-term metrics that we are optimizing need to be evaluated in a different way.
Also, the scripts, utilities, and infra configurations are all listed there. Feel free to run it.