Reply to an open-minded reader

Monday, 08 October 12
Today I was at the hospital for Greta's usual heartbeat trace, the one used to monitor the baby's status when the birth is near (all ok, btw). My sole escape from the boredom of the hospital and its deep disorganisation was my iPhone, which I was using to read the Twitter timeline for the "Redis" search, when I stumbled upon an article about Redis used as a data store, written by @soveran, an open-minded reader.

Maybe it's because Michael has followed Redis development since its start and has worked a lot with it (and contributed to the site as well!), as I do, but he used almost the same words I would use to write his blog post, or at least that was my feeling. What he says is that you can use Redis as a data store (and we have an interesting story about persistence to recount, if safety is your concern), but only as long as you accept two major tradeoffs.

One is the fact that you are limited by memory size; moreover, for extremely write-heavy applications Redis can use up to 2x the memory of your dataset while persisting, because the background save forks the process and copy-on-write may end up duplicating every memory page. So in certain cases you are limited to 50% of the memory you actually have.
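
To put numbers on it, here is a back-of-the-envelope sizing sketch (the 2x factor is the copy-on-write worst case for a very write-heavy workload; in practice the overhead is often much lower):

```python
# Sizing sketch: how much data can a box hold if a background save may,
# in the worst case, duplicate every page via copy-on-write (~2x memory)?
def max_dataset_bytes(ram_bytes, cow_factor=2.0):
    """Memory you can safely dedicate to data on a write-heavy box."""
    return int(ram_bytes / cow_factor)

ram = 16 * 1024**3                # a 16 GiB commodity server
print(max_dataset_bytes(ram))     # 8 GiB budget in the worst case
```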

The second is even more important, and it is about the data model. Redis is not the kind of system where you can insert data and then argue about how to fetch that data in creative ways. Not at all: the whole idea of its data model, and part of the reason it is so fast at retrieving your data, is that you need to think in terms of organising your data for fetching. You need to design with the query patterns in mind. In short, most of the time your data inside Redis is stored in a way that is natural to query for your use case; that's why it is so fast, apart from being in memory: there is no query analysis, no optimisation, no data reordering. You are just operating on data structures via the primitive capabilities offered by those data structures. End of story.
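
To make the idea concrete, here is a tiny pure-Python sketch of the classic fan-out-on-write timeline pattern used with Redis lists (in Redis you would use LPUSH, LTRIM and LRANGE; the names and the cap here are just illustrative):

```python
from collections import defaultdict, deque

TIMELINE_LEN = 1000  # cap per timeline, as LTRIM would enforce in Redis

# One capped list per user, filled at *write* time, so that a read is a
# plain range fetch: no joins, no query planner, no reordering.
timelines = defaultdict(lambda: deque(maxlen=TIMELINE_LEN))

def post(author, followers, item):
    # Fan out on write: push the item onto every follower's list.
    # In Redis: LPUSH timeline:<uid> item; LTRIM timeline:<uid> 0 999.
    for uid in followers:
        timelines[uid].appendleft(item)

def read_timeline(uid, count=10):
    # In Redis: LRANGE timeline:<uid> 0 count-1.
    return list(timelines[uid])[:count]

post("alice", ["bob", "carol"], "hello")
post("alice", ["bob"], "again")
print(read_timeline("bob"))    # ['again', 'hello']
```

The write does more work so that the read, the hot path, is trivially cheap: exactly the "organise your data for fetching" tradeoff described above.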

However, I would add the following perspective on that.

If you make judicious use of your memory, and exploit the fact that Redis can sometimes do a lot with little (see for instance the Redis bit operations), and instead of having a panic attack about your data growing beyond the limits of the known universe you do the math and consider how much memory commodity servers have nowadays, you'll discover that there are tons of use cases where it's ok to live within the limit of your computer's memory. Often this means tens of millions of active users on a single server.
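
For instance, the bitmap trick: one bit per user id, as with the Redis SETBIT/GETBIT/BITCOUNT commands, so 100 million users fit in about 12 MB. The pure-Python sketch below mirrors the arithmetic (note that Redis addresses bits most-significant-first inside each byte, while this sketch uses least-significant-first; the memory math is identical):

```python
# Plain-Python equivalents of SETBIT / GETBIT / BITCOUNT on a bitmap
# keyed by user id: one bit per user.
def make_bitmap(n_users):
    return bytearray((n_users + 7) // 8)

def set_bit(bm, uid):          # SETBIT key uid 1
    bm[uid >> 3] |= 1 << (uid & 7)

def get_bit(bm, uid):          # GETBIT key uid
    return (bm[uid >> 3] >> (uid & 7)) & 1

def bit_count(bm):             # BITCOUNT key
    return sum(bin(b).count("1") for b in bm)

active_today = make_bitmap(100_000_000)
print(len(active_today) / 1024**2)   # ~11.9 MiB for 100M users

set_bit(active_today, 42)
set_bit(active_today, 99_999_999)
print(get_bit(active_today, 42), bit_count(active_today))  # 1 2
```

One such bitmap per day, intersected or OR-ed together, is how you count daily or monthly active users in a handful of megabytes.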

And another point about the data model: remember all those stories about DB denormalisation, and hordes of memcached or Redis farms to cache stuff, and things like that? The reality is that fancy queries are an awesome SQL capability (so incredible that it was hard for all of us to escape this warm and comfortable paradigm), but they don't hold up at scale. So anyway, if your data needs to be composed at query time in order to be served, you are not in good waters.

If you know what you are doing, you can tell whether Redis is for you, even as your sole, main store.

P.S. if you want to watch @soveran speak live, go to Redis Conf to meet him. There is also our core hacker Pieter Noordhuis, who will rewrite Redis in COBOL or whatever you may need if you ask politely, and a lot more great Redis hackers.
Posted at 11:42:52