Redis weekly update #6 - 2.2 and VM

Friday, 09 July 10
Hello again and welcome to the Redis weekly update #6!

The last few weeks have been very busy and interesting: Pieter Noordhuis and I visited VMware and attended the SF Redis Meetup, quite an exciting experience for both of us, as you can guess :)

Redis 2.0 reached RC2 in the meantime and is going to be shipped as stable in a few weeks.

Simultaneously, Redis 2.2 is almost feature complete, with the biggest thing currently in the works being the replication of DELs on expires to slaves and to the append only file, so that it will be possible to perform write operations against expiring keys.

But let's talk a bit about what already made it into 2.2.

Redis 2.2 is an order of magnitude less memory hungry for aggregates

Maybe you remember that in the implementation of the Hash data type I did a trick: instead of saving small hashes (hashes composed of a few tens of elements, where every element is not too big) as hash tables, I encoded them in a special way. I created a simple encoding suited for string to string maps: a single blob of data where every element is prefixed by its length. So if you have a hash that is "name => foo, surname => bar", it is stored in a single string like
4:name3:foo7:surname3:bar
This is just an example, the actual format is different (and binary). As you probably already know, Pieter Noordhuis implemented similar encodings for Lists and for Sets composed of integers in Redis master.
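
To make this more concrete, here is a toy Python sketch of such a length-prefixed string to string encoding. It is only meant to illustrate the idea: the real encoding used by Redis is binary and differs in the details, and the function names here are made up.

# Toy sketch of a length-prefixed string -> string map in the spirit of the
# example above. The real Redis encoding is binary and different; this only
# illustrates why a small map can live in one compact blob.

def toy_encode(mapping):
    """Serialize a dict of short strings into one length-prefixed blob."""
    parts = []
    for field, value in mapping.items():
        parts.append("%d:%s" % (len(field), field))
        parts.append("%d:%s" % (len(value), value))
    return "".join(parts)

def toy_decode(blob):
    """Parse the blob back into a dict by walking the length prefixes."""
    result, pos = {}, 0
    while pos < len(blob):
        pair = []
        for _ in range(2):              # read the field, then the value
            colon = blob.index(":", pos)
            length = int(blob[pos:colon])
            start = colon + 1
            pair.append(blob[start:start + length])
            pos = start + length
        result[pair[0]] = pair[1]
    return result

print(toy_encode({"name": "foo", "surname": "bar"}))  # 4:name3:foo7:surname3:bar
print(toy_decode("4:name3:foo7:surname3:bar"))        # {'name': 'foo', 'surname': 'bar'}

The point of an encoding like this is that the whole map is a single small allocation instead of a hash table with one allocation per entry, which is where the memory saving comes from.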

In the meantime I reworked the top-level key->value dictionary to save memory by using plain C strings instead of Redis Objects for keys. Another minor fix to sds also saved a lot of memory on 64 bit systems.

All these changes make Redis an order of magnitude less memory hungry for many kinds of datasets!

What kind of datasets?
  • Many keys containing lists with an average length of 10, 100, 300, 500 elements
  • Many keys containing sets of integers with less than 1000 elements or so
  • Many keys containing hashes representing objects with 10, 20, 50 fields
Some actual measurements of memory used:
  • 1 million keys with a 500-element list in each key: 2 GB of memory used
  • 1 million keys with a Hash in each key representing a user with name, surname and 10 more random fields: 300 MB of memory used.
With sets composed of numbers (even large 64 bit numbers) expect the same order of magnitude.
Note: all these benchmarks are about 64 bit builds; expect a 35% - 40% memory saving with 32 bit builds.
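
If you want to reproduce this kind of measurement yourself, a rough sketch along these lines should be enough. It assumes a Redis server running on localhost and the redis-py client; the exact numbers will of course depend on your Redis version, build and configured special-encoding limits.

# Rough sketch: fill Redis with 1 million small hashes (name, surname and 10
# extra fields each) and read back the memory usage reported by INFO.
# Assumes a local Redis server and the redis-py client (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379)
r.flushdb()

NUM_KEYS = 1000000
pipe = r.pipeline(transaction=False)
for i in range(NUM_KEYS):
    key = "user:%d" % i
    pipe.hset(key, "name", "name-%d" % i)
    pipe.hset(key, "surname", "surname-%d" % i)
    for f in range(10):
        pipe.hset(key, "field-%d" % f, "value-%d" % (i + f))
    if i % 1000 == 999:
        pipe.execute()          # flush the pipeline in batches
pipe.execute()

info = r.info()
print("keys:", r.dbsize())
print("used memory:", info.get("used_memory_human"))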

The meaning of these numbers

Now it's time to analyze this data. 1 million keys with 500-element lists are, in SQL terms, 500 million rows. You are storing this in 2 GB of RAM, and as you know even cheap servers today have 8 GB of RAM.

What about, for instance, very big lists? Over a given threshold these lists are not specially encoded and use much more space. But this is also a use case where VM creates trouble, as too many objects per key have to be serialized and deserialized.

Accessing the different types provided by Redis directly on disk with per-element granularity is not an option: Redis is atomic and fast because it took the other path. It is a different project from an on-disk database with complex data types, threaded, and with tons of synchronization needed to handle this kind of complex data type with atomic operations.

So basically with Redis the matter is: there is no way to have large aggregate data types, like long lists, without using the RAM needed to store an actual linked list in memory (the same concept also holds true for sets, sorted sets, and all the other types). And I think this is fine, Redis can't do everything: be fast, use little memory, and support huge data types, all at the same time.

It makes trade-offs instead: it's fast, sometimes it's memory cheap when it can pull some trick, and it still supports lists of hundreds of millions of elements, at the cost of more memory.

The case for many keys with small values

What happens when instead there are many keys with small values? This is not the best case for VM: there is basically no memory saving, as the keys need to stay in memory, and in place of the original object there is the "VM pointer" object that locates the serialized object inside the swap file.

When is VM still useful?

The most interesting case for VM is when there are big objects and there is a really strong bias in the dataset access. But for this to justify the use of VM the dataset must be very large and the bias very... biased, because otherwise, as you saw, even with millions of users it is still not a problem to store a lot of data in a single box with a decent amount of memory. Why should you support millions of users and be so cheap on RAM and hardware? It does not look like a real world scenario.

Another case is when you have many large "stupid" values (plain string values), like HTML pages stored at every key and so forth. But in this case Redis performance will start to be more and more similar to on-disk solutions, as it actually is a disk-backed store in this configuration. Still it is useful that you don't need an additional caching layer, as Redis will work both as memcached and MySQL in some way.
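
As a minimal illustration of this usage, a sketch like the following stores rendered HTML pages directly in Redis with an expire, so the same instance acts as both the cache and the store. It assumes redis-py and a local server, and render_page() is a hypothetical placeholder for the expensive page generation step.

# Minimal sketch of using Redis as the single store for cached HTML pages.
# Assumes a local Redis server and redis-py; render_page() is hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def render_page(page_id):
    # Hypothetical placeholder for the expensive page rendering step.
    return "<html><body>page %s</body></html>" % page_id

def get_page(page_id, ttl=3600):
    key = "html:%s" % page_id
    html = r.get(key)
    if html is None:
        html = render_page(page_id)
        r.set(key, html)       # store the rendered page...
        r.expire(key, ttl)     # ...and let it expire like a cache entry
    return html

print(get_page("home"))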

My point here is: if you plan to use 2.2, make sure you really need VM before using it. The dataset in memory is more fun ;) and will be definitely more viable once 2.2 is released as stable.

When Redis Cluster becomes available you'll also have a straightforward path to add nodes when you need more memory.

All these facts together mean that VM is being marginalized by the memory usage optimizations that 2.2 is implementing. I had the impression that it was important to communicate this to users.

Comments

Luca Guidi writes:
09 Jul 10, 17:20:14
I didn't get two points: the performance degradation in the HTML storing use case and the usefulness of VM for large values.

As I understood from previous posts and wiki entries, the need to build a custom VM came from the inadequacy of the OS implementation. Usually memory pages are 4096 bytes, and it can happen that a single "active" byte prevents the page from being swapped. So I thought this was the most useful case for Redis VM.

But probably I'm missing something.. ;)
Salman writes:
12 Jul 10, 10:59:45
Great work, I just wish cloud servers shipped with more RAM :) (on the cheap)