Redis 2.6 was expected to go live in the first weeks of 2012, but today is the 24th of February and there is still no 2.6-rc1 tag around. What happened to it, you may ask?
Well, for once, a delay is not a signal that something is wrong. What happened is simply that we put a lot more than expected into this release, so without further delay here is a list of the new features:
- Server side Lua scripting, probably the biggest and most exciting news, with built-in support for fast JSON and MessagePack encoding and decoding (a small example follows this list).
- Millisecond resolution expires, plus new commands with millisecond precision. This means that if you set an expire of 1 second, the key will stop existing after exactly 1000 milliseconds, with an error of +/- 1 millisecond. At the same time you have new commands like PEXPIRE, PTTL, PSETEX, that let you specify the timeout of a key in milliseconds. Want to throttle an API so that no more than two requests per 50 milliseconds are allowed? Now you can do it easily (see the sketch after this list).
- Hard-coded limit on the max number of clients removed. Now your Redis instance can handle all the clients your OS is able to handle, without recompiling or other hard-coded limits.
- AOF low level semantics are generally more sane, especially when the AOF is used in slaves. This is an uncommon use case and the misbehavior was subtle, but now the implementation and behavior are definitely more sane.
- Client max output buffer soft and hard limits. You can specify different limits for different classes of clients (normal, pubsub, slave).
- AOF is now able to rewrite aggregate data types using variadic commands, often producing an AOF that is faster to save and load, and smaller in size. So what in 2.4 used to be N LPUSH calls to reconstruct a list of N items is now N/64 calls, because variadic LPUSH with (up to) 64 arguments is used.
- Every redis.conf directive is now accepted as a command line option for the redis-server binary, with the same name and number of arguments. You can write ./redis-server --slaveof 127.0.0.1 6379 --port 6380, and in general pass any possible option, exactly as it is specified in redis.conf.
- Hash table seed randomization for protection against collision attacks.
- Performance improved when writing large objects to Redis.
- Significant parts of the core refactored or rewritten. New internal APIs and core changes allowed us to develop Redis Cluster on top of the new code; however, for 2.6 all the cluster code was removed, and it will be released with Redis 3.0 when it is more complete and stable.
- Redis ASCII art logo added at startup. This is where our major efforts went in the latest months.
- redis-benchmark improvements: ability to run selected tests, CSV output, faster, better help, and support for pipelining giving awesome results. More about this later in this blog post.
- redis-cli improvements: --eval for comfortable development of Lua scripts.
- SHUTDOWN now supports two optional arguments: SAVE and NOSAVE. They respectively force saving an RDB when no RDB persistence is configured, or skip saving when RDB persistence is configured.
- INFO output split into sections; the command is now able to show just specific sections.
- New statistics about how many times each command was called, and how much execution time it used (INFO commandstats).
- More predictable SORT behavior in edge cases.
- INCRBYFLOAT and HINCRBYFLOAT commands, for atomic fast float counters.
- Virtual Memory was removed from the code (it was already deprecated in 2.4).
- Much better bug reports on crash, with stack trace, register dump, state of the client causing the crash, command vector and so forth. This was in part backported to 2.4 releases.
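To give a feel of the first two items of this list, here is a minimal sketch using redis-cli (the key name throttle:myapi and the two-calls-per-50-milliseconds limit are just made up for the example). The first one-liner shows Lua scripting with the built-in cjson library, the second combines INCR and PEXPIRE in a single script to implement the throttling described above:

$ redis-cli EVAL "return cjson.encode({1,2,'three'})" 0
"[1,2,\"three\"]"

$ redis-cli EVAL "local n = redis.call('INCR', KEYS[1]) if n == 1 then redis.call('PEXPIRE', KEYS[1], 50) end return n" 1 throttle:myapi
(integer) 1

If the returned counter is greater than 2 the caller should just refuse to serve the request: the key simply evaporates 50 milliseconds after the first call of the current window.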
There are two features still to merge, but already implemented in separate branches:
- Small hashes now implemented using ziplists instead of zipmaps, for better performance when there are more than 253 fields but fewer than the number of fields needed to convert the encoding into a full hash table (a rough config sketch follows this list).
- More coherent behavior of list blocking commands in the presence of non-trivial conditions and blocked clients.
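As a side note about the small hashes change, the related redis.conf tuning should end up looking like the sketch below, assuming the final directive names mirror the old zipmap ones (names and default thresholds may still change before the merge):

hash-max-ziplist-entries 128
hash-max-ziplist-value 64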
And new internals...
Redis 2.6 offers the above new features, but another interesting fact is that it is also a spinoff of the unstable branch, the one that is going to become Redis 3.0 sooner or later. By contrast, 2.4 was a spinoff of the Redis 2.2 code base. This means that we are now working with a better code base that makes implementing certain features simpler.
It will also make it much easier for us in the future to backport stuff from the unstable branch to 2.6. This means that we can either backport features from time to time into 2.6 releases, or create a 2.8 branch merging all the interesting features that are already stable, to create an intermediate release a few months from now.
Redis benchmarks with pipelining support, impressive numbers, and stupid benchmarks
After looking at yet another set of benchmarks that was actually measuring everything but actual DB performance, I decided to go ahead and implement pipelining in the redis-benchmark tool to show some good numbers.
redis-benchmark used to create 50 clients and perform something like: send request, wait for reply, send request, wait for reply, with all those 50 clients. However Redis supports
pipelining: if you have N queries to perform where you don't need the reply of the previous query to issue the next one, you can send all N queries to Redis at once, and then read all the replies. This dramatically improves performance because fewer syscalls are required, fewer context switches, fewer TCP packets, and so forth.
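If you want to see the idea at the protocol level, a quick and dirty way is to pipe a few commands into a local instance with netcat (a server on localhost and the default 6379 port is assumed here): the three PINGs travel together in a single write, and the three replies come back together.

$ (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379
+PONG
+PONG
+PONG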
Most real world Redis applications use pipelining. Often you need to do things like paginate a list of objects, so you do an LRANGE to get the IDs, and then a pipeline with all the GET or HGETALL calls, and so forth. Or you want to write an object to the database and update its position in a sorted set.
But redis-benchmark was still not able to test pipelining, so when we saw that
Redis can do 150k requests per second on entry level hardware we were actually saying:
... if you never use pipelining at all. But how does it perform if you do use it?
Let's check with pipelining, using my glorious MBA 11" running OSX:
$ redis-benchmark -P 64 -q -n 1000000
PING_INLINE: 540540.56 requests per second
PING_BULK: 636942.62 requests per second
SET: 301204.81 requests per second
GET: 430848.75 requests per second
INCR: 341530.06 requests per second
LPUSH: 305623.47 requests per second
LPOP: 296120.81 requests per second
SADD: 313774.72 requests per second
SPOP: 418060.22 requests per second
Wow, 430k GETs per second with a MacBook Air, and finally with this new benchmark not everything looks the same: PING is faster than GET, which is faster than SET, and so forth.
This also means: more room to optimize commands on our side.
If you test this on a Xeon you easily get 650k GETs per second, or other impressive numbers, even reducing the pipeline from 64 to 32 or 16.
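By the way, the redis-benchmark improvements listed above also let you restrict the run to the commands you care about while playing with the pipeline size. For instance, assuming the switch to select specific tests is -t (take the exact flag name as an illustration), something like the following is enough to compare GET and SET alone:

$ redis-benchmark -P 16 -q -n 1000000 -t get,set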
Now, to show how benchmarks can easily be turned into whatever you want, we have these numbers of Redis performing 500k operations per second, per core, but now on the HyperDex web site I read:
With 32 servers and sufficient clients to create a heavy workload, HyperDex is able to sustain 3.2 million ops/s..
Hey dudes, I can do 1/6th of the ops/sec you do with 32 servers using just 1 core of my Xeon desktop. What does this mean?
Nothing.
Long story short: don't show benchmarks unless you have a very good methodology explained on the web site, and your methodology makes sense; otherwise it is just marketing that does not provide any value to the user.
A better way to do benchmarks is to isolate a common real-world problem, and write a real world implementation of this problem using different databases, in the idiomatic way for every database, mixing both writes and reads in the same benchmark. Then test the different implementations with many simultaneous clients, with millions of objects.
Those tests, performed independently by smart programmers, are what is making Redis very popular among people with serious requests-per-second needs, and I hope that 2.6, with built-in server side scripting, will allow them to get even more out of Redis.