The key-value data model is an emerging paradigm that appears in many forms, from key-value stores to caching systems, where data is represented as a map between keys and values. Examples of real-world systems using this model are memcached, a distributed caching system, and Tokyo Cabinet, a key-value database.
This data model is at the same time old and new: as old as LISP's associative lists, and even older, but at the same time an emerging paradigm for scalable applications.
In order to exploit the full potential of key-value stores, more advanced operations and design patterns are needed on top of the basic Set-key, Get-key, Delete-key operations of dictionary data structures.
For instance, one additional and very useful operation is an atomic increment/decrement:
set x 10
incr x => 11
incr x => 12
The atomic increment operation is particularly useful since it allows obtaining a unique identifier for an object:
id = incr nextId => 100
set obj_<id> "My new object"
Multiple instances of a program can access the key-value store simultaneously and still be sure to obtain unique references (keys) under which to store objects; these new objects can then be referenced by key in other parts of the key-value data space, simply using the identifier.
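The ID-allocation pattern above can be sketched in Python. This is a toy in-memory store, not a real client library: the KVStore class and its method names are illustrative, and a lock stands in for the server-side atomicity guarantee a real store would provide.

```python
import threading

class KVStore:
    """Toy in-memory key-value store. A lock stands in for the
    server-side atomicity a real key-value store would provide."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def incr(self, key):
        # Atomic: no other client can interleave between read and write.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

store = KVStore()
# Two "clients" allocating object IDs concurrently still get unique keys.
id1 = store.incr("nextId")   # => 1
id2 = store.incr("nextId")   # => 2
store.set("obj_%d" % id1, "My new object")
store.set("obj_%d" % id2, "Another object")
```

Because incr is atomic, no two clients can ever observe the same identifier, which is exactly what makes the obj_&lt;id&gt; keys safe to hand out as references.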
Some key-value stores like Redis
support lists as values. List support allows modeling a large number of problems in a simpler way, especially when atomic operations to append or consume elements from the list are provided.
These two features, atomic increments and lists as values, can be combined in a very powerful way. For example, imagine having the following problem: the key-value store contains keys representing objects with unique references (identifiers); these objects could be news items on a social news site like Reddit:
id = incr NextId => 1
set news_url_<id> "http://foobar.org"
set news_title_<id> "My foobar story"
push mylist 1
id = incr NextId => 2
set news_url_<id> "http://antirez.com"
set news_title_<id> "The blog you are reading right now"
push mylist 2
id = incr NextId => 3
set news_url_<id> "http://someothersite.com"
set news_title_<id> "Some Other Site"
push mylist 3
Later I'll be able to ask for a specific range of the list (for example the latest 10 elements added) and populate a web page by fetching the news_url_<id> and news_title_<id> keys.
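The steps above can be sketched with a plain Python dict standing in for the store; incr, push, and the range query are hypothetical helpers mirroring the pseudo-commands, not a real store's API.

```python
# Toy store: a dict stands in for the key-value server.
store = {}

def incr(key):
    store[key] = store.get(key, 0) + 1
    return store[key]

def push(key, value):
    store.setdefault(key, []).append(value)

# Populate three news items exactly as in the pseudo-commands above.
for url, title in [
    ("http://foobar.org", "My foobar story"),
    ("http://antirez.com", "The blog you are reading right now"),
    ("http://someothersite.com", "Some Other Site"),
]:
    news_id = incr("NextId")
    store["news_url_%d" % news_id] = url
    store["news_title_%d" % news_id] = title
    push("mylist", news_id)

# Latest 2 submissions, newest first, resolved back to their titles.
latest = list(reversed(store["mylist"]))[:2]
titles = [store["news_title_%d" % i] for i in latest]
# titles == ["Some Other Site", "The blog you are reading right now"]
```

Reading the list tail gives reverse chronological order for free, since pushes happen in insertion order.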
While retrieving items in the same or reverse chronological order is very useful (for example, the 'new submissions' page of a Reddit-alike site needs such a feature), there are times when we want to sort these items. Generating the front page of a social news site is one example of an application demanding sorting.
This article is a proposal for a sorting strategy in a key-value data model.
Sorting and weight keys
In order to perform sorting we need another player in our data schema: weights. Our three news items with IDs 1, 2 and 3 can have different weights, generated according to each item's age and score (the score may result from user votes). Some component of our social news system will be in charge of updating these scores, performing set operations on score keys:
set score_1 124
set score_2 4461
set score_3 -50
We can now define a new SORT operation on our key-value data model that takes as input a key holding a list, and a pattern used to retrieve the score for every element of the list:
sort mylist by score_* => 3 1 2
Elements of the list are compared by the value of the corresponding score_<id> key. A real-world implementation could provide a way to specify whether the sorting is ascending or descending, and the range of values we want to retrieve, in order to implement pagination or to take only the top N items.
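One possible way such a SORT operation could work is sketched below in Python. The sort_by function and its desc/offset/count parameters are assumptions illustrating the proposal, not an existing API; '*' in the pattern is replaced by each list element to locate its score key.

```python
# Toy store pre-populated with the list and score keys from the example.
store = {
    "mylist": [1, 2, 3],
    "score_1": 124,
    "score_2": 4461,
    "score_3": -50,
}

def sort_by(list_key, pattern, desc=False, offset=0, count=None):
    """Sort list elements by the value of a per-element weight key.
    '*' in the pattern is replaced by each element to build its score key."""
    ordered = sorted(
        store[list_key],
        key=lambda e: store[pattern.replace("*", str(e))],
        reverse=desc,
    )
    end = None if count is None else offset + count
    return ordered[offset:end]

print(sort_by("mylist", "score_*"))             # [3, 1, 2]
print(sort_by("mylist", "score_*", desc=True))  # [2, 1, 3]
# Pagination: top 2 items by descending score.
print(sort_by("mylist", "score_*", desc=True, count=2))  # [2, 1]
```

The offset/count slicing shows how pagination and "top N" queries fall out of the same primitive.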
Sorting into a distributed environment: key tags
One of the greatest benefits of the key-value data model is that the dataset can be partitioned across N different servers just by hashing the key. This becomes a problem when sorting is required, since the scores needed to sort a given list may be distributed across different stores. The problem can be addressed by making the sorting server able to look up the score keys on the other servers: while this can be a workable solution, the added latency may slow down the sorting operation too much.
An alternative is to make sure that all the scores needed to sort a given key are stored on the same server as the key holding the list. To do so, instead of defining the hashing of the key as a simple operation that hashes the whole key, only a specific subset of the key is hashed when the key contains a special delimiter.
For example the key "foobar" is hashed as a full 6-byte key, but if the pattern [...] is found inside the key, only the part of the key between [ and ] is hashed. As long as the part of the string inside the delimiters is the same, multiple keys will hash to the same value and will be stored on the same server.
Hash(mylist[homenews]) == Hash(score_10_[homenews]) == Hash(score_20_[homenews])
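A minimal sketch of this key-tag hashing in Python follows. The server_for function, the choice of SHA-1, and the fixed server count are all assumptions for illustration; the only essential idea is hashing just the substring between [ and ] when present.

```python
import hashlib

N_SERVERS = 16  # assumed cluster size, for illustration only

def server_for(key):
    """Pick a server for a key. If the key contains a [tag], hash only
    the tag, so related keys land on the same server; otherwise hash
    the whole key."""
    start = key.find("[")
    end = key.find("]", start + 1)
    if start != -1 and end != -1:
        key = key[start + 1:end]
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_SERVERS

# All keys sharing the [homenews] tag hash to the same server:
a = server_for("mylist[homenews]")
b = server_for("score_10_[homenews]")
c = server_for("score_20_[homenews]")
assert a == b == c
```

With this rule, a SORT over mylist[homenews] can resolve all of its score keys locally, avoiding the cross-server lookups discussed above.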
What was described in this article will soon appear in the Redis key-value store. If you have feedback about how to improve this model please leave a comment here. Thank you.