
redis: Sorted Sets as iterable key-value work queues

I’m in the process of building an internal search engine at work, and first on the block are our network share drives. During a typical week, about 46,000 unique documents are written to in our Lexington office. That number only covers things like Word docs, Excel spreadsheets, PowerPoint presentations, PDFs, and so forth. (A count of all touches is 3-4x higher.)

This presents some challenges: crawling the network shares at regular intervals is slow and inefficient, and at any given time a large portion of the search index may be out of date, which is bad for the user experience. So I decided an incremental approach would be better: if I re-index each document as it’s touched, I eliminate unnecessary network and disk IO, and the search index is updated in close to realtime.
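
Nothing here depends on how the “touch” events are detected, but for illustration, here is a minimal sketch of a producer using FileSystemWatcher. The share path and the EnqueueForIndexing helper are hypothetical, and on real UNC shares a file server audit log may be more reliable than FileSystemWatcher:

using System.IO;

//Hypothetical watcher over a single share; EnqueueForIndexing would
//write the change into the redis queue described below
var watcher = new FileSystemWatcher(@"\\server\share")
{
    IncludeSubdirectories = true,
    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName
};
watcher.Changed += (s, e) => EnqueueForIndexing(e.FullPath);
watcher.Created += (s, e) => EnqueueForIndexing(e.FullPath);
watcher.EnableRaisingEvents = true;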

Redis

My second-level cache/work queue is a redis instance that I interact with using BookSleeve. The keys are paths that have changed, and the values are serialized message objects, stored as byte arrays, that contain information about the change. The key-value queue structure is important, because the only write operation that matters is the one that happened most recently. (Why try to index a document if it gets deleted a moment later?)
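
To make the shape concrete, here is a sketch of what such a message might look like; the ChangeMessage type and SerializeMessage helper are invented for illustration. Because SET simply overwrites, a second write to the same path replaces the first, so the queue never holds stale work for a given document:

//Hypothetical change message, serialized to a byte[] before storage
[Serializable]
class ChangeMessage
{
    public string Path;
    public DateTime TouchedUtc;
    public string ChangeType;  //e.g. "Created", "Changed", "Deleted"
}

//Two writes to the same key: only the latest change survives
connection.Strings.Set(REDIS_DB, path, SerializeMessage(firstChange));
connection.Strings.Set(REDIS_DB, path, SerializeMessage(latestChange));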

This is great in theory, but somewhere along the way I naively assumed it was possible to iterate over redis keys. You can’t, not easily or efficiently, anyway. (The KEYS command blocks the server while it scans the entire keyspace, so using it in a production environment is dangerous and should be avoided.)

Faking iteration

The solution was relatively simple, however, and like all problems in software development, it was solved by adding another level of indirection: using the redis Sorted Set data type.

For most use cases, the main feature that differentiates a Set from a Sorted Set is the notion of a score. But because its members are kept in score order, a Sorted Set can also return a specific range of elements by rank, say the first n. In my case, each element returned is the key of a key-value pair representing some work to be done.

Implementing this is as easy as writing the path to the Sorted Set at the same time as the key-value work item is added, which can be done transactionally:

using (var transaction = connection.CreateTransaction())
{
    int i = 0;  //dummy value representing some "work"
    foreach (var word in WORDS)
    {
        //Add the key to the Sorted Set so the consumer can find it later
        transaction.SortedSets.Add(REDIS_DB, set, word, SCORE);

        //Set the key => message (int i) value
        transaction.Strings.Set(REDIS_DB, word, i.ToString());
        i++;
    }
    //Block until the MULTI/EXEC round trip completes
    connection.Wait(transaction.Execute());
}
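
A note on SCORE: in this snippet it’s a constant, and redis orders members with equal scores lexicographically, so the consumer below sees paths in lexicographic order. If you wanted oldest-work-first semantics instead, the write timestamp would make a natural score.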

Downstream, my consumer fills its L1 cache by reading n elements from the Sorted Set:

var pairs = new Dictionary<string, string>();
using (var transaction = connection.CreateTransaction())
{
    //Get the first LIMIT keys (the stop rank is inclusive, hence LIMIT - 1)
    var keyList = connection.Wait(
        connection.SortedSets.RangeString(REDIS_DB, set, 0, LIMIT - 1));

    foreach (var key in keyList)
    {
        //Fetch the serialized message for this key
        var value = Encoding.Default.GetString(
            connection.Wait(connection.Strings.Get(REDIS_DB, key.Key)));
        pairs.Add(key.Key, value);

        //Remove the key from the Sorted Set
        transaction.SortedSets.Remove(REDIS_DB, set, key.Key);
        //Remove the key-value pair itself
        transaction.Keys.Remove(REDIS_DB, key.Key);
    }
    //Block until the deletes are applied
    connection.Wait(transaction.Execute());
}

And there we have fake key “iteration” in redis.
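
Putting the two halves together, a consumer might poll like this. This is a minimal sketch, assuming a hypothetical BATCH_SIZE constant and ProcessDocument helper, and omitting error handling; it drains the set in batches until nothing is left:

while (true)
{
    //Read the next batch of keys by rank
    var batch = connection.Wait(
        connection.SortedSets.RangeString(REDIS_DB, set, 0, BATCH_SIZE - 1));
    if (batch.Length == 0)
        break;  //nothing left to index

    foreach (var entry in batch)
    {
        var raw = connection.Wait(connection.Strings.Get(REDIS_DB, entry.Key));
        ProcessDocument(entry.Key, raw);  //e.g. re-index the file at this path

        //Dequeue: remove from both the Sorted Set and the key space
        connection.SortedSets.Remove(REDIS_DB, set, entry.Key);
        connection.Keys.Remove(REDIS_DB, entry.Key);
    }
}

In practice you’d want a delay between passes rather than a tight loop, but the batching shape is the same as in the snippets above.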