David Cramer's Blog

Building Cursors for the Disqus API

This last week we've been implementing cursors for the Disqus API (3.0). If you're not familiar, the concept is like cursors in your database: create a marker for where you are in your result set so you can iterate through a large set of results efficiently. Think of it like a snapshot: a marker that lets us retrieve the results you were previously looking at and return a subset of them.

LIMIT/OFFSET is Bad

One of the big questions I've seen come up is "Why not just use LIMIT and OFFSET?" To answer this, you must understand how LIMIT/OFFSET actually works. For this we'll use a typical database example. You come in and request all results that rhyme with RICK, and there are approximately 1000 of them. You first ask for the first 100, which is very easy: the database can yield each row as it finds it, so it simply returns the first 100 rows that match. Fast forward, and now you're asking for rows 900-1000. The database must now iterate through the first 900 results before it can return a single row, since it doesn't have a pointer telling it how to jump straight to result 900. In summary, LIMIT/OFFSET is VERY slow on large result sets.
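To make this concrete, here's a minimal sketch using an in-memory SQLite table (the table name and data are made up for illustration; the same applies to any SQL database). Both queries return correct pages, but the second one forces the database to walk past 900 rows first:

```python
import sqlite3

# Toy table standing in for the "rhymes with RICK" example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (id INTEGER PRIMARY KEY, word TEXT)")
conn.executemany(
    "INSERT INTO words (word) VALUES (?)",
    [("rhyme-%d" % i,) for i in range(1000)],
)

# Page 1: the database can stream rows as it finds them -- cheap.
page_1 = conn.execute(
    "SELECT id FROM words ORDER BY id LIMIT 100 OFFSET 0"
).fetchall()

# Page 10: the database must step over the first 900 matching rows
# before returning anything -- the cost grows linearly with the offset.
page_10 = conn.execute(
    "SELECT id FROM words ORDER BY id LIMIT 100 OFFSET 900"
).fetchall()

print(page_1[0][0], page_10[0][0])  # 1 901
```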

Range Selectors

The typical solution for avoiding the above pattern is to switch to range selectors. Using some kind of index, you tell the database exactly where to start and stop. Using the above example, we would say "I want RICK results that have an ID greater than 900 and less than 1000", which gets you approximately the same thing. With this solution, however, you have to worry about gaps in your ranges. The result set for 900 to 1000 could contain anywhere between 0 and 100 rows, which isn't what you really want.
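A quick sketch of the gap problem, again with a throwaway SQLite table: the range query is an index seek (no scan of earlier rows), but deleted or missing IDs mean the "page" size is unpredictable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (id INTEGER PRIMARY KEY, word TEXT)")
# Insert rows with deliberate gaps in the id sequence
# (as if every third row had been deleted).
conn.executemany(
    "INSERT INTO words (id, word) VALUES (?, ?)",
    [(i, "w%d" % i) for i in range(1, 1001) if i % 3 != 0],
)

# Range selector: seeks straight to id 900 via the primary-key index.
rows = conn.execute(
    "SELECT id FROM words WHERE id > 900 AND id <= 1000 ORDER BY id"
).fetchall()

# We asked for a range of 100 ids but the gaps cost us a third of them.
print(len(rows))  # 67
```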

Non-Unique Ranges

There is one final thing we had to take into account when designing our cursors. We use them for both timestamp and incremental ID sorting (ideally timestamp-only), which works great but presents the problem of conflicts. It's very unlikely that two rows will have the exact same datetime (down to the microsecond), but it happens, especially on very large data sets (like ours). To combat this, we have to combine range offsets with row offsets.

id  | timestamp         | title
-------------------------------
1   | 1299061169.043267 | foo
2   | 1299061169.043267 | bar
3   | 1299061170.034193 | baz

Combining Selectors

Our final approach combines range selectors with row offsets. We start by taking the highest range identifier we can from a result set (typically the last row in the result), and then we append a row offset to it (usually 0). In the case where the last row is identical to one or more rows before it (counting from end to start), we simply increment that offset. The resulting database logic turns into something like SELECT * FROM posts WHERE timestamp >= '2012-10-12T08:12:56.34153' ORDER BY timestamp LIMIT 50 OFFSET 5. Remember, the key here is that the "timestamp" value we're sending continually changes as we paginate through the cursor, which keeps these queries very efficient.
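Here's one way this could look in code. This is a sketch, not Disqus's actual implementation: it uses the three sample rows from the table above, assumes an inclusive (>=) comparison so the row offset can skip the duplicate-timestamp rows already returned, and uses SQLite for the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, timestamp REAL, title TEXT)"
)
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [
        (1, 1299061169.043267, "foo"),
        (2, 1299061169.043267, "bar"),  # same timestamp as id 1
        (3, 1299061170.034193, "baz"),
    ],
)

def next_cursor(page):
    """Build (timestamp, row_offset) from the last row of a result page.

    The row offset counts how many trailing rows share the last row's
    timestamp, so the next query knows how many duplicates to skip.
    """
    last_ts = page[-1][1]
    offset = 0
    for row in reversed(page):
        if row[1] != last_ts:
            break
        offset += 1
    return last_ts, offset

page = conn.execute(
    "SELECT id, timestamp, title FROM posts ORDER BY timestamp LIMIT 2"
).fetchall()
ts, offset = next_cursor(page)  # offset is 2: both rows share the timestamp

# Resume with a range selector plus a tiny OFFSET to skip the duplicates
# we already returned -- the offset stays small no matter how deep we page.
next_page = conn.execute(
    "SELECT id, timestamp, title FROM posts "
    "WHERE timestamp >= ? ORDER BY timestamp LIMIT 2 OFFSET ?",
    (ts, offset),
).fetchall()
print(next_page)
```

Production code would also break timestamp ties deterministically (e.g. ORDER BY timestamp, id) so the duplicate count lines up across requests.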

I should note that we also had to deal with the opposite of paginating forward: the obvious "previous results". This had its own set of problems, in that we basically had to reverse all of our operations. Given the cursor we see above, we need to generate a "previous cursor" object. To do this, we take the first row in the series (doing the same offset calculations) and set a directional flag. The result is almost more documentation than code, simply because of how complicated the logic can appear.
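The reversal described above can be sketched as a mirror image of the forward case: anchor on the first row instead of the last, count duplicates forward instead of backward, and tag the cursor with a direction. The field names here are illustrative, not Disqus's actual ones.

```python
def prev_cursor(page):
    """Build a "previous" cursor from the FIRST row of a result page.

    Returns (timestamp, row_offset, direction); the direction flag tells
    the query layer to flip the sort order and reverse the fetched rows.
    """
    first_ts = page[0][1]
    offset = 0
    for row in page:
        if row[1] != first_ts:
            break
        offset += 1
    return first_ts, offset, "prev"

page = [
    (1, 1299061169.043267, "foo"),
    (2, 1299061169.043267, "bar"),  # duplicate timestamp
    (3, 1299061170.034193, "baz"),
]
print(prev_cursor(page))  # (1299061169.043267, 2, 'prev')
```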

The end result of our cursors in the API looks a little bit like this:

    "cursor": {
        "prev": "1299061169043267:0:1",
        "hasNext": true,
        "next": "1299061158809627:0:0",
        "hasPrev": true,
        "total": null,
      },

The logic is a bit fuzzy, and we have to make some best guesses in places (such as determining whether there is actually a valid previous cursor), but the database queries end up about as efficient as we can hope for. We fetch (N results + 1) rows when paginating forward, and (N results + 2) when pulling up previous cursors. To avoid confusion: this is literally one query per request, period. There's no additional overhead for doing counts or for determining your next and previous cursors. That's one optimized SQL statement to fetch your results and calculate both your next and previous cursors.

Since I feel bad for not leaving you all with much code, check out some of the database utilities that we use at Disqus to make life with Django QuerySets a bit easier.