Datastore & Lexicographical contention

Colt McAnlis
Aug 23, 2018 · 3 min read

Urban IoT is a sensor and spatial-representation company. Their work is simple: they deploy a whole load of sensors across an urban area that monitor everything from noise and temperature to humidity, pressure, and more. They were noticing issues when pushing their timestamped data to Cloud Datastore on GCP: after a while, reads on their dashboards slowed down significantly.

Stamped out.

Cloud Datastore is built on top of Google’s NoSQL database, Bigtable, and is subject to Bigtable’s performance characteristics. Bigtable scales by sharding rows into separate tablets (a tablet being a contiguous unit of storage). When a tablet experiences a high write rate, it has to “split” into more than one tablet; that splitting is the core mechanism that allows Bigtable to scale properly.

The important point here is this: rows sharded onto separate tablets are lexicographically ordered by key.

By default, Cloud Datastore allocates keys using a scattered algorithm. As such, when values are randomly or even semi-randomly distributed, tablet splits function well. This is because the work to write multiple values is distributed amongst several Bigtable tablets.

However, when the keys are lexicographically close (like a monotonically increasing timestamp…), this work gets concentrated onto a single tablet within a small window of the key space, and everything slows down.
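To make that concrete, here’s a minimal sketch of the anti-pattern, assuming the google-cloud-datastore Python client; the SensorReading kind and the microsecond key format are just illustrative choices, not anything from Urban IoT’s actual code:

```python
import time

from google.cloud import datastore

client = datastore.Client()

def write_reading(sensor_id, value):
    # The current time in microseconds, zero-padded so it sorts as a string.
    # Every new key sorts immediately after the previous one, so all of these
    # writes land on the same Bigtable tablet.
    key_name = "%020d" % int(time.time() * 1e6)
    entity = datastore.Entity(key=client.key("SensorReading", key_name))
    entity.update({"sensor_id": sensor_id, "value": value})
    client.put(entity)
```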

Which, lo and behold, was exactly what Urban IoT was doing.


Remember that for indexed values, Datastore must write corresponding index rows, which are also sharded across tablets. If you’re creating new entities at a high rate with a monotonically increasing indexed property (like a timestamp), you’ll run into the same hotspotting, and your reads pay the price. The “eventual consistency” model of Datastore takes longer to become “consistent”; as performance worsens, read queries take longer, and eventually they time out.
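Worth noting: this bites even when the entity key itself is well distributed, because the index rows for the timestamp are what cluster. A hedged sketch, again with the Python client (the created_at property name is hypothetical): if you don’t actually need to query on the timestamp, excluding it from indexes sidesteps the index hotspot entirely.

```python
import datetime

from google.cloud import datastore

client = datastore.Client()

# A partial key (kind only): Datastore picks a scattered numeric ID on put.
key = client.key("SensorReading")

# Excluding the timestamp from indexes means no index rows are written for it,
# so there is nothing to hotspot -- at the cost of not being able to query on it.
entity = datastore.Entity(key=key, exclude_from_indexes=("created_at",))
entity.update({
    "sensor_id": "sensor-42",
    "created_at": datetime.datetime.now(datetime.timezone.utc),
    "value": 21.5,
})
client.put(entity)
```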

The take-away here is that actually hitting this condition is really rare.

To prove this, check out the graph below: it’s a 250 QPS load test against Datastore, creating entities whose keys are timestamps, over a 20-minute window.

It’s not until we jack the rate up to 750 QPS and run the test for at least an hour that things start to go south:
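For reference, the load generator was nothing fancy. A rough sketch of the shape of such a test (an assumption on my part, not the exact harness we used) looks like this:

```python
import time

from google.cloud import datastore

client = datastore.Client()

def run_load_test(qps=250, duration_s=20 * 60, batch_size=25):
    # Write timestamp-keyed entities at roughly `qps` writes/sec until the
    # window closes. A real test would spread this across many workers; the
    # point is only that every write lands at the tail of the key space.
    deadline = time.time() + duration_s
    while time.time() < deadline:
        start = time.time()
        batch = []
        for _ in range(batch_size):
            key = client.key("SensorReading", "%020d" % int(time.time() * 1e6))
            entity = datastore.Entity(key=key)
            entity["value"] = 0.0
            batch.append(entity)
        client.put_multi(batch)  # one RPC per batch of timestamp-keyed writes
        # Sleep off whatever is left of this batch's time slot.
        time.sleep(max(0.0, batch_size / qps - (time.time() - start)))
```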

Sadly though, this is _exactly_ the situation that Urban IoT was in. They had hundreds of thousands of sensors distributed in an urban environment, all creating new timestamped data at an alarmingly high rate.

The fix is in.

Recall that, by default, Cloud Datastore allocates keys using a scattered algorithm. Thus you will not normally encounter hotspotting on Cloud Datastore writes, even at a high write rate, as long as you create new entities using the default ID allocation policy. Typically you only see this type of problem when you’re creating the IDs yourself and making them linear.
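In code, the difference is just whether you hand Datastore a complete key. A minimal sketch with the Python client (the SensorReading kind and the counter value are hypothetical):

```python
from google.cloud import datastore

client = datastore.Client()

# Default policy: a partial key (kind only) lets Datastore assign a scattered
# numeric ID when the entity is put, so successive writes spread across tablets.
entity = datastore.Entity(key=client.key("SensorReading"))
entity["value"] = 21.5
client.put(entity)

# The risky pattern: assigning your own linear IDs (a counter, a timestamp)
# recreates the lexicographic clustering that causes hotspots.
next_id = 1000001  # e.g. an incrementing counter you manage yourself
entity = datastore.Entity(key=client.key("SensorReading", next_id))
entity["value"] = 21.5
client.put(entity)
```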

If you do have a key or indexed property that will be monotonically increasing, you can prepend a random hash to ensure the keys are sharded onto multiple tablets. This is awkward if you plan on doing queries, since you’ll need to prefix and un-prefix the values and then join the results in memory, but it will reduce the error rate of your writes.
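Here’s a hedged sketch of what that sharded-key approach can look like with the Python client; the shard count, the SensorReading kind, and the key format are illustrative choices, not prescriptions:

```python
import random
import time

from google.cloud import datastore

client = datastore.Client()
NUM_SHARDS = 16  # arbitrary; enough to spread the write rate

def write_reading(value):
    # Prepend a shard prefix so consecutive timestamps land in different
    # parts of the key space (and therefore on different tablets).
    shard = random.randrange(NUM_SHARDS)
    key_name = "%02d#%020d" % (shard, int(time.time() * 1e6))
    entity = datastore.Entity(key=client.key("SensorReading", key_name))
    entity["value"] = value
    client.put(entity)

def read_range(start_us, end_us):
    # Querying now means fanning out one key-range query per shard and
    # joining ("un-prefixing") the results in memory.
    results = []
    for shard in range(NUM_SHARDS):
        lo = client.key("SensorReading", "%02d#%020d" % (shard, start_us))
        hi = client.key("SensorReading", "%02d#%020d" % (shard, end_us))
        query = client.query(kind="SensorReading")
        query.add_filter("__key__", ">=", lo)
        query.add_filter("__key__", "<", hi)
        results.extend(query.fetch())
    # Strip the prefix to sort the merged results back into timestamp order.
    results.sort(key=lambda e: e.key.name.split("#", 1)[1])
    return results
```

Pick the shard count with care: more shards spread the write load further, but every extra shard is one more query you have to fan out and merge at read time.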

Doing so got Urban IoT’s performance back down to what they’d expect at the 250 QPS rate, without those hotspotting issues.

Last note

And one more tip:

Don’t prematurely optimize for this case, since chances are, you won’t run into it.

It took a couple days of really hammering on Datastore to get our tests to behave the way that Urban IoT was seeing in the wild. So unless you’re in that ballpark, I suggest spending your time in other areas ;)
