How 1970s database research influenced our new integration platform

Leo Cassarani
Geckoboard: Under The Hood
6 min read · Dec 4, 2018

Geckoboard’s integrations with third-party services are the core of the product. Without them, our customers couldn’t connect their data from all the different tools they use.

However, the sheer variety of tools and metrics our customers need makes our integrations platform one of our biggest engineering challenges!

How do we provide accurate, up-to-date data for a large portfolio of integrations? In some cases, even getting the data isn’t easy: often the numbers aren’t available directly from the service’s API and have to be calculated. And then there’s the maintenance burden. How do you handle changing APIs while tracking down bugs and efficiently rolling out new features across every integration?

Over the years we’d tried a variety of approaches, from a team of interns to opening up our platform to third-party developers, but we never truly cracked these problems. Each new integration, metric, and visualisation had to be hand-coded by an engineer.

The result was that even when we had an integration with a service, it often didn’t support the actual metric the customer wanted!

There were incremental improvements we could, and did, make, but we knew that if we were ever to substantially increase the number and quality of our integrations, we needed to rethink the problem from the ground up. Here’s the story of how we did it.

Leo at work in the Geckoboard office.

Lessons from history

Even before we kicked the project off, it struck me that there were analogies between the problem we were trying to solve and relational databases. It was an unorthodox approach, but I started the project by immersing myself in a whole load of academic research from the 1970s. This proved invaluable. The 70s were a groundbreaking period in Computer Science, and I began by reading Edgar F. Codd’s 1970 paper, “A Relational Model of Data for Large Shared Data Banks”, which shows how information stored in large databases can be accessed without the user knowing how it is structured or where it resides in the database.

I began thinking: what if the data we were dealing with wasn’t buried behind a third-party API, but instead was in a database that we controlled? The operations we wanted to perform with our integrations were the same ones that you’d get in a relational database, or with SQL.

Next, I started reading papers about the first relational database systems. One paper published by IBM in the 70s about a database system called System R was particularly useful. System R was, effectively, the first database to really implement the SQL idea: the declarative query. A main focus of the paper was the problem of access path selection (how do you choose the best way to get at the data?), and the answer was a query planner.

With a query planner, you take a query and hand it to a dedicated component of the database that’s responsible for planning how to answer it. That component generates a set of candidate plans, scores them, and selects the plan with the best score. Finally, the chosen plan is shipped off to the part of the database that actually executes it. Prior to this, whoever wrote the query was effectively choosing the access path themselves and piping it straight through to the execution engine.
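
To make that concrete, here’s a minimal sketch of the generate–score–select loop, written in Go with entirely hypothetical types. It shows the shape of the idea rather than our production code:

```go
package planner

// Plan is one candidate strategy for answering a query, e.g. a particular
// sequence of API calls or index lookups.
type Plan struct {
	Steps []string
	Cost  float64 // lower is better
}

// Planner generates candidate plans for a query and picks the cheapest one.
// Enumerate and Score stand in for the modular parts of the database:
// plan generation and plan scoring.
type Planner struct {
	Enumerate func(query string) []Plan
	Score     func(p Plan) float64
}

// Best returns the lowest-cost plan for the query, ready to be handed to
// whatever component actually executes it.
func (pl Planner) Best(query string) (Plan, bool) {
	candidates := pl.Enumerate(query)
	if len(candidates) == 0 {
		return Plan{}, false
	}
	best := candidates[0]
	best.Cost = pl.Score(best)
	for _, c := range candidates[1:] {
		c.Cost = pl.Score(c)
		if c.Cost < best.Cost {
			best = c
		}
	}
	return best, true
}
```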

This was a major innovation at the time, and nearly 50 years later it seemed like the perfect way to think about our integration problem. Following the example of System R, we could write a query planner and place it between the user who wants an answer to their query and the third-party API which stores the data.

This was all very well in theory, but how would our query planner learn how to work with third-party APIs? Whenever you look at an API’s documentation, it will usually list its core models (the objects in the third-party system), its endpoints (the queries and mutations that can be performed on those models) and the relationship graph between its models (how one object can be “joined” onto another). Our challenge, then, was to translate this documentation into a machine-readable format that our query planner could consult whenever it was asked to answer a question, such as: “how many Zendesk tickets did we solve this week?”. Would this even be possible?
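
As a rough illustration, here’s a hypothetical sketch in Go of what such machine-readable metadata could look like. The type and field names, and the Zendesk-flavoured fragment at the bottom, are invented for this post rather than taken from our actual format:

```go
package metadata

// Model describes an object in the third-party system, e.g. a Zendesk ticket.
type Model struct {
	Name   string
	Fields []string
}

// Endpoint describes a query that can be performed against a model:
// which model it returns, what it can filter on, and whether it paginates.
type Endpoint struct {
	Path      string
	Returns   string   // name of the Model it returns
	Filters   []string // fields the API can filter on server-side
	Paginated bool
}

// Relationship records that one model can be "joined" onto another,
// e.g. a ticket has an assignee that is a user.
type Relationship struct {
	From, To string
	Via      string // the field that links them
}

// API is the machine-readable description the query planner consults when
// it's asked something like "how many tickets did we solve this week?".
type API struct {
	Models        []Model
	Endpoints     []Endpoint
	Relationships []Relationship
}

// A hypothetical fragment of a Zendesk-like description.
var Zendesk = API{
	Models: []Model{
		{Name: "ticket", Fields: []string{"id", "status", "solved_at", "assignee_id"}},
	},
	Endpoints: []Endpoint{
		{Path: "/api/v2/search", Returns: "ticket", Filters: []string{"status", "solved_at"}, Paginated: true},
		{Path: "/api/v2/tickets", Returns: "ticket", Paginated: true},
	},
	Relationships: []Relationship{
		{From: "ticket", To: "user", Via: "assignee_id"},
	},
}
```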

Query planning steps.

Building the platform

At this stage of the project we were still a small research team, but slowly we broadened the conversation and started to involve more people.

We spent a couple of months refining our designs and building prototypes — first on paper and then in code. The initial results were positive so we started testing a wider variety of integrations and different types of API. The approach seemed to work. It was time to get started in earnest!

Although we were building a generic platform, we started by tackling one integration first. Zendesk was one of our most popular integrations on the old platform, but the metrics we provided were generally not what our customers needed. Whereas a lot of other APIs give you pre-aggregated, pre-computed numbers and metrics, Zendesk doesn’t. And as we wanted to tackle the longstanding issue with paginated APIs head-on, Zendesk was the clear choice.

At first, we built a query planner that would produce the most naive possible plan, and as we were initially focusing on Zendesk, we designed just enough of the declarative format for specifying how an API works to get some basic queries up and running. After that, we worked hard on making our query planner smarter, adding a set of optimisations to ensure we always came up with the most efficient query plan. We also extended the API “metadata” to fully support some of the more unusual endpoints in the Zendesk API.
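
To give a flavour of the kind of optimisation involved (illustrative only, not our actual rules), one classic rewrite is to push a filter down into the API request whenever the endpoint supports it server-side, instead of fetching every page and filtering locally:

```go
package planner

// Filter is a single condition, e.g. status = "solved".
type Filter struct {
	Field, Op, Value string
}

// Step is one API call in a plan, with the filters applied by the
// third-party API and the filters we apply ourselves after fetching.
type Step struct {
	Endpoint      string
	ServerFilters []Filter
	LocalFilters  []Filter
}

// pushDownFilters moves any filter the endpoint supports server-side out of
// the naive "fetch everything, then filter locally" plan and into the API
// request itself, so far fewer pages need to be fetched.
func pushDownFilters(s Step, supported map[string]bool) Step {
	var local []Filter
	for _, f := range s.LocalFilters {
		if supported[f.Field] {
			s.ServerFilters = append(s.ServerFilters, f)
		} else {
			local = append(local, f)
		}
	}
	s.LocalFilters = local
	return s
}
```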

Of course, the back-end wasn’t the whole story. To realise our dream of being able to add new integrations in days, not months, we also had to completely rethink how we built our integration UIs. It wouldn’t be much good if they still needed to be manually created each and every time we wanted to integrate with a new service. To solve this we designed and implemented a new dynamically generated universal UI, but that’s a whole other blog post!

All in, it took us around nine months from beginning to end to release the Zendesk integration on the new platform. And as hoped, for most of that time we weren’t doing anything Zendesk-related; we were just building a platform that could deal with arbitrary integrations and arbitrary queries for those integrations.

Now, nearly a year on, we’re proud (and slightly relieved!) that the platform is living up to its promise. It’s now easier than ever to build a new integration, and improvements we make to the platform automatically roll out to all our new integrations. A far cry from when each one was individually hand-coded.

The future of the ‘Data In’ project

Of course, the work is not done! We’re continuing to add support for new types of API and we still regularly draw inspiration from the history of relational databases. Our query planner continues to get smarter. We’ve added optimisation rules to make certain slow queries much faster. We’ve begun rolling out cardinality estimation, so that when the query planner suggests multiple plans, we’re able to pick the most efficient one.
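
To give a flavour of what cardinality estimation buys us, here’s an illustrative (not production) costing sketch: estimate how many rows a plan will fetch from rough statistics, convert that into a number of paginated requests, and let the planner prefer the cheaper option.

```go
package planner

// Stats holds rough statistics the planner can use to estimate plan cost.
type Stats struct {
	TotalRows   int64              // e.g. total tickets in the account
	Selectivity map[string]float64 // fraction of rows matching each filter field
}

// estimateRows guesses how many rows a step will return, given which
// fields it filters on server-side.
func estimateRows(st Stats, serverFilterFields []string) float64 {
	rows := float64(st.TotalRows)
	for _, f := range serverFilterFields {
		if sel, ok := st.Selectivity[f]; ok {
			rows *= sel
		}
	}
	return rows
}

// estimateCost converts an estimated row count into a number of paginated
// API requests, assuming a fixed page size.
func estimateCost(rows float64, pageSize int) float64 {
	if pageSize <= 0 {
		pageSize = 100
	}
	return rows / float64(pageSize)
}
```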

Then there’s the whole question of caching! For us, caching means two things. The first is what you’d typically think of: temporarily storing data to improve efficiency. We want to keep our customers’ data as up-to-date as possible whilst minimising the number and size of requests we make to third-party APIs, so that we don’t fall foul of rate limits. Working out what to request and how to schedule it is a challenging problem.
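
As a simplified illustration of that first kind of caching (the real scheduling logic is much more involved than this), a request-keyed cache with a time-to-live lets us reuse a recent response instead of spending another call against a rate limit:

```go
package cache

import (
	"sync"
	"time"
)

type entry struct {
	value     []byte
	fetchedAt time.Time
}

// TTLCache stores recent API responses keyed by request, so identical
// requests within the TTL don't hit the third-party API again.
type TTLCache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

func New(ttl time.Duration) *TTLCache {
	return &TTLCache{ttl: ttl, m: make(map[string]entry)}
}

// Get returns a cached response and true if one exists and is still fresh.
func (c *TTLCache) Get(requestKey string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[requestKey]
	if !ok || time.Since(e.fetchedAt) > c.ttl {
		return nil, false
	}
	return e.value, true
}

// Put stores a freshly fetched response.
func (c *TTLCache) Put(requestKey string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[requestKey] = entry{value: value, fetchedAt: time.Now()}
}
```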

We’re also looking to use caching to support historical data. Let’s say you want to display how many followers you have on Twitter. It’s very easy for us to go to the Twitter API and say: “Tell me how many followers I have.” What the API can’t tell you is how many followers you’ve had over time. By extending our platform to collect and store values at regular intervals, we can build these missing datasets.
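
A simplified sketch of that idea: poll the current value on a fixed interval and store each reading with its timestamp, building up the time series the third-party API doesn’t offer. The fetch and store functions here are placeholders, not our real query and storage layers.

```go
package history

import (
	"context"
	"time"
)

// collectSnapshots polls a current-value query (e.g. a follower count) on a
// ticker and appends each reading with its timestamp, building the missing
// historical dataset over time.
func collectSnapshots(ctx context.Context, interval time.Duration,
	fetch func(context.Context) (float64, error),
	store func(t time.Time, v float64) error) error {

	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case now := <-ticker.C:
			v, err := fetch(ctx)
			if err != nil {
				continue // skip this interval; a real system would log and retry
			}
			if err := store(now, v); err != nil {
				return err
			}
		}
	}
}
```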

These are just a few of the ways we’re extending the platform to make it more powerful. Ultimately we’re aiming for a future where programmers won’t be needed to add new integrations. That would be incredible — both technically and for us as a business. There’s a lot of exciting work for us to do yet, but that’s where we’re headed.

And in the meantime, we’re hiring, so why don’t you come join us?!
