How Hasura saved us $50,000

Brandon
Aug 14, 2019 · 7 min read

We recently decided to use a GraphQL solution called Hasura on a big project. Thanks to Hasura, we completed the project with 1.5 engineers in just over a month, a savings of around $50k compared to contractors’ quotes. This article explains what Hasura is and why we chose it.

Recently, we decided to build a realtime chat web app from scratch. Our requirements were about what you’d expect: it had to be realtime, support a standard set of chat features, and use our front-end stack of React, TypeScript, and Apollo.

Since the project was relatively self-contained, we decided to solicit bids from contractors. We spent a few weeks drafting a spec, then reached out to contractors ranging in size from solo developers to large studios.

The quotes we got back were surprising: bids ranged from $70k to $170k, with anywhere from 6–10 weeks of development time for teams of 1–3 developers.

These bids were much higher than we’d expected, so we went back to our spec & tried to understand what contractors were seeing that we weren’t. We reasoned that the bulk of the work stemmed from two pieces of the app: the backend, and the client-side challenges presented by creating a realtime app with Apollo (which, for all its charms, still hasn’t nailed the ergonomics of its subscriptions system).

We decided to in-house the project, but we wanted to find a way to minimize the complexity burden of these two requirements. The real-time requirement prompted us to take a look at Firebase, which, to its credit, is sort of a household name when it comes to realtime.

But Firebase comes with flexibility tradeoffs we weren’t ready to make. While the details of those tradeoffs are out of scope for this article (you can read some of them in this comparison of GraphQL and Firebase), they were pretty similar to the two-pronged Faustian bargain most BaaS offer:

  1. Lock-in: most BaaS require adopting a certain data model, using a proprietary SDK, or some other form of lock-in.
  2. Long-term support: if the service shuts down, users may be left holding the bag.

Like many (most?) startups, we weren’t ready to make those compromises.

Then, almost on a lark, we took a look at Hasura.

Hasura markets its product as “GraphQL out of the box.” It’s a standalone server that sits between your database & client. This differentiates Hasura from other offerings (like Prisma) and puts it closer in spirit to projects like PostGraphile.

[Diagram: Hasura vs. Prisma architecture. Source: hasura.io]

To set Hasura up, you simply point it at your Postgres instance & launch it. Things get a bit more tricky if you’re pointing Hasura at a pre-existing Postgres instance, but in general, it’s pretty painless.

From there, you manage your schema via the Hasura GUI. As you add entities to your database model, Hasura creates just about every GraphQL query, mutation, subscription, and parameter you might possibly need.
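
To make this concrete, here’s a rough sketch of the sort of operations Hasura exposes for a hypothetical messages table (the table and its columns are invented; the root-field and argument names follow Hasura’s usual conventions):

import gql from "graphql-tag";

// Query root field named after the table, with filtering, ordering, and paging built in.
export const GET_MESSAGES = gql`
  query GetMessages {
    messages(order_by: { created_at: desc }, limit: 20) {
      id
      body
      created_at
    }
  }
`;

// Insert mutation generated from the same table definition.
export const ADD_MESSAGE = gql`
  mutation AddMessage($body: String!) {
    insert_messages(objects: [{ body: $body }]) {
      returning {
        id
      }
    }
  }
`;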

Since it’s FOSS and self-hosted, we knew Hasura wasn’t as much of a risk as a BaaS offering like Parse (or any of the other BaaS that’ve shuffled off this mortal coil over the past decade). Still: nothing’s free, so we figured the convenient abstractions Hasura offered would come with most of the same tradeoffs as a regular BaaS.

We’re happy to report that we were wrong. Here’s why.

Lock-in

We sought to measure how much lock-in Hasura brought by estimating how much time it’d take to migrate away should the need arise somewhere down the line.

To quantify this, it’s important to understand precisely how Hasura works. If you know GraphQL, you might reasonably assume that, under the hood, Hasura’s creating custom resolvers for every mutation, query, subscription, etc.

Were this the case, it’d mean that migrating away from Hasura would involve writing code for dozens of new resolvers that produced exactly the same results as Hasura’s dynamically generated resolvers. That’d make the cost of migrating away (and, therefore, the lock-in) massive.

But this isn’t how Hasura works. Instead, every GraphQL operation you send to the server is transpiled to raw SQL.

For example, consider this GraphQL query that fetches the roles of a user with an ID of 2:

query {
  app_users(where: { id: { _eq: 2 } }) {
    roles {
      role {
        id
        name
      }
    }
  }
}

Using the query analyzer built into the Hasura GUI, we can see the exact SQL being generated against our schema and, as an added bonus, the execution plan from Postgres’ EXPLAIN:

[Screenshot: the generated SQL and execution plan for a Hasura-generated GraphQL query]

This is about as transparent as I think a product like Hasura can hope to be. There’s very little “magic” happening, so swapping Hasura for another solution in the future should be straightforward. This was enough to address our concerns about lock-in.

Efficiency gains

We weighed the lock-in “costs” of Hasura against the value it’d add to our productivity. For us, most of the value stems from three key features.

Proximity to Postgres’ features

Before Hasura, most of the server-side code we wrote was simply translating client data needs into SQL queries using an ORM. This code was usually tedious to write & maintain — precisely the sort of boilerplate commodity code we wanted to avoid. Having a human write this sort of code instead of a machine created very little marginal value for our business (to be clear: this of course isn’t true of all resolvers, but ours were especially vanilla and boring CRUD ops).

With Hasura, we get a practically full suite of SQL tools right in the client, reducing our server-side code load to near zero. The auto-generated API covers not only simple tasks — like writing WHERE arguments, sorting, traversing relationships, etc. — but also more specialized tasks, like performing CRUD operations inside of JSON[B] fields, or filtering entities based on the results of multi-table JOIN queries.
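
To illustrate, here’s the kind of filter we can now express entirely from the client. The app_users and roles names come from the example above; the preferences JSONB and created_at columns are invented, while _eq, _contains, and order_by are Hasura’s standard operators:

import gql from "graphql-tag";

// Find admin users whose JSONB preferences contain { "theme": "dark" },
// newest first. No hand-written resolver or SQL required.
export const ADMINS_WITH_DARK_MODE = gql`
  query AdminsWithDarkMode {
    app_users(
      where: {
        preferences: { _contains: { theme: "dark" } }
        roles: { role: { name: { _eq: "admin" } } }
      }
      order_by: { created_at: desc }
    ) {
      id
      preferences
    }
  }
`;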

At its best, Hasura makes front-end development feel like the declarative future we were promised when GraphQL was released: simply state the data requirements of your UI, and let a library handle everything else.

Subscriptions as Live Queries

For a product with realtime requirements, Hasura’s subscriptions might be the killer app.

Subscriptions in Apollo have always worked a bit differently than I think many developers expect. When I first learned about them, I assumed they’d mimic Firebase’s always-up-to-date functionality — i.e., if I write a query for a chat’s messages, and those messages change, my client should update automatically. This pattern’s often called “live query.”

The reality was a little less simple. Instead of delivering always up-to-date results, subscriptions simply alert the listener that a specific thing has happened. For example, an onNewMessage subscription might tell the client that there was a new message, but it’s the client’s responsibility to merge that new message with the messages already in cache.

While Apollo provides some convenience methods to help with this post-event cache update, in practice, it’s still a huge pain to implement. We found that this often made it easier to simply resort to long-polling (trivial with Apollo’s pollInterval prop).
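
For reference, that long-polling fallback looks roughly like this: a plain react-apollo Query component re-fetching every few seconds. The chat schema here is hypothetical:

import React from "react";
import gql from "graphql-tag";
import { Query } from "react-apollo";

// Re-run the query every 3 seconds instead of wiring up a subscription.
const GET_MESSAGES = gql`
  query GetMessages($chatId: Int!) {
    messages(where: { chat_id: { _eq: $chatId } }, order_by: { created_at: asc }) {
      id
      body
    }
  }
`;

export const MessageList = ({ chatId }: { chatId: number }) => (
  <Query query={GET_MESSAGES} variables={{ chatId }} pollInterval={3000}>
    {({ data, loading }: any) =>
      loading || !data ? null : (
        <ul>
          {data.messages.map((m: { id: number; body: string }) => (
            <li key={m.id}>{m.body}</li>
          ))}
        </ul>
      )
    }
  </Query>
);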

Hasura appears to have gone its own way with subscriptions, though, choosing instead to make them bona fide live queries. I’m not exaggerating when I say this cut our network layer code (e.g., Apollo queries & the requisite container components) by at least a third within the client.
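
As a sketch, the live-query version of the same message list looks like this. Because Hasura re-runs the query whenever the underlying rows change, the component simply renders whatever arrives: no cache merging, no subscribeToMore. The schema is again hypothetical:

import React from "react";
import gql from "graphql-tag";
import { Subscription } from "react-apollo";

// Hasura treats this as a live query: the full, up-to-date result set is pushed
// to the client every time the matching rows change.
const MESSAGES_LIVE = gql`
  subscription MessagesLive($chatId: Int!) {
    messages(where: { chat_id: { _eq: $chatId } }, order_by: { created_at: asc }) {
      id
      body
    }
  }
`;

export const LiveMessageList = ({ chatId }: { chatId: number }) => (
  <Subscription subscription={MESSAGES_LIVE} variables={{ chatId }}>
    {({ data, loading }: any) =>
      loading || !data ? null : (
        <ul>
          {data.messages.map((m: { id: number; body: string }) => (
            <li key={m.id}>{m.body}</li>
          ))}
        </ul>
      )
    }
  </Subscription>
);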

I should note here that this feature isn’t without its compromises. For example, we still haven’t quite figured out how to do optimistic UI updates with <Subscription> components. It’s also not clear how future changes to react-apollo might impact Hasura’s unconventional subscription implementation. For now, though, these are tradeoffs we’re happy to make.

Escape hatches

At some point between the client and the database, data has to be transformed from the database model to the shape the UI expects. In a conventional architecture, these transformations often live on the server.

Because BaaS don’t typically let developers write their own logic, any necessary data transformations are off-loaded to the client. We prefer to keep our clients limited to simply rendering views, so we try to avoid this sort of complexity.

Fortunately, Hasura has a respectable collection of escape hatches. Besides remote schemas (a feature also available in a few other GraphQL solutions), Hasura offers something clever: create any Postgres view or SQL function, and Hasura can automatically generate GraphQL operations for it.

Subjectively, complex data transforms are far easier to write and maintain in SQL than in JavaScript or TypeScript. Being able to go from creating the view to querying it in the client in mere seconds feels almost like cheating.
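
As a rough sketch of that workflow (everything named here is invented for illustration): suppose we create a chat_summaries view in Postgres and track it in the Hasura console. It immediately becomes queryable like any other table:

import gql from "graphql-tag";

// Assume a view along these lines exists and has been tracked in Hasura:
//   CREATE VIEW chat_summaries AS
//     SELECT chat_id, count(*) AS message_count, max(created_at) AS last_message_at
//     FROM messages
//     GROUP BY chat_id;
export const CHAT_SUMMARIES = gql`
  query ChatSummaries {
    chat_summaries(order_by: { last_message_at: desc }) {
      chat_id
      message_count
      last_message_at
    }
  }
`;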

Honorable mention: migrations

Hasura emits migration files whenever you make a change to the schema via the GUI. These migrations can be applied at server start-time auto-magically, which promises a full CI/CD migrations workflow right out of the box.

We’re stoked about this promise, but we haven’t yet had the time to try it out, so we can’t say yet whether or not it delivers.

Conclusion

We’ve found that for every additional piece of architecture a feature requires us to touch, the time to complete that feature rises non-linearly. Before Hasura, adding a simple button to our UI usually required making changes to the client, modifying our GraphQL server, updating the database, deploying the migrations, and writing / updating tests for every piece of that chain.

Now, that same button is simply a matter of updating the client, clicking a few things in the Hasura UI, then testing & deploying. We’ve cut the amount of server code we have to maintain by two-thirds, allowing us to spend that time building more cool stuff for our users.

We expect we’ll uncover blemishes in the product as we stretch its legs, but for now, Hasura lets our engineers focus on building better interfaces for our users.

Cresta’s an a16z-backed AI startup born in the labs at Stanford. We’re solving hard problems at the intersection of natural language AI and human-computer interaction. If that’s up your alley, drop us a line or check out our jobs page.

Special thanks to Alex Roe for helping revise this article.
