Type-Safe APIs without writing and maintaining them in our CQRS + Event Sourcing Application

How using Read Models and Hasura saved us months of development

Jaga Santagostino
Casavo
8 min read · Jan 12, 2021


Where are we going?

This year, one major goal at Casavo has been starting work on Worms, an application responsible for encapsulating our Core Domain and orchestrating the processes of many Casavo departments, in order to deal with the complexity of today and the years to come.

A core domain is what makes an organization special and different from other organizations. An organization cannot succeed (or even exist) without being exceptionally good in their core domain.

Where we started

One important thing to consider when talking about Worms is that we weren’t starting from scratch. Inside Casavo there is already a system that is “central” to the business and used by many people in their day-to-day work, so why do we even need Worms?

The old system grew with the increasing needs of the company, and it grew fast over the past few years.

Like, really fast!

This fast-paced development allowed Casavo to grow, automate, and standardize its processes, but it also increased complexity to a level where it started to be really hard to make changes.

Let’s face it: the old system had become a bottleneck. But that’s ok; it served us very well and allowed us to gain the domain knowledge necessary to create something better.

In the trenches

People use the old system daily; we can’t stop them, nor make them wait months or years while Worms becomes “perfect”. So we opted to strangle the old system and incrementally move and add features to the new one.

The design of the new system.

Deciding which features to maintain, which to change, and which to remove is not an easy task. It required a lot of analysis and discussion with domain experts: after all, we are not building a new system for its own sake. We want it to be the best possible experience for our users, and to avoid working on anything that is not relevant (or that has been relevant in the past, but is not anymore).

The results of these analyses were mapped to multiple BPMN diagrams, and development started.

Worms need a Frontend

In the first months of development, the focus was on Worms’ Backend, creating core features and automation that offer great value “behind the scenes”. By the time we started to receive the first designs for the Frontend, Worms did not have any API (nobody had needed one yet).

Usually, this is the moment where we decide whether to use REST or GraphQL. We already use GraphQL at Casavo and we love it, so it would have been an easy choice, but Worms is not just a CRUD application…

Worms is more than a CRUD

Worms is an event-driven application that reacts to important domain events and applies business rules to potentially generate new events:
it is a DDD + CQRS + Event Sourcing application.

DDD + CQRS + ES

I know, I know…
It looks like buzzword bingo, but this architecture has already proved to be very well suited to our use cases: it helps us tame the complexity of our domain in an elegant and scalable manner.

Read Models

In this article, we will focus on just a single aspect of this architecture: the fact that it allows us to have Read Models.

Explaining what a Read Model is and how it works can get complicated pretty quickly without context; the following definition is enough to understand the rest of the article.

A read model is a model specialized for reads, that is, queries. It takes events produced by the domain and uses them to build and maintain a model that is suitable for answering the client’s queries.

For example, if 99% of the time when we query data about a House we also need to read information about the owner (or other pieces of information that live in a different service or data store), we can aggregate all the data at write time and have the client query everything it needs directly. This is more performant and offers a better developer experience.

This means we can create DB tables and views that match how we will use the data in the clients, without requiring complex joins at runtime. Knowing in advance how we will query the data allows us to do all the heavy lifting at write time.

What if we change our mind and need something new or different?

No problem: this architecture allows us to create and populate Read Models to match anything we need (thanks to Event Sourcing). Read Models are just a projection of how we want to see our data; the data itself is stored in the Events and can be used to generate many different projections.
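As a minimal sketch in TypeScript (event and field names are hypothetical; Worms’ backend is actually Elixir), a projection folds domain events into denormalized rows, and replaying the same events from the start is also how a brand-new Read Model gets populated:

```typescript
// Hypothetical domain events for Houses and their owners.
type DomainEvent =
  | { type: "HouseRegistered"; houseId: string; address: string }
  | { type: "OwnerAssigned"; houseId: string; ownerName: string }
  | { type: "HouseDeleted"; houseId: string };

// One denormalized read-model row: house data plus its owner,
// joined at write time so reads need no runtime joins.
type HouseRow = { houseId: string; address: string; ownerName?: string };

// Fold the event stream into the read model. Running this over the
// full history from event zero is how a new projection is populated.
function project(events: DomainEvent[]): Map<string, HouseRow> {
  const rows = new Map<string, HouseRow>();
  for (const e of events) {
    switch (e.type) {
      case "HouseRegistered":
        rows.set(e.houseId, { houseId: e.houseId, address: e.address });
        break;
      case "OwnerAssigned": {
        const row = rows.get(e.houseId);
        if (row) row.ownerName = e.ownerName;
        break;
      }
      case "HouseDeleted":
        rows.delete(e.houseId);
        break;
    }
  }
  return rows;
}
```

In a real system the rows would land in Postgres tables rather than an in-memory map, but the shape of the idea is the same.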

Read models are amazing, ok… now what?

We have all the data we need, but we can’t use it.

It should be easy, right?

The Read Models already have the data exactly the way we want to use it in the Frontend, but in order to do so, we would have to create soooooo much boilerplate:

  • A GraphQL or REST API
  • The logic for filtering, pagination, limit…
  • Authentication/Authorization
  • Write (and maintain) the exact same data structures and encoding/decoding logic in TypeScript (Frontend) and Elixir (Worms Backend) for every single Read Model

How frustrating is that!?

All this complexity and code to maintain, just to move some data around! It’s already validated and ready to read, why is it so hard?
And what about changes? If we have to add or rename a field, we have to touch at the very least 3 places.

What if we could just use the data in the DB?

Wait, What? A Frontend reading from a Database? WTF?

Before sending me to jail for crimes-against-microservices-best-practices, let me clarify: I’m not suggesting writing SQL queries in the Frontend or talking to the DB directly. That is just bad (for so many reasons).

But if we think about it, by having Read Models we use the backend just as a proxy to read data from the DB: to make it secure, allow filtering, limit queries, etc.

Guess what? We can do that without writing all that boilerplate.

Hasura to the rescue

Hasura is a project that allows us to generate a GraphQL server from an existing Postgres database.

Postgres is still the owner of the data and of the DB shape: we decide how it is structured, and Hasura just introspects the DB to generate a type-safe server from it.

Here’s an example:

Postgres table (only column names are important)
Hasura GraphQL Schema generated from Postgres schema
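As a hedged sketch of the idea (table and column names here are hypothetical): given a small `appointments` table, Hasura introspects it and exposes a query root with filtering, ordering, and pagination arguments for free.

```graphql
# Hypothetical Postgres table:
#   CREATE TABLE appointments (
#     id uuid PRIMARY KEY,
#     status text NOT NULL,
#     scheduled_at timestamptz NOT NULL
#   );
#
# Roughly the GraphQL schema Hasura generates from it:
type query_root {
  appointments(
    where: appointments_bool_exp
    order_by: [appointments_order_by!]
    limit: Int
    offset: Int
  ): [appointments!]!
  appointments_by_pk(id: uuid!): appointments
}

type appointments {
  id: uuid!
  status: String!
  scheduled_at: timestamptz!
}
```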

As you can see above, this not only creates a GraphQL server that matches our Read Models exactly, but also allows filtering and limits on every field, out of the box!

By adding Hasura, this is how Worms evolved:

We use Hasura just for reading

Hasura reads from a DB owned and populated by someone else (the Worms backend). This is considered a bad practice, and in a normal situation it is a really bad one.

In a traditional application, the database is owned by a service, and that service is responsible for proxying communication to the DB, in order to make consistent use of the data and avoid queries that can destroy performance (by limiting the number of items or paginating them, for example).

But in this situation we are using Hasura as a read-only layer; everything is still managed by Worms (and to avoid any possible mistake, we use a read-only database credential for Hasura).

The C in CQRS (command).

Hasura is also able to handle the creation and update of records via GraphQL mutations, but we decided not to use this feature: what we write can be very different from what we query, and for this reason “CRUD” entities are not a good fit for us.

We use HTTP requests to send commands to the backend, which can trigger logic that will eventually update Read Models if necessary (for example, a `delete-banana` command will result in removing that banana from the Read Model used to query Bananas 🍌)
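As a sketch of the write path (the command name comes from the example above; the payload shape and helper names are assumptions, not Worms’ actual API), a command is just a named intent serialized into an HTTP POST, completely separate from the Hasura read path:

```typescript
// A command is a named intent plus its payload; the backend decides
// which events (and read-model updates) it produces.
type Command = { name: string; payload: Record<string, unknown> };

// Build the delete-banana command from the example above.
function deleteBananaCommand(bananaId: string): Command {
  return { name: "delete-banana", payload: { bananaId } };
}

// Serialize the command as the body of an HTTP POST to whatever
// command endpoint the backend exposes.
function toRequestBody(command: Command): string {
  return JSON.stringify(command);
}
```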

Conclusion

This architecture allowed our Frontend team to be autonomous in a lot of situations, being able to create many different new pages with different data requirements without requiring changes to the Backend.

Sure, from time to time we have to add/rename or remove a field from a read model (or create a new read model) but no more “can you please add this filter to the API?” or “we need a different sorting in this page” kind of discussions 😄

I feel comfortable saying that this saved us months of writing, testing, and (years of) maintaining boilerplate code.

Bonus chapter

Code (generate) like it’s 2025

We went from not having an API to having a “self-service” GraphQL endpoint with a lot of features!

Now what? We can go even further!

Imagine a GraphQL query like this:
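A hypothetical query of that shape might look like the following (all field and variable names are assumptions for illustration):

```graphql
query CanceledAppointments($specialistId: uuid!) {
  appointments(
    where: {
      status: { _eq: "canceled" }
      acquisition_specialist_id: { _eq: $specialistId }
    }
    order_by: { scheduled_at: desc }
  ) {
    id
    scheduled_at
    status
  }
}
```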

Thanks to Hasura’s filtering capabilities, we can get all canceled appointments for a specific acquisitionSpecialist, and only the fields we need ❤

What happens next?

We use TypeScript and love to have type-safe code; unfortunately, this usually means writing a lot of boilerplate to deal with data fetching.

Yes, I don’t like writing boilerplate code, you got me 😅

  • First, we need to write a TypeScript declaration for this query
  • Then some data-fetching logic to call the GraphQL query
  • In our case it is a React application, so we would use something like Apollo or urql
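Hand-written, the boilerplate for a query like the one above would look roughly like this sketch (all names are illustrative; this is exactly the code we don’t want to maintain by hand):

```typescript
// Hand-written TypeScript declaration for the query result.
type Appointment = {
  id: string;
  scheduled_at: string;
  status: string;
};

type CanceledAppointmentsData = {
  appointments: Appointment[];
};

// Hand-written request construction: build the POST body for the
// GraphQL endpoint, embedding the query and its variables.
function buildRequestBody(specialistId: string): string {
  const query = `
    query CanceledAppointments($specialistId: uuid!) {
      appointments(
        where: {
          status: { _eq: "canceled" }
          acquisition_specialist_id: { _eq: $specialistId }
        }
      ) {
        id
        scheduled_at
        status
      }
    }`;
  return JSON.stringify({ query, variables: { specialistId } });
}

// Hand-written decoding: turn the raw GraphQL response into the
// typed shape declared above.
function decodeResponse(raw: string): Appointment[] {
  const parsed = JSON.parse(raw) as { data: CanceledAppointmentsData };
  return parsed.data.appointments;
}
```

Multiply this by every query in the app, and keep it in sync with the schema by hand, and the frustration becomes obvious.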

While this is far from being complex code, it is very boring to write and maintain, so why don’t we just generate it?

By leveraging tools like graphql-codegen we are able to generate 100% of the code involved in data fetching.

  • no boilerplate ❤
  • typesafe by default ❤
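graphql-codegen is driven by a small config file; a hypothetical setup for a case like ours could look like this (the endpoint and paths are assumptions, while the plugin names are real graphql-codegen plugins):

```yaml
# codegen.yml (hypothetical example)
schema: https://our-hasura-instance.example.com/v1/graphql
documents: "src/**/*.graphql"
generates:
  src/generated/graphql.tsx:
    plugins:
      - typescript              # base types for the whole schema
      - typescript-operations   # types for each query in the .graphql files
      - typescript-react-apollo # React hooks for data fetching
```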

We were able to generate a React hook based on the GraphQL query

We literally only had to write the GraphQL query in the `.graphql` file 🤯!

Remember the days we were writing the types of API response payloads by hand?

That was the past; we don’t want to spend precious time doing that anymore.

Happy code generation!


Independent Software consultant lavoro.devmilano.dev @reactjs_milano organizer — photographer