In this series of blog posts, we share Moonpig’s journey to adopting GraphQL. This part explains why we chose GraphQL and why we transitioned from REST.
At Moonpig we decided to embrace GraphQL when we considered which technologies to use in our re-platforming project at the beginning of 2019, looking also at our (then) sister company Photobox, which had successfully built a GraphQL-based API solution.
A re-platforming project is usually necessary when a system reaches a point in its natural evolution where it becomes challenging to maintain and extend. Together with Serverless and cloud-first technologies, we chose GraphQL for our client-server data communication.
After so many years working with REST, we realised that although it is a robust and reliable technology, our APIs were difficult to maintain.
GraphQL is still a young technology, but it is seeing increased adoption every day because it solves some of the challenges typically faced with REST.
This is not intended to be a fully exhaustive list of all the features that GraphQL has to offer (after five years of GraphQL being public, you can find plenty of good material on the web for that). It’s a look at how we leveraged some of these features to improve our product and our Developer Experience, and an explanation of why we moved away from REST for the client/server communication. In future posts, we’ll also look at the architectural decisions we’ve made to support GraphQL in a serverless and microservice-based platform, and some of the problems we’ve had to solve in that context.
Why REST is difficult
Developing and supporting a fully compliant REST (or RESTful) implementation is difficult. Having endpoints return a single resource that the client has to follow to reassemble the data needed to build a view is challenging. It requires various endpoints to interact with, multiple roundtrips, and additional logic to handle the exchange of requests and responses. For this reason, most of the time, what you will probably end up with is a RESTish implementation, where an endpoint returns more data than it should in order to accommodate the requirements of the client. We can consider each HTTP request as a state machine, with transitions between states to manage.
If a view of the application needs multiple entities, we end up combining these state machines, which results in more code to write and test.
Moreover, this logic would be replicated across all the supported clients, so the amount of code to maintain increases significantly.
the best code is no code at all — Jeff Atwood
From a customer perspective, any additional request has a cost in terms of payload (and therefore data usage) and application latency. This can be problematic where connectivity is unstable or has very low throughput, and because we care a lot about our customers we want to avoid these kinds of problems.
This problem is also known as under-fetching: the client has to perform multiple HTTP requests before all the data needed to render a view is available. Spreading the data across different endpoints also made it difficult for developers to figure out where to fetch it from. Documentation is not always a first-class citizen of an API, and even when you have made your best efforts to write good documentation, having to deal with multiple endpoints increases the cognitive load for the developer.
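The under-fetching pattern can be sketched like this, with stubbed fetchers standing in for HTTP calls to hypothetical `/products/:id` and `/reviews/:id` endpoints (the endpoint names and data shapes are illustrative, not our real API):

```typescript
// Minimal sketch of under-fetching against a hypothetical REST API.
type Product = { id: string; name: string; reviewIds: string[] };
type Review = { id: string; rating: number };

let roundtrips = 0;

// Stub fetchers standing in for HTTP calls to /products/:id and /reviews/:id.
function fetchProduct(id: string): Product {
  roundtrips += 1;
  return { id, name: "Birthday Card", reviewIds: ["r1", "r2"] };
}

function fetchReview(id: string): Review {
  roundtrips += 1;
  return { id, rating: 5 };
}

// To render one product view the client must follow the resource links:
// one request for the product, then one more per review.
function buildProductView(productId: string) {
  const product = fetchProduct(productId);
  const reviews = product.reviewIds.map((id) => fetchReview(id));
  return { name: product.name, ratings: reviews.map((r) => r.rating) };
}

const view = buildProductView("p1");
console.log(view, `${roundtrips} roundtrips`); // 3 roundtrips for a single view
```

Every extra linked resource adds another roundtrip, and every client has to repeat this assembly logic.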
It was also likely that our clients had different requirements in terms of data: a different User Experience requires different data. Having a mobile app consume the same data as a desktop app wastes bytes, because the mobile UI is usually less complex and requires less data. We were over-fetching data, and we didn’t find a good way to let the client select which fields of the response it was interested in.
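By contrast, GraphQL lets each client express exactly the fields its UI needs against the same schema. The queries below are a hypothetical illustration (the field names are made up for the example):

```graphql
# Desktop product page: richer UI, more fields.
query DesktopProduct {
  product(id: "p1") {
    title
    description
    images { url }
    reviews { rating comment }
  }
}

# Mobile product card: smaller UI, fewer fields, fewer bytes.
query MobileProduct {
  product(id: "p1") {
    title
    images { url }
  }
}
```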
For this reason, following the Backends for Frontends pattern, we started to develop bespoke endpoints that returned data already formatted for a specific client. These additional endpoints increased the complexity of the business logic and the number of developers needed to maintain them. Furthermore, this approach moved us even further away from a RESTful implementation.
GraphQL was developed in 2012 at Facebook, which made it public in 2015 with this post on the React blog:
In 2012, Facebook was rewriting its iOS app from a web app to a native app. Whereas before they were rendering HTML as a view for the iOS client, they now needed a way to define an API that was easily consumable from iOS and web clients.
we used to deliver markup from the server and we need to switch to APIs — Nick Schrock, co-creator of GraphQL
What they realised was that they didn’t have a robust way to fetch an arbitrary tree of data. Despite what the name might suggest, GraphQL is not a query language for graph databases; it is intended to query and fetch a tree of data. We can summarise it as:
A query language for clients to express requirements, a type system for servers to express possibilities, and an introspection system that allows clients to discover these possibilities — Marc-André Giroux in Production Ready GraphQL
The transition from REST to GraphQL
In migrating from REST to GraphQL, we rationalised and tried to solve some of the problems we had with our legacy API. We’ve fully embraced some of the principles at the foundation of GraphQL. Our GraphQL API is designed with clients in mind (it’s product-centric). We strive to avoid leaking any implementation details to the clients and aim to expose only what is actually used on the client. We also worked to avoid building a 1:1 mapping between our legacy REST API and GraphQL.
Having a single endpoint to query solved the discoverability problem, removed the network overhead by reducing the number of requests and decreased the cognitive load for the developer.
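For instance, a single request to the one `/graphql` endpoint can assemble data that previously required several REST roundtrips. The entities and fields below are illustrative:

```graphql
# One POST to /graphql fetches several entities in a single roundtrip.
query ProductView {
  product(id: "p1") {
    name
    reviews { rating }
  }
  customer {
    firstName
    basket { itemCount }
  }
}
```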
When we started to rethink our view layer for our website, we knew that a component-based library and a design system needed to be two essential characteristics of the new stack. We can think about almost all UI as a tree of components. We wanted to move away from an imperative programming style with our views in favour of a declarative style, and for this reason (and some others) we chose to use React. Ideally, a component should be able to define which data it requires without worrying about how or from where the data is retrieved. The declarative nature of GraphQL helps the developers to express these requirements in terms of a tree of data and for this reason, is a perfect companion to React.
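A much-simplified sketch of this idea, without React itself: each component declares the fields it needs as a GraphQL fragment, and the page query is composed from those fragments. In practice a client library such as Relay or Apollo Client handles the composition; all names here are hypothetical:

```typescript
// Simplified sketch: each "component" co-locates its data requirements
// as a GraphQL fragment, and the page query is composed from them.
type ComponentSpec = { fragment: string };

const ProductTitle: ComponentSpec = {
  fragment: `fragment ProductTitle on Product { title }`,
};

const ProductPrice: ComponentSpec = {
  fragment: `fragment ProductPrice on Product { price { amount currency } }`,
};

// Compose the page query declaratively from the components it renders.
function composeQuery(components: ComponentSpec[]): string {
  // The fragment name is the second word of the fragment definition.
  const spreads = components
    .map((c) => "..." + c.fragment.split(" ")[1])
    .join(" ");
  const fragments = components.map((c) => c.fragment).join("\n");
  return `query ProductPage { product(id: "p1") { ${spreads} } }\n${fragments}`;
}

const query = composeQuery([ProductTitle, ProductPrice]);
console.log(query);
```

Each component stays ignorant of how or where its data is fetched; it only declares what it needs.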
GraphQL didn’t only solve technical challenges; it also became a communication tool for us. When we start work on a new feature, we first identify the product requirements, and then we use the Schema Definition Language (SDL) as a data modelling tool to define what the API should look like based on the UX (i.e. the client intents) and the UI (i.e. the tree of components). The SDL is the neutral language that both back end and front end developers understand. Collaborating on a schema also has the nice side effect of reducing silos in the organisation. Only when the schema is defined and agreed do we start to derive the implementation details on both the back end and the front end.
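For example, a new feature discussion might produce a small slice of schema like the hypothetical one below, agreed before either side writes any implementation code:

```graphql
# A hypothetical slice of schema agreed between front end and back end.
type Query {
  product(id: ID!): Product
}

type Product {
  id: ID!
  title: String!
  price: Money!
  reviews(first: Int = 10): [Review!]!
}

type Money {
  amount: Int!
  currency: String!
}

type Review {
  rating: Int!
  comment: String
}
```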
As with any API technology (or architectural style), no tool will design a good API for you (an “API that is easy to use and hard to misuse” — Joshua Bloch), so having an agile tool such as the schema, which lets you quickly iterate and safely deprecate fields that are no longer used, eases the burden of this process.
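The SDL’s built-in `@deprecated` directive supports exactly this kind of safe evolution. A hypothetical example:

```graphql
# The old field stays available while clients migrate at their own pace.
type Product {
  id: ID!
  title: String!
  priceInPence: Int! @deprecated(reason: "Use `price` instead.")
  price: Money!
}
```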
The type-safe nature of GraphQL, in combination with the auto-generation of types on the client, helped us to reduce some common bugs.
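As an illustration, this is the kind of type a code-generation tool (such as GraphQL Code Generator) might emit for a query; we have hand-written it here, and the field names are hypothetical:

```typescript
// Hand-written illustration of the kind of type a codegen tool derives
// from the schema plus a query document.
type ProductQuery = {
  product: {
    title: string;
    price: { amount: number; currency: string };
  } | null;
};

// A response typed against the query: the compiler now catches typos like
// `product.titel`, or treating `amount` as a string.
const response: ProductQuery = {
  product: { title: "Birthday Card", price: { amount: 349, currency: "GBP" } },
};

function formatPrice(data: ProductQuery): string {
  if (!data.product) return "unavailable";
  const { amount, currency } = data.product.price;
  return `${(amount / 100).toFixed(2)} ${currency}`;
}

console.log(formatPrice(response)); // "3.49 GBP"
```

Whole classes of “undefined is not a function” bugs become compile errors instead of runtime surprises.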
Even if GraphQL is not as mature as REST and there are still some areas that require standardisation, such as GraphQL errors or GraphQL over HTTP, having a plethora of tools makes the Developer Experience smooth and developers happy (and we will share more about this topic in a future blog post).
The migration from REST to GraphQL is similar to the migration from SQL to NoSQL: if you try to apply what you have learnt with REST directly to GraphQL, you will suffer the same bad experience as if you moved normalised data into a NoSQL database as-is.
As with any migration, we faced a few challenges along the way such as how to implement a caching solution and how to federate multiple micrographs — we will talk about this in more detail in a future blog post — but nothing that we weren’t able to solve with a good understanding of the technology and the right tools.