GraphQL Remote Schema Stitching in a Multi-Service Architecture
A look at how to merge remote GraphQL schemas and speed up responses by leveraging Apollo Engine.
Microservices have been gaining popularity and are now becoming the status quo for building and designing web applications. A common pattern in multi-service architectures is the use of numerous APIs, each with its own purpose, that eventually serve as the building blocks for the frontend application.
GraphQL over REST
GraphQL is a query language for your API that gives you the power to ask for exactly the data you need and nothing more, making it easier to evolve your API over time. You also get complete documentation of your endpoints for free, which makes GraphQL completely badass!
One common misconception is that in order to use GraphQL you must completely rethink your existing infrastructure from scratch, when in fact it can be as easy as adding a very light wrapper around your existing API, enabling you to get started right away.
Remote schema stitching
As your product evolves, you often end up having multiple schemas. Perhaps some are local, while others belong to different data layers like CosmicJS, which comes with its own schema.
Introspecting GraphQL APIs
One useful property of GraphQL is that it supports introspection: you can ask a GraphQL schema for information about what queries it supports by sending it an introspection query.
We can use that to create a new GraphQLSchema instance whose schema definition is identical to the existing one. To achieve this we use graphql-tools, a set of utilities that allows us to use remote GraphQL endpoints as if they were local schemas.
To demonstrate the ease and flexibility of GraphQL and for the purpose of this tutorial, I’ve created a working project on GitHub which is composed of two services.
1. Movie Service
This is pretty straightforward: A simple service to list all the movies or find a movie by id. Nothing fancy.
2. Main Service
This is where the magic happens. The main service combines multiple GraphQL APIs, allowing us to get all the data we need in one request through schema stitching. In order to do that, we need to create a GraphQL schema object for each API. This is done in three steps:
- We use Apollo Link to send requests and fetch the GraphQL results.
- introspectSchema fetches the schema from the remote endpoint so it can be passed to makeRemoteExecutableSchema.
- makeRemoteExecutableSchema creates a local proxy for the schema that knows how to call the remote API: essentially the schema definition plus its resolvers.
makeRemoteExecutableSchema receives two arguments: a schema definition obtained via an introspection query, and a link connected to the existing remote GraphQL API to be proxied. The latter forwards the queries and mutations to the underlying GraphQL API.
Running the Application
To bootstrap the application faster, we use Lerna and Yarn workspaces, which make it easier to manage multiple packages. Before we run the application we need to run the bootstrap command at the root of the repository, which installs all the dependencies and links dependent projects with symlinks. Lastly, we use Lerna to start the services.
git clone https://github.com/suciuvlad/graphql-microservices-example.git
git checkout remote-schema-stitching
npm install --global lerna
lerna bootstrap
lerna run start:dev --parallel
API caching is a common pattern to reduce the load on your servers and get faster response times. When it comes to GraphQL, there are a couple of pain points the community tends to complain about:
- GraphQL requests are typically sent as POST requests, making them rather difficult to cache.
- As the application grows in complexity, the size of individual GraphQL queries grows as well.
- It’s very easy to fall into the trap of the classic N+1 issue.
There are many fantastic resources on how to fix the N+1 issue, so we’re not going to go into the details here. The most popular solution involves using Facebook’s DataLoader.
Improve GraphQL Performance with Apollo Engine & Response Caching
Running a couple of queries reveals that our response times are rather high: ~650 ms for requests made to the third-party GraphQL schema provided by CosmicJS.
To fix this, we’re going to set up Apollo Engine, which comes out of the box with a bunch of cool features like automatic persisted queries, query batching, performance tracing, and error tracking, and also acts as a CDN. Let’s get started:
- We need to add Apollo Engine to our list of packages.
- Then enable Tracing and Cache Control in the Apollo Server.
- Next, we have to log in to Apollo Engine and generate an API key that is eventually passed to the Engine constructor.
git checkout remote-schema-stitching-cache
By now your code should look something along the lines of:
Specifying cache length
It’s imperative to understand that while we did specify a defaultMaxAge, it can be overridden using cache hints inside the schema definitions. However, there’s a gotcha:
Schema stitching doesn’t yet support the ability to pass through cache control extensions from the remote server.
What this means is that we need to add cache hints at the stitching-server layer instead, through resolvers, making use of a handy method called delegateToSchema that allows forwarding parts of queries, or even new queries, to other schemas.
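A resolver at the stitching layer might look like the sketch below, assuming graphql-tools v4 and apollo-cache-control; `movieSchema` stands for the remote executable schema built earlier, and the field names are illustrative.

```javascript
// Sketch: overriding cache control at the stitching layer, then
// delegating the actual work to the underlying remote schema.
const resolvers = {
  Query: {
    movies(parent, args, context, info) {
      // Cache this field for 5 minutes, overriding defaultMaxAge.
      info.cacheControl.setCacheHint({ maxAge: 300 });
      // Forward the query (with its selection set) to the remote schema.
      return info.mergeInfo.delegateToSchema({
        schema: movieSchema,
        operation: 'query',
        fieldName: 'movies',
        args,
        context,
        info,
      });
    },
  },
};
```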
By default, the Engine proxy strips both the tracing and caching extensions out of responses, so we passed some additional options to the constructor in order to verify that the new resolvers are working as expected.
By now everything is set up: requests will be cached by Apollo Engine according to the caching rules, and you’ll have access to your performance metrics.
In this article, we learned how to merge multiple schemas using makeRemoteExecutableSchema from graphql-tools. We then explored how easy it is to add Apollo Engine to our application to get faster response times, and how to use resolvers to override the default cache control by delegating to other schemas.