Serverless GraphQL cached in Redis with Apollo Server 2.0
Cache data sources with Apollo Server 2.0 in AWS Lambda
If you followed the GraphQL Summit in Europe a few days ago, you might be as excited as we were at BrikL about the announcement of Apollo Server 2.0.
We have been using AWS Lambda with GraphQL since 2016 and have switched to Apollo last year using Apollo Client and Server in a ‘serverless’ setting. Now with the new 2.0 release coming up we are excited to give the current release candidate a shot.
First of all, a big thank you to the Apollo team and all contributors, especially Alessio for adding the Lambda support and Yan Cui for his continuous serverless lessons.
Lazy reading…? You can find the repository here
Apollo Server 2.0 combines a lot of features, with Data Sources being one of them. The new release candidate rc2 includes:
- Error handling
- Schema mocking
- Healthchecks
- Edge support
- GraphQL Playground
- and more..
Data sources?
Chances are high that your GraphQL server is not receiving data from a single data store or endpoint. In many cases your schema is made up of legacy REST APIs, external APIs or other sources which you may want to cache.
This is where data sources come in handy in Apollo Server 2.
Martijn Walraven:
Data sources are classes that encapsulate fetching data from a particular service, with built-in support for caching, deduplication, and error handling. You write the code that is specific to interacting with your backend, and Apollo Server takes care of the rest.
Apollo Blog
That means with Apollo Server you can add additional data sources to your schema using the same caching mechanism.
When I read the post yesterday I thought: sounds great, let's set up a sample.
Serverless Apollo Server with Redis
Above we can see an overview of what we are going to put together here.
AWS API Gateway: GraphQL endpoint receiving POST/GET
AWS Lambda: Apollo Server 2.0 in NodeJS using apollo-server-lambda
Amazon CloudWatch: Logging
AWS X-Ray: Log performance traces
Dashbird: If you dislike CloudWatch like me, use Dashbird!
Redis: Cache with e.g. AWS ElastiCache or redislabs.com
NY Times API: Sample REST endpoint for article search we want to cache
The flow will be:
Our client will be sending a GraphQL request off to the API Gateway which then proxies the request to our AWS Lambda.
Apollo Server will then translate the query to the data source in the schema.
Apollo Server will check if the type is 'cacheable' (cacheControl); if so, it will look in our Redis endpoint for a cached response.
If there is no cached response, it will send off a request to our REST API (the NY Times in this case). Again it will check if the response is 'cacheable' and store the result in Redis.
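The cache-aside pattern behind steps 3 to 5 can be sketched in plain NodeJS. This is an illustration only: a Map stands in for the real Redis client, and `fetchFromRest` is a hypothetical stand-in for the NY Times call that Apollo's data sources make for you.

```javascript
// Minimal cache-aside sketch: look in the cache first, fall back to the
// remote REST API on a miss, then store the response with a TTL.
// A Map stands in for Redis here; a real setup would use a Redis client.
const cache = new Map();

async function cachedFetch(key, fetchFromRest, ttlMs = 60000) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // cached response, no remote call
  }
  const value = await fetchFromRest(key); // e.g. the NY Times article search
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

On the first call `cachedFetch` goes to the REST API; subsequent calls within the TTL are served from the cache, which is essentially what Apollo Server's data sources do for you against Redis.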
Enough talking — let’s code
You can find the sample repository here:
https://github.com/Brikl/serverless-apollo-datasource-redis
To get started as quickly as possible we will be using the serverless framework (you can use AWS SAM Local too).
For that we create a serverless.yml to define API and Lambda configuration.
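The repository contains the full file; a minimal sketch could look like the following. The service name, handler path, runtime version and SSM parameter paths are assumptions, so adjust them to your own setup:

```yaml
service: apollo-test

provider:
  name: aws
  runtime: nodejs8.10   # Lambda Node runtime at the time of writing
  environment:
    # Resolved from the SSM Parameter Store at deploy time (see below)
    NY_TIMES_APIKEY: ${ssm:/apollo-test/${opt:stage}/NY_TIMES_APIKEY}
    REDIS_HOST: ${ssm:/apollo-test/${opt:stage}/REDIS_HOST}
    REDIS_PORT: ${ssm:/apollo-test/${opt:stage}/REDIS_PORT}
    REDIS_PASSWORD: ${ssm:/apollo-test/${opt:stage}/REDIS_PASSWORD~true}

functions:
  graphql:
    handler: handler.graphql
    events:
      - http:
          path: graphql
          method: post
          cors: true
      - http:
          path: graphql
          method: get
          cors: true
```

The GET event is there so the GraphQL Playground can be opened in the browser, while queries from the client arrive as POST.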
Next up we will need a handler in NodeJS to run in Lambda:
Using external data sources that are not HTTP-based, such as DynamoDB, will require you to set the callback waiting to false, otherwise your Lambda will time out:
context.callbackWaitsForEmptyEventLoop = false;
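A minimal handler sketch, assuming the apollo-server-lambda release candidate plus the apollo-datasource-rest and apollo-server-cache-redis packages are installed. The class, resolver and field names below are illustrative and simply mirror the sample query used later; see the repository for the full version:

```javascript
const { ApolloServer, gql } = require('apollo-server-lambda');
const { RESTDataSource } = require('apollo-datasource-rest');
const { RedisCache } = require('apollo-server-cache-redis');

// Data source wrapping the NY Times article search REST endpoint.
class NYTimesAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://api.nytimes.com/svc/search/v2/';
  }
  async search(q) {
    const res = await this.get('articlesearch.json', {
      q,
      'api-key': process.env.NY_TIMES_APIKEY,
    });
    return res.response.docs;
  }
}

const typeDefs = gql`
  type Article @cacheControl(maxAge: 300) {
    web_url: String
  }
  type Query {
    searchNYTimes(q: String!): [Article]
  }
`;

const resolvers = {
  Query: {
    searchNYTimes: (_, { q }, { dataSources }) =>
      dataSources.nyTimesAPI.search(q),
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({ nyTimesAPI: new NYTimesAPI() }),
  // Responses marked cacheable end up in Redis instead of the default
  // in-memory cache, so they survive across Lambda invocations.
  cache: new RedisCache({
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PASSWORD,
  }),
});

exports.graphql = (event, context, callback) => {
  // Do not wait for the event loop to drain; the open Redis connection
  // would otherwise keep the Lambda alive until it times out.
  context.callbackWaitsForEmptyEventLoop = false;
  return server.createHandler()(event, context, callback);
};
```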
Besides that, you will need your package.json and a simple webpack configuration to round things up.
Then you are all set to deploy to AWS using serverless deployment command:
serverless deploy --region us-east-1 --stage staging --profile yourprofilename
How to run
Once deployed, serverless will return you an API URL. Since Apollo Server has the GraphQL Playground built in, we can just open the respective URL:
Now we can run our sample query in the playground:
query searchNYTimes {
  searchNYTimes(q: "new york") {
    web_url
  }
}
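Outside the playground, the same query can be sent with curl. The endpoint URL below is a placeholder; use the one serverless printed after deployment:

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query":"query { searchNYTimes(q: \"new york\") { web_url } }"}' \
  https://your-api-id.execute-api.us-east-1.amazonaws.com/staging/graphql
```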
To complete our setup we will need to take a few more steps:
- Setup Dashbird: https://dashbird.io/docs/get-started/quick-start/ This will help you get a better overview of logging, performance and X-Ray traces in one place, especially if you have multiple Lambda functions.
- Setup Redis: If you haven't already, hop over to redislabs.com and set up a Redis instance, or use AWS ElastiCache.
- Get an NY Times API key: Go to https://developer.nytimes.com/signup and request your own key.
- Setup SSM parameters: Instead of committing variables into source control, use a service like AWS Systems Manager and store your parameters there.
Go to: https://console.aws.amazon.com/systems-manager/parameters?region=us-east-1 and add the required parameters:
/apollo-test/staging/NY_TIMES_APIKEY
/apollo-test/staging/REDIS_HOST
/apollo-test/staging/REDIS_PASSWORD
/apollo-test/staging/REDIS_PORT
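Instead of the console you can also create the parameters from the AWS CLI. The values shown are placeholders; the Redis password should be stored as a SecureString:

```shell
aws ssm put-parameter --name /apollo-test/staging/NY_TIMES_APIKEY \
  --type String --value "your-nytimes-key" --region us-east-1
aws ssm put-parameter --name /apollo-test/staging/REDIS_HOST \
  --type String --value "your-redis-host.example.com" --region us-east-1
aws ssm put-parameter --name /apollo-test/staging/REDIS_PORT \
  --type String --value "6379" --region us-east-1
aws ssm put-parameter --name /apollo-test/staging/REDIS_PASSWORD \
  --type SecureString --value "your-redis-password" --region us-east-1
```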
If you have trouble with the setup, feel free to open an issue on GitHub or add a comment here.
Next?
Now, this was just a quick proof of concept of running a Redis cache with the new Apollo Server. It can come in handy when you rely on an external REST API, run your Apollo Server in different regions, or have a pay-per-use API where you want to reduce the number of requests.
In a more sophisticated setup there are a few points to add or keep in mind.
By setting callbackWaitsForEmptyEventLoop to false you may end up closing running connections to other data sources, and some in-flight HTTP requests, e.g. error reporting to Sentry, will be cancelled. You might want to split the Lambda functions into one with callbackWaitsForEmptyEventLoop set to true and one with false, and then perform schema stitching, for example. Note this is specific to Lambda and NodeJS.
Using the SSM Parameter Store is great for replacing variables in source code, but you can take it even further: instead of using environment variables, you can load the parameters on Lambda container setup, as described by the awesome Yan Cui.
Besides using Dashbird for logging and performance traces, in the future you will also be able to use Apollo Engine for more detailed traces of your GraphQL schema and caching insights.
Plus, the Redis cache can also be used as a persisted query store. A common practice is sending persisted queries from Apollo Client as GET requests. When using GET requests we should also add a CloudFront cache in front of our API Gateway, since Apollo Server 2.0 sends Cache-Control headers along to further reduce your Lambda and remote endpoint calls.
Stay tuned: we will share more of our experience with Apollo Server 2.0, Apollo Engine, AWS CloudFront caching and other features soon.
Thank you.
About BrikL
At BrikL we are excited about new technologies in the GraphQL ecosystem such as Apollo Server 2.0; AWS, Apollo and GraphQL are an essential part of our stack.
If you are in Thailand or around, join the GraphQL meetup in Bangkok.
BrikL is a fashion tech startup providing a one-stop solution to fashion companies. Our Fashion Design App is cloud-native built on top of serverless technologies with React, Apollo, GraphQL, NodeJS, DynamoDB and more.