Creating a TypeScript API that consumes gRPC and GraphQL via generated types
There’s been much hype over the past couple of years surrounding both GraphQL and gRPC. At Attest, whenever we create internally facing products, we have a much higher risk appetite, and feel these are great opportunities to play around with new technologies that we might one day use for our customer-facing endpoints. We noticed that both gRPC and GraphQL centre their underlying principles on the “design by contract” approach, and thought this was a great opportunity to test these technologies on one of our internal APIs.
In this article we aim to demonstrate how we managed to get a fully typed, end-to-end API, written in TypeScript, by generating types for either end of the application. We will also discuss how we can use these types on any front-end clients, the difficulties we encountered, any quirks we found, and how our experience has been so far. Lastly, we’ll look into the pros and cons of having chosen TypeScript.
Before reading any further, make sure you understand the basics of gRPC, TypeScript, and GraphQL; the official documentation for each is a good starting point.
The first step was deciding upon the structure of our application. Two major factors that influenced our architecture were the ability to easily:
- Swap RPC clients for REST ones without modifying resolvers (controllers), services, or any related models/transformers.
- Swap GraphQL for a simple REST API without modifying our clients, services, or any related models/transformers.
We ended up following the service layer pattern, in order to have the most protocol-agnostic approach possible:
Each of the layers displayed above consists of its own models, transformers, and errors, where two-way model transformation is performed in the parent layer, allowing each layer to be consumer-agnostic. The decision to have a model representation on each layer stemmed from the need to correctly separate concerns, making the code easier to maintain, more testable, and reusable. The cost of doing so is verbosity, a small price to pay for what we consider a big win: if we ever wanted our client layer to live in a separate repo, to be consumed by another project, this would be extremely simple to do.
Considering the diagram above, the directory structure was laid out to mirror each layer.
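The original layout isn’t reproduced in this version of the article; as a rough, illustrative sketch (directory names are our own assumptions, not Attest’s actual tree), a layout mirroring the three layers might look like:

```
src/
├── api/            # GraphQL layer: schema, resolvers, API models
│   ├── schema/
│   ├── resolvers/
│   └── models/
├── services/       # core logic: service models, transformers, errors
│   ├── models/
│   └── transformers/
└── clients/        # gRPC / HTTP clients: client models, transformers
    ├── grpc/
    └── http/
```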
The API layer (GraphQL) is in charge of both defining endpoints and routing them to their appropriate resolvers (controllers). We use Express as a base for our application and add Apollo Server as middleware to aid with the routing and interpretation of our GraphQL schema. The main reasons we picked Apollo Server were down to developer tooling:
- As a consumer of the API, mocking one or more responses is made unbelievably easy. It means that the classic “should we create E2E or Integration tests” debate is laid to rest before it even begins. Integration tests become the norm as a simple boolean flag mocks API responses for you.
- GraphQL Playground comes built in which makes it easy to execute queries and mutations against the API. Think of this as Postman (or a REST client) built into an endpoint that is aware of the definition of the API, both validating and assisting with request syntax before you even try and execute it.
The schema directory holds the definition for all of our API models and endpoints, and is defined following the schema model definition described in the Apollo docs.
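As an illustrative example (the types here are hypothetical, not Attest’s actual schema), a module in that directory might define:

```graphql
type Chicken {
  id: ID!
  name: String!
}

type Query {
  chicken(id: ID!): Chicken
}
```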
Our service layer holds the core logic used within our application. This layer (consumed by the resolvers defined in the API layer) communicates with different clients, where it gathers and sends relevant data between the clients and resolvers.
This layer is vital, as it communicates and orchestrates multiple clients, while isolating the details of the clients APIs from the resolvers. This allows the resolvers to be as simple as possible, the clients to focus on their downstream micro-services, and the service layer to stitch it all together — meaning that replacing a gRPC client for an HTTP one does not have an effect on the service layer, and replacing the logic of a service method leaves the resolver untouched.
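To make this concrete, here is a minimal, hypothetical sketch (the `Chicken` domain and all names are ours for illustration, not Attest’s actual code) of a service that depends only on a client interface, so the underlying transport can be swapped without touching the service:

```typescript
// Client-layer model, as returned by a gRPC or HTTP client.
interface ChickenClientModel {
  chicken_id: string;
  chicken_name: string;
}

// Transport-agnostic interface the service depends on; a gRPC or
// REST implementation can be substituted without changing the service.
interface ChickenClient {
  getChicken(id: string): Promise<ChickenClientModel>;
}

// Service-layer model, decoupled from the wire format.
interface Chicken {
  id: string;
  name: string;
}

class ChickenService {
  constructor(private readonly client: ChickenClient) {}

  async getChicken(id: string): Promise<Chicken> {
    const raw = await this.client.getChicken(id);
    // Two-way model transformation happens here, in the parent layer.
    return { id: raw.chicken_id, name: raw.chicken_name };
  }
}
```

Because `ChickenService` only sees the `ChickenClient` interface, replacing a gRPC client with an HTTP one is invisible to the resolvers above it.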
The client layer is used to communicate with different gRPC micro-services or external HTTP endpoints. Axios is used to communicate over HTTP, and to communicate with the gRPC services we use a combination of generated gRPC clients and a bespoke abstraction using promises.
Apollo Server enables you to add middleware between each request and send through any context you may need alongside your requests. We leverage this layer to send JSON Web Tokens (JWT) from the front end down to our micro-services.
Once we hit our client layer we convert this into gRPC metadata so we can pass through request context to gRPC micro-services, where we can handle authorisation.
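A rough sketch of the idea (the gRPC `Metadata` type is stubbed here as a plain key/value map, and all names are illustrative):

```typescript
// Shape of the per-request context Apollo builds (simplified).
interface RequestContext {
  token?: string;
}

// Pull the bearer token out of the incoming HTTP headers, as an
// Apollo `context` function would.
function buildContext(headers: Record<string, string | undefined>): RequestContext {
  const auth = headers["authorization"];
  return { token: auth?.replace(/^Bearer /, "") };
}

// In the client layer, convert the context into gRPC metadata
// (represented here as a plain map) before calling a micro-service.
function toGrpcMetadata(ctx: RequestContext): Record<string, string> {
  const metadata: Record<string, string> = {};
  if (ctx.token) {
    metadata["authorization"] = `Bearer ${ctx.token}`;
  }
  return metadata;
}
```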
Automation via generation
Once we had fully fleshed out the architecture of the system, we needed to guarantee that both ends of the app were typed. This would mean that any changes to our .proto files, or to any files containing gql tags, would result in compile-time errors as opposed to runtime errors.
Generating TypeScript types from a GraphQL schema
One of the most powerful features offered by GraphQL is introspection: the ability to understand the structure of an entire API through a single query. The kind people at Apollo provide a library called graphql-tools which, given an endpoint, converts a schema into an equivalent JSON interpretation.
The downside of the tools above is that they rely on a running instance of the server. It’s annoying to have to remember to start the server every time we want to regenerate the types, so we created a (slightly primitive) bash script to “automate” the process:
- Spin up a server in a background process.
- Introspect the schema and generate the schema.json file.
- Kill the server.
- Generate TypeScript definitions from the schema.json file.
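The original script isn’t reproduced here; a minimal sketch following those steps (commands, ports, and paths are illustrative assumptions, not the actual `tools/generate-gql.sh`) might look like:

```shell
#!/usr/bin/env bash
set -e

# 1. Spin up the server in a background process.
npm start &
SERVER_PID=$!
sleep 5  # crude wait for the server to come up

# 2. Introspect the schema and write schema.json.
npx apollo-codegen introspect-schema http://localhost:4000/graphql \
  --output schema.json

# 3. Kill the server.
kill "$SERVER_PID"

# 4. Generate TypeScript definitions from schema.json.
npx apollo-codegen generate "src/**/*.ts" --schema schema.json \
  --target typescript --output schema.d.ts
```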
Using the simple script above, TypeScript types are generated for us, meaning that any change to a query or to the schema is reflected in the generated types.
After defining our query, running tools/generate-gql.sh will generate a schema.d.ts file for us.
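The generated file isn’t shown in this version of the article; as a hypothetical example, for a `chicken(id: ID!)` query the generated schema.d.ts might contain something like:

```typescript
// Hypothetical generated output, not hand-written code.
export interface Chicken {
  id: string;
  name: string;
}

export interface QueryChickenArgs {
  id: string;
}

export interface Query {
  chicken: Chicken | null;
}
```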
The above types mean that we can even type our resolvers.
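A sketch of what a typed resolver looks like (the types and resolver body are illustrative, standing in for the generated ones):

```typescript
// Generated-style types (illustrative).
interface Chicken {
  id: string;
  name: string;
}

interface QueryChickenArgs {
  id: string;
}

// A resolver typed against the generated definitions: changing the
// schema without updating this signature becomes a compile-time error.
type ChickenResolver = (
  parent: unknown,
  args: QueryChickenArgs
) => Promise<Chicken | null>;

const chicken: ChickenResolver = async (_parent, { id }) => {
  // In the real app this would delegate to the service layer.
  return { id, name: "Henrietta" };
};
```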
Using schema.json on the front end
The schema.json file goes beyond generating back-end types: it can be used by any client that consumes the API to generate its own types. This means that when we change our API, our client (a front-end app which is also written in TypeScript) is able to download the schema, generate types, and have any potential errors show up at compile time: a win for both API and front-end development.
Comparison of GraphQL TypeScript generation tools
We found two great tools that allowed for typed, generated code.
We decided to use GraphQL Code Generator as it generates resolver types, as well as Query and Mutation structures, meaning there’s no way to change an existing query or mutation without having to modify its accompanying resolver.
Generating gRPC clients and types from proto files
Whilst we considered and tested using dynamically generated code at run time for our gRPC clients, statically generated code provided typing and all the advantages of compile-time checking.
When you have protobuf API definition files that are shared across different services, it’s hard to distribute these files and keep your API consumption and implementation up to date. This problem is compounded when your consumers and producers are separated across multiple git repos. To fix this, we have a proto repo that’s installed as a git submodule.
```
│ ├── chicken_service/
│ │ ├── model/
│ │ │ └── chicken.proto
│ │ └── chicken_service.proto
│ └── ...
```
Models are separated from service and request definitions to allow for re-usability, and each service has its own directory so we can target code generation on a per-service basis.
Quick note: We’re always wary of depending on libraries with limited support, but agreatfool has done an amazing job — we’ve had very few issues using this library.
We run a generation script whenever changes are made to our proto files; doing so generates the gRPC client code along with its TypeScript definitions.
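The script itself isn’t reproduced here; a hedged sketch of what such a step looks like with grpc_tools_node_protoc and the grpc_tools_node_protoc_ts plugin (paths and globs are illustrative):

```shell
#!/usr/bin/env bash
set -e

PROTO_DIR=./proto
OUT_DIR=./src/clients/grpc/generated

# Generate the JavaScript messages and gRPC client stubs.
npx grpc_tools_node_protoc \
  --js_out=import_style=commonjs,binary:${OUT_DIR} \
  --grpc_out=${OUT_DIR} \
  --plugin=protoc-gen-grpc=$(which grpc_tools_node_protoc_plugin) \
  -I ${PROTO_DIR} \
  ${PROTO_DIR}/chicken_service/**/*.proto

# Generate the matching TypeScript definition (.d.ts) files.
npx grpc_tools_node_protoc \
  --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  --ts_out=${OUT_DIR} \
  -I ${PROTO_DIR} \
  ${PROTO_DIR}/chicken_service/**/*.proto
```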
These generated clients can then be used to communicate with our downstream services.
Comparison of gRPC TypeScript generation tools
We found two tools that allowed for typed, generated code:
- Improbable’s grpc-web: A tool built to communicate with services implementing gRPC over the Web from a front-end client by creating a gateway proxy between the client and server.
- grpc_tools_node_protoc_ts: A wrapper around grpc_tools_node_protoc that generates the corresponding TypeScript definition (.d.ts) files.
We tried grpc-web first, as it’s built by a well-known company and has multiple contributors, even though it is meant for the Web. There was one issue with this:
This package supports Node.js, but requires that the server has the gRPC-Web compatibility layer.
Due to the large amount of work needed, as well as it forcing a dependency on each of our producers, we weren’t willing to implement this compatibility layer.
Setback: Error handling in gRPC
gRPC offers a number of predefined status codes that a producer can return. Errors such as GRPC_STATUS_NOT_FOUND, however, are too vague to get any concrete value out of; we need to know what is not found and why it’s not found.
On our producers, we set this using the details property (think metadata for errors) and add something more explicit.
At the time this article was written, when using the Node.js library, one of these properties was overwritten by the other, meaning we were unable to get explicit error types without a few hacks.
Another quirk is that the generated TypeScript typing sets the error type to any, meaning we need to explicitly typecast the error.
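A sketch of the workaround (the `ServiceError` shape below is a local, illustrative mirror of the one in the grpc Node.js library):

```typescript
// Minimal local mirror of the grpc library's ServiceError shape.
interface ServiceError extends Error {
  code?: number;
  details?: string;
}

// The generated client types the callback error as `any`, so we
// typecast before reading the details the producer set.
function explainError(err: any): string {
  const serviceError = err as ServiceError;
  return serviceError.details ?? serviceError.message;
}
```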
Using TypeScript to build an API
Using TypeScript over a more conventional backend language, such as Golang, Java, or Rust, came with its share of disadvantages (predominantly in the gRPC communication layer); however, it also had a couple of major benefits.
- Apollo Server, which allowed us to generate the precious schema.json file, in turn meant that our API endpoints on the front end were all typed. This is an extremely strong tool that, when integrated on both the API and the front end, can benefit both teams immensely.
- The project was started with a small team consisting of two front-end and two back-end engineers, which meant that anyone could jump onto this internal repository when we needed the extra hands.
- A language such as Golang or Rust could have been more performant, and our backend engineers would have been more comfortable writing in one of them.