GraphQL Server: After ‘Hello, World’

Part 1 of 2: Schemas & Core Concepts

Justin Mandzik
6 min read · Jun 14, 2017

So you’ve kicked the tires and decided GraphQL is a good approach for your project. Now you’re staring at an empty editor window, pondering how to structure your code. The GraphQL docs are a great resource for whetting the appetite, but they don’t (and shouldn’t) offer much project-specific guidance. The Apollo documentation, while library-specific, offers a wealth of information and some solid strategies. What follows is some opinionated commentary on successes and failures from the first 4 large projects I’ve built with Node.js.

https://wehavefaces.net/graphql-shorthand-notation-cheatsheet-17cd715861b6

Use the Schema Definition Language

Many GraphQL libraries (like graphql-js) expose a language-specific API to construct your schema. My advice is not to use it. Here’s why:

Terseness

Say you need to describe a non-null list of non-null IDs as input for some mutation. You could define it as:

ids: new GraphQLNonNull(
  new GraphQLList(
    new GraphQLNonNull(
      GraphQLID
    )
  )
)

Or you could use the DSL:

ids: [ID!]!

Portability

If you are in a team environment, chances are different layers of the stack choose different technologies. Being able to directly consume type definitions regardless of runtime is a powerful abstraction. As adoption increases, the door is open for repositories of common type definitions.

Declarative-ness

Using the language-specific API to construct your schema offers the ability to generate your schema programmatically. Unfortunately, leveraging this means giving up some of the declarative nature of a schema. Writing code to produce a schema imperatively tends to be “sticky”, in that code which depends on dynamic type generation tends to also need to be dynamic. It becomes difficult to extract static definitions later, and you can quickly run into chicken-and-egg scenarios. More code means more vectors for bugs and a larger surface area for testing. The DSL lets heavily tested libraries worry about how your types form a spec-compliant schema, leaving you to simply state what your data model looks like.

The language-specific API (and many of the tutorial posts) encourages co-locating type definitions with the resolve functions that satisfy them. While that tight coupling can lower some of the cognitive overhead of learning GraphQL, I found that a clear separation of schema and business logic encouraged better-quality code. The Apollo graphql-tools package is a well-written and tested library that embraces this approach.
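As a minimal sketch of that separation (the Cat type, catModel, and findById are hypothetical names; the wiring assumes graphql-tools):

const { makeExecutableSchema } = require('graphql-tools');

// The contract lives in plain SDL, away from any business logic
const typeDefs = `
  type Cat {
    id: ID!
    name: String
  }

  type Query {
    cat(id: ID!): Cat
  }
`;

// Resolvers only bridge the contract to models provided via context
const resolvers = {
  Query: {
    cat: (root, { id }, context) => context.catModel.findById(id),
  },
};

const schema = makeExecutableSchema({ typeDefs, resolvers });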

Separate Contract Fulfillment, Business Logic, and Transport Mechanics

How to organize the parts of your API that do the Real Work® can be highly subjective, but I got a lot of mileage out of a variation on a pattern recommended by the Apollo team. Resolvers (contract fulfillment) leverage Models (business logic), which leverage Connectors (transport/caching). Their GitHunt example project organizes its models around a data source (the DB, the GitHub API); historically, this hasn’t been a great fit for my projects. I find that the majority of the time my core types have fields that span data sources, so my models are built to describe the types themselves. One or more connectors are passed into models on instantiation.
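A rough sketch of that layering, with hypothetical CatModel, PostgresConnector, and GitHubConnector names:

// Connectors own transport; models receive them on instantiation
class CatModel {
  constructor({ db, github }) {
    this.db = db;         // e.g. a PostgresConnector instance
    this.github = github; // e.g. a GitHubConnector instance
  }

  // A single model method can span multiple data sources
  async findById(id) {
    const cat = await this.db.findOne('cats', id);
    cat.stars = await this.github.starsFor(cat.repo);
    return cat;
  }
}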

Resolvers

I recommend keeping resolvers as ‘thin’ as possible. They are the outside edge of your API, directly responsible for fulfilling the contract (schema). As time goes on, you’ll inevitably find yourself supporting deprecated fields. Many GraphQL examples show doing [async] work directly in the resolvers: this is a pattern that won’t scale as your codebase does. Try to limit work done in the resolvers to light transformations at most, deferring real work to instantiated models passed in via context.

Keep resolvers thin. Resolver composition and organization is covered in Part 2
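As a sketch of what ‘thin’ looks like in practice (the deprecated nickname field and the catModel in context are hypothetical):

const resolvers = {
  Query: {
    // Defer the real work to a model instance passed in via context
    cat: (root, { id }, context) => context.catModel.findById(id),
  },
  Cat: {
    // Light transformation at most: alias a deprecated field
    nickname: cat => cat.name,
  },
};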

Models

I cannot recommend strongly enough that you isolate your business logic into agnostic models; that is, model instances should have no idea they are being used in a GraphQL implementation. Encapsulating the business logic in dedicated models provides an extraordinary opportunity to keep code DRY. Many projects don’t have the luxury of green-field API development; often, the road to adoption is adding a GraphQL endpoint to an existing RESTful API. Having agnostic libraries that don’t know whether they’re servicing REST, SOAP, or GraphQL calls makes for great re-use and clean testing boundaries. If you punt network/transport logic to Connectors, Models will be easy to test by passing in fake database handles and mocked downstream API calls.
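For example, the hypothetical CatModel sketched above can be exercised with no server and no database in sight:

// No GraphQL, no network: fakes stand in for the connectors
const fakeDb = {
  findOne: async (table, id) => ({ id, name: 'Whiskers', repo: 'cats/whiskers' }),
};
const fakeGithub = { starsFor: async repo => 42 };

const model = new CatModel({ db: fakeDb, github: fakeGithub });
model.findById('1').then(cat => {
  console.assert(cat.name === 'Whiskers' && cat.stars === 42);
});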

Connectors

I try to keep transport mechanics and intra-request caching needs encapsulated in a Connector. Instantiated Connectors can be passed to Model constructors to give them a handle to the outside world. Whereas a Model method might contain logic like select * from db where type = 'cats', your Connector is what’s going to manage database handles, connection pooling, and data loaders.
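A sketch along those lines, assuming the dataloader and pg packages (the PostgresConnector name and the pets table are hypothetical):

const DataLoader = require('dataloader');

// The pg pool is created once at server start and shared; the
// connector (and its loaders) is instantiated per request.
class PostgresConnector {
  constructor(pool) {
    this.pool = pool;

    // Batches and caches id lookups for the life of one request
    this.petLoader = new DataLoader(async ids => {
      const { rows } = await this.pool.query(
        'SELECT * FROM pets WHERE id = ANY($1)',
        [ids]
      );
      // DataLoader expects results in the same order as the keys
      return ids.map(id => rows.find(row => row.id === id));
    });
  }

  // Sketch only: a real connector would keep a loader per table
  findOne(table, id) {
    return this.petLoader.load(id);
  }
}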

Gluing it all together in Middleware

There are GraphQL middleware bindings for most of the prominent Node web server frameworks. Stateful connections to the outside world (databases, message brokers, etc.) should be set up on server start. Per-request data (user, auth, etc.) can be passed along in context as needed. Resolver functions have access to a shared context over the lifecycle of a request via their third argument.
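A sketch of the wiring, assuming Express with the apollo-server-express 1.x bindings current at the time (schema, CatModel, PostgresConnector, and GitHubConnector carry over from the earlier hypothetical sketches):

const express = require('express');
const bodyParser = require('body-parser');
const { graphqlExpress } = require('apollo-server-express');
const { Pool } = require('pg');

const pool = new Pool(); // stateful: created once, at server start

const app = express();
app.use('/graphql', bodyParser.json(), graphqlExpress(req => ({
  schema,
  // Fresh per-request context: user data plus newly instantiated
  // models, so intra-request caches never leak across requests
  context: {
    user: req.user,
    catModel: new CatModel({
      db: new PostgresConnector(pool),
      github: new GitHubConnector(),
    }),
  },
})));

app.listen(3000);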

Organizing your Schema

As my projects grow, I like to organize my schema definition into directories that match the core entities being modeled. At server start, these files are easily globbed into an array of type definitions and handed over to Apollo to generate an executable schema (see the sketch after the directory listing below). Order of declaration doesn’t matter here, so the glob approach allows for frictionless re-organization.

➜  petstore
└── schema
    ├── Cat
    │   ├── mutations.gql
    │   ├── query.gql
    │   └── type.gql
    ├── Dog
    │   ├── mutations.gql
    │   ├── query.gql
    │   └── type.gql
    ├── common
    │   ├── directives.gql
    │   ├── enum.gql
    │   └── pagination.gql
    └── root.gql
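A minimal sketch of that start-up glob, assuming the glob package alongside graphql-tools (the resolvers module is hypothetical; resolver organization is covered in Part 2):

const fs = require('fs');
const glob = require('glob');
const { makeExecutableSchema } = require('graphql-tools');

const resolvers = require('./resolvers'); // hypothetical module

// Order of declaration doesn't matter, so a simple glob keeps
// re-organization frictionless
const typeDefs = glob
  .sync(`${__dirname}/schema/**/*.gql`)
  .map(file => fs.readFileSync(file, 'utf8'));

const schema = makeExecutableSchema({ typeDefs, resolvers });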

Alternatively, schema definitions can be colocated with resolvers and models, effectively using directory structure as namespacing. Some find that this approach fits the React mental model well. Personally, I like to enshrine schemas as a Sacred Contract, apart from and above any implementation.

The Lesser-Known ‘Extend’ Keyword

Definitions are in plain text, with IDE-friendly file extensions to accommodate syntax highlighting. I use this extension for Visual Studio Code, but most extensible editors have plugins by now. The extend keyword lets you build upon definitions declared elsewhere in the project, so you aren’t forced into ‘bubbling up’ query/mutation declarations to a single spot. While I personally prefer this approach, a single Query definition serving as a catalog of endpoints is arguably easier to reason about. In my larger projects (40+ entity type definitions), the extend approach becomes a diff-friendly, fractal pattern to follow.
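For instance, a hypothetical Cat/query.gql can extend a root Query declared in root.gql (the placeholder field is a common workaround, since a type with no fields isn’t valid SDL):

# root.gql
type Query {
  # placeholder; entity files extend this type
  _empty: String
}

# Cat/query.gql
extend type Query {
  cat(id: ID!): Cat
  cats(limit: Int = 10): [Cat!]!
}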

Feel free to check out part 2 covering resolver composition and caching concepts.
