
Easy and readable React/GraphQL/Node Stack — Part 1

Zac Tolley
Published in Scropt · 11 min read · Apr 24, 2019

Web development has moved at a rapid pace over the past .. well .. 20 years. Over that time technologies have come and gone, but the core approach has not fundamentally changed. You store some structured data, get to it via a JSON API layer that contains a bunch of logic, then do more data manipulation in the browser or a server rendering layer, and finally produce some HTML. The way we do this has changed but the fundamentals, not so much. This article describes a simpler stack that still starts with data at one end and HTML at the other, but reduces how many times you transform that data and lets you write simpler code.

Towards the end of 2018 the React team announced ‘Hooks’. Most examples show how to use hooks to provide local state so that stateful components can be written as function components instead of classes. There are, however, some projects taking advantage of hooks for other things, and one of them is ‘react-apollo-hooks’. Apollo Client is the most popular library for integrating GraphQL into React. Apollo lets you express your GraphQL queries and then use them to query or update data. Originally Apollo used Higher Order Components to associate a query and its results with a component; more recently it introduced Query and Mutation components that let you include a query and its results in your JSX using render props. Neither solution is ideal: HOCs can get messy when you need multiple integrations or queries, and render props put logic and integration in render markup, which is not where it should be. Apollo plans to introduce native support for hooks at a later date, but for now ‘react-apollo-hooks’ has proved to be very popular and reliable.

Using the hook-based syntax, a developer can call a GraphQL query and get access to the results and query status in a very simple, easy-to-use form.

`const { data, error, loading } = useQuery(GET_DOGS)`

For mutations you call ‘useMutation’ with the mutation query and options defining the variables for the query and any additional behaviour. This syntax is extremely simple. Once you start using it you wonder why it was ever done any other way.
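To make the calling shape concrete, here is an illustration-only stand-in for the hook. This is not react-apollo-hooks itself (whose exact return shape has varied between versions); the mock just records what it was called with so you can see the contract the paragraph describes: a query document plus an options object carrying the variables, returning a function you invoke to run the mutation. ADD_TODO is a hypothetical mutation document.

```javascript
// Hypothetical mutation document; in a real app this would be a gql-tagged template.
const ADD_TODO =
  'mutation AddTodo($title: String!) { addTodo(title: $title) { id title } }'

// Illustration-only stand-in for the hook: the real one wires the mutation
// into Apollo Client, this one just echoes back what it was called with.
function useMutation(mutation, options = {}) {
  return () => ({ mutation, variables: options.variables })
}

const addTodo = useMutation(ADD_TODO, { variables: { title: 'Walk the dog' } })

// Later, e.g. in an onClick handler:
const result = addTodo()
console.log(result.variables.title)
```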

The final piece in the front-end stack is TypeScript and a library called ‘graphql-code-generator’. TypeScript's popularity is growing and growing, and it promises some key benefits to the developer: it can help you spot faults in your code and removes the need to keep a mental model of an application's data structures in your head. In the past it has been difficult to implement. Before TypeScript support was added to Create React App it was challenging to create a build and assembly config, not made easier by the criminally bad and out-of-date documentation. Honestly Microsoft.. WHY?! I have previously attempted to create a TypeScript-based Redux project. There is no clear or common pattern for this, and the amount of extra code you have to write means it is not cost effective or easy to maintain, which defeats the point. With React hooks and libraries like ‘graphql-code-generator’ this all goes away. Using hooks for state and integration produces simpler code that reads more like plain JavaScript. The duplication of defining data schemas and then defining ‘Type’ definitions for that data goes away, as the type definitions are automatically derived from the schema and queries. As a result you can use TypeScript in your React application to actually reduce the amount of code you have to write, and get type checking with smart autocomplete for free.

This series of articles will create a simple full stack ‘Todo’ application. You will be able to create tasks and also assign them to different projects; this is to demonstrate how to handle nested field resolvers.

The stack will store data in PostgreSQL, fetch and manage that data using Knex, and expose it via a GraphQL API using Apollo GraphQL server. On the client side we’ll have a React application calling the API using hooks generated by ‘graphql-code-generator’ (which uses Apollo Client), and render any forms using Formik.

Let’s start by getting the project set up.

Getting started

Before you start, ensure you have Node JS and Yarn installed; you can use NPM if you wish, but this walkthrough uses workspaces to simplify things a little. You will also need PostgreSQL. This article assumes we will run it with Docker (so install that). If you want a copy of the code you can grab it off Github.

Create a new folder for the project and then enter it using the command line. We need to create a package file to hold scripts and dependencies. Pick your favourite editor and create package.json with:

{
  "name": "stack",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "private": true,
  "workspaces": [
    "server"
  ]
}

Then create a folder named ‘server’ and create another package.json in there:

{
  "name": "server",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT"
}

You’ll notice the top level package file defines workspaces. This is a feature Yarn provides for monorepos, or projects that contain multiple sub-projects/packages. Why are we using it here? With workspaces you can run yarn install at the top level and it will install both server and client dependencies, and if the server and client share any dependencies they will only be downloaded once.

Server dependencies

The server side of the stack uses Express JS, Apollo GraphQL server and Knex, and talks to a PostgreSQL database. From the server folder, type the following to bring in the required dependencies:

yarn add \
apollo-server-express \
express \
graphql \
graphql-import \
knex \
pg \
uuid
yarn add --dev dotenv nodemon

The database

If you already know what you are doing with PostgreSQL then you need to make sure there is a PostgreSQL database available with a user account associated with it that has access rights to create new tables and modify data.

If you don’t know PostgreSQL or you don’t have a database to hand then for this article we are going to use Docker to let us start up a database with a user name and password all done for us.

If you haven’t installed Docker on your machine, go install it now. Once you are ready create a PostgreSQL container for this project and run it on a slightly non-standard port to make sure it doesn’t clash with anything already installed on the machine. From the command line enter:

docker run -d \
  --name stack_postgres \
  -e "POSTGRES_PASSWORD=password" \
  -e "POSTGRES_USER=stack" \
  -e "POSTGRES_DB=stack" \
  -p 54320:5432 postgres

Database schema

There are a number of tools available for managing databases from JavaScript: the new entrant in the market is Prisma, the most popular is Sequelize, and then there is Knex. Although Sequelize is currently the most popular, it is arguably not the simplest to use. Prisma is new and promises type checking; a future article will investigate whether it makes an end-to-end TypeScript solution feasible. Knex is extremely simple to use and has excellent tools for managing database schemas. This article uses Knex.

The first thing to do is configure how to connect to the database, and following good practice we’ll do this via environment variables.

You need to create a ‘knexfile’ to configure knex and a place to instantiate it.

# server/knexfile.js
const path = require('path')

module.exports = {
  client: 'pg',
  connection: process.env.DATABASE_URL || {
    host: '127.0.0.1',
    port: '54320',
    user: 'stack',
    password: 'password',
    database: 'stack',
  },
  migrations: {
    directory: path.join(__dirname, 'src', 'db', 'migrations')
  },
  seeds: {
    directory: path.join(__dirname, 'src', 'db', 'seeds')
  }
}

# server/src/db/knex.js
const config = require('../../knexfile')

module.exports = require('knex')(config)

You can set the connection details in an environment variable manually if you wish but for this project we are going to use dotenv to load the values from a text file.

# server/.env
DATABASE_URL=postgres://stack:password@localhost:54320/stack

Finally we can define a simple database table structure. Knex comes with a number of handy commands, so let’s use one to create a migration:

yarn knex migrate:make todo

This will create a placeholder migration file under server/src/db/migrations, into which you can define your table structure. The Knex site has much more information on how to do this, but for now try:

exports.up = function(knex) {
  return knex.schema.createTable('todo', todo => {
    todo
      .uuid('id')
      .notNullable()
      .primary()
    todo
      .string('title')
      .notNullable()
    todo
      .boolean('complete')
  })
}

exports.down = function(knex) {
  return knex.schema.dropTable('todo')
}

And let’s put some test data in there too, just to be sure:

# server/src/db/seeds/todos.js
const uuid = require('uuid')

async function clear(knex) {
  await knex('todo').del()
}

async function seed(knex) {
  await clear(knex)
  await knex('todo').insert({
    id: uuid(),
    title: 'Test action one',
    complete: false
  })
  await knex('todo').insert({
    id: uuid(),
    title: 'Test action two',
    complete: false
  })
}

module.exports = { clear, seed }

Let’s see if it worked:

yarn knex migrate:latest
yarn knex seed:run

So.. did it work? You should get some clue from the output ;) Take a break, make some coffee and maybe commit your code.

GraphQL — Define the API

GraphQL is very different to JSON based APIs. In a JSON API you tend to create endpoints/URLs tied to an entity, use HTTP methods to describe what type of action you wish to take, and the server defines what it returns. If the client needs a different field or a new sub-entity, either a change must be made to the backend to support it or the frontend must make multiple calls. This means the API needs constant modification, and changes to the API can break the frontend. The front and back ends are tightly coupled and, as a result, fragile.

GraphQL is different. In a GraphQL API you describe your data in a schema. A schema describes types, queries and mutations. A type typically relates to an entity such as a contact, customer or company; in the case of this example our first type is a todo. Queries describe how we access types. Mutations describe how types can be changed.

For our project we’ll create this schema. You can store it in a .js file that parses a string, or you can save it as a text file and parse it later. This project does the latter.

# server/src/graphql/schema.graphql
type Todo {
  id: ID!
  title: String!
  complete: Boolean!
}

type Query {
  todos: [Todo]
  todo(id: ID!): Todo
}

type Mutation {
  addTodo(title: String!): Todo!
  updateTodo(id: ID!, title: String!, complete: Boolean!): Todo!
}

The schema defines an object type ‘Todo’ that has 3 required properties. The schema describes how a client can view all todos or a single todo when the id is provided. Furthermore a client can add a new todo or update an existing one.

When a client accesses data it tells the server what query it wishes to use and describes the data it wants back.

query {
  todos {
    id
    title
    complete
  }
}

In this case the client tells the server it is issuing a request for a query named todos, and it wants it to return the id, title and complete properties.
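The server replies with JSON that mirrors the shape of the query, wrapped in a top-level data key. For the seed data above, the response would look something like this (the ids shown are illustrative, yours will be different uuids):

```javascript
// Shape of the server's reply to the todos query above.
// The ids here are placeholders; real ones are generated by uuid().
const response = {
  data: {
    todos: [
      { id: 'a1b2c3d4-0000-0000-0000-000000000001', title: 'Test action one', complete: false },
      { id: 'a1b2c3d4-0000-0000-0000-000000000002', title: 'Test action two', complete: false },
    ],
  },
}

console.log(JSON.stringify(response, null, 2))
```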

So how do we handle this? We create an Apollo server, tell it where our schema is and how we want to resolve requests. In this example we’ll work backwards: we’ll write a couple of functions that fetch the collection of todos and find a single todo.

# server/src/graphql/Query/todos.js
const knex = require('../../db/knex')

const todos = () => knex('todo')

module.exports = todos

# server/src/graphql/Query/todo.js
const knex = require('../../db/knex')

const todo = (_, { id }) =>
  knex('todo')
    .where({ id })
    .first()

module.exports = todo

# server/src/graphql/Query/index.js
const todo = require('./todo')
const todos = require('./todos')

const Query = {
  todo,
  todos,
}

module.exports = Query

We create a Query folder because this code is associated with the Query part of our schema, with a file per query handler to keep the code neat and easy to maintain when several developers work on the same project. Finally we tie it together in an index file.

As you can see it’s just regular JavaScript and, thanks to Knex, incredibly simple. Look at the todos handler: you can almost miss the line that does the work. It helps that Apollo knows how to work with promises, so it waits for Knex to fetch all the todo data and returns the result. The single todo handler isn’t much more complex.
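The promise point is worth seeing in isolation. A resolver can return a promise and Apollo will wait for it before building the response. Here is a toy sketch with the database swapped for an in-memory array (fakeDb and demo are inventions of this sketch, not part of the project code):

```javascript
// fakeDb stands in for the PostgreSQL table; a knex('todo') call would
// return a promise of rows in just the same way.
const fakeDb = [
  { id: '1', title: 'Test action one', complete: false },
  { id: '2', title: 'Test action two', complete: false },
]

// Resolvers with the same signatures as the real ones above,
// each returning a promise that Apollo would await.
const todos = () => Promise.resolve(fakeDb)
const todo = (_, { id }) => Promise.resolve(fakeDb.find(t => t.id === id))

// This mirrors how Apollo invokes them under the hood.
async function demo() {
  const all = await todos()
  const one = await todo(undefined, { id: '2' })
  console.log(all.length, one.title)
}

demo()
```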

Stepping up a level, we need to define how to resolve all requests, not just a Query. So we add:

# server/src/graphql/index.js
const Query = require('./Query')

const resolvers = { Query }

module.exports = { resolvers }

Finally, the tricky bit: we want to build a GraphQL server and integrate it with Express JS so we can serve up static assets, such as a compiled React application.

# server/src/index.js
require('dotenv').config()

const express = require('express')
const { ApolloServer } = require('apollo-server-express')
const { importSchema } = require('graphql-import')
const { resolvers } = require('./graphql')

const typeDefs = importSchema('./src/graphql/schema.graphql')

const server = new ApolloServer({ typeDefs, resolvers })
const app = express()
server.applyMiddleware({ app })
app.use('/', express.static('public'))

const port = process.env.SERVER_PORT || 4000
app.listen(port, () => console.log(`App listening on port ${port}!`))

The first line ensures we get our database connection information from the .env file, or the environment. After that we grab a few dependencies, including our code. The schema is parsed into something Apollo can understand and we create an instance of Apollo server that is attached to an Express JS server, which by default exposes the Apollo server under ‘/graphql’.

Apollo server reads the resolver object structure and follows a simple convention that maps the schema to that resolver object and its associated functions. So when we make a request for the ‘todos’ query, it looks for a resolver/handler in ‘resolvers.Query.todos’ and looks in the schema file for Query.todos to gather information about the function and its associated types. This not only ensures that the fields requested and their associated types are valid, but also allows GraphQL tools to autosuggest properties, names and types to help the user build a query. It will also come in handy later when writing the client-side code, as it will be used to create type checking information.
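That convention is easy to picture as a plain property lookup: the operation type and field name from the request index into the resolvers object. A stripped-down sketch (the dispatch function is an invention of this sketch; Apollo's real execution also handles arguments, context and nested fields):

```javascript
// The same shape as the resolvers object built above.
const resolvers = {
  Query: {
    todos: () => [{ id: '1', title: 'Test action one', complete: false }],
  },
}

// Toy version of Apollo's convention: resolvers[operationType][fieldName]
// locates the handler, which is then called with the request's arguments.
function dispatch(operationType, fieldName, args) {
  const handler = resolvers[operationType][fieldName]
  return handler(undefined, args)
}

const result = dispatch('Query', 'todos', {})
console.log(result)
```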

So the moment of truth…

node src/index.js

You should get a message telling you the server started. Fire up a browser and go to http://localhost:4000/graphql. Apollo gives you GraphQL Playground for free so you can explore the schema and send requests. Paste the query from above into the left window and press the big play button.

Well Done!

You now have a working GraphQL server talking to PostgreSQL. I hope this demonstrates how easy it is and how little code is needed. Like all things, this is a simplified example and there’s more to it: you still need to support mutations, and we haven’t talked about property resolvers, which help you return nested/related information.

In the next part we’ll enhance the backend a little to explore mutations and nested properties and how they are resolved.

If you want a copy of the code it is available on Github.
