Implementing GraphQL in your Redux App

Ryan C. Collins
Published in React Weekly · Sep 15, 2016

This post aims to be a guide for implementing GraphQL into a Redux App. It goes over some of the realizations I made throughout the process of integrating it into my workflow. I expect that you have some amount of familiarity with the setup involved in React / Redux projects and will link to resources as I go.

While Redux at its core is a state-management library, your action creators often take care of loading data from your API. The approach I have mostly used is to implement thunks, along with several actions / action creators that communicate between the store and the UI during the data-loading process.

What this generally ends up looking like is the following:

import {
  LOAD_EMPLOYEE_DATA_INITIATION,
  LOAD_EMPLOYEE_DATA_SUCCESS,
  LOAD_EMPLOYEE_DATA_FAILURE,
} from './constants';

export const employeeUrl = 'http://0.0.0.0:1338/api/employees';

// loadEmployeeDataInitiation :: None -> {Action}
export const loadEmployeeDataInitiation = () => ({
  type: LOAD_EMPLOYEE_DATA_INITIATION,
});

// loadEmployeeDataSuccess :: Array -> {Action}
export const loadEmployeeDataSuccess = (data) => ({
  type: LOAD_EMPLOYEE_DATA_SUCCESS,
  data,
});

// loadEmployeeDataFailure :: Error -> {Action}
export const loadEmployeeDataFailure = (error) => ({
  type: LOAD_EMPLOYEE_DATA_FAILURE,
  error,
});

// loadEmployeeData :: None -> Thunk
export const loadEmployeeData = () =>
  (dispatch) => {
    dispatch(loadEmployeeDataInitiation());
    return fetch(employeeUrl)
      .then(res => res.json())
      .then(data => {
        dispatch(loadEmployeeDataSuccess(data));
      })
      .catch(error => {
        dispatch(loadEmployeeDataFailure(error));
      });
  };

The pattern here is to represent each stage of your asynchronous data flow by dispatching an action (via a thunk) as the request is initiated, succeeds, or fails.
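For completeness, here is a minimal sketch of the reducer side of that pattern. The state shape (isFetching / data / error) is my own assumption for illustration; Redux doesn't prescribe one.

// employeeReducer.js (illustrative sketch; the state shape is an assumption)
import {
  LOAD_EMPLOYEE_DATA_INITIATION,
  LOAD_EMPLOYEE_DATA_SUCCESS,
  LOAD_EMPLOYEE_DATA_FAILURE,
} from './constants';

const initialState = {
  isFetching: false,
  data: [],
  error: null,
};

// employees :: (State, Action) -> State
export const employees = (state = initialState, action = {}) => {
  switch (action.type) {
    case LOAD_EMPLOYEE_DATA_INITIATION:
      // a request is in flight; clear any previous error
      return { ...state, isFetching: true, error: null };
    case LOAD_EMPLOYEE_DATA_SUCCESS:
      return { ...state, isFetching: false, data: action.data };
    case LOAD_EMPLOYEE_DATA_FAILURE:
      return { ...state, isFetching: false, error: action.error };
    default:
      return state;
  }
};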

I actually really like this pattern. I think that it works very well for the majority of my data-fetching needs. So why, you ask, would I even want to use GraphQL?

About GraphQL

GraphQL is a declarative query language, created by Lee Byron of Facebook. The initial reason I wanted to learn GraphQL is that basically everything that comes out of Facebook is golden. Lee has established himself and Facebook as major thought leaders in the space, and I trust their judgement. He brought us ImmutableJS, an implementation of persistent immutable data structures in JavaScript, worked on the life-cycle methods in React, and has made several contributions to the TC39 committee. Anyways, Lee and his colleagues know their stuff.

The major accomplishment of GraphQL, in my opinion, is that it inverts control of the data layer, handing it from the server to the client. REST architecture commonly leaves control of the data layer with the server. It is common nowadays to build APIs in a micro-service-oriented manner, whereby the API is built in isolation from the client application and the two are connected via RESTful endpoints.

Typical RESTful Client Server Model

The problem with RESTful endpoints, especially at a large scale, is that the control of the data returned from custom endpoints generally lies in the hands of the server developer. We can fake some amount of control using query parameters, but ultimately the client-side developer is at the whim of the server.

As our front ends grow more and more complex, this model starts to show its weaknesses: over-fetching of data, multiple round trips per request, and misunderstandings of the data model. It also becomes hard to maintain RESTful APIs over time, because changes to the models require significant planning to ensure they don't break the system.

GraphQL aims to invert the control of data flow, giving control to the front end developer. It gives them the power to define their data needs, while also optimizing the entire experience, solving many of the problems of the custom endpoint approach. On top of that, the language is highly declarative and the process of expressing your data needs fits right into the client-side development flow that many of us have become accustomed to with React.

Back to the point

Now that we have a bit of the history behind us and a greater understanding of where GraphQL fits in, we can explore a bit about why we would want to integrate GraphQL into our Redux applications and how we would go about doing it.

GraphQL, although extremely powerful, is just one part of the picture. In order to integrate it into a React/Redux application, we could use some help. This is where Apollo and Relay fit into the picture. Both intend to bridge the gap and provide the missing implementation boilerplate that makes the integration a seamless experience. I am choosing to focus on Apollo, but I’d love to use Relay and will leave that to a future article.

Apollo

Integrating Apollo into a Redux application is very straightforward because Redux is baked into Apollo Client, the library we'll be using here. Before we get into integrating Apollo Client into our front-end workflow, let's go over the basics of setting up the server side.

Server Setup

With Node, Express, and a few GraphQL-related libraries, the server setup is fairly simple. In my example, I chose to forgo any database setup entirely and instead generated some random JSON and CSV data to use in its place.

Keep in mind that this is not an example of a production setup; rather, it is an example of how you can start messing around with GraphQL today.
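The schema below references a loadCsv helper whose implementation I've omitted for brevity. If you just want something to play with, a minimal sketch might look like the following; the file name and the idea that the CSV holds week_num / num_customers columns are purely illustrative assumptions.

// server/data/loadCsv.js (illustrative sketch, not production code)
import fs from 'fs';
import path from 'path';

// Parses a simple comma-separated file (header row plus data rows) into an
// array of objects, e.g. [{ week_num: 1, num_customers: 120 }, ...].
// The default file name here is an assumption for illustration.
export const loadCsv = (file = path.join(__dirname, 'customers.csv')) => {
  const [header, ...rows] = fs
    .readFileSync(file, 'utf8')
    .trim()
    .split('\n');
  const keys = header.split(',');
  return rows.map(row =>
    row.split(',').reduce((obj, value, i) => {
      // coerce numeric columns, leave everything else as strings
      obj[keys[i]] = Number.isNaN(Number(value)) ? value : Number(value);
      return obj;
    }, {})
  );
};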

// server/schema/schema.js
import {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLInt,
  GraphQLString,
  GraphQLList,
} from 'graphql';
import issuesJSON from '../data/issues.json';
import employeesJSON from '../data/employees.json';
import _ from 'lodash';
import fs from 'fs';
import path from 'path';

const store = {};

// Some helper methods to load CSV into JSON were removed for brevity's sake.
const customerJson = loadCsv();

const CustomerType = new GraphQLObjectType({
  name: 'Customer',
  fields: () => ({
    week_num: { type: GraphQLInt },
    num_customers: { type: GraphQLInt },
  }),
});

const PersonType = new GraphQLObjectType({
  name: 'Person',
  fields: () => ({
    name: { type: GraphQLString },
    avatar: { type: GraphQLString },
    company: { type: GraphQLString },
  }),
});

const EmployeeType = new GraphQLObjectType({
  name: 'Employee',
  fields: () => ({
    id: { type: GraphQLString },
    numemployees: { type: GraphQLInt },
    location: { type: GraphQLString },
  }),
});

const IssueType = new GraphQLObjectType({
  name: 'Issue',
  fields: () => ({
    id: { type: GraphQLString },
    submission: { type: GraphQLString },
    closed: { type: GraphQLString },
    status: { type: GraphQLString },
    customer: { type: PersonType },
    employee: { type: PersonType },
    description: { type: GraphQLString },
  }),
});

const StoreType = new GraphQLObjectType({
  name: 'Store',
  fields: () => ({
    employees: {
      type: new GraphQLList(EmployeeType),
      resolve: () => employeesJSON,
    },
    issues: {
      type: new GraphQLList(IssueType),
      resolve: () => issuesJSON,
    },
    customers: {
      type: new GraphQLList(CustomerType),
      resolve: () => customerJson,
    },
  }),
});

const QueryType = new GraphQLObjectType({
  name: 'Query',
  fields: () => ({
    store: {
      type: StoreType,
      resolve: () => store,
    },
  }),
});

export default new GraphQLSchema({
  query: QueryType,
});

Above is a very simple schema definition that we use to describe our data to GraphQL. If you were hooking this into a database, you would make calls to it inside the resolve functions you see above instead of returning data loaded from the mock JSON files. The reason we define a schema will be discussed in more detail later, but the idea is that we can generate a structure and assign types to our data to make static analysis possible.
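For example, resolve functions can return a Promise, so a database-backed version of the employees field might look roughly like this. This is a hypothetical sketch: db.getEmployees() is an assumed data-access helper, and the import paths are assumptions, not part of the example project.

// Hypothetical sketch: the employees field from StoreType above, backed by a
// database instead of mock JSON.
import { GraphQLObjectType, GraphQLList } from 'graphql';
import db from '../data/db';            // hypothetical data-access module
import { EmployeeType } from './types'; // assumes EmployeeType is exported from a shared module

const StoreType = new GraphQLObjectType({
  name: 'Store',
  fields: () => ({
    employees: {
      type: new GraphQLList(EmployeeType),
      // graphql-js will wait for the returned Promise before responding
      resolve: () => db.getEmployees(),
    },
  }),
});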

We wrap our data in a root-level key that we are calling Store here, so that when we query our data it comes back wrapped in a root-level object. This may feel familiar if you've used the Rails ActiveModel Serializers gem with the JSON API adapter type before. As shown below, it acts as a root-level key in our queries. This is really just a preference and you can take it or leave it.

query Customers {
  store {
    customers {
      week_num
      num_customers
    }
  }
}
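For reference, the response to that query comes back under GraphQL's standard data key, with store nested inside it. The numbers below are made-up placeholder values:

{
  "data": {
    "store": {
      "customers": [
        { "week_num": 1, "num_customers": 120 },
        { "week_num": 2, "num_customers": 134 }
      ]
    }
  }
}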

With our schema defined, we can now tell our Express server to serve the data through an endpoint using the GraphQL Express library.

// server/app.js
import 'regenerator-runtime/runtime';
import express from 'express';
import path from 'path';
import fs from 'fs';
import { graphql } from 'graphql';
import { introspectionQuery } from 'graphql/utilities';
import schema from './schema/schema';
import morgan from 'morgan';
import cors from 'cors';

// constants needed
const isDeveloping = process.env.NODE_ENV !== 'production';
const port = isDeveloping ? 1338 : process.env.PORT;
const app = express();
const graphqlHTTP = require('express-graphql');

// smoke-test query run at startup (fields live under the root `store` key)
const query = 'query { store { employees { id, numemployees, location } } }';

if (isDeveloping) {
  app.all('*', (req, res, next) => {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'GET, POST');
    res.header('Access-Control-Allow-Headers', 'Content-Type');
    next();
  });
  app.use(morgan('combined'));
}

app.use(express.static(__dirname + '/public'));

graphql(schema, query).then((result) => {
  console.log(JSON.stringify(result));
});

(async () => {
  try {
    app.use(
      '/api',
      cors(),
      graphqlHTTP({ schema, pretty: true, graphiql: true })
    );
    app.get('*', (req, res) => {
      res.sendFile(path.join(__dirname, 'public/index.html'));
    });
    app.listen(port, '0.0.0.0', (err) => {
      if (err) { return console.warn(err); }
      return console.info(
        `==> 😎 Listening on port ${port}.
Open http://0.0.0.0:${port}/ in your browser.`
      );
    });
    // write the introspected schema to disk for client-side tooling
    let json = await graphql(schema, introspectionQuery);
    fs.writeFile(
      './server/schema/schema.json',
      JSON.stringify(json, null, 2),
      err => {
        if (err) throw err;
        console.log('JSON Schema Created');
      });
  } catch (err) {
    console.log(err);
  }
})();

Now, it may look like there is a lot going on here, but it is really fairly simple. Most of this is just boilerplate Express setup; for example, we use CORS and Morgan when the app is in development to make our lives easier, and we also set our access-control headers.

The async IIFE that you see here is just a neat use of the async/await feature, which I believe will land in the next formal JavaScript specification. It allows us to write the GraphQL schema into a schema.json file asynchronously using the new syntax. Generating this file is part of the process.

Note: to get this working, you will need a file that first requires babel-core/register and then loads up the Express server.

To do this, I've created a root-level server.js file with the following:

require('babel-core/register');
var app = require('./server/app');

Client-side Setup

Now we can move onto the client-side setup. I began this project as a fork of my own React Boilerplate, where I have all of the boilerplate setup for React / Redux. My boilerplate aims to modularize React applications, and thus the setup was a bit different from the guide located on the ApolloClient site.

The main difference from the code on the site is that I separate my Routes, Store, and Reducer into separate modules, which you can see if you take a look at my boilerplate project. For this reason, I suggest creating a new file called apolloClient that you will use as a singleton in multiple files.

// /app/src/apolloClient.js
import ApolloClient, {
  createNetworkInterface,
  addTypeName,
} from 'apollo-client';

const client = new ApolloClient({
  networkInterface: createNetworkInterface('http://0.0.0.0:1338/api'),
  queryTransformer: addTypeName,
});

export default client;

Make sure that you set your network interface URL to the same endpoint that we created earlier in the Express setup.

Now, we need to use this client in multiple places in our App:

// /app/src/reducers.js
import client from './apolloClient';

const rootReducer = combineReducers({
  // ... Other reducers go here
  routing: routerReducer,
  form: formReducer,
  apollo: client.reducer(),
});

// /app/src/routes
import { ApolloProvider } from 'react-apollo';
import client from './apolloClient';
/* eslint-enable */

const routes = (
  <ApolloProvider store={store} client={client}>
    <Router history={history}>
      <Route path="/" component={App}>
        <IndexRoute component={Pages.LandingPage} name="Home" />
        <Route path="*" component={Pages.NotFoundPage} />
      </Route>
    </Router>
  </ApolloProvider>
);

// /app/src/store.js
import client from './apolloClient';

const loggerMiddleware = createLogger();
const middlewares = [thunk, loggerMiddleware, client.middleware()];

const enhancers = [];
const devToolsExtension = window.devToolsExtension;
if (typeof devToolsExtension === 'function') {
  enhancers.push(devToolsExtension());
}

const composedEnhancers = compose(
  applyMiddleware(...middlewares),
  ...enhancers
);

const store = createStore(
  rootReducer,
  initialState,
  composedEnhancers,
);

The three important pieces we need to set up here are:

  1. A reducer that acts as the Apollo Client's representative within our store.
  2. The ApolloClient singleton hooked up to our application via ApolloProvider, an HOC that wraps our routes in this case.
  3. The Apollo Client middleware added to our store.

This may seem like a lot of boilerplate, and, well, it is. But just as with Redux, once this boilerplate is in place, our application development process is extremely streamlined and declarative.

By wiring in the ApolloClient, we are now free to carry on using Redux as we were before, while using GraphQL to replace our Thunks from earlier.

Within our containers, we can write declarative query strings and let Apollo handle the details of executing them client-side. At this point, a container might look something like this:

import React, { Component, PropTypes } from 'react';
import { connect } from 'react-redux';
import { bindActionCreators } from 'redux';
import * as KeyMetricsViewActionCreators from './actions';
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';
import cssModules from 'react-css-modules';
import styles from './index.module.scss';
import Heading from 'grommet/components/Heading';
import { LineChart } from 'components';

class KeyMetricsView extends Component {
  render() {
    const {
      loading,
      store,
      areaChartLabels,
    } = this.props;
    return (
      <div className={styles.keyMetricsView}>
        <Heading align="center">
          Key Metrics
        </Heading>
        {loading ?
          <Heading tag="h2" align="center">Loading</Heading>
          :
          <LineChart
            data={store.customers}
            labels={areaChartLabels}
          />
        }
      </div>
    );
  }
}

KeyMetricsView.propTypes = {
  store: PropTypes.object, // undefined while the query is still loading
  areaChartLabels: PropTypes.array.isRequired,
  loading: PropTypes.bool.isRequired,
  error: PropTypes.object,
};

// mapStateToProps :: {State} -> {Props}
const mapStateToProps = (state) => ({
  areaChartLabels: state.keyMetrics.areaChartLabels,
});

// mapDispatchToProps :: Thunk -> {Action}
const mapDispatchToProps = (dispatch) => ({
  actions: bindActionCreators(
    KeyMetricsViewActionCreators,
    dispatch
  ),
});

const allCustomers = gql`
  query allCustomers {
    store {
      customers {
        week_num
        num_customers
      }
    }
  }
`;

const ContainerWithData = graphql(allCustomers, {
  props: ({ data: { loading, store } }) => ({
    store,
    loading,
  }),
})(KeyMetricsView);

const Container = cssModules(ContainerWithData, styles);

export default connect(
  mapStateToProps,
  mapDispatchToProps,
)(Container);

Notice how we are still able to use connected Redux components. To me, this is the best of both worlds. Once you have this wired up, you will see that, behind the scenes, Apollo Client is taking care of the asynchronous action creators for us, and we no longer need to follow the model mentioned earlier.

This setup allowed me to get my hands dirty with GraphQL without giving up Redux. When I have a better understanding of how Relay fits into that same picture, perhaps I will give it the ol' college try, but for now I am very happy with this setup.

In this example, what I am really trying to show is that the GraphQL setup with Apollo Client is very similar to what we are already doing with Redux. It lets us wrap our container with a function that maps the data returned from our query to props that we can use in our React container and pass down to our children.

Behind the scenes, Apollo Client is doing almost exactly what we were doing manually earlier with our thunks: it dispatches Redux actions at each stage of the control flow. The benefit, if you choose to see it that way, is that it abstracts those details away for you.
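If you want to see this for yourself, you can subscribe to the store and watch the apollo slice change while a query is in flight. This is just a debugging sketch; it assumes the store created in /app/src/store.js above is exported as the default, and the redux-logger middleware we already added will also print every action Apollo dispatches.

// Debugging sketch: observe the state slice Apollo manages while a query runs.
import store from './store'; // assumes the store singleton is the default export

const unsubscribe = store.subscribe(() => {
  // client.reducer() was mounted under the `apollo` key in combineReducers,
  // so Apollo's query bookkeeping lives here.
  console.log('apollo state:', store.getState().apollo);
});

// Call unsubscribe() once you are done inspecting.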

Inversion of Control

Back to the point, GraphQL flips the entire server relationship model on its head. It gives us, the people who are actually building the client-facing application UI, complete control over our data needs. It does so while remaining flexible and performant, cutting down on refetches and preventing over-fetching. It lets us represent our data needs declaratively and helps to bridge the gap between the client and server.

It also encourages exploration of data. Using the GraphiQL IDE, which can be enabled with the Express GraphQL library, the client-side developer has a tool that they can use to get their hands on the actual data by writing GraphQL queries in the browser. Take a look at GraphQLHub to see what I mean. I’ve been spending some time getting my Data Science friends pumped on the idea of using GraphQL as part of a data science pipeline. I can see it being useful in a number of other domains where having control over representing your data needs as a Graph would come in handy.

What you are seeing in action with the GraphiQL IDE is the static analysis that GraphQL is able to do behind the scenes. The GraphQL specification is built around assigning types to your data schema, which provides a mechanism for static analysis. As you type out a query in the GraphiQL IDE, you see instant feedback, including intelligent autocompletion and syntax-error highlighting. All of this is made possible by mechanisms built right into the GraphQL specification.
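To get a feel for that same static analysis outside the browser, you can run a query through graphql-js's own validation step, which is the same machinery GraphiQL leans on. This is a standalone sketch; the import path to the schema is an assumption.

// Standalone sketch: statically validate a query against the schema without executing it.
import { parse, validate } from 'graphql';
import schema from './server/schema/schema'; // assumed path to the schema defined above

const query = `
  query Customers {
    store {
      customers {
        week_num
        num_custmers   # intentional typo to trigger a validation error
      }
    }
  }
`;

const errors = validate(schema, parse(query));
if (errors.length) {
  // e.g. Cannot query field "num_custmers" on type "Customer" ...
  errors.forEach(err => console.error(err.message));
} else {
  console.log('Query is valid against the schema.');
}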

Beyond the purview of this article, these tools provide even more powerful data-fetching facilities, such as optimistic UI, pagination, and mutations, among many others. Relay has client-side caching built right in and manages many of the complexities of loading massively complex feeds of data.

More Resources

Hopefully this article has gotten you pumped about integrating GraphQL into your app. I hope to write more articles like this in the future as I get more hands-on time with a production setup. In the meantime, here are some of the resources that I used to learn about GraphQL and the accompanying tools and libraries.

The application that I built can be found here for reference, although I must warn that I am still working on the app, so it could break over the next week.

Enjoyed this article? Please make sure to tap the like button below!

Hey, I’m Ryan! I work in the Tech industry and post content about tech, leadership, and personal finance. https://www.ryanccollins.com