React Continuous Delivery with Gulp, AWS, and Terraform

Adam Neary
5 min read · Mar 22, 2016


Plenty of great ink has been spilled over unit testing React components, Redux reducers, et al., but we had a lot of difficulty finding meaty posts about setting up proper full-stack integration tests for the modern JS stack.

This is article 3 in a 4-part series.

Our setup was particularly tricky because we have a handful of fully decoupled backend services and two React apps sharing components. If you’re going down a similar road, hopefully this series saves you a couple steps.

Part 3: Gulp and AWS for Effortless Continuous Delivery

We have two React apps in production with no backend delivery whatsoever. They are simply sitting as static files on S3 with CloudFront to serve them. To the endless ire of those who yearned to spend six weeks artisanally crafting front end devops, we are eight months in production with a global user base and have had literally zero downtime.

Let’s talk about how and why…

What About Universal JavaScript? (in our case…YAGNI)

If we needed users to access thousands of data-rich pages outside of a logged-in state, we might want to devise a way to deliver those pages fully baked from the server. Every millisecond to first user interaction counts. But in our case, the marketing site is fully static and delivered over CDN, so we are covered there. If the user is hitting one of our React apps, the user’s first interaction is a simple login page.

As a result, we are able to deliver the apps as static assets with no server interaction whatsoever. As exciting as the possibilities of Universal JavaScript may be, for today it’s “You Aren’t Gonna Need It” for us.

Webpack and Gulp…Playing Nicely Together

I suspect Webpack could do everything we need, but we have the responsibilities split between Webpack and Gulp, and they work together nicely.

A relatively plain-vanilla Webpack and Babel setup takes care of all asset packaging and compilation. We develop locally with Webpack providing the development server and Hot Module Replacement. When we deploy, Webpack compiles the assets for us. No news here.

Only when we’re deploying does Gulp come into play. We chose Gulp because it’s so great with pipelines, and the integrations with GitHub, AWS, and Slack are so easy to plug in.
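For a concrete flavor, here is a minimal sketch of how the deploy-environment command-line argument might be resolved. The flag name and environment list are assumptions for illustration, not our actual gulpfile:

```javascript
// Sketch: resolving the deploy environment from a command-line flag.
// The --env flag and the environment names are assumptions.
const VALID_ENVIRONMENTS = ['staging', 'production'];

function resolveEnvironment(argv) {
  // Expect something like: gulp deploy --env staging
  const flagIndex = argv.indexOf('--env');
  const env = flagIndex !== -1 ? argv[flagIndex + 1] : null;
  if (VALID_ENVIRONMENTS.indexOf(env) === -1) {
    throw new Error('Unknown deploy environment: ' + env);
  }
  return env;
}
```

Once resolved, the environment name selects which config dotfile gets loaded into environment variables before the rest of the pipeline runs.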

A few pretty useful things are going on in the deploy pipeline:

  • Deploy environment: The desired deploy environment is captured with a command line argument, and the corresponding configuration is loaded into environment variables. Other environment variables such as AWS keys are maintained in CI.
  • GitHub: We tag all releases in GitHub with a unique build number. At the outset of a deploy, we ping GitHub for the most recent build number and increment. When post-deploy smoke tests are run, this build number is sent to Sauce Labs. The build number is even baked into the app code itself so that we can always verify build numbers in the console and within logging and error tracking. #WeGotThisOnLock
  • AWS: As we will discuss in the next section, index.html is set to never cache, while all other assets are revved and set to never expire.
  • Slack: Once we’re out, we let CI do the bragging in Slack.
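The build-number increment is simple enough to sketch. The `build-N` tag format here is an assumption, since the post doesn’t show the real one:

```javascript
// Sketch: deriving the next build tag from the most recent GitHub tag.
// The "build-N" tag format is an assumption.
function nextBuildTag(latestTag) {
  const match = /^build-(\d+)$/.exec(latestTag || '');
  const current = match ? parseInt(match[1], 10) : 0;
  // Start from build-1 when no tag exists yet.
  return 'build-' + (current + 1);
}
```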

CloudFront, S3, Route 53, and React Router

Gulp can drop a set of static assets in S3, and CloudFront can ensure that they are delivered to our users efficiently. But it is critical that users see the right assets. This is achieved by revving everything except index.html. These assets accumulate in the S3 folder and are never deleted. They’re small, and space is cheap. The index.html file, in contrast, is replaced each time, and as long as it is set to never cache, users will always see the correct build.

We originally tried managing this with configuration on CloudFront and S3, but it never quite worked. However, tagging assets individually with the awspublish library has been bulletproof. Problem solved.
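As a sketch of what that per-file tagging amounts to, here is a function choosing the kind of Cache-Control headers gulp-awspublish can attach when publishing. The exact header values are assumptions; the point is the split between the never-cached index.html and the immutable revved assets:

```javascript
// Sketch: per-file Cache-Control headers for publishing to S3.
// Header values are assumptions, not our exact configuration.
function cacheHeadersFor(filename) {
  if (filename === 'index.html') {
    // Replaced on every deploy, so browsers must always revalidate it.
    return { 'Cache-Control': 'no-cache, no-store, must-revalidate' };
  }
  // Revved assets never change once published; cache them for a year.
  return { 'Cache-Control': 'public, max-age=31536000' };
}
```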

Nice side benefit: When we tag the release in GitHub, we include the final copy of index.html. This means if we ever need to revert to a previous build, we can literally just drop that HTML file into S3 manually, and it will immediately point to the working assets we need, which were never deleted. (I am sure that could be automated, but we don’t do a lot of rolling back.)

Now that we have a way to deploy assets for each environment (and these assets are configured in the build pipeline to interact with the appropriate backend services), we need to configure DNS.

We have two apps:

  • The main app is available at the naked domain: sera.co
  • The team admin app uses subdomains like myteam.sera.co or yourteam.sera.co (these teams do not exist, which is why you would get the error you do!)

S3: Each combination of app and environment has its own S3 bucket. Just enable static website hosting and point web requests and errors to index.html. It really is that simple.
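In Terraform terms (classic pre-0.12 syntax), the bucket setup might look something like this; the bucket and resource names are hypothetical:

```hcl
# Sketch: one bucket per app/environment pair (names are assumptions).
resource "aws_s3_bucket" "app_staging" {
  bucket = "app-staging"
  acl    = "public-read"

  website {
    index_document = "index.html"
    # Routing errors to index.html lets the client-side router take over.
    error_document = "index.html"
  }
}
```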

CloudFront: Create a CloudFront distribution for each S3 bucket. In our case, we redirect all HTTP to HTTPS (yes, CloudFront handles SSL). Be sure to set caching to “Use Origin Cache Headers.” Then, create a custom error response that sends 404 errors to index.html with a 200 status. Provided you are using Browser History with React Router, all requests to the domain will end up in your app where React Router can handle them.
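The custom error response can be sketched in Terraform as well. This is a fragment only; a real aws_cloudfront_distribution also needs origins, cache behaviors, and certificate settings:

```hcl
# Sketch: the 404 -> index.html rewrite on the distribution.
resource "aws_cloudfront_distribution" "app" {
  # ... origin, default_cache_behavior, viewer_certificate, etc.

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }
}
```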

Route 53: Now that Route 53 can handle Alias records, all you need is an A record with an alias target set to the CloudFront URL. In our case, one for sera.co and one for *.sera.co. Repeat for each environment, and you’re done. …or use Terraform (see below).
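The corresponding record might look like this in Terraform (resource names are assumptions):

```hcl
# Sketch: alias the apex domain to the CloudFront distribution.
resource "aws_route53_record" "apex" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "sera.co"
  type    = "A"

  alias {
    name                   = "${aws_cloudfront_distribution.app.domain_name}"
    zone_id                = "${aws_cloudfront_distribution.app.hosted_zone_id}"
    evaluate_target_health = false
  }
}
```

A second record with `name = "*.sera.co"` covers the team subdomains.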

React Router: Browser History gives you everything you need to handle links of any depth, provided CloudFront routes 404s to index.html.

We use react-router-redux, which gives us a simple listen() method we can use for page-view tracking. (Remember when JavaScript used to be hard?)
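A sketch of that wiring: here `history` stands in for react-router’s browserHistory (an assumption), and any object exposing listen(callback) works, which also makes it easy to test:

```javascript
// Sketch: page-view tracking via the router's history listener.
// `history` stands in for react-router's browserHistory; `track`
// is whatever analytics call you use (both are assumptions).
function trackPageViews(history, track) {
  // listen() fires on every navigation and returns an unsubscribe function.
  return history.listen(location => track(location.pathname));
}
```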

Managing Config with Terraform

I like to think of our infrastructure as a pure function of configuration and environment variables. Or maybe those are sort of the same thing. In any case, Terraform gives us a simple place to define our configuration as code, eliminating so much of the potential for mishaps when firing up new environments.

Yes, Gronk. The team reluctantly allows me to name repos and servers after Patriots players and staff.

In the end, when we needed to fire up a new environment, the front end side of the operation was as simple as:

  1. Using Terraform to provision the new AWS services
  2. Adding the name of the new environment to our list of valid deploy environments in Gulp, and
  3. Adding a config dotfile of the format .env.[newEnvVar] with variables such as which hosts to hit for various backend services
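A hypothetical dotfile of that shape; the environment name and variables are made up, and real values would point at our backend services:

```
# .env.staging — hypothetical environment name and variables
API_HOST=https://api.staging.example.com
REPORTING_HOST=https://reporting.staging.example.com
```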

After all these years, I still have no idea how to configure nginx. Here’s hoping that never changes.
