How Front-End Ops Works on the Hillary Tech Team

David Fox Powell
Published in Git out the vote
7 min read · Aug 23, 2016

Dedicating a team entirely to front-end ops is rare for almost any organization and is definitely a first for presidential campaign tech. The campaign environment lends itself to fast-paced technological life cycles where developers get to create and deploy a variety of projects in a relatively short period of time. On the Hillary Tech Team we embraced front-end ops as a priority from the beginning, long before the many short sprints of the primaries, and will continue this effort until the longest sprint ends on November 8th. As our pace accelerates, keeping the front-end build and testing infrastructure solid has proven invaluable to ensuring that our rapid release cycle does not result in more bugs, inefficiencies, and fragility.

So what is front-end ops anyway…is that even a thing?

For many of those in the Medium and Twitter spheres debating JavaScript tooling fatigue, the explosion of front-end technologies and tools is very real. With the creation of NodeJS in 2009, and its de facto package manager NPM in 2011, many of the scripts used to compile, test, and deploy front-end code moved from the realm of backend developers and DevOps into the front-end. Languages and frameworks such as ES6, SCSS, and React rely on code compilation and bundling, leaving the developers who choose them the additional overhead of learning the tools that build this code into an end product.

This is where front-end ops comes in. We live in the middle ground between DevOps and front-end developers, and our job consists of connecting the dots between multiple disparate build tools, testing frameworks, and continuous integration APIs to form a cohesive system.

Our choice to embrace front-end ops on the campaign meant uniting around build tools we felt would help skyrocket our productivity. As an initial team of 3 front-end developers back in June of 2015, we knew that intense times lay ahead. We foresaw building a multitude of applications ranging from microsites and custom CMS solutions to form builders, donation handlers, and event location tools. We chose utilities and frameworks such as React, Flux, SCSS, and ES6 that we thought would help us rapidly release applications in a re-usable and testable manner. Therefore, the tools we rallied around had to support:

  • Linting of JS and JSX code for reduction of errors and to encourage a uniform code style across projects.
  • Compilation of JavaScript and SCSS from modular files into bundles that can be optimized and cached.
  • Integration of JavaScript templating to compile HTML that can be written to static files or served dynamically.
  • A local server with livereload/hot-reload allowing for preview of code on a developer’s laptop.
  • Unit, integration, and end-to-end testing across various browsers and devices, runnable locally as well as within a continuous integration environment.
  • Continuous deployment of code and replacement of environment-dependent asset paths and variables in JavaScript and CSS.
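As a rough illustration, requirements like these often surface as the scripts section of a project's package.json, each script delegating to a build tool. The script names and tool invocations below are hypothetical stand-ins, not the campaign's actual configuration:

```json
{
  "scripts": {
    "lint": "eslint src/",
    "build": "gulp build --env production",
    "start": "gulp serve",
    "test": "karma start karma.conf.js",
    "test:e2e": "gulp e2e",
    "deploy": "gulp deploy --env production"
  }
}
```

Each of these one-liners hides a pipeline of compilers, bundlers, and test runners — which is exactly the surface area front-end ops owns.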

Looking back on those early days, my grandpa’s favorite quote rings in my ears: “if hindsight were foresight, we’d all be rich”. This definitely applies (minus the rich part 😜) to the lessons we learned from our first attempts at a front-end build structure. As our project requirements have continuously expanded and our tech team has grown to over 60 members, our front-end build system has been forced to keep up. From this massive growth and many hard-fought projects, 3 distinct guidelines have become increasingly obvious.

Guideline 1: Boilerplates have too much boilerplate in them

When asked about “trending repos” on GitHub, Sindre Sorhus listed plugins and boilerplates in his AMA blog as some of the most popular. This makes sense, as more and more front-end developers get overwhelmed with per-project setup and want to “just get coding”. Our first attempt at front-end build architecture on the campaign took a boilerplate approach…something similar to the react-redux-universal-hot-example. Looking at boilerplates in this genre makes it easy to see why members of the JavaScript community often wistfully reminisce about the days of jQuery and a <script> tag. The package.json alone has about 50 dependencies, nearly as many devDependencies, and 13 NPM scripts to build the project. On top of that, it has multiple build files alongside the front-end code, which can leave a curious developer spending hours tinkering to customize and understand the build rather than getting straight to writing code for the browser.

Our initial approach with boilerplate build in every individual repo.

We found this individualized boilerplate approach got unruly in no time: developers lost time tinkering with their local builds, and updates to any piece of the boilerplate had to be rolled out across multiple projects rather than from a centralized location. It was analogous to having the Ruby on Rails asset pipeline hardcoded into every project, with its entrails exposed for all to see. This quickly resulted in wasted time on build updates, hot-fixes, and dependency management.

Guideline 2: Manage dependencies centrally, not locally

A modern JavaScript-heavy front-end application generally has more than enough dependencies as is, without throwing all of the build dependencies into the package.json. As alluded to in guideline #1, this definitely wouldn’t work for us in the long term, and difficult updates such as Babel 5 to Babel 6, or the left-pad debacle, can leave developers confused and heavily reliant on the intervention of front-end ops.

When we eventually moved away from managing dependencies on a per-project basis and abstracted dependency management into external packages, new possibilities appeared. Updates and bug fixes could be rolled out with a simple `npm i` locally or in CI, problematic packages could be shrinkwrapped across various projects, and the package.json slimmed down to only the dependencies the front-end code relied on directly.
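The effect on a project's package.json might look something like the sketch below, where `frontend-build-core` is an illustrative name for a centrally published build package (not our actual package name). Only the dependencies the application code imports directly remain:

```json
{
  "name": "example-campaign-app",
  "dependencies": {
    "react": "^15.3.0",
    "flux": "^2.1.1"
  },
  "devDependencies": {
    "frontend-build-core": "^1.0.0"
  }
}
```

A disruptive build-tool upgrade, such as Babel 5 to Babel 6, can then ship as a single version bump of the central package rather than a hand-edited dependency list in every repo.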

Guideline 3: Allow for per project based customization

In March of 2016 we refactored our entire build boilerplate based on the lessons from guidelines #1 and #2, with the goal of modularizing and externalizing boilerplate code from current and future projects. The refactored architecture took a form similar to that of the Babel and React repos on GitHub, where a “packages” directory holds the individual modules that make up the entire suite, each published independently to NPM. A tool called Lerna is the glue that pieces together this mono-repo approach by managing the process of linking packages locally and later publishing them to NPM independently. Therefore, the code that powers the build for all of our front-end repos is itself built, tested, and published by various JavaScript build tools.
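A minimal sketch of what a Lerna-managed mono-repo along these lines can look like — the directory and package names are illustrative, though `"version": "independent"` in lerna.json is the real Lerna setting that lets each package publish on its own cadence:

```
build-tools/
├── lerna.json            # { "version": "independent" }
└── packages/
    ├── core/             # central entry point that projects install
    ├── presets/          # bundles of tasks for common project types
    ├── tasks/            # individual Gulp tasks (compile, lint, serve…)
    ├── addons/           # optional extras layered onto a preset
    ├── config/           # shared lint and compiler configuration
    └── utils/            # helpers shared across tasks
```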

Modular build boilerplate with utils, config, addons, and tasks grouped together to form presets and run from a central core. All packages are published independently to NPM to be installed by various projects.

Even though the mono-repo approach allowed us to consume published packages as a whole via a “core” or independently in piecemeal fashion, this wasn’t quite enough for our build requirements. Every project is different, and some front-end developers dislike abstraction as much as, if not more than, they dislike front-end tooling. Therefore, we developed a system for task-specific configuration to be supplied at the project level in order to customize various aspects of the build. On top of that, custom Gulp tasks can be added by a developer directly to the repository, allowing them to freely extend the abstracted boilerplate. The final result of these efforts is a modularized boilerplate that manages dependencies centrally and can be customized at the project level to fit our wide range of requirements.
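The core of such project-level customization is merging a small per-repo config file over the defaults shipped with the central preset. The sketch below shows one way that merge can work; all names (`presetDefaults`, `projectConfig`, the task keys) are hypothetical illustrations, not our actual API:

```javascript
// Defaults shipped with a centrally published build preset (illustrative).
const presetDefaults = {
  sass: { includePaths: ['node_modules'] },
  eslint: { extends: 'shared-campaign-config' },
  server: { port: 3000 },
};

// A small config file a project drops in its own repo,
// overriding only the settings it needs to change.
const projectConfig = {
  server: { port: 8080 },
  sass: { includePaths: ['node_modules', 'src/styles'] },
};

// Shallow-merge each task's options: project settings win over preset defaults,
// and tasks the project doesn't mention keep the preset's values.
function mergeConfig(defaults, overrides) {
  const merged = { ...defaults };
  for (const task of Object.keys(overrides)) {
    merged[task] = { ...defaults[task], ...overrides[task] };
  }
  return merged;
}

const buildConfig = mergeConfig(presetDefaults, projectConfig);
```

The build tasks then read from `buildConfig`, so the boilerplate stays abstracted away while each repo keeps an escape hatch for its quirks.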

Sounds cool…but what have we done with all this?

As our build system has evolved from monolith to mono-repo, it has helped to facilitate not only some amazing innovations but stable products with quick turnaround times. Many utility modules have been built and published to our self-hosted NPM, and are installed daily within a variety of HFA repositories. Our agile teams have produced projects that use isomorphic rendering of React, as well as others that implement a mobile-first, performance-optimized vanilla JS “in-house” Flux framework. The majority of UI customization is styled with our custom SCSS framework, Pantsuit, which is itself supported by our build system and packaged on our internal NPM. These projects have been deployed successfully to environments including AWS Lambda, static content hosted on S3, Chrome extensions, and an EC2-hosted NodeJS Koa2 server that templates data from a WordPress JSON API.

With less than 3 months left we have a lot more in the works that will most likely surpass anything we’ve done in the last year. It will be an exciting race to the finish for both our tech team and the Nation. Simultaneously riding the rollercoaster of a presidential campaign and the JavaScript community has been a unique experience, and has allowed us to support the development of applications that will not only benefit the campaign, but ultimately the Nation as a whole.

If you like this article, go to hillaryclinton.com, open your JavaScript console, and get involved.
