Building applications at campaign speed

Kyle Rush · Published in Git out the vote · Apr 20, 2016


Anyone who has worked in technology on a presidential campaign can tell you that it’s very different from working in technology at a tech company, startup, corporation, agency, government, or any other kind of organization. The biggest difference is the speed at which the software development cycle moves on a campaign. It’s a race with an end date and two candidates, only one of whom can win. Speed is important, to say the least.

In the 2012 cycle the Digital department had a team of about 22 engineers, mostly frontend, along with some quality assurance and project management resources. We wrote a ton of code and learned a lot from that experience. Those lessons have shaped our approach to building application frontends on the Hillary 2016 campaign.

One of the biggest pain points in 2012 was an inadvertent decision to cram too many projects into our content management system (CMS) application. Many projects that were static (no database dependency) were placed into the CMS codebase and deployed that way. This approach was painful because we never found a way to make the CMS technology play well with 22 people working in it and deploying code at campaign speed. Even for a static project, you had to set up a fully dynamic application environment on your machine and go through the process of testing in staging before going to production. This process butts heads with what I consider the number one principle of software engineering on a political campaign: speed of releases is of the utmost importance.

This time around we have made it possible for engineers to fork a frontend boilerplate repo and use it either as part of a dynamic application or as a static interface. As a result, our software engineers get a fully functioning frontend build system that requires relatively little configuration to get up and running. It comes with all the latest bells and whistles: Babel transpiling for JavaScript, JSX compilation, server-side React rendering, code linting, Sass, testing infrastructure so engineers can run tests locally and in CI, and more.
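To give a rough sense of what that looks like, here’s a minimal sketch of the kind of server-side React rendering such a boilerplate sets up. The component and file names are illustrative, not our actual code:

```js
// A minimal sketch of server-side React rendering, assuming an
// Express server and a Babel build like the boilerplate's. The
// App component and file layout here are hypothetical.
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './components/App'; // hypothetical root component

const server = express();

server.get('*', (req, res) => {
  // Render the React tree to an HTML string so the page works
  // before the client-side bundle finishes loading.
  const markup = renderToString(<App url={req.url} />);
  res.send(`<!doctype html>
<html>
  <body>
    <div id="root">${markup}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

server.listen(3000);
```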

This build boilerplate lets a team move quickly on a new project in a completely clean codebase. They don’t have to integrate their code into an existing codebase. They aren’t tied to another team’s deployment cycle; they deploy when they want. They don’t have to deal with a massive suite of tests that has nothing to do with their specific project.

To give you an idea of how this works in practice, www.hillaryclinton.com/events, www.hillaryclinton.com/contribute/donate, and www.hillaryclinton.com/ are all separate codebases. This allows the teams that manage events, donate, and the homepage to move independently on their own timeline with their own tech infrastructure. There are many more codebases, but I won’t list them all here.

This approach introduces a problem, though. All three of those codebases contain an application that has to look and behave like the others. They share the same design and some of the same features, like login and account creation. To solve this problem we write modular code. The design problem is solved by our own 100% custom-built UI bootstrap and component library (code-named Pantsuit :). Every new boilerplate project has the library built in. For many projects our engineers write very little, and sometimes no, CSS because of Pantsuit. Just by adding Pantsuit classes to HTML elements we get a fully in-brand, ready-to-go interface.
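To illustrate, a component built on a library like that might look roughly like this. The class names here are invented for the example, not Pantsuit’s real ones:

```jsx
// Illustrative only: these class names are invented, not the
// actual Pantsuit classes.
import React from 'react';

export default function SignupCard() {
  // No project-specific CSS needed; the shared library's classes
  // provide the in-brand styling.
  return (
    <div className="ps-card">
      <h2 className="ps-heading">Join the team</h2>
      <button className="ps-btn ps-btn--primary">Sign up</button>
    </div>
  );
}
```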

Just as in the JavaScript open source community, our engineers modularize JavaScript code that might be used by other teams. They write the module in a separate repository, publish it to our internally hosted npm instance, and pull the dependency into their project. The module has its own tests in its repo, and each release follows semver. Our most used module right now is something we call JS API Wrappers. Any project that needs to communicate with our APIs uses it to do so. When we need a global change in how we interact with our backend APIs, we publish a new version of JS API Wrappers and let all consumers know they need to bump their version, run their tests, and release.
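As a sketch of how a consumer might use a module like that (the package name and methods here are invented, not the real interface):

```js
// Hypothetical consumer of a shared API wrapper module pulled
// from an internal npm registry; 'js-api-wrappers' and its
// 'events' interface are invented for this example.
import { events } from 'js-api-wrappers';

// The wrapper owns base URLs, auth, and error handling, so a
// backend change means bumping one dependency and re-releasing,
// not editing every codebase that talks to the API.
events.list({ zip: '11201' })
  .then(upcoming => console.log(upcoming))
  .catch(err => console.error('API error:', err));
```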

Another example is a module we call Form Components, which serves as our de facto Flux implementation for web forms. It provides a lot of functionality: form field validation, tracking in Google Analytics and Optimizely so we can understand how users interact with a form, success and error states, and much more. When an engineer has a web form to build, they install Form Components with npm and implement the module.
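Roughly, using such a module might look like this; the API shown is invented to illustrate the idea, not the actual Form Components interface:

```jsx
// Hypothetical usage of a shared form module; the 'form-components'
// package and its Form/TextField API are invented for this sketch.
import React from 'react';
import { Form, TextField } from 'form-components';

export default function DonateForm() {
  return (
    // Validation, analytics tracking, and success/error states are
    // handled inside the shared module rather than per project.
    <Form action="/api/donations" analyticsId="donate-form">
      <TextField name="email" type="email" required />
      <TextField name="amount" type="number" min={1} required />
    </Form>
  );
}
```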

Maintaining this many codebases is one of the downsides of this approach. It has been a struggle to keep them all up to date with both external and internal dependencies. To make this easier we have dedicated frontend ops resources who maintain changes to the boilerplate and help with broken builds, deployments, and whatever else is needed. We scale the frontend ops resources as the number of software engineers increases. Right now our ratio of frontend ops to software engineers is 1:17.

This approach has worked well for us so far, but it is not without its pain points. Writing modular code is hard. Maintaining a well-tested module for consumption across many use cases can be challenging. Our team is learning a lot about releasing alpha/beta versions, communicating changes to consumers, proper semver, the importance of changelogs and documentation, and communication in general. Struggles aside, though, this is a huge improvement over what we had last cycle. We have spent a lot of time over the past year architecting and building this approach to make sure we can strike quickly when we need to.
