Automating Alpha

Casey Webb
Apr 22, 2017


Our first delve into CI/CD

At Profisciencē, we do a lot of things the old-school way, but we’re making the rounds and bringing not only our app, but also the tooling and architecture behind it, into 2017.

This week, that meant automating our alpha server so that pull requests automatically become available for our lovely tester Kim. As it stood, her workflow involved RDP-ing into alpha, doing a git pull and git checkout, and publishing the site (one of our build tasks).

Our build process already contained half of what we needed to get started: a yarn deploy task that builds the client, compiles the API, creates a database from a backup and migrates it, and deploys to IIS (see #2). All we were missing was an equivalent yarn undeploy to clean up after a PR is closed, and that was a quick enough addition.
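For the curious, the undeploy half boils down to tearing down the PR’s IIS site and dropping its database. Here’s a rough sketch; the command strings and names are illustrative stand-ins, not our actual build code:

```javascript
// Hypothetical sketch of a `yarn undeploy` cleanup task.
// Build the shell commands needed to tear down a deployed PR.
function buildUndeployCommands (appName) {
  return [
    // appcmd is IIS's management CLI; this removes the site created for the PR
    `appcmd delete site /site.name:"${appName}"`,
    // sqlcmd ships with SQL Server; this drops the PR's database
    `sqlcmd -Q "DROP DATABASE [${appName}]"`
  ]
}

// The real task would run these with child_process.execSync on the server.
console.log(buildUndeployCommands('PR-42').join('\n'))
```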

First came the not-so-fun stuff: creating a test server with a publicly accessible domain, since I didn’t feel like constantly changing my webhook URL on GitHub with localtunnel, or paying for ngrok. They’re both great, but ngrok costs money and localtunnel’s URLs are ephemeral. Since I have access to MSDN and was already using https://www.vultr.com/, I decided to have a go at installing a Windows Server 2016 box there, and after a bit of a struggle got a server provisioned. (Hint if you try this: the ISO with the slipstreamed drivers is just too big to upload. ImgBurn, however, can get it under the threshold if you check “Optimize Duplicate Files” under Settings > Build > Page 1.)

Cool, now onto the fun stuff. I’d played with GitHub webhooks before when building Hubbard, and my first inclination was to modify it to support branches, but I ditched that idea not far in.

The first iteration was a simple Node script that started a server listening for pull request webhooks. On receipt of one, it would spawn a process and run yarn deploy -- --appName=PR-<number> in the locally cloned repo (appName specifying the name to use for the database and IIS site). For us, this beats doing a fresh clone and starting each PR the same way we do in development (yarn start in our case), because the project is huge and clones can take a while even on a good connection. Occam’s razor, I thought. I started it on my server and let it run.

It wasn’t long, though, before a few things became clear:

  • Since builds are produced from the same repo and published to IIS, they can’t run concurrently
  • Developers should be notified if a PR fails to deploy, or when it’s done; confirmation that it’s started would be nice as well
  • Build logs need to be accessible somehow
  • It would be nice to deploy non-PR branches, or stage sites for special occasions like UX-testing sessions
  • master and dev branches should always be deployed and up-to-date

So, I pondered, and ultimately decided to use Hubot as a starting ground.

For the reasons above, cloning the project for each build was a no-go. The resources on the server this will eventually run on are nothing special either, so queueing was the best option. I implemented a simple async queue, wired up my existing webhook code using hubot-github-webhook-listener, and, to cover notifications, added Slack messages on queue, start, success, and failure.

With minimal effort, two of the five issues were nixed. I wasn’t satisfied, however, and ended up adding calls to the GitHub status API (if you’re unfamiliar, it’s what puts those nice pass/fail checks at the bottom of a pull request).

This was a wee bit more effort, but still nothing outrageous.

Next, logs. To do this, I added code to create a file system write stream that is then passed to all the child processes. This is simple and works splendidly. Serving them is handled by a robot.router endpoint that serves the log for a specific sha. Eventually, I tweaked this to use server-sent events (SSE) so that the logs stream in near-realtime without the need to refresh.

Bam. Only 2 must-haves left.

This was a matter of refactoring the deploy code into a utility and duplicating the deploy-pr.js task that contained the webhook handling into files for master, dev, and custom deploys. master and dev are near identical; they simply hardcode their appName and listen for the push instead of the pull_request event. Custom uses robot.respond to call that same deploy function with a branch name and app name supplied by the user instead of from a webhook payload.
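In sketch form, with the file layout, regexes, and names as illustrative assumptions rather than the actual task code:

```javascript
// The shared utility the webhook/chat entry points call into. The real
// version queues the build and runs `yarn deploy -- --appName=<appName>`.
function deploy ({ branch, appName }, robot) {}

// deploy-master.js / deploy-dev.js: near identical, hardcoded values,
// listening to push instead of pull_request events.
function watchBranch (robot, branch) {
  robot.on('github-repo-event', ({ eventType, payload }) => {
    if (eventType === 'push' && payload.ref === `refs/heads/${branch}`) {
      deploy({ branch, appName: branch }, robot)
    }
  })
}

// deploy-custom.js: chat-triggered, e.g. "hubot deploy my-branch as ux-test"
function customDeploys (robot) {
  robot.respond(/deploy (\S+)(?: as (\S+))?/i, res => {
    const branch = res.match[1]
    const appName = res.match[2] || branch // default the app name to the branch
    res.reply(`Deploying ${branch} as ${appName}`)
    deploy({ branch, appName }, robot)
  })
}
```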

And that was it. Now, RDP-ing into alpha is a thing of the past, and we’re immediately notified of failing production builds. It might be a bit overdue, but we’re getting there, and I’m having a good time doing it.

To see the end result, check out Profiscience/alfie on GitHub.
