Hassle-Free Go in Production

How we manage our Go infrastructure with one engineer.

Ben Newhouse
6 min read · Nov 20, 2013

Background: I’m a co-founder/developer of bubbli, a new iOS app. I previously worked at Yelp, where I created Yelp Monocle, which generated hundreds of thousands of new users in a matter of days. Given that the “magic factor” between Yelp Monocle and bubbli is similar, I wanted to make sure we were prepared for anything. With the following set-up we indeed scaled to hundreds of thousands of users without any major issues or downtime. I also happen to be the only engineer on the team.

Note: skip to the bottom for the juicy bits/code snippets

I will spare you the long road that led to Go (hint: it involved Python and gross amounts of Puppet) and just cut to the chase.

The Lifecycle of Our Go Code

  1. Write some Go code, test it, commit it
  2. A Makefile compiles all of our code into a single executable with no real dependencies (read: nothing that doesn’t ship with a modern Linux distribution).
  3. The Makefile then packages together a .deb package, consisting of the Go executable and assorted upstart scripts and configuration.
  4. The Makefile then distributes the .deb package to a private S3 bucket.
  5. At a set interval (currently 10 seconds), our servers perform HEAD requests against the .deb package.
  6. If the package has changed, they download and install it.
  7. Send SIGUSR2 to our bubbli processes, which causes our web machines to gracefully restart immediately and our computer vision machines to restart when it’s next convenient.

There is really nothing revolutionary about anything I’ve mentioned here, but put together, these steps result in a great deployment experience.

With a little more effort you could set this up to continuously deploy with the proper git hooks, as sketched below.
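For instance, a post-receive hook on a central repository could rebuild and publish the package on every push. This is just a sketch, and the paths and branch are hypothetical:

#!/bin/sh
# .git/hooks/post-receive (hypothetical): rebuild and publish on every push
unset GIT_DIR                  # hooks run with GIT_DIR set; clear it before cd-ing elsewhere
cd /srv/bubbli || exit 1
git pull origin master         # update the working copy
make package                   # build the .deb and upload it to S3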

The Good

Very few moving parts

We have one executable that does virtually everything our servers need to do (except computer vision, which is a C++ program that we call out to). To run our web server, you just scp it to a Linux box, configure some options with environment variables, and run it. Oh, and it’s 1.6MB gzipped.
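Here’s a minimal sketch of that environment-variable style of configuration; the variable name and default are hypothetical, not our actual config:

package main

import (
	"log"
	"net/http"
	"os"
)

// envOr returns the environment variable named by key, or def if it is unset.
func envOr(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	addr := envOr("BUBBLI_ADDR", ":8080") // hypothetical variable name
	log.Printf("listening on %s", addr)
	log.Fatal(http.ListenAndServe(addr, nil))
}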

Catch errors early

Static typing is great, and while you can still shoot yourself in the foot with interfaces, code that compiles is much more likely to be correct than a Python web server that merely boots up. Furthermore, since Go doesn’t need a huge set of surrounding components to run properly, you remove a whole class of failures caused by your deployment environment. Coming from Python, where we would run code on top of Werkzeug inside of Gunicorn inside of a Virtualenv inside Supervisord, all behind Nginx and configured with Puppet, this was a huge relief.

Deploying scales well

S3 is huge and reliable. Once your package is uploaded, you generally don’t need to worry about whether your distribution strategy is bottlenecked somewhere or depends on you not losing your SSH connection mid-deploy. Coming from Puppet, which consumed obscene amounts of CPU doing essentially the same work on a bunch of machines, it’s great not to have to worry about a puppet-master as a single point of failure. Of course S3 can fail too, but all things considered, it is much more robust than anything we would maintain in-house.

A reader commented that you should always check MD5 hashes when copying your .deb file around, which is definitely the case. Fortunately, if you use the awscli, this is done automatically!

Deploying is fast

Deploy speed is primarily determined by how often our autoupdate Upstart script checks for updated packages. Fortunately, S3 requests cost $0.004 per 10,000, so checking every ten seconds (roughly 260,000 requests per month) costs about $0.10 per month per server.

Deploying is seamless

Using the goagain package (https://github.com/rcrowley/goagain) it’s very straightforward to write net/http servers that seamlessly hand off a listening socket to a new process, wait a bit for existing requests to finish, and then terminate. For some reason the example uses a generic TCP socket, but you can hand a listener straight to net/http too; just replace

go serve(l)

with

go http.Serve(l, nil)

after you’ve set up your handlers. Restarting a running web process is as simple as sending it a SIGUSR2 signal.
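Putting it together, here’s a minimal sketch adapted from goagain’s own example; the port and the grace period are arbitrary:

package main

import (
	"log"
	"net"
	"net/http"
	"time"

	"github.com/rcrowley/goagain"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	// Try to inherit a listener from a parent process; otherwise listen anew.
	l, err := goagain.Listener()
	if err != nil {
		if l, err = net.Listen("tcp", ":8080"); err != nil { // port is arbitrary
			log.Fatalln(err)
		}
		go http.Serve(l, nil)
	} else {
		go http.Serve(l, nil)
		// The child is serving; tell the parent to exit.
		if err := goagain.Kill(); err != nil {
			log.Fatalln(err)
		}
	}

	// Block until SIGUSR2 (re-exec a fresh copy of the binary) or SIGTERM/SIGQUIT.
	if _, err := goagain.Wait(l); err != nil {
		log.Fatalln(err)
	}

	// Stop accepting new connections and give in-flight requests a moment to finish.
	if err := l.Close(); err != nil {
		log.Fatalln(err)
	}
	time.Sleep(10 * time.Second)
}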

Rollbacks are (theoretically) easy

You can enable versioning on your S3 bucket, so that you can restore an old version of your package, which will automatically be picked up by your machines and installed. I’ve never actually needed to do this, however.
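If you ever do need it, restoring should just be a matter of copying an old version over the current key; the bucket, key, and version ID here are hypothetical. The autoupdate loop below then sees a new MD5 and installs it:

# List the versions of the package
aws s3api list-object-versions --bucket your-package-bucket --prefix bubbli-server.deb

# Copy an old version over the current key
aws s3api copy-object --bucket your-package-bucket --key bubbli-server.deb \
    --copy-source "your-package-bucket/bubbli-server.deb?versionId=OLD_VERSION_ID"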

The Less Good

Must deploy from a binary-compatible machine

This is a minor inconvenience if you develop on a Mac, but it’s probably better in the long run, because deploying from one OS to another is just asking for trouble. I know there are some tools in the golang community for cross-compiling, but I haven’t tried them out yet.

Configuration isn’t super flexible

At the moment, we ship a bunch of configuration in our Upstart scripts, which are packaged into our .deb packages. This feels really dirty, but given how easy it is to deploy, it isn’t a huge issue. I know Instagram just refreshes configuration info from a redis server every 30 seconds, but something like serf (http://www.serfdom.io/) may work too.
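A sketch of that Instagram-style approach in Go, assuming the redigo client; the address and key names are hypothetical:

package main

import (
	"log"
	"time"

	"github.com/garyburd/redigo/redis"
)

// pollConfig re-reads a config value from redis every 30 seconds and
// sends changes down a channel.
func pollConfig(addr, key string, updates chan<- string) {
	last := ""
	for {
		c, err := redis.Dial("tcp", addr)
		if err != nil {
			log.Println("config: dial:", err)
		} else {
			v, err := redis.String(c.Do("GET", key))
			c.Close()
			switch {
			case err != nil:
				log.Println("config: get:", err)
			case v != last:
				last = v
				updates <- v
			}
		}
		time.Sleep(30 * time.Second)
	}
}

func main() {
	updates := make(chan string)
	go pollConfig("localhost:6379", "bubbli:config", updates) // hypothetical names
	for v := range updates {
		log.Println("new config:", v)
	}
}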

Deployment doesn’t self-repair in the case of failure

When we deploy, I watch our aggregated logs (on http://papertrailapp.com) to make sure that the machines restart themselves properly and act accordingly if I see a flood of errors.

Tips and Tricks

Use Upstart for everything

It’s built into Ubuntu and can do everything Supervisord can, except the pretty web interfaces, which don’t really scale to many machines anyway.
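For reference, a minimal Upstart job for a Go binary is only a few lines; the names and port here are hypothetical:

# /etc/init/bubbli-web.conf (hypothetical)
description "bubbli web server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
env BUBBLI_ADDR=:8080
exec /usr/local/bin/bubbli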

Use IAM profiles to grant access to your debian packages

IAM profiles configure your EC2 machines to automatically receive credentials when they boot (the standard AWS libraries all know how to find them). This avoids shipping around your AWS keys, which is generally a bad idea.
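The policy attached to the instance’s role only needs read access to the package bucket; something like this should do (the bucket name is hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::your-package-bucket/*"
  }]
}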

Use Ubuntu cloud-init in userdata for server provisioning

If you don’t know it, cloud-init reads a text file that you pass to Ubuntu at first boot and uses it to bootstrap the machine. EC2 lets you supply this file as userdata whenever you boot a new machine.

One of the best parts of using cloud-init is that you can set up an autoscaling configuration on EC2 that includes your cloud-init file in the userdata, and can then scale up and down with a single command and no other orchestration.

Here’s what ours looks like:

#!/bin/sh
sudo apt-get -y update
sudo apt-get -y install gdebi-core python-setuptools && sudo easy_install awscli
mkdir -p /var/bubbli
cd /var/bubbli
aws s3 cp s3://[redacted].deb bubbli-server.deb --region us-east-1
sudo gdebi -n bubbli-server.deb
sudo start bubbli-web
sudo start bubbli-autoupdate
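If you manage autoscaling with the awscli, the same file can be passed straight in when creating the launch configuration; the names and AMI here are hypothetical:

aws autoscaling create-launch-configuration \
    --launch-configuration-name bubbli-web \
    --image-id ami-xxxxxxxx \
    --instance-type m1.small \
    --iam-instance-profile bubbli-server \
    --user-data file://cloud-init.sh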

Make a debian package with a Makefile

The art of debian packaging is actually really simple for a barebones .deb file, but it’s poorly documented. Here’s the target from our Makefile that we use for building the package:

package: all
	echo 2.0 > build/debian-binary
	echo "Package: bubbli-server" > build/control
	echo "Version:" 1.0-${VERSION} >> build/control
	echo "Architecture: amd64" >> build/control
	echo "Section: net" >> build/control
	echo "Maintainer: [Your name] <[Your email]>" >> build/control
	echo "Priority: optional" >> build/control
	echo "Description: [A description of your service]" >> build/control
	echo " Built" `date` >> build/control
	sudo rm -rf build/usr
	mkdir -p build/usr/local/bin
	mkdir -p build/etc/init
	cp build/bubbli build/usr/local/bin/bubbli
	cp upstart/*.conf build/etc/init
	sudo chown -R root: build/usr
	sudo chown -R root: build/etc
	tar cvzf build/data.tar.gz -C build usr etc
	tar cvzf build/control.tar.gz -C build control
	cd build && ar rc bubbli-server.deb debian-binary control.tar.gz data.tar.gz && cd ..

One slightly suboptimal thing about this is that .deb files require certain files to be owned by root, which at the moment we just handle with a `sudo chown -R root`.

When this is done, we end up with bubbli-server.deb, which can be uploaded to our S3 package repository of choice using the awscli.
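Before uploading, it’s worth sanity-checking the hand-rolled archive with dpkg-deb:

dpkg-deb --info build/bubbli-server.deb      # prints the control file
dpkg-deb --contents build/bubbli-server.deb  # lists the packaged files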

Use goagain for seamless restarts

As explained in the “Deploying is seamless” section.

Use Upstart to automatically upgrade servers when the package stored on S3 changes

Here’s our Upstart script, with the sensitive parts redacted:

start on startup
console log
respawn
script
        # Write changes to syslog
        exec >/dev/kmsg 2>&1
        mkdir -p /var/bubbli
        while true
        do
                # Check if the MD5 sum of the package is different than what we have
                if test -e /var/bubbli/bubbli-server.deb && aws s3api head-object --bucket [Your package bucket] --key [Your package key] --region us-east-1 | grep `md5sum /var/bubbli/bubbli-server.deb | awk '{print $1;}'` > /dev/null; then
                        # Found, do nothing
                        echo "Do nothing" > /dev/null
                else
                        echo "Updating bubbli-server"
                        aws s3 cp s3://[your package bucket]/[your package file] /var/bubbli/bubbli-server.deb --region us-east-1
                        sudo dpkg -i /var/bubbli/bubbli-server.deb
                        killall -s USR2 bubbli || echo "No existing bubbli instances running"
                fi
                sleep 10
        done
end script

It might make more sense to do this as a cron job, but Upstart is eventually supposed to replace cron anyway (and I happened to be in the midst of writing a bunch of other Upstart jobs when I needed this).

That’s all for now. Let me know if you have any issues or improvements to this setup — ben@bubb.li or @newhouseb on Twitter.
