Building a CI system for Go, with Jenkins

Theodoros Ntakouris
Published in Artisans of Tech
Aug 24, 2017

In this blog post I am going to outline the steps I took to set up a Jenkins CI pipeline for a new Go project. At the end, you will find the full Jenkinsfile as a gist.

Reader discretion advised: dangerous shell-fu ahead. (Mostly due to assorted Jenkins issues that have been flagged ‘Low Priority’ or not yet fixed.)

CI Build Status Tower

Outline

We will be using Jenkins 2.0 with a pipeline declaration that lives in the source repository. The code is versioned with git and hosted in a BitBucket repository. It doesn’t matter whether it is a private or public repository; public repos are generally easier to mess around with because they require no authentication. If you go the ‘hard’ way of having a private repo like I did, extra points if you use keys and OAuth tokens instead of username:password authentication!

Initializing

I won’t cover how to set up a Jenkins server, how to open inbound TCP port 8080 on your AWS instance, or how to create a new BitBucket repository. I will also assume that you have already installed the proper plugins into your Jenkins installation (Pipelines, Slack Notifications, and the BitBucket ones for build status and PR building) and that you have connected everything together with the proper tokens and credentials. This includes communication between Jenkins, BitBucket and Slack.

Two Words about the Jenkins box

Since Go was designed to be fast to both compile and run, I figured there is no need to set up multiple build slaves or integrate Docker containers into the build process. Scaling up should be sufficient, so this will be no Google-scale guide. If your project requires a cluster of computers to build and test, you should probably have a devops guy somewhere in your company :).

That is why I simply installed Go and git on the Jenkins box. The only thing you need to check is that the GOROOT environment variable is properly set. Go projects also depend on the GOPATH variable, but this is handled easily with workspaces and withEnv in the pipeline: every job starts in a fresh folder, which becomes the GOPATH for every process spawned inside that pipeline. This makes life much easier. This per-build ‘GOPATH’ dies after the pipeline has finished. If we don’t specify one, the machine’s regular GOPATH is used, which is not what we want here.

For those of you who are not familiar with Jenkins, it already lets us set the maximum number and lifetime of builds to keep, configure log rotation, and use other handy housekeeping features.

Some people prefer going the extra step and using Docker containers as build slaves, not necessarily in a clustered environment, just for better isolation, because their program’s tests may perform file operations or otherwise mess with the operating system without cleaning up.

Sidenote: this will only cover building pure Go projects. If you want to build and test embedded JavaScript or some other exotic configuration that you package into the produced binary, you are on your own, or wait for another blog post…

Moving On

Now that everything is set up, it’s time to get our hands dirty by writing some pipelines.

The first thing to be done is the workspace creation. The code speaks for itself (ws is the workspace step and withEnv sets environment variables for everything inside its block):
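
A minimal sketch of that wrapper; the build directory path below is just one reasonable choice, and the PATH+GO entry prepends GOPATH/bin to the PATH so that tools installed later in the pipeline are picked up:

```groovy
node {
    // One fresh directory per build, used both as the Jenkins workspace and as the GOPATH.
    def buildDir = "${env.JENKINS_HOME}/workspace/${env.JOB_NAME}-${env.BUILD_NUMBER}"

    ws(buildDir) {
        // GOPATH points at the throwaway workspace; "PATH+GO=..." prepends
        // ${GOPATH}/bin to PATH so tools installed with `go get` (dep, golint,
        // go2xunit) can be invoked further down the pipeline.
        withEnv(["GOPATH=${buildDir}", "PATH+GO=${buildDir}/bin"]) {
            // stages go here: Checkout, Pre Test, Test, Build, Publish
        }
    }
}
```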

Inside these blocks is where we are going to run our stages: Checkout, Pre Test, Test, Build, and Publish to BitBucket.

Before continuing, why a Checkout stage?

Well, because we’re triggering the build whenever a change is pushed to BitBucket, Jenkins is smart enough to clone the branch that changed onto a predefined path (typically JENKINS_HOME/project-name). We don’t want that; we want the repository cloned inside our new workspace. It would be possible to give Jenkins a relative path in the job configuration, but it’s preferable to keep the pipeline as portable as possible, so we will just add checkout scm to the checkout stage (yes, that’s it, and it’s called a multibranch pipeline). Because of the configuration (building on push), the correct branch will be cloned from the URL, with the credentials (if any) that you have configured in the pipeline job.

Copying files from that predetermined path to our workspace is a TERRIBLE idea! Concurrent builds break and it’s a hack, rather than a proper solution.

This gives me the opportunity to state some important things about the project structure that this CI pipeline is aimed at: although you can point the GOPATH at whatever subdirectory you want, you may face some problems with relative imports of your project. This pipeline suits a project that lives in one repository as a whole, not one split into multiple subpackages hosted as different repositories (typically meant to be used as libraries). Depending on what you want to achieve, you are free to edit the GOPATH as you wish, add more parameters to the commands, change directories around, or manage dependencies differently. Most problems can be tackled with a bit of shell-fu.

Long story short, your favourite butler Jenkins is going to clone your repository into the new workspace, and it’s your business to move it around and set an appropriate GOPATH so that the upcoming commands work.
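
For reference, the Checkout stage itself can be this small; how you then arrange the cloned files into the src/ layout depends entirely on your repository, so the hint in the comment is only one possible arrangement:

```groovy
stage('Checkout') {
    echo 'Checking out the repository'
    // Multibranch pipeline: clones exactly the branch that triggered the
    // build, straight into the current workspace.
    checkout scm

    // If the repository does not already mirror the ${GOPATH}/src layout,
    // shuffle it into place here, for example:
    // sh 'mkdir -p src && mv <your-top-level-dirs> src/'
}
```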

Pre Test

In this part, all the dependencies are pulled. I am using the new dep tool (and you should as well). Again, the code speaks for itself; notice that I also add GOPATH/bin to the PATH variable, in order to be able to use the downloaded binaries further down the road:
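
Something along these lines does the job; cmd/project is a placeholder for wherever your own Gopkg.toml lives:

```groovy
stage('Pre Test') {
    echo 'Pulling dependencies'
    sh 'go version'

    // Tooling used later in the pipeline. The binaries land in ${GOPATH}/bin,
    // which is already on the PATH thanks to the withEnv block above.
    sh 'go get -u github.com/golang/dep/cmd/dep'
    sh 'go get -u github.com/golang/lint/golint'
    sh 'go get -u github.com/tebeka/go2xunit'

    // dep works relative to the current directory, so cd to the project first.
    sh 'cd $GOPATH/src/cmd/project && dep ensure'
}
```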

(Yes, I follow the convention of having the starting point of my programs under GOPATH/src/cmd/, just in case I want to use anything as a library.)

Notice the last line: dep ensure (like the commands you are going to see below, in the testing stage) does not accept path targets as parameters, so we have to cd into the desired directory and run the command from there.

Testing

This is where testing, linting, vetting and race-condition detection take place.

If any of these sh commands exits with a non-zero value, the build will abort and fail. That means you can use go test as-is, without exporting the test results to Jenkins. If you do want a view of the test results, you can pipe the test output to go2xunit, which I installed in the Pre Test stage. That tool exports xUnit XML test reports, a format that Jenkins supports.

You should avoid testing anything inside the vendor directory. That’s why, instead of just doing go test, we do go test $(go list ./... | grep -v /vendor/). In this particular example, my project is self-contained and I check the vendor directory into the repository (to make dep ensure faster; if I ever want to update, I can change the Jenkinsfile to run it with the -update flag), so I want to avoid linting, testing and vetting anything that is not mine. That’s how I came up with this command (remember, I use BitBucket): go list ./... | grep -v /vendor/ | grep -v github.com | grep -v golang.org. This effectively lists all of your project’s package directories with paths relative to $GOPATH/src, leaving out the golang.org packages and libraries pulled from GitHub. You can mess around further by chaining greps or using other options (perhaps to allow particular GitHub projects/libs to be tested).

Notice that I redirect that output to a file, so I can reprint the paths relative to just $GOPATH, which is where we are going to run the testing and linting commands from. The corrected paths are stored in a variable, for productivity, reusability and readability’s sake.

Then I print each line of the file prefixed with ./src/, so that the commands run on the correct paths. One more thing to notice is the use of triple quotes, sh """content""", so that the multi-line commands, weird symbols and all, get passed through to the shell without extra escaping gymnastics.
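
Putting all of that together, a Test stage along these lines works; projectPaths is just a scratch file name I picked, and the greps should match your own hosting:

```groovy
stage('Test') {
    // List the project's own packages (paths relative to $GOPATH/src),
    // dropping vendored and third-party code, and keep them in a scratch file.
    sh 'cd $GOPATH/src && go list ./... | grep -v /vendor/ | grep -v github.com | grep -v golang.org > $GOPATH/projectPaths'

    // Prefix every line with ./src/ and join them into one space-separated
    // string, so the commands below can run from $GOPATH itself.
    def paths = sh(returnStdout: true, script: 'awk \'{printf "./src/%s ", $0}\' $GOPATH/projectPaths').trim()

    echo 'Vetting'
    sh """cd \$GOPATH && go vet ${paths}"""

    echo 'Linting'
    sh """cd \$GOPATH && golint ${paths}"""

    echo 'Testing (race detector on) and exporting an xUnit report'
    // -fail makes go2xunit exit non-zero if any test failed, so the build still aborts.
    sh """cd \$GOPATH && go test -race -v ${paths} | go2xunit -fail -output tests.xml"""
    // Optionally let Jenkins pick the report up: junit 'tests.xml'
}
```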

Building

Basically, build the thing. You can build for any platform you want. I went the extra mile and added the -s linker flag (via -ldflags), which strips off the debugging symbols. That is a little extra protection for the code, since our repository is private. The build is going to be super fast because the necessary files were already produced by running go test!

Actually, building is only needed for the next phase (publishing binaries): if something had failed at compile time, the tests would not have been able to run in the first place.
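
A sketch of the Build stage; once more, cmd/project and the output name are placeholders:

```groovy
stage('Build') {
    echo 'Building executable'
    // The -s linker flag strips the symbol table and debug information from
    // the produced binary; drop it if you want to keep the debugging symbols.
    sh 'cd $GOPATH/src/cmd/project && go build -ldflags "-s" -o $GOPATH/bin/project'
}
```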

Publishing to BitBucket Downloads

Locate the produced executables and push them to BitBucket via a POST request (yes, we still use curl). You can produce archives of your builds, build for different platforms, and choose which branches should be built and uploaded; for example, you might only want to publish your release and/or snapshot branches. We’ll just find out how to get the branch name and commit hash, tarball the project, and ship it to BitBucket Downloads with a simple POST request performed by cURL (authenticating either with user:pass or a key). The --fail flag makes curl exit with a non-zero code if the HTTP response is not a success.
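
One way to wire this up is sketched below; the credential id, the OWNER/REPO_SLUG part of the URL and the project name are placeholders, and the endpoint is BitBucket Cloud’s 2.0 Downloads API:

```groovy
stage('Publish') {
    // The branch name comes for free in a multibranch pipeline; the short
    // commit hash comes from the checked-out repository itself.
    def commit = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    def artifact = "project-${env.BRANCH_NAME}-${commit}.tar.gz"

    // Tarball the binary produced by the Build stage.
    sh """cd \$GOPATH/bin && tar -czf ${artifact} project"""

    // --fail makes curl exit non-zero (failing the build) on a non-2xx response.
    // 'bitbucket-downloads' is an assumed secret-text credential holding user:app-password.
    withCredentials([string(credentialsId: 'bitbucket-downloads', variable: 'BB_AUTH')]) {
        sh """cd \$GOPATH/bin && curl --fail -u \$BB_AUTH -X POST \\
            -F files=@${artifact} \\
            https://api.bitbucket.org/2.0/repositories/OWNER/REPO_SLUG/downloads"""
    }
}
```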

I chose BitBucket Downloads, but you can just as well upload to your own server via FTP or SSH. This would also be the stage for publishing the produced Docker image, if you are using Docker.

You might want to add a couple of stages or do this whole thing differently. Some people prefer having a ‘production’ branch and developing each feature on its own branch. The job of the CI would then be to merge into master, check whether master builds, and if it does, push to production. This can work well but is sometimes error-prone. Many prefer to just open pull requests from master to release manually.

One Last Thing

We didn’t cover notifying BitBucket or Slack about the build status. It’s time to act smart. First, we’ll define a method that sends messages to Slack with pretty colours and whatnot:
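
A sketch of such a helper, built on the Slack plugin’s slackSend step; the colours and message format are simply my own picks:

```groovy
// Sends a colour-coded message to Slack via the Slack Notification plugin.
def notifyBuild(String buildStatus) {
    buildStatus = buildStatus ?: 'SUCCESS'

    def colorCode = '#FF0000'            // red for failures
    if (buildStatus == 'STARTED') {
        colorCode = '#FFFF00'            // yellow while running
    } else if (buildStatus == 'SUCCESS') {
        colorCode = '#00FF00'            // green when everything passed
    }

    def message = "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
    slackSend(color: colorCode, message: message)
}
```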

We will use that to send notifications to Slack (build status, job name, build number, link to the job) whenever we want: I decided to notify when the build starts, fails or succeeds. The same strategy works for BitBucket (in-progress, failed, successful), although we won’t need a helper method for that one.

Here is the smart part: because the build aborts if any process exits with a non-zero code, we can wrap the whole pipeline in a try-catch-finally block and notify about the build status there. Check it out:
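
Roughly like this; bitbucketStatusNotify comes from the BitBucket Build Status Notifier plugin, and notifyBuild is the Slack helper defined above:

```groovy
node {
    try {
        notifyBuild('STARTED')
        bitbucketStatusNotify(buildState: 'INPROGRESS')

        // ... the ws/withEnv wrapper and all the stages described above ...

        bitbucketStatusNotify(buildState: 'SUCCESSFUL')
    } catch (e) {
        // Any sh step that exits non-zero (or any other error) ends up here.
        currentBuild.result = 'FAILURE'
        bitbucketStatusNotify(buildState: 'FAILED')
        throw e
    } finally {
        // currentBuild.result is still null if nothing went wrong.
        notifyBuild(currentBuild.result ?: 'SUCCESS')
    }
}
```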

This way we get informed about the build status on Slack, and the fancy build passed/failed badges show up on every commit in BitBucket.

Go builds and tests run pretty fast (except for grabbing dependencies). You could also add event keys/names to the status notification so you can tell from the BitBucket UI that the build is not just ‘In Progress’ but, say, ‘Testing’. Personally, I see no use in this, because either way (not compiling or tests failing) human interaction is required, not to mention PRs without tests (or without passing builds) :) .

Full Jenkinsfile

Running on EC2

Here it is:
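
Each stage body below is the corresponding snippet from the sections above; treat this condensed outline as a reconstruction of how the pieces fit together rather than a byte-for-byte copy of the gist:

```groovy
def notifyBuild(String buildStatus) { /* Slack helper from the previous section */ }

node {
    def buildDir = "${env.JENKINS_HOME}/workspace/${env.JOB_NAME}-${env.BUILD_NUMBER}"

    try {
        notifyBuild('STARTED')
        bitbucketStatusNotify(buildState: 'INPROGRESS')

        ws(buildDir) {
            withEnv(["GOPATH=${buildDir}", "PATH+GO=${buildDir}/bin"]) {
                stage('Checkout') { /* checkout scm */ }
                stage('Pre Test') { /* go get dep, golint, go2xunit; dep ensure */ }
                stage('Test')     { /* go vet, golint, go test -race | go2xunit */ }
                stage('Build')    { /* go build -ldflags "-s" */ }
                stage('Publish')  { /* tar + curl to BitBucket Downloads */ }
            }
        }

        bitbucketStatusNotify(buildState: 'SUCCESSFUL')
    } catch (e) {
        currentBuild.result = 'FAILURE'
        bitbucketStatusNotify(buildState: 'FAILED')
        throw e
    } finally {
        notifyBuild(currentBuild.result ?: 'SUCCESS')
    }
}
```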

I hope you enjoyed reading and learning from this blog post as much as I enjoyed writing it and learning from doing it. Give me a heart or clap if you wish. Happy coding!
