Automating automated testing
Using Python and APIs to automate CI at Runscope
Runscope is a tool that lets me inspect what’s going on when one service talks to another service over an API. I’ve used this tool in the past to inspect interactions with GitHub Enterprise and various CI tools and services. So when Runscope told me they wanted to shore up their continuous integration, I jumped at the opportunity.
code → commit → test → build → deploy → monitor
Runscope uses a service oriented architecture (SOA). One of the pains with SOA is that for each service you need to think about the following: code → commit → test → build → deploy → monitor. In terms of setup this means:
- Writing code.
- Creating a new repository in source control.
- Creating a related CI task.
- Configuring deployment.
- Setting up monitoring.
So my assignment was to help automate some of this flow: when a new project is created in GitHub (their source control system), a corresponding Jenkins job should be created automatically.
A Jenkins Job
Runscope engineers primarily write Python and Go. One standardization they made is that every Python build could run the same run_python.sh script. This script handled linting, nosetests, and Python packaging. It intelligently “sniffed” the project to determine which steps needed to happen. This made all the Jenkins jobs uniform.
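The internals of run_python.sh aren’t public, but the sniffing amounts to checking which files a project carries. Here is a rough sketch of that idea in Python; the file names and commands checked are assumptions, not the actual script:

```python
import os


def sniff_build_steps(project_dir):
    """Inspect a checkout and decide which build steps apply.

    A sketch of the kind of "sniffing" run_python.sh did; the
    files and commands checked here are illustrative guesses.
    """
    steps = []
    if os.path.exists(os.path.join(project_dir, "requirements.txt")):
        steps.append("pip install -qr requirements.txt")
    steps.append("flake8 .")  # linting runs for every project
    if os.path.isdir(os.path.join(project_dir, "tests")):
        steps.append("nosetests")
    if os.path.exists(os.path.join(project_dir, "setup.py")):
        steps.append("python setup.py sdist")  # only packages get packaged
    return steps
```

Because every Jenkins job calls the same script, adding a new build step means changing one file rather than dozens of jobs.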
An alternative to a magic build script is a Makefile. You can use a common entry point and abstract away whatever project-specific tasks you might have. For example, a Python project might run:
pip install -qr requirements.txt
while a Go project runs:
go get ./...
Then all your Jenkins jobs look the same: each one just invokes the common entry point. Whether you use a script or a Makefile doesn’t matter; any kind of standardization here will help.
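To make the Makefile approach concrete, a Python project might expose a single test target like this (a sketch; the target name and steps are illustrative):

```make
test:
	pip install -qr requirements.txt
	nosetests
```

A Go project would implement the same test target with go get ./... and go test ./..., and every Jenkins job would simply run make test.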
Creating the Jobs
We ran across a fabulous tool, Jenkins Job Builder, that could take some YAML templates and turn them into Jenkins jobs. It had a fairly robust inheritance system that worked well for the Python projects, but fell short when it came to Go. The Go projects at Runscope were all unique in some way: some triggered other builds; some built releases; and some created artifacts.
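For reference, a Jenkins Job Builder template looks roughly like this — a simplified sketch, not Runscope’s actual configuration:

```yaml
- job-template:
    name: '{name}-master'
    scm:
      - git:
          url: 'git@github.com:runscope/{name}.git'
          branches:
            - master
    builders:
      - shell: './run_python.sh'

- project:
    name: example-service
    jobs:
      - '{name}-master'
```

Each project entry fills in the {name} placeholder, so one template can fan out into a uniform job per repository.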
The Runscope team also wanted each of their pull requests to be tested. So we set up a small service called Leeroy, which served as an intermediary between GitHub pull requests and Jenkins jobs. Leeroy needed its own configuration for each GitHub repository. We decided that when Jenkins was testing pull requests we needed a separate set of notification rules; we didn’t want to clutter the HipChat channels with every pull request. To handle this, Leeroy talked to a separate job for pull requests.
I wrote a script that did three things:
- It queried GitHub for all the Go and Python projects at Runscope.
- It used Jinja2 to generate config files for the Jenkins Job Builder, creating two jobs per repository: the standard “something new in master” job, and the pull request job that Leeroy uses.
- It generated a configuration for Leeroy to link repositories to jobs in Jenkins.
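A condensed sketch of what such a script can look like. The org name, file paths, job names, and the shape of Leeroy’s config are all assumptions for illustration; the real script was more involved:

```python
import json
import urllib.request

from jinja2 import Template

# One Jenkins Job Builder "project" entry per repository; the two job
# templates referenced here would be defined once in a shared YAML file.
JOB_TEMPLATE = Template("""\
- project:
    name: {{ name }}
    jobs:
      - '{name}-master'
      - '{name}-pull-requests'
""")


def fetch_repos(org, token):
    # Ask the GitHub API for the org's repositories, keeping Go and Python.
    req = urllib.request.Request(
        "https://api.github.com/orgs/%s/repos" % org,
        headers={"Authorization": "token %s" % token},
    )
    with urllib.request.urlopen(req) as resp:
        repos = json.load(resp)
    return [r["name"] for r in repos if r.get("language") in ("Go", "Python")]


def write_configs(repos):
    # A YAML file per repo for the Jenkins Job Builder...
    for name in repos:
        with open("jobs/%s.yaml" % name, "w") as f:
            f.write(JOB_TEMPLATE.render(name=name))
    # ...and one Leeroy config mapping each repo to its pull request job
    # (the exact shape of Leeroy's config is an assumption here).
    leeroy = {"builds": [
        {"github_repo": "runscope/%s" % name,
         "jenkins_job_name": "%s-pull-requests" % name}
        for name in repos
    ]}
    with open("leeroy.json", "w") as f:
        json.dump(leeroy, f, indent=2)
```

Note that {name} with single braces passes through Jinja2 untouched, which is convenient here: Jinja2 fills in the project name while leaving the Jenkins Job Builder’s own {name} placeholders intact.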
Handling the Go Projects
Go was difficult because each job had some unique property. After an assessment I discovered that those unique properties fell into a few buckets:
- Each build had a fairly unique build script.
- Some jobs built releases.
- Some jobs uploaded artifacts to S3.
- Some jobs triggered other builds.
We did the following to handle this:
- We required each Go project to have a jenkins.sh file that defined what would normally go in an “Execute Shell” section of Jenkins.
- Any project that made a release needed a jenkins_release.sh file. If that file was present we would define the job as having a release build and run that script on success.
- We moved artifact uploads into the jenkins.sh script rather than defining them in Jenkins.
- We stored builds to be triggered in a jenkins.yml file.
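With those conventions in place, describing a Go project reduces to a little filesystem sniffing before rendering the templates. A sketch in Python; the key names read out of jenkins.yml are assumptions:

```python
import os

import yaml  # PyYAML


def describe_go_project(project_dir):
    """Derive a Go project's job settings from the files it carries.

    A sketch of the convention checks; the jenkins.yml keys here
    are illustrative guesses.
    """
    desc = {
        "build_script": "./jenkins.sh",  # required for every Go project
        "has_release": os.path.exists(
            os.path.join(project_dir, "jenkins_release.sh")),
        "triggers": [],
    }
    manifest = os.path.join(project_dir, "jenkins.yml")
    if os.path.exists(manifest):
        with open(manifest) as f:
            data = yaml.safe_load(f) or {}
        desc["triggers"] = data.get("triggers", [])  # downstream jobs to kick off
    return desc
```

The returned dict feeds straight into the Jinja2 templates, so a project opts into release builds or downstream triggers just by adding a file.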
By doing these four things I was still able to run a single command and build out the templates that the Jenkins Job Builder needed to create jobs in Jenkins. I also took advantage of Jinja2’s rich templating features, which let us do a few things that we couldn’t easily do with the Jenkins Job Builder alone.
Automating it all
With all the pieces in place we could automate this easily with a cron job that:
- Ran the custom script to create our job configurations.
- Ran the Jenkins Job Builder to create or update the jobs.
- Restarted Leeroy as needed when there were new GitHub projects.
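The cron entry itself ends up being a one-liner along these lines (the paths and helper script are hypothetical; jenkins-jobs update is the Jenkins Job Builder’s sync command):

```
# Hourly: regenerate configs, sync Jenkins, restart Leeroy if needed
0 * * * * cd /opt/ci && python generate_configs.py && jenkins-jobs update jobs/ && ./maybe_restart_leeroy.sh
```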
This ensured that all new jobs were tested before and after pull requests got merged, without much extra work for the service creator.
If you like companies that prioritize internal dev tools, Runscope might be a fabulous fit. They also make one of the best developer tools for working with APIs, which came in quite handy: I created four or five buckets to keep track of the calls I was making.