Managing Jenkins Pipelines

Teodor Todorov
Published in THG Tech Blog
Apr 15, 2019 · 5 min read

A common model of deploying through Jenkins is to have each project contain its own Jenkinsfile. We discuss the advantages and disadvantages of this, and describe an alternative approach using the Jenkins CLI and templates.

Our Continuous Integration and Continuous Delivery management system was focused on Jenkins for a long time. The original model was that each project defined its own Jenkinsfiles for its pipelines. This led to similar or even identical pipelines across projects.

This model had its benefits, as you could quickly see how to run, test and deploy a project. However, managing the Jenkins jobs and pipelines quickly turned into a chore. We encountered the following problems:

* Updating the pipelines took a long time, because it meant updating multiple files across multiple repositories.

* Since we were creating Jenkins jobs from the UI, there was no central repository giving an overview of how the set of applications was built.

With new projects on the horizon we decided to review how we manage CI/CD solutions.

The first step we took to make managing Jenkins jobs easier was to move away from creating jobs through the UI and instead configure them as code. To do that we decided to use the Jenkins CLI. There were other options available to us, but we wanted the flexibility to choose exactly what would be possible and to restrict what we believed to be bad practice. Another nice benefit was that the CLI isn't limited to managing jobs. One of the other options we considered was Jenkins Job Builder, which was good for managing jobs but not much else.

In the long run we want to split Continuous Integration (in Jenkins) from Continuous Delivery (using Spinnaker), but we still want a single tool that can create a pipeline across both Jenkins and Spinnaker, ensuring the two systems can communicate with each other. We felt that the Jenkins CLI was the only option flexible enough to allow for that integration later on.

Pipeline Templates

To start off with we templated the XML for a Pipeline job and a Multibranch Pipeline job; in our experience these are the only two job types we use. To help organise them we added a View template as well. To write the templates we chose Jinja2. It is a templating engine we were already familiar with from our use of Ansible, and it lets us work with undefined values. Handling undefined values was quite important for us, as it allowed us to set defaults for them in the template itself. The template for a pipeline is as follows:
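The original template isn't embedded here, but a minimal sketch of what a Jinja2-templated config.xml for a Pipeline job might look like is below. The element names come from the standard Pipeline (workflow-job) and Git plugins; the variable names, defaults and the trigger block are illustrative assumptions rather than our exact template.

<!-- pipeline.j2: illustrative sketch, not the exact template we use -->
<flow-definition plugin="workflow-job">
  <description>{{ description | default('') }}</description>
  <keepDependencies>false</keepDependencies>
  {% if trigger is defined %}
  <properties>
    <!-- trigger configuration, e.g. a GitLab push trigger; the exact elements
         depend on the plugin and Jenkins version -->
  </properties>
  {% endif %}
  <definition class="org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition" plugin="workflow-cps">
    <scm class="hudson.plugins.git.GitSCM" plugin="git">
      <userRemoteConfigs>
        <hudson.plugins.git.UserRemoteConfig>
          <url>{{ pipeline.repo }}</url>
        </hudson.plugins.git.UserRemoteConfig>
      </userRemoteConfigs>
      <branches>
        <hudson.plugins.git.BranchSpec>
          <name>{{ pipeline.branch | default('*/master') }}</name>
        </hudson.plugins.git.BranchSpec>
      </branches>
    </scm>
    <scriptPath>{{ pipeline.jenkinsfile | default('Jenkinsfile') }}</scriptPath>
    <lightweight>true</lightweight>
  </definition>
  <disabled>false</disabled>
</flow-definition>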

The end configuration options for a pipeline ended up looking like so:
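Again, the actual values file isn't reproduced here; the sketch below uses illustrative field names and assumes the template variables from the sketch above.

# values.yaml: illustrative sketch of the configuration options
pipeline:
  name: my-awesome-pipeline
  repo: git@gitlab.example.com:group/my-awesome-project.git
  branch: master              # optional, defaulted in the template
  jenkinsfile: Jenkinsfile    # optional
trigger:
  type: gitlab-push           # optional; without it the pipeline is run manually
description: Build and test my-awesome-project   # optional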

The only mandatory parameters are the ones in the pipeline section, keeping configuration as simple as possible. Providing a trigger is recommended too, but without one you can still run the pipeline manually.

Moving Jenkinsfiles to a central repository

The next step was to move the Jenkinsfiles to a central repository shared by multiple projects. We decided to have one pipelines repository per group of related projects. This encourages sharing the same build, test and lint steps across projects, as they all use the same steps in their CI/CD pipelines.

Since most of our projects are written in Go and built using a Makefile, we agreed on a set of targets that each Makefile needs to provide in order to use the pipeline. After the move we ended up with a single Jenkinsfile, and therefore a single job, for building a Go project. The Jenkinsfile uses information from the GitLab webhook to know which project to build and test, and when we update it, all the projects benefit.
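The post doesn't list the exact targets, but the contract amounts to something like the Makefile sketch below, assuming build, test and lint targets; the target names and commands are illustrative (recipe lines start with a tab).

# Makefile: minimal sketch of the targets the shared pipeline expects
build:
	go build ./...

test:
	go test ./...

lint:
	go vet ./...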

To be able to use a single job for building Go applications we had to make the job more flexible. This meant we couldn't use the declarative pipeline syntax and had to rely on Groovy for the more complex logic. For now, this lets us add an optional build step.
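A rough sketch of such a scripted (Groovy) Jenkinsfile is below. It assumes the GitLab plugin's gitlabSourceRepoHttpUrl and gitlabTargetBranch environment variables are available from the webhook, and that the optional step is toggled with a boolean parameter; these names are assumptions for illustration, not our exact file.

// Jenkinsfile: scripted pipeline sketch; variable and parameter names are illustrative
properties([
    parameters([
        booleanParam(name: 'RUN_BUILD', defaultValue: true, description: 'Run the optional build step')
    ])
])

node {
    stage('Checkout') {
        // The GitLab webhook tells us which repository and branch to build
        git url: env.gitlabSourceRepoHttpUrl, branch: env.gitlabTargetBranch ?: 'master'
    }
    stage('Lint') {
        sh 'make lint'
    }
    stage('Test') {
        sh 'make test'
    }
    if (params.RUN_BUILD) {
        // The build step is optional, which scripted syntax makes straightforward
        stage('Build') {
            sh 'make build'
        }
    }
}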

We treat the pipelines like any other software project: we version them and let people choose which version they want to use. This allows us to have different versions of a pipeline in Jenkins, and people can pick which one to use simply by adjusting the URL for their webhooks.

Now that we had this in place, we wanted to make it easier to create and update jobs in Jenkins. At that point, creating or updating a job meant running a command like:

j2 -f yaml pipeline.j2 values.yaml | java -jar jenkins-cli.jar create-job my-awesome-pipeline

But what we wanted was:

jenkins-cli create-project project.json

Since that was a simple task, we decided to write a bash script as a proof of concept. The structure of the project definition ended up looking like this:
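The exact schema isn't reproduced here; a sketch of such a project.json, with illustrative repository and value fields, might be:

{
  "name": "my-project-view",
  "pipelines": [
    {
      "name": "my-awesome-pipeline",
      "values": {
        "pipeline": {
          "repo": "git@gitlab.example.com:group/my-awesome-project.git"
        },
        "trigger": { "type": "gitlab-push" }
      }
    }
  ]
}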

Here the name is the view that will be created in Jenkins, and pipelines is an array where you specify each pipeline's name and the values to use for its template.

This allowed us to add and update pipelines by running a Jenkins pipeline. The script checks whether a pipeline already exists and updates it; if it doesn't exist, it creates it with the new values. Ultimately this brings us one step closer to having all our Jenkins jobs as code, letting us version them and roll back in disaster scenarios.
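A minimal sketch of such a proof-of-concept script is below, assuming the project.json layout above, the j2 CLI used earlier and jq for parsing; the Jenkins URL, credentials and view creation are omitted, and the file names are illustrative.

#!/usr/bin/env bash
# create-project.sh: proof-of-concept sketch for creating or updating pipelines
set -euo pipefail

project_file="$1"

jq -c '.pipelines[]' "$project_file" | while read -r pipeline; do
  name=$(jq -r '.name' <<< "$pipeline")
  jq '.values' <<< "$pipeline" > /tmp/values.json

  # Render the job XML from the Jinja2 template and this pipeline's values
  xml=$(j2 -f json pipeline.j2 /tmp/values.json)

  # Update the job if it already exists, otherwise create it
  if java -jar jenkins-cli.jar get-job "$name" > /dev/null 2>&1; then
    echo "$xml" | java -jar jenkins-cli.jar update-job "$name"
  else
    echo "$xml" | java -jar jenkins-cli.jar create-job "$name"
  fi
done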

Next steps

What comes next for us is splitting out the deployment part from Jenkins and moving it to Spinnaker. We also need to create a tool that can create pipelines across Jenkins and Spinnaker.

We’re recruiting

Find out about the exciting opportunities at THG here:
