How we use Jenkins Pipeline to standardize our Continuous Integration

Thomas Weingardt
Published in grandcentrix
Jan 22, 2019

At grandcentrix we have used Jenkins as our Continuous Integration (CI) and Continuous Delivery (CD) tool for many years. But as the number of projects grew and our demands on tooling increased, we had to come up with a solution that helps us establish more standards. That’s where Jenkins Pipeline comes into play.

What is Jenkins Pipeline?

Simply put, it’s a way to describe Jenkins jobs as code. Jenkins Pipeline provides a domain-specific language (DSL) whose vocabulary of steps is extended by a number of plugins. A list of the steps provided by the available plugins can be found in the Jenkins Pipeline steps reference.

Jenkinsfiles

One of the advantages is that you can commit your Pipeline scripts to the repository they belong to. Such a file is called a Jenkinsfile. Let’s start with a simple one.

Simple Pipeline script executing commands on a node
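The original gist is not reproduced here; based on the description below, a minimal sketch of such a script (the make target is illustrative) might look like this:

```groovy
// Jenkinsfile (scripted syntax)
node {
    // A simple step that prints a message
    echo 'Hello from the pipeline'
    // Execute a shell command that calls the make tool
    sh 'make build'
}
```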

In the first line we specify the Jenkins node (also known as agent, formerly called slave). Every command (also called a step) in the following block is executed on this node. In this example we just do a simple echo and execute a shell command, which calls the make tool.

The following Jenkinsfile is something that you might find more often. It includes so-called stages. A stage is a named group of steps that is visualized as one unit together with its status. You can name stages however you want.

Pipeline script using stages
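Again, the original gist is missing; a sketch with illustrative stage names and commands could look like this:

```groovy
// Jenkinsfile with stages (scripted syntax)
node {
    stage('Checkout') {
        // Check out the repository this Jenkinsfile belongs to
        checkout scm
    }
    stage('Build') {
        sh 'make build'
    }
    stage('Test') {
        sh 'make test'
    }
}
```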

This results in the visualization shown in the Jenkins UI of your job. You can see the different stages with their durations across the various runs, which makes finding errors easier.

Pipeline visualization with its stages on the Jenkins UI

Multibranch Pipeline

Another advantage of using Pipeline is that you can let Jenkins create new jobs automatically by using Multibranch Pipeline. You can configure the creation of new jobs when new branches or tags are created or when Pull Requests are opened on platforms like GitHub. Every job is then based on the configured Jenkinsfile and runs against the corresponding branch. We also use this for our Pull Requests. Additionally, it helps us create release builds automatically if a tag with a certain naming scheme is created.

Multibranch Pipeline with jobs for each Pull Request

Introducing a Shared Library

We want to follow the DRY (Don’t Repeat Yourself) principle and standardize our Jenkins jobs even more. Jenkins Pipeline includes the concept of shared libraries to share functionality between projects. That’s what we want to take a look at now.

Shared libraries can be declared globally or per folder. At grandcentrix we use a global library, which includes functionality for our different disciplines like Android, iOS and backend. This can be used by all projects on Jenkins by using a simple annotation in their Jenkinsfile. The library itself is just a git project, which will be checked out by Jenkins when the job is executed. The library must be configured in advance including the URL of the git repository and a name of your choice.

Using a shared library in a Jenkinsfile
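Reconstructed from the description below, the sample script could look like this (the function doSomething() is assumed to be defined in the library):

```groovy
// Jenkinsfile using the shared library 'gcx' at tag v1.0
@Library('gcx@v1.0') _

// Call a function defined in the shared library
doSomething()
```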

This is a sample script using a shared library. The first line imports the library; its format is @Library('<library name>@<branch name>'). In this sample we named the library gcx, followed by an @ and the ref you want to check out; here we used the tag v1.0. The function doSomething() is defined in this library, so we can call it right after the import.

Implementing a shared library means you have to use a subset of Groovy, which is currently the only available language for Pipeline. You cannot use the whole Groovy syntax because Jenkins needs to be able to serialize data to resume jobs in case Jenkins is restarted, and this cannot be guaranteed for every command.

Global variables and custom steps

Directory structure of a shared library
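The original illustration is not reproduced here; the layout described below can be sketched as follows (package and file names are illustrative, except log.groovy, which is used later in the article):

```
(library root)
├── src/                        # Groovy classes, organized in packages
│   └── com/
│       └── example/
│           └── Helper.groovy
├── vars/                       # global variables / custom steps
│   └── log.groovy
└── resources/                  # miscellaneous resource files
    └── config.json
```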

Shared libraries have a fixed directory structure that you need to stick to. They are divided into folders for source code (called src), global variables (called vars) and miscellaneous resources (called resources).

While the src folder contains Groovy source files organized in packages, as you probably know from Java projects, the vars folder contains source files at the top level. These files are available as global variables in your Jenkinsfile. This allows you to create scripts like this:

Jenkinsfile using a global variable
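A sketch of such a Jenkinsfile, with illustrative log messages:

```groovy
@Library('gcx@v1.0') _

node {
    // 'log' is a global variable provided by vars/log.groovy
    log.info 'Starting the build'
    log.error 'Something went wrong'
}
```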

This global variable is created by adding a file log.groovy to the vars folder. Pipeline simply takes the name of each file in this folder and creates a variable from it. Each file can contain multiple methods; in this example these are info and error. You can also use the source code in the src folder by simply importing its classes.

Definition of global variable
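The exact message formatting is an assumption, but the definition follows the pattern described above:

```groovy
// vars/log.groovy
def info(String message) {
    echo "INFO: ${message}"
}

def error(String message) {
    echo "ERROR: ${message}"
}
```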

With global variables you can also define your own steps, as shown in this example. A step contains only one method, called call, which is invoked automatically when you use the global variable as a method call. You can also declare parameters and define default values.

Global variable as custom step
Definition of custom step
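A sketch of such a step; the step name sayHello and its parameter are hypothetical:

```groovy
// vars/sayHello.groovy -- a custom step with a default parameter value
def call(String name = 'world') {
    echo "Hello, ${name}!"
}
```

In a Jenkinsfile it can then be invoked like any built-in step, e.g. `sayHello()` or `sayHello('Jenkins')`.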

You can also pass a closure as a step parameter. This allows you to run commands before or after a block gets executed:

Closure as custom step parameter
Definition of custom step with closure as parameter
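A sketch of the pattern; the step name withTimestamps is hypothetical:

```groovy
// vars/withTimestamps.groovy -- a custom step taking a closure parameter
def call(Closure body) {
    echo "Started at ${new Date()}"
    body()   // execute the block passed by the caller
    echo "Finished at ${new Date()}"
}
```

In a Jenkinsfile it would then be used as `withTimestamps { sh 'make build' }`.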

Shared library @ grandcentrix

With these tools in our hands we categorized the functionality for our different disciplines (Android, iOS and backend) by creating a global variable for each of them.

Two styles of using our shared library have prevailed: high-level functions, which work with a set of configuration parameters, and low-level functions, which give you the power to combine commands however you need them for your job.

High-Level functions

Build jobs of different projects are often quite similar to each other. This is why we implemented high-level functions, which are simple one-liners that can be configured.

Jenkinsfile using high-level functions from shared library
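The original gist is missing; a sketch of how such a Jenkinsfile might look, where the configuration keys and hook names are illustrative (only the function name buildPullRequest and the android variable appear in the article):

```groovy
@Library('gcx@v1.0') _

// A high-level one-liner configured via a map; keys are illustrative
android.buildPullRequest(
    projectName: 'my-app',
    buildCommand: './gradlew assembleDebug',
    // Hook closure for special behavior before the build
    beforeBuild: {
        echo 'Custom behavior before the build'
    }
)
```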

As you can see, this kind of library function requires the creation of a configuration map. We also added some hook closures in case someone needs to add special behavior.

All the plumbing, such as making sure that steps are executed on the correct node, error handling and so on, is done by high-level functions like buildPullRequest.

Low-Level functions

In various cases it can be necessary to have a custom flow for build jobs. It can also be a developer’s personal preference to have more control over the exact commands executed. So all the low-level functions that the high-level functions use internally can also be used directly in Jenkinsfiles. We also call this freestyle.

Jenkinsfile using low-level functions from shared library
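A sketch of the freestyle approach; the low-level function names and node label are illustrative:

```groovy
@Library('gcx@v1.0') _

// Handle the node and stages yourself, combining low-level functions
node('android') {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        // Hypothetical low-level function from the shared library
        android.runGradle('assembleDebug')
    }
    stage('Archive') {
        archiveArtifacts artifacts: '**/*.apk'
    }
}
```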

This requires you to do basic things yourself, like handling the node and using stages. But it also gives you full control over the commands.

Because it’s hard to cover all use cases with high-level functions, we currently use a mix of both styles depending on the use case. Usually we have one Jenkinsfile per use case without mixing the styles inside a single file. While it’s easier to use high-level functions for Pull Requests, it’s common to use low-level functions for release builds.

Conclusion

After we introduced this as our standard way of defining jobs on Jenkins, we got rid of all the chaos. Before, there were a lot of jobs that were hard to maintain: if you wanted to change any command or parameter, you had to change it manually in every job. Now we have Jenkinsfiles, which are committed to our repositories and are therefore also part of our review process.

Our very own community of tools experts is constantly working on our shared library to make sure that build jobs are easy and understandable for everyone.
