Jenkins Pipelines Shared Libraries

Michael Short · Published in spaceapetech · Jun 19, 2019

Introduction

In the past I’ve spoken about our shared libraries at Space Ape (Part 1 & Part 2). TL;DR, we have a LOT of shared libraries, and they are used across all of our games, from our live games like Transformers: Earth Wars and Fastlane through to our unreleased prototype projects.

Freestyle Jobs

At Space Ape we use Jenkins to manage and release builds for everything (client, server, DevOps and shared libraries). All of our shared libraries are built and released using traditional Jenkins Freestyle Jobs. However, we reached the point where Freestyle Jobs were causing us some friction.

For those who are unfamiliar, Freestyle Jobs are the traditional, predefined method of creating builds in Jenkins. Jobs are created by adding a set of steps, and Jenkins then runs through each of these steps sequentially. Steps must be supported by Jenkins itself or by a Jenkins plug-in. A lot of the time we tend to just add a shell step and write shell script in the job itself.

Jenkins Freestyle Job setup

Each of our shared library Freestyle Jobs is independent of the others. So if we wanted to roll out a new feature in the build pipeline, we would have to go through 60+ jobs! Or you might need to build an older version of a library, but the job has since changed and no longer supports the old build setup. Then there’s also the fact that people change each of these jobs independently, to the point where they end up doing vastly different things: some jobs would add release tags to Git, others would build documentation, and so on. We needed to consolidate all of this and move it into one easy-to-manage place.

Declarative vs Scripted

Before we begin, note that Jenkins supports two types of pipeline: declarative and scripted.

Declarative pipelines are ideal for straightforward build systems as they have a very simple, pre-defined structure. Traditionally people wrote scripted pipelines, which are written in Groovy. Scripted pipelines give you a massive amount of control and power: you can define and manipulate the entire build flow via code, which makes them ideal for more complex build systems. At Space Ape we use scripted pipelines.

First Attempt at a Jenkins Pipeline

The first thing we had to do was look through all of our existing Freestyle Jobs and define all the common steps that we would need. Once we had this, we took one of our simplest libraries and started writing a Jenkins pipeline for it.

Pipelines are written in Groovy. Start by placing a file at the root of your repository called “Jenkinsfile” — no file extension. When you build your project, Jenkins will grab this file and execute the code inside it.

The build is executed on an agent — a build node. It’s super simple to define an agent in a pipeline: all you do is provide a label, and all code within the curly braces will be executed on a node matching that label.

Defining a pipeline node
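A minimal sketch of a node block (the agent label here is illustrative; use whatever labels your agents have):

node('linux') {
    // everything inside this block runs on an agent carrying the "linux" label
    echo "Building on ${env.NODE_NAME}"
}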

Jenkins pipelines are made up of stages. A stage is simply a block of work. It makes sense to group related work into stages; here is a diagram of our shared code stages from Jenkins:

Space Ape’s shared library Jenkins Pipeline

All the stages within a node will be executed in sequence. Again, defining a stage is really simple:

Defining stages within a pipeline
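A sketch of a few stages inside a node (the stage names and shell commands are examples, not our actual steps):

node('linux') {
    stage('Checkout') {
        // pull the project source from SCM
        checkout scm
    }
    stage('Build') {
        // illustrative build command
        sh './build.sh'
    }
    stage('Test') {
        sh './run-tests.sh'
    }
}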

Out of the box, and on top of Groovy, Jenkins adds a number of basic helper steps. There’s also a Pipeline Utility Steps plugin which contains a lot of useful functions, e.g.

  • readJSON() / writeJSON()
  • unzip() / zip()
  • pwd()
  • sh()
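As a quick sketch of a couple of these in use (the file name is made up):

node('linux') {
    // pwd() and sh() are built-in pipeline steps
    def workspace = pwd()
    sh "ls ${workspace}"

    // readJSON comes from the Pipeline Utility Steps plugin
    def config = readJSON file: 'config.json'
    echo "Building version ${config.version}"
}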

Once we defined the first pipeline, we needed to start porting our other shared libraries over to use it. Our first instinct was to put the Jenkinsfile in a separate repository and then check it out for each project, but then we came across Shared Libraries.

Adding a New Pipeline Job

In Jenkins Blue Ocean select “New Pipeline”

Select “New Pipeline”

Now select your hosting service

Hosting services

And the organisation within that hosting service

Organisations

And finally select the repository that holds the project you want to add

Select your repo

Jenkins will then look through each of your branches and find the Jenkinsfiles within those branches. It also integrates really nicely with pull requests. Jenkins will automatically pick up new pull requests and build them for you, then report the status of the build back to your SCM system — we use GitHub, and the result is displayed in the pull request itself!

Pull requests in Blue Ocean

On top of this, Blue Ocean also offers a beautiful UI for your unit tests.

Unit Test viewer in Blue Ocean

Shared Library Structure

Jenkins Shared Libraries allow you to define and share pipeline code between projects. This is exactly what we needed. To create a shared library, create a new repository with the following structure:

(root)
+- src                          # Groovy source files
|   +- org
|       +- foo
|           +- Bar.groovy       # for org.foo.Bar class
+- vars
|   +- foo.groovy               # for global 'foo' variable
+- resources                    # resource files
|   +- org
|       +- foo
|           +- bar.json         # static helper data for org.foo.Bar
  • vars — this is where you put all of your global functions, the ones you will call from your project’s Jenkinsfile. For us, this is where the vast majority of our code lives.
  • src — this is a regular Java-style source code directory. All source code and classes in here can be imported into your pipeline using an import statement.
  • resources — any supporting resources for your library go here. As an example, we build documentation for all of our shared libraries. This documentation is then uploaded to S3. We also build docsets for Dash and Velocity. The template files used to build the documentation live here.

Adding Shared Libraries to Jenkins

To add your shared library to Jenkins, head to Manage Jenkins->Configure System and look for “Global Pipeline Libraries”. Here you can give your shared library a name (which will be used in future to import your library into your Jenkinsfiles), a default version (for us this is “master”) and also let Jenkins know where your shared library lives.

Using Shared Libraries

It’s really simple to import a shared library into your project’s Jenkinsfile.

Importing a shared library into your Jenkinsfile
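It looks something like this (the library name is whatever you configured in Jenkins; ‘my-shared-library’ here is just an example):

@Library('my-shared-library') _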

Note the underscore at the end is not a typo. It imports everything from the library into your Jenkinsfile. Now you can use the code in your shared library as though it were in your project. If you need to import multiple libraries into your pipeline, you simply provide the @Library annotation with a list of libraries to import.

Importing multiple shared libraries into your Jenkinsfile
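Again illustrative, with two example library names:

@Library(['my-shared-library', 'my-other-library']) _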

Moving Code Into a Shared Library

People shouldn’t have to spend hours setting up a new build in Jenkins. Therefore we decided to put almost all of our build code into the shared pipeline. Here’s an example Jenkinsfile from one of our projects:

Example project Jenkinsfile
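It boils down to a library import and a single function call, along these lines (the library name and parameters are examples; only sharedCodePipeline itself comes from our library):

@Library('my-shared-library') _

// sharedCodePipeline lives in the shared library's vars/ directory.
// The parameters below are illustrative, not our actual ones.
sharedCodePipeline(
    projectName: 'MyGameLibrary',
    slackChannel: '#builds'
)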

sharedCodePipeline is a global function that lives in our shared library. This function takes a Map of parameters and then creates the build pipeline for you. We have kept the pipeline as flexible as possible: if a project requires it, stages can be replaced or injected, and additional parameters can easily be added.

This is how we dynamically construct our build pipeline
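A simplified sketch of how a global function like this might build the pipeline from a map of stage names to closures (everything here apart from sharedCodePipeline and installNugetPackages, which are mentioned in this post, is an assumption):

// vars/sharedCodePipeline.groovy - simplified sketch, not the real implementation
def call(Map config = [:]) {
    // the default stages, in the order they should run
    def stages = [
        'Checkout'            : { checkout scm },
        'Install Dependencies': { installNugetPackages(config) },
        'Build'               : { sh './build.sh' },
        'Unit Tests'          : { sh './run-tests.sh' }
    ]

    // let projects replace or inject stages via the config map
    if (config.stageOverrides) {
        stages.putAll(config.stageOverrides)
    }

    node(config.agentLabel ?: 'linux') {
        stages.each { name, body ->
            stage(name) {
                body()
            }
        }
    }
}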

You might notice we have a number of functions that we call within our pipeline. As a quick example, this is how we install packages from NuGet using the installNugetPackages function.

Installing our Nuget dependencies
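A minimal sketch of what such a function could look like (the solution name and the use of the nuget CLI are assumptions):

// vars/installNugetPackages.groovy - illustrative sketch
def call(Map config = [:]) {
    // restore NuGet packages for the project's solution
    def solution = config.solution ?: 'MyProject.sln'
    sh "nuget restore ${solution}"
}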

As I mentioned above, we predefined our stages, and 99% of the time our shared libraries will use that structure. But we also allow people to override stages very easily. Say we need to check out an additional repository at the start of our build:

Overriding the checkout stage and adding additional build parameters
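Along these lines (the parameter names, repository URL and library name are all illustrative):

@Library('my-shared-library') _

sharedCodePipeline(
    // replace the default checkout stage with one that also pulls a second repository
    stageOverrides: [
        'Checkout': {
            checkout scm
            dir('extra-repo') {
                git url: 'git@github.com:example/extra-repo.git'
            }
        }
    ],
    // an additional parameter passed through to the build
    buildConfiguration: 'Release'
)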

Summary

We’re now in a place where it’s really easy to add new builds, add new features to our build pipeline, fix bugs in the pipeline and so on. On top of that, we’ve managed to consolidate all of our steps, so all of our projects run unit tests, build documentation, run library dependency checks etc. The Jenkins Pipeline learning curve is steep, and development and debugging can be slow, as you need to kick off a build, wait for it to fail, fix the issue and then start again. It can be frustrating, but once your pipeline is up and running it’s so much more flexible and usable than Freestyle Jobs.

If you have any questions or need any help, hit me up on Twitter, as documentation and knowledge on Jenkins Pipelines can still be quite hard to come by online.
