Automatic release of a Python command line application with Jenkins Pipeline

Jenkins Pipeline, Python and Docker Altogether

How to write a Jenkins pipeline that automates the release of a command line tool in Python?

Laurent Prévost
Geek Culture


Photo by Chris Ried on Unsplash

Introduction

In my current position, we faced an issue with our GitHub Enterprise appliance. We needed to synchronize users from our Active Directory with the appliance on various levels:

  • Allow users to log into the GitHub Enterprise appliance
  • Allow users to access some organizations
  • Allow users to be members of teams

Part of this synchronization already exists in GitHub Enterprise, but not entirely in the way we wanted to manage our users and permissions.

We wrote a “small” command line tool to do the job, along with other useful commands such as archiving and moving repositories from the command line. It was also the opportunity I had been waiting for to get my hands into Python.

I have a strong background in Java, excellent knowledge of Ruby (especially Ruby on Rails) and JavaScript/TypeScript, but not of Python. It was the occasion to build a tool with some best practices like unit testing and CI/CD.

In this article, we will not discuss whether it is a good idea to build such a tool; that is not the goal. We will focus on setting up a Jenkins pipeline, based on Docker, to build our Python application.

Context

We started by writing a bit of code. This code includes the unit tests and various configurations to build and test the project.

At some point, it is time to build and distribute the tool. We will need to install it on some management server or elsewhere to use it. For that, we need a tool chain to build, test, package and distribute the command line tool.

Our first concern was to know where we would run the tool. The server where the tool will run already has some Python applications on it, and we did not want to manage multiple Python versions or deal with dependency installations.

We were looking for a “lighter” way to run our command line tool. This is where Docker comes into the equation. Docker is a friendly way to isolate runtimes. Having a self-contained container is a significant advantage: the host only has to run Docker and a minimal set of requirements.

Another concern is the Jenkins slaves, where no Python is installed. We would like to avoid maintaining the Python stack on the Jenkins slaves (with or without the help of Jenkins tools). Docker containers came to mind again: we could run builds and tests inside a container.

At this stage, we did two things in parallel:

  • Creating our first Jenkins Pipeline,
  • Running some Docker experiments

Prerequisites

To get the full picture of the present material, we advise having some knowledge in the following fields:

  • Jenkins pipelines (DSL, Groovy, credentials, …)
  • SonarQube from SonarSource (code quality analysis)
  • Git as SCM tool
  • Docker to build images, run containers, …

In the references section, you will find various links to the tools’ documentation.

Jenkins Pipeline — First Version

The first version of our Jenkins pipeline aims to build and test the project’s code. It makes our pull request checks turn green or red. There are four successive steps:

  • Checkout the code
  • Build the Docker image
  • Run the tests
  • Run the code quality analysis

In addition, there is a final post-actions step to help clean everything up.

Jenkins Pipeline View — First Version

Jenkinsfile

The Jenkinsfile uses a dedicated Groovy DSL. It also includes DSL statements from additional plugins, and parts of the file contain raw Groovy statements. In the end, it describes how to build and test the project, much like a Makefile or a bash script would.

Jenkinsfile — First Version
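
As a minimal illustration (not the project’s exact Jenkinsfile; the credentials ID, SonarQube host and cleanup commands are assumptions), a declarative pipeline covering these stages could look like this:

pipeline {
    agent any

    stages {
        stage('Checkout') {
            steps {
                // Multibranch jobs provide the right revision via `scm`
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'docker build . -t ghcli:py'
            }
        }
        stage('Test') {
            steps {
                // Named container so the next stage can reuse its volumes
                sh 'docker run --tty --name ghcli ghcli:py /usr/bin/make test'
            }
        }
        stage('Quality') {
            steps {
                withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_LOGIN')]) {
                    sh '''
                        docker run --rm \
                          -e SONAR_HOST_URL=https://<sonarQubeHost> \
                          -e SONAR_LOGIN=$SONAR_LOGIN \
                          --volumes-from ghcli \
                          sonarsource/sonar-scanner-cli \
                          sonar-scanner -Dsonar.branch.name=$BRANCH_NAME
                    '''
                }
            }
        }
    }

    post {
        always {
            // Clean up the container, the image and the workspace
            sh 'docker rm -f ghcli || true'
            sh 'docker rmi -f ghcli:py || true'
            deleteDir()
        }
    }
}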

Dockerfile

The Dockerfile is the recipe to prepare the Docker image, with several steps executed successively. We install additional dependencies, copy the source code, and install the project dependencies.

✳️ The last statement in the Docker file describes which path (/usr/src) to mount automatically when used with --volumes-from. We will see later how we are using it.

Dockerfile — First Version

Pipeline Execution

Let’s look in a bit more detail at what happens when we run the pipeline commands.

Pipeline Execution — Checkout

In the pipeline, the first step is “Checkout”. There is nothing special to say: it checks out the project’s source code so it can be built.

Pipeline Execution — Build

The command to build the Docker image is in the “Build” step of the Jenkins pipeline. The run is straightforward.

❯ docker build . -t ghcli:py
[+] Building 30.2s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 373B 0.0s
...
=> [7/8] RUN python -m pip install -e ".[test]" 27.3s
=> [8/8] COPY . . 0.3s
=> exporting to image 0.7s
=> => exporting layers 0.7s
=> => writing image sha256:42a0d2b837...bb5fdef2994e8 0.0s
=> => naming to docker.io/library/ghcli:py

Pipeline Execution — Test

With the Docker Image, we can run the tests through a container. It corresponds to the step “Test” from the Jenkins pipeline.

✳️ Note --name ghcli, which gives a name to the Docker container. It will let us use the container as a volume container in the next Jenkins pipeline step.

✳️ You can observe there are two test run sections. The first one is the syntax validation by flake8, and the second is the unit test execution. It is interesting to see that the syntax validation is part of the tests.

❯ docker run --tty --name ghcli ghcli:py /usr/bin/make test
find . -name "*.pyc" -delete
find . -name "__pycache__" | xargs -I {} rm -rf {}
rm -rf ./.pytest_cache
rm -rf ./build/*
rm -f ./.coverage
pytest . --flake8
======================= test session starts ========================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, ...
collected 440 items
conftest.py . [ 0%]
ghcli/commands/command.py . [ 1%]
...
tests/utils/test_filters.py ....... [ 97%]
tests/utils/test_utils.py .......... [100%]
======================= 440 passed in 10.82s =======================
pytest tests --cov --cov-report term --cov-report xml:build/coverage.xml --cov-report html:build/htmlcov --html build/test_report.html --junitxml build/unit-report.xml
======================= test session starts ========================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, ...
collected 296 items
tests/commands/test_archive_repository_command.py ...... [ 2%]
tests/commands/test_command.py ....................... [ 11%]
...
tests/utils/test_filters.py ...... [ 96%]
tests/utils/test_utils.py ......... [100%]
-------- generated xml file: /usr/src/build/unit-report.xml ---------
---- generated html file: file:///usr/src/build/test_report.html -----
--------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover
--------------------------------------------------------------------
.../create_user_command.py 19 0 2 0 100%
...
.../utils.py 28 0 8 0 100%
--------------------------------------------------------------------
TOTAL 1738 0 392 0 100%
Coverage HTML written to dir build/htmlcov
Coverage XML written to file build/coverage.xml
======================= 296 passed in 6.19s ========================

There is no difference between this Docker run and a test run in a local environment. The interesting point is the possibility of running the tests without setting up a Python environment on the host. We need the Docker engine to run the containers, but nothing more.

Pipeline Execution — Quality

The last step in the Jenkins pipeline is the “Quality” step. It consists of running the SonarQube scanner command line utility from a Docker container. It runs the code analysis and sends the results to the SonarQube server.

✳️ In the next command, you can observe that we mount all the volumes from the container ghcli created by the previous Docker command. The --volumes-from ghcli option mounts /usr/src from the previous container into the SonarQube container. We need this to analyse the project’s source code.

⭐ If you want to dig deeper into Docker volumes, you can read the Docker volumes documentation or the following article.

❯ docker run \
--rm \
-e SONAR_HOST_URL=https://<sonarQubeHost> \
-e SONAR_LOGIN=<sonarQubeLoginCreds> \
--volumes-from ghcli \
sonarsource/sonar-scanner-cli \
sonar-scanner -Dsonar.branch.name=<branchName>
INFO: Scanner configuration file: /opt/.../sonar-scanner.properties
INFO: Project root config file: /usr/src/sonar-project.properties
INFO: SonarScanner 4.6.2.2472
INFO: Java 11.0.11 AdoptOpenJDK (64-bit)
INFO: Linux 3.10.0-1062.9.1.el7.x86_64 amd64
...
INFO: 144/144 source files have been analyzed
INFO: Python test coverage
INFO: Parsing report '/usr/src/build/coverage.xml'
...
INFO: Read 757 type definitions
INFO: Reading UCFGs from: /usr/src/.scannerwork/ucfg2/python
INFO: 18:50:42.96343 Building Runtime Type propagation graph
INFO: Analyzing 3359 ucfgs to detect vulnerabilities.
...
INFO: ------------- Check Quality Gate status
INFO: Waiting for the analysis report to be processed (max 300s)
INFO: QUALITY GATE STATUS: PASSED - View details on https://...
INFO: Analysis total time: 19.291 s
INFO: --------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: --------------------------------------------------------------
INFO: Total time: 23.957s
INFO: Final Memory: 8M/34M
INFO: --------------------------------------------------------------

Pipeline Execution — Result

With this pipeline in place, we now have a project that builds on Jenkins. The pipeline blocks the pull request if a failure happens during the execution.

Pull Request Blocked by Pipeline Execution Failure

This is not bad. Each time we push a branch and create a pull request, it triggers a pipeline execution. You can read Appendices I and II to see the configuration for integrating Jenkins and GitHub (webhooks, multibranch pipelines, …).

Jenkins Pipeline — Second Version

In the second version, we will go further and put in place a mechanism to automatically create a release of the command line tool. The release is created when we merge a branch into the main branch.

It follows the trend of continuous delivery, where you deploy as soon as a feature is ready to go to production. In this context, we will not deploy anything, but we will build a new version of the tool each time a feature is ready.

Version Increment

The first thing we were looking for was a tool to automatically manage the increment of our version numbers. During our investigations, we discovered bump2version, which does exactly what we were looking for.

The tool allows us to configure how to recognize a version in our files, to decide which files we want to update, and to perform the update in those files. We are free to choose the version format ourselves.

Bump2version Configuration

The setup of bump2version is easy. It requires a small file called .bumpversion.cfg, which lists the files to update when we use the tool.

.bumpversion.cfg File

The first file updated is a flag file called VERSION. This file will make it easier to get the version in the Jenkins pipeline.

VERSION file

The second file updated is the script of the command line tool. It contains the version directly. The tool bump2version can update any file.

scripts/ghcli file

The tool also updates its configuration file.

It is easy to run the tool. Look at the following command.

❯ bump2version --current-version 0.1 --allow-dirty minor

Pipeline Flows

Our Jenkins pipeline must cover two use cases:

  • Building a new release when branches are merged into the main branch
  • Building and validating branches and pull requests without creating a new release

Below, you can view the Jenkins pipeline flow for the first use case. We defined the following steps: checkout, prepare, test, qualify, version, publish, and release.

Jenkins Release Pipeline Flow

In the second use case, we skip the version, publish, and release steps. They exist, but a condition prevents them from running, as sketched below.

Jenkins Build Pipeline Flow
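
To give an idea of the mechanism (a minimal sketch, not the project’s exact code), each release-only stage is wrapped in a when block. Here, isRelease is a hypothetical flag computed in the prepare stage: true only on the main branch when HEAD is a merge commit.

stage('Version') {
    when {
        // Skip this stage entirely for branch and pull request builds
        expression { isRelease }
    }
    steps {
        echo 'Version bump commands go here (detailed later)'
    }
}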

Pipeline

A lot has changed since the previous version of the pipeline, but the base elements stayed the same.

Jenkins Pipeline — Second Version (without stages)

In the next sections, we will dig into the details of each stage. The full pipeline code is in Appendix III.

Pipeline — Stage “Checkout”

The checkout stage is straightforward. It retrieves the source code of the project from the expected branch.

ℹ️ To retrieve the branch name, we use two Jenkins environment variables. When the job builds a change (a pull request), the variable CHANGE_BRANCH is available and filled with the correct branch. Otherwise, we need to use BRANCH_NAME.

Pipeline — Stage “Checkout”
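
A hedged sketch of what this stage can look like (the repository URL and credentials ID are placeholders, not the project’s values):

stage('Checkout') {
    steps {
        script {
            // Pull request (change) jobs expose the real source branch in
            // CHANGE_BRANCH; plain branch jobs only have BRANCH_NAME
            branchName = env.CHANGE_BRANCH ?: env.BRANCH_NAME

            deleteDir()
            git url: 'git@<host>:<org>/<repo>.git',
                branch: branchName,
                credentialsId: 'github-ssh-key'
        }
    }
}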

Pipeline Execution — Stage “Checkout”

The execution result of the checkout stage contains several git commands. The git/github plugins handle the commands for us.

[Pipeline] { (Checkout)
[Pipeline] script
[Pipeline] {
[Pipeline] deleteDir
[Pipeline] git
...
Cloning the remote Git repository
Avoid second fetch
Checking out Revision 275...3e7 (refs/remotes/origin/...)
Commit message: "..."
Cloning repository git@<host>:<org>/<repo>.git
> git init /var/lib/jenkins/workspace/_... # timeout=10
Fetching upstream changes from git@<host>:<org>/<repo>.git
...
> git checkout -f 275...3e7 # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git checkout -b ... 275...3e7 # timeout=10
> git rev-list --no-walk 21b...785 # timeout=10
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Prepare”

In the prepare stage, we initialize variables and run the build commands.

ℹ️ We need to initialize some variables that we reuse across the pipeline. We also define the flag that tells us whether the job run is a release or not.

ℹ️ We build the Docker Image used during the pipeline and prepare a container to use with --volumes-from later.

ℹ️ The following command detects whether the last commit has more than one parent. If a commit has only one parent, it cannot be a merge commit. It will probably not work when you use squash or rebase merge strategies.

❯ git rev-parse --verify -q HEAD^2

Pipeline — Stage “Prepare”
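
A minimal sketch of the prepare stage could look as follows. The variable names (currentVersion, isRelease, dockerImage, dockerContainer) are hypothetical, and generating a unique suffix is an assumption based on the ghcli_<id> names visible in the logs:

stage('Prepare') {
    steps {
        script {
            // Version currently stored in the VERSION flag file
            currentVersion = sh(script: 'cat VERSION', returnStdout: true).trim()

            // HEAD^2 only resolves when HEAD is a merge commit
            def mergeCommit = sh(
                script: 'git rev-parse --verify -q HEAD^2 > /dev/null && echo y || echo n',
                returnStdout: true
            ).trim() == 'y'
            isRelease = mergeCommit && env.BRANCH_NAME == 'main'

            // Unique names so parallel builds do not collide
            def runId = UUID.randomUUID().toString().replace('-', '')
            dockerImage = "ghcli:ghcli_${runId}"
            dockerContainer = "ghcli_${runId}"

            sh "docker build . -t ${dockerImage}"
            // Container used later as a volume container (--volumes-from)
            sh "docker container create -v /usr/src --name ${dockerContainer} ${dockerImage}"
        }
    }
}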

The Dockerfile is not so different from before. We added the pip upgrade to the same Docker statement as the apt packages.

We have to copy several base files before we can install the dependencies. Once done, we can install the development dependencies.

Dockerfile File

We can now copy all the project sources. We use a .dockerignore file to avoid copying too many useless files or files that could bring uncertainty.

.dockerignore File

If we compare it to the .gitignore, it contains more ignore statements. In the .dockerignore file, we excluded files that are tracked by Git but not used by the Jenkins pipeline inside the Docker containers.

.gitignore File

Finally, we did some cleanup and consolidation in the setup.py file which is used to install the development dependencies. The development dependencies also include the “production” dependencies.

setup.py File

Pipeline Execution — Stage “Prepare”

In the prepare stage output, we can see the different commands that create the Docker image and container.

[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ cat VERSION
[Pipeline] sh
+ git rev-parse --verify -q 'HEAD^2'
+ echo y
[Pipeline] sh
+ docker build . -t ghcli:ghcli_ad03ab99f73c4f1b919bf8414b22b879
Sending build context to Docker daemon 891.4kB
Step 1/7 : FROM python:3.8.10-buster
---> e7d3be492e61
...
Step 7/7 : COPY . .
---> 075f58c2f430
Successfully built 075f58c2f430
Successfully tagged ghcli:ghcli_ad03ab99f73c4f1b919bf8414b22b879
[Pipeline] sh
+ docker container create -v /usr/src \
--name ghcli_ad0...879 ghcli:ghcli_ad0...b879

306...cd6
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Test”

We also run the tests during the test stage, as we did in the previous version of the pipeline.

ℹ️ We run the tests with the Docker volume mounted via --volumes-from. It may not seem useful, but you will see later that we had to do this to store and keep the test result reports. We need these reports to run the qualify stage.

Pipeline — Stage “Test”
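
A hedged sketch of the test stage, reusing the dockerImage and dockerContainer variables from the prepare sketch above:

stage('Test') {
    steps {
        script {
            // Reports written to /usr/src/build land in the volume of the
            // prepare-stage container and survive for the qualify stage
            sh "docker run --tty --rm --volumes-from ${dockerContainer} ${dockerImage} /usr/bin/make test"
        }
    }
}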

Pipeline Execution — Stage “Test”

There is nothing new for the test stage. The result is the same as the previous Jenkins pipeline.

[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ docker run --tty --rm --volumes-from ghcli_ad0...879 ghcli:ghcli_ad0...879 /usr/bin/make test
find . -name "*.pyc" -delete
...
rm -f ./.coverage
pytest . --flake8
======================= test session starts ========================
...
collecting ...
collecting 255 items
collected 440 items
...
tests/utils/test_utils.py .......... [100%]

======================= 440 passed in 9.05s ========================
pytest tests --cov --cov-report term ...
======================= test session starts ========================
...
collecting ...
collecting 93 items
collected 296 items

...
tests/utils/test_utils.py ......... [100%]

...
Name Stmts Miss Branch BrPart Cover
--------------------------------------------------------------------
.../create_user_command.py 19 0 2 0 100%
...
--------------------------------------------------------------------
TOTAL 1744 0 392 0 100%
...
======================= 296 passed in 5.05s ========================
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Qualify”

The qualify stage is not so different from before. We continue to run it with the SonarQube scanner Docker image.

ℹ️ We use the same Docker volume as in the test stage so that Sonar can retrieve the coverage and unit test reports. If we do not do this, the quality gate will stay red forever.

Pipeline — Stage “Qualify”
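
A hedged sketch of the qualify stage for a pull request build (the credentials ID is a placeholder; a branch build would pass -Dsonar.branch.name instead of the pull request parameters):

stage('Qualify') {
    steps {
        withCredentials([string(credentialsId: 'sonarqube-token', variable: 'SONAR_LOGIN')]) {
            script {
                sh """
                    docker run --rm \
                      --volumes-from ${dockerContainer} \
                      -e SONAR_HOST_URL=https://<sonarQubeHost> \
                      -e SONAR_LOGIN=\$SONAR_LOGIN \
                      sonarsource/sonar-scanner-cli \
                      sonar-scanner \
                        -Dsonar.pullrequest.branch=${env.CHANGE_BRANCH} \
                        -Dsonar.pullrequest.key=${env.CHANGE_ID} \
                        -Dsonar.pullrequest.base=${env.CHANGE_TARGET}
                """
            }
        }
    }
}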

Pipeline Execution — Stage “Qualify”

In the qualify stage, there is no major difference compared with the previous pipeline version. As explained previously, we use the Docker container volume from the test stage to retrieve the test reports.

[Pipeline] stage
[Pipeline] { (Qualify)
[Pipeline] withCredentials
Masking supported pattern matches of $SONAR_LOGIN
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] sh
...
+ docker run --rm --volumes-from ghcli_ad0...879 -e SONAR_HOST_URL=https://<sonarQubeHost> -e SONAR_LOGIN=**** sonarsource/sonar-scanner-cli sonar-scanner -Dsonar.pullrequest.branch=<branch> -Dsonar.pullrequest.key=24 -Dsonar.pullrequest.base=main
Unable to find image 'sonarsource/sonar-scanner-cli:latest' locally
...
Status: Downloaded newer image for sonarsource/sonar-scanner-cli:latest
...
INFO: SonarScanner 4.6.2.2472
...
INFO: QUALITY GATE STATUS: PASSED - View details on https://<sonarQubeHost>/dashboard?id=ghcli&pullRequest=24
INFO: Analysis total time: 19.075 s
INFO: --------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: --------------------------------------------------------------
INFO: Total time: 23.574s
INFO: Final Memory: 8M/34M
INFO: --------------------------------------------------------------
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Version”

In the version stage, we start the release process. In this stage, we increment the minor version by one.

ℹ️ This stage starts with a condition. We execute the stage if and only if the branch is main and the last commit is a merge commit.

In the next commands, we run the version bump within a Docker container. The name assigned to the container makes it easier to retrieve the updated files.

After we retrieve the updated files, we simply remove the Docker container we created. We do not need it anymore in the pipeline execution.

Pipeline — Stage “Version”
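
A hedged sketch of the version stage. The releaseContainer and releaseImage names are hypothetical helpers for this sketch; the guard and the other variables come from the prepare sketch:

stage('Version') {
    when {
        expression { isRelease }
    }
    steps {
        script {
            def releaseContainer = "${dockerContainer}_release"

            // Bump the minor version inside a named container
            sh "docker run --tty --name ${releaseContainer} -e CURRENT_VERSION=${currentVersion} ${dockerImage} /usr/bin/make release"
            // Snapshot the container (with the bumped files) as the release image
            releaseImage = sh(script: "docker commit ${releaseContainer}", returnStdout: true).trim()
            // Copy the updated files back to the workspace for the release stage
            sh "docker cp ${releaseContainer}:/usr/src/VERSION ."
            sh "docker cp ${releaseContainer}:/usr/src/scripts/ghcli scripts/"
            sh "docker cp ${releaseContainer}:/usr/src/.bumpversion.cfg ."
            // The container is not needed anymore
            sh "docker rm ${releaseContainer}"
        }
    }
}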

Pipeline Execution — Stage “Version”

The execution of the job skips the version stage when there is no release to create.

[Pipeline] stage
[Pipeline] { (Version)
Stage "Version" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage

Otherwise, the stage execution produces the following result. We can see the different Docker commands that copy the files updated with the new version.

[Pipeline] { (Version)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ docker run --tty --name ghcli_ad0...879 -e CURRENT_VERSION=0.3 ghcli:ghcli_ad0...b879 /usr/bin/make release
bump2version --current-version 0.3 --allow-dirty minor VERSION
[Pipeline] sh
+ docker commit ghcli_ad0...879
[Pipeline] sh
+ docker cp ghcli_ad0...879:/usr/src/VERSION .
[Pipeline] sh
+ docker cp ghcli_ad0...879:/usr/src/scripts/ghcli scripts/
[Pipeline] sh
+ docker cp ghcli_ad0...879:/usr/src/.bumpversion.cfg .
[Pipeline] sh
+ docker rm ghcli_ad0...879
ghcli_ad0...879
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Publish”

After the version stage, we have the publish stage. With the Docker Image ready, we need to upload it to a Docker Registry to make it available.

⚠️ Depending on the use case, the publish stage is not required. In our case, we want to use the command line tool directly from a Docker container, to avoid setting up and maintaining Python where we run the tool.

ℹ️ In the first commands, we retrieve the new version from the previous stage. We tag the Docker image with this new version and with the latest keyword.

ℹ️ We use the second set of commands to log in and upload the Docker image to the Docker registry.

Pipeline — Stage “Publish”
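
A hedged sketch of the publish stage (the registry host, image path and credentials ID are placeholders):

stage('Publish') {
    when {
        expression { isRelease }
    }
    steps {
        script {
            // VERSION was copied back in the version stage and now holds the new number
            newVersion = sh(script: 'cat VERSION', returnStdout: true).trim()
            sh "docker tag ${releaseImage} <registry>/component-releases/ghcli:${newVersion}"
            sh "docker tag ${releaseImage} <registry>/component-releases/ghcli:latest"
        }
        withCredentials([usernamePassword(credentialsId: 'registry-creds', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
            sh 'docker login --username "$USERNAME" --password "$PASSWORD" <registry>'
            sh "docker push <registry>/component-releases/ghcli:${newVersion}"
            sh "docker push <registry>/component-releases/ghcli:latest"
        }
    }
}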

Pipeline Execution — Stage “Publish”

Like the previous stage, we can skip this stage as it has a conditional guard.

[Pipeline] stage
[Pipeline] { (Publish)
Stage "Publish" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage

When we execute the stage, it produces the following result. We retrieve the version and tag the Docker Image with it. We push the Docker Image to the Docker Registry.

[Pipeline] stage
[Pipeline] { (Publish)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ cat VERSION
[Pipeline] sh
+ docker tag sha256:c81...67b registry/component-releases/ghcli:0.3
[Pipeline] sh
+ docker tag sha256:c81...67b registry/component-releases/ghcli:latest
[Pipeline] }
[Pipeline] // script
[Pipeline] withCredentials
Masking supported pattern matches of $USERNAME or $PASSWORD
[Pipeline] {
[Pipeline] sh
...
+ docker login --username **** --password **** <registry>
...
Login Succeeded
[Pipeline] sh
+ docker push registry/component-releases/ghcli:0.3
The push refers to repository [registry/component-releases/ghcli]
cde8248d47c8: Preparing
...
3bbdeb55be4f: Pushed
0.3: digest: sha256:128...6a6 size: 3476
[Pipeline] sh
+ docker push <registry>/component-releases/ghcli:latest
The push refers to repository [<registry>/component-releases/ghcli]
cde8248d47c8: Preparing
...
ccb9b68523fd: Layer already exists
latest: digest: sha256:128...6a6 size: 3476
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage

Pipeline — Stage “Release”

The final stage release aims to complete the release process by manipulating the Git repository.

ℹ️ We add and commit the updated files. The updated files contain the new version.

ℹ️ We set a Git tag to the commit we just created. It marks the release version to the commit.

ℹ️ And finally, we push the updates to the Git repository (remote origin).

Pipeline — Stage “Release”
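
A hedged sketch of the release stage, using the branchName and newVersion variables from the earlier sketches:

stage('Release') {
    when {
        expression { isRelease }
    }
    steps {
        script {
            // Commit the bumped files, tag the release and push both to origin
            sh 'git add VERSION .bumpversion.cfg scripts/ghcli'
            sh "git commit -m 'Release ${newVersion}'"
            sh "git tag v${newVersion}"
            // Make sure the local branch tracks origin before pushing
            sh "git branch -u origin/${branchName}"
            sh 'git push'
            sh 'git push --tags'
        }
    }
}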

Pipeline Execution — Stage “Release”

Again, we execute this stage only if we match the condition.

[Pipeline] stage
[Pipeline] { (Release)
Stage "Release" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage

When we create the release, it produces the following output. It contains the various git commands to commit the updated files, create the tags, and push all the content to the remote origin.

[Pipeline] stage
[Pipeline] { (Release)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git add VERSION .bumpversion.cfg scripts/ghcli
[Pipeline] sh
+ git commit -m 'Release 0.3'
[<branch> b45a012] Release 0.3
3 files changed, 3 insertions(+), 3 deletions(-)
[Pipeline] sh
+ git tag v0.3
[Pipeline] sh
+ git branch -u origin/<branch>
Branch '<branch>' set up to track remote branch 'feature/<branch>' from 'origin'.
[Pipeline] sh
+ git push
To <host>:<org>/<repo>.git
275bd9c..b45a012 <branch> -> <branch>
+ git push --tags
To <host>:<org>/<repo>.git
* [new tag] v0.3 -> v0.3
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage

Pipeline — Post Actions

After the pipeline execution, we need to do some housekeeping. The cleanup removes the remaining Docker containers and images that are not needed anymore. We also clean the working directory (workspace).

Pipeline — Post Actions
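
A hedged sketch of the post actions; the || true keeps the cleanup from failing the build when a resource was already removed:

post {
    always {
        script {
            // Remove the helper container and the image built for this run
            sh "docker rm -f ${dockerContainer} || true"
            sh "docker rmi -f ${dockerImage} || true"
        }
        // Wipe the workspace
        deleteDir()
    }
}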

Pipeline Testing

Testing Jenkins pipelines can be really slow. All the stages together can take several minutes to run, and debugging in such a situation is not convenient.

To reduce the testing time, we added several tasks to the Makefile so we can first test the different commands we will use inside the pipeline. It is not a complete and perfect solution, but it is far better than nothing.

ℹ️ All the tasks starting with ci- are the tasks we used to test the different commands from the pipeline.

Makefile File

Here are the commands to build the Docker image and run the tests.

docker build . -t ghcli:test
docker run --tty --rm ghcli:test /usr/bin/make test

Then, we can concentrate on the release part of the pipeline, where we bump the version number and tag the image. Here, we use a trick to make it work well. We were not aware of the way make resolves variables. The following article helped us put in place lazy evaluation of the variables we use in the make tasks.

The following Makefile snippet shows how to use the lazy evaluation mechanism.

make-lazy = $(eval $1 = $$(eval $1 := $(value $(1)))$$($1))
CURRENT_VERSION ?= $(shell cat VERSION)
DOCKER_IMAGE_COMMIT = $(shell docker commit ghcli)
NEW_VERSION = $(shell cat VERSION)

$(call make-lazy,NEW_VERSION)
$(call make-lazy,DOCKER_IMAGE_COMMIT)

Now that we have the lazy evaluation mechanism, we can look at how the tasks are split. The ci-pre-release task contains the commands to bump the version. We run the version bump from a Docker container and then copy the updated files from the container back to the host. It covers the version stage of the Jenkins pipeline.

docker run --tty --name ghcli -e CURRENT_VERSION=$(CURRENT_VERSION) ghcli:test /usr/bin/make release
docker cp ghcli:/usr/src/VERSION .
docker cp ghcli:/usr/src/scripts/ghcli scripts/
docker cp ghcli:/usr/src/.bumpversion.cfg .

In the second part, ci-post-release, we focus on tagging the Docker image. The docker rm happens only after the make variables have been evaluated; otherwise, the docker commit would not happen correctly. It corresponds to the publish stage of the Jenkins pipeline.

@echo New version: $(NEW_VERSION)
@echo Image commit: $(DOCKER_IMAGE_COMMIT)
docker rm ghcli
docker tag $(DOCKER_IMAGE_COMMIT) <registryHost>/component-releases/ghcli:$(NEW_VERSION)
docker tag $(DOCKER_IMAGE_COMMIT) <registryHost>/component-releases/ghcli:latest

As you can see, we do not proceed with the docker push to avoid polluting the Docker Registry. We do not deal with the Git tagging for the same reason.

Running the ci-release task gives us confidence that the commands for the different pipeline stages are correct and in the right order.

With the Makefile approach, we saved testing time. The commands configured in the Jenkins pipeline are correct and do the expected job. Only the pipeline itself still needs to be tested on Jenkins.

Playing with the release part requires a bit of flexibility. The replay feature in Jenkins is nice: you can re-run a pipeline with updates to the pipeline code made right before the execution, without committing the modifications to the code repository.

Conclusion

We succeeded in creating a pipeline that creates releases when we merge branches into the main branch. We kept the tests and the quality gate in every pipeline run. In summary, we reached our goal.

The advantage of the Docker-based approach is that the pipeline only requires a Docker engine to run the Docker commands. Everything inside the Docker containers is agnostic of the running host. It makes the project completely independent of the running context, since we manage the context inside the project.

The drawback of this approach lies in the pipeline itself. It is more complex to write and maintain: there are more commands and more dependencies between the steps’ commands.

In our experience, the added complexity is worth it to gain independence from the running host. We are completely free to upgrade Python, dependencies, and many other things without having to worry about side effects on other pipelines.

References

The Jenkins pipeline DSL documentation gives everything needed to write and maintain pipelines.

Jenkins pipeline DSL is based on Groovy. Sometimes, we have to use Groovy in the pipelines. The Groovy documentation is useful.

In the same way, the Groovy Web Console is a kind of online scratchpad where you can test small pieces of Groovy code.

The Docker documentation is really exhaustive and gives everything needed to create and run containers.

We use various Git commands in the pipeline. The Git documentation helped us, along with tons of Stack Overflow questions.

The following website is a comprehensive tutorial about make and Makefiles.

During the experimentation and implementation phase, we rewrote the setup.py file. In the process, we used the following website describing Python wheels.

Appendices

Appendix I — Jenkins Multibranch Pipeline Git Configuration

The configuration of the Git behaviour inside the multibranch pipeline is simple. There are many options to configure but for our needs, these are enough.

Git Configuration in the Multibranch Pipeline

In addition, it requires a Jenkins system configuration for GitHub to accept webhooks from GitHub (configuration under Configure System). Only Jenkins administrators can do this initial configuration.

✳️ By default, the Jenkins webhook URL is https://<host>/github-webhook/.

Jenkins GitHub Global Configuration

Appendix II — GitHub Repository Jenkins Webhook

GitHub can send various events to webhooks. This is what we configured in our repositories. We define the webhook URL in the first part of the configuration.

GitHub Repository Jenkins Webhook Configuration — Part 1

In the second part, we define which events we want to send to the webhook. In our case, the events are: Branch or tag deletion, Pull requests, and Pushes.

GitHub Repository Jenkins Webhook Configuration — Part 2

Appendix III — Jenkins Pipeline — Second Version (full)

You can read the full Jenkins pipeline code below.

Full Jenkins Pipeline — Second Version
