Improving Build Pipeline UX

Mickaël Morier · Doctolib · Dec 11, 2017

© Illustration by Bailey McGinn

The build process at Doctolib is complex: it spans several technologies and runs more than 6,500 tests of different types and durations. Developers need fast feedback and a simple overview of what is and is not working. Here we discuss the main “build experience” issues we recently encountered at Doctolib, and how we solved them.

Doctolib is built on two main platforms: Rails and React. Some build tasks can be executed in parallel, while others must respect a specific order. For example, before building JavaScript assets we need to grab packages from NPM. Until recently, we used classic Jenkins jobs to orchestrate all our build steps: each step was a Jenkins job containing its own build script, and all of this information was stored in Jenkins.

Here is how a typical Jenkins job setting looked at the time:

Main Jenkins job, which orchestrates all the step Jenkins jobs
One of the step Jenkins jobs launched by the main Jenkins job

Unfortunately, we faced a few problems along the way compelling us to find a new approach to our builds.

Problem #1: Changing the build process can lead to broken branches for other developers

Builds can break when a developer changes the configuration of a Jenkins job. To address this, we moved the build scripts from the Jenkins jobs into our Git repository, close to the source code. Now the Jenkins jobs only reference shell scripts, which lets us change them freely within each branch without impacting other developers’ work.
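Concretely, the job’s “Execute shell” box then shrinks to a single call into the repository; the real logic lives in the versioned script. A minimal sketch (the script path is illustrative, not Doctolib’s actual layout):

```shell
# The freestyle job's configuration no longer contains build logic;
# it only invokes a script that is versioned alongside the source code.
./scripts/jenkins/test-unit-ruby.sh
```

Because the script travels with the branch, a developer can rework it freely and the change only takes effect where it was made.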

Unfortunately, adding a Jenkins job or changing its conditions could also block other developers from building their branches. One option would have been a single shell script coordinating all build scripts inside one Jenkins job, but that would have made running tasks in parallel difficult. Instead, we let Jenkins organize all parts of the build by using a “pipeline” project type.

Pipeline jobs are based on a Groovy file that describes all the stages and the scripts to execute. Using a pipeline project allows us to reference a Groovy Jenkinsfile stating which steps must be built and, for each step, which build script should be executed. All of these files are stored on GitHub, no longer in Jenkins, as you can see in the following screenshots.
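Such a Jenkinsfile might look like the following sketch, written in declarative pipeline syntax. The stage and script names are illustrative, not Doctolib’s actual ones, but it shows the two ideas from above: ordered steps (NPM packages before anything that needs them) and independent steps running in parallel.

```groovy
// Hypothetical Jenkinsfile, versioned in the repository next to the code.
pipeline {
  agent any
  stages {
    stage('install') {
      // Sequential prerequisite: grab NPM packages before building assets.
      steps { sh './scripts/jenkins/install-npm-packages.sh' }
    }
    stage('tests') {
      parallel {
        // Independent steps run side by side, each calling its own
        // versioned shell script.
        stage('test-unit-ruby') {
          steps { sh './scripts/jenkins/test-unit-ruby.sh' }
        }
        stage('test-e2e') {
          steps { sh './scripts/jenkins/test-e2e.sh' }
        }
      }
    }
  }
}
```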

Pipeline Jenkins job configuration. Build instructions are in Jenkinsfile located on our source control
Jenkinsfile describes how to build all steps and which shell script should be launched for each step.

As a result of these changes, improving and modifying the build script of a job and/or changing the orchestration of steps no longer disturbs other developers as all build scripts and Jenkinsfiles are versioned.

Problem #2: Reasons behind build failure are hidden from view

If you have as many build steps as we do at Doctolib, it can be difficult to determine which test or which part of the build process has failed. Finding a failure in the logs is painstaking and time-consuming, even with masterful browser-search skills.

A Jenkins job failure does not indicate which specific step failed; only the master job reports that a failure occurred somewhere within. To diagnose a problem, one must navigate from the master job into each step job to inspect its logs and test results.

Old main Jenkins job result. We see that test-unit-ruby and test-e2e steps have failed but we need to navigate to each step page to see test results and logs of each step.

By using a single pipeline Jenkins job, all logs and test results are stored in one place. However, while the “where” becomes more readily apparent, it remains difficult to find the “why” behind a build failure. To take further advantage of Jenkins pipelines, we installed Blue Ocean, the new Jenkins UI, which provides a visual overview of the build process. Pairing a pipeline job with Blue Ocean allows us to see all of our test results on one screen and easily identify problem areas.
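For Blue Ocean to show failing tests per stage, each stage can publish its own JUnit results. A sketch of what one stage could look like with the standard `junit` step (report paths are illustrative):

```groovy
// Hypothetical stage from the Jenkinsfile: the JUnit report is published
// even when the stage fails, so the results appear directly in the UI.
stage('test-unit-ruby') {
  steps { sh './scripts/jenkins/test-unit-ruby.sh' }
  post {
    always { junit 'reports/unit-ruby/*.xml' }
  }
}
```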

Pipeline visualization: easy to see which steps have failed (test-e2e & test-unit-ruby) and which part of test-e2e step has failed.
Here are some test results from the test-unit-ruby, test-e2e & test-e2e-zipper steps

Next steps

After 10 months, the feedback we have received from our developers has only been positive. All of us find it easier to modify the build process using this new method.

Build pipelines save time in finding the origin of build failures in developers’ branches, yet for the master and production branches the process still requires additional steps. These branches are crucial to our Duty Guy. To keep our daily Duty Guy happy, we plan to migrate the master- and production-specific steps into our current pipeline job, turning our simple pipeline into a multibranch pipeline job. The following captures show how our pipeline dashboard should improve.
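In a multibranch pipeline, those branch-specific steps can live in the same Jenkinsfile, guarded by a `when` condition. A sketch of how such a stage might look (stage name and script are hypothetical):

```groovy
// Hypothetical branch-specific stage: it is skipped on feature branches
// and only runs when the pipeline builds master.
stage('deploy-to-staging') {
  when { branch 'master' }
  steps { sh './scripts/jenkins/deploy-staging.sh' }
}
```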

Simple pipeline: runs #16421 & #16424 have same commit message but one is built on a feature branch, the other one on master branch.
Multibranch pipeline: each run is identified by its branch.
