Standardising CI/CD — Jenkins Standard Pipelines Libraries

Storkey · Published in ELMO Software · Mar 3, 2022 · 8 min read

DevOps & Jenkins

In today’s world, if your application does not have continuous integration (CI) and continuous deployment (CD), and you aren’t deploying multiple times a day, you’re falling behind. The DevOps landscape is filled with hundreds of tools and platforms developed specifically for this purpose, and one of the most prominent and widely used is Jenkins. The big draw of Jenkins is its flexibility in CI/CD setup. For example, at ELMO Software we have multiple AWS accounts, all connected to a single Jenkins platform via on-demand slaves, and we frequently deploy a wide range of deployables, from Docker containers to serverless applications, spanning many languages.

The problem with Jenkins in a large organisation

At scale, Jenkins is almost too customisable. When Jenkins is used in a large organisation like ELMO, there is an inherent blocker for new projects being deployed due to each project's unique requirements. Each Jenkinsfile ends up being complex and completely unique.

Historically, projects would use different test and linting suites, and some wouldn’t have tests at all. Some would have PR pipelines and some wouldn’t. Another massive undertaking was managing deployment processes across a matrix of multiple environments, regions and AWS accounts. What I’m trying to emphasise here is that everything was custom and unique… sigh.

Not only would this deployment complexity cause blockers and delays when deploying new applications for the first time, it also fostered cultural problems within the organisation itself. Developers had no control over, or knowledge of, their CI/CD and were essentially “throwing the grenade over the wall” because “that’s DevOps’ job.” This meant that any time a build failed, a request was sent to DevOps to go take a look. In a large organisation, a lot of the DevOps team’s time was spent debugging Jenkins pipeline issues, and because of the unique nature of these pipelines, each issue was time-consuming.

Introducing Jenkins Standard Pipelines

Soooo, this is where the idea of “Jenkins Standard Pipelines” comes in: standardising the CI/CD process across the organisation. The goals for this project were:

  1. Standardise the CI process across ELMO per language/framework.
  2. Standardise the CD process across ELMO per deployable.
  3. Speed up the delivery of CI/CD pipelines.
  4. Dramatically shrink the size of the Jenkinsfiles.
  5. Empower developers to take control of their CI/CD.
  6. Most importantly, simplify the entire process from start to finish.

Jenkins Standard Pipelines library

The way we chose to tackle this problem was to build an extensive Jenkins library that would act as an extendable framework. The framework would consist of the following:

Reusable vars functions

These are functions that developers can consume like any other library if they desire, and that also service the individual pipeline types (we will get to these shortly).
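As a rough sketch, a reusable vars function in a Jenkins shared library is just a Groovy script under vars/ whose call method becomes available as a pipeline step. The function name and parameters below are illustrative, not from our actual library:

// vars/ecrLogin.groovy — hypothetical example of a reusable vars function.
// Once the library is loaded, a Jenkinsfile can simply call ecrLogin(registry: "...").
def call(Map config = [:]) {
    // Fall back to a sensible default region when none is passed
    String region = config.region ?: "ap-southeast-2"
    sh "aws ecr get-login-password --region ${region} | " +
       "docker login --username AWS --password-stdin ${config.registry}"
}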

An entrypoint var function

This is the function called in any Jenkinsfile wanting to use the standard pipelines. It is where the developer defines what type of pipeline they require and specifies the values required for that pipeline: for example, the app name, the deployment environments required, a Slack channel for that app’s deployment messages, and so on.
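Under the hood, the entrypoint is itself just another vars function that dispatches on the requested pipeline type. A minimal sketch, assuming the class names introduced below (Pipeline.forType is an illustrative method name; the real implementation does considerably more):

// vars/runPipeline.groovy — simplified sketch of the entrypoint function
def call(Map params) {
    // Fill in sensible defaults for anything the Jenkinsfile didn't specify
    def helper = new PipelineHelper(params)
    // Look up the Pipeline type requested in the Jenkinsfile, e.g. "DOCKER_SERVER"
    def pipeline = Pipeline.forType(params.type, helper)
    // Hand over the script context so the pipeline can run sh/stage/etc. steps
    pipeline.run(this)
}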

A suite of Groovy Classes

These classes would outline different concepts that would need to be tied together for the pipelines to function. We built out many class types but I will outline a few of the important ones here:

BuildFlow

BuildFlow was created to identify what the end goal of the pipeline was. The 3 BuildFlow types created are:

REVIEW - PR review pipelines that typically test code and build the deployable

DEPLOYMENT - Deployment of an already built deployable. Think of this as a promotion of a deployable that is already built and deployed to a staging environment.

REVIEW_AND_DEPLOYMENT - End-to-end pipeline that builds, tests and deploys. Think of this as your first deployment to staging.
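In Groovy terms, BuildFlow is little more than an enum. A sketch, not the exact source:

// Sketch of the BuildFlow concept: the end goal of a given build
enum BuildFlow {
    REVIEW,                // PR pipelines: scan, test and build only
    DEPLOYMENT,            // promote an artefact that has already been built
    REVIEW_AND_DEPLOYMENT  // end to end: build, test and deploy
}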

BuildStages

BuildStages is used to define a set of stages that pipelines can implement in their own specific way. A stage holds the command or reusable vars functions that need to run at that specific step of the pipeline. Each pipeline type can implement some or all of these stage types, in whatever order it requires.

CODE_SCAN - Static code scanning for bad code patterns (regex-based)

BUILD - Building of the deployable

UNIT_TEST - Obviously unit testing

DEPLOY - The most important step, the deployment.
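Conceptually, each stage pairs one of these types with the closure of steps that implements it. A rough sketch, using illustrative class and field names:

// Sketch: a stage binds a stage type to the steps that implement it
enum BuildStageType { CODE_SCAN, BUILD, UNIT_TEST, DEPLOY }

class BuildStage {
    BuildStageType type
    Closure steps // the command or reusable vars function to execute

    void run(script) {
        // Execute the steps inside a named Jenkins stage block
        script.stage(type.name()) { steps() }
    }
}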

Pipeline

Pipeline is the most important class of all. Using this class, we create a set of Pipeline types that outline which BuildStages run for each BuildFlow for a given project type (see the sketch after this list). These Pipeline types are created based on a few factors:

  • Language/Framework: separate pipeline types are created for TypeScript versus PHP.
  • Deployment type: different pipelines are created for Docker images compared to serverless deployments, or even static S3 frontends.
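Putting the pieces together, a Pipeline type is essentially a mapping from each BuildFlow to the ordered BuildStages it runs. A hypothetical TypeScript Docker pipeline, using the enums sketched above (the class and field names are illustrative):

// Sketch: a pipeline type declares which stage types run for each flow
class TypescriptDockerPipeline {
    Map<BuildFlow, List<BuildStageType>> stagesFor = [
        (BuildFlow.REVIEW)               : [BuildStageType.CODE_SCAN, BuildStageType.UNIT_TEST, BuildStageType.BUILD],
        (BuildFlow.DEPLOYMENT)           : [BuildStageType.DEPLOY],
        (BuildFlow.REVIEW_AND_DEPLOYMENT): [BuildStageType.CODE_SCAN, BuildStageType.UNIT_TEST, BuildStageType.BUILD, BuildStageType.DEPLOY]
    ]
}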

PipelineHelper

The PipelineHelper is used in every pipeline to parse all the values passed from the entrypoint in the Jenkinsfile and to define sensible defaults for any values not passed.
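A rough sketch of that defaulting behaviour (the parameter names and default values here are illustrative):

// Sketch: merge user-supplied params over sensible defaults
class PipelineHelper {
    Map config

    PipelineHelper(Map params) {
        Map defaults = [
            aws_region    : "ap-southeast-2", // most of our workloads live here
            k8s_values_dir: "values",
            timeout_mins  : 30
        ]
        // In Groovy map addition the right-hand values win, so anything
        // passed from the Jenkinsfile overrides the defaults
        config = defaults + params
    }
}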

The benefits

So, after all that work, where did it get us?

This is what our pipelines used to look like. Completely custom, with very little reusable code, and 151 lines in our Jenkinsfile:

@Library('jenkins-libraries') _
properties([[$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '10']]])

if (env.BRANCH_NAME == "master") {
    env.NODE_NAME = "linux-prod"
} else if (env.BRANCH_NAME == "develop") {
    env.NODE_NAME = "linux-staging"
} else {
    env.NODE_NAME = ""
}

node(env.NODE_NAME) {
    def app
    try {
        slackNotifier('STARTED', '#slack-channel')
        stage("Checkout") {
            checkout scm
        }
        stage('SonarQube analysis') {
            def scannerHome = tool 'SonarScanner-Core'
            withSonarQubeEnv('Sonar-Core') { // If you have configured more than one global server connection, you can specify its name
                sh "${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=<app-name> -X"
            }
        }
        stage('Code Scan') {
            echo "scanning code base"
            sh "<code scan command>"
        }
        stage('Build images') {
            def GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
            if (env.BRANCH_NAME == "master") {
                app = docker.build("<aws-prod-accountid>.dkr.ecr.ap-southeast-2.amazonaws.com/<app-name>:${GIT_HASH}", "-f Dockerfiles/web_app/Dockerfile .")
            } else if (env.BRANCH_NAME == "develop") {
                app = docker.build("<aws-staging-accountid>.dkr.ecr.ap-southeast-2.amazonaws.com/<app-name>:${GIT_HASH}", "-f Dockerfiles/web_app/Dockerfile .")
            }
        }
        stage('Trivy Scanner') {
            echo """
            ╔════════════════════╗
            ║                    ║
            ║       Trivy        ║
            ║                    ║
            ╚════════════════════╝
            """
            def GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true) // trivy scan
            if (env.BRANCH_NAME == "master") {
                sh("trivy image --severity CRITICAL <aws-prod-accountid>.dkr.ecr.ap-southeast-2.amazonaws.com/<app-name>:${GIT_HASH}")
            } else if (env.BRANCH_NAME == "develop") {
                sh("trivy image --severity CRITICAL <aws-staging-accountid>.dkr.ecr.ap-southeast-2.amazonaws.com/<app-name>:${GIT_HASH}")
            }
        }
        stage('Push to ECR') {
            sh '''
                eval "$(aws ecr get-login --no-include-email --region ap-southeast-2)"
            '''
            app.push()
        }
        stage('Tests') {
            def GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
            sh("echo 'start running tests'")
            sh("docker-compose up -d --build --remove-orphans")
            sh("docker-compose exec -T web composer install --no-interaction")
            sh("docker-compose exec -T web php -S localhost:7202 -t Tests/PACT/Provider/public/ 1>/dev/null &")
            sh("docker-compose exec -T web php bin/console cache:clear -e test")
            sh("docker-compose exec -T web bin/console doctrine:database:create -e test")
            sh("docker-compose exec -T web vendor/bin/codecept build")
            sh("docker-compose exec -T web vendor/bin/codecept run")
            // @todo reimplement this
            sh("echo 'copy .xml file with test coverage from docker container'")
            sh("docker cp web:/var/www/html/<app-name>/Tests/_output/coverage.xml .")
            sh("docker cp web:/var/www/html/<app-name>/Tests/_output/coverage ./coverage_html")
            sh("cat coverage.xml")
            sh("echo 'copy psalm analysis file from docker container'")
            sh("docker cp web:/var/www/html/<app-name>/Tests/_output/psalm_output.json .")
            sh("cat psalm_output.json")
            sh("docker-compose down")
            sh("docker-compose rm -v")
            sh("echo 'stop running tests'")
        }
        if (env.BRANCH_NAME == "master") {
            def GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
            parallel(
                "Sydney": {
                    node("linux-prod-<aws-prod-accountid>") {
                        stage("Checkout") {
                            echo GIT_HASH
                            checkout([$class: 'GitSCM',
                                branches: [[name: GIT_HASH]],
                                userRemoteConfigs: [[
                                    url: 'https://bitbucket.org/workspace/<app-name>.git']]])
                        }
                        stage('Deploy to Sydney') {
                            GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
                            sh "helm repo update"
                            sh "helm ssm upgrade --install --timeout 600s production-<release-name> chartmuseum/<chart-name> -f production-values.yaml --namespace <app-name> --set image.tag=${GIT_HASH}"
                        }
                    }
                },
                "UK": {
                    node("production-linux-prod-euwe2") {
                        stage("Checkout") {
                            echo GIT_HASH
                            checkout([$class: 'GitSCM',
                                branches: [[name: GIT_HASH]],
                                userRemoteConfigs: [[
                                    url: 'https://bitbucket.org/workspace/<app-name>.git']]])
                        }
                        stage('Deploy to UK') {
                            GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
                            sh "helm repo update"
                            sh "helm ssm upgrade --install --timeout 600s production-<release-name> chartmuseum/<chart-name> -f values/eu-west-2-values.yaml --namespace <namespace> --set image.tag=${GIT_HASH}"
                        }
                    }
                }
            )
        } else if (env.BRANCH_NAME == "develop") {
            stage('Deploy') {
                def GIT_HASH = sh(script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
                sh "helm repo update"
                sh "helm ssm upgrade --install --timeout 600s staging-<release-name> chartmuseum/<chart-name> -f staging-values.yaml --namespace <app-name> --set image.tag=${GIT_HASH}"
            }
        } else {
            stage('Deploy') {
                echo "Deploy skipped as branch isn't whitelisted"
            }
        }
    } catch (e) {
        currentBuild.result = "FAILED"
        throw e
    } finally {
        slackNotifier(currentBuild.result, '#slack-channel')
    }
}

Now, with the power of the Jenkins Standard Pipelines library, our Jenkinsfile turns into this:

@Library("elmo-shared-jenkinslib") _runPipeline([
type: "DOCKER_SERVER",
app_name: "base-api",
k8s_namespace: "team-namespace",
slack_channel: "rnd_slack_channel",
helm_chart: "base-api",
k8s_values_dir: "values",
env_definitions: [
staging: [:],
production: [
deploy_to: [
elmoau: [:],
elmouk: [:]
]
]
]
])

The Jenkinsfile has shrunk from 150+ lines to around 20. Without any prior knowledge of the Jenkins platform, any developer is more than capable of getting their Jenkins CI/CD up and running without assistance from the DevOps team. Devs have access to extensive documentation that details all the available params, example pipelines, and contribution guides.

Now, whenever a DevOps team sets out to make changes like this in a large organisation, getting adoption and traction can be difficult. The way the DevOps team at ELMO Software went about this was to get the devs involved in every area we could; this is where working groups and guilds are your best friend. Not only did this speed up adoption of the project as a whole, it also meant the DevOps team were no longer the gatekeepers for everything CI/CD, and developers were able to start troubleshooting their own issues. Of course, some issues will still arise, but there is now a better avenue for these problems to be surfaced and far more people able to work on solving them.

Summary

This standardisation project has yielded returns ten times over. When we set out, we knew the best way for the DevOps team to measure success was the number of Jira tickets raised by developers asking for support with their Jenkins environments. Since the introduction of the Jenkins Standard Pipelines library and its working group, we have seen an enormous reduction (very close to the 90% mark) in tickets relating to Jenkins CI/CD setup and support. We’ve also empowered our developers to make the best decisions for their applications and to own their deployment pipelines end-to-end, giving them more autonomy and ownership, and giving the DevOps team better manageability.

I’d love to hear your comments on this. What has been your experience with Jenkins? How have you added structure to your pipelines to manage scale? Let me know and thanks for reading!
