Designing Cloud-Native CI/CD Pipelines with Jenkins X

Design and implementation of Cloud-Native CI/CD Pipelines on Kubernetes for the enterprise, using Jenkins X.

Vincent Behar
Jenkins X
8 min read · Apr 8, 2020

The cloud-native era has opened up a whole new level for CI/CD, and it’s impacting the design of our pipelines. The days of the big monolithic pipeline that only one person can decipher are gone. Let’s see how we can design and implement cloud-native CI/CD pipelines on Kubernetes, for the enterprise, using Jenkins X.

Requirements

Let’s review our requirements first. I said “for the enterprise” because writing pipelines for a single open-source project is not the same as writing them for hundreds or thousands of enterprise projects.

The first requirement is the conventions or standards you might have in your company around code quality, testing, packaging and so on. You’d like to ensure that developers will get the same experience working across different repositories using different programming languages. The presence of a CI/CD pipeline is usually the one consistent thing you’ll find on all repositories. We just need to ensure that the content of that pipeline will be consistent too.

The second one is the DRY principle: Don’t Repeat Yourself. With dozens, hundreds, or even thousands of repositories in your organization, you’d like to avoid copying big monolithic pipelines everywhere. Keep the definition of your application’s pipelines as small as possible, using inheritance or shared pipelines whenever possible.

The third one is the maintenance cost. Updating pipelines across all your repositories is a complex task: a small change might break the pipelines for half of the projects, and you’d like to know about it as soon as possible. Even better: don’t break anything at all. But still, keep all projects — including the half-dormant ones — up to date.

And the last one is extensibility. If you are using inheritance or shared pipelines, you should be able to customize them easily, without rewriting the whole pipeline from scratch.

Jenkins X Features

Jenkins X has a set of very interesting features, including:

  • Support for multiple pipelines per repository, meaning you can have multiple jenkins-x.yml files in your repository, and all of them will be used to trigger new pipelines. These pipelines will run in parallel and post different “checks” on your GitHub Pull Request.
  • Build Packs and pipeline inheritance. One of the Build Packs’ use cases is to provide pipelines that can be “imported” by other projects, thus allowing pipeline inheritance.
  • Pipeline overrides. Jenkins X Pipelines are written in YAML. Sure, it’s a pain to write verbose YAML, but it also has benefits, including the ability to manipulate the file content programmatically. Pipelines that inherit from a build pack can override part of the parent pipeline: add a custom command, remove another one, replace a third.
  • Release promotion. One of the main practices behind Jenkins X is GitOps. To apply GitOps, Jenkins X “promotes” the application’s releases by creating Pull Requests to update versions stored in Git repositories. So if we release our build packs, we can use GitOps and release promotion to easily update the versions referenced in all the projects.

Jenkins X Pipelines Design

Now let’s design our ideal CI/CD Pipelines using Jenkins X.

We’ll use a single release pipeline, defined in the main jenkins-x.yml file. But we’ll define multiple pipelines for our Pull Requests: unit tests, integration tests, deployment of the preview environment, validation of the Helm Chart, and code lint.
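
For instance, the application repository could contain one small pipeline definition file per concern. The file names below are only an illustration (jenkins-x-unit-tests.yml is the one used later in this article):

├── jenkins-x.yml                    # release pipeline
├── jenkins-x-unit-tests.yml         # pull request: unit tests
├── jenkins-x-integration-tests.yml  # pull request: integration tests
├── jenkins-x-preview.yml            # pull request: preview environment deployment
├── jenkins-x-chart-validation.yml   # pull request: Helm chart validation
└── jenkins-x-lint.yml               # pull request: code lint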

Splitting our old monolithic pipeline into multiple small ones has many benefits:

  • pipeline definitions are smaller and focused on a single goal.
  • it will be easier to use pipeline inheritance with small and focused pipelines.
  • pipelines will be executed in parallel, giving faster feedback to the developers.
  • failed pipelines can be re-executed in isolation from the others.
  • it will help maintain your conventions and standards across all repositories, because all projects will need to think about unit and integration tests, code linting, and so on.

We’ll create custom build packs with generic pipelines for the release, the preview environment deployment, and the Helm chart validation. We’ll also write generic pipelines for all supported languages for unit tests, integration tests, and lint. These generic pipelines should be easily customizable, with custom environment variables for example.

For the steps of the pipelines that require a bit of scripting, we’ll write scripts, store them in Git, and make them available to all steps. We don’t want to require developers to store them in all their repositories, nor do we want to package them in all the container images we’ll use in our pipelines. We could apply the pattern used by Tekton to access its tools from any container: store them in a shared pod volume at the beginning of the pipeline, and mount that volume in all containers.

We’ll make sure to define our generic pipelines with all the required context — credentials, environment variables — so that if someone wants to override part of a pipeline by adding a custom step, they don’t have to write a lot of glue code to retrieve a set of credentials from a Kubernetes secret.
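
For example, since the containerOptions accept standard Kubernetes container fields, a build pack pipeline could expose a credential to every step of the pipeline. The secret name and key below are hypothetical:

pipelines:
  pullRequest:
    pipeline:
      options:
        containerOptions:
          env:
          # hypothetical secret, made available to all steps of the pipeline,
          # including custom steps added by the application's own overrides
          - name: REGISTRY_PASSWORD
            valueFrom:
              secretKeyRef:
                name: registry-credentials
                key: password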

And we’ll ensure that all our applications use specific versions of our build packs. We’ll release and promote our build packs, to trigger new Pull Requests in all the repositories that use them. This will execute the new version of the pipelines in the context of all the repositories, ensuring nothing is broken before merging to the master branch and using the new version for all builds. Of course, this practice works best in a cloud environment with a cluster auto-scaler, because it will run a lot of pipelines in parallel. But even if you don’t have enough resources, pods will just be “queued” and wait for their turn to consume resources.

Jenkins X Pipelines Implementation

Let’s get into the implementation details now!

We’ll start by creating a custom Jenkins X Scheduler to define our multiple pipelines:

apiVersion: jenkins.io/v1
kind: Scheduler
spec:
  presubmits:
    entries:
    - agent: tekton
      alwaysRun: true
      context: unit-tests
      name: unit-tests
      optional: false
      rerunCommand: /test unit
      trigger: (?m)^/test( all| unit),?(\s+|$)
    - agent: tekton
      alwaysRun: true
      context: integration-tests
      name: integration-tests
      optional: false
      rerunCommand: /test integration
      trigger: (?m)^/test( all| integration),?(\s+|$)

This is just part of the scheduler’s config. See the scheduler config used in Jenkins X for a complete configuration. Don’t forget to link your source repositories to your new scheduler — in the SourceRepository CRD.
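
As a minimal sketch, linking a repository to the scheduler can look like the following SourceRepository resource. The repository and scheduler names are placeholders, and the exact fields may vary with your Jenkins X version:

apiVersion: jenkins.io/v1
kind: SourceRepository
metadata:
  name: owner-my-app        # placeholder name
spec:
  org: owner                # GitHub organization
  repo: my-app              # repository name
  scheduler:
    kind: Scheduler
    name: my-custom-scheduler   # the custom scheduler defined above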

We’ll also need to create a new Git repository for our build packs. See https://github.com/jenkins-x-buildpacks/jenkins-x-kubernetes for an example. Note that the build packs must be in the packs directory of your repository:

├── OWNERS
├── README.md
├── jenkins-x.yml
├── packs
│   ├── README.md
│   ├── go-lint
│   │   ├── README.md
│   │   └── pipeline.yaml
│   ├── go-unit-tests
│   │   ├── README.md
│   │   └── pipeline.yaml
│   ├── helm-chart-validation
│   │   ├── README.md
│   │   └── pipeline.yaml
│   ├── preview-env
│   │   ├── README.md
│   │   └── pipeline.yaml
│   └── release
│       ├── README.md
│       └── pipeline.yaml

Our build pack pipelines are written in pipeline.yaml files instead of jenkins-x.yml files:

pipelines:
  pullRequest:
    pipeline:
      agent:
        image: golang:1.14
      options:
        containerOptions:
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      stages:
      - name: unit-tests
        steps:
        - name: go-test
          command: GOFLAGS=${GOFLAGS:-$DEFAULT_GOFLAGS} go test
          args:
          - -v
          - ${PACKAGES:-$DEFAULT_PACKAGES}
          env:
          - name: DEFAULT_GOFLAGS
            value: -mod=vendor
          - name: DEFAULT_PACKAGES
            value: ./...

Notice how we’re allowing users of our pipeline to customize the GOFLAGS or the list of packages to test, while still providing good default values.

A complete jenkins-x-unit-tests.yml file in an application’s repository will look like this:

buildPack: go-unit-tests
buildPackGitURL: https://github.com/owner/jx-buildpacks.git
buildPackGitRef: 1.2.3
pipelineConfig:
  env:
  - name: GOFLAGS
    value: "-v -timeout=10s"

And here is how to override part of the build pack’s pipeline:

buildPack: go-unit-tests
buildPackGitURL: https://github.com/owner/jx-buildpacks.git
buildPackGitRef: 1.2.3
pipelineConfig:
  pipelines:
    overrides:
    - type: replace
      pipeline: pullRequest
      stage: unit-tests
      name: go-test
      steps:
      - name: custom-go-test
        command: /path/to/some/custom-command
        image: my-custom-container-image

Some of our build pack pipelines will require scripting. We’ll store our scripts in the scripts directory of our build packs repository:

├── OWNERS
├── README.md
├── jenkins-x.yml
├── packs
└── scripts
    ├── Dockerfile
    ├── README.md
    ├── some-script.sh
    └── another-script.sh

Notice that we have a Dockerfile in this directory. We use it to store all our scripts in a container image:

FROM alpine:3.11
COPY . /scripts/

We’ll then apply the “Tekton shared-scripts pattern” in our pipelines, to copy the scripts from our container image to a shared volume, and mount this volume on all our containers:

pipelines:
  release:
    pipeline:
      options:
        containerOptions:
          volumeMounts:
          - mountPath: /my-jx-buildpacks-scripts
            name: my-jx-buildpacks-scripts
        volumes:
        - name: my-jx-buildpacks-scripts
          emptyDir: {}
      env:
      - name: MY_JX_BUILDPACKS_SCRIPTS
        value: /my-jx-buildpacks-scripts
      stages:
      - name: release
        steps:
        - name: copy-my-jx-buildpacks-scripts
          command: cp -r /scripts/* ${MY_JX_BUILDPACKS_SCRIPTS}/
          image: my-jx-buildpacks-scripts-image:latest
        - name: some-step
          command: ${MY_JX_BUILDPACKS_SCRIPTS}/some-script.sh
          image: some/image:version

The main advantage of this pattern is that we can use our scripts from any step — including custom ones defined in the application’s pipelines — whatever the container image or the source repository.

Now we’ll need to release both our build packs and scripts. We can use a jenkins-x.yml file in our build packs repository to define the pipeline that will do that. We’ll need to use a custom setVersion step to ensure our tagged build packs will use the right version of the scripts:

buildPack: none
pipelineConfig:
  pipelines:
    release:
      setVersion:
        steps:
        - name: next-version
          command: jx step next-version
          args:
          - --use-git-tag-only
        - name: fix-scripts-image-version
          command: sed
          args:
          - -i
          - "-e \"s,image: my-jx-buildpacks-scripts-image:latest,image: my-jx-buildpacks-scripts-image:`cat VERSION`,g\""
          - packs/*/pipeline.yaml
        - name: git-tag
          command: jx step tag

This will update our packs/release/pipeline.yaml file and insert the new release version for our scripts container image.
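
For instance, assuming the new tag is 1.2.3, the copy step in packs/release/pipeline.yaml would now reference:

        - name: copy-my-jx-buildpacks-scripts
          command: cp -r /scripts/* ${MY_JX_BUILDPACKS_SCRIPTS}/
          image: my-jx-buildpacks-scripts-image:1.2.3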

The rest of the pipeline will be responsible for building the scripts container image with Kaniko and generating the GitHub release with the jx step changelog command.
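
Those steps could look like the following sketch. The Kaniko flags, the /workspace/source checkout path, and the jx step changelog arguments are assumptions to adapt to your own setup:

        stages:
        - name: release
          steps:
          # build the scripts container image with Kaniko, tagged with the new version
          - name: build-scripts-image
            command: /kaniko/executor
            args:
            - --context=/workspace/source/scripts
            - --dockerfile=/workspace/source/scripts/Dockerfile
            - --destination=my-jx-buildpacks-scripts-image:${VERSION}
            image: gcr.io/kaniko-project/executor:debug
          # generate the GitHub release and its changelog
          - name: changelog
            command: jx step changelog
            args:
            - --version=v${VERSION}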

The next interesting part is the promotion of our new release, to generate Pull Requests updating all the application pipelines that use one of our build packs. We’re using Octopilot to do it:

buildPack: none
pipelineConfig:
  pipelines:
    release:
      pipeline:
        stages:
        - name: promotion
          steps:
          - name: promote
            command: octopilot
            args:
            - >-
              --update "regex(file=jenkins-x*.yml,pattern='buildPackGitURL: https://github.com/owner/my-jx-buildpacks.git\s+buildPackGitRef: (.*)')=${VERSION}"
            - --repo "discover-from(query=org:owner topic:auto-update-my-jx-buildpacks)"
            image: dailymotion/octopilot:latest

It will first search GitHub for all the repositories in the owner organization with the auto-update-my-jx-buildpacks topic. The matching repositories will then be cloned locally, and Octopilot will update all the jenkins-x*.yml files whose content matches the given regex, and then create Pull Requests.

Using Octopilot for the promotion of our new releases allows us to scale to hundreds of repositories, because when you add a new repository you just need to give it the right topic.

Using Jenkins X practices and features, we’ve been able to design and write modern cloud-native CI/CD pipelines which have a low maintenance cost, are extensible, and help apply our global conventions and standards.
