Continuous delivery and deployment with GitLab CI

Jussi Nummelin
Kontena Blog
Jan 16, 2018

This is a topic close to our hearts here at Kontena, Inc. The reason is that we’ve been practicing this for years, even before Kontena, and everything we do now gets shipped automatically. I’ve also written before about some of the more conceptual considerations to take into account when implementing continuous delivery and deployment capabilities.

In this article, I’ll take things to a more concrete level and walk you through how to set up a successful deployment pipeline using containers, GitLab, GitLab CI and Kontena.

Note that all the concepts and steps apply equally regardless of the CI/CD platform you are using. Naturally you will need to adjust the configurations of each step, but for most of the CI/CD tools out there, the setup is pretty similar.

I’ve built a complete sample workflow with our beloved Todo sample application.

Why a pipeline?

To summarise the earlier post, having an automated pipeline for delivering software lets you push changes to production environments faster and more safely than doing it manually. And by pushing changes out faster and more safely, you also gain the ability to move faster than your competition.

A quick word about the tools I’ll be using throughout this article.

Containers

I’ll be using containers as the fundamental unit with which we ship software. Containers give you the ability to package and run your application in a fully portable manner. With containers as the build artefact within your pipeline, you can ensure that at every step of the pipeline there is zero variance in the app. And containers make the actual deployment process so much easier.

GitLab & GitLab CI

For running the actual pipeline I’ll be using GitLab CI. It offers an easy-to-use platform for defining the pipeline configuration in YAML. And, surprise surprise, I like that it can run containers as part of the pipeline too. Being able to use containers in the pipeline makes it very easy to configure, as you do not need to install random plugins to get things done. For example, if I were compiling Java code, I would not need to set up a JVM on my build machines, since I can execute all the steps in containers.

Kontena

Setting up and managing containers by themselves does not really make sense. Your life is so much easier when you have a container orchestration tool that manages the actual deployment process for you. It handles things like rolling deployments, application loadbalancing and container scheduling, to name a few. Without one, you end up scripting many of these things yourself. And trust me, it’s a pile of scripts you really do not want to maintain long term. In this article I’ll be using Kontena as the target platform to run my app in containers.

We really have created the easiest-to-use and most developer-friendly platform for running containers. And with Kontena Cloud, things get even easier to set up and manage, as everything is hosted by us. Oh, did I mention you get free credits when you sign up, and you don’t need a credit card to start? Enough to get you started and even to try out the examples shown in this post.

The pipeline

There are probably as many variants of pipeline steps as there are companies and projects implementing them. And of course, the runtime platform (language runtime, frameworks, etc.) poses some limitations and requirements on the pipeline. The version control flow your project uses also strongly shapes the pipeline implementation.

The example pipeline I’ll be building is based on the learnings and patterns we’ve used to ship our own production services over the past few years. We’re using a slightly simplified version of the GitFlow workflow with many of our microservices, so I’ll base this sample project on the same foundation.

Feature branches

All development happens in feature branches. For each feature branch, every commit actually gets deployed to a testing environment. The deployment is done in such a way that each branch gets its own separate deployment. This pretty much means that for each branch there’s constantly a running application version, to be used by your QA folks for example. It also makes sure everything is actually deployable, since you can fix deployment-related issues while still in your feature branch.

Master

In this case, I’ll use the master branch as the constantly shippable product. Whenever we merge a feature branch, or any other branch, into master, we make an automated deployment to the staging environment. This ensures that your master branch is always deployable and gives you a stage to test things on before shipping them into production. I’d strongly suggest making master, or its equivalent, a protected branch into which you cannot make direct commits, so that your commits are basically PR merges with appropriate automated testing, code review, etc. done.

Tags

We’ve been happy users of tags, a.k.a. releases in both GitHub and GitLab, for making production deployments. Of course the same thing could be done using a pre-defined branch too. We’ve been using tags since a tag also gives you a pointer in time to which you can easily refer.

Environments

GitLab CI supports a concept called environments. With environments you can “bind” the stages and steps in your pipeline to different deployment targets. You can, for example, scope the secrets injected into pipeline steps, or automatically shut down a specific deployment for a given feature branch. In this example, I’ll be using three different environments to ship my application to.
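
In .gitlab-ci.yml terms, binding a job to an environment takes only a few lines. Here’s a minimal sketch; the job and environment names are illustrative, and the real jobs of this pipeline appear later in this post:

deploy-review:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG              # one environment per branch
    url: http://$CI_COMMIT_REF_SLUG.example.com   # link shown in the GitLab UI
    on_stop: stop-review                          # job to run when the environment is stopped
  script:
    - echo "deploy the branch-specific version here"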

When using Kontena, you usually have a separate platform for each of the following environments. Of course, test and staging probably run with less capacity and availability than your production platform. In practice this means you could run test and staging as mini platforms, with only a couple of worker nodes connected. For your production platform you want more availability built in, so you should always use the standard version. That gives you a highly available setup for the management nodes (a.k.a. masters), with clustered databases and everything distributed across multiple availability zones.

Test

For each feature branch, I want the developer to automatically see the feature actually deployed and running. This gives more confidence that further deploys will also work, and gives you a constantly running app to test and play with. It’s then easy to make, for example, a UX change and actually show the change running.

The test environment actually consists of multiple deployments, as each feature branch gets deployed onto the same platform. To make this really easy, I’ll use some advanced Kontena Stack techniques to customize each deployment a bit.

Staging

The staging platform always runs the application version from the master branch. A new deployment is triggered for each commit to master. If master is a protected branch, each commit usually means a merge of some PR. Each deployment upgrades the existing deployment on Kontena, so the latest merged features are always available in staging.

Production

Whenever you want to make a production deployment, you do it by creating a tag. Creating a new tag triggers the pipeline and selects a set of jobs that build the needed container(s), push them to the registry and make a deployment to the production platform.

Integrating Kontena platform to GitLab CI

By now it’s pretty clear that we’ll want our GitLab CI pipeline to make deployments on our Kontena Platform. There are pretty much two different ways to do it:

  • Connect to the platform master using its REST API
  • Use Kontena CLI “embedded” in the CI pipeline

We usually use the latter method as it’s much easier and more manageable.

The CLI can be easily configured to connect to a platform using environment variables. In order to do that you need to figure out a couple of things.

Platform URL

In order for the CLI to connect to the correct platform master, we need to give it the URL of the master’s REST API. One of the easiest ways to find it is to check from your local terminal while your own CLI is connected to it:

$ kontena master current
jussi/demo https://bold-grass-3288.platforms.us-east-1.kontena.cloud

So for the CLI to connect to this specific platform master, we need to populate the KONTENA_URL environment variable with the value https://bold-grass-3288.platforms.us-east-1.kontena.cloud.

Platform Grid

We also need to tell the CLI which platform grid we want to operate on. Again, it’s easy to check with your local CLI tool:

$ kontena grid ls
NAME     NODES   SERVICES   USERS
test *   3       0          2

The one with the * next to it is the one your local CLI is connected to. So we need to populate the KONTENA_GRID environment variable with the value test in this case.

Token

All operations on the platform master API are authenticated and authorized using OAuth2 bearer tokens, so the CLI in the pipeline needs a token of its own. Use your local CLI to generate a new, never-expiring token for the pipeline to use:

$ kontena master token create -e 0

Grab the shown access_token and place it in the KONTENA_TOKEN environment variable for the CLI to pick up within the pipeline.
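
Putting the three together, you can sanity-check the values in your local terminal before storing them as secret variables in GitLab CI. The values below are the example values from above; yours will naturally differ:

$ export KONTENA_URL=https://bold-grass-3288.platforms.us-east-1.kontena.cloud
$ export KONTENA_GRID=test
$ export KONTENA_TOKEN=<the access_token from the command above>
$ kontena stack ls    # should list the stacks running on the target platform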

The jobs

With GitLab CI, as with most CI/CD tooling, you define your build and deployment configuration using YAML syntax. In GitLab CI the configuration lives in a file called .gitlab-ci.yml placed in the project’s root.

All the steps in my pipeline are executed as Docker containers. The GitLab Runner is set up so that it executes each step in a separate container, which gives you nicely isolated builds.
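
For reference, a runner using the Docker executor is configured roughly like this in its config.toml. This is a hedged sketch with illustrative values, not the exact runner setup behind this project:

# excerpt of /etc/gitlab-runner/config.toml (illustrative values)
[[runners]]
  name     = "docker-runner"
  url      = "https://gitlab.com/"
  token    = "RUNNER-TOKEN-HERE"
  executor = "docker"
  [runners.docker]
    image      = "docker:latest"  # default image when a job doesn't specify one
    privileged = true             # needed for the docker:dind service used later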

Which steps get executed depends on what triggered the pipeline, and the steps are grouped together into stages. All the jobs within a stage are executed in parallel, and the pipeline moves from one stage to the next only after every job in the stage has completed successfully.

In this case I definitely want to execute different jobs depending on which branch the commit was made on, or whether it was a tag that got created.

In the example project I’ve defined the following stages:

  • test
  • build
  • tag
  • deploy
  • smoke-test

The default stages, if you haven’t defined your own, are build, test and deploy. I've changed the order of build and test since I'm using a Ruby-based app that doesn't need an actual build/compilation step.
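
In .gitlab-ci.yml this is simply:

stages:
  - test
  - build
  - tag
  - deploy
  - smoke-test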

I’ve used common variables to make things more controllable and to avoid repetition:

variables:
  GL_IMAGE: registry.gitlab.com/jnummelin/todo-example:$CI_COMMIT_SHA
  SMOKE_IMAGE: images.kontena.io/jussi/todo-example:$CI_COMMIT_REF_SLUG
  SMOKESTACK: tododev-$CI_COMMIT_REF_SLUG
  PROD_IMAGE: images.kontena.io/jussi/todo-example:$CI_COMMIT_TAG

Let’s take a look at the jobs that happen for each stage.

Test

In the test stage we're mostly interested in executing some local tests to make sure the application is even remotely fit to be pushed towards production. As mentioned, the example app is a Ruby app, so I'll use a standard Ruby environment to execute the basic unit tests.

rspec:
  stage: test
  image: ruby:2.3
  services:
    - mongo:3.2
  variables:
    MONGODB_URI: mongodb://mongo:27017/todo_test
  script:
    - bundle install --path=cache/bundler
    - rspec spec/

  • stage: This job is only executed in the test stage.
  • image: Use the standard Ruby Docker image to execute the job.
  • services: What other services need to be up-and-running during this job. In this case I'll need MongoDB running, as the local tests use it. These services are automatically linked to the build container.
  • variables: What variables are injected into the build container. In this case I'll tell the app where it can find MongoDB.
  • script: What is actually done during the job. In this case I'll first install all the dependencies using bundler and after that execute the actual tests.

That’s it, really. I didn’t have to go and install random plugins to be able to run Ruby jobs or anything. Nice.

As this is the only job in the test stage, we'll move to the build stage once the rspecs pass.

Build

The build stage basically builds the container image that I can use in the further stages.

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t $GL_IMAGE .
    - docker push $GL_IMAGE

  • stage: This job is only executed in the build stage.
  • image: Uses the docker:latest image, which has all the needed Docker CLI tooling built in.
  • services: As I'm building Docker images, I need a Docker daemon available. dind refers to Docker-in-Docker, so the Docker daemon used in this job is actually running in a container.
  • script: Log in to the GitLab registry, build the "local" image and push it.

Tag

I defined a special tag stage because I want to tag the image as a separate step from building it. As I also want to tag the images differently depending on the branch or tag the pipeline is executing on, it's good to have this as a separate stage.

There are actually two different jobs defined for the tag stage.

tag-smoke:
  stage: tag
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker pull $GL_IMAGE
    - docker login -u "$IMAGES_KONTENA_IO_USER" -p "$IMAGES_KONTENA_IO_PASSWORD" images.kontena.io
    - docker tag $GL_IMAGE $SMOKE_IMAGE
    - docker push $SMOKE_IMAGE
  only:
    - branches
  except:
    - master

The tag-smoke job basically takes the existing image built in the build-image job and tags it with the branch name. So for example, if my branch name is feature/cool-feature, it tags and pushes the image images.kontena.io/jussi/todo-example:feature-cool-feature. GitLab CI automatically "slugifies"* branch names so that they can be easily used in many places. This job is only run for branches whose name != master.

*) Slugification:

Commit reference lowercased, shortened to 63 bytes, and with everything except 0–9 and a-z replaced with -. No leading / trailing -. Use in URLs, host names and domain names.

Naturally I need to provide the Kontena Image Registry login details as secret variables in GitLab CI. For creating the needed token, consult the image registry docs.

To tag the image for production, the job is pretty much the same, except for the actual tag:

tag-prod:
  stage: tag
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker pull $GL_IMAGE
    - docker login -u "$IMAGES_KONTENA_IO_USER" -p "$IMAGES_KONTENA_IO_PASSWORD" images.kontena.io
    - docker tag $GL_IMAGE $PROD_IMAGE
    - docker push $PROD_IMAGE
  only:
    - tags

In my case, I want to tag the image with the git tag so that it’s obvious to everyone which version of the app is running at any given point. And as I’m doing production deployments only from tags, this job is restricted to tags.
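
So a production deployment is triggered straight from git. For example (the tag name here is illustrative):

$ git tag -a v1.0.0 -m "Release 1.0.0"
$ git push origin v1.0.0    # triggers the pipeline; tag-prod pushes images.kontena.io/jussi/todo-example:v1.0.0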

Deploy

Now we have the image built and pushed to the Kontena hosted image registry. That’s kinda the first step of getting the application up-and-running. The second step is to instruct Kontena to deploy the app with the new image.

As with the tag stage, the deployment is naturally split into different jobs based on the target environment.

deploy-smoke:
  stage: deploy
  image:
    name: kontena/cli:latest
    entrypoint: ["/bin/sh", "-c"]
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: http://$CI_COMMIT_REF_SLUG.todo-testing.kontena.works
    on_stop: stop_smoke
  variables:
    KONTENA_URL: $KONTENA_SMOKE_URL
    KONTENA_GRID: $KONTENA_SMOKE_GRID
    KONTENA_TOKEN: $KONTENA_SMOKE_TOKEN
    VHOSTS: $CI_COMMIT_REF_SLUG.todo-testing.kontena.works
    CI_COMMIT_TAG: $CI_COMMIT_REF_SLUG
  script:
    - kontena stack install --name $SMOKESTACK || kontena stack upgrade $SMOKESTACK
  only:
    - branches
  except:
    - master

The deploy-smoke job defines a dynamic environment based on the branch name it is triggered on. I use a shared test platform to deploy all branches to, so my application stack needs a couple of Kontena Stack variables:

variables:
  release:
    type: string
    from:
      env: CI_COMMIT_TAG
  vhosts:
    type: string
    from:
      env: VHOSTS

With these I’m now easily able to deploy multiple different branches onto the same platform, just using different names for the stacks and different variable values.

Assuming I’m working on a branch named feature/cool-thing, this job deploys my application stack under the name tododev-feature-cool-thing to my common testing platform. The deployment uses an image tagged feature-cool-thing, and the platform loadbalancer is configured to forward all traffic for the domain feature-cool-thing.todo-testing.kontena.works to this specific application version. So we automatically get a branch-specific sample app up-and-running.
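
To show how those stack variables end up being used, here’s a hedged sketch of the relevant parts of the application’s stack file, using Kontena’s Liquid-style variable interpolation. The service name and loadbalancer wiring are my assumptions for illustration, not copied verbatim from the example project:

# illustrative excerpt of the application stack file (kontena.yml)
stack: jussi/todo-example
version: '0.0.1'
# ... the variables section shown above ...
services:
  web:
    image: "images.kontena.io/jussi/todo-example:{{ release }}"   # image tag injected by the pipeline
    environment:
      - KONTENA_LB_VIRTUAL_HOSTS={{ vhosts }}                     # hostname(s) the loadbalancer routes to this stack
    links:
      - loadbalancer/lb                                           # assumed separately deployed Kontena Loadbalancer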

But how do these arbitrary, dynamic DNS names actually point to something accessible? It is possible to create a kind of wildcard DNS entry using CNAME aliasing. What I’ve done is the following (sketched as zone-file entries after the list):

  • define proper DNS A records to map my test platform loadbalancer(s) public IP address(es) to a domain name. For example IPs 1.2.3.4 and 2.3.4.5 mapped to name test-platform.kontena.works
  • create a wildcard CNAME *.todo-testing.kontena.works that points to test-platform.kontena.works
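
In zone-file terms, and using the example IPs from above, that amounts to something like this (TTLs illustrative):

test-platform.kontena.works.    300  IN  A      1.2.3.4
test-platform.kontena.works.    300  IN  A      2.3.4.5
*.todo-testing.kontena.works.   300  IN  CNAME  test-platform.kontena.works.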

As a result, foo.todo-testing.kontena.works, bar.todo-testing.kontena.works, and so on, magically point to the same public IP addresses, and the Kontena Loadbalancer running there can proxy the traffic to the correct stacks based on the incoming hostname. Pretty neat.

The dynamic deployment from any branch gives a nice and super easy way to see the branch live. When we use these dynamic environments in GitLab CI, we can even see them in the merge requests.

As you probably guessed, the staging and prod deployments use pretty much the same configuration.

deploy-stack:
  stage: deploy
  image:
    name: kontena/cli:latest
    entrypoint: ["/bin/sh", "-c"]
  environment:
    name: production
    url: https://todo-demo.kontena.works
  variables:
    KONTENA_URL: $KONTENA_PROD_URL
    KONTENA_GRID: $KONTENA_PROD_GRID
    KONTENA_TOKEN: $KONTENA_PROD_TOKEN
    VHOSTS: todo-demo.kontena.works
  script:
    - kontena stack install || kontena stack upgrade todo
  only:
    - tags

We just change the target environment, and the secrets it needs, and limit the triggering to tags for the production deployment and to the master branch for the staging deployment.
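
The staging job isn’t shown in full above, but it would look roughly like the sketch below. The KONTENA_STAGING_* variable names and the staging URL are my assumptions, not values from the example project:

deploy-staging:
  stage: deploy
  image:
    name: kontena/cli:latest
    entrypoint: ["/bin/sh", "-c"]
  environment:
    name: staging
    url: https://staging.todo-demo.kontena.works   # assumed staging hostname
  variables:
    KONTENA_URL: $KONTENA_STAGING_URL              # assumed secret variable names
    KONTENA_GRID: $KONTENA_STAGING_GRID
    KONTENA_TOKEN: $KONTENA_STAGING_TOKEN
    VHOSTS: staging.todo-demo.kontena.works
  script:
    - kontena stack install || kontena stack upgrade todo
  only:
    - master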

There’s one special type of job in the deploy stage called stop_smoke.

stop_smoke:
  stage: deploy
  image:
    name: kontena/cli:latest
    entrypoint: ["/bin/sh", "-c"]
  variables:
    KONTENA_URL: $KONTENA_SMOKE_URL
    KONTENA_GRID: $KONTENA_SMOKE_GRID
    KONTENA_TOKEN: $KONTENA_SMOKE_TOKEN
    GIT_STRATEGY: none
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  script:
    - kontena stack rm --force $SMOKESTACK
  only:
    - branches
  except:
    - master

It’s basically a “hook” which can automatically stop the branch-specific dynamic environment. As these dynamic envs run as separate stacks on the same testing platform, stopping an env is pretty straightforward: we just completely remove the branch-specific stack installation. This again ties nicely into the environments on the GitLab side.

Smoke test

The last stage and job executes automated UI smoke testing, which is run only for feature branches.

smoke-test:
  stage: smoke-test
  image: ruby:2.3
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: http://$CI_COMMIT_REF_SLUG.todo-testing.kontena.works
  script:
    - bundle install --path=cache/bundler
    - curl -L https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 | tar -xj -C /usr/bin --transform='s,.*/,,' phantomjs-2.1.1-linux-x86_64/bin/phantomjs
    - rspec integration-spec/
  only:
    - branches
  except:
    - master

The smoke tests use Capybara, Poltergeist and PhantomJS to exercise the application through its web UI in headless mode. The tests are written in Ruby, so I’ll execute them using the standard ruby:2.3 image. PhantomJS needs its binaries in place, so I grab them dynamically before running the actual tests.
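
As a teaser, a minimal spec of this kind could look something like the following. This is my sketch of the technique, not the actual tests from the example project:

# integration-spec/smoke_spec.rb (illustrative)
require 'capybara/rspec'
require 'capybara/poltergeist'

Capybara.default_driver = :poltergeist
Capybara.run_server = false   # test the remote deployment, don't boot the app locally
Capybara.app_host = "http://#{ENV['CI_COMMIT_REF_SLUG']}.todo-testing.kontena.works"

describe 'Todo app', type: :feature do
  it 'serves the front page' do
    visit '/'
    expect(page).to have_css('body')
  end
end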

How the smoke tests work and actually test the UI is worth its own post that I’ll publish in the near future.

Using containers in your build and deployment pipeline makes so much sense, not only for actually deploying your own software but also as the basic execution environment for the pipeline itself. When you combine that with a super easy-to-use container platform like Kontena, you can have your pipeline running within hours. Once you use this kind of automated deployment pipeline, you’ll also notice one thing pretty soon: you no longer feel comfortable deploying manually; your pipeline is the only thing you trust to do deployments.

If you want to see how easy it can be to ship your applications using automated pipelines, sign up to Kontena Cloud, spin up your test platform and try the sample project, for example by forking it.


Originally published at blog.kontena.io on January 16, 2018.
