GitLab CI/CD from git to K8s

Mohammed Ragab
Jan 23

In this article I will explain how to use GitLab CI/CD to build, test, and deploy a Node.js app to K8s (via the Rancher API).

What is GitLab?

GitLab is a complete DevOps platform. One application with endless possibilities. Organizations rely on GitLab’s source code management, CI/CD, security, and more to deliver software rapidly.

What is GitLab CI/CD?

GitLab CI (Continuous Integration) is a service, part of GitLab, that builds and tests the software whenever a developer pushes code to the repository. GitLab CD (Continuous Deployment) places every code change in production, which results in everyday production deployments.

• It is a fast system for code development and deployment.
• You can execute jobs faster by setting up your own runner (the application that processes the builds) with all dependencies pre-installed.
• GitLab CI is economical and secure, and its cost is as flexible as the machine you run it on.
• It allows project team members to integrate their work daily, so integration errors can be identified easily by an automated build.

By default, GitLab CI uses the DIND (Docker-in-Docker) service to build and push Docker images, which means running a Docker daemon inside a Docker container.
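For contrast, a typical DIND-based build job looks roughly like this (a minimal sketch; the job name and image tag are placeholders, while the registry variables are GitLab's predefined ones):

build-image:
  image: docker:latest
  services:
    - docker:dind
  variables:
    # Disable TLS between job container and the dind service for simplicity
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest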

What is the problem with using DIND?

If your GitLab runners live outside Kubernetes, this is not a big problem: you can use Docker directly on your runner machines. In K8s, however, the DIND service needs a lot of privileges to be opened, and there are some general problems with DIND as well, such as:

1- Linux Security Modules (LSM) such as SELinux: when starting a container, the "inner Docker" might try to apply security profiles that conflict with or confuse the "outer Docker." This was actually the hardest problem to solve when merging the original implementation of the --privileged flag.

2- When you run Docker in Docker, the outer Docker runs on top of a normal filesystem (EXT4, BTRFS, what have you) but the inner Docker runs on top of a copy-on-write system (AUFS, BTRFS, etc., depending on what the outer Docker is set up to use). There are many combinations that won't work. For example, you cannot run AUFS on top of AUFS. If you run BTRFS on top of BTRFS, it should work at first, but once you have nested subvolumes, removing the parent subvolume will fail.

So I will use the Kaniko executor to build and push Docker images in the CI/CD pipeline. You can still use DIND if you keep the issues above in mind.

Kaniko is a tool to build container images from a Dockerfile, inside a container or a Kubernetes cluster. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.

Why have I selected Kaniko?

1- It does not need a Docker daemon.

2- It does not require any special privileges or permissions.

3- You can run Kaniko in a standard Kubernetes cluster, in Google Kubernetes Engine, or in any environment that cannot grant access to privileges or a Docker daemon.

How does Kaniko work?

Kaniko runs inside a container itself: it pulls the base image's filesystem, executes each Dockerfile instruction in userspace, snapshots the filesystem after each instruction to produce the image layers, and finally pushes the finished image to the registry, all without a Docker daemon.

Now that I have explained the tools and technologies, let's start writing the pipeline.

We need to do the following:

1- Install the node modules and build the Node.js app

2- Test the app and publish a JUnit test report

3- Build and push the Docker image using the Kaniko executor

4- Clone the Helm chart from its git repository and edit it with a scripting language; I will use Python with PyYAML

5- Use the Rancher CLI and API to refresh the catalog and upgrade the app

Rancher is an open-source software that combines everything an organization needs to adopt and run containers in production. Built on Kubernetes, Rancher makes it easy for DevOps teams to test, deploy, and manage their applications.

GitLab pipelines are written in YAML.

  • First, we need to define the pipeline stages:

stages:
  - build
  - test
  - docker-build
  - bump-helm-chart-version
  - deployment
  • For the build stage, I will use the node image to build the Node.js app; you can use any image that fits your stack, such as .NET Core, Java, Ruby, etc.:
build:
  stage: build
  image: node
  script:
    - echo "Start building App"
    - npm install
    - npm run build
    - echo "Build successfully!"
  artifacts:
    expire_in: 1 hour
    paths:
      - build/dist
      - node_modules/

In the build stage artifacts, we keep the build output and node_modules so later stages can reuse them.

  • For the testing stage, I will use the node image as well:

test:
  stage: test
  image: node
  script:
    - echo "Testing App"
    - npm run test
    - echo "Test successfully!"
  artifacts:
    expire_in: 1 hour
    paths:
      - test-reports/jest-junit.xml
    reports:
      junit:
        - test-reports/jest-junit.xml

I declared the JUnit report file under artifacts:reports:junit so GitLab can parse and display the test results for the test stage.
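For this report to exist, the test runner has to write JUnit XML to that path. Assuming Jest with the jest-junit reporter (this configuration is not shown in the original pipeline, so treat it as an illustrative sketch), the relevant part of package.json could look like:

{
  "scripts": {
    "test": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "jest-junit": {
    "outputDirectory": "test-reports",
    "outputName": "jest-junit.xml"
  }
}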

  • To build and push the Docker image, I will use the Kaniko executor image:
docker-build:
  image: gcr.io/kaniko-project/executor:debug
  stage: docker-build
  script:
    - export VERSION=$(cat package.json | grep version | head -1 | awk -F: '{ print $2 }' | sed 's/[",]//g' | tr -d '[[:space:]]')
    - echo "app version $VERSION"
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ./ --dockerfile build/dockerfile --insecure --skip-tls-verify --destination $CI_REGISTRY/amlt/test-node-js-app:$VERSION
    - echo "Image built and pushed successfully"

First, I exported the app version from package.json (you can version your app however you like) using a grep/awk/sed one-liner.
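As a side note, a simpler and less fragile way to read the version, assuming Node.js is available on the job's image (it is on the node image, but not on the Kaniko debug or plain Ubuntu images used in this pipeline), would be:

export VERSION=$(node -p "require('./package.json').version")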

Then I created the config.json for Kaniko that contains the Docker registry credentials. I used GitLab's pre-defined environment variables; you can also define your own variables for your pipeline: from your git repository, go to Settings and select CI/CD.

Finally, I ran the Kaniko build/push command.

Clone and edit K8s helm charts

In this stage, I will use a plain Ubuntu image and prepare the following tools:

1- An SSH agent, to connect to the Helm charts git repository for cloning, committing, and pushing

2- Python and pip, to install PyYAML for editing the Helm charts

3- Git

bump-helm-chart-version:
  stage: bump-helm-chart-version
  image: ubuntu:18.04
  before_script:
    - export VERSION=$(cat package.json | grep version | head -1 | awk -F: '{ print $2 }' | sed 's/[",]//g' | tr -d '[[:space:]]')
    - apt-get update -y && apt-get install openssh-client -y
    - apt install git -y
    - apt-get install -y python
    - apt-get install python-pip -y
    - python -m pip install pyyaml
    - eval $(ssh-agent -s)
    - echo "$GIT_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $GIT_HOST >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - git config --global user.name "${GIT_USER_NAME}"
    - git config --global user.email "${GIT_USER_EMAIL}"

In the before_script, I prepared the image, installed the required tools, and told git the user name and email so it does not ask for them during cloning or pushing.

In the script, I first connect to the git host using the ssh-agent, then clone the Helm charts repository.

I take a copy of the original chart version before editing it, to keep versioning in Rancher for upgrade and downgrade purposes.

I wrote a simple Python script to edit the YAML values, such as the Docker image tag, chart version, and app version:

import os

import yaml

version = os.environ["VERSION"]

# Point the chart at the freshly pushed Docker image tag.
with open("values.yaml") as f:
    y = yaml.safe_load(f)
y["image"]["tag"] = version
with open("values.yaml", "w") as f:
    yaml.dump(y, f)

# Bump the chart version and app version to match.
with open("Chart.yaml") as f:
    y = yaml.safe_load(f)
y["version"] = version
y["appVersion"] = version
with open("Chart.yaml", "w") as f:
    yaml.dump(y, f)

Finally, I committed and pushed the modifications to the git repository.
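The script: part of this job is not shown in the article; a minimal sketch of what it could contain, with a hypothetical chart repository path and script file name, is:

  script:
    - git clone git@$GIT_HOST:amlt/helm-charts.git  # hypothetical repository path
    - cp -r helm-charts/test-node-js-app helm-charts/test-node-js-app-$VERSION  # keep a copy per version
    - cd helm-charts/test-node-js-app-$VERSION
    - python ../../bump_version.py  # the PyYAML script above, assumed saved as bump_version.py
    - git add -A && git commit -m "Bump chart to $VERSION"
    - git push origin master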

In the last stage, we upgrade the app in K8s using the Rancher API.

Here I used a Linux image with the Rancher CLI.

You can use a direct HTTP call to the Rancher API as well.

deployment:
  image: ubuntu:18.04
  stage: deployment
  before_script:
    - echo "Configure machine"
    - export VERSION=$(cat package.json | grep version | head -1 | awk -F: '{ print $2 }' | sed 's/[",]//g' | tr -d '[[:space:]]')
    - apt-get update
    - apt install curl -y
    - curl -k $RANCHER_CLI_URL | tar xz
  script:
    - echo "Start deployment"
    - cd $RANCHER_CLI_FOLDER_NAME
    - ls -t
    - ./rancher login $RANCHER_URL/v3 --token $RANCHER_API_TOKEN --skip-verify --context $RANCHER_CONTEXT
    - |
      curl -k --location --request POST "$REFRESH_CATELOG_URL" --header "Authorization: Bearer $RANCHER_API_TOKEN" && sleep 20
    - ./rancher app upgrade $RANCHER_APP_NAME $VERSION
    - echo "The App was successfully upgraded"

First, I prepared the image by downloading the Rancher CLI from a URL stored in a GitLab variable, so it can be changed easily.

  • I authenticated to the Rancher API using a Rancher access token stored in a variable
  • In the catalog-refresh step, I used a direct HTTP call as a workaround, because the CLI does not read/refresh project-scoped catalogs
  • Finally, I used the app version from package.json to upgrade the app in K8s via Rancher (remember, you can use your own versioning, such as git tags; see the snippet below)
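For example, a sketch using GitLab's predefined CI_COMMIT_TAG variable (set only on pipelines triggered by pushing a tag) instead of parsing package.json:

export VERSION=$CI_COMMIT_TAG  # e.g. a pipeline triggered by pushing tag 1.2.3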

Now your app is upgraded, the workload runs the new Docker image, and you have saved yourself manual work.

In the end, I hope I have given you ideas for solving your own deployment problems.

If you are using Jenkins instead of GitLab, I recommend checking my article on the same topic.
