CI/CD with Gitlab & Kubernetes

Today, after a few experiments, I’m able to show you how to deploy a complete CI/CD pipeline based on GitLab and Kubernetes, with multiple environments and automatic deployment into the cluster.

Valentin Ouvrard
DevOpsTricks

--

To reproduce this configuration at home, you’ll need a working Kubernetes cluster with two namespaces (default / dev) and a working GitLab server with a runner attached to the relevant git repository. You can push your Docker images to Docker Hub, but I prefer to use GitLab’s internal registry.
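
If the dev namespace doesn’t exist yet, a minimal sketch for creating it (assuming kubectl already points at your cluster):

# the default namespace already exists; only dev needs to be created
kubectl create namespace dev
kubectl get namespaces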

The example website is called “Tropical Hosting” (a sample website for a hosting company). To make it work, we need the following structure in our git repo:

  • .gitlab-ci.yml (used by GitLab to manage our CI)
  • Dockerfile (used by Docker to build our application)
  • deploy.sh (used by the CI to deploy our application to Kubernetes)
  • templates/*.yml (our Kubernetes resources: deployment, service, ingresses…)

In my case, I use two environments (dev | prod) on two different virtual hosts handled by the Nginx Ingress Controller (a reverse proxy for Kubernetes) with TLS termination.

We will start with the Dockerfile:

FROM debian:jessie
MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
ENV NGINX_VERSION 1.11.9-1~jessie
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 \
 && echo "deb http://nginx.org/packages/mainline/debian/ jessie nginx" >> /etc/apt/sources.list \
 && apt-get update \
 && apt-get install --no-install-recommends --no-install-suggests -y \
    ca-certificates \
    nginx=${NGINX_VERSION} \
    nginx-module-xslt \
    nginx-module-geoip \
    nginx-module-image-filter \
    nginx-module-perl \
    nginx-module-njs \
    gettext-base \
 && rm -rf /var/lib/apt/lists/*
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
COPY website /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

As you can see, I copy my static website from the website/ path into my Nginx Docker container, so every modification to the code base creates a new dedicated Docker image.
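
If you want to check the image before the CI builds it, here is a quick local sketch (the tropicalhosting:local tag is only for testing and is not part of the pipeline):

docker build -t tropicalhosting:local .
docker run --rm -p 8080:80 tropicalhosting:local
# the static site should now answer on http://localhost:8080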

Here is my .gitlab-ci.yml file with two major steps:

- build (build and push the image with a <branch>_<commit> tag)
- deploy (to dev | prod using kubectl and my deploy.sh script)

variables:
  DOCKER_DRIVER: overlay
  IMAGE_NAME: "gitlabregistry/user/tropicalhosting"

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  cache:
    key: "builder"
    paths:
      - ./.build
  script:
    - docker version
    - docker build --pull -t "$IMAGE_NAME:${CI_BUILD_REF_NAME}_${CI_BUILD_REF}" .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlabregistry
    - docker push "$IMAGE_NAME:${CI_BUILD_REF_NAME}_${CI_BUILD_REF}"

k8s-deploy-Dev:
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  stage: deploy
  script:
    - kubectl config set-cluster my-cluster --server="$K8S_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --token="$K8S_TOKEN"
    - kubectl config set-context default-context --cluster=my-cluster --user=admin
    - kubectl config use-context default-context
    - kubectl get cs
    - /bin/sh deploy.sh ${CI_BUILD_REF_NAME}_${CI_BUILD_REF} dev dev.tropicalhosting.com
  environment:
    name: dev
    url: https://dev.tropicalhosting.com
  only:
    - dev

k8s-deploy-Prod:
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  stage: deploy
  script:
    - kubectl config set-cluster my-cluster --server="$K8S_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --token="$K8S_TOKEN"
    - kubectl config set-context default-context --cluster=my-cluster --user=admin
    - kubectl config use-context default-context
    - kubectl get cs
    - /bin/sh deploy.sh ${CI_BUILD_REF_NAME}_${CI_BUILD_REF} default tropicalhosting.com
  environment:
    name: production
    url: https://tropicalhosting.com
  only:
    - master

There are three code blocks: one for the build and two for deployment, one for dev and one for prod. So you can easily add environments (pre-prod, testing…) by adding a git branch and a matching deploy block, as shown below.
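
For example, a pre-prod block could look roughly like this; the preprod branch name, namespace and vhost are assumptions you would adapt, and the namespace has to exist in the cluster:

k8s-deploy-Preprod:
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  stage: deploy
  script:
    - kubectl config set-cluster my-cluster --server="$K8S_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --token="$K8S_TOKEN"
    - kubectl config set-context default-context --cluster=my-cluster --user=admin
    - kubectl config use-context default-context
    - /bin/sh deploy.sh ${CI_BUILD_REF_NAME}_${CI_BUILD_REF} preprod preprod.tropicalhosting.com
  environment:
    name: preprod
    url: https://preprod.tropicalhosting.com
  only:
    - preprod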

I’m using two variables in this file, defined in the GitLab repo’s CI variables:
- $K8S_URL to reach the k8s API
- $K8S_TOKEN, the token for the admin user (one way to obtain such a token is sketched below)
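
On Kubernetes clusters of that generation, a service account token can be read from its auto-generated secret. This is only a sketch, and the gitlab service account name (and its RBAC rights) is an assumption:

# read the token of a (hypothetical) gitlab service account in kube-system
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa gitlab -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode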

After this, I have my k8s files in the templates/ directory (for example, deployment.yml):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tropicalhosting
  namespace: ${NAMESPACE}
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: tropicalhosting
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: gitlabregistry/user/tropicalhosting:${BUILD_NUMBER}
          name: tropicalhosting
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
      imagePullSecrets:
        - name: registrypullsecret

You can add more resources depending on how your application works. In my case, I have one deployment, one service and one ingress (TLS).
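
For reference, the service and ingress templates could look roughly like this. The resource names and the tropicalhosting-tls TLS secret are assumptions; ${ENV_URL} is the vhost exported by deploy.sh:

# templates/service.yml
apiVersion: v1
kind: Service
metadata:
  name: tropicalhosting
  namespace: ${NAMESPACE}
spec:
  selector:
    app: tropicalhosting
  ports:
    - port: 80
      targetPort: 80

# templates/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tropicalhosting
  namespace: ${NAMESPACE}
spec:
  tls:
    - hosts:
        - ${ENV_URL}
      secretName: tropicalhosting-tls
  rules:
    - host: ${ENV_URL}
      http:
        paths:
          - path: /
            backend:
              serviceName: tropicalhosting
              servicePort: 80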

Don’t forget to create your pull secret for your private registry if you don’t use the public Docker Hub registry.
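
A sketch for creating the registrypullsecret referenced by the deployment; the registry host and the credential placeholders are values you would replace with your own, and the secret is needed in every target namespace:

kubectl --namespace=dev create secret docker-registry registrypullsecret \
  --docker-server=gitlabregistry \
  --docker-username=<user> \
  --docker-password=<token> \
  --docker-email=<email>
# repeat with --namespace=default for production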

As you’ll see, my k8s *.yml files contain some variables. To parse them and replace them with the correct values (image tag, vhost for my ingress…), I use a deploy.sh script (with the great envsubst tool):

#!/usr/bin/env bash
# Arguments: image tag, target namespace, vhost URL
TAG=${1}
NS=${2}
URL=${3}
export BUILD_NUMBER=${TAG}
export NAMESPACE=${NS}
export ENV_URL=${URL}
# Substitute the exported variables into every template, then apply the result
for f in templates/*.yml
do
  envsubst < "$f" > ".generated/$(basename "$f")"
done
kubectl apply -f .generated/

It is quite simple: it takes three arguments, the image tag (<branch>_<commit>), the k8s namespace and the vhost URL.
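
An example invocation, as the CI runs it (the commit hash in the tag is illustrative):

/bin/sh deploy.sh dev_9f8e7d6 dev dev.tropicalhosting.com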

All the k8s resources applied to the cluster are generated in the .generated/ folder, which I included empty in the git repo. You can put all the resources your application needs in the templates/ folder and they will be parsed by envsubst.

With this setup, you can commit to your dev branch and, within a minute or so, see the result in your dev environment. If all your changes are correct, you can simply merge them into the master branch to deploy them in production.
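
The day-to-day flow then looks something like this (file paths and commit message are just an example):

git checkout dev
git add website/
git commit -m "Update landing page"
git push origin dev          # triggers build + k8s-deploy-Dev
git checkout master
git merge dev
git push origin master       # triggers build + k8s-deploy-Prod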

(Screenshots: the pipeline’s builds and the pipeline’s environments in GitLab.)

As you can see, I use one pod for my dev environment, while my production environment is scaled to 5 pods:
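
One way to do that is to scale the production deployment directly (or set replicas in a prod-specific template):

kubectl --namespace=default scale deployment tropicalhosting --replicas=5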
