Even the smallest side project deserves its own CI/CD pipeline


With today’s tools, setting up a simple CI/CD pipeline is not difficult, and doing so even for a side project is a great way to learn many things. Docker, GitLab and Portainer are some great components to use for this setup.

The sample project

As an organizer of technical events in the Sophia-Antipolis area (southern France), I was often asked if there was a way to know about all the upcoming events (the meetups, the JUGs, the ones organized by the local associations, …). As there was not a single place which listed them all, I came up with https://sophia.events, a very simple web page which tries to keep an up-to-date list of such events. This project is available on GitLab.

Disclaimer: this is a damn simple project, but the complexity of the project is not important here. The components of the CI/CD pipeline we will detail can be used in pretty much the same way for much more complicated projects. They are a particularly nice fit for micro-services, though.

Quick look into the code

Basically, to keep things dead simple, there is an events.json file to which each new event is added. Part of this file is shown in the snippet below (sorry for the parts in French).

"events": [
  {
    "title": "All Day DevOps 2018",
    "desc": "We're back with 100, 30-minute practitioner-led sessions and live Q&A on Slack. Our 5 tracks include CI/CD, Cloud-Native Infrastructure, DevSecOps, Cultural Transformations, and Site Reliability Engineering. 24 hours. 112 speakers. Free online.",
    "date": "17 octobre 2018, online event",
    "ts": "20181017T000000",
    "link": "https://www.alldaydevops.com/",
    "sponsors": [{"name": "all-day-devops"}]
  },
  {
    "title": "Création d'une Blockchain d'entreprise (lab) & introduction aux smart contracts",
    "desc": "Venez avec votre laptop ! Nous vous proposons de nous rejoindre pour réaliser la création d'un premier prototype d'une Blockchain d'entreprise (Lab) et avoir une introduction aux smart contracts.",
    "ts": "20181004T181500",
    "date": "4 octobre à 18h15 au CEEI",
    "link": "https://www.meetup.com/fr-FR/IBM-Cloud-Cote-d-Azur-Meetup/events/254472667/",
    "sponsors": [{"name": "ibm"}]
  }
]


A mustache template is applied to this file to generate the final web assets.
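The actual index.mustache is not reproduced in this article; as a purely hypothetical illustration (the markup below is invented, only the field names come from the events.json snippet above), a mustache template iterating over the events array could look like this:

```mustache
<ul>
  {{#events}}
  <li>
    <a href="{{link}}">{{title}}</a> ({{date}})
    <p>{{desc}}</p>
  </li>
  {{/events}}
</ul>
```

The mustache CLI invoked during the build simply merges events.json into such a template to produce index.html.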

Docker multi-stage build

Once the web assets are generated, they are copied into an nginx image, which is the one deployed on the target machine.

The build is thus done in two stages, thanks to Docker's multi-stage build:

  • generation of the assets
  • creation of the final image containing the assets

The Dockerfile used for the build is the following one:

# Generate the assets
FROM node:8.12.0-alpine AS build
COPY . /build
WORKDIR /build
RUN npm i
RUN node clean.js
RUN ./node_modules/mustache/bin/mustache events.json index.mustache > index.html
# Build the final image used to serve them
FROM nginx:1.14.0
COPY --from=build /build/*.html /usr/share/nginx/html/
COPY events.json /usr/share/nginx/html/
COPY css /usr/share/nginx/html/css
COPY js /usr/share/nginx/html/js
COPY img /usr/share/nginx/html/img
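The clean.js step above is not detailed in the article; as a purely hypothetical sketch (the function names and the filtering behavior are assumptions, only the "ts" timestamp format comes from events.json), a script of that kind could drop events that are already over:

```javascript
// Hypothetical sketch only: the real clean.js is not shown in the article.
// Parse the compact "YYYYMMDDTHHMMSS" format used by the "ts" field in events.json.
function parseTs(ts) {
  return new Date(
    Number(ts.slice(0, 4)),      // year
    Number(ts.slice(4, 6)) - 1,  // month (0-based in JS dates)
    Number(ts.slice(6, 8)),      // day
    Number(ts.slice(9, 11)),     // hours
    Number(ts.slice(11, 13)),    // minutes
    Number(ts.slice(13, 15))     // seconds
  );
}

// Keep only the events that are not over yet.
function dropPastEvents(events, now) {
  return events.filter((evt) => parseTs(evt.ts) >= now);
}
```

In the real project such a script would read and rewrite events.json; here it is only meant to show the shape of that kind of cleanup step.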

Local testing

In order to test the generation of the site, just clone the repo and run the test.sh script. It creates an image and runs a container from it:

$ git clone git@gitlab.com:lucj/sophia.events.git
$ cd sophia.events
$ ./test.sh
Sending build context to Docker daemon 2.588MB
Step 1/12 : FROM node:8.12.0-alpine AS build
---> df48b68da02a
Step 2/12 : COPY . /build
---> f4005274aadf
Step 3/12 : WORKDIR /build
---> Running in 5222c3b6cf12
Removing intermediate container 5222c3b6cf12
---> 81947306e4af
Step 4/12 : RUN npm i
---> Running in de4e6182036b
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN www@1.0.0 No repository field.
added 2 packages from 3 contributors and audited 2 packages in 1.675s
found 0 vulnerabilities
Removing intermediate container de4e6182036b
---> d0eb4627e01f
Step 5/12 : RUN node clean.js
---> Running in f4d3c4745901
Removing intermediate container f4d3c4745901
---> 602987ce7162
Step 6/12 : RUN ./node_modules/mustache/bin/mustache events.json index.mustache > index.html
---> Running in 05b5ebd73b89
Removing intermediate container 05b5ebd73b89
---> d982ff9cc61c
Step 7/12 : FROM nginx:1.14.0
---> 86898218889a
Step 8/12 : COPY --from=build /build/*.html /usr/share/nginx/html/
---> Using cache
---> e0c25127223f
Step 9/12 : COPY events.json /usr/share/nginx/html/
---> Using cache
---> 64e8a1c5e79d
Step 10/12 : COPY css /usr/share/nginx/html/css
---> Using cache
---> e524c31b64c2
Step 11/12 : COPY js /usr/share/nginx/html/js
---> Using cache
---> 1ef9dece9bb4
Step 12/12 : COPY img /usr/share/nginx/html/img
---> e50bf7836d2f
Successfully built e50bf7836d2f
Successfully tagged registry.gitlab.com/lucj/sophia.events:latest
=> web site available on http://localhost:32768

Using the URL provided at the end of the output, we can access the web page.

The target environment

A virtual machine provisioned on a cloud provider

As you have probably noticed, this web site is not critical (only a few dozen visits a day), and as such it only runs on a single virtual machine. This VM was created with Docker Machine on Exoscale, a great European cloud provider.

BTW, if you want to give Exoscale a try, ping me and I could provide some 20€ vouchers.

Docker daemon in swarm mode

The Docker daemon running on the above VM is configured to run in swarm mode, which allows us to use the stack, service, config and secret primitives and the great (and easy to use) orchestration capabilities of Docker Swarm.

The application running as a Docker stack

The following file defines the service which runs the nginx web server containing the web assets.

version: "3.7"
services:
  www:
    image: registry.gitlab.com/lucj/sophia.events
    networks:
      - proxy
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
networks:
  proxy:
    external: true

Some explanations:

  • the image is in the private registry hosted on gitlab.com (no Docker Hub involved here)
  • the service is in replicated mode with 2 replicas, meaning that 2 tasks / containers of the service run at the same time. A VIP (virtual IP address) is associated with the service by Swarm, so that each request targeting the service is load balanced between the 2 replicas
  • each time the service is updated (to deploy a new version of the web site), one replica is updated first, then the second one 10 seconds later. This ensures the web site remains available during the update process. We could also have used a rollback strategy, but it’s not needed at this point
  • the service is attached to the external proxy network, so the TLS termination (running in another service deployed on the swarm, but outside of this project) can send requests to the www service

This stack is run with the following command:

$ docker stack deploy -c sophia.yml sophia_events

Portainer to manage them all

Portainer is a great web UI which makes it very easy to manage Docker hosts and Docker Swarm clusters. Below is a screenshot of the Portainer interface listing the stacks available in the swarm.

The current setup shows 3 stacks:

  • Portainer itself
  • sophia_events which contains the service running our web site
  • tls, the TLS termination

If we list the details of the www service, which is part of the sophia_events stack, we can see that the Service webhook is activated. This feature, available since Portainer 1.19.2 (the latest version at the time of writing), allows us to define an HTTP POST endpoint that can be called to trigger an update of the service. As we will see later on, the GitLab runner is in charge of calling this webhook.

Note: as you can see from the screenshot, I access the Portainer UI on localhost:8888. As I do not want to expose the Portainer instance to the outside world, access is done through an ssh tunnel, opened with the following command:

ssh -i ~/.docker/machine/machines/labs/id_rsa -NL 8888:localhost:9000 $USER@$HOST

This way, all requests targeting the local machine on port 8888 are forwarded through ssh to port 9000 on the virtual machine. 9000 is the port on which Portainer runs on the VM, but this port is not open to the outside world, as it is blocked by a Security Group in the Exoscale configuration.

Note: in the above command, the ssh key used to connect to the VM is the one generated by Docker Machine during the VM creation.

GitLab runner

A GitLab runner is a process in charge of executing the actions defined in the .gitlab-ci.yml file. For this project, we define our own runner, running as a container on the VM.

The first step is to register the runner, providing a couple of options:

docker run --rm -t -i \
  -v $CONFIG_FOLDER:/etc/gitlab-runner \
  gitlab/gitlab-runner register \
  --non-interactive \
  --executor "docker" \
  --docker-image docker:stable \
  --url "https://gitlab.com/" \
  --registration-token "$PROJECT_TOKEN" \
  --description "Exoscale Docker Runner" \
  --tag-list "docker" \
  --run-untagged \
  --locked="false"
Among those options, PROJECT_TOKEN is retrieved from the project page on GitLab.com; it is used to register external runners.

The registration token to be used to register a new runner

When the runner is registered, we need to start it:

docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $CONFIG_FOLDER:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner

Once it’s registered and started, the runner is listed in the project page on GitLab.com.

The runner created for this project

This runner will then receive work each time a new commit is pushed to the repository. It sequentially performs the test, build and deploy stages defined in the .gitlab-ci.yml file:

variables:
  CONTAINER_IMAGE: registry.gitlab.com/lucj/sophia.events
  DOCKER_HOST: tcp://docker:2375

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:8.12.0-alpine
  script:
    - npm i
    - npm test

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker image build -t $CONTAINER_IMAGE:$CI_BUILD_REF -t $CONTAINER_IMAGE:latest .
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker image push $CONTAINER_IMAGE:latest
    - docker image push $CONTAINER_IMAGE:$CI_BUILD_REF
  only:
    - master

deploy:
  stage: deploy
  image: alpine
  script:
    - apk add --update curl
    - curl -XPOST $WWW_WEBHOOK
  only:
    - master
  • the test stage runs some pre-checks, ensuring the events.json file is well formed and that no images are missing
  • the build stage builds the image and pushes it to the GitLab registry
  • the deploy stage triggers the update of the service via a webhook sent to Portainer. The WWW_WEBHOOK variable is defined in the CI/CD settings in the project page on GitLab.com.
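The exact pre-checks live in the project's test suite; as a hypothetical sketch of the "well formed" part (the field names come from the events.json snippet earlier, everything else is assumed), such a check could verify that each event carries the fields the template needs:

```javascript
// Hypothetical sketch of the kind of pre-check the test stage could run:
// verify that every event carries the fields the mustache template relies on.
const REQUIRED_FIELDS = ['title', 'desc', 'date', 'ts', 'link'];

function validateEvents(events) {
  const errors = [];
  events.forEach((evt, i) => {
    REQUIRED_FIELDS.forEach((field) => {
      if (typeof evt[field] !== 'string' || evt[field].length === 0) {
        errors.push(`event ${i}: missing or empty "${field}"`);
      }
    });
  });
  return errors; // an empty array means the events are well formed
}
```

In the real project this would parse events.json and be wired into npm test; here it is only meant to show the shape of such a check.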


  • this runner runs in a container on the swarm. We could have used a shared runner (publicly available runners which share their time between the jobs of different projects hosted on GitLab). But, as the runner needs to have access to the Portainer endpoint (to send the webhook), and because I did not want Portainer to be accessible from the outside, having the runner inside the cluster is more secure.
  • also, because the runner runs in a container, it sends the webhook to the IP address of the docker0 bridge network in order to reach Portainer through port 9000, which it exposes on the host. The webhook thus has the following format: […]a7-4af2-a95b-b748d92f1b3b

The deployment process

Deploying a new version of the site follows the workflow below:

  1. A developer pushes some changes to GitLab. The changes basically involve one or several new events in the events.json file, plus some additional sponsors’ logos.

  2. The GitLab runner performs the actions defined in .gitlab-ci.yml.

  3. The GitLab runner calls the webhook defined in Portainer.

  4. Upon receiving the webhook, Portainer deploys the new version of the www service. It does so by calling the Docker Swarm API. Portainer has access to the API because the /var/run/docker.sock socket is bind-mounted when it is started.

If you want to know more about the usage of this unix socket, you might be interested in this previous article.

  5. Users can then see the new version of the web site.


Let’s change a couple of things in the code and commit / push those changes.

$ git commit -m 'Fix image'

$ git push origin master

The screenshot below shows the pipeline triggered by this commit, within the project page on GitLab.com.

On the Portainer side, the webhook was received and the service update was performed. We cannot see it clearly here, but one replica was updated first, leaving the web site accessible through the second one; a couple of seconds later, the second replica was updated in turn.


Even for this tiny project, setting up a CI/CD pipeline was a good exercise, especially to get more familiar with GitLab (which had been on my to-learn list for a long time), a really good and professional product. It was also the opportunity to play with the long-awaited webhook feature available in the latest version of Portainer (1.19.2). Also, for a side project like this one, using Docker Swarm was a no-brainer: so cool and easy to use…