With today’s tools, setting up a simple CI/CD pipeline is not difficult. Doing so even for a side project is a great way to learn many things. Docker, GitLab, and Portainer are some great components to use for this setup.
The sample project
As an organizer of technical events in the Sophia-Antipolis area (southern France), I was often asked if there was a way to know about all the upcoming events (the meetups, the JUGs, the ones organized by the local associations, etc.). As there was not a single place listing them all, I came up with https://sophia.events, a simple web page which tries to keep such a list of events up to date. This project is available on GitLab.
Disclaimer: this is a simple project, but its complexity is not the point here. The components of the CI/CD pipeline we will detail can be used in pretty much the same way for far more complicated projects. They are a particularly nice fit for microservices, though.
Quick look into the code
Basically, to keep things dead simple, there is an events.json file in which each new event is added. Part of this file is provided in the snippet below (sorry for the part in French).
A mustache template is applied to this file to generate the final web assets.
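The snippet is not reproduced here, but to give an idea, here is a hypothetical events.json entry (the real field names may differ), together with the kind of well-formedness check the pipeline's test stage can rely on:

```shell
# Hypothetical events.json entry; the actual file's fields are not shown
# in this article, so the names below are illustrative only.
cat > events.json <<'EOF'
{
  "events": [
    {
      "title": "Docker Meetup",
      "date": "2018-11-21",
      "location": "Sophia-Antipolis",
      "url": "https://www.meetup.com/Docker-Sophia-Antipolis/"
    }
  ]
}
EOF

# A simple well-formedness check: does the file parse as JSON?
python3 -m json.tool events.json > /dev/null && echo "events.json is well formed"
```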
Docker multi-stage build
Once the web assets are generated, they are copied into an nginx image, which is the one deployed on the target machine.
The build is then done in 2 parts, thanks to the multi-stage build:
- generation of the assets
- creation of the final image containing the assets
This is the Dockerfile used for the build:
# Generate the assets
FROM node:8.12.0-alpine AS build
COPY . /build
WORKDIR /build
RUN npm i
RUN node clean.js
RUN ./node_modules/mustache/bin/mustache events.json index.mustache > index.html

# Build the final image used to serve them
FROM nginx:1.14.0
COPY --from=build /build/*.html /usr/share/nginx/html/
COPY events.json /usr/share/nginx/html/
COPY css /usr/share/nginx/html/css
COPY js /usr/share/nginx/html/js
COPY img /usr/share/nginx/html/img
In order to test the generation of the site, just clone the repo and run the test.sh script. It will then create an image and run a container out of it.
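The test.sh script itself is not reproduced here; under the assumption that it simply builds the image and starts a throwaway container (the image tag is taken from the build output below), it could be sketched as:

```shell
#!/bin/sh
# Build the image and run a disposable container on a random host port
IMAGE=registry.gitlab.com/lucj/sophia.events:latest
docker build -t $IMAGE .
CONTAINER=$(docker run -d -P $IMAGE)
# -P published container port 80 on a random host port; retrieve it
PORT=$(docker port $CONTAINER 80/tcp | cut -d: -f2)
echo "=> web site available on http://localhost:$PORT"
```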
$ git clone git@gitlab.com:lucj/sophia.events.git
$ cd sophia.events
$ ./test.sh
Sending build context to Docker daemon 2.588MB
Step 1/12 : FROM node:8.12.0-alpine AS build
Step 2/12 : COPY . /build
Step 3/12 : WORKDIR /build
---> Running in 5222c3b6cf12
Removing intermediate container 5222c3b6cf12
Step 4/12 : RUN npm i
---> Running in de4e6182036b
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN firstname.lastname@example.org No repository field.
added 2 packages from 3 contributors and audited 2 packages in 1.675s
found 0 vulnerabilities
Removing intermediate container de4e6182036b
Step 5/12 : RUN node clean.js
---> Running in f4d3c4745901
Removing intermediate container f4d3c4745901
Step 6/12 : RUN ./node_modules/mustache/bin/mustache events.json index.mustache > index.html
---> Running in 05b5ebd73b89
Removing intermediate container 05b5ebd73b89
Step 7/12 : FROM nginx:1.14.0
Step 8/12 : COPY --from=build /build/*.html /usr/share/nginx/html/
---> Using cache
Step 9/12 : COPY events.json /usr/share/nginx/html/
---> Using cache
Step 10/12 : COPY css /usr/share/nginx/html/css
---> Using cache
Step 11/12 : COPY js /usr/share/nginx/html/js
---> Using cache
Step 12/12 : COPY img /usr/share/nginx/html/img
Successfully built e50bf7836d2f
Successfully tagged registry.gitlab.com/lucj/sophia.events:latest
=> web site available on http://localhost:32768
Using the URL provided at the end of the output, we can access the web page:
The target environment
A virtual machine provisioned on a cloud provider
As you have probably noticed, this web site is not critical (only a few dozen visits a day), and as such it only runs on a single virtual machine. This one was created with Docker Machine on Exoscale, a great European cloud provider.
BTW, if you want to give Exoscale a try, ping me and I could provide some 20€ vouchers.
Docker daemon in swarm mode
The Docker daemon running on the above VM is configured to run in swarm mode, which makes it possible to use the stack, service, config, and secret primitives, as well as the great (and easy-to-use) orchestration capabilities of Docker Swarm.
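For reference, turning a freshly provisioned daemon into a single-node swarm is a one-liner (the advertise address below is a placeholder, not the actual VM's IP):

```shell
# Initialize a single-node swarm; this node becomes the (only) manager
docker swarm init --advertise-addr <VM_PUBLIC_IP>
```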
The application running as a Docker stack
The following file defines the service which runs the nginx web server containing the web assets.
- The image is in the private registry hosted on gitlab.com (no Docker Hub involved here).
- The service is in replicated mode with 2 replicas, meaning that 2 tasks / containers of the service are running at the same time. A VIP (virtual IP address) is associated with the service by Swarm, so each request targeting the service is load balanced between the 2 replicas.
- Each time an update of the service is done (to deploy a new version of the web site), one of the replicas is updated and then the second one 10 seconds later. This ensures the web site is still available during the update process. We could also have used a rollback strategy but it’s not needed at this point.
- The service is attached to the external proxy network so the TLS termination (running in another service deployed on the swarm, but outside of this project) can send requests to the www service.
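The file itself is not reproduced above; based on the four points just listed, a sketch of what sophia.yml could look like follows (the repository's actual file may differ slightly):

```yaml
version: "3.3"
services:
  www:
    image: registry.gitlab.com/lucj/sophia.events:latest
    networks:
      - proxy
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 1   # update one replica at a time
        delay: 10s       # wait 10 seconds between the two updates
networks:
  proxy:
    external: true
```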
This stack is run with the following command:
$ docker stack deploy -c sophia.yml sophia_events
Portainer to manage them all
Portainer is a great web UI which allows you to manage Docker hosts and Docker Swarm clusters very easily. Below is a screenshot of the Portainer interface listing the stacks available in the swarm.
The current setup shows 3 stacks:
- Portainer itself
- sophia_events which contains the service running our web site
- tls, the TLS termination
If we list the details of the www service, which is in the sophia_events stack, we can see that the Service webhook is activated. This feature, available since Portainer 1.19.2 (the latest version to date), allows us to define an HTTP POST endpoint that can be called to trigger an update of the service. As we will see later on, the GitLab runner is in charge of calling this webhook.
Note: as you can see from the screenshot, I access the Portainer UI from localhost:8888. As I do not want to expose the Portainer instance to the outside world, access is done through an ssh tunnel, opened with the following command:
ssh -i ~/.docker/machine/machines/labs/id_rsa -NL 8888:localhost:9000 $USER@$HOST
Following this, all requests targeting the local machine on port 8888 are forwarded to port 9000 on the virtual machine through ssh. 9000 is the port Portainer listens on on the VM, but this port is not open to the outside world, as it is blocked by a Security Group in the Exoscale configuration.
Note: in the above command, the ssh key used to connect to the VM is the one generated by Docker Machine during the VM creation.
The GitLab runner
A GitLab runner is a process in charge of executing the actions defined in the .gitlab-ci.yml file. For this project, we define our own runner, which runs as a container on the VM.
The first step is to register the runner providing a couple of options:
CONFIG_FOLDER=/tmp/gitlab-runner-config
docker run --rm -t -i \
-v $CONFIG_FOLDER:/etc/gitlab-runner \
gitlab/gitlab-runner register \
--executor "docker" \
--docker-image docker:stable \
--url "https://gitlab.com/" \
--registration-token "$PROJECT_TOKEN" \
--description "Exoscale Docker Runner" \
--tag-list "docker"
Among those options, PROJECT_TOKEN is provided from the project page on GitLab.com and is used to register external runners:
When the runner is registered, we need to start it:
CONFIG_FOLDER=/tmp/gitlab-runner-config
docker run -d \
--name gitlab-runner \
--restart always \
-v $CONFIG_FOLDER:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner
Once it’s registered and started, the runner is listed in the project page on GitLab.com:
This runner will then receive some work to do each time a new commit is pushed to the repository. It sequentially performs the test, build and deploy stages defined in the .gitlab-ci.yml file:
- The test stage runs some pre-checks, ensuring the events.json file is well formed and that no images are missing.
- The build stage builds the image and pushes it to the GitLab registry.
- The deploy stage triggers the update of the service via a webhook sent to Portainer. The WWW_WEBHOOK variable is defined in the CI/CD settings in the project page on GitLab.com.
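The .gitlab-ci.yml file is not shown here; a sketch consistent with the three stages described could look like the following (job scripts and images are assumptions, not the repository's exact content):

```yaml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:8.12.0-alpine
  script:
    - npm i
    - node clean.js   # hypothetical pre-checks on events.json
  tags:
    - docker

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/lucj/sophia.events .
    - docker push registry.gitlab.com/lucj/sophia.events
  tags:
    - docker

deploy:
  stage: deploy
  image: alpine:3.8
  script:
    - apk add --no-cache curl
    - curl -XPOST $WWW_WEBHOOK
  tags:
    - docker
```

The `tags: docker` entries match the `--tag-list "docker"` used when registering the runner, so the jobs are routed to it.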
This runner runs in a container on the swarm. We could have used a shared runner (publicly available runners which share their time between the jobs of different projects hosted on GitLab) but, as the runner needs access to the Portainer endpoint (to send the webhook), and because I did not want Portainer to be accessible from the outside, having the runner inside the cluster is more secure.
Also, because the runner runs in a container, it sends the webhook to the IP address of the docker0 bridge network in order to reach Portainer through port 9000, which Portainer exposes on the host. The webhook thus has the following format: http://172.17.0.1:9000/api[…]a7-4af2-a95b-b748d92f1b3b
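In other words, the deploy step boils down to an HTTP POST like the one below (the webhook id shown is a placeholder, not the project's real one):

```shell
# 172.17.0.1 is the host's address on the docker0 bridge, where Portainer's
# port 9000 is reachable from the runner; the UUID is a placeholder
curl -XPOST http://172.17.0.1:9000/api/webhooks/00000000-0000-0000-0000-000000000000
```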
The deployment process
The update of a new version of the site follows the workflow shown below:
1. A developer pushes some changes to GitLab. The changes basically involve one or several new events in the events.json file, plus some additional sponsors' logos.
2. The GitLab runner performs the actions defined in .gitlab-ci.yml.
3. The GitLab runner calls the webhook defined in Portainer.
4. Upon receiving the webhook, Portainer deploys the new version of the www service. It does so by calling the Docker Swarm API. Portainer has access to the API as the socket /var/run/docker.sock is bind-mounted when it is started.
If you want to know more about the usage of this unix socket, you might be interested in this previous article.
5. Users can then see the new version of the web site.
Let’s change a couple of things in the code and commit / push those changes.
$ git commit -m 'Fix image'
$ git push origin master
The screenshot below shows the pipeline triggered by the commit, as seen in the project page on GitLab.com.
On the Portainer side, the webhook was received and the service update was performed. We cannot see it clearly here, but one replica was updated first, leaving the web site accessible through the second replica. A couple of seconds later, the second replica was updated.
Even for this tiny project, setting up a CI/CD pipeline was a good exercise, especially to get more familiar with GitLab (which had been on my To Learn list for a long time). It's an excellent, professional product. It was also a great opportunity to play with the long-awaited webhook feature available in the latest version of Portainer (1.19.2). Also, for a side project like this one, using Docker Swarm was a no-brainer: it's so cool and easy to use!