Dockerized GitLab CI: Register Docker Executor as a GitLab Runner

Lal Zada
10 min read · Sep 12, 2024


Source: https://unsplash.com/photos/a-large-ship-in-the-water-2JNNpq4nGls

Continuing the Dockerized GitLab series, in this post I'll show you how to register a Docker executor as a GitLab runner with your GitLab server for building, testing, and deploying your dockerized projects.

In our first post, we set up a GitLab server using barebone docker commands. In the next post, we turned those commands into a docker compose file for better usability and easier management of multiple containers.

In our last post, we added GitLab Runner as a container, integrated it with the GitLab server, and registered a Shell executor to run our project pipelines.

Shell pipeline executor from our last post.

In this post, we are going to register a Docker executor inside our GitLab Runner service so we can build, test, and deploy our dockerized projects.

Before registering our Docker executor, we need a minor update to our docker-compose.yml file: mounting the docker.sock file from our host machine into the GitLab Runner container.

Your updated docker-compose.yml should look like this.

version: '3.8'
services:

  gitlab-server:
    image: 'gitlab/gitlab-ce:latest'
    container_name: gitlab-server
    environment:
      GITLAB_ROOT_EMAIL: "admin@buildwithlal.com"
      GITLAB_ROOT_PASSWORD: "Abcd@0123456789"
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://localhost:8000'
        nginx['listen_port'] = 8000
    ports:
      - '8000:8000'
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/data:/var/opt/gitlab

  gitlab-runner:
    image: gitlab/gitlab-runner:alpine
    container_name: gitlab-runner
    network_mode: 'host'

    # new changes for mounting docker.sock from host to container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Why mount docker.sock into the GitLab Runner?

When you install Docker on a machine, two different programs come with it:
1. Docker Server
2. Docker Client

The Docker server receives commands over a socket (either over a network or through a file).

The Docker client communicates with the Docker server, sending it messages such as create a container, start a container, stop a container, and so on.

When the Docker client and server are running on the same computer, they can connect through a special file called a socket. And since they can communicate through a file, and Docker can efficiently share files between the host and containers, it means you can run the client inside Docker itself.

The Docker daemon can listen for Docker Engine API requests via three different types of socket: unix, tcp, and fd. By default, a unix domain socket (or IPC socket) is created at /var/run/docker.sock when you install Docker.

We need to mount docker.sock from our host machine into the GitLab Runner container. When we want to execute a job using Docker, GitLab Runner can create a container on our host machine through this docker.sock file, run the job, and terminate the container once the job is done.

If your Docker server and client are on different machines, they can communicate over TCP, but since we have both on the same host machine, we communicate through the socket file, i.e. docker.sock.
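As a quick illustration (not part of the original setup), you can talk to the Docker engine directly over that socket; the /version endpoint below is part of the Docker Engine API, and the docker CLI reaches the same engine through the same socket under the hood:

# Query the Docker Engine API directly over the unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# The docker CLI does the same thing behind the scenes
docker version --format '{{.Server.Version}}'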

Once you have updated docker-compose.yml, hit CTRL+C to stop running containers and then run

docker compose up --build --force-recreate
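Once the stack is back up, a quick sanity check (optional, not part of the original flow) confirms that the host's socket is actually visible inside the runner container:

# The host's Docker socket should now be mounted inside the gitlab-runner container
docker compose exec gitlab-runner ls -l /var/run/docker.sock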

Back to the Registration Part

Go to your project's runners under

Repository → Settings → CI/CD → Runners

at this URL: http://localhost:8000/root/build-with-lal/-/settings/ci_cd

build-with-lal in the URL is the repository name I created earlier.

The existing runner is the Shell executor we created in our last post.

Click on the New project runner button.

Fill out the runner details. Make sure to add tags so we can direct specific jobs to a specific runner using these tags.

Click on the Create runner button, which will redirect you to this page.

Copy the gitlab-runner register … command from Step 1.

Log in to your GitLab Runner container using docker:

docker compose exec -it gitlab-runner /bin/bash

Once logged in to the GitLab Runner container, run the command from Step 1 above. Make sure to add the following flags to the end of the gitlab-runner register … command:

gitlab-runner register \
  --url http://localhost:8000 \
  --token glrt-AMG2ra8WJDPkbx-HDAzc \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
  --docker-network-mode 'host'

--docker-volumes /var/run/docker.sock:/var/run/docker.sock

Adding --docker-volumes /var/run/docker.sock:/var/run/docker.sock mounts the Docker socket file into the executor (job) container, so when we run Docker CLI commands inside a job, the CLI has access to the Docker engine on the host machine via the docker.sock file.

--docker-network-mode 'host'

Adding --docker-network-mode 'host' puts the executor container on the host network, so the executor can reach the GitLab server via localhost when cloning the repository.

We put a container on the host network (the PC's network) in two different places.

  1. First, in the docker-compose.yml file, where we put the GitLab Runner container on the host network (the PC's network) so the GitLab server can communicate with GitLab Runner over localhost for managing pipelines.
  2. Second, when registering our Docker executor. This comes in handy when a pipeline runs on the Docker executor: the job pulls the Docker image and then tries to clone the project repository inside that executor container. At that point the executor accesses the repository over localhost, so the executor container must be on the same network as the GitLab server.
  • When asked for the GitLab instance URL, leave it as is by hitting Enter, unless your GitLab server and runner are on different machines.
  • Enter a name of your choice for the runner.
  • Since we want to register a Docker executor, enter docker when asked for the executor option.
  • When selecting the Docker executor, you also need to set a default Docker image, which is used whenever a job in your .gitlab-ci.yml does not specify one. I have set the default image to python:3.10-alpine, which you can override inside your .gitlab-ci.yml file. You can verify the registration with the quick check shown after this list.
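Once you have answered all of the prompts, you can double-check the result; gitlab-runner list is a built-in command that prints the runners stored in the runner's config file:

# Still inside the gitlab-runner container: list the registered runners
# (they are stored in /etc/gitlab-runner/config.toml)
gitlab-runner list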

After you fill out all of the above details, your Docker runner should be registered and the runner page should look like this.

If you go back to your project runners page under

Repository → Settings → CI/CD → Runners

you should see a new runner alongside the Shell runner we created in the previous post.

Test Pipeline using Shell and Docker Executor

Let's try adding a pipeline with three different jobs:

  1. Shell Executor
  2. Docker Executor with the default Docker image, i.e. python:3.10-alpine
  3. Docker Executor with the Docker image overridden to docker:24.0.5

build with shell executor:
  stage: build
  tags:
    - shell
  script:
    - date # print current date
    - cat /etc/os-release # print os version for Linux

build with docker executor:
  stage: build
  tags:
    - docker
  image: docker:24.0.5
  script:
    - docker info

build with docker executor default image:
  stage: build
  tags:
    - docker
  script:
    - python --version

Your pipeline editor should look like this under Build → Pipeline Editor

Adding tags to each job makes sure the right runner picks up each job. We add tags: shell to the first job because we want that job to be run by our Shell executor, while the remaining two jobs should be executed by the Docker executor, so they get tags: docker.

In the second job, with image: docker:24.0.5, you can use the Docker CLI to build your project with Docker and push your Docker image to a container registry.

Commit these changes and switch to Jobs under Build (Build → Jobs).

You should see all your jobs running or already passed.

Go to the details of each job and you will notice that each one was picked up and executed by the relevant runner.

The job executed by the Shell executor, printing the current date and OS details.

The job executed by the Docker runner using the default Docker image, i.e. python:3.10-alpine.

The job executed by the Docker runner with the default image overridden to docker:24.0.5 in order to use the Docker CLI.

If you look at the output of the docker info command, the Docker server details come from my host machine, where the actual Docker engine is installed. The reason we used the docker image inside the .gitlab-ci.yml file is to have the Docker CLI available. Any instruction you give the Docker CLI is passed to the Docker engine on the host machine through our mounted socket file /var/run/docker.sock.
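To make that visible in a job log, here is a small optional addition (not in the original pipeline) that prints only the server-side fields; the values describe the host engine, not the job container:

# These values come from the host's Docker engine, reached via /var/run/docker.sock
docker info --format 'server version: {{.ServerVersion}}, OS: {{.OperatingSystem}}'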

While a job is being executed by the Docker executor, you can monitor the containers on your machine: you should see some new containers alongside your existing ones. These new containers come from the Docker executor and are terminated once the job is completed.

These job containers are siblings of your GitLab Runner container, not children of it, because we have mounted the docker.sock file from the host machine, so all of the containers are managed by the host machine's Docker engine.

Even if we build and create containers inside our pipeline job, those containers are actually created on our host Docker engine, because the same docker.sock is used inside the Docker executor and its jobs.

Let's update our pipeline to create some containers inside our job.

build with shell executor:
  stage: build
  tags:
    - shell
  script:
    - date # print current date
    - cat /etc/os-release # print os version for Linux

build with docker executor:
  stage: build
  tags:
    - docker
  image: docker:24.0.5

  # new changes for adding dummy containers
  script:
    - docker run -d --rm --name nested-container1-in-pipelinejob alpine sleep 20
    - docker run -d --rm --name nested-container2-in-pipelinejob alpine sleep 20

build with docker executor default image: # default python image
  stage: build
  tags:
    - docker
  script:
    - python --version
    - sleep 10

Once you commit the above changes and the pipeline runs, you can monitor the Docker containers on your host machine by running

docker ps

The output should look like this.

You can see that both the executor container and the containers we created inside our job are created on the host Docker engine.
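If the list is noisy, you can narrow it down (optional) to just the containers started from inside the job by filtering on the names we gave them:

# Show only the dummy containers created from inside the pipeline job
docker ps --filter "name=nested-container"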

Known issues with Docker socket binding

There are some known issues with Docker socket binding and managing all containers with a single Docker engine, highlighted in the GitLab documentation here.

  • If a pipeline job ran docker rm -f $(docker ps -a -q), it would remove the GitLab server and runner containers, and maybe other critical containers as well.
  • If your tests create containers with specific names, they might conflict with each other.

Sharing files and directories from the source repository into containers might not work as expected. Volume mounting is done in the context of the host machine, not the build container.
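A hypothetical job to illustrate that last point (the job name and the /app path are just for demonstration): $PWD expands to the working directory inside the build container, but the bind mount is resolved by the host's Docker engine, so /app will most likely not contain the repository files checked out for the job.

mount gotcha example:
  stage: build
  tags:
    - docker
  image: docker:24.0.5
  script:
    # The host engine resolves "$PWD" as a path on the host, not inside the build container
    - docker run --rm -v "$PWD":/app alpine ls /app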

Socket binding is not the only way to use the Docker executor with GitLab Runner. There is another approach called Docker-in-Docker, aka dind, which we are going to discuss in the next post.

Build and Deploy your project using Docker Executor

Now you can add a Dockerfile to your repository and try to build your project using Docker.

Dockerfile

FROM python:3.10-alpine

RUN python --version

Update .gitlab-ci.yml to build the project using Docker.
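A minimal sketch of what the build job could look like, assuming the docker-tagged runner registered above (the image tag build-with-lal is just an example name):

build project with docker:
  stage: build
  tags:
    - docker
  image: docker:24.0.5
  script:
    # Build the Dockerfile at the repository root; the image ends up on the host Docker engine
    - docker build -t build-with-lal:latest .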

Build output

We tried to build our project using docker build .

In the same way, you can log in to a container registry using docker login and then push your built Docker image using docker push.
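As a sketch, a push job could look like the following, assuming a container registry is available and the CI_REGISTRY* variables are populated (GitLab predefines them only when its container registry is enabled; otherwise store your own registry URL and credentials as CI/CD variables):

push image to registry:
  stage: deploy
  tags:
    - docker
  image: docker:24.0.5
  script:
    # Authenticate, build, and push using the registry variables described above
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"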

As we discussed earlier, using /var/run/docker.sock creates containers directly on the host Docker engine, which can lead to the issues above. To avoid using /var/run/docker.sock inside a CI job, there is another approach called Docker-in-Docker, where a job's containers are created as children of another service container (the Docker-in-Docker service) instead of directly on the host Docker engine.

Source Code on GitHub

Watch it on YouTube

Thank you for making it till the end 🎉


Lal Zada

Tech articles every week, from a software engineer with over a decade of experience building apps, infrastructure, and CI/CD pipelines.