Docker Compose and GitLab

A walkthrough of how to deploy your app from GitLab using Docker Compose

Vitaly Panyukhin
May 20, 2020

There are two basic options for orchestrating services in production: use the Docker stack of technologies (Docker Compose, Swarm) or use an additional level of abstraction kindly provided by Google: Kubernetes (K8s).

Looking ahead, I would say that Kubernetes is the most advanced orchestrator.

Meanwhile, the drawback of Kubernetes is that you must already have a configured K8s cluster, which usually requires maintenance, or additional costs for a managed version. The team also has to be trained in K8s primitives and cluster operations. Usually dedicated DevOps resources must be involved, while a dev team can handle deployment via Docker Compose on its own.

So if you have just virtual machines and no managed services in your datacenter, Docker Compose might be a good option.

This article is a manual on how to build infrastructure for software development using GitLab for DevOps and Docker Compose as the orchestrator.

When we push code to the repository, GitLab initiates a CI/CD pipeline, which assigns the jobs described in the .gitlab-ci.yml file to a dedicated runner. The runner executes each job in a separate container. Since we want to use Docker Compose to build, deploy and orchestrate our stack, the image used by the runner should have Docker Compose already installed. I built an image especially for this purpose and uploaded it to Docker Hub. Then, during each pipeline stage, the runner executes shell commands to build — deploy to stage — release — deploy to prod. Let’s dive into each CI/CD pipeline stage:

  • build — building an image from the files pushed to the repository; tagging it as stage; pushing the image to the registry
  • deploy to stage — pulling the stage image from the registry on the stage host

$ docker-compose -H "ssh://<user>@<host>" pull

passing the stage set of variables from the GitLab vault to the container’s OS variables; updating the running containers with the new images

  • release — building the production image; tagging it as prod; pushing it to the registry
  • deploy to prod — pulling the image from the registry to the production host; passing the production set of variables from the GitLab vault to the container’s OS variables; updating the running containers with the new images

Let’s start step by step with making project, configuring GitLab and Server to see how it all works together.

Project structure

  1. Open any editor you like and create the following file structure:
.
├── app
│ ├── Dockerfile
│ ├── app.py
│ └── requirements.txt
├── .gitlab-ci.yml
└── docker-compose.yml

2. The app.py file could use any framework your app is powered by. Something like:
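For instance, a minimal sketch using only the Python standard library (the framework choice is open; the NOT_SECRET_VARIABLE name is an assumption matching the pipeline variables shown further below):

```python
# app.py — a minimal HTTP service; swap in Flask/Django/etc. as needed
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import os

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # report a value that Docker Compose injects via the .env file
        body = json.dumps({
            "status": "ok",
            "env": os.environ.get("NOT_SECRET_VARIABLE", "unset"),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep container logs quiet in this sketch

if __name__ == "__main__":
    # bind to all interfaces so the container port mapping works
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```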

3. Here is a listing for the docker-compose.yml file:
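A minimal sketch — the service name, port, and registry path are assumptions to replace with your own:

```yaml
version: "3.7"

services:
  app:
    # CI_REGISTRY_IMAGE is provided by GitLab CI; the stage/prod tag
    # is set by the pipeline. Replace the fallback with your own path.
    image: ${CI_REGISTRY_IMAGE:-myapp}:${TAG:-stage}
    build: ./app
    ports:
      - "8000:8000"
    # this is how the variables written to .env reach the container
    env_file:
      - .env
    restart: always
```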

4. And Dockerfile:
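A typical Python sketch (the base image and port are assumptions):

```dockerfile
FROM python:3.8-slim

WORKDIR /app

# install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```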

GitLab project configuration

  1. Create Variables and store your secrets there

2. Create pre-prod and master branches. Set up pre-prod as the default branch. Restrict pushes to the master branch and allow only merge requests accepted by maintainers.

3. Create a deploy token to allow pulling the docker image or code from the private GitLab repository/registry.

4. Finally, let’s build the .gitlab-ci.yml file to define how we will build and deploy our services. As the base image for executing all jobs we will use a basic docker image with docker-compose already installed.
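The four stages described above can be sketched as a .gitlab-ci.yml skeleton — the runner image, the $STAGE_HOST/$PROD_HOST variables, and the tag names are assumptions to adapt to your project:

```yaml
# any image with docker-compose preinstalled will do
image: docker/compose:1.25.5

stages:
  - build
  - staging
  - release
  - deploy

before_script:
  # log in to the GitLab registry with the job token
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker-compose build
    - docker-compose push

deploy-to-stage:
  stage: staging
  script:
    - docker-compose -H "ssh://gitlab@$STAGE_HOST" pull
    - docker-compose -H "ssh://gitlab@$STAGE_HOST" up -d

release:
  stage: release
  only:
    - master
  script:
    - docker tag $CI_REGISTRY_IMAGE:stage $CI_REGISTRY_IMAGE:prod
    - docker push $CI_REGISTRY_IMAGE:prod

deploy-to-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose -H "ssh://gitlab@$PROD_HOST" pull
    - docker-compose -H "ssh://gitlab@$PROD_HOST" up -d
```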

And this is exactly where the magic happens: supplying the relevant values for the production and stage environments from the GitLab vault to OS environment variables. Here is the diagram:

The assignment of the relevant variable values for each environment happens this way in the .gitlab-ci.yml file:


variables:
  STAGE_NOT_SECRET_VARIABLE: some_value
  PROD_NOT_SECRET_VARIABLE: another_value

deploy-to-stage:
  stage: staging
  script:
    - echo "NOT_SECRET_VARIABLE=$STAGE_NOT_SECRET_VARIABLE" >> .env
    - echo "SECRET_VARIABLE=$STAGE_SECRET_VARIABLE_FROM_VAULT" >> .env

deploy-to-prod:
  stage: deploy
  script:
    - echo "NOT_SECRET_VARIABLE=$PROD_NOT_SECRET_VARIABLE" >> .env
    - echo "SECRET_VARIABLE=$PROD_SECRET_VARIABLE_FROM_VAULT" >> .env

The values can be sourced from the GitLab vault or from the variables written in the .gitlab-ci.yml file. Docker Compose then reads the values from the .env file and passes them to the OS variables of the relevant containers. After that, they are accessible from the app.

5. Now, to understand the full process, we need to dive into GitLab roles. Even the free version provides two basic roles: Developer and Maintainer.

  • Developers can push code to the repository, which triggers the CI/CD pipeline that builds images and deploys them to the production or stage servers. Developers can also initiate merge requests to include code from the pre-prod branch into master. We may restrict Developers from pushing code directly to the master branch, which allows us to perform code review before including code in the release image. We have already configured this behaviour in the GitLab protected branches settings.
  • Maintainers can modify every setting of a particular project, including the creation of variables and the modification of protected branch permissions. In our case, maintainers will accept the merge requests that let the pipeline update the stage environment and deploy a new release to production.

Server configuration

The next step is to log in to the VM and execute the following commands to install the required tools, Docker and Docker Compose, and create a “gitlab” user for deployment via GitLab. If we have separate servers for the production and test environments, we need to run the commands on both.

# create a deployment user
$ sudo adduser gitlab
$ sudo passwd gitlab
$ sudo usermod -aG wheel gitlab
# let the user log in with a key (run ssh-keygen and ssh-copy-id
# on the machine the runner will connect from)
$ ssh-keygen
$ ssh-copy-id gitlab@10.10.10.1
# install docker
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker gitlab
$ sudo systemctl start docker
# install docker-compose
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
# allow multiple SSH sessions (deployment performs lots of non-blocking operations)
$ sudo vi /etc/ssh/sshd_config
  # set: MaxSessions 128
$ sudo systemctl restart sshd

So that’s all folks! Thanks for reading this article to the end and good luck with setting up reliable infrastructure.
