GitLab CI/CD for Django on AWS.

Setting up a local GitLab repository on the same machine as a runner, for deployment to AWS Elastic Beanstalk.

A tutorial for a specific use case: running GitLab in Docker to host a local GitLab repository on an Ubuntu machine. The same machine can also be used for CI/CD by running a GitLab Runner, which starts a local staging server running Django and deploys the project to AWS when specified.

Photo by Jongsun Lee on Unsplash

Setting up a GitLab repository

There are a few options for setting up your own GitLab repository on Ubuntu: you can install it directly on the host or run it from a Docker image. We will use Docker, as it keeps the installation isolated and easy to manage.

Installing Docker

First, as already mentioned, we must install Docker so we can run an image containing everything GitLab needs. If the docker command does not work, please refer here for more details on how to install Docker.
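For reference, a minimal install sketch on Ubuntu using Docker's official convenience script (only run it if you are comfortable executing a downloaded script with sudo):

```shell
# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optionally let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Verify the installation
docker --version
```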

Running a GitLab image

With Docker installed, we can run the Docker image that contains everything a GitLab server needs to function. The following command worked for me:

sudo docker run --detach \
  --hostname <insert your ip> \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

This sets up a GitLab Docker container that runs on the specified hostname and ports, and restarts it automatically when the machine reboots. More details about the command can be found here.

With the following command, you can check your running Docker containers and verify that the GitLab container started correctly.

docker ps

It is very important that all the parameters are set correctly and that you understand what each of them does; e.g. an incorrectly set hostname can later produce errors about the host not being found.

Testing it

If everything is set up correctly, GitLab will be reachable on the specified IP from any machine inside the network. You should see a login page like:

Example of a GitLab instance on a local IP.

Congrats! Now you have your own GitLab instance!

Here you have to register a new account, as this instance is separate from the official one. If we would like to access this GitLab instance from outside the local network, some port forwarding must be configured, and preferably a static IP obtained and combined with a DNS service.

Setting up Continuous Integration and Continuous Deployment

GitLab is a great tool for version control, but lately it also excels at DevOps tasks such as CI/CD. It combines the two in one platform and makes both easy to implement.

Set up your own Runner

A GitLab Runner is a separate service, usually run in a Docker container, that executes the commands defined in a .gitlab-ci.yml file in your repository.

Let’s first mention that we do not have to set up our own GitLab Runner; by default, jobs run on shared Runners. However, this comes at a cost: shared Runners are not always available, so you either wait for other users’ jobs to finish or literally pay to use them.

How to install a runner on an Ubuntu machine is best described in the official documentation.

After we have created a Runner, we must register it so our GitLab project can use it. This is again best described in the official documentation. When following the commands in the docs, it is important to choose the Docker executor, as that is what we will use later for execution. When adding tags to the Runner, make sure they match your project’s tags, as the Runner might not pick up jobs otherwise; in that case it is best not to assign tags at all.
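As a sketch, a non-interactive registration with the Docker executor could look like this (the URL, token, description and tags are placeholders for your own values):

```shell
sudo gitlab-runner register \
  --non-interactive \
  --url "http://<your gitlab ip>/" \
  --registration-token "<project registration token>" \
  --executor "docker" \
  --docker-image "python:3.5" \
  --description "local-staging-runner" \
  --tag-list "staging,deploy"
```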

If registration is successful, we must first disable shared Runners for the specific project:

You can find this under Settings / CI/CD / Runners for a specific project.

Now, if everything was successful, we can see a working Runner and enable it:

Congrats, now you have a Runner to run your deployment scripts!


When I finished setting up Runners, I came across a problem: I got this error when trying to use docker-compose in a staging script:

Possible error with Docker not connecting even if installed before.

I checked, and I had already installed docker-compose in a previous command, so it should work, right? No! The solution is to modify the config.toml file that configures the Runners, and I found it here:
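In my understanding, the usual fix for this class of error (the job container cannot reach a Docker daemon) is to give the Runner’s job containers access to the host’s Docker daemon in /etc/gitlab-runner/config.toml. A sketch, with an illustrative runner name; adjust to your own config:

```toml
[[runners]]
  name = "local-staging-runner"
  executor = "docker"
  [runners.docker]
    image = "python:3.5"
    # Share the host's Docker daemon with jobs so docker/docker-compose work:
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    # Alternatively, run jobs privileged and use a docker:dind service:
    # privileged = true
```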

Prepare project for CI/CD

The next step is creating a .gitlab-ci.yml file in your project’s root directory. This file is very important, as it defines what happens when new code gets pushed to the project. How to set it up for a Python application (e.g. Django) is described in the documentation. We must also take care of the requirements.txt file, as all the packages defined there get installed on our Runner every time.
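For orientation, a minimal .gitlab-ci.yml for a Python project might look like this (the job name and test command are illustrative, not from the project):

```yaml
# Minimal illustrative pipeline: install dependencies, then run the tests
test:
  image: python:3.5
  script:
    - pip install -r requirements.txt
    - python manage.py test
```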

Local staging server and AWS deployment

My requirements were to have a local test server for testing the platform every time new code gets pushed, and to deploy the code to AWS Elastic Beanstalk only when the code is pushed to a certain deploy branch. This was not described in the official documentation, so I had to adapt. The deployment files needed by Docker and GitLab are contained in this repository:

Getting the local staging server to simply run the runserver command turned out to be more difficult than expected, as a Runner is not meant to keep a server running. Thus we have to run Docker inside the Docker Runner, and we need some additional files describing this container. This part was adapted from this tutorial.

The first step is adding a Dockerfile to the project directory; it describes how to set up the Docker container and which ports to open for the server to work. The following code was used in my case:

# Base image with Python 3.5
FROM python:3.5
# Create and switch to the project directory
RUN mkdir /spinnerStage
WORKDIR /spinnerStage
# Copy the project into the image
ADD . /spinnerStage
# Install the project's dependencies
RUN pip install -r requirements.txt
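To sanity-check the Dockerfile outside of CI, you can build and run the image by hand (the image tag here is arbitrary):

```shell
docker build -t spinnerstage .
docker run --rm -p 8000:8000 spinnerstage python manage.py runserver 0.0.0.0:8000
```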

Now we need to define the commands that are run when composing the Docker containers, which will serve the Django server on port 8000. This is done by adding a docker-compose.yml file to the project directory. Here is mine:

version: "2.2"
services:
  db:
    image: postgres
  web:
    build: .
    restart: always
    container_name: spinnerStage
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db

How Dockerfiles and Docker set-up work is nicely described here, and an example of using Docker with Python is available here.

AWS deployment

First, we need files for deploying to AWS that describe what to connect to. Creating them is described here in steps 2, 3 and 4, so that the eb deploy command works on the Runner. Secret variables for deployment must be explicitly provided to the Runner by GitLab and set per project in Settings:
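For orientation, the eb init step typically leaves a file like .elasticbeanstalk/config.yml in the project. A sketch with placeholder names (your application, environment and region will differ):

```yaml
# .elasticbeanstalk/config.yml (illustrative values, normally generated by `eb init`)
branch-defaults:
  default:
    environment: my-django-env
global:
  application_name: my-django-app
  default_region: ap-southeast-1
  profile: eb-cli
```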

Setting Variables in Settings / CI/CD / Variables.

Finally, after setting things up, it is very important to get the final .gitlab-ci.yml file right, as it defines most of the behaviour. The final version of mine:

stages:
  - staging
  - deploy

staging:
  stage: staging
  script:
    - echo "Deploying the app"
    - pip install docker-compose
    - docker-compose build
    - docker-compose up -d

deploy:
  type: deploy
  script:
    - mkdir ~/.aws/
    - touch ~/.aws/credentials
    - pip install -r deployReq.txt
    - printf "[eb-cli]\naws_access_key_id = %s\naws_secret_access_key = %s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
    - touch ~/.aws/config
    - printf "[profile eb-cli]\nregion=ap-southeast-1\noutput=json" >> ~/.aws/config
    - export PATH=~/.local/bin:$PATH
    - eb deploy
  only:
    - deployed
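The printf lines above just write an ordinary AWS credentials file; you can sanity-check the format locally with dummy values (never commit or echo real keys):

```shell
# Write a throwaway credentials file with dummy values and inspect it
printf "[eb-cli]\naws_access_key_id = %s\naws_secret_access_key = %s\n" \
  "DUMMY_ID" "DUMMY_SECRET" > /tmp/credentials_check
cat /tmp/credentials_check
```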


The result of this configuration is that every push to the master branch gets built by the Runner in a Docker container, which serves a Django server on port 8000 for testing. More importantly, once we are confident the code is as bug-free as possible, we can push it to the branch named deployed, and the code then gets deployed to AWS automatically.

Comment: This article is under construction and will improve with time, so corrections are welcome. For any questions, contact the author and refer to the file GitLab Insights.