What and Who is This Whale? : The Arduous Journey of Our Deployment Process Using Docker

Alya Putri
PsychoTeam
9 min read · Apr 15, 2019

If I could sum up our Docker deployment process in one word, it would definitely be abstruse. With its steep learning curve, it has probably eaten more of our time and energy than anything else during the development of PsychoTip.

Question: what is Docker, and why use it?

To be honest, I also had absolutely no clue at the beginning of this project, since I rarely fiddle with the backend side of things. So, it’s safe to say that I had a lot of research and learning to do.

From all the things I’ve read and explorations I’ve done, here’s what I got:

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

There are a few terms that need to be defined for easier elaboration:

  1. The Docker daemon is a service that runs on your host operating system. The Docker daemon itself exposes a REST API.
  2. A Docker container is an isolated environment that packages an application so it runs quickly and reliably from one computing environment to another. Changes made inside one container do not affect other containers.
  3. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
Illustration of a Docker Container.

Still sounds like mumbo jumbo, right? Here’s an easier explanation, albeit a little long, so bear with me.

Say you need to build an application, in our case PsychoTip. To make the application accessible to the public, it needs to be hosted somewhere. Back then, we either had to set up dedicated machines called servers, or deploy to hosting companies such as 1&1, GoDaddy, etc.

Docker, like cloud service providers, offers something far more optimized for performance and cost. Why? Because cloud computing relies on a concept called “virtualization”: hardware resources can be partitioned to serve customers optimally. This means customers only pay for the resources they actually use, instead of setting up an entire server.

But even traditional cloud computing cannot avoid one thing: heavy operating system overhead. Operating systems such as macOS, Microsoft Windows, and Linux can easily exceed 1 gigabyte, while your actual application may be much smaller than that. So why would you want a virtual machine that is more than 1 GB in size when your application needs far less?

This is where the container concept comes in. With Docker, instead of running a full operating system for each application, common resources can be shared through the “Docker Engine,” which sits on top of the host operating system as shown below.

This is a breakthrough for application development, because this level of abstraction is exactly what enterprise companies and individual developers needed: no more provisioning giant virtual machines, just the minimal “containers” essential to host their applications.

Now, how do we incorporate this into our development process?

In theory, it’s actually quite simple.

Our project has two applications: the client app (Android) and the admin app + API (Django). Because we have two applications, we use two types of Docker image. The first type is the frontend image, which serves a download link to our most recent release of the Android application. There are three frontend images: one from the master branch, one from the staging branch, and one from the development branches (whichever has the latest push).

The second type is the backend image, which is used to deploy the Django application. There is only one backend image.

Docker Setup

Here is the list of files we needed to build a Docker image:

1) requirements.txt & build.gradle

requirements.txt lists all the packages required by the Python project for our API and admin app, while the client app declares its dependencies in build.gradle. As of right now, here are the contents of our files:
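(The actual file contents are shown as images. As a purely illustrative sketch, a Django project’s requirements.txt looks something like the following; the packages and version pins here are hypothetical, not our actual ones.)

    # hypothetical requirements.txt for a Django API + admin app
    Django==2.1.7
    djangorestframework==3.9.2
    gunicorn==19.9.0
    psycopg2-binary==2.7.7

build.gradle plays the same role for the Android client, listing its compile-time dependencies.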

2) Dockerfile

A Dockerfile tells Docker what steps need to be done to create an image. In our project, we used three different Dockerfiles: Dockerfile, ProductionDockerfile, and BackendDockerfile.

Dockerfile is used for our staging and development branch environments. The APK we’ve built is copied into the image, and nginx then serves a static link to download it.
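As a sketch reconstructed from the line-by-line explanation below (line numbers match the explanation; the exact file paths, including the nginx config name and the static files directory, are assumptions), the Dockerfile looks roughly like this:

    FROM nginx:alpine

    ADD . /application
    WORKDIR /application

    COPY default.conf /etc/nginx/conf.d/default.conf
    COPY app-debug.apk /usr/share/nginx/html/app-debug.apk

    EXPOSE 80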

EDIT — May 1, 2019:

Here’s a line-by-line explanation of the Dockerfile:

  • Line 1: Imports the nginx:alpine image. This declares that the Docker image will be based on nginx:alpine.
  • Line 3: Adds all content of the current folder, which in this case is PsychoTip, to a directory named “application”.
  • Line 4: Sets the current working directory to application.
  • Line 6: Copies the nginx default configuration to /etc/nginx/conf.d/default.conf.
  • Line 7: Copies the app-debug.apk file to nginx’s static files directory.
  • Line 9: Declares that the container exposes port 80.

ProductionDockerfile is used for our master branch environment. It works exactly the same as Dockerfile. (EDIT — May 1, 2019) The only difference lies in the APK file that is copied on line 7, as shown below.
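Under the same assumptions as the Dockerfile sketch above, line 7 would simply become:

    COPY app-release-unsigned.apk /usr/share/nginx/html/app-release-unsigned.apk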

EDIT — May 1, 2019:

Here’s a line-by-line explanation of the ProductionDockerfile:

  • Line 1: Imports the nginx:alpine image. This declares that the Docker image will be based on nginx:alpine.
  • Line 3: Adds all content of the current folder, which in this case is PsychoTip, to a directory named “application”.
  • Line 4: Sets the current working directory to application.
  • Line 6: Copies the nginx default configuration to /etc/nginx/conf.d/default.conf.
  • Line 7: Copies the app-release-unsigned.apk file to nginx’s static files directory.
  • Line 9: Declares that the container exposes port 80.

BackendDockerfile is used for our backend. The image created is based on the python:3.5-slim image. The requirements are installed, and then start.sh is executed when the container is run.
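Again as a sketch reconstructed from the line-by-line explanation below (line numbers match the explanation; the copy destinations and the exact CMD form are assumptions), BackendDockerfile looks roughly like this:

    FROM python:3.5-slim

    RUN mkdir /code
    WORKDIR /code

    COPY requirements.txt /code/
    RUN pip install -r requirements.txt

    COPY . /code/
    COPY start.sh /code/

    CMD ["sh", "start.sh"]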

EDIT — May 1, 2019:

Here’s a line-by-line explanation of the BackendDockerfile:

  • Line 1: Imports the Python 3.5 image. The slim part means we’re using a lighter variant of the Python 3.5 image that leaves out packages not needed at runtime.
  • Line 3: Runs the command mkdir /code, which makes a new directory named “code”.
  • Line 4: Sets the current working directory to “code”.
  • Line 6: Copies requirements.txt to “code”.
  • Line 7: Runs pip install -r requirements.txt, which, as the name suggests, installs all the dependencies declared inside requirements.txt.
  • Line 9: Copies all content of the current folder, which in this case is Psychotip_backend, to “code”. The content includes source code, HTML templates, etc.
  • Line 10: Copies start.sh to “code”.
  • Line 12: Sets the command that executes start.sh when the container is run.

3) start.sh

This file tells gunicorn to run a server for the project. In start.sh, we also declare which port is used to run the program. We reference start.sh in BackendDockerfile so that all the commands needed to deploy the backend live in one place, and they get executed when Portainer runs the image.
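A minimal sketch of such a start.sh, assuming the Django project’s WSGI module is named psychotip and the app listens on port 8000 (both are placeholders, not our actual values):

    #!/bin/sh
    # Serve the Django project with gunicorn.
    # psychotip.wsgi and port 8000 are hypothetical placeholders.
    gunicorn psychotip.wsgi:application --bind 0.0.0.0:8000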

Manual Deployment

In the event we don’t need automated deployment, we can build and push an image from our local machine using the commands below:
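(The original commands are shown as an image; this sketch uses the registry from later in the article and the development image name as an example tag.)

    docker build -t registry.docker.ppl.cs.ui.ac.id/development .
    docker push registry.docker.ppl.cs.ui.ac.id/development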

The first line will create the image, and the next line will push the image to the registry.

GitLab Deployment

For automated deployment on GitLab, we need to set up the .gitlab-ci.yml file so the project is deployed automatically whenever we push our local repository to the remote repository.
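(The actual snippet is an image; the following is a sketch of what such a job could look like, where the job name, the docker:dind service, the login step, and the image tag are assumptions.)

    build-staging:
      image: docker:latest
      stage: build
      services:
        - docker:dind
      only:
        - staging
      script:
        - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.docker.ppl.cs.ui.ac.id
        - docker build -t registry.docker.ppl.cs.ui.ac.id/staging .
        - docker push registry.docker.ppl.cs.ui.ac.id/staging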

The above snippet from .gitlab-ci.yml runs only on the staging branch. The script builds a docker image using the defined Dockerfile and pushes it to the specified registry, in this case registry.docker.ppl.cs.ui.ac.id.

Portainer

Now we’ve reached the Portainer part. Portainer helps us manage our Docker containers through a graphical interface instead of the bare command line.

Below are the steps needed to deploy the image through Portainer, where we’ll use the frontend image as an example.

1) Pull the image from the registry

Here, we pull into Portainer the image that we previously pushed to the registry. The image name and target registry are specified.

Pull Image Screen

In the picture above, the image name is development, and the target registry is registry.docker.ppl.cs.ui.ac.id.

2) Make the container

In the container section, a new container is created with these configurations. Since we are using the development image, set the container port to the one defined in the Dockerfile (80), and set the host port as desired.

Create Container Screen
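(For reference, what Portainer does in this step is roughly equivalent to the following CLI command, where the host port 8080 is just an example.)

    docker run -d -p 8080:80 registry.docker.ppl.cs.ui.ac.id/development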

3) Test whether the image is deployed

Open the address and port where the container is deployed. If the application is there and the timestamp matches the time of deployment, then the application is live!

Deployed Container Screenshot

Process in Reality?

So. We’ve seen that the process of deploying our application is not that convoluted. In reality, however, it went more or less like this:

DisallowedHost Error
Symbolic Link Creation Failure
Timeout (?)

Yes. Even though the steps are easy to understand, the errors are vast and varied. We even encountered errors we’d never heard of before, such as the one in the middle picture, where deployment failed because a symbolic link could not be created (???). We asked around, even our lecturer (Hi Pak Daya :D), but apparently nobody knows the solution. And if they don’t, how are we even supposed to know?

Of course, we’ve had a couple of amusing reactions to these errors. Here are some highlights from our group chat.

Conversation Snippets

The leftmost picture was our reaction when we discovered that we only had one image remaining in the registry, even though we had three the last time we checked. This happened quite recently actually, probably an hour ago, two tops. Please note that some words are censored to keep this article PG-13.

The middle conversation happened during the period when our API deployment never succeeded. At that point, all of us were on the brink of insanity and decided to just randomly sing and use mildly threatening smiley faces to express our frustration. In the end, it all led to the rightmost picture, which sums up the whole mood.

There you have it: our voyage into the intricacies of Docker, with the help of Portainer. I feel like we did a pretty good job, considering we knew almost nothing about it at the start. Claps!
