Software Architecture

Naufal
HappyFresh Fleet Tracker
4 min read · Oct 17, 2019

Maintainability and continuous deployment are major concerns in enterprise-level projects; in the fleet-tracker project that we are currently delivering, concerns about large-scale and uninterrupted use are also present. In this article, I would like to bring up a popular and elegant solution to this problem: Docker.

Docker, an OS-level platform service

What Is It?

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called ‘containers’. All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines. Containers are also isolated from one another and bundle their own software, libraries, and configuration files.

This might sound a bit complicated, especially when compared to other machine abstraction methods such as virtual machines (VMs). The following diagram illustrates their key differences:

Containers versus Virtual Machines

Virtual machines are an abstraction of physical hardware, turning one server into many servers. Each VM run on a machine includes a full copy of an operating system, the application, and the necessary libraries, which takes up quite a bit of space and memory and makes VMs slow to boot and redeploy.

Containers, on the other hand, are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel, so they take up less space, use less memory, and can handle more applications.

Running a Container

The process of creating a Docker container on your machine is pretty straightforward, and getting a mock server running on your own machine is as simple as writing a Dockerfile and building your code with it.

The following code is a bare-bones configuration file to create a PHP Docker image:
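A minimal sketch of such a Dockerfile, assuming the official `php:7.0-apache` base image and application code kept in `src/`, could look like this:

```dockerfile
# Start from the official PHP 7.0 image with Apache bundled in
FROM php:7.0-apache

# Copy our application source into Apache's document root
COPY src/ /var/www/html/

# Apache listens on port 80 by default
EXPOSE 80
```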

This simple Dockerfile creates an image on top of an existing base image specifically made for PHP 7.0 with Apache installed, ensuring that our code will stay up and running.

There are many different images conveniently hosted on Docker Hub, free to use, which we can base our Dockerfile on. After getting our source code ready in src/, we can build our Docker image using docker build. Once Docker finishes downloading all of the layers that make up the base image, it copies all of our files in src/ and outputs our new image. We can then run the image using docker run on any machine with Docker installed, and we finally have our container up and running.
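Concretely, assuming the Dockerfile sits in the current directory and we tag the image my-php-app (a name chosen here purely for illustration), the two commands look like this:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-php-app .

# Run a container in the background, mapping host port 8080 to container port 80
docker run -d -p 8080:80 my-php-app
```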

Docker Compose

All of this is fine and dandy until you start to run multiple containers and services on various servers and the process starts to get tedious; that’s where Docker Compose comes in. Docker Compose lets us define all of our services in a single configuration file, and with one command it will spin up all of the containers that we need.
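As an illustration, a compose file for a setup like ours might define a PHP web service and a database it depends on; the service names, images, and credentials below are assumptions, not our actual configuration:

```yaml
version: "3"
services:
  web:
    build: .                     # build the image from the local Dockerfile
    ports:
      - "8080:80"                # expose the web server on the host
    volumes:
      - ./src:/var/www/html      # mount the source code into the container
    depends_on:
      - db                       # start the database first
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
```

Running docker-compose up -d then builds and starts both services with a single command.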

This sample compose file sets up all of the environment variables, volumes, dependencies, and other configuration that we need, automating the process of building and running the images on the machine.

CI and CD

A large part of maintaining a widely used and competitive product is ensuring continuous integration of feature changes and continuous delivery of the product. GitLab provides an excellent built-in CI/CD tool, which allows us to catch bugs and errors early in the development cycle and ensures that all code deployed to production complies with our code standards.

The configuration for code deployment comes in three distinct stages: build, test, and deploy. The build stage ensures that the code builds successfully using the toolchain we specify in the configuration file; our back end uses go-builder for Go and node for JavaScript. The testing frameworks we use for the test stage consist of the built-in Beego testing tool and Jest for JavaScript. Deployment is done with Docker, with services and dependencies set up by our DevOps to ensure that the code runs properly in any environment.
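A hypothetical .gitlab-ci.yml following those three stages might look like the sketch below; the images and scripts are stand-ins, not our actual pipeline:

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: golang:1.13          # assumed Go toolchain image
  script:
    - go build ./...

test:
  stage: test
  image: golang:1.13
  script:
    - go test ./...

deploy:
  stage: deploy
  image: docker:stable
  script:
    - docker build -t fleet-tracker .   # illustrative image tag
```

Each stage only runs if the previous one succeeds, which is what keeps broken builds from ever reaching the deploy step.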

I’ve learned a lot in the few weeks I’ve spent on this course, and Docker is definitely one of the technologies I wouldn’t have gotten into anytime soon had I not worked on the fleet-tracker project we’re currently developing.

I’m looking forward to learning more about Docker and its uses, and I’m certainly looking forward to learning about more technologies and techniques in the future.
