Docker, Containerization, and Software Architecture

Rayhan Arwindra
Published in Pilar 2020 · Nov 18, 2020

Today’s software development tools are so numerous that it’s difficult to keep track of every technology used in a project. This makes migrating a project to another system or machine painful, as we struggle to reinstall each and every dependency from the previous machine, at exactly the right version.

By using Docker, we can “contain” all our tools and dependencies in a container. Afterwards, we can pass the container image around to other machines, or even other developers. By running that container, they’ll have the exact set of tools and dependencies that we had on our machine.

Docker

Docker is a tool for container management. It’s a flexible and relatively simple tool to help develop, deploy, and run applications with containers. This process of isolating your project inside a container is called “containerization”.

With Docker, your containers are portable: you can build them locally, then pass them on to other machines, or even deploy them online to any Docker environment.
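As a quick illustration, one way to hand an image to another machine (using a hypothetical image named myapp) is to export it to a tarball with docker save and import it on the other side with docker load:

# export the image to a tarball we can copy anywhere
docker save -o myapp.tar myapp:latest

# on the other machine, import the tarball as a local image
docker load -i myapp.tar

Pushing the image to a registry with docker push accomplishes the same thing over the network.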

Container


Let’s say you were moving overseas. To bring all your furniture, cooking utensils, and other items that won’t fit into your luggage, you would need to ship them in a container.

Afterwards, when you arrive at your new home and your shipment arrives, you get back all the items you packed, exactly as you packed them. No items have changed, and hopefully none have gone missing.

The same view applies to containers in software.

Containers are units of software that help developers isolate their application from its environment. By doing so, developers can solve the ever-irritating “it works on my machine” problem.

A container is simply a running process that can virtualize an operating system, execute a program, start a server, and so on.

Containers vs Virtual Machines

Containers are often compared to virtual machines (VMs), but the two have some fundamental differences. For one, containers run on the host OS, while each VM runs its own guest OS on top of a hypervisor.

Containers are made up of just the libraries and binaries for an application, and do not run their own guest OS. This makes containers much lighter and less resource-hungry than VMs.

Docker Image

A Docker image is a snapshot or template of a container. It contains everything needed to run the application as a container.

An image can include the application’s code, libraries, environment variables, files, and so on. The image can then be run as a container, or even be deployed to any Docker environment online.
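For example, you can grab a ready-made image from Docker Hub and run it as a container with just two commands:

# download the image from Docker Hub
docker pull hello-world

# start a container from that image
docker run hello-world

The hello-world image simply prints a greeting and exits, but the flow is the same for any image.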

Docker Example

Say we have a React project on Windows (source: Pilar Project Front-end).

If we run the project, the program works, and we can view the website on the designated port of 1234 in our browser.

Now, let’s say we send this project to another system running Ubuntu and try running it there.

This time, an error prevents us from starting the project. The project is exactly the same as the one on Windows, but since we changed systems, certain libraries or modules are missing.

Let’s utilize docker containers to fix this issue.

Dockerfile

To build a Docker image automatically, we can use a Dockerfile. A Dockerfile is a text file containing a list of instructions, much like the commands a user would run on the command line, that Docker executes in order to assemble an image. Refer to this GitHub page to read more about Dockerfile instructions.

Here’s the Dockerfile we created:
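Reconstructed from the walkthrough below, it looks roughly like this (the exact node base-image tag is an assumption):

# pull a Node.js base image, needed to run a React app
FROM node:latest
# create the directory /app/src and make it the working directory
RUN mkdir -p /app/src
WORKDIR /app/src
# copy the dependency list and install the dependencies
COPY package.json .
RUN npm install
# copy the rest of the project into the working directory
COPY . .
# the app serves on port 1234
EXPOSE 1234
# command that runs when the container starts
CMD ["npm", "start"]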

The first command pulls a Docker image of Node.js, which is needed for running a React app.

The second command creates a new directory named /app/src, and the command after that sets the working directory to the newly-created directory.

The fourth command then copies the project’s package.json file, which lists all the dependencies required by this project.

The fifth command installs all the dependencies listed in the package.json file we just copied.

The sixth command might seem odd, but let’s try to understand it. Back in the third command, we set the working directory to /app/src, so inside the image we can refer to that working directory by simply typing a dot (.).

A dot can also refer to the current directory on our machine. So, the command COPY . . means: copy everything in the current project directory into the newly created working directory at /app/src.

The next command exposes a certain port, in our case 1234. Finally, the last command specifies what runs when the container starts, which is npm start.

The instructions in the Dockerfile can be run by typing:

docker build . -t [REPOSITORY]:[TAG]

The repository and tag will be used later when running the created container. In this project I named the repository pilarapp and the tag latest, so the command I type would be:

docker build . -t pilarapp:latest

After running the build command, we have successfully created a Docker image, which can then be viewed by typing:

docker images

This lists all the images you have, including the ones you pulled, along with each image’s repository name and tag.

To run the Docker container, we can type:

docker run -it -p [PORT]:[PORT]/tcp [REPOSITORY]:[TAG]

In my case, since the port is 1234, the command I type would be:

docker run -it -p 1234:1234/tcp pilarapp:latest

And with that, the program now works, and the server is live.

If you have many containers for one app, for example a website and a database, you can use Docker Compose to run them together, as sketched below. You can read more about Docker Compose here.
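To give a feel for it, here is a minimal docker-compose.yml sketch for a two-container setup; the service names and the postgres image are assumptions for illustration:

version: "3"
services:
  web:
    build: .                # build the website image from the Dockerfile above
    ports:
      - "1234:1234"         # same port mapping as the docker run command above
  db:
    image: postgres:13      # a hypothetical database container
    environment:
      POSTGRES_PASSWORD: example

A single docker-compose up then starts both containers together.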

Container Orchestration

Imagine you have a full-stack app consisting of a frontend, a backend, and a database, with each of the three parts deployed onto its own separate server.

To picture this, we need to look at software architecture: the blueprint of how the elements of your software interact with one another in the whole system. See the diagram below:

Source: Full-Stack Architecture for Pilar Project

The user interacts with the frontend side of the application, either on the website or on mobile.

Then, the frontend makes a request to the API whenever it requires its services.

Finally, the API calls on the database to get data, and that data is forwarded all the way back until it reaches the user.

Let’s say we were to containerize all the elements of our project and then deploy them onto different servers. How can we get them to communicate as in the architecture diagram?

Since containers are portable, we should expect that they can be moved around and even separated. The fact that they run consistently on any machine with a container runtime is also something to take advantage of when deploying a full-stack application across separate servers.

Thankfully, we now have tools to manage, deploy, and scale our applications across different data centers or clouds. These tools are called orchestrators, which help to maintain our containerized applications, and even replace the containers automatically should they somehow fail.

The most popular orchestration tools are Kubernetes and Docker Swarm; you can read here for more details on setting them up.
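As a small taste of what orchestration looks like, here is a minimal Kubernetes Deployment sketch; the manifest is illustrative, reusing the pilarapp image from earlier, and keeps three copies of the container running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pilarapp
spec:
  replicas: 3                      # keep three copies running at all times
  selector:
    matchLabels:
      app: pilarapp
  template:
    metadata:
      labels:
        app: pilarapp
    spec:
      containers:
        - name: pilarapp
          image: pilarapp:latest   # the image we built earlier
          ports:
            - containerPort: 1234

If one of the three containers crashes, Kubernetes automatically starts a replacement to match the desired count.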

Conclusion

Containers enable us to isolate our application and its dependencies. Docker allows us to manage those containers, making them more portable and simplifying the containerization process as a whole.

Today, Docker is the most popular container management tool for developers worldwide, and for good reason. Learning Docker to containerize your application, and then moving further to orchestrate your containers, will definitely help you on your software development journey.
