Docker Principles

R.R. Dev
6 min read · Dec 19, 2023


Howdy! I’m back, and this time I want to talk about our little smiling whale: Docker! You may have heard of it; it’s pretty common and popular these days, and if you want to get a good job, you need to learn about this world dominated by containers, where the king is Docker!

I’ve probably mentioned containers and why they matter before, maybe in the Kubernetes post, but let’s go over it again so you don’t have to go anywhere else for context.

Imagine a world without containers. Easy, right? If you’re new and want to learn Docker principles, you probably don’t know much about them yet, so… why do they exist? In the past, we as developers would create an application, I don’t know, let’s say a server. This application/server was a converter: it takes a PNG image and converts it to a JPG. We code the software and now we want to put it on a physical server to make it available to everyone, ready for the clients. The first question that comes to mind in that situation is “What kind of server do we need?”, and my friend, that is why we need containers.

If you think you can tell me exactly what your app needs in order to run and serve all the expected and unexpected clients, I would say you’re lying. What we would all do is buy the most powerful machine we can afford; there’s no failure there, it will run no matter what. So you buy the Core i7 with 64 GB of RAM and 1 TB of SSD, and then your application uses 8% of the resources. Do you see the problem here? You lost money because your app is wasting all the unused resources. Why? Because in the past you couldn’t safely share a server between multiple apps: one server, one application. Now you’re fired for the waste of money, and game over.

A genius noticed we needed a solution, so VMs (virtual machines) entered the equation. Now we can virtualize environments and share resources among as many apps as we need, so it’s fixed, right? WRONG! Yeah, VMs are awesome, and they can be a good solution for some problems, but just think about it. VMs are virtualized environments: each one needs its own operating system, software, and, of course, physical power. That means that if you have three apps, you’ll have three VMs, and each VM may need its own license for Windows or an enterprise Linux, which will cost you extra. Also, what if you need paid resources for each app? What if one day you need more resources for one app, but you’ve already distributed everything and have nothing left? And of course, what about your budget? You really need power to run three VMs; there’s no such thing as an economical server for three VMs. That’s why we need something more than VMs.

Some time after dealing with these issues, another genius created the concept of “containers” and solved this painful situation. So now we can start with this!

Containers

Finally, the ultimate solution. What the heck are containers? Well, to make it easy, just think of them as lightweight VMs. In professional words, a container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. In human words, they’re just environments with the minimum requirements you need to run the app. For example, if you have a JS application, you’ll probably need just the project dependencies and nothing more; that’s the magic. Or maybe you need a Linux machine, and the container can contain it, without a GUI or unnecessary things, just the minimum to run the app.

Containers vs VMs

To see it more clearly, the image above shows the difference between containers and VMs. VMs need to run their own guest operating systems while sharing the machine’s resources; containers just run on the host operating system, using resources only when they need them. All you have to manage is the application and its little dependencies, and that’s how you’ll be free of a huge number of decisions and issues.
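One quick way to see that containers share the host kernel instead of booting their own (the big difference from VMs) is to compare kernel versions. A small sketch, assuming Docker is installed and the public `alpine` image is available:

```shell
# The host's kernel version:
uname -r

# A container's kernel version -- the same kernel, because
# containers share the host OS instead of booting a guest OS
# like a VM would:
docker run --rm alpine uname -r
```

Both commands should print the same version string, which is exactly why containers are so much lighter than VMs.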

Images

Well well well, now you might think: awesome, I know what a container is, but how can I create one? The answer: images. No, I’m not talking about the photos of your crush on Instagram. In Docker, an image is the build artifact of the Docker life cycle: a stack of layers on a union file system, built step by step following the instructions you wrote in a file called a “Dockerfile”:

# Build stage: install dependencies and compile the React app
FROM node:17-alpine AS builder
WORKDIR /app
COPY package*.json .
COPY yarn*.lock .
RUN yarn install
COPY . .
RUN yarn build

# Serve stage: copy only the static build output into nginx
FROM nginx:1.19.0
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=builder /app/build .
ENTRYPOINT ["nginx", "-g", "daemon off;"]

What you see there is a Dockerfile with the instructions for a React app. Docker will take those instructions and build an image, and that image works like a template you can use to create containers: one single container, or multiple containers that are exactly the same. That’s the magic of images.
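With that Dockerfile saved in your project root, the build-and-run flow could look like this (just a sketch; the image name `my-react-app` is a placeholder, pick whatever you like):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it with a friendly name
docker build -t my-react-app .

# Create a container from that image, mapping port 8080 on the
# host to nginx's port 80 inside the container
docker run -d -p 8080:80 --name web my-react-app
```

After that, the React app would be reachable at http://localhost:8080, and you could start as many identical containers from the same image as you want.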

Registry

Now, Docker had a vision: why have a single image locally when you can have a whole library, a collection of images? That’s what a registry is: a collection of images. It can be local (the one you have with the Docker software) or you can use the online registry, DockerHub.

DockerHub

You can find it at https://hub.docker.com/. There are many public images you can use without creating your own; for example, if you need to create a container from a Linux image, you just use the public version and pull it into your local registry.
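The flow also works in the other direction: you can publish your own images to DockerHub so other machines can pull them. A rough sketch, assuming you have an account (the username `yourname` and the image `my-react-app` are placeholders):

```shell
# Authenticate against DockerHub
docker login

# Tag a local image with your DockerHub namespace and a version
docker tag my-react-app yourname/my-react-app:1.0

# Upload it to DockerHub so any machine can pull it later
docker push yourname/my-react-app:1.0
```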

Docker Process

See, when you install Docker you get the client and the Docker host. The client is the CLI or the GUI used to connect to the Docker host, the backend side, which takes the commands and instructions from the client and does whatever you need. Docker will look in your local registry, and if it finds the right image it will follow the instructions to create the container; if it doesn’t find the image, it will pull a copy of that image into your local registry from DockerHub.
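You can actually see this client/host split and the automatic pull on your own machine (assuming Docker is installed and the daemon is running):

```shell
# The CLI (client) and the daemon (host) report their versions
# separately, because they are two different programs talking
# over an API:
docker version

# If "hello-world" isn't in your local registry, Docker pulls it
# from DockerHub automatically before creating the container:
docker run hello-world
```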

That’s everything you need to know about the theory to use it in your personal projects, but let’s do a practical exercise just to finish this post:

Linux Container

In your terminal, after installing Docker, you can execute this command:

docker pull ubuntu:16.04

Now you have a copy of Ubuntu 16.04 in your local registry. Well, let’s run Ubuntu:

docker run -it ubuntu:16.04 /bin/bash

That will create a temporary container and drop you into the Ubuntu operating system through the terminal. Basically, “docker run” executes your command and creates the container, and “-it” makes it interactive with a terminal attached, so you land in the Ubuntu shell. If you execute “ls -a” you’ll see the root directories, something like this:

root@36b587cbd5f3:/# ls -a
. .. .dockerenv bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@36b587cbd5f3:/#
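When you type `exit` inside the container, its shell ends and the container stops, but it still exists on disk. A quick sketch of how you might inspect and clean up afterwards (the container ID is the example one from above; yours will differ):

```shell
# Type "exit" inside the container to leave its shell; the
# container stops but is not deleted.

# List all containers, including stopped ones:
docker ps -a

# Remove a stopped container by ID or name:
docker rm 36b587cbd5f3
```

A handy alternative is adding `--rm` to `docker run`, which deletes the container automatically as soon as it exits.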

And that’s it, as simple as I said. Well, I hope you enjoyed reading this post. I’ll write another one as an advanced Docker tutorial, where I’ll teach you how to create servers and internal networks between containers. For now, that’s everything from me, see you in the next one!
