Container/Docker Basics for Busy IT Professionals

Roger Galobardes
7 min read · Jan 1, 2023


Can’t postpone learning about containers any longer in 2023? Don’t have the patience to sit through YouTube videos? Is your unread message count at four digits? This article is for you.

What are Containers? And Docker? Why should I care?

Containers are similar to Virtual Machines but take the virtualisation layer a bit further up:

Virtual Machines share the hardware resources (e.g. a laptop), and a hypervisor (e.g. VirtualBox) is in charge of assigning hardware to the VMs and managing it. Finally, you need to manage each VM’s Operating System and the software running in it.

Containers share the hardware AND the Operating System. So all that’s left to manage is the software in them (and any libraries required to run it).

Source: Atlassian

The Container Engine (e.g. Docker) acts like the hypervisor (e.g. VirtualBox or VMware Player), connecting the operating system with each container.

How are containers separated?

Containers use a feature in the Linux Kernel named Namespaces. These help isolate parts of the Operating System such as Processes, Network Resources, Users…
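You can see namespaces in action even without Docker by playing with the unshare tool from util-linux. A minimal sketch, assuming a Linux machine with root access (the hostname used is made up):

```shell
# Start a shell in a new UTS namespace and change the hostname inside it;
# the change is only visible within that namespace.
sudo unshare --uts sh -c 'hostname demo-container; hostname'

# Back outside, the host's hostname is untouched
hostname

# List the namespaces currently in use on the system
lsns
```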

If you want to know more, this LWN article on namespaces is a good read; I found it via the LiveOverflow video that demonstrates how Namespaces work (strongly recommended for a quick deep dive into the feature).

Linux Kernel? I thought Docker was cross-platform? Well, when you run Docker on Mac or Windows, it effectively ends up running a Linux VM in the background.

On Windows there are minimum requirements, such as a 64-bit Windows 10 with Hyper-V and virtualisation enabled.

That’s why, when running Docker on a Mac, you see a VirtualMachine process in the background and the laptop fans might spin at full speed.

Why use Containers?

  • Resource Optimisation: Imagine if, instead of running 50 Operating Systems, you could get away with 1. That is a significant amount of resources saved, which means you can run more instances of an application on the same hardware. Along the same lines, imagine the savings in software licensing as well!
  • Deployment Time: You know how long it takes to boot a whole VM, and most of that is Operating System boot time. With containers the OS is already running, so all you have to wait for when starting a container is for your software to execute.
  • Consistency/Portability: Containerised applications include all the dependencies to make them work. This means that whatever the Devs programmed and tested should work elsewhere. In the same manner it makes it very easy to replicate any bugs that users might find, as the environment of the user and the developers will be pretty much the same.

Any Container downsides?

Mainly security and management complexity (which can lead to further security issues). Good news for us cybersecurity professionals out there, right?

Virtual Machines have an extra layer of security because each one runs a separate Operating System; that is no longer the case with Containers.

Containers share the host’s kernel, so they are vulnerable to kernel exploits, which adds an extra element of worry.

If you’re interested, there will be a follow up article focusing exclusively on Container Security.

In terms of complexity, when working with large-scale deployments there are orchestration technologies like Kubernetes that help with container deployment and simplify a lot of the work. More on this in the last section!

Docker Architecture and components:

Docker Architecture. Source: Docker Docs

Docker Client:

Helps users interact with Containers. Can be used via CLI, API, or GUI.

When you run commands beginning with $docker run, you are using the Client.

Docker Host:

The machine where the containers run; it can be virtual or physical (yes, you can combine VMs and containers, and most people do).

The Docker Host can be a different machine from the one where the Docker Client is installed (for example, the host can be on AWS while the client runs on your laptop).
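For instance, the Client can be pointed at a remote Host over SSH via the DOCKER_HOST environment variable. A quick sketch, where the user and hostname are hypothetical:

```shell
# Tell the local Docker Client to talk to a remote Docker Host over SSH
export DOCKER_HOST=ssh://ubuntu@my-aws-host

# This now lists the containers running on the remote host,
# not on your laptop
docker ps

# Unset it to go back to the local daemon
unset DOCKER_HOST
```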

Docker Images:

If we compare containers to VMs, Docker Images are like .OVA files or VM snapshots.

In order to run a container you need an image. This image can be downloaded from a Docker Registry such as Docker Hub (a public registry) or a private registry that you keep locally, or you can also create your own images.
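As a sketch of both options, here is how you might pull an image from Docker Hub and build a minimal image of your own (the file contents and tag are made up):

```shell
# Download an image from Docker Hub
docker pull nginx

# Create a minimal Dockerfile that adds a page on top of the nginx image
cat > Dockerfile <<'EOF'
FROM nginx
COPY index.html /usr/share/nginx/html/
EOF

# Build your own image from it (assumes index.html exists in this directory)
docker build -t my-nginx .
```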

Docker Storage:

Data generated inside a container is lost when the container is deleted. Containers can be stopped, and in that state data won’t be lost, but if you want the data to be persistent you can mount Volumes, which are directories from the Docker host. Very similar to the way you mount a volume on cloud instances or on VMs.
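A quick sketch of both styles, with hypothetical paths and names:

```shell
# Bind-mount a directory from the Docker host into the container;
# files under /opt/datadir survive even if the container is deleted
docker run -d -v /opt/datadir:/usr/share/nginx/html nginx

# Alternatively, use a named volume managed by Docker itself
docker volume create web-data
docker run -d -v web-data:/usr/share/nginx/html nginx
```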

Docker Networking:

On Linux Docker manipulates iptables to manage container networking.

There are 3 main types of networks (also named drivers), very similar to VM networking:

  • Bridge: Default network. Containers are attached to this network by default and can access each other in the 172.17.X.X space. To reach containers on this network from outside the host, you need to map ports on the containers to ports on the host.

You can have more than one Bridge network if you want to segment your networks and attach containers to one or the other, for example if you want to separate containers between front-end and back-end.

  • Host: This connects the container directly to the Host, and all the interfaces on the host will be made available to the container. This is ideal if you don’t need network isolation between your Host and the container, but not great if you have more than one container competing for the same ports.
  • None: The container gets no external networking and runs in its own isolated network stack.

Docker has a built-in DNS server, so on user-defined networks you can refer to containers by their names.
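As a sketch, here is a user-defined bridge network with two containers resolving each other by name (the names are made up):

```shell
# Create a user-defined bridge network
docker network create front-end

# Attach two containers to it
docker run -d --name web --network front-end nginx
docker run -d --name cache --network front-end redis

# Containers on the same user-defined network resolve each other by name
docker exec web getent hosts cache
```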

Docker Compose:

Configuring things like port forwarding and mounting volumes/directories requires entering additional parameters. For example:

$docker run -p 8080:80 -v /opt/datadir:/var/lib/webapp nginx

This seems doable for a single container, but when you have a deployment with several containers and complex dependencies… It all becomes very prone to human error.

Docker Compose lets you put all these parameters and dependencies in a YAML file, which you then use to build the deployments.

Furthermore, this YAML file doubles as documentation, as it is pretty easy to read. For example, for a basic web server it would look like this:

services:
  web:
    image: nginx
    volumes:
      - ./templates:/etc/nginx/templates
    ports:
      - "8080:80"
    environment:
      - NGINX_HOST=foobar.com
      - NGINX_PORT=80
    networks:
      - front-end

networks:
  front-end:

The YAML file above defines a container that uses the nginx base image, mounts the ./templates directory from our Docker host to /etc/nginx/templates in the container, maps port 8080 on our host to port 80 on the container, sets up some environment variables, and then connects the container to the “front-end” network.

If we were to define more containers, we could place them on the same or different networks by using the networks parameter; it is that easy!
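For example, a hypothetical two-container setup where the web server and a database sit on separate networks might look like this (the image names and networks are illustrative):

```yaml
services:
  web:
    image: nginx
    networks:
      - front-end
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=example
    networks:
      - back-end

networks:
  front-end:
  back-end:
```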

Then, from the directory containing the Compose YAML file, we just need to run:

$docker compose up
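A few related Compose commands that come in handy:

```shell
# Start in the background (detached)
docker compose up -d

# See the state of the services defined in the file
docker compose ps

# Tear everything down again (add -v to also remove volumes)
docker compose down
```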

How can I get hands-on experience with Docker?

It’s easy! While keeping the Docker Docs in an open tab, I strongly recommend that you try to deploy the example Voting app:

This consists of five services that together build a basic web app. The GitHub repo below contains the Docker Compose file, so all you need to do is:

  1. Install Docker Desktop.
  2. Download and run the Compose file in the link:

https://github.com/dockersamples/example-voting-app

  3. That’s it! Your voting app should now be running. Spend some time playing with it, troubleshooting what’s running and what is not, etc. For some basic Docker commands, check the next section.
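Assuming git and Docker Desktop are installed, the steps above boil down to:

```shell
# Clone the sample repository and start the whole stack
git clone https://github.com/dockersamples/example-voting-app
cd example-voting-app
docker compose up -d

# Check which containers came up (the repo's README lists the ports
# where the vote and result pages are served)
docker ps
```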

Essential Docker Commands:

Docker run creates and starts containers, in the example below from the Ubuntu base image (-d runs it detached, in the background):

$docker run -d ubuntu

Docker ps lists containers; by default it shows only running containers, but -a shows all of them.

$docker ps
$docker ps -a

Docker inspect gives us further information about a container in JSON format:

$docker inspect [CONTAINERID]

There are commands for almost everything you can think of: pausing, stopping and removing containers; searching for images, downloading them, deleting them… You can find more information here.
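A few of those everyday commands, sketched with a hypothetical container name:

```shell
docker stop mycontainer          # stop a running container
docker start mycontainer         # start it again
docker rm mycontainer            # remove a stopped container
docker search nginx              # search Docker Hub for images
docker images                    # list locally downloaded images
docker rmi nginx                 # delete a local image
docker exec -it mycontainer bash # open a shell inside a running container
```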

Container Orchestration:

I mentioned that one of the downsides of containers is how complex their deployments can become to manage once they reach a certain size.

Docker Swarm, Mesos, Kubernetes… are all container orchestration technologies. Each of them has its pros and cons, but the most popular at the moment is definitely Kubernetes.

If you would like to know more about Containers and Kubernetes, and you happen to enjoy Greek Mythology, here’s a webcomic from Google that explains all of this from a different perspective.

Further reading:

I am currently preparing a similar entry on Kubernetes and another on Container Security. Nothing thorough, just the basics to point you in the right direction!

In the meantime I strongly recommend you check the references linked in the article, especially the Docker Docs and the Google webcomic.
