Virtualization and Docker Containers Simply Explained by a Junior DevOps Engineer

Armond Holman
8 min read · Oct 8, 2023
Image source: K21academy

In the ever-evolving IT landscape of DevOps, the terms “virtualization” and “Docker” frequently make appearances. These technologies are instrumental in software development, allowing developers to streamline their work, run different operating systems, and create a versatile, efficient development environment. In this article, we’ll break down the concepts of virtualization and Docker, exploring their key differences and use cases.

What is Virtualization?

“Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. Virtual software mimics the functions of physical hardware to run multiple virtual machines simultaneously on a single physical machine.” — AWS

At its core, virtualization is like having a collection of separate machines neatly contained within your main machine. Imagine you’re using Windows but need to run Linux or another OS simultaneously. There are three main ways to achieve this:

  1. Full OS Installation: The first approach is to replace your current OS with the one you need. If you require both Windows and Linux, you have to switch between them and cannot use both simultaneously.
  2. Dual Boot: Another option is to install a second OS alongside your main one. While this gives you access to multiple operating systems, you can only use one at a time; a reboot is necessary to switch between them.
  3. Virtualization Technology: Here’s where virtualization shines. It lets you run another OS, called the guest OS, alongside your host OS (e.g., running Windows on a Mac). The guest OS operates as a separate application coexisting with your primary system.

Advantages of Virtualization

Virtualization offers several advantages:

  1. Cost-Efficiency: It’s more cost-effective than maintaining multiple physical machines.
  2. Easy Recovery and Maintenance: Because it’s software-based, recovering from failures and performing maintenance is easier.
  3. Fast Provisioning: Setting up virtual machines is quick, making it efficient for development.
  4. Enhanced Productivity: Developers can test applications across various operating systems without needing multiple physical devices.

What is a Container?

“A container is an isolated, lightweight silo for running an application on the host operating system. Containers build on top of the host operating system’s kernel (which can be thought of as the buried plumbing of the operating system), and contain only apps and some lightweight operating system APIs and services that run in user mode.” — Microsoft

Containerization is a modern approach to software packaging and deployment that simplifies the process of building, shipping, and running applications. At its core, containerization aims to encapsulate an application, along with its dependencies and runtime environment, into a single, lightweight package known as a container. These containers are isolated from one another and share resources with the host operating system.

How Containerization Works

Imagine a container as a self-sufficient unit that contains everything an application needs to run consistently across different environments. This includes the application code, libraries, runtime, system tools, and settings. Unlike traditional virtualization, which emulates an entire operating system, containerization operates at the application level.

Advantages of Containerization

Containerization offers several advantages:

  1. Consistency: Containers ensure that an application runs consistently, regardless of where it’s deployed, whether on a developer’s laptop, a testing server, or in production.
  2. Isolation: Containers are isolated from each other and from the host system, preventing conflicts between applications and ensuring security.
  3. Portability: Containers can run on any system that supports the containerization platform, making it easy to move applications between environments.
  4. Resource Efficiency: Containers are lightweight, consume fewer resources, and start quickly compared to traditional virtual machines.

Now that we have a fundamental understanding of containerization, let’s explore how Docker, a leading containerization platform, leverages these principles to transform the world of software development and deployment.

What is Docker?

Docker takes virtualization to the next level by introducing containerization. It’s a platform used for containerizing software applications, allowing developers to build applications with their dependencies neatly packaged into containers. These containers can be effortlessly transported to any server, making it an excellent choice for DevOps enthusiasts.

Key Differences Between Docker and Virtualization

  1. Resource Utilization: Virtualization requires you to allocate specific resources (CPU, RAM) for each virtual machine, which can lead to resource wastage. In contrast, Docker containers utilize the host OS resources efficiently, sharing libraries and resources only when necessary.
  2. Image vs. Container: In virtualization, you create complete virtual machine images that include the OS and your application. With Docker, you start with a Docker image (like a recipe) and create multiple containers (dishes) from it. Each container shares the image’s core components but remains lightweight and focused on the application.
  3. Isolation: Virtualization offers full isolation between virtual machines, making them completely separate. Docker provides process-level isolation, meaning containers are isolated but share the host OS kernel.
  4. Size and Performance: Virtual machines tend to be heavyweight, consuming significant disk space and memory. Docker containers are lightweight, starting quickly and utilizing fewer resources.

Suppose you’re developing a web application that relies on specific versions of programming languages, libraries, and databases. With Docker, you can create a container image containing your application and all dependencies. This image can be deployed consistently across development, testing, and production environments, ensuring your app runs the same everywhere.
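As a sketch of what that packaging looks like, a Dockerfile for a hypothetical Python web app might read as follows. The app, file names, and versions are illustrative assumptions, not part of any specific project:

```dockerfile
# A minimal sketch, assuming a Python web app with app.py and requirements.txt.
# All names and versions below are illustrative.

# Pin the runtime version so every environment uses the same interpreter
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Building this file with `docker build` produces an image that carries the exact same interpreter, libraries, and code whether it runs on a laptop, a test server, or in production.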

Practical Usage of Docker

Now that we’ve established what Docker is, let’s explore how to use it:

  • Docker Client: The Docker client is your primary tool for interacting with Docker. It’s the software you use to manage and run containers.
  • Docker Host: Your Docker host runs the Docker daemon, which manages Docker objects like containers and images.
  • Docker Objects: These include Docker images (templates) and Docker containers (individual servings). Images are used to create containers, and you can create multiple containers from a single image.
  • Docker Registry: Think of this as a store for Docker images. Docker Hub is the default registry, providing a vast collection of images. You can also use self-hosted registries like Nexus or Amazon Elastic Container Registry (ECR).

Let’s say you’re part of a development team working on a microservices-based project. Each microservice is contained within a Docker container. Using Docker Compose, you can define and manage the entire application stack, including databases and external services, as code. This makes it easy to replicate the development environment across the team.
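As a minimal sketch of that idea, a `docker-compose.yml` for a two-service stack might look like this; the service names, images, and password are illustrative placeholders:

```yaml
# Hypothetical Compose file for a web service plus its database.
version: "3.8"
services:
  web:
    build: ./web            # built from the web service's own Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: mysql:8.0        # official MySQL image from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql   # persist data across container restarts
volumes:
  db-data:
```

With a file like this checked into the repository, every team member can bring up the same stack with `docker compose up`.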


Understanding Docker Containers and Images

To grasp the essence of Docker, we need to explore the concepts of Docker images and Docker containers. Think of a Docker image as a blueprint, or source code, for creating containers: it’s like the recipe for your favorite dish. A Docker container, on the other hand, is a running instance of that image, much like enjoying a serving of your prepared meal.

In the world of Docker, the Dockerfile is a pivotal component. It serves as the recipe book that guides the creation of Docker images. In simpler terms, you need a Dockerfile to build a Docker image, and without an image, you can’t run a Docker container. The Dockerfile is essentially a text file containing the configuration details of your image.

Now, the fascinating part is that Docker images are highly portable and shareable. They can be distributed among users through various means, such as Docker registries, zip files, cloud services like Google Drive, or dedicated container registries like Amazon ECR. This means you can package your application and share it seamlessly with others.

On the other hand, Docker containers are more like instances of your application. They run based on Docker images but are isolated from one another and typically run on your local system. Think of it as having multiple game installations from a single setup file — you can run many containers from a single Docker image. This flexibility and isolation are at the core of Docker’s power and efficiency.

So, in summary: Docker images are the recipes, Docker containers are the meals created from those recipes, and Dockerfiles are the cooking instructions. Docker images can be easily shared, while Docker containers are like enjoying your prepared dishes on your local system. This understanding lays the foundation for effectively using Docker in DevOps practices.
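The one-image, many-containers idea can be sketched as a short session. This is illustrative only: it assumes Docker is installed and a Dockerfile exists in the current directory, and the image and container names are made up:

```shell
# Build the "recipe" into an image once
docker build -t myapp:1.0 .

# Serve multiple "dishes" (containers) from that single image
docker run -d --name myapp-a myapp:1.0
docker run -d --name myapp-b myapp:1.0

# Both containers appear, each created from the same image
docker ps
```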

How to Run Basic Docker Commands

Here are the basic Docker commands you’ll use when creating a container. We’ll enter them in the local host terminal.

docker --version: Shows the installed Docker version and confirms Docker is up and running.

docker run [IMAGE]: Creates and starts a new container from the specified image.

Docker’s hello-world image is the standard way to verify an installation: running docker run hello-world pulls the image automatically if it isn’t already present, even if you previously deleted it.

Suppose you want to run a container with a MySQL application. You would type docker pull mysql.

docker pull [IMAGE]: Downloads an image from a registry into your local Docker installation.

Here we are pulling the MySQL image so we can run it. Once the pull completes, your machine has everything needed to run a full MySQL installation inside a container.

docker images: Lists the images that have been downloaded to your Docker host.

docker run mysql: Creates and starts a new MySQL container from the pulled image.

docker ps (or docker container ls): Lists the running containers on your system.

docker rm [CONTAINER_NAME]: Deletes a stopped container from your system.

docker stop [CONTAINER_NAME]: Stops a running container on your system.

docker pause [CONTAINER_NAME]: Pauses the processes within a running container.

exit: Exits an interactive shell session inside a container.
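Putting the commands above together, a typical session for the MySQL example might look like this. It assumes a running Docker daemon; the container name and root password are placeholders:

```shell
docker --version                        # confirm Docker is installed and running
docker pull mysql                       # download the MySQL image from Docker Hub
docker images                           # the mysql image should now be listed
docker run -d --name my-db \
  -e MYSQL_ROOT_PASSWORD=secret mysql   # create and start a MySQL container
docker ps                               # my-db should show as running
docker pause my-db                      # freeze the container's processes
docker unpause my-db                    # resume them
docker stop my-db                       # stop the container
docker rm my-db                         # remove it from the system
```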

Virtualization and containerization are two powerful technologies that have revolutionized software deployment and management. Virtualization enables the efficient utilization of physical hardware resources, allowing multiple operating systems to run independently on a single server. Containerization tools such as Docker, on the other hand, offer a lightweight, portable solution for packaging applications and their dependencies, ensuring consistency across various environments.

In the next article, I will show how Dockerfiles help streamline software installation and configuration for dev teams, making it easier to build and share images with Docker.
