Understanding the Difference Between Virtualization and Containerization

Augustine Tetteh Ozor
8 min read · Jun 11, 2023


Credit: https://www.bmc.com/blogs/containers-vs-virtual-machines/

Virtualization and Docker are both technologies used for deploying and managing software applications, but they have different approaches and purposes. Here are the key differences between virtualization and Docker:

Virtualization
> Virtualization creates virtual machines (VMs) that mimic the behavior of physical machines. It allows multiple operating systems to run simultaneously on a single physical server.
> Each virtual machine runs its own guest operating system, which requires separate resources, including memory, disk space, and CPU.
> Virtualization provides complete isolation between virtual machines, ensuring that applications and operating systems do not interfere with each other.
> Hypervisors are used to manage and allocate the physical resources to virtual machines.

Image from https://www.altexsoft.com/blog/docker-pros-and-cons/

Docker (Containerization)
> Docker is an open-source platform for containerization, which allows applications to be packaged along with their dependencies and run in isolated containers.
> Containers are lightweight and share the host machine’s operating system kernel, eliminating the need for separate guest operating systems.
> Docker containers are portable and can be easily deployed across different environments, such as development, testing, and production.
> Docker uses containerization technology to provide resource isolation and security for applications, while sharing the host system’s resources efficiently.
> Docker containers are based on Docker images, which are read-only templates containing the application and its dependencies.

Advantages of using Docker over virtualization:
> Efficiency: Docker containers are lightweight and consume fewer resources compared to virtual machines, allowing for better utilization of system resources and higher application density on a single host.
> Portability: Docker containers provide consistent behavior across different environments, making it easy to deploy applications on any platform or infrastructure, from local development environments to cloud servers.
> Rapid Deployment: Docker containers can be created and started quickly, enabling fast deployment and scaling of applications. The containerized application can be distributed as a single unit, simplifying deployment processes.
> Isolation: Docker containers provide application-level isolation, ensuring that each container runs independently and does not affect other containers or the host system. This isolation helps in maintaining system stability and security.
> Versioning and Rollbacks: Docker images and containers allow for easy versioning and rollbacks. By tagging and managing different versions of images, it becomes straightforward to revert to a previous version if issues arise.
> Ecosystem and Tooling: Docker has a large ecosystem of tools and services built around it, such as Docker Compose, Docker Swarm, and Kubernetes, providing robust container orchestration and management capabilities.

Credit: https://learnkarts.com/blog/what-is-docker-container-architecture/

Docker Architecture:

— Docker Daemon: The Docker daemon is a background process that runs on the host system and manages Docker containers, images, networks, and storage.
— Docker Client: The Docker client is a command-line tool or a graphical user interface that allows users to interact with the Docker daemon.
— Docker Images: Docker images are read-only templates that contain the application code, dependencies, and runtime environment required to run an application.
— Docker Containers: Docker containers are lightweight, isolated, and executable instances created from Docker images. They run applications with their own filesystem, processes, network interfaces, and resource allocations.
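
To see the client and daemon from the list above in action, two quick commands (available in any standard Docker installation; exact output varies by version):

docker version   # prints the client version and the daemon (server) version separately
docker info      # queries the daemon for containers, images, storage driver, and more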

Building Docker Images:
— Dockerfile: A Dockerfile is a text file that contains instructions to build a Docker image. It specifies the base image, sets up the environment, installs dependencies, and copies the application code into the image.
— Docker Build: The Docker build command reads the Dockerfile and executes the instructions to create a Docker image.
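
As a minimal sketch of the build step (myapp:1.0 is an arbitrary example tag; the trailing dot makes the current directory the build context):

docker build -t myapp:1.0 .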

Running Docker Containers:
— Docker Run: The Docker run command creates and starts a Docker container from a Docker image. It allocates necessary resources, sets up the container’s network, and runs the specified command or the default command defined in the Docker image.
— Container Isolation: Docker containers are isolated from each other and from the host system, providing application-level isolation and preventing interference between containers.
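
For example, a single docker run invocation (the container name web is an arbitrary choice; nginx:latest is just a convenient public image):

# Create and start a detached container, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx:latest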

Managing Docker Containers:
— Docker CLI: The Docker command-line interface (CLI) provides a set of commands to manage Docker containers, such as starting, stopping, restarting, and removing containers.
— Docker Compose: Docker Compose is a tool that allows the definition and management of multi-container applications. It uses a YAML file to define the services, networks, and volumes required for the application.

Example docker-compose.yml:

version: '3'
services:
  web:
    build:
      context: ./webapp
      dockerfile: Dockerfile
    ports:
      - 8080:80
    volumes:
      - ./webapp:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypassword
    volumes:
      - ./data:/var/lib/mysql
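
To bring this stack up, run the commands below from the directory containing the file (newer Docker releases bundle Compose as `docker compose`; older setups use the standalone `docker-compose` binary):

docker compose up -d    # build/pull the images and start both services in the background
docker compose ps       # list the running services
docker compose down     # stop and remove the containers and the default network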

— Container Lifecycle: Docker containers can be started, paused, stopped, and restarted. Containers can also be scaled horizontally by creating multiple instances of the same container image.
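
Each lifecycle state maps to a CLI command; a short sketch, reusing the example web container from above:

docker pause web     # freeze every process in the container
docker unpause web   # resume them
docker stop web      # send SIGTERM, then SIGKILL after a grace period
docker start web     # start the stopped container again
docker restart web   # stop and start in one step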

Docker Registry:
— Docker Hub: Docker Hub is a public registry where users can find and share Docker images. It provides a vast collection of official and community-driven images that can be used as a base for building custom images.
— Private Registries: Docker also supports private registries, allowing organizations to host their own Docker images securely.
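
A typical publish workflow looks like the following (myapp:1.0 and the registry host registry.example.com are hypothetical placeholders; pushing to a private registry requires a prior docker login):

docker pull mysql:latest                              # download an image from Docker Hub
docker tag myapp:1.0 registry.example.com/myapp:1.0   # retag a local image for the private registry
docker push registry.example.com/myapp:1.0            # upload it to the registry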

Container Orchestration:

— Docker Swarm: Docker Swarm is a native clustering and orchestration solution provided by Docker. It enables the management and scaling of a cluster of Docker nodes to deploy and manage distributed applications.
— Kubernetes: Kubernetes is a popular container orchestration platform that can be integrated with Docker. It provides advanced features for container management, scaling, load balancing, and high availability.
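
As a small taste of the Swarm workflow described above (run on the node that should become the manager; the service name web and the replica count are arbitrary examples):

docker swarm init                          # turn this node into a Swarm manager
docker service create --name web --replicas 3 -p 8080:80 nginx:latest
docker service ls                          # show each service and its replica status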

A Dockerfile is a text file that contains a set of instructions used to build a Docker image. It serves as a blueprint for creating a reproducible and self-contained environment for running applications within Docker containers. The Dockerfile specifies the base image to use, sets up the environment, installs dependencies, copies application code into the image, and defines the commands to run when a container is started.

Here is a breakdown of the basic elements and syntax used in a Dockerfile: 👇

1. Base Image:
* The Dockerfile begins by specifying the base image that forms the starting point for the new image. It provides the foundation of the environment.
- Example: `FROM node:14`

2. Working Directory:
* The `WORKDIR` instruction sets the working directory inside the container where subsequent instructions will be executed.
- Example: `WORKDIR /app`

3. Copying Files:
* The `COPY` instruction copies files or directories from the host machine to the container's filesystem.
- Example: `COPY package*.json ./`

4. Installing Dependencies:
* The Dockerfile can include instructions to install any necessary dependencies for the application using package managers like `npm`, `apt-get`, or `pip`.
- Example: `RUN npm install`

5. Running Commands:
* The `RUN` instruction executes commands within the container at build time, allowing you to perform setup tasks or install additional software.
- Example: `RUN echo "Building the application"`

6. Exposing Ports:
* The `EXPOSE` instruction documents the ports that the container will listen on at runtime. It does not actually publish the ports.
- Example: `EXPOSE 8080`

7. Environment Variables:
* The `ENV` instruction sets environment variables within the container.
- Example: `ENV NODE_ENV=production`

8. Starting the Application:
* The `CMD` instruction specifies the command to run when a container is started from the image.
- Example: `CMD ["node", "app.js"]`

How to Write a Dockerfile:
> Create a text file named “Dockerfile” (without any file extension) in your project directory.
> Start with a suitable base image depending on your application requirements.
> Add the necessary instructions, such as setting the working directory, copying files, installing dependencies, and configuring the environment.
> Customize the Dockerfile based on your application’s specific needs.
> Save the Dockerfile.

Once the Dockerfile is ready, here are the steps to build and run Docker images and containers:

1. Install Docker:
— Install Docker on your machine by following the official Docker documentation for your operating system.

Ubuntu Example:

> sudo apt update -y
> sudo apt install docker.io -y
> sudo systemctl start docker
> sudo systemctl enable docker
> sudo systemctl status docker

>> Add the ubuntu user to the docker group so it can talk to the Docker daemon without sudo.
** The docker group is created automatically when Docker is installed.

> sudo usermod -aG docker ubuntu
> exit

Logging out and back in makes the new group membership take effect.

SSH into the machine again, then verify that ubuntu now belongs to the docker group:

> id
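
As a quick smoke test, run the official hello-world image; if the daemon is reachable and the group change took effect, it prints a "Hello from Docker!" message:

> docker run hello-world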

2. Create a Dockerfile:
— Create a text file named “Dockerfile” in your project directory.
— Specify the base image, set up the environment, install dependencies, and copy the application code into the image.
— Here’s an example Dockerfile for a simple Node.js application:

Dockerfile
# Use the official Node.js image as the base image
FROM node:14
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the application code to the working directory
COPY . .
# Document the port the application listens on (used in step 4 below)
EXPOSE 8080
# Specify the command to run when a container is started
CMD ["node", "app.js"]

3. Build the Docker Image:
— Open a terminal and navigate to your project directory.
— Run the following command to build the Docker image:

— Replace “your-image-name” with a name of your choice, and don’t forget the dot at the end (to specify the current directory as the build context).

docker build -t your-image-name .
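
To confirm the build succeeded, list the local images; the newly built image should appear in the list:

docker images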

4. Run the Docker Container:
— Once the image is built, you can run a container from it.
— Run the following command to start a container from the image:

— Replace “your-container-name” with a name of your choice and “your-image-name” with the name you specified during the image build process.
— The “-d” flag runs the container in detached mode (in the background), and the “-p” flag maps a host port to a container port (e.g., 8080:8080).

docker run -d -p 8080:8080 --name your-container-name your-image-name
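
Two quick checks after starting the container (docker ps lists running containers; docker logs shows the application's output):

docker ps
docker logs your-container-name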

5. Access the Application:
— If your application exposes a web server on port 8080 (as per the example Dockerfile), you can access it by opening a web browser and visiting

http://localhost:8080

or by using the server’s IP address and the mapped port, for example:

http://44.25.15.205:8080

— Make sure the container is running and the mapped port is not in use by any other process.
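
A quick command-line check works as well, assuming curl is installed on the host:

curl http://localhost:8080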

#Docker #Kubernetes #Virtualization #Dockerfile

😃 💻 ☁️ 👏
