Evolution from Physical Servers to Containers: Container, Docker, and Beyond — EP-1

Pravallikaperavali · 6 min read · Feb 16, 2024


Before we jump into containers, let’s rewind and see how we used to handle applications.

In the good old days, applications were deployed directly onto physical servers. These servers acted as dedicated hosts, often underutilized and inflexible to scale. Handling physical servers came with the following issues.

Traditional Application Deployment:

  • No isolation of resources: Applications were deployed directly onto physical servers, with each server hosting one or a few applications, so overutilization by one app could crash the entire system.
  • Drawbacks: Poor resource utilization, difficulty in scaling, long downtimes, and high management overhead and cost.
Traditional Physical servers

Then, in came Virtual Machines (VMs), acting as a lifeline by allowing multiple virtual instances on a single physical server, solving the resource efficiency puzzle.

Virtualization:

Now that we have seen how applications were deployed on physical servers, let’s see how virtual machines improved the situation.

Virtualization
  • The “Host Machine” runs software called a “Hypervisor”, which is responsible for creating and managing virtual machines.
  • Each virtual machine (like “VM 1” and “VM 2”) is a complete, independent instance with its own full operating system (Guest OS) and dependencies (app binaries and libraries).
  • The hypervisor provides isolation between VMs, ensuring that they operate independently of each other.

However, VMs are bulky: each one includes an entire operating system and its dependencies (app binaries and libraries), taking up GBs of space. This leads to greater resource consumption, slower deployment times, and a slow boot-up process compared to the lightweight, fast-starting containers that are the hero of today’s blog.

What is a Container?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Containers

• Containers are lightweight (MBs).

• Highly portable, scalable, and less expensive.

• Applications are not fully isolated, so security is a concern.

Security Challenges in Containerized Environments:

Containers use technologies called namespaces and control groups (cgroups) to provide isolation for applications. Each container has its own isolated file system, process space, network, and user ID space. However, containers share the host operating system (OS) kernel. While this shared kernel makes them efficient and lightweight, it can be a concern if there are vulnerabilities in the kernel.
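To make the namespace idea concrete, here is a minimal Python sketch (Linux-only, and it typically needs root) that calls the unshare(2) syscall through ctypes to give the current process its own UTS namespace, so changing the hostname no longer affects the host. This is a toy illustration of one namespace type under those assumptions, not how a container runtime is actually built.

```python
# Minimal sketch of UTS-namespace isolation via the Linux unshare(2) syscall.
# Linux-only; typically requires root (or CAP_SYS_ADMIN).
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # flag value from <linux/sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

print("hostname before:", socket.gethostname())

# Detach this process into its own UTS (hostname) namespace.
if libc.unshare(CLONE_NEWUTS) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

# The new hostname is visible only inside this namespace, not on the host.
name = b"container-demo"
libc.sethostname(name, len(name))
print("hostname inside namespace:", socket.gethostname())
```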

Despite sharing the host OS kernel, containers are chosen for their resource efficiency and portability. Container security is multifaceted: containers can be robust when best practices are followed, but vulnerabilities in the shared kernel should be monitored and addressed through regular updates, proper access controls, and network segmentation.
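As a hedged example of what some of those best practices look like in code, the sketch below uses the Docker SDK for Python (installed with pip install docker, and assuming a running Docker daemon) to start a container as a non-root user, with a read-only filesystem, all Linux capabilities dropped, and a memory cap. The image tag and limits are illustrative choices, not recommendations.

```python
# Sketch: running a hardened container with the Docker SDK for Python.
import docker

client = docker.from_env()  # connect to the local Docker daemon

output = client.containers.run(
    "alpine:3.19",       # small, regularly updated base image (illustrative tag)
    ["id"],              # print the effective user inside the container
    user="1000:1000",    # run as a non-root user
    read_only=True,      # mount the container's filesystem read-only
    cap_drop=["ALL"],    # drop all Linux capabilities
    mem_limit="128m",    # cap memory so one container can't starve the host
    remove=True,         # delete the container after it exits
)
print(output.decode())   # expect something like: uid=1000 gid=1000
```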

Now, where does the term Docker come in?

Let’s see: what is Docker?

Docker is a platform that enables the creation, deployment, and management of containers; it is an open-source project originally based on Linux containers. It provides the tools and a runtime environment for packaging applications and their dependencies into containers, making it easier to build, ship, and run applications consistently across various environments.

In essence, Docker is a technology that facilitates the creation, deployment, and management of containers, simplifying the development and deployment of software applications.

OK! Let’s make it easy to remember:

A container is a bundle of an application, the libraries required to run it, and the minimum system dependencies, while Docker is a platform that lets us create, deploy, and manage containers.
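To ground that one-liner, here is a minimal sketch of creating, deploying, and managing a container with the Docker SDK for Python (pip install docker). The myapp:latest tag and the Dockerfile in the current directory are hypothetical; the Docker CLI equivalents would be docker build, docker run, docker ps, and docker stop.

```python
# Sketch: create, deploy, and manage a container with the Docker SDK for Python.
import docker

client = docker.from_env()

# Create: build an image from a Dockerfile in the current directory.
image, _build_logs = client.images.build(path=".", tag="myapp:latest")

# Deploy: start a container from that image in the background.
container = client.containers.run("myapp:latest", detach=True)
print("started:", container.short_id)

# Manage: list running containers, then stop and remove ours.
print("running:", [c.name for c in client.containers.list()])
container.stop()
container.remove()
```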

Containers vs. Virtual Machines:

Comparison between Virtual Machines and Containers

Containers and virtual machines are both technologies used to isolate applications and their dependencies, but they have some key differences:

1. Resource Utilization:

• Containers are lighter and faster because they share the host OS kernel.

• VMs are more resource-intensive, each running a full OS on top of a hypervisor.

2. Portability:

• Containers are portable, running on any system with a compatible OS. This is the feature designed to address the humorous “it works on my machine” problem.

• VMs are less portable, needing a compatible hypervisor to function.

3. Security:

• VMs offer higher security, with an isolated operating system for each VM.

• Containers provide less isolation, sharing the host OS kernel, potentially making them less secure.

Containers are lightweight in nature:

Containers are lightweight because they use a technology called containerization, which allows them to share the host operating system’s kernel and libraries, while still providing isolation for the application and its dependencies. This results in a smaller footprint compared to traditional virtual machines, as the containers do not need to include a full operating system. Additionally, Docker containers are designed to be minimal, only including what is necessary for the application to run, further reducing their size.
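One way to see this footprint difference for yourself is to compare image sizes. The sketch below does so with the Docker SDK for Python, assuming Docker is running and the images can be pulled; the exact numbers vary by tag and platform.

```python
# Sketch: comparing container image sizes with the Docker SDK for Python.
import docker

client = docker.from_env()

for name in ["alpine:latest", "ubuntu:latest"]:
    image = client.images.pull(name)
    size_mb = image.attrs["Size"] / (1024 * 1024)
    print(f"{name}: {size_mb:.1f} MB")

# A minimal Alpine image is only a few MB; a VM disk image carrying a full
# guest OS is typically several GB.
```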

Why do Containers boot up quickly?

Containers generally boot up faster than virtual machines. Their quick startup comes from their lightweight nature: they share the host OS kernel and skip initializing all the components, services, and processes (described below) involved in booting a complete operating system, which is a process VMs typically go through. Containers can start almost instantly, providing faster deployment and responsiveness than the comparatively slow boot time of VMs; a rough timing sketch follows the list below.

  • Components: Software and hardware elements in the operating system, like device drivers and kernel modules.
  • Services: Various background services that perform specific tasks, such as networking and system logging, are started during boot.
  • Processes: Operating systems manage many individual programs and tasks, initializing the essential ones during boot that are needed for proper system and application functioning.
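Here is that rough, hedged timing sketch using the Docker SDK for Python: it runs a container that exits immediately and measures the wall-clock time, which on a typical machine is well under a second, versus the tens of seconds a full VM needs to boot. Pulling the image ahead of time keeps download time out of the measurement.

```python
# Sketch: timing how quickly a container starts, runs, and exits.
import time
import docker

client = docker.from_env()
client.images.pull("alpine:latest")  # pull first so we time startup, not download

start = time.perf_counter()
client.containers.run("alpine:latest", ["true"], remove=True)  # exits immediately
elapsed = time.perf_counter() - start

print(f"container started, ran, and exited in {elapsed:.2f} s")
```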

Conclusion: Wrapping Up the Container Tale

So, to sum it all up, we’ve traveled through time — from old-school physical servers with their limitations to the modern era of Docker containers. Let’s remember the key points:

Old Struggles: Physical servers had issues like not using resources efficiently and being tricky to scale.

Virtual Machines to the Rescue: Virtual Machines (VMs) came in, letting us run multiple instances on one server, solving resource problems.

Enter Containers: Docker containers stepped onto the stage, offering a lighter solution by packaging apps with what they need, making deployment quick and efficient.

Docker’s Job: Docker, our guide in this container world, makes creating, deploying, and managing containers a breeze.

Containers vs. VMs: Comparing the two, containers shine in using resources well, being portable, and starting up super-fast.

Light in Nature: Containers stay lightweight by including only what’s essential, leaving the hefty baggage behind.

Fast Boot-up: The secret to containers starting up quickly lies in their simplicity: they share the host’s kernel instead of booting a full operating system, skipping the slow startup process.

Meet you in EP-2. STAY TUNED
