From Physical Servers to VMs to Docker Containers. What’s Next?

Moneer Rifai
4 min read · Apr 20, 2017


Where are we heading in this fast-moving evolution of technology?

Let’s take a quick look back at the last few years of computing evolution and think about where we might go next.

If you have been around the IT industry for more than a few years, you are well aware of how fast things change. No doubt, it is much easier to spin up servers and deploy applications than it was half a dozen years ago. Not only is it faster, it is also cheaper, more reliable, and more efficient. Let’s do a quick rundown of this short history of computing.

Physical Computing — AKA “The Dark Ages”

In the not-so-distant past, if you or your company wanted to deploy an application or run a service, you had to go out and buy an actual physical server. I remember doing exactly this as recently as 6 years ago. If a customer wanted to set up an internal application, a SharePoint site for example, I would spec out what they needed and have our purchasing department order the hardware. Four to eight weeks later the hardware would arrive, and I would assemble it, install the OS, run the updates, and install all the libraries and components the application required. Only then could I start working on the actual application the customer had hired us to build.

Photo credit: KN6KS via Visual Hunt / CC BY-NC

It was a frustrating, expensive, and time-consuming process. Scaling was difficult, because it typically entailed buying more hardware, and efficiency was non-existent.

Virtualization to the rescue — AKA “The Renaissance”

Things changed when virtualization became popular, with the advent of tools like VMware and Hyper-V. Organizations could provision resources much faster and allocate them according to the needs of their applications. Virtualization also meant that resources could be added, removed, or scaled as those needs changed. It was a complete game-changer.

That said, as the pace of application development increased, deploying applications on virtual machines proved lacking in several ways. For one, virtualization still carried the overhead of the hypervisor and a full guest operating system per VM. You still had to work through a long list of software installs and updates. And what worked on one VM on a specific hypervisor might not work elsewhere, which kept applications from being truly “portable”.

Containerization — AKA “The Modern Era”

Photo credit: Derell Licht via VisualHunt / CC BY-ND

Container technology, led by companies like Docker, alleviates most of the limitations mentioned above. A container, much like a shipping container, standardizes the process: it takes an application and everything that application needs to run and wraps it in a single unit. These “contained” applications can then run in any environment, regardless of the underlying infrastructure. Containers are portable and lightweight, and they are efficient because they share the host’s kernel and resources, whereas each VM requires an entire guest operating system to itself.
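To make the “single unit” idea concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python service; the app.py file, requirements.txt, and the base image tag are stand-ins, not anything from a real project:

```
# Start from a slim Python base image (assumed tag; pick what your app needs)
FROM python:3.6-slim

# Copy the application and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The resulting image runs the same way on any host with a container runtime
CMD ["python", "app.py"]
```

Building the image with docker build -t myapp . and starting it with docker run myapp gives you the same environment on a laptop, a VM, or a cloud host.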

The power of containers is in standardization. I like to think of it in terms of shipping containers. If I wanted to import a product from Asia, all I would have to do is source the product in whatever quantity I want, find a shipping container, and fit my product into it. The next step would be to find a shipping carrier and tell them where I want that container delivered, and it would be transported alongside many other containers of exactly the same size, albeit with different contents.

It is important to note that containers still need physical infrastructure to run on, but with the availability of cloud computing services like AWS and Azure, one no longer needs to worry about managing hardware.

What Comes After Containerization?

Containerization is hot right now, but it would be short-sighted to assume the evolution stops here. I am sure that one day we will write about how inefficient and time-consuming containerization was!

If I had to venture a guess, I would say that serverless computing will be the next big thing.

With serverless, you no longer have to worry about the underlying resources at all. Amazon is leading the field with its Lambda service. Lambda lets you run your code without provisioning or managing any of the underlying infrastructure. The deployment model becomes simply “deploy your code,” not “deploy VMs or containers.” You can instruct Lambda to run your code in response to certain events, as in “when this HTTP request arrives, change this value in the database table or modify this object in S3.” As a result, you can build entire applications without provisioning resources.
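As a rough sketch of that event-driven model, here is what such a function might look like in Python; the orders table and the event fields are hypothetical, used only to illustrate the shape of a handler:

```python
import json

import boto3

# A hypothetical DynamoDB table that the function has permission to write to
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")

def lambda_handler(event, context):
    # Lambda invokes this function in response to an event, such as an
    # HTTP request routed through API Gateway. There is no server to manage.
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"order_id": body["id"], "status": "received"})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

You pay only when the function actually runs, and scaling is handled for you; there is no instance or container to patch.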

Amazon seems to be investing heavily in this serverless model, and other cloud providers are following suit. It will be very interesting to see how this plays out over the next few years.

Originally published at moneerrifai.com.
