Kubernetes, Docker and containerisation are some of the buzzwords in today’s DevOps world.

If you are starting a new journey as a DevOps engineer, or if you are interested in exploring the “Cloud Native” world, then this might be a good starting read for you — “Kubernetes in a Nutshell”

From the Kubernetes documentation, we find the following definition:

“Kubernetes is a portable, extensible open-source platform for managing containerised workloads and services, that facilitates both declarative configuration and automation.”

If you’re truly new to the cloud native environment, this might have perplexed you. Fear not, we’ll try to break it down throughout this article.

But first, we need to know what containerisation means and how it helps with today’s large-scale application development and deployment in the cloud.

Below are two simple examples of the architecture of a traditionally hosted application.

Figure 1

For each one, we have a host machine running a particular operating system.

Then we can choose either of the two:

  1. Have a hypervisor, which facilitates running multiple VMs (Virtual Machines). This is required if we want some kind of isolation between the applications.
  2. The other is the simplest way of doing things: we design our app to run on a specific host and deploy it on the host machine as is. This requires careful app design, as resources and dependencies are shared between all the apps.

After this is done, we expose our APIs to the outside world so that everyone can interact with our application.

Simple, right?

Well, maybe. You see, it’s all easier said than done, because it assumes your local environment is on par with the environment inside the VMs or on the host machine. But this is not always the case.
Application development, simple or complex, comes with a rat’s nest of dependencies, libraries and packages.

We try and test out multiple things in our local system, where the configuration is very different from that of our hosted VMs. Also, for large applications, we might not have a single production VM; instead, different VMs are set up for different environments, like dev, stage, prod, etc.

Many a time we find our code works perfectly fine in development but breaks as soon as it is deployed!

Keeping all the environments in sync, i.e. keeping the application components updated or changing them to address business needs, becomes a huge maintenance overhead.

Containerisation to the rescue!

What containerisation does is provide OS-level virtualisation. Meaning, multiple applications can run on a shared OS, each with its own independent environment: dependencies, libraries and all the runtime components needed to run the desired application.

Containers are lightweight, often just a few megabytes, unlike their VM counterparts, which may run to several gigabytes. This is because a VM virtualises a whole computer system, whereas containers share the host OS and virtualise only at the OS level.

Figure 2

We package the application module along with all its dependencies, binaries and libraries as an image, which can then be deployed to our hosted server or even to our local machine! Doing this guarantees that the environment within a container is the same on every system (server) where it is deployed.

Containerisation also solves another problem for us. Rather than building the entire backend and/or frontend as one complete system, we can break our software application down into smaller, more task-specific components.

For example, in an e-commerce application, we can package authentication and authorisation in one container, and likewise the shopping cart, transaction system, feed, etc. as separate services, each packaged in its own container. This encourages a microservice-based design approach for our large application.
We can individually develop, test and deploy these services without breaking the whole app!

Implementing containerisation from scratch can be complex. So rather than re-inventing the wheel, we can use software platforms that provide it out of the box.

Docker is one such software platform; it helps us create, deploy and run applications using containers. It is the “Container Manager” in figure 2 and supports containerisation.
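As a hedged sketch of what that packaging looks like in practice, here is a minimal Dockerfile for a hypothetical Node.js service; the base image, file names and port are illustrative assumptions, not taken from any real project:

```dockerfile
# Minimal Dockerfile for a hypothetical Node.js service.
# Base image, file names and port are illustrative assumptions.
FROM node:18-alpine

WORKDIR /app

# Copy the dependency manifest first so the install layer is cached
COPY package*.json ./
RUN npm install --production

# Copy the application source into the image
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Building and running it would then look something like `docker build -t shop-auth:1.0 .` followed by `docker run -d -p 3000:3000 shop-auth:1.0` (the image name is made up for the example).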

Now, it all sounds great to have microservices building our large-scale e-commerce app, but what if one of the containers fails? Or during a festive season there is a sudden increase in traffic, and our hosted server gets overloaded and slows down, or worse, stops completely?

Failures, network issues and spikes in traffic are some of the hurdles that come complimentary with a cloud native environment.

Here we need something to maintain the state of our containers: spinning up new ones if they are destroyed, load balancing the traffic, maintaining replicas and performing updates, to name a few. We call this container orchestration.

Kubernetes does this orchestration part for us!
Now re-read the Kubernetes definition from above:

“Kubernetes is a portable, extensible open-source platform for managing containerised workloads and services, that facilitates both declarative configuration and automation.”

It is not so perplexing now, I assume.

  1. It is open-source (originally developed at Google, now maintained by the CNCF).
  2. It manages containerised workloads and services (orchestrates the containers).
  3. It follows a declarative, configuration-based approach with automation, so little or no coding skill is required!
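To make that configuration-based approach concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the names, image and replica count are hypothetical, invented for this example:

```yaml
# deployment.yaml: a minimal Deployment (names and image are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 2                  # desired state: keep 2 copies running
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: shopping-cart
          image: shop-cart:1.0       # hypothetical image name
          ports:
            - containerPort: 3000
```

We declare the desired state in YAML and apply it with `kubectl apply -f deployment.yaml`; Kubernetes then continuously works to keep the cluster matching that state, which is exactly the declarative configuration the definition talks about.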

Imagine our e-commerce app having 5 microservices, each deployed in a container inside a single node (hosted server).
To tolerate a node failure, say due to a network outage, we multiply the nodes by, say, 3. Now we have a total of 3 x 5 = 15 containers running, distributed evenly across the 3 nodes. We now have a cluster of nodes!

How can we manage this setup? By managing, let’s say we want to update our shopping cart module, scale one of the containers up to 2 replicas, or add a completely new container designed for a new feature.
To maintain the cluster state, we want all 3 nodes to receive these changes (container updates) once they are rolled out.

Performing these changes means performing a state change in our cluster, and every node inside the cluster should be in sync. Not to mention, we also want minimal or no downtime in our application services during this transition; otherwise, thousands of users might get upset!
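Assuming a Deployment named `shopping-cart` (a hypothetical name used only for this sketch), such a rolling state change can be driven with a few kubectl commands:

```shell
# Point the Deployment at a new image version; Kubernetes replaces
# containers gradually, so the service stays up during the rollout.
kubectl set image deployment/shopping-cart shopping-cart=shop-cart:1.1

# Watch the rollout progress across the cluster
kubectl rollout status deployment/shopping-cart

# Scale the service up to 2 replicas
kubectl scale deployment/shopping-cart --replicas=2

# If something breaks, roll back to the previous version
kubectl rollout undo deployment/shopping-cart
```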

Kubernetes helps us perform this cluster management. It is a tool that automates the deployment, management, networking, scaling and availability of container-based applications.

There are multiple use cases of Kubernetes that make it one of the preferred platforms for setting up a cloud environment for large-scale applications. Maybe I’ll leave those for future reads, but I hope this gave you a brief idea of Kubernetes and containerisation. If so, why not try things out and explore the power of Kubernetes by spinning up a single-node cluster on your local setup?
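One way to do that locally is a sketch like the following, assuming you have minikube and kubectl installed (the sample image is the one used in the minikube quick start):

```shell
# Start a single-node local cluster
minikube start

# Verify the node is up
kubectl get nodes

# Deploy a sample app and expose it outside the cluster
kubectl create deployment hello --image=kicbase/echo-server:1.0
kubectl expose deployment hello --type=NodePort --port=8080
minikube service hello
```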

Sounds interesting? Let me know, and to help get you started, do give the official Kubernetes documentation a read and make your apps cloud ready.

Build something great. Go Cloud Native!