What Kubernetes Is, and When & Where We Use It

Niluka Sripali Monnankulama
Published in Many Minds
6 min read · Feb 12, 2020

Hope you know about Docker and containers, so let's take it to the next level.
This is where Kubernetes comes into the picture. Let's see
what Kubernetes is, and when and where we might want to use it.

So what is Kubernetes?

Before we get into exactly what Kubernetes is, let's see when you might want to use it.

You're in a situation where you've been using Docker for a little while, or you at least know what Docker is.

And maybe you have deployed containers on a few different servers.

And this is great.

We used Docker Compose in the past to actually manage these kinds of deployments, and they're pretty simple for something that's really small.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

But then your website or application grows up and drives a lot of traffic your way, and you need to scale up really fast.

How are you going to go from the three servers that you have now to 40 or 50 servers?

How are you going to go past the handful of servers that you can keep track of in your mind when your business really needs to scale?

In this situation, where would you have put a specific container if you need to work with it? And how are you deciding which containers go where?

This is where Kubernetes comes in.

Kubernetes is a platform for working with containers, not specifically Docker.

Just containers in general; you can use alternatives to Docker to manage the containers too. But Kubernetes gives you a few key things. There's more going on under the hood, and it's a platform you can build on and extend, but at its core what Kubernetes gives you is a means to do Deployments, an easy way to Scale, and Monitoring and Recovery.

Let's take a look at how we actually do this with Kubernetes.

Deployments

So in Kubernetes, you have a master node, and this is part of a cluster.

It knows of the other servers that you create, which you can then deploy containers to. The actual process of deployment is pretty simple too.

So you talk to Kubernetes and tell it:

  1. What kind of image you would like to create a container from.
  2. Some other criteria, and it creates what's called a Deployment.

So that would be your application in this case, and your deployment can specify a bunch of different things:

  1. You can specify that you need a certain number of CPUs
  2. a certain amount of RAM
  3. a specified amount of file storage.

All of these things are held inside your deployment, and Kubernetes will keep track of them for you. It is not like when we use Docker Compose to do a deployment, where we just push a container out to run. A deployment is something that keeps on going: it has a deployment controller that watches over it.
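As a sketch of what that looks like in practice, here is a minimal Deployment manifest. The name, image, and resource numbers are illustrative, not from any real setup:

```yaml
# deployment.yaml -- a minimal, illustrative Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.17    # the image to create containers from
          resources:
            requests:
              cpu: "250m"      # a certain amount of CPU
              memory: "64Mi"   # a certain amount of RAM
```

You would hand this to the cluster with `kubectl apply -f deployment.yaml`, and the deployment controller takes it from there.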

Monitoring

If your application goes down, Kubernetes is going to know about it.

Auto Recover

Kubernetes is going to try everything it can to auto-heal, so it will spin another container up and recover by itself, because the deployment is not just about that initial launch of the container; it's something much bigger in Kubernetes.
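One common way to tell Kubernetes how to check whether a container is healthy is a liveness probe. This snippet is a sketch; the endpoint path and port are made-up examples:

```yaml
# Fragment of a container spec inside a Pod template: if this HTTP
# check fails repeatedly, Kubernetes restarts the container for you.
livenessProbe:
  httpGet:
    path: /healthz          # hypothetical health-check endpoint
    port: 8080
  initialDelaySeconds: 5    # give the app time to start first
  periodSeconds: 10         # check every 10 seconds
  failureThreshold: 3       # restart after 3 consecutive failures
```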

Scaling

So I mentioned before that you're getting more traffic than you were expecting and you need to scale your application.

Naive Scaling ➡️

So the naive way to scale would be: "I'm going to deploy one app container per server." But that's not necessarily useful in a lot of cases.

Sometimes you're going to be in a situation where that's not the most efficient use of your resources. This is what I would call naive scaling, and it's a fine way to do it when you have a few application servers. But if you want to keep costs down, you can't just spin up a server every time you need to deploy a single container, unless by chance you're building a server that is exactly the right size for it. The way that scaling works in Kubernetes is that it will figure out where to put the container for you.

Scaling deployment

So scaling a deployment is done by modifying the deployment. Like I said before, the deployment can hold onto how many CPUs it needs and how much RAM it needs. It can also hold onto the scale, so you can say "I need five NGINX replicas" and Kubernetes will put them in the best spots given their hardware requirements. That brings us to another situation: how am I going to connect to my particular container? And that's where another thing inside of Kubernetes, called Services, comes into play.
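As an illustration (assuming a Deployment named `nginx`, which is my example, not something from above), the scale lives right in the Deployment spec:

```yaml
# Fragment of the Deployment spec: ask for five NGINX replicas
# and let Kubernetes figure out which nodes they land on.
spec:
  replicas: 5
```

The same change can be made imperatively with `kubectl scale deployment nginx --replicas=5`; either way, the deployment controller converges the cluster to five running replicas.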

Services

So say NGINX is one of our services. We have multiple replicas running and we need to connect to those, but we want to connect to them in a smart fashion. Services let us manage all of these, and they also put a load balancer in front and give us public accessibility to this particular service. We can have multiple services, just as we can have multiple deployments.

For our NGINX maybe we need two containers, but for our database service we only need one container to actually run it. So we have those as two separate things, and they might be on the same machine, or Kubernetes might decide to put them on separate machines.
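A Service for the NGINX case above might look like this sketch (the name and labels are illustrative and assume the Pods are labeled `app: nginx`):

```yaml
# service.yaml -- illustrative Service that load-balances across
# every Pod labeled app: nginx and exposes them publicly.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer    # puts a load balancer in front
  selector:
    app: nginx          # matches the Deployment's Pod labels
  ports:
    - port: 80          # port the Service exposes
      targetPort: 80    # port the containers listen on
```

The Service keeps track of which Pods match the selector, wherever the scheduler put them, so clients never need to know which machine a container is on.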

Scheduling

It really depends on what your scheduler thinks is the best use of your resources at the time, and you can tweak that within Kubernetes:

  1. You can tell it to prioritize an even distribution.
  2. You can tell it to prioritize fully utilizing the resources on a single node.
  3. You can even write your own schedulers. There's a lot you can do with this, but at its core, Kubernetes is a platform that lets you maintain deployments of containers in production.
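As an illustration of the first two options, the kube-scheduler can be configured to score nodes differently. Treat this as a sketch: the configuration API group and fields have changed across Kubernetes releases, so check the docs for your version:

```yaml
# kube-scheduler configuration sketch. MostAllocated packs Pods onto
# fewer nodes (bin-packing); LeastAllocated spreads them out evenly.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated   # prefer fully utilizing a single node
```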

It really pays off once you get beyond a certain scale, but it definitely works even if you're not at scale, even if you don't have 50 servers (which are called nodes in Kubernetes) that you're working with.

But even if you're not web-scale, you can still benefit from Kubernetes, because it gives you automated health checks, and it can give you rolling restarts and deployments, so that when you deploy a new version of an application you're never cutting off anything that needed access to that service.
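Rolling deployments are configured on the Deployment itself. Here is a sketch of the relevant fragment, with numbers chosen purely for illustration:

```yaml
# Fragment of a Deployment spec: replace Pods gradually so the
# service is never left without a running container mid-rollout.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a Pod down before its replacement is ready
      maxSurge: 1         # bring up at most one extra Pod during the rollout
```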

Note~~~~~

And I've seen extensions to Kubernetes that allow you to do things like manage your Let's Encrypt free SSL certificates automatically, so you don't have to renew them yourself every three months when you know they're going to expire.

I hope you got something from this. 🙃🧤



An IT professional with over 7 years of experience. Member of the WSO2 Identity & Access Management Team.