What is Kubernetes?
Kubernetes is a tool for deploying and managing your containerized applications. The key word here is "containerized": if your applications haven't reached that step yet, you should start there and go Dockerize them. It will be a big first step towards making your environments uniform (dev, test, staging, preprod, prod).
Once that's done, Kubernetes might be useful to ease deployments and improve stability, redundancy, high availability… A lot of big words which usually rhyme with lots of money and time (or money and money, to sum up).
Note: I will say Docker and Dockerize in the rest of the story, but you can use other containerization methods like rkt with Kubernetes and my remarks will still apply.
First, you will need more than 2 or 3 servers to run your applications. If you don't, Kubernetes will just eat resources for very little profit (you probably don't have that much to manage on 3 servers that you couldn't already handle with some well-configured Docker).
One of the servers will see its resources used mostly by Kubernetes (around 72%). The core components of Kubernetes run there as containers, and they are managed by Kubernetes pretty much the same way it manages your containers. You could say that Kubernetes is self-managed (which is kind of what every new technology in IT does, apply the great new piece of software to itself: "Look how my new language can compile itself!" 😃)
To summarize, if you need to deploy Dockerized applications on more than 3 servers, Kubernetes will help you do this:
- Manage your resources and optimize server usage with no effort
- Scale your applications across your servers with no effort
- Self-heal your applications in case of server failure with no effort
- Run one-shot jobs or cron jobs on any available server with no effort
- And much more…
Did I mention that Kubernetes enables a lot of DevOps key features with no effort?
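To give you an idea of the "no effort" part, here is what running a recurring job looks like. This is a minimal sketch: the name, image and schedule are hypothetical, but the structure is the standard CronJob manifest.

```yaml
# A CronJob that runs a container every night at 02:00.
# "nightly-report" and the image are made-up examples.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: myregistry/report-job:1.0
          restartPolicy: OnFailure   # retry the job if it fails
```

Kubernetes picks a node with free resources, runs the job there, and retries on failure. No SSH, no crontab on a specific server.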
Jokes aside, if you already use Docker in your day-to-day life to develop or to deploy, you're just one step away from the scalability provided by Kubernetes.
After a few minutes on Kubernetes you will discover some new vocabulary. So let’s explain these new words:
- k8s is the abbreviation of Kubernetes ("k", then 8 letters, then "s", just like i18n for internationalization); you will see it a lot in discussions (on StackOverflow, help forums, mailing lists…)
- A pod is the smallest unit Kubernetes deploys: one or more containers running together somewhere in the cluster
- A node is a virtual or physical machine on which Kubernetes can run pods
- A cluster is a set of nodes. Kubernetes can manage multiple clusters via contexts
With that in mind we can begin to answer the big question of the beginning!
Because you won't tell it what to do, but what you want!
While using Kubernetes you will launch some commands, but most of the time you will write description files. It might seem disorienting at the beginning, it's not the good old way we used to do it, but this "describe a state and apply it" philosophy, over "tell the server to do X", is growing (other tools like Terraform are based on the same philosophy).
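Here is what "describe what you want" looks like in practice. You don't run "start 2 copies of my app": you write a Deployment file describing the desired state and apply it. The name and image below are placeholders.

```yaml
# Desired state: 2 identical pods running this image, always.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # hypothetical name
spec:
  replicas: 2            # "I want 2 copies", not "start 2 copies"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image works here
          ports:
            - containerPort: 80
```

You apply it with `kubectl apply -f web.yaml`, and Kubernetes continuously works to make reality match the file: if a pod dies, it starts a new one without you asking.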
You will be able to version your architecture
Since you describe your architecture (number of services, pods, load balancers…), you will be able to version those files in some VCS (git, mercurial, you name it…) and keep track of how your architecture evolves. You could imagine tracking the growth of your services and pod counts, and then planning resource increases in advance.
You will have cheap redundancy
Creating redundant applications is usually painful from the DevOps perspective. You have to set up multiple identical servers (same OS version, apache/nginx versions, php/python/node versions…). You have to set up a deployment process that ensures no downtime, a rollback procedure… None of this is trivial. Kubernetes lets you create a ReplicaSet in which you set the number of replicas, and everything is managed by Kubernetes. When you deploy, Kubernetes ensures that no downtime happens, and if by any chance your application is buggy and starts crashing, a seamless rollback is performed.
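In practice you rarely write a ReplicaSet by hand: you write a Deployment, which manages the ReplicaSet for you and adds the rolling update and rollback behavior described above. A sketch, with a hypothetical name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                  # hypothetical
spec:
  replicas: 3                # three identical pods at all times
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never go below 3 ready pods during a deploy
      maxSurge: 1            # start 1 extra pod, then retire an old one
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:2.0   # hypothetical image
```

And if the new version turns out to be buggy, `kubectl rollout undo deployment/api` brings back the previous one.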
You will have autoscale
Kubernetes allows easy autoscaling inside your cluster. Your cluster is usually under-used, so there is room available to mount new pods. Kubernetes can automatically mount new pods (and unmount them) according to a metric (RAM, CPU, custom) you provide it. This ensures you get the most out of your cluster. This kind of autoscaling is called horizontal autoscaling.
If there is horizontal, there should be vertical too, and there is! You can tell Kubernetes not to create new pods, but to boost existing ones (reserve more RAM or CPU).
And finally there is cluster autoscaling, which is usually handled by your cloud provider. You can ask for a minimum number of nodes in your cluster for the day-to-day jobs, but you can also configure a maximum number of nodes for when it's crazy outside. When Kubernetes tries to create new pods but doesn't have the needed resources, it will wait for a new node to be ready and then create the pods there.
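Horizontal autoscaling is itself described declaratively, with a HorizontalPodAutoscaler object. A minimal sketch, assuming a Deployment named `api` already exists (both names here are hypothetical):

```yaml
# Keep between 2 and 10 pods of the "api" Deployment,
# adding pods when average CPU usage goes above 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Same philosophy as before: you describe the bounds and the metric, and Kubernetes decides when to mount or unmount pods.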
You will be agnostic of your cloud provider
Kubernetes is offered as a managed service by the three biggest cloud providers (Google with GKE, Amazon with EKS and Azure with AKS). If you are not on one of these, you can still install Kubernetes on your own servers (virtual or bare metal). This means that changing your kubectl config and launching a few commands is all you need to replicate your architecture on another provider.
Your resources will be optimized!
Usually when you deploy applications, you choose a server X which is well oversized for your app (because traffic will grow in the future and switching servers is costly) and a second one, Y, probably a bit oversized too for your database. Your resources are under-used this way. With Kubernetes you can size your nodes just right, and when you need more power you add a new node to the cluster and just tell Kubernetes to scale your app (by giving more resources to the pods or by creating more pods).
You can also colocate pods on the same server. Where before you were creating a server for each service (to isolate them), now you just need a pod (which is isolated), and Kubernetes manages these pods and tries to optimize resources.
Why not Kubernetes?
I was really pro-Kubernetes all along this article, but here are some cons:
I don’t use docker
The cost of switching an architecture to Docker can be quite high. It also means that you will have to change your mindset as a DevOps, which can be challenging.
I only manage 3 (or less) servers.
You probably don't need Kubernetes here. First, it will eat most of the resources of one server, and while your 3 services might fit on the remaining 2, the gain seems quite low. However, if you plan to make this architecture grow, it's another story!
Doing stuff the good old way is so easy you can get it done in a day (maybe). Doing the same stuff with Kubernetes, especially if it's your first time, might take more time. It's time to invest for the future!