How Kubernetes replaces people: Pods
This small article’s mission is to introduce people to Kubernetes by comparing it with the old by-hand methods. I’m also new to Kubernetes, and this is how I see this awesome project from my point of view. Feel free to correct me in places :)
Hello everyone! Meet Jim. Jim works as a plain old Software Engineer at the ReplaceItWithSomeName company (RIWSN from here on). They deliver some product every Friday.
Deploying the full server stack takes them about 3 to 5 hours, and most of the time DevOps and other people stay up late to deliver stuff for an angry customer so that its users can receive updates. RIWSN uses the default simple tools that both DevOps and developers know: Java/NodeJS, Apache/Nginx, Jenkins, Docker, and umpteen shell scripts to automate the delivery process. Moreover, this stack differs from one project to another: signing keys sit in different folders, the project lives at /var/project in one container and at / in another, and so on. Unlike the rest of his company, Jim knows about Kubernetes.
Let’s start with a simple example. It’s Friday evening, 6:00 PM. Our Jim works from 9 to 6, like everyone else in his company. And like everyone else, Jim wants to go home, not struggle with overtime and lack of sleep. But life had other presents for him: right past the exit door his boss found him, caught him, and said these sweet words: “Jim, we need your help”. Later he learns that their server went down thanks to a bad deployment by a new, incompetent guy named Insertyour Namen. Of course, Jim’s next move is up to him: should he just go home as “the incompetent, unfriendly, not-good-for-this-company guy”, or stay up late as “the guy who can work any time we wish and handle other people’s problems”? But I will hand you the golden key to this situation: just prepare everything so that this problem can never happen.
Jim remembers that everything could have gone the good way: the deployment couldn’t have been broken by the new guy, everything would be protected from most problems, and as a result the boss would never ask him to fix anything and Jim could go home happily. Everything can go the good way with Kubernetes! So how can Kubernetes help Jim go home on time?
Not just deployment, but automation
There’s a sticky word called “Orchestration”. You can see it right on the front page of kubernetes.io, and it’s not there for nothing. Basically, orchestration is automating a lot of things at once. And that’s right: Kubernetes can handle not only your server, but also its storage, load balancing, security, and other stuff. The other point is that everything is standardized. You can have 50+ projects whose deployment structures don’t look the same, but they will all be in the same format with the same possible properties. Well, let’s start from the back. There are a few popular deployment strategies:
The first is used at our RIWSN company, and it’s barely standardized at all. Everything is deployed on the machine itself, and that’s the first problem. Let’s say some guy who has since left the company was deploying the product (e.g. on Debian): he loaded some dependencies onto the machine (think build-essentials, not from apt but from some weird website that went down a couple of weeks ago, so you can’t find the dependency anymore), executed some commands in a specific order, maybe wrote shell scripts for it, uploaded some configs, and everything worked. Now the boss says we need to move everything to a new machine (which somehow isn’t Debian but, say, RHEL). And Jim has a great time struggling with the deployment process.
The next type cuts out many problems of the first. Docker brings containers, and containers give you abstraction from machines: on any machine that has Docker, your project will work. The downside is that you need to spend a little time writing a Dockerfile, a file that describes how to build and run your project, with environment variables and other settings. Once the image is built from the Dockerfile, you can start it with a single command on any machine.
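A minimal Dockerfile might look like this. This is only a sketch: it assumes a hypothetical Node.js service with an entry point called server.js, so adjust the base image and paths to your own project.

```dockerfile
# Hypothetical Node.js service; adjust base image, paths, and command to your project.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source code
COPY . .

# Environment variables and the port the service listens on
ENV PORT=8080
EXPOSE 8080

CMD ["node", "server.js"]
```

After that, `docker build -t my-app .` builds the image and `docker run -p 8080:8080 my-app` starts it, on any machine with Docker installed.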
But Docker alone doesn’t give you control over several deployments at once. Let’s say you have a project with 4 different microservices that work together. The best way would be to use Docker Compose. With Docker Compose you can bring up all the services, expose some ports, connect them with each other, and a lot more. This is much more suitable for a big project, but still not enough…
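As an illustration, here is a sketch of a docker-compose.yml with just two of those hypothetical microservices (an API and its database); the service names and images are assumptions, not anything from RIWSN’s real stack.

```yaml
# docker-compose.yml — sketch with two hypothetical services
version: "3.8"
services:
  api:
    build: ./api          # assumes the API's Dockerfile lives in ./api
    ports:
      - "8080:8080"       # expose the API to the host
    depends_on:
      - db                # start the database first
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # don't hardcode secrets in real setups
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:                 # named volume so the data survives restarts
```

One `docker compose up` brings the whole set up, with the services able to reach each other by name.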
There’s a pod for that!
That’s what Jim set his mind on. Let’s start with what a Pod actually is. Is it just another “container”? Or is it something more complex? We’re about to find out.
A Pod is not a container, but more like an environment for containers, with its own IP address and possibly storage. You can run more than one container in a pod, and attach more than one volume. A good example is Uber’s way of handling MySQL databases: they have the MySQL process, an additional process that keeps the first one “in shape”, and of course a volume to persist the database state.
Although Uber uses plain Docker for this, so take it as an illustration of what’s possible ;)
The point is, a Pod gives you abstraction from processes, so you don’t operate on a separate process, but rather on what the whole bunch of processes gives you as a result. A similar idea applies to threads in a program: you can have multiple threads in a single process, but you still think of it as a single system.
But that’s not all. Just as git gives you a ticket to the beautiful open-source world, the Pod gives you a ticket to automation and its bounties. Here are some of my favorite features that pods give you access to:
Managing multiple containers as one entity
That’s what we already discussed. One pod can hold multiple containers under one host. They can reach each other via localhost and all see the shared volumes, which frees you from the “one container, one process” limitation.
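A minimal sketch of such a pod, assuming a hypothetical app image plus a log-shipping sidecar that reads the same volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar    # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # scratch volume shared by both containers
  containers:
    - name: app
      image: my-app:1.0     # assumed image that writes logs to /data/app.log
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-shipper     # sidecar container in the same pod
      image: busybox
      command: ["sh", "-c", "tail -F /data/app.log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers share the pod’s network namespace (so they could also talk over localhost) and the emptyDir volume, while each still does exactly one job.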
Container placement — you control where to deploy
Kubernetes is not just one process on your machine; it’s a network of computers that communicate as a single system with one goal: provide the best uptime, scalability, and availability. This organism can scale your pods by itself to handle load from the network, or do other cool stuff. But you always have the last word, like a president. So if you have two Kubernetes nodes, one in Frankfurt and the other somewhere in Kansas, you can (if you want) deploy a specific pod only to Frankfurt. There will be no such pod in Kansas; everything to satisfy you.
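Pinning a pod to the Frankfurt node can be done with a nodeSelector. This sketch assumes the node has been labeled beforehand (e.g. with `kubectl label node <node-name> region=frankfurt`); the label key and image are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frankfurt-only      # hypothetical name
spec:
  nodeSelector:
    region: frankfurt       # only schedule on nodes carrying this label
  containers:
    - name: app
      image: my-app:1.0     # assumed image
```

The scheduler will now only place this pod on nodes labeled `region=frankfurt`, so Kansas never sees it.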
Horizontal scaling
Suppose you started one server (luckily in a container), and then you hit a scalability problem: you need another one. Without Kubernetes you’d need to start a new container (or even a new server) and reconfigure your router/discovery server to handle two servers. Kubernetes can handle this for you with a change of a single value.
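That single value is the `replicas` field of a Deployment. A sketch, with hypothetical names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 2               # was 1; changing this one value adds a second pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0   # assumed image
```

You can also do it without touching the file: `kubectl scale deployment my-app --replicas=2`. A Kubernetes Service in front of the deployment load-balances across the replicas, so no router reconfiguration is needed.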
That’s what I already described under container placement. You can set up your deployment so that when one pod runs short of CPU, Kubernetes handles it by auto-scaling the pods. Back to your two servers: if the pod in Frankfurt is overwhelmed and you wrote your deployment right, Kubernetes will bootstrap another pod in Kansas to handle the load from the network. And that’s cool!
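Auto-scaling on CPU pressure is what the HorizontalPodAutoscaler does. A sketch targeting the hypothetical `my-app` deployment from above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app              # hypothetical; targets a Deployment of the same name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU goes past 80%
```

For the utilization math to work, the pods need CPU requests set (see the resource limits section below for what those look like).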
Storage orchestration
Once my company deployed Hyperledger Fabric onto Kubernetes, and that was horrible. But one thing that was really cool is that Kubernetes handled all the volumes by itself. You create free, empty volumes that can be claimed, and pods use them as needed. It doesn’t always have to be this complicated: for simpler problems you can use simpler ways of volume management.
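The “claiming” part works through a PersistentVolumeClaim. A sketch with hypothetical names, pairing a claim with a database pod that mounts it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data             # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce         # mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi          # ask for 5 GiB from the available volumes
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: example    # don't do this in production; use a Secret
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data  # the pod claims the volume declared above
```

Kubernetes matches the claim to a suitable free volume (or provisions one dynamically), so the pod never cares where the storage physically lives.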
Resource usage monitoring
This point covers limits: you can put resource limits on your pod so that it won’t eat all the RAM/CPU of the machine. But Kubernetes also handles resource monitoring for you. It even has its own web UI for this!
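Requests and limits are set per container. A sketch with hypothetical names and numbers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-app         # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:1.0     # assumed image
      resources:
        requests:           # what the scheduler reserves for this container
          cpu: 250m         # a quarter of a CPU core
          memory: 256Mi
        limits:             # hard caps the container cannot exceed
          cpu: 500m
          memory: 512Mi
```

If the container tries to go past the memory limit it gets killed and restarted; CPU over the limit is simply throttled, so the rest of the machine stays responsive.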
Health checks(Liveness, readiness) — HTTP probe, TCP probe, Command probe
That’s where I had my “wow” effect when I discovered this. You have the ability to set up liveness and readiness checks on a pod. The liveness check constantly verifies that your pod is alive.
There’s also the readiness check. The readiness check probes your pod on startup to answer the question: “Is this pod ready to handle requests yet?”
You can run these checks in whatever way suits you: TCP, HTTP, or even a command inside the pod.
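A sketch showing all three probe styles on a hypothetical container that serves HTTP on port 8080:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app          # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:1.0     # assumed image exposing HTTP on 8080
      livenessProbe:        # HTTP probe: restart the container if this fails
        httpGet:
          path: /healthz    # assumed health endpoint
          port: 8080
        periodSeconds: 10
      readinessProbe:       # TCP probe: no traffic until the port accepts connections
        tcpSocket:
          port: 8080
        initialDelaySeconds: 5
      # A command probe is the third option, e.g.:
      # livenessProbe:
      #   exec:
      #     command: ["cat", "/tmp/healthy"]
```

A failed liveness probe gets the container restarted; a failed readiness probe just takes the pod out of the load balancer until it recovers.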
Seamless rolling updates ❤
The readiness check wouldn’t be so awesome without this point. When you need to update your API, you always get some small downtime (e.g. while you swap two containers). Kubernetes gives you seamless updates, that is, updates with no downtime at all. It works like this: you have a pod that’s working fine, but you have to update your API, so you build a new container and update the Kubernetes deployment. Kubernetes then starts a new pod but doesn’t kill the old one until the new one is ready to handle requests (via the readiness check, of course). And that’s awesome!
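This behavior is configured through a Deployment’s update strategy. A sketch, again with hypothetical names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never drop below the desired pod count
      maxSurge: 1           # bring up one extra pod with the new image first
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:2.0   # the new version being rolled out
          readinessProbe:     # gate: old pods die only after this passes
            httpGet:
              path: /healthz  # assumed health endpoint
              port: 8080
```

Triggering the rollout is one command (`kubectl set image deployment/my-app app=my-app:2.0`), and `kubectl rollout status deployment/my-app` lets you watch old pods being replaced only as new ones become ready.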
Jim told all this to his coworkers and showed some graphs, and they all agreed it’s a game changer. But they were all concerned about security, management, access, and other things that haven’t been mentioned yet; that’s a story for other parts of Jim’s journey. At least Jim managed to shake the others up with a new technology that will save many hours in the future. He’s the hero of the cubicles today.