Kubernetes is an orchestration tool for Docker containers. So, naturally, we started out using Docker and only later discovered the advantages Kubernetes had to offer.
When we were just starting our project, we set a goal to build it with Microservices. Our main target was organizing continuous integration and deployment, something that is truly hard to achieve with Tomcat alone, without any kind of containerization.
It was around this time that containerization started to grow in popularity, and we thought: why not use it? We had previously tested Microservices on another project, but CallMonkey was the first one where we actually dived deep into Microservices architecture.
Example of Microservices Architecture
Today, we have 19 containers running, which is a rather big number.
What were the advantages of Docker?
The use of Docker came with clear advantages for our team. It gave us independence from the operating system.
Wherever Docker runs, the infrastructure is the same: we don't have to configure separately for Windows, Linux, or any other platform. Docker gives you a unified platform that allows you to scale and deploy your application through containers.
What problems did we have in Docker?
When we first started using Docker, its API was limited and lacked a lot of features that were actually necessary. At first, in order to make the containers visible to one another, we used Docker's CLI: when we created containers, we gave them network names so they could communicate with each other. Later on, as we followed technology blogs closely, it turned out that a new networking feature called Docker Network was now supported. What was its flow? Docker Network allowed us to first create a network and then add our containers to it, so they could communicate through DNS.
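As a rough sketch (the network, container, and image names here are hypothetical), the Docker Network flow looks like this:

```shell
# Create a user-defined network; containers attached to it can
# resolve each other by name through Docker's embedded DNS.
docker network create app-net

# Start two services on that network (names and images are illustrative).
docker run -d --name search-ms  --network app-net elasticsearch:2
docker run -d --name company-ms --network app-net company-ms:latest

# From inside company-ms, the search service is now reachable by name:
#   curl http://search-ms:9200/
```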
However, it was really hard for us to delete containers and replace them with new ones, even though this is quite a usual practice during a hectic development process. So, from the perspective of managing the project, there were some apparent issues.
On top of all this, we had the project integrated with Bamboo, with a lot of Bash scripts. When you have a lot of containers in your project, issues arise that may require additional tooling.
There was another issue, too. Because a Microservices architecture implies the ability to scale, we needed a way to scale our project horizontally, which is quite a common practice for Microservices.
While using Docker we also had a project-specific issue connected with virtualization. The servers we use here at SFL are virtual Linux servers running inside a Windows server, and the team works on them. When you use Docker, you are allocated some RAM and CPU, and while working on the project we faced an issue whose origin we didn't know: as soon as we started one Tomcat server, we immediately ran out of RAM. It was really disappointing, so we shifted from dynamic allocation to static allocation and the problem was solved.
So that's when we started thinking: "Is there any kind of tooling or framework that would make configuration, scaling, replication, and high availability easier?" As we conducted our research, the first thing we came across was Kubernetes.
Initializing Work with Kubernetes
When we first came across Kubernetes and started looking through its documentation, it turned out that it wasn't very user-friendly. The internet community didn't seem very positive about the way its usage was explained, and while looking through the files it sometimes seemed rather complicated to get going.
Kubernetes is based on a rather simple idea: there is a "master", and on the rest of the nodes that you want to manage or include in your cluster, agents called kubelets (or "minions") are installed. The master manages the minions, and when you work with a Docker instance, both the master and the minions appear on the node and you can operate them successfully.
As we progressed through the documentation, we singled out a solution that didn't require much server-side configuration. It works as follows: the Kubernetes components (the etcd cluster, the API server, the proxy, the scheduler, and so on) run as containers on Docker itself. All of these are rather hard to configure separately, but the solution where a single command performs all of these steps was a literal blessing for us initially.
It was really good for the team to start from this point and understand what Kubernetes could actually bring to the table for our projects. As soon as we started using Kubernetes, the impressions were very good!
There are a few ways to configure Kubernetes. The first is through its CLI, where you can run the commands you need from the console. The second is through .json files, and the third is through .yaml files.
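For illustration, the same setup can be expressed either imperatively from the console or declaratively from a manifest file. The names here are hypothetical, and the exact commands depend on the Kubernetes version:

```shell
# Imperative, via the CLI:
kubectl create deployment company-ms --image=company-ms:latest

# Declarative, via a .yaml (or .json) manifest describing the same object:
kubectl apply -f company-ms.yaml
```

The declarative route has the advantage that the manifest can live in version control next to the application code.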
So the first advantage of Kubernetes we noticed is that replication there is very easy: it's just a matter of configuration, something that Docker really lacks.
Advantages of Kubernetes
Kubernetes offers a wide range of advantages: it provides high scalability, makes container management easier, and helps reduce communication delays, another issue that may come up with Docker.
What is the communication scenario in Docker? Let's say we have a Company microservice and a Search microservice running on ElasticSearch, and a connection is established between them. Because of the way the network establishes this communication, it can often take very long, causing unnecessary delays.
As our team tried to solve this issue in Docker, we realized it was connected with the network. When we shifted to Kubernetes, we no longer saw such delays.
Another advantage of Kubernetes is that when we build our Microservices, we can add as many replicas as we want at runtime. So if the project expands, making changes to it doesn't take a lot of effort.
Imagine you have three servers. One of them is overloaded and the other two are relatively free. There's the master, and there are the minions on the servers, and you want to reduce the load on a particular server. When you tell the master to create more replicas, the scheduler decides which nodes are less loaded at the moment and spreads the load onto them. The use of Kubernetes was rather comfortable from this perspective as well.
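Assuming a Deployment named company-ms (a hypothetical name), adding replicas is a single command; Kubernetes then places the new pods on the nodes with spare capacity:

```shell
# Scale up to five replicas; the scheduler picks nodes
# with free resources for the new pods.
kubectl scale deployment company-ms --replicas=5

# Check which nodes the replicas landed on:
kubectl get pods -o wide
```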
Solving Network Issue with Kubernetes
Kubernetes also solved the network issue we had with Docker. Kubernetes itself doesn't ship a network tool, but there are add-ons written by CoreOS. It's a wonderful tool that allows you to streamline the communication between your microservices.
In our microservices setup there is a gateway that handles communication with the external world; the internal microservices aren't visible from the outside. When a request comes to the gateway, the gateway decides to send a request to, for instance, the Company MS, requiring that the user authenticate.
When this request is sent, it doesn't refer to an IP or a specific host, and it carries no routing information whatsoever; that is handled by the proxy and the API server. So any internal communication within Kubernetes is managed by a load balancer.
The same happens when internal microservices communicate with one another: the load balancer does its work there, too.
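A minimal sketch of how this internal load balancing is usually wired up (service and label names here are hypothetical): a Service gives the Company MS replicas a single stable DNS name, and requests to that name are spread across the matching pods.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: company-ms
spec:
  selector:
    app: company-ms        # matches the pods of the Company MS
  ports:
    - port: 8080           # port the Service exposes inside the cluster
      targetPort: 8080     # port the pods actually listen on
EOF

# Other microservices now call http://company-ms:8080 and the
# proxy balances the requests across the replicas.
```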
I have tried this myself with the Gateway MS, when I had several replicas of the Company MS. When a request comes in for the Company MS, the load balancer decides which replica to send it to at that moment.
If we had tried to solve all these issues with Docker alone, it would have been rather complicated, and we would have needed to set up our own load balancer.
There is another advantage of Kubernetes that we are about to adopt in our projects. All of our microservices are currently located on a single server, which was initially done because of a lack of resources, and we are going to separate them once we move to the cloud. Kubernetes offers rolling updates, which allow for live, zero-downtime deployment of your application, something that is quite an issue for many projects today.
When working with Java you deploy compiled bytecode, and we needed a deployment solution that would minimize downtime, so the team could ship hotfixes and ensure maximum performance and availability.
The agile development principles also added to the overall success of the project, and little by little we are getting to the point of using rolling updates.
Kubernetes also solves the issue of resources: you don't have to reinvent the wheel, and that's a huge benefit for both the client and the team.
It's also important that our Microservices are stateless: when one of them is shut down, it holds no state, so the next request has no dependency on it. Overall, this gives the team the freedom to perform rolling updates.
Let's say you have three replicas of the Company MS and you want to swap one version for another. Kubernetes creates instances from the new version and forwards part of your traffic to the new instance. Then this action is repeated for the rest of the instances.
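With a Deployment, such a rolling update comes down to changing the image and watching the rollout (the deployment name and image tags here are hypothetical):

```shell
# Switch the replicas to the new version, one batch at a time:
kubectl set image deployment/company-ms company-ms=company-ms:v2

# Watch old pods drain while new ones take over the traffic:
kubectl rollout status deployment/company-ms
```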
Kubernetes also has a rollback feature, which allows you to undo a deployment if you find it necessary. So you can easily return to a previous version of your application.
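Assuming the same hypothetical Deployment as above, a rollback is equally short:

```shell
# Inspect past revisions of the Deployment:
kubectl rollout history deployment/company-ms

# Revert to the previous revision (or pass --to-revision=N
# to pick a specific one):
kubectl rollout undo deployment/company-ms
```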
While these are the main advantages of Kubernetes that we have used in our project, I believe there are many more in store ahead.
About this author
Arthur Asatryan is a Java Engineer with a heart for innovation and complex algorithmic solutions.