Part 1: Journey to Kubernetes (K8s) at Haven Technologies

At Haven Technologies, we run our applications and services across thousands of containers, in various environments, throughout the software development process. We define our containerized applications in Docker Compose (.yml) files. These Compose files drive our development process in Continuous Integration (CI) environments, where developers build and Quality Assurance engineers test until we have a final working product.
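To make this concrete, here is a minimal sketch of what one of these Compose definitions might look like. The service names, images, and ports are hypothetical, for illustration only, not taken from our actual repositories:

```yaml
# docker-compose.yml (illustrative sketch)
version: "3"
services:
  web:
    build: .               # build the app image from the local Dockerfile
    ports:
      - "8080:8080"        # expose the app for CI tests
    depends_on:
      - db                 # start the database container first
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` in CI brings up the whole application stack defined this way.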

After a successful test, we promote our applications to more stable environments: UAT, Production, and eventually Demo. These environments have limited developer access, and our DevOps and CloudOps teams manage them. As the company grew, managing this many containers became difficult and inefficient. With a release every two weeks, the influx of changes each release brought became hard to manage on Convox, our previous deployment platform.

We began to research Kubernetes. Kubernetes (K8s for short) is the most mature container orchestration system available today and a top-level CNCF project. It fits well with our container strategy at Haven Technologies, has massive community support, and comes with native features that I'll discuss in this and future posts: autoscaling, a very rich API, automatic alerting, load balancing, logging, a firewall (network policies), easy job scheduling, and good integration with AWS, which primarily helped us improve our security, among many other things.

EKS Kubernetes High Level Architecture

We use the AWS-managed version of Kubernetes, known as EKS. An EKS cluster is composed of master (control plane) nodes, which AWS manages, and worker nodes organized into node groups, which we define using Terraform. In Terraform we specify the size of each node group and the number of nodes in it, thereby defining the entire cluster.

This gives us the ability to rebuild and replicate clusters by tweaking the cluster configuration defined in Terraform, while specifying the node size for each cluster as we want. Having every EKS cluster defined and configured in Terraform lets us keep all the AWS resources required to spin up a cluster in a single Terraform repo, which we can grow iteratively.
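As a rough illustration of this setup, an EKS cluster and node group in Terraform look something like the sketch below. All names, variables, instance types, and counts here are hypothetical placeholders, not our actual configuration:

```hcl
# Illustrative sketch of an EKS cluster defined in Terraform
resource "aws_eks_cluster" "main" {
  name     = "example-eks"                 # hypothetical cluster name
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.private_subnet_ids    # one subnet per Availability Zone
  }
}

resource "aws_eks_node_group" "default" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "default"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = ["m5.large"]           # the "size" of each node

  scaling_config {
    desired_size = 3                       # the number of nodes
    min_size     = 3
    max_size     = 6
  }
}
```

Changing a handful of values in a definition like this is enough to resize a cluster or stamp out a replica of it.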

Lastly, we run each Kubernetes cluster across multiple Availability Zones (AZs), with every node placed in one of the AZs assigned to the cluster. When a node or an entire AZ goes down (which does happen, and is one reason you see many companies' sites fail at times), our apps have at least two backup nodes in other AZs serving containers and ready to pick up traffic immediately, so when it does happen, our customers won't notice a difference in the performance of the site.
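One way to express this kind of zone-level resilience in Kubernetes itself is with topology spread constraints, which ask the scheduler to balance a workload's pods across AZs. The sketch below is illustrative only (the app name and image are hypothetical, and this is not necessarily the exact mechanism we use):

```yaml
# Illustrative Deployment that spreads replicas across Availability Zones
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                  # keep zones within 1 pod of each other
          topologyKey: topology.kubernetes.io/zone    # spread by AZ
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web-app
      containers:
        - name: web-app
          image: example/web-app:latest
```

With three replicas spread this way, losing one AZ still leaves pods in the remaining zones to absorb traffic.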

Very Rich API

The EKS master, as seen in the image below, is the control plane, composed of several K8s components. One such component is the API server, which exposes a very rich API. The API server relays instructions to every worker node through each node's kubelet (the agent that communicates between the master and the node), while kube-proxy handles network routing on each node.

Kubernetes cluster architecture

Further, the Kubernetes API documentation is very thorough, and the project is open source. Kubernetes is the most widely used container orchestration platform, which makes it easy to research and to integrate with other open source cloud native technologies.

We also use Kubernetes Ingress and private endpoints with the API, which reduced costs and improved our security; our last blog post explains this here: https://medium.com/haven-life/critical-decisions-we-made-while-migrating-to-kubernetes-40a3189ab868
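For readers unfamiliar with Ingress: it is the Kubernetes resource that routes external HTTP(S) traffic to internal Services. A minimal sketch looks like the following, where the hostname, service name, and port are hypothetical, and the exact annotations depend on which ingress controller a cluster runs:

```yaml
# Illustrative Ingress routing a hostname to an internal Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
spec:
  rules:
    - host: app.example.com          # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app        # internal Service receiving the traffic
                port:
                  number: 80
```

A single ingress controller and load balancer can then serve many applications, rather than provisioning one load balancer per service.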

More Kubernetes features we use, and future blog posts

One of the most valuable features Kubernetes provides is its ability to autoscale (and scale back down). We've decided to dedicate our next blog post to Kubernetes autoscaling, so stay tuned.
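As a small preview, pod-level autoscaling in Kubernetes is typically configured with a HorizontalPodAutoscaler, which grows or shrinks a Deployment based on observed load. This sketch is illustrative only; the target name, replica bounds, and CPU threshold are hypothetical:

```yaml
# Illustrative HorizontalPodAutoscaler scaling a Deployment on CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```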

In closing, we use many other Kubernetes features at Haven Technologies, and we'll post more about them in the future.
