Happy trip to Kubernetes in our company.

Flavio AAndres
Condor Labs Engineering
9 min read · Nov 26, 2021

Think about what it used to take, as a developer, to create a new service at your job: asking for virtual machines with a specific capacity, cloning the code directly onto the machine, running all your services manually on the PROD machines and, of course, leaving scaling at release time entirely in the hands of the IT people.

Today there are many better ways to handle the services in your different environments, and we have been learning them little by little. Here is a brief summary of the story of how we implemented our first service in Kubernetes, taking advantage of working directly in the cloud.

Note: We were focused on creating easy ways for our developers to implement and deliver services. For this, we started at the development phase and worked through to the final deployment step.

🐳 Docker And Kubernetes.

As you probably know, Kubernetes is a tool that allows us to deploy, manage, scale, and publish our services in an “easy” way. K8s builds on the way we currently containerize our apps and services using Docker. And that is where our Kubernetes story starts: Docker.

Docker is well known for running your services as “containers”, agnostic of the OS, drivers, and conditions of the machine where you run them. It could be a Windows or Linux server or even your personal PC; in every environment, your service will work the same way.

Those containers should include all the resources, dependencies, and drivers your application needs. For example, say you need to run a NodeJS app that internally connects to MongoDB and Oracle. Its container should include the NodeJS runtime, the MongoDB dependencies of your project, and also the Oracle driver NodeJS requires to talk to the database.

🐳 Dockerizing our apps.

Our first step was to dockerize all our components, creating a pipeline from development to production and trying to reduce the friction between working in the development environment and running the code in a Docker container on your local machine.

We started by creating a Dockerfile for each service, and then a docker-compose file for the services in our repository. Each service has its own Dockerfile to run the project, and internally we told docker-compose to create a network so the services could communicate with each other.

This docker-compose implementation was thought out for our developers. By running docker-compose up --build <service-name>, we guaranteed they could continue making their changes without problems, and we also fulfilled the first requirement for implementing Kubernetes: having our services dockerized.
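As an illustration, a minimal docker-compose.yml for this kind of setup might look like the sketch below (the service names, ports, and images are hypothetical, not our actual configuration):

version: "3.8"
services:
  api:
    build: ./api                # each service has its own Dockerfile
    ports:
      - "3000:3000"
    environment:
      - MONGO_URL=mongodb://mongo:27017/app
    networks:
      - backend
  mongo:
    image: mongo:4.4
    networks:
      - backend
networks:
  backend: {}                   # shared network so services can talk to each other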

👩🏽‍💻 Researching K8s in local environments

Continuing with our developers-first approach, we needed to establish how developers could test their changes in a simulated environment as similar as possible to a real Kubernetes cluster. Developers also needed to be able to change the number of instances of their service, update resource requests and limits, and update the Docker images K8s uses for the services.

We found Minikube as our first way to manipulate K8s and test everything in our local environment. We saw Minikube as a good approach to easily run the services and test changes to resources or code regardless of the OS, and also to:

  • Easily test new features added to our deployments/pods.
  • Test new features of Kubernetes.
  • Play with Kubernetes in a stable sandbox.
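For context, spinning up this sandbox boils down to running minikube start, after which kubectl automatically points at the local cluster.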

So, we had everything we needed to start deploying our services in Kubernetes, and that is what we did: we started creating the deployment objects to play with.

👩🏽‍💻 Creating the deployment files and running K8s for our services.

As you probably know, you can create different kinds of objects inside Kubernetes, such as Pods, ReplicaSets, Deployments, DaemonSets, StatefulSets, and more. Each object behaves differently inside the cluster; in our case, we used Deployment objects.

The Deployment object lets us be confident our service will always be up and running. These objects check that we always have the number of replicas defined in our desired state: if a pod fails, the Deployment takes over, deletes the pod, and creates a new one to maintain the replica count.

We needed to establish some rules and good practices for the deployment files (a minimal example follows this list):

  • Those files should always belong to a specific namespace. You must not create objects in the default namespace.

All deployment files should have the following metadata:

  • app: a well-structured name for the app.
  • project: the project name of the component.
  • type: whether the service is an API, worker, or frontend.
  • component: if the app belongs to a project, use only the name of the component.
  • priority: an optional field used for autoscaling pods.
  • All pods or deployments must define resource requests and also limits. This way, K8s knows how to schedule them and when to restart a pod whose consumption suddenly increases beyond its limits.
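Putting these rules together, a deployment following our conventions might look like this minimal sketch (the names, namespace, and resource values are illustrative, not our real configuration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reports-api
  namespace: reports              # never the default namespace
  labels:
    app: reports-api
    project: reports
    type: api
    component: reports-api
    priority: high                # optional, used for pod autoscaling
spec:
  replicas: 2
  selector:
    matchLabels:
      app: reports-api
  template:
    metadata:
      labels:
        app: reports-api
    spec:
      containers:
        - name: reports-api
          image: condorlabs/reports-api:1.2.0   # hypothetical image
          resources:
            requests:
              memory: 128Mi
              cpu: 100m
            limits:
              memory: 256Mi
              cpu: 250m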

Helm 🧑🏻‍💼

Helm is known as the package manager for Kubernetes modules, plugins, and CRDs (Custom Resource Definitions). With it, you can install many services directly in your Kubernetes cluster without sending the .yaml files yourself through kubectl commands. Helm takes over creating the deployments, services, and namespaces that your new service needs.

Helm not only works as a package manager, though. You can also customize the values and behavior of your K8s objects by using “charts”. A chart is a package, stored in a public or internal repository, that holds your K8s object templates and the possible values your app could have.

A deployment template would look like this:

...
spec:
  containers:
    - name: {{ .Values.metadata.appName }}
      image: "{{ .Values.image.repository }}"
      imagePullPolicy: IfNotPresent
      envFrom:
        - secretRef:
            name: {{ .Values.secretName }}
      resources:
        requests:
          memory: {{ .Values.resources.requests.memory }}
          cpu: {{ .Values.resources.requests.cpu }}

Using those templates, we can send Helm the values they should contain through a YAML file or directly in the terminal using the Helm CLI. This definitely allows us to have different settings per environment and to ship deployments and rollbacks just by changing the .Values.tag in the Helm template. You can find more info here!
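As an example (the names and values here are illustrative), the values file feeding the template above could be:

metadata:
  appName: reports-api
image:
  repository: condorlabs/reports-api:1.2.0   # hypothetical image and tag
secretName: reports-api-secrets
resources:
  requests:
    memory: 128Mi
    cpu: 100m

A command such as helm upgrade --install reports-api ./chart -f values.production.yaml would then apply the environment-specific settings.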

Making it easy to work in local environments

To manually deploy a service in Minikube, we had to rebuild the Docker image every time we made a change. If you only manage one service, this is easy to handle; but when many services in a repository need to be running for everything to work as expected, we needed a way to run those builds automatically and restart the pods so they pick up the new image.

We researched how to automate this and found Skaffold, a tool for creating a complete dev environment fully integrated with Kubernetes and Minikube.

Skaffold takes over building all the images you need, restarting the pods, and listening for more changes. With this, you get a hot-building feature that sends everything to Minikube, so the devs don’t have to take care of this task. Finding this tool was a win for us.
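As an illustration, a minimal skaffold.yaml for a repository like ours could look as follows (the image name and paths are hypothetical). Running skaffold dev then builds the images, deploys the manifests to Minikube, and keeps watching for changes:

apiVersion: skaffold/v2beta26
kind: Config
build:
  artifacts:
    - image: condorlabs/reports-api   # hypothetical image name
      context: api                    # folder containing this service's Dockerfile
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                    # our deployment/service files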

With that, Kubernetes was completely integrated into our local flow. The next step was to deploy all our services in a real K8s cluster in the cloud.

Making it even easier for the dev team

Something that caused friction for the developers was how they had to run all the code locally: make changes using docker-compose and then test using Skaffold. This generated small delays in the development workflow.

Also, we couldn’t run and develop our application directly in Skaffold because it performs a full rebuild each time we update a file in our code. We even tried to configure its file-sync feature to sync changes directly into the pod, but due to our folder structure, this was a bit difficult.

For those reasons, we started to look for different tools to make the workflow even easier.

It’s common for us to watch worldwide conferences about the topics we want to learn. In this case, a member of our team found a talk where Ellen Körbes covered many tools that make life easier in the dev workflow. This is what led us to Tilt. Check the talk here.

Tilt is an open-source tool focused on fast, comfortable, and customizable rebuilds for Docker and Kubernetes. Tilt makes it really easy to develop many services locally that need to communicate among themselves, and it’s strongly focused on the developer experience.

Once we implemented Tilt, we were able to run our services in the dev phase by running tilt up. With a well-written configuration, you can get reloads in a few milliseconds using the sync feature. It also integrates easily with Helm, the most used package manager for K8s.

🔥 Implementing Kubernetes cluster in the Cloud

Having previous experience with several AWS services helped us configure the different prerequisites to get the EKS cluster up and running. I suggest you read about networking, computing, and Identity and Access Management (IAM) to understand how exactly EKS works.

Let’s talk about EKS. The Elastic Kubernetes Service in AWS provides us with a managed master node (the control plane). This master node is in charge of creating, deleting, and scheduling pods, and of managing the networking of all the objects created in the cluster.

Amazon takes over handling increases in this node’s consumption and sudden failures of the main K8s services.

For this reason, our task is to manage the worker nodes where our pods run. Each node is an EC2 instance of whatever type you choose, and of course, all the EC2 instances you use are billed separately.

Note: If you really are going to create an EKS cluster, consider checking this file and choose the instance type of your nodes wisely.

To create and upgrade this EKS cluster, we tried two approaches. First, we went to the official documentation and deployed the service manually. It wasn’t an easy task, since we had to configure everything from scratch.

Before creating the EKS cluster, we had to complete a long list of prerequisites: create a user with enough permissions, the VPCs, Security Groups, and more permissions, and also manually set up the nodes (EC2 instances) we wanted in the cluster. We wrote about this process here.

After implementing this manually (to learn how the tools work behind the scenes), we found eksctl, a command-line tool to create, upgrade, or manage your cluster and its nodes with your AWS account.

To use it, we just needed to create a user with the 4 policies provided in the documentation. With that in place, we are now able to create the cluster with a single command. The tool also accepts a settings file to configure our requirements for the cluster. Something like:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cebroker-test-cluster
  region: us-east-1
nodeGroups:
  - name: MediumTypeGroup
    instanceType: t3.medium
    desiredCapacity: 1
    volumeSize: 20
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyName: eks-cebroker-nodes
  - name: LargeTypeGroup
    instanceType: c5.xlarge
    desiredCapacity: 1
    volumeSize: 50
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyName: eks-cebroker-nodes

In the YAML above, we are requesting a cluster in the us-east-1 region with two node groups of different instance types, t3.medium and c5.xlarge. We can also easily set the volume size each node will have. eksctl uses this file to provision the necessary networking in AWS and set up the whole cluster automatically.
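Assuming the file is saved as cluster.yaml, bringing everything up is then that single command: eksctl create cluster -f cluster.yaml.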

Finally, we sent all our deployment files directly to EKS, using kubectl to interact with the cluster. With this last step, we shipped our first services on a K8s cluster, enjoying the advantages of the cloud.
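In practice, that means pointing kubectl at the new cluster, for example with aws eks update-kubeconfig --name cebroker-test-cluster, and then running kubectl apply -f <deployment-file> for each object.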

Of course, we are just getting started with these new ways of managing our infrastructure, and we know there are many things left to research and implement, such as:

  • Ways to monitor our resources and the general status of our cluster.
  • How to get the logs out of the K8s cluster.
  • Which services we should implement to track the network.
  • How we should implement CI/CD for those K8s services.

I hope this post has been useful to those of you just starting out in the K8s world. We will continue sharing our new discoveries about Kubernetes with you, so wait for the next post 🔥


Flavio AAndres
Condor Labs Engineering

I like to experiment with everything. Backend developer at @condorlabs.io. NodeJS/AWS/Serverless/+