Kubernetes in practice: what to expect if you migrate to Kubernetes

Mazen Ammar
Turo Engineering
Jun 6, 2018

Recently, the engineering world has been abuzz over the microservices architecture and the benefits microservices can provide for large-scale applications. As Turo accelerates as the leader in the car-sharing space, and toward our mission of putting the world’s 1 billion cars to better use, we realized in early 2017 that we would soon hit the point at which our monolithic application would start to hinder that progress.

With the future of Turo in mind, we decided it was time to migrate to microservices. But breaking a monolithic application into microservices is not as simple as just separating the code: the resulting services need to communicate with each other, be deployed independently, and be easily managed. In other words, we would need a tool to help us orchestrate and manage our new microservices effectively. Enter Kubernetes.

What is Kubernetes?

Kubernetes is a container orchestration tool developed and open-sourced by Google. It is a platform that is designed to completely manage the life cycle of containerized applications with scalability and availability in mind — exactly what we would need for microservices.

Kubernetes allows you to define what your system should look like and how it should behave, and it works to make the running system match that desired state. For example, imagine you have 5 microservices running in Kubernetes. You want all of these microservices to have health checks, and to restart if the health checks fail twice in a row. You want 3 of these microservices to be private, for internal use only by the other microservices. The other 2 should have external load balancers. With some minor configuration, Kubernetes can set up and manage all of that for you.
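To make that concrete, here’s a rough sketch of the configuration for one of the private microservices. All names, images, and ports below are hypothetical, not our actual setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pricing-service
  template:
    metadata:
      labels:
        app: pricing-service
    spec:
      containers:
        - name: pricing-service
          image: example/pricing-service:1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:          # the health check; the container restarts
            httpGet:              # after failureThreshold consecutive failures
              path: /healthz
              port: 8080
            failureThreshold: 2
---
apiVersion: v1
kind: Service
metadata:
  name: pricing-service
spec:
  type: ClusterIP               # private: reachable only inside the cluster
  selector:
    app: pricing-service
  ports:
    - port: 80
      targetPort: 8080
```

For the two public microservices, the Service would use type: LoadBalancer instead of ClusterIP, which provisions an external load balancer on supported cloud providers.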

On the flip side, you can also tell Kubernetes how you expect your system to adapt. For example, imagine you have a microservice that for the most part has very low traffic, but at certain points (and maybe even randomly) gets very large spikes of traffic. Kubernetes can automatically scale the number of pods to match your current traffic (a pod is a group of one or more containerized applications — in this context, it would represent an instance of a microservice).
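Concretely, that kind of adaptation is configured with a HorizontalPodAutoscaler. Here is a minimal sketch; the service name and thresholds are made up for illustration, and average CPU utilization is just one common scaling trigger:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: search-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: search-service              # the deployment to scale
  minReplicas: 2                      # baseline for the usual low traffic
  maxReplicas: 20                     # ceiling for the spikes
  targetCPUUtilizationPercentage: 70  # add pods when average CPU passes 70%
```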

If you’re new to Kubernetes and would like to learn more, check out the links at the end for some great resources for getting started. Now let’s dive into our experience migrating to Kubernetes and what we learned along the way.

The Migration

In January of 2018, Turo officially began running on Kubernetes. Although our application is still mostly a monolith, all of our services now run on Kubernetes. So how’d it go? How’s it been since? For the most part, Kubernetes has definitely lived up to the hype! I will say, however, that it can sometimes be apparent just how young Kubernetes is — the initial release was in 2014. Since there are many reviews of Kubernetes’ functionality out there, I will focus more on our experience using Kubernetes, and the lessons we learned along the way.

Adding new microservices is incredibly easy.

Turo’s goal is to fully decompose our monolithic application into microservices, and to add any new functionality as microservices whenever possible. With Kubernetes, we have been able to do so efficiently and with ease. Since we already have our Kubernetes cluster up and running, adding a new microservice is as simple as building a Docker image for it and writing a few short YAML configuration documents to describe the desired state of that microservice (e.g. how many pods, public or private, etc.). Push the YAML up to the cluster with one command, and Kubernetes will automatically begin deploying your new microservice. It really is that simple.
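As an illustration, a hypothetical new-service.yaml might look like this (the names and image are made up, and a Service definition like the one sketched earlier would accompany it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-service
spec:
  replicas: 3                 # how many pods to run
  selector:
    matchLabels:
      app: new-service
  template:
    metadata:
      labels:
        app: new-service
    spec:
      containers:
        - name: new-service
          image: example/new-service:1.0.0   # the Docker image you built
```

With the manifest written, a single "kubectl apply -f new-service.yaml" pushes it to the cluster, and Kubernetes begins rolling out the new microservice.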

Logging just kinda happens.

As every engineer knows, logs are invaluable for debugging. At Turo, we use Sumo Logic as our log aggregator/manager. The amazing folks over at Sumo Logic built an open-source Sumo Logic+FluentD log shipper for Kubernetes. By adding a pod running the Sumo Logic+FluentD Docker image to each node (i.e. “physical” server) in our Kubernetes cluster, every pod (and every container inside it) logs all standard output straight to Sumo Logic, categorized and parsed for easy searching… without any additional configuration. It’s like magic. This means that any new microservice will automatically start logging as soon as it’s up and running inside your Kubernetes cluster.
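The “one pod per node” pattern described here is what Kubernetes calls a DaemonSet. Here is a simplified, illustrative sketch of the idea; the image name and volume paths are assumptions, and the real Sumo Logic collector ships with its own manifests and configuration:

```yaml
apiVersion: apps/v1
kind: DaemonSet                # schedules exactly one pod on every node
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      containers:
        - name: fluentd
          image: example/fluentd-sumologic:1.0   # illustrative image name
          volumeMounts:
            - name: varlog
              mountPath: /var/log      # where container stdout logs land
      volumes:
        - name: varlog
          hostPath:
            path: /var/log             # read the logs from the node itself
```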

A large, active, and supportive community.

Go take a look at the Kubernetes GitHub repo. Look at some of the stats — number of contributors, number of pull requests, the last commit timestamp — it’s incredible just how many people all over the world are contributing to Kubernetes every day. On Slack, there’s a Kubernetes workspace open to all that has a #kubernetes-novice channel just to help new Kubernetes users with any questions or issues they may be facing. The community makes Kubernetes as great as it is. And as the community continues to grow, so does Kubernetes. It is constantly being improved by the community, and perhaps more importantly, it is being driven by the community.

We had to turn to Spinnaker to continue using Red/Black deployments.

At Turo, we use a Red/Black (aka Blue/Green) deployment strategy: we build and deploy the new version, test it before it receives traffic, and then swap it in for the old version so that live traffic starts flowing to the new version. This reduces downtime and allows for testing in a production environment before any live traffic arrives. Unfortunately, Kubernetes does not, at the time of this writing, have built-in Red/Black deployments, so we had to decide between building the functionality ourselves or finding a tool that provides it. We went with Spinnaker, a continuous delivery platform open-sourced by Netflix, which supports Red/Black deployments out of the box.
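For intuition, the heart of a Red/Black swap can be approximated in plain Kubernetes terms with two Deployments behind one Service, flipping the Service’s selector once the new version checks out. This is a manual sketch of what Spinnaker automates for us, and all names are hypothetical:

```yaml
# Both my-service-red (old) and my-service-black (new) Deployments are
# running; live traffic follows the Service selector below.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
    version: black    # was "red" -- changing this one label swaps traffic
  ports:
    - port: 80
      targetPort: 8080
```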

Make sure to set your resource constraints!

With Kubernetes, you can set the desired amount (requests), as well as the hard maximum (limits), of resources (CPU and memory) that a pod should use, which is awesome. However, you can also simply leave those undefined, and the pod may consume an unbounded amount of resources. This is extremely dangerous, and can crash an entire node in your cluster (it happened to us several times in our staging cluster!).

Imagine you introduce a bug that causes one of your microservices to spike memory usage. If there are no resource constraints set, this microservice could attempt to use all of the node’s memory, causing the node to grind to a halt and crash. Not only that, but because Kubernetes is always working to keep all of your microservices up and running, it will try to reschedule the lost pods onto other nodes, creating a domino effect in your cluster. You should ALWAYS set resource constraints for all of your pods. ALWAYS.
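The fix is only a few lines per container. Here is a sketch with hypothetical values (a fragment of a Deployment’s pod spec, like the ones above):

```yaml
containers:
  - name: my-service
    image: example/my-service:1.0.0
    resources:
      requests:            # what the scheduler reserves for the pod
        cpu: 250m          # a quarter of a CPU core
        memory: 256Mi
      limits:              # the hard cap
        cpu: 500m          # throttled above half a core
        memory: 512Mi      # exceeding this gets the container killed
```

With a memory limit in place, the buggy microservice from the example above gets killed and restarted on its own, instead of taking the whole node down with it.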

While Kubernetes is not perfect, it has made migrating to and running a microservices architecture so much easier. Yes, there’s some missing functionality here and there, but it’s important to remember that Kubernetes is still young. I’m really excited to see what the future will bring for Kubernetes. If you’re running a microservices architecture, or considering a platform to run one, definitely take a look at Kubernetes.

Resources

The fundamental concepts of Kubernetes:
https://kubernetes.io/docs/concepts/

Interactive tutorial of Kubernetes via command line:
https://kubernetes.io/docs/tutorials/kubernetes-basics/

Create Hello World in a local Kubernetes cluster:
https://kubernetes.io/docs/tutorials/hello-minikube/

Useful commands for Kubernetes’ command line tool:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
