Multi-cluster Kubernetes load balancing in AWS with Yggdrasil

Yggdrasil, from Norse mythology, is the world tree that links the nine realms.

Yggdrasil is a tool we wrote to allow our services to be load balanced across multiple Kubernetes clusters running in AWS. It behaves as an Envoy control plane, generating configuration from Kubernetes Ingress resources. Yggdrasil is agnostic to the Ingress controller allowing it to work with existing resources.

At uSwitch we’re running almost everything on Kubernetes (you can read more about that here). It’s brought us a lot of benefits, but people were concerned it would introduce a single point of failure.

In an ideal world nothing would ever break, but we all know that occasionally an upgrade can go awry or some unanticipated scenario can take out your cluster. For some of our most important applications this was a deal breaker because going down even for a few minutes could lose us a lot of money.

To mitigate against a cluster-wide outage, we wanted to be able to deploy the same application to multiple Kubernetes clusters and have intelligent load balancing between them. Unfortunately there wasn’t an easy way to achieve this when running Kubernetes in AWS.

Federation has come a long way but federated Ingress only works on GCP. Heptio’s Contour was potentially an option but we wanted something that wouldn’t require drastically changing our existing setup; we just wanted to put something on top of it.


The proposal

We decided early on that we wanted the solution to build on top of things that were already familiar to developers. We didn’t want deploying an application to multiple clusters to be vastly different from deploying to a single one.

As almost all our applications were being exposed via Ingress, it made sense to build on top of that. However, Ingresses can be created, updated and deleted very rapidly, so we wanted something that could cope with this kind of dynamic environment and Envoy seemed like an ideal choice due to its dynamic configuration capabilities, circuit breaking, health checking, etc.

We started off with a small Envoy cluster with static config for each Ingress that we wanted to be load balanced across multiple clusters. Envoy would serve the same host as the one defined in the Ingress, with each cluster's Ingress load balancer configured as an address for that host.

For example, example.com would have ingress.a.com, ingress.b.com and ingress.c.com as addresses causing it to load balance across them. The diagram below helps describe this topology.
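As an illustration, the static config for that example might have looked something like the sketch below. This is not our exact configuration; the port, health-check path, and timeouts are assumptions, and the real setup also included listeners and routes.

```yaml
# Sketch of a static Envoy cluster (v2-era syntax) fanning example.com
# out across three Ingress load balancers. Values are illustrative.
static_resources:
  clusters:
  - name: example_com
    connect_timeout: 1s
    type: STRICT_DNS          # resolve each ingress hostname via DNS
    lb_policy: ROUND_ROBIN
    # Active health checking lets Envoy take a broken cluster's ingress
    # out of rotation until it recovers.
    health_checks:
    - timeout: 5s
      interval: 10s
      unhealthy_threshold: 3
      healthy_threshold: 2
      http_health_check:
        path: /healthz        # assumed health endpoint
    hosts:
    - socket_address: { address: ingress.a.com, port_value: 443 }
    - socket_address: { address: ingress.b.com, port_value: 443 }
    - socket_address: { address: ingress.c.com, port_value: 443 }
```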

We tested this out with one of our non-critical production services. We configured it to be served via Envoy, deployed it to each cluster and then scaled the number of replicas to zero in one of the clusters. Our monitoring showed that no users saw errors while Envoy continued to health check the broken cluster.

Below you can see a graph of requests going to an application running in multiple clusters behind Envoy.

You can see where we turned off the application in Cluster B: requests drop to close to zero for that cluster and requests increase in the other clusters as load is redistributed.

A few requests remain in cluster B as Envoy is checking whether it is healthy again. Once it’s turned back on, Envoy starts redistributing the load back to that cluster.

This was a good start and developers soon started using it. Despite this, the setup was static, and adding multi-cluster support for a new Ingress required some understanding of Envoy's configuration language. And, as more people used it, the config grew and became unwieldy. We needed something that could automate the Envoy configuration for us.

Enter Yggdrasil

Envoy was built with the ability to discover its configuration dynamically via gRPC. This is where Yggdrasil comes in. Yggdrasil is an Envoy control plane, generating configuration for Envoy dynamically from Ingress objects in a Kubernetes cluster. It regularly searches for Ingresses of a specific class and uses the host and load balancer address from each Ingress object to create Envoy config, combining Ingresses from different clusters that share the same host. The Envoy nodes are then deployed with very minimal config; they're simply pointed at Yggdrasil as a source of dynamic config.
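A minimal Envoy bootstrap along these lines might look like the following sketch. The Yggdrasil hostname and port here are assumptions; see the project README for the exact settings.

```yaml
# Sketch of a minimal Envoy bootstrap (v2-era syntax) that fetches all
# listener and cluster config from Yggdrasil over gRPC.
node:
  id: envoy-node            # illustrative node identity
  cluster: envoy-cluster
dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc: { cluster_name: yggdrasil }
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc: { cluster_name: yggdrasil }
static_resources:
  clusters:
  # The only static cluster is Yggdrasil itself; everything else is
  # discovered dynamically.
  - name: yggdrasil
    connect_timeout: 1s
    type: STRICT_DNS
    http2_protocol_options: {}   # xDS requires HTTP/2
    hosts:
    - socket_address: { address: yggdrasil.internal, port_value: 8080 }  # assumed address
```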

Once this was deployed it became relatively trivial to set up an Ingress that was multi-cluster: you just create an Ingress with the right Ingress class, deploy it to as many clusters as you like (our Continuous Integration tool can target multiple clusters at once) and watch as traffic is automatically load balanced across them.
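As a sketch, such an Ingress might look like this (the class name `multi-cluster` is hypothetical; Yggdrasil is told which class to watch, and the service name and port are illustrative):

```yaml
# Deploy the same Ingress to each cluster; Yggdrasil merges entries
# that share a host into one Envoy config.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: multi-cluster  # hypothetical class name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: example   # illustrative service
          servicePort: 80
```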


What’s next

Yggdrasil allowed us to get load balancing across our clusters in a fairly simple way that did not require us to change the way our clusters were set up, or create more methods of communication between them.

However, it does have some limitations: for example, you can’t have it communicate with a different set of certificates on a per-ingress basis, which can be a problem for apps that require mutual TLS. The setup is fairly opinionated and based around what we were doing, but I think the fundamental building blocks are now in place and we can definitely look at doing more complex things with it in the future. The project is totally open source, so if anyone wants to try it out, give feedback, raise issues, etc., please do.

https://github.com/uswitch/yggdrasil

If you want a more detailed look at how to go about creating an Envoy control plane, we will be writing another blog post going into more depth in the coming weeks.


We’re looking for more people to join uSwitch’s Cloud Infrastructure team, which works on tools like Yggdrasil and runs our Kubernetes clusters. Our careers page has more information on becoming a Platform Engineer at uSwitch.