Kubernetes Ingress, why I don’t use it

Unfortunately, as we all know, the Kubernetes documentation is not great. This is why there's a lot of confusion out there about many topics, and one of them is certainly Kubernetes Ingress.

What is an ingress?
Very simple! An Ingress is just a layer that proxies traffic to different services based on hostname or path, so that a single load balancer can be used for several microservices.

An infrastructure without ingress will look like this:

With ingress the infrastructure schema will be like this:

So now you don't have to worry about having different load balancers: you can just specify paths, hosts, and destination services in the YAML file that defines your Ingress (see more at https://kubernetes.io/docs/concepts/services-networking/ingress/).
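To make this concrete, here is a minimal sketch of such an Ingress manifest, routing two paths on one hostname to two backend Services. The hostname, service names, and ports are hypothetical placeholders, and the exact API version available depends on your cluster:

```yaml
# Sketch of an Ingress routing two paths to two services.
# Hostname, service names, and ports are made up for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users-service   # hypothetical backend
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service  # hypothetical backend
                port:
                  number: 80
```

A single load balancer in front of the Ingress controller then serves both services, which is exactly the property discussed below.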

It’s cool, isn’t it?

Not really… and I’ll tell you why.

When I think about a production infrastructure, I always try to design things with high availability in mind. That obviously means being able to monitor and prevent downtime of every single piece of the system. And when it comes to deploying microservices, I always want to be sure that any of them can be deployed without affecting the others, which means, first of all, that every deploy must be completely isolated from the other ones.

From my experience, I've seen the number of production deployments go, thanks to Kubernetes and a microservices approach, from no more than one per day up to 20 or 30 per day (on different services, obviously). This is where having a single Ingress can be painful, for different reasons:

  1. if you need to update a configuration, add a path for a new service, or add a new hostname, you need to update your Ingress. This means (if you are in a continuous integration environment) that you'll have to add a step to your pipeline to check the Ingress configuration on every deploy (so you're somehow sharing a single layer between all microservices), or you need a separate pipeline to update your Ingress (which means a new microservice cannot be deployed independently but needs your Ingress to be updated)
  2. you're introducing complexity and a layer that can be a single point of failure (if it stops working, all your services become unreachable)
  3. another reason is that I always like designing production environments with monitoring in mind. Metrics are fundamental to being elastic and preventing downtime, so again, having different ELBs helps you build a more efficient monitoring strategy, because you know no single piece of the stack is shared

These are some of the reasons why I've decided, at least for now, not to use Kubernetes Ingress, together with the fact that in general most of the traffic between microservices happens inside the cluster; only a few services are exposed to the external world.
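The alternative sketched in this post, one load balancer per exposed service, can be expressed as a plain Service of type LoadBalancer. The names, labels, and ports here are hypothetical; on AWS this typically provisions a dedicated ELB per Service:

```yaml
# Sketch: each externally exposed microservice gets its own load balancer
# via a Service of type LoadBalancer. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  type: LoadBalancer
  selector:
    app: users        # hypothetical pod label
  ports:
    - port: 80        # load balancer port
      targetPort: 8080  # hypothetical container port
```

Each service deploys and fails independently, at the cost of one load balancer per exposed service.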

Obviously every application architecture is different and no single analysis is perfect for every scenario, so it's really up to you, but I hope I gave you some good ideas.
