Save on your AWS bill with Kubernetes Ingress

César Tron-Lozai
Published in ITNEXT · 6 min read · Jun 12, 2018


The wonders of Kubernetes

One of the first concepts you learn when you get started with Kubernetes is the Service.

Internal Services allow for pod discovery and load balancing. If you need to make your pods available on the Internet, I thought, you should use a Service of type LoadBalancer. This has a different effect depending on the cloud provider; on AWS, for example, it creates an ELB for each externally exposed service.
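For example, exposing an app directly looks something like this (a minimal sketch with a hypothetical my-app service; on AWS it provisions its own dedicated ELB):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical app name
spec:
  type: LoadBalancer    # on AWS, this creates a dedicated ELB for this one service
  selector:
    app: my-app
  ports:
  - port: 80            # port exposed on the ELB
    targetPort: 8080    # illustrative container port
```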

ELBs are amazing but they are not cheap: each one costs about $20 a month, plus extra per GB of data processed.

If you’ve got several services, this quickly leads to a hefty bill.

Here comes the Ingress

It took me some time to get familiar with Kubernetes Ingress.

Kubernetes is good at abstracting common problems and giving you different concrete implementations for them.

This is exactly what Ingresses do for exposing your endpoints externally: you specify in a common format how certain services should be routed, and you leave the job of implementing that routing to an Ingress Controller.

For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

This Ingress resource expects requests made to the host foo.bar.com and routes the traffic to service s1 or s2 depending on the request path. You can also route based on the host (example below).

To be clear, the Ingress does not replace internal services. You still want an internal Service for each of your apps/micro-services, so that your pods are always accessible at the same address, regardless of how they die or get created. The Ingress then routes external traffic to those internal services:

Request => Ingress => (Internal) Service => Pod
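Concretely, each backend named in an Ingress is just an ordinary internal Service. A sketch for s1 from the example above (the selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s1              # referenced by the Ingress as serviceName: s1
spec:
  # no type: LoadBalancer, so this defaults to ClusterIP and no ELB is created
  selector:
    app: s1             # illustrative pod selector
  ports:
  - port: 80            # port the Ingress routes to
    targetPort: 8080    # illustrative container port
```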

By default Kubernetes comes with no Ingress Controller installed. It is your job to pick one (or more if you have more complex needs).

My requirements

We use Kubernetes namespaces to separate our different environments. For example, in a single Kubernetes cluster we might have test, demo, and staging namespaces. I don’t want to have a separate ELB for each; that is too expensive.

I want to point the DNS for test, demo, and staging to a single endpoint:

test.blop.org / demo.blop.org / staging.blop.org => single ELB => cluster

ALB Ingress controller

I first came across the ALB Ingress Controller and it sounded very promising. Instead of creating an Elastic Load Balancer, it can create the newer Application Load Balancer.

So this Ingress Controller can transform an Ingress resource such as:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "alb"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
  - host: bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bar
          servicePort: 80

Into a single ALB with two target groups: one for the foo service when the host is foo.com, and one for the bar service when the host is bar.com.

However, at the time of writing, there is a major issue for me:

The ALB ingress controller does not support routing across multiple namespaces

If you create one Ingress resource in test and one in staging, the ALB Ingress Controller will create two ALBs, which defeats the whole point. I’ve raised this issue on GitHub but it doesn’t seem to be moving yet.

Solution 1: NGINX Ingress controller

Some time ago I came across this GitHub issue about cross-namespace Ingress, but unfortunately I didn’t read it carefully enough. After giving it a second read I realised the solution was really simple with the NGINX Ingress Controller.

The architecture is pretty simple: you have a single ELB pointing to NGINX, which distributes the traffic internally:

NGINX Ingress Controller

And the good news is that this is really easy to setup too!

  1. Install the NGINX Ingress Controller as explained here.
  2. In each of your namespaces, define an Ingress resource

For test:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingresse-test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: test.blop.org
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
        path: /

For staging:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingresse-staging
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: staging.blop.org
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
        path: /

For demo:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingresse-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo.blop.org
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
        path: /

And voilà! There is nothing more to do. The NGINX Ingress Controller will process those resources and automatically create a single ELB. Then you simply point your test, demo, and staging DNS to the ELB and the job is done!

If you want to, you can tune NGINX by writing ConfigMaps, but you don’t have to. It works well out of the box.
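For instance, tuning happens through the controller’s ConfigMap. A sketch, where the keys are standard NGINX Ingress options and the values are purely illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # the ConfigMap the controller is started with
  namespace: ingress-nginx
data:
  proxy-body-size: "10m"       # illustrative: raise the maximum request body size
  use-gzip: "true"             # illustrative: enable gzip compression
```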

For better availability you can increase the number of replicas of the nginx-ingress-controller deployment:

kubectl -n ingress-nginx scale deploy nginx-ingress-controller --replicas=3

SSL termination

You can easily terminate SSL traffic too. You can either choose to terminate SSL at the ELB level, or with NGINX.

Personally I prefer terminating SSL at the ELB as it is very easy to set up (a single annotation when installing the NGINX Ingress Controller). But others might like the additional control offered by NGINX.
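Terminating SSL at the ELB comes down to annotating the Service that fronts the NGINX Ingress Controller, much like the Ambassador example later in this post. A sketch, where the Service name is hypothetical and the ACM certificate ARN is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx          # hypothetical name for the controller's Service
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."  # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80             # SSL is already stripped by the ELB
```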

Solution 2: Ambassador

The second solution I came across doesn’t actually use Ingress.

Ambassador is a Kubernetes-native microservices API gateway built on top of the Envoy Proxy.

It offers a similar architecture to NGINX Ingress controller:

Ambassador pods route the traffic in your cluster

First you need to install Ambassador, which is very easy to do.

kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml

This will install 3 Ambassador pods which will route the traffic to your pods. This redundancy provides high availability and helps with scaling the load.

Then you need to create a LoadBalancer-type Service pointing to those pods. On AWS you do it like this:

apiVersion: v1
kind: Service
metadata:
  name: ambassador-main
  namespace: ambassador
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your_aws_cert_for_https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Module
      name: ambassador
      config:
        use_proxy_proto: lower
        use_remote_address: true
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 443
    targetPort: 80
  selector:
    service: ambassador

This will create a single ELB. As above, you can then point all your DNS names to this endpoint and let Ambassador do the routing.

With Ambassador you don’t need Ingress resources; you simply add annotations to your services. So in our case, if we had a service for our bar application in test, staging, and demo, we would simply add the getambassador.io/config annotation.

For test:

kind: Service
apiVersion: v1
metadata:
  name: test-bar
  namespace: test
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: test-bar
      prefix: /
      service: test-bar:8080
      host: test.blop.org
spec:
  type: NodePort
  selector:
    app: bar
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

For staging:

kind: Service
apiVersion: v1
metadata:
  name: staging-bar
  namespace: staging
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: staging-bar
      prefix: /
      service: staging-bar:8080
      host: staging.blop.org
spec:
  type: NodePort
  selector:
    app: bar
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

And for demo:

kind: Service
apiVersion: v1
metadata:
  name: demo-bar
  namespace: demo
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: demo-bar
      prefix: /
      service: demo-bar:8080
      host: demo.blop.org
spec:
  type: NodePort
  selector:
    app: bar
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

Ambassador also provides additional features like:

  • gRPC support
  • Authentication
  • Rate limiting
  • Istio (Service mesh) integration

If you want to learn more about Ambassador check it out here.

If you, like me, have been using multiple ELBs for too long, I hope this simple solution can make your AWS bill a bit lighter!

It goes without saying that this architecture works with GCE and Azure too.

PS: Other Ingress controllers worth mentioning:


Head of engineering. Java, Scala, Haskell. Works with Kubernetes. Follow me on Twitter @cesarTronLozai