Ingress Deep Dive

Maria Valcam
Published in Quiqup Engineering
3 min read · Aug 17, 2018

It has been over a year since Quiqup started using Kubernetes for its production environment. Most of you will know about the main resources that Kubernetes offers: Deployments, Pods, Services and Ingresses. Today, I want to explain the magic beneath Ingresses.

Introduction to Ingresses

Services and Pods are only reachable from inside the cluster; in simple words, we cannot access them through the Internet. By defining an Ingress resource, we specify how we want external clients to connect to our Services.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.quiqup.com
    http:
      paths:
      - backend:
          serviceName: core-api-app
          servicePort: 80

An Ingress Controller configures itself by reading the Ingress resources and routes the incoming traffic accordingly. It acts like an L7 proxy, with the ability to terminate SSL and define routing rules based on the domain name and path of the request.
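For example, a hypothetical Ingress that terminates SSL and routes by path could look like the sketch below; the secret name and the admin-app Service are made up for illustration.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: api-ingress
spec:
  tls:
  - hosts:
    - api.quiqup.com
    secretName: api-quiqup-tls     # hypothetical Secret holding the TLS certificate
  rules:
  - host: api.quiqup.com
    http:
      paths:
      - path: /admin
        backend:
          serviceName: admin-app   # hypothetical Service for admin traffic
          servicePort: 80
      - path: /
        backend:
          serviceName: core-api-app
          servicePort: 80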

Deep Dive

In this section, I will explain our current setup and then guide you through an example.

Setup

The Ingress Controller does not come built into Kubernetes, so for this to work we need to deploy the Ingress Controller pods ourselves. At Quiqup, we use HAProxy and deploy it using a StatefulSet (this gives us an HAProxy instance on every node of our cluster). The Ingress Controller then reads all the Ingress resources from the Kubernetes API to configure itself.
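As a rough illustration (the names and image below are placeholders, not our exact manifest), the controller StatefulSet looks something like this:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: haproxy-ingress
spec:
  serviceName: haproxy-ingress    # name of the governing headless Service required by StatefulSets
  replicas: 3                     # one HAProxy pod per node in our cluster
  selector:
    matchLabels:
      app: haproxy-ingress
  template:
    metadata:
      labels:
        app: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: haproxy-ingress-controller:latest   # placeholder controller image
        ports:
        - containerPort: 80
        - containerPort: 443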

Now we have 3 configured HAProxy pods but… Pods do not have connectivity to external networks. We need something with an external IP that routes all the incoming traffic to our HAProxy pods. To do this, we can define a special type of Service: a LoadBalancer Service.

kind: Service
spec:
  type: LoadBalancer
  ports:
  ...

This will create a load balancer in GCP and give us an external IP that clients can use to send their requests. We then define DNS records that link our domains to this load balancer’s IP.
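For completeness, a fuller version of that Service might look like the sketch below; the name and selector labels are assumptions matching the controller pods described above, not our exact manifest.

kind: Service
apiVersion: v1
metadata:
  name: haproxy-ingress-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: haproxy-ingress          # route traffic to the HAProxy controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

Once the cloud provider has provisioned the load balancer, kubectl get service haproxy-ingress-lb shows the external IP under the EXTERNAL-IP column, and that is the address our DNS records point at.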

Reaching QuiqDash

To understand how it really works, I will guide you through an example. Suppose we want to connect to quiqdash.quiqup.com:

  • To start this workflow, I open a browser and type https://quiqdash.quiqup.com.
  • My browser will try to resolve the quiqdash.quiqup.com domain name using a DNS server. The DNS server will find the DNS record that we previously created and return the IP of our HAProxy load balancer.
  • The browser will send the HTTPS request to our HAProxy load balancer.
  • Our load balancer will terminate SSL and send an HTTP request to the quiqdash Service.
  • kube-proxy (a Kubernetes component deployed on every node) will receive the request to the quiqdash Service and redirect it to the correct pod; if there are several pods, it will load-balance the request (see the sketch after this list).
  • Finally, our QuiqDash pod will receive the request and can reply to the client.
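To make those last two steps concrete, here is a minimal sketch of what the quiqdash Service could look like; the labels are assumptions. kube-proxy uses the selector to find the matching pods and load-balances between them.

kind: Service
apiVersion: v1
metadata:
  name: quiqdash
spec:
  selector:
    app: quiqdash        # kube-proxy forwards traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80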

Conclusion

I hope the Kubernetes world is now less of a mystery. Keep in mind that there is no magic behind Kubernetes; it all builds on protocols and structures that anyone can inspect and understand.

Please leave a comment if you liked this blog post.
