IKS Deployment Patterns #1: Single-Zone Cluster, App exposed via LoadBalancer (NLB) and ALB (Ingress Controller)

What is the simplest way to start? 
How can I deploy and expose my application in my IBM Cloud Kubernetes Service (IKS) directly with the LoadBalancer service?
How can I preserve the source IP address of the clients connecting?

LoadBalancer vs. ALB / Ingress Controller

When should I use which one?
The LoadBalancer service is typically a Layer 4 (in OSI terms) load balancer and is implemented using an NLB (Network Load Balancer). For a Kubernetes cluster this typically means TCP and UDP (in some cases SCTP). The LoadBalancer service has no concept of the higher layers: it does not understand HTTP, for example, which is a Layer 7 protocol.

ALBs / Ingress controllers are fundamentally reverse proxies and are typically used when the application speaks a protocol the proxy understands, so the proxy can provide additional features, functionality and therefore value. Typically microservices are reached (and even talk to each other) over HTTP. If this is the case, a Layer 7 proxy that can make smart decisions based on HTTP headers, GET/POST parameters, cookies, etc. is a great tool for request routing and application-level load balancing, incorporating higher protocol level (L7) information in the routing decisions. ALBs / Ingress controllers typically run as user-space daemons in Kubernetes pods.

If the protocol is unknown to the ALB (a binary protocol such as MQTT, RTMP, MySQL, PostgreSQL, etc.), a proxy-like load balancer such as the ALB does not give much benefit over a Layer 4 load balancer such as the LoadBalancer service. Therefore, if your ALB will not process HTTP requests (that is, it does not terminate the TLS connection for HTTPS), we suggest you use the IKS LoadBalancer service: it is more efficient and faster at packet processing and forwarding, it keeps the source IP address of the connecting clients, and it scales horizontally across multiple worker nodes.
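As an illustration, exposing a plain-TCP workload such as an MQTT broker through the LoadBalancer service needs nothing more than a Service of type LoadBalancer. A minimal sketch (the service name, app label and broker port here are illustrative assumptions, not from this article):

```yaml
# Layer 4 exposure of a binary protocol: no ALB / Ingress involved.
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker          # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: mqtt-broker         # must match the labels on your broker pods
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883               # standard MQTT port
    targetPort: 1883
```

The NLB simply forwards TCP packets to the matching pods; no part of the MQTT payload is inspected.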

Example Deployment Pattern

In this article we are going to go through the steps to deploy an example application with the deployment pattern shown below:

Steps to Expose App directly via LoadBalancer

  1. Sign up and create a single-zone IKS cluster using the IBM Cloud Console. See the documentation on deploying a cluster and, specifically, on how single-zone clusters work. Important: you have to use the paid tier.
  2. Download and apply the following example Deployment and Service resource YAML, which exposes the echoserver application via the LoadBalancer service on port 1884.
    You can also apply it directly:
    $ kubectl apply -f https://raw.githubusercontent.com/IBM-Cloud/kube-samples/master/loadbalancer-alb/iks_single-zone_cluster_app_via_LoadBalancer.yaml
  3. Check the IP address of the LoadBalancer service:
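The linked manifest boils down to roughly the following shape (the image name and label values here are assumptions for illustration; port 1884 and externalTrafficPolicy: Local are the settings this article relies on):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google-containers/echoserver:1.10   # assumed image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: echoserver
  ports:
  - port: 1884                   # the port exposed on the load balancer IP
    targetPort: 8080
```

Once the load balancer is provisioned, `kubectl get svc` lists the assigned address under the EXTERNAL-IP column.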

Test the App

  1. To test, load the IP:port you specified in your browser or issue curl commands (like in my example): 
    $ curl http://{your IP here}:1884/
  2. You should see a response like the following:

You can see the source IP address in the client_address field because we applied the externalTrafficPolicy: Local in the LoadBalancer Service resource.

Steps to Expose App via the ALB / Ingress Controller

  1. Sign up and create a single-zone IKS cluster using the IBM Cloud Console. See the documentation on deploying a cluster and, specifically, on how single-zone clusters work. Important: you have to use the paid tier in order to use ALBs.
  2. Check that everything came up and the ALBs are running fine. You can find useful commands in the IKS Ingress/ALB cheat sheets.
  3. Download, edit and apply the following example Deployment and Ingress resource YAML, which will expose the echoserver application via the ALB / Ingress controller on both port 80 (HTTP) and 443 (HTTPS). 
    $ kubectl apply -f iks_single_or_multi-zone_cluster_app_via_ALB.yaml
    Note: do not forget to edit the Host and secretName parts first.
  4. To test, load the host you specified in your browser or issue curl commands (like in my example): 
    $ curl https://echoserver.arpad-ipvs-test-aug14.us-south.containers.appdomain.cloud/
  5. You should see a response like the following:
Response to a successful curl delivered via the IKS ALB
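For reference, the Ingress resource you edit in step 3 has roughly this shape. This is a sketch using the networking.k8s.io/v1 API (the linked file may use an older apiVersion), and the placeholder host, secret and backend service values are assumptions you must replace with your own:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver-ingress
spec:
  tls:
  - hosts:
    - echoserver.<your-cluster-ingress-subdomain>   # the Host value to edit
    secretName: <your-cluster-ingress-secret>       # the secretName value to edit
  rules:
  - host: echoserver.<your-cluster-ingress-subdomain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver    # assumed backend service name
            port:
              number: 8080
```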

Notice that in the x-forwarded-for and x-real-ip headers you see the IP address of the worker node. This happens because kube-proxy does source NAT within the Kubernetes cluster and masks the original source IP of the client.

If you want to enable source IP preservation, you have to patch the IKS ALB (you can find further documentation about this step here). To set up source IP preservation for all public ALBs in your cluster, run the following command:

$ kubectl get svc -n kube-system |grep alb | awk '{print $1}' |grep "^public" |while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec": {"externalTrafficPolicy":"Local"}}'; done

Once the patch is applied, you should see the original source IP address of the client in the x-forwarded-for and x-real-ip headers:
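The service-selection part of that pipeline is plain text filtering, so you can sanity-check it locally against sample `kubectl get svc -n kube-system` output (the service names and IPs below are made up):

```shell
# Feed fabricated service listing lines through the same grep/awk filters
# used above; only the public ALB service names should survive.
printf '%s\n' \
  'public-cr123-alb1    LoadBalancer   172.21.55.1   169.46.17.2   80:31842/TCP' \
  'private-cr123-alb1   LoadBalancer   172.21.55.2   10.73.10.4    80:30553/TCP' \
  'kube-dns             ClusterIP      172.21.0.10   <none>        53/UDP,53/TCP' \
  | grep alb | awk '{print $1}' | grep "^public"
# prints: public-cr123-alb1
```

Each surviving name is then fed to `kubectl patch` to set externalTrafficPolicy to Local on that ALB service.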

Summary

As you learn more about your workload, you can adjust and even switch between patterns as needed. Different applications require different patterns; please let us help you with your pattern! To read about other patterns, follow this link to the IBM Cloud Blog or this one on Medium.com.

Contact us

If you have questions, engage our team via Slack: register here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.