Using Kong Ingress with Linode Kubernetes Engine in Akamai Connected Cloud

Brent Eiler
8 min read · Aug 22, 2023


So if you have read any of my previous posts, you have probably figured out that I spend a lot of my time building and playing around in the Akamai Connected Cloud. I recently needed to build various microservices that required authentication, and I decided to build them in LKE. To provide ingress, I chose Kong because I had used it in past projects, just never with LKE. I quickly realized there was no real documentation out there on setting up Kong as an ingress controller in LKE, especially if you need to integrate with NodeBalancers and read the X-Forwarded-For header, which I did for this project. So I decided to write this up to help anyone else who wants to run Kong as ingress in LKE.

The great part about this is that you can use the Helm chart for the initial setup, which saves a bunch of time. However, some modifications need to be made for your back-end containers to receive valid XFF headers; in fact, changes must be made just for those containers to receive traffic at all. In this post we will assume you have already deployed an LKE cluster (if not, see my previous posts). You will also need kubectl and Helm installed on your local machine. It is also helpful to have Git so you can clone the repo and therefore not need to create all the files from scratch. We will cover the following:

  1. Modify and Install Kong into the cluster using Helm
  2. Configure an application to use the new ingress
  3. Validate functionality with a container that echoes XFF and remote_addr

If you do not have an Akamai Connected Cloud/Linode account, sign up here for a free $100 credit.

Getting started

To get started, make sure you have installed Helm on your local machine or bastion. Also, make sure you have kubectl and a valid kubeconfig for the LKE cluster you would like to install Kong into. Last, you can either manually download the files from my repo or clone the repo: https://github.com/eilerb1011/LKE-Kong. I like to place all of these in one directory for ease of use.
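As a quick sanity check before diving in, the following should confirm that the tools are installed and the cluster is reachable (myconfig.yaml is the kubeconfig filename used throughout this post):

helm version
kubectl version --client
kubectl --kubeconfig myconfig.yaml get nodes
git clone https://github.com/eilerb1011/LKE-Kong.git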

Step 1: Modifying and installing Kong

The first step to getting Kong up and running with proxy protocol in LKE is, you guessed it, to install Kong. However, we must make some modifications to the base installation to allow Kong to receive proxy protocol headers, and to make sure the cloud load balancers Kong requests are set up to send proxy protocol. If you have downloaded my values.yaml, you are ready to rock with no changes. But first, let's unpack the changes that MUST be made to a base install for this to work.

The first set of modifications goes in the Kong environment variables. These variables override the defaults and tell Kong that it should listen for proxy protocol and that proxy protocol carries the real end-user IP. They are set in the "Kong parameters" section of the values.yaml file. Find the env: section and add the following lines to the bottom.

REAL_IP_HEADER: proxy_protocol
PROXY_LISTEN: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
TRUSTED_IPS: 0.0.0.0/0,::/0

Next, roughly 280 lines further down, you will find the block that sets the Kong proxy service configuration. Simply replace annotations: {} with:

annotations:
  service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v1

Notice that we remove the double curly braces altogether. With this annotation in place, the LoadBalancer Service that Kong creates will tell the Cloud Controller Manager to enable Proxy Protocol v1 when it creates the Linode NodeBalancer (the front-end cloud load balancer that passes traffic back to your LKE cluster). Without this line, you can manually set the NodeBalancer to send Proxy Protocol v1 headers to the cluster. However, if your nodes ever recycle, that NodeBalancer also gets refreshed and will no longer set those headers unless you manually intervene. Save your values.yaml file (if you have not used mine from GitHub) and it is install time.
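Put together, the relevant pieces of values.yaml look roughly like this. The env and proxy keys come from the upstream kong/kong chart (proxy.enabled and proxy.type are the chart defaults, shown here only for context, and exact line positions vary by chart version):

env:
  REAL_IP_HEADER: proxy_protocol
  PROXY_LISTEN: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
  TRUSTED_IPS: 0.0.0.0/0,::/0

proxy:
  enabled: true
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v1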

To install Kong, just issue the following commands from the directory where you have your kubeconfig and values.yaml.

helm repo add kong https://charts.konghq.com
helm repo update
helm --kubeconfig myconfig.yaml install kong/kong --generate-name --set ingressController.installCRDs=false -n kong --create-namespace --values values.yaml

These commands add the Kong repo to Helm, update your repos, and install Kong into a new namespace named kong (creating it if it does not exist) using your kubeconfig (myconfig.yaml here) and the custom settings in the values.yaml file that make proxy protocol work.

Now that Kong is installed, check the pods and services:

kubectl --kubeconfig myconfig.yaml get pods --namespace kong

kubectl --kubeconfig myconfig.yaml get services --namespace kong

[Screenshot: kubectl output showing the Kong pods and services]

You will notice that Kong has created a LoadBalancer Service. And if you check your Akamai Connected Cloud portal, you will see the new NodeBalancer (its label starts with ccm) with two configurations (port 80 and port 443), both created to support Proxy Protocol v1 and targeting the LKE nodes that make up your cluster:

[Screenshot: Akamai Connected Cloud portal showing the new NodeBalancer]
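You can also confirm this from the CLI rather than the portal; grepping the proxy Service's annotations shows what the Cloud Controller Manager was asked to do (the Service name is generated by Helm, so grep is simpler than guessing it):

kubectl --kubeconfig myconfig.yaml get services -n kong -o yaml | grep proxy-protocol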

Step 2: Configuring a new App with Kong Ingress

Now that Kong is in place, we can configure an ingress class, a deployment, and then expose that deployment through a service and an ingress policy. To kick this off, I hope you cloned the repo down to your local working directory. We will deploy a sample nginx application that echoes back the X-Forwarded-For header as well as the remote_addr it sees, so we can see how this helps us. To do so, kick off the following commands:

kubectl --kubeconfig myconfig.yaml apply -f nginx-test.yaml
kubectl --kubeconfig myconfig.yaml get pods
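If you are curious how the echo works, the test app presumably serves something along these lines. This is an illustrative sketch only (the names and config are hypothetical; the repo's nginx-test.yaml is authoritative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-echo-conf        # illustrative name
data:
  default.conf: |
    server {
        listen 5000;
        location / {
            default_type text/plain;
            # $http_x_forwarded_for is the XFF header; $remote_addr is the peer IP nginx sees
            return 200 "X-Forwarded-For: $http_x_forwarded_for\nremote_addr: $remote_addr\n";
        }
    }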

Once applied, the container should be up and running on port 5000 locally. Next you will need to create a service to expose it to the cluster. This service will expose the container on an internal cluster IP on port 80 and give it an easy-to-reference name for use within the cluster. Your ingress will use this name and port in the next step. Run the commands below to create and view your new service.

kubectl --kubeconfig myconfig.yaml apply -f nginx-service.yaml
kubectl --kubeconfig myconfig.yaml get services
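For reference, a minimal Service along these lines maps cluster port 80 to the container's port 5000 (names here are illustrative; the repo's nginx-service.yaml is the source of truth):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service      # illustrative; your ingress references the real name
spec:
  type: ClusterIP
  selector:
    app: nginx-test        # must match the deployment's pod labels
  ports:
    - port: 80             # cluster-internal port
      targetPort: 5000     # port the container listens on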

Finally, you will create an ingress class and an ingress that exposes your nginx container at the path /nginx by referencing the service you just created. Before running the commands, let's unpack the magic that makes this work. Within nginx-ingress.yaml there are a few lines to be familiar with:

First are the annotations. Strip-path does exactly what it sounds like: it allows the container to think it is at the root of the URL, when in reality the path is whatever I want it to be (see path below). The http-forwarded annotation, set to preserve, preserves the forwarded HTTP headers all the way to the container. The preserve-host annotation keeps whatever host header is passed to Kong from the outside. In this case, we leave the host blank, so Kong will answer for any host header sent to it.

Next are the path directives. Specifically, path, service, and port number matter here: they define the path as exposed to the Internet and the service as exposed within the cluster. So even though the test container thinks it is at the root of the URL and answers on port 5000, we should be able to access it at http://[host]/nginx without specifying a port.

annotations:
  konghq.com/strip-path: "true"
  konghq.com/http-forwarded: "preserve"
  konghq.com/preserve-host: "true"

...
paths:
  - path: /nginx
    backend:
      service:
        name: nginx-service   # illustrative; use the name from your nginx-service.yaml
        port:
          number: 80

Now apply the manifests and check your ingress:

kubectl --kubeconfig myconfig.yaml apply -f kong-ingress.yaml
kubectl --kubeconfig myconfig.yaml apply -f nginx-ingress.yaml
kubectl --kubeconfig myconfig.yaml get ingress
[Screenshot: kubectl get ingress output]

After running the above commands, you should see output like the screenshot above, which gives you your Internet-facing address. This nginx container, running locally on / and port 5000, is exposed to the cluster on port 80 and further exposed to the world at /nginx. You can test this by typing http:// followed by the IP address listed after the comma in the ADDRESS column of your get ingress output, then /nginx: http://[I.P.Addr.ess]/nginx
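From a terminal, the same check looks like this (substitute the address from your get ingress output; -i prints the response headers along with the echoed body):

curl -i http://[I.P.Addr.ess]/nginx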

Step 3: Validate It!

The app should now be live on the Internet at the IP address listed in your get ingress output. To check the results, browse to that IP at /nginx and you should see something similar to the following in your web browser:

[Screenshot: Headers: XFF = 50.104.116.2 and remote_addr = 10.2.4.2]

This is reporting back the X-Forwarded-For header and the value of remote_addr, and as you can see there is a disparity. The XFF header reports the IP address you are running these tests from, whereas remote_addr is actually giving you an internal IP address from your own LKE cluster. You can verify this by issuing the following command and comparing the pod IPs in its output to the remote_addr above:

kubectl --kubeconfig myconfig.yaml get pods -o wide --namespace kong

The last test to run to validate the importance of this configuration is to change your NodeBalancer's Proxy Protocol setting to None. You can quickly do this within the portal by selecting your NodeBalancer, going to Configurations, clicking the down arrow on the Port 80 config, and changing the selection to None. Then scroll down and click Save.

[Screenshot: Akamai Connected Cloud console, NodeBalancer configuration]
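If you prefer the CLI, the same toggle should be possible with linode-cli. The subcommands below are my best reading of the generated CLI (the IDs come from the list commands), so double-check them against your CLI version:

linode-cli nodebalancers list
linode-cli nodebalancers configs-list <nodebalancer-id>
linode-cli nodebalancers config-update <nodebalancer-id> <config-id> --proxy_protocol none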

Give it just a minute or so for the changes to take effect. Then open your web browser and go back to your test container at /nginx. You should now get an error, because Kong is still listening for proxy protocol headers that the NodeBalancer is no longer sending.

In conclusion, when using Kong to front-end your microservices in LKE, or in any other cloud Kubernetes offering that puts a load balancer or proxy in front of your cluster, the ability to pass and support proxy protocol can be very beneficial in giving you accurate IP information about your end users. Without it, you might have to rely on something like remote_addr, which may not give you the information you need.
