Using Kong to access Kubernetes services, using a Gateway resource with no cloud-provided LoadBalancer
In this article I look at how to deploy Kong as an API Gateway within a Kubernetes cluster to provide managed access to your services. I do this on a cloud service that does not provide a Kubernetes-compatible, external LoadBalancer service. I am also using the new Kubernetes Gateway resource rather than the older Ingress resource.
If you have followed any of my previous articles, you will know that I use an Australian cloud provider called Binary Lane, who provide a reliable and cost-effective, but limited, range of services.
Providing only a limited range of services means you have to do all of the Kubernetes work yourself, but this provides the opportunity to learn and also allows you to control how your solutions work. It also means you are not locked into any one cloud supplier. It’s cost effective too.
In this article I look at how to add Kong API Gateway to your Kubernetes cluster to provide access to your services. It is a long article as there is quite a bit to do. For this reason I have split out the theory about API Gateways in Kubernetes clusters into a separate article. If you do not understand the role of an API Gateway, I suggest you read that article first.
Whilst Kong is a reputable, enterprise-level API Gateway, it can be very tricky to set up. At the end of this article I will provide some hints on how to debug problems. If you are adjusting the design in this article, make sure you get your names and ports correctly configured.
Network design
Kubernetes networking is a complex topic and not one I am able to tackle here, but we do need to think about our network design at a high level.
If you have been following along with the other articles, you should have a Kubernetes cluster installed on your Binary Lane (or other cloud provider’s) servers. You will have three nodes installed on a Virtual Private Cloud (VPC) private subnet that is not accessible from the Internet, except by way of a private VPN (not shown above). There will be a gateway server that has both Internet and VPC interfaces that provides our ingress from the Internet to our cluster (and also our egress from our cluster to the Internet, set up in previous articles).
We now have our basic network topology. We now want to be able to manage the APIs our services provide to the outside world. This will be done via the Kong Gateway API.
Connecting from the Internet (ingress)
With a full-service provider, such as AWS, Google Cloud or Azure, you can use their services to set up Kubernetes in a way that an Internet connection is created for you automatically. This is done using a Kubernetes Service with the type LoadBalancer.
Whilst Binary Lane has load balancer services, they cannot be managed via Kubernetes and so you have to either create and configure the load balancer yourself or create your own ingress point. To avoid being locked into a specific cloud provider feature, I prefer to create my own ingress point with an NGINX reverse proxy running on my gw server.
In this architecture, NGINX running on the gw server performs two functions:
- Routes all valid requests to the Kubernetes cluster for Kong to process
- Load balances requests across Kubernetes nodes
As Kong is going to be exposed as a NodePort Service, it will be accessible from any node in the cluster. This allows NGINX to load balance requests across the nodes.
It is interesting to note that the Service itself will randomly load balance across available pods and so any load balancing by NGINX is only there to survive node failures or overloads. You can read more about Service load balancing here.
For the purposes of this solution, you can consider the NGINX as a manually configured, replacement, external load balancer.
Kubernetes Service
Kubernetes Services allow access to a service that might be provided by one or more pods. A Service ensures that if pods are terminated and rescheduled, possibly on different nodes, it continues to be a single point of contact, routing requests to the pods as required, regardless of which node the request comes in on.
By using a Service, you have a single, stable IP address to which you can route requests. The Service provides a form of load balancing across the available pods. Kubernetes also adds a reference to the Service in the cluster’s DNS, allowing the service to be accessed by name. A number of different variations are registered:
<service name>.<namespace>.svc.cluster.local
<service name>.<namespace>.svc
<service name>.<namespace>
<service name>
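As a rough illustration, the set of DNS names registered for a Service can be derived from its name and namespace. The helper below is hypothetical (not part of any Kubernetes library), and it assumes the default cluster domain of cluster.local:

```python
def service_dns_names(service: str, namespace: str) -> list[str]:
    """Return the DNS names Kubernetes registers for a Service,
    from most to least qualified. Assumes the default cluster
    domain 'cluster.local'."""
    return [
        f"{service}.{namespace}.svc.cluster.local",
        f"{service}.{namespace}.svc",
        f"{service}.{namespace}",
        service,
    ]

# A pod in the same namespace can use the short name; pods in other
# namespaces need at least the <service>.<namespace> form.
print(service_dns_names("hello-world-1-svc", "default"))
```

The shortest form only resolves from within the same namespace, which is why cross-namespace references (as we will use later with Kong) use the longer forms.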
API gateway
Whilst an NGINX gateway and a Kubernetes Service go some way towards providing an external interface to your services, this approach has limited functionality and must be set up manually.
An API Gateway is a cluster component that solves this problem. It forms part of your solution and sits in front of your services to provide additional features as I describe here.
By being part of the cluster an API Gateway can configure itself automatically based on changes to resources within the cluster.
Kong API Gateway
There are a number of different API Gateway technologies but for this article I chose Kong, which is available as a free open-source solution and as a paid-for enterprise installation. We will be using the open source version. You can find extensive, official documentation on Kong here.
Kong have been actively working with the Kubernetes community to define a new standard for gateways within a cluster. This has resulted in a new Kubernetes resource type called the Gateway API, not to be confused with the actual gateway itself.
This is how Kong works at a high-level.
Incoming traffic from any client outside the cluster hits Kong. Based on rules within its configuration, it then processes the request and then passes the request on to the appropriate service inside or outside the cluster.
Kong is able to get its configuration either statically from Kubernetes resource manifests (DB-less installation) or from a database. Kong now recommends that the DB-less installation is used for new installations, and that is what we will do.
Kong has a mature plugins capability. This allows 3rd parties to develop feature extensions for Kong as plugins. The plugins sit in the traffic flow and can assist with things like rate limiting and authentication.
Finally, Kong provides a Management UI that allows you to manage plugins, configuration and so on. The Management UI interfaces to Kong via the Admin API.
Looking at the extensive documentation on Kong it is clear that there is more to Kong than we can cover here, so I will only be introducing the basics.
DB-less Installation
I think it is important to understand how a DB-less installation works.
In this configuration, Kong installs two components:
- Kong Ingress Controller (KIC)
- Kong Gateway
The Gateway does all the routing of user traffic via its proxy. The Kong Ingress Controller (KIC) takes the configuration from Kubernetes resource definitions (such as HTTPRoute), converts them into rules that the proxy understands, and uploads the rules to the proxy in real time. In this way, changes to the Kubernetes configuration are automatically applied to the proxy.
The KIC obtains information about the Kubernetes cluster using the internal Kubernetes API, which is designed to manage the cluster. When you use kubectl, you are actually interacting with the Kubernetes API. It is through this API that applications, like Kong and kubectl, can find out about, and make changes to, the cluster. It is through this API that the KIC is able to keep the Gateway in sync with the cluster resource files without the need for a backing database.
Note: If you have a DB-less install, Kong keeps its internal state in memory. This means that the Management User Interface cannot update the state and can only read it. Also, several actions (like setting the debug level) that can be carried out through the Admin API on the gateway, will respond with an error that there is no database. This means that configuration has to be via the Helm chart values file or through the Kubernetes resource definition files.
Services to Test
Before we get into the deployment of Kong, it is important that we have some services that we can use Kong to access. After all, an API Gateway without an API to serve is not much use at all!
The simplest way to create a service is by deploying NGINX as a web server, configured with static content. If you have never done this on Kubernetes before, you can read how to do it in one of my other articles.
I suggest you create 2 services:
- Hello world 1: 2 replicas
- Hello world 2: 1 replica
These should be accessible to pods in the cluster via a ClusterIP Service. In my article on setting up such services, I create NodePort Services so I can check the services from my browser. If you do this, you need to remember to use the internal cluster IP and ports rather than the external ones. You can read more about service types in another of my articles.
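If you have not built one of these before, a minimal sketch of such a test service might look like the following (the names, image and ports here are illustrative rather than taken from my earlier article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world-1
  template:
    metadata:
      labels:
        app: hello-world-1
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-1-svc
spec:
  type: NodePort
  selector:
    app: hello-world-1
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30082
```

The second service would have the same shape with one replica, a different name and a different nodePort.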
Check that your two services are up and running and can serve the Hello World HTML.
In the example I am describing here, my two services are on:
http://<node IP address>:30082 — replies with ‘Hello World 1 !!’
http://<node IP address>:30083 — replies with ‘Hello World 2 !!’
Note that neither takes any path and if you add a path (eg: http://<node IP address>:30082/world1), you will get a 404 error. This is important as it can lead to problems later on if you do not realise this is the case.
Now we have our services up and running, let's access them via Kong.
Creating Kubernetes Gateway resources
In DB-less mode, we configure our Kong API gateway by creating Kubernetes Ingress or HTTPRoute resources. When they are applied to the cluster, Kong will use them to define the routing rules within its proxy component. This, in turn, directs incoming traffic to your services.
Whilst Ingress resources work, they have a limited feature set. With the new Gateway resource, developed by Kong in conjunction with the Kubernetes community, far more advanced management of your APIs is possible.
We will be using the Gateway resources with Kong. To do this we first need to apply to our cluster the new Custom Resource Definitions (CRDs) that support the GatewayClass and Gateway resources. You can do this by running this command from wherever you run kubectl for your cluster. For me this is on my k8s-master server.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
There are some experimental features that we will NOT be using in this article but I have added them here for reference.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/experimental-install.yaml
We can now define two resources for Kubernetes, GatewayClass and Gateway. I have explained what these do in my separate article on API Gateway theory and will not repeat it here for brevity.
GatewayClass
We will define a GatewayClass that introduces the Kong technology to our cluster. Create the following file:
kong-gw-class.yml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong-class
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/kic-gateway-controller
There are a few things to note about this file:
- There is no namespace as it is a cluster-level resource
- Annotations define solution-specific options and for Kong these start with konghq
- The konghq.com/gatewayclass-unmanaged annotation is set to 'true' (as a string) because Kong is being manually set up rather than automatically through an operator (which is also an option, see here)
- The Ingress Controller is the Kong Ingress Controller (konghq.com/kic-gateway-controller) and is configured in the controllerName field
Now create the class with:
kubectl apply -f kong-gw-class.yml
We can now create a Gateway that will use this class. Note that you can create many Gateway instances that refer to the same GatewayClass.
Gateway
In the case of a manually installed Kong Gateway (like the one we are creating), you need to create the following file:
kong-gw-gateway.yml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong-gateway
  namespace: kong
spec:
  gatewayClassName: kong-class
  listeners:
    - name: world-selector
      hostname: worlds.com
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All
Again, there are some things you need to make note of in this file:
- It has a name that will allow it to be referenced later (kong-gateway)
- The gatewayClassName field refers to the name of the GatewayClass we created above
- This Gateway defines only one listener, which is the ingress point for this API Gateway
- The listener is given a unique name that is a URL-compatible string
- The listener is bound to port 80
- The hostname is used as a matching field and is optional
- You can control which services can be connected to this listener (allowedRoutes) by way of their namespace; as this defaults to the Same namespace and we want to connect to services in other namespaces, it is changed to All
- The Gateway specification expects the gateway to listen on a single port (80) over HTTP
Gateways are specific to a namespace, which we need to create before we install the API Gateway:
kubectl create namespace kong
Now create the resource with:
kubectl apply -f kong-gw-gateway.yml
Now that we have a GatewayClass and a Gateway resource defined, we can install the application itself, which will form the implementation of these two resources.
Install Kong
We will now install Kong using a Helm chart. If you do not have Helm, you can find out how to install Helm here.
Kong CRDs
Before we install Kong, we need to install the Kong Custom Resource Definitions (CRDs). You can do that by running this command from wherever you run kubectl for your cluster. For me this is on my k8s-master server.
Note the use of the -k option instead of the normal -f. This option triggers kustomize, which installs a set of manifests from a folder.
kubectl apply -k https://github.com/Kong/kubernetes-ingress-controller/config/crd
You may find that this creates a set of warnings about missing dependencies. You do not need to worry about these as the dependencies will be automatically installed for you.
Kong Application
When installing Kong into a Kubernetes cluster, two components are installed:
- Kong Ingress Controller (KIC) — converts Kubernetes resource definitions into Kong Gateway configurations
- Kong Gateway — acts as the router to the services based on configuration inserted by the Kong Ingress Controller (KIC)
First, add the Kong repository to your local Helm:
helm repo add kong https://charts.konghq.com
helm repo update
If you search for Helm charts with:
helm search repo kong
You will find two entries:
NAME CHART VERSION APP VERSION DESCRIPTION
kong/kong 2.33.3 3.5 The Cloud-Native Ingress and API-management
kong/ingress 0.10.2 3.4 Deploy Kong Ingress Controller and Kong Gateway
As we want the DB-less configuration, we will use kong/ingress. Before we install it, we need to override some of the values. Create the following file:
kong-values.yml
#controller:
#  ingressController:
#    env:
#      LOG_LEVEL: trace
#      dump_config: true
gateway:
  admin:
    http:
      enabled: true
  proxy:
    type: NodePort
    http:
      enabled: true
      nodePort: 32001
    tls:
      enabled: false
#  ingressController:
#    env:
#      LOG_LEVEL: trace
Because we are installing the KIC and Kong Gateway through a parent Helm chart, the configuration of these two applications sits below the controller and gateway keys respectively. I have left the controller section in there just as a reminder.
You will also see a number of lines that are commented out. These are useful if you want to debug what is happening via the Pod logs.
We are overriding the proxy configuration as Binary Lane does not provide a load balancer that Kubernetes can configure. We are telling Kubernetes to set up a NodePort Service instead of a LoadBalancer Service. We are exposing the gateway on port 32001, which will be available on all nodes in our cluster.
We created the kong namespace earlier so we are now ready to install Kong:
helm install kong kong/ingress -f kong-values.yml -n kong
You can now check that the installation worked as expected. It may take a minute or two to be ready:
kubectl get all -n kong
This should give you something like this:
NAME READY STATUS RESTARTS AGE
pod/kong-controller-68cddcbcb7-z46lh 1/1 Running 0 45s
pod/kong-gateway-687c5b78db-5qvgd 1/1 Running 0 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-controller-validation-webhook ClusterIP 10.110.172.40 <none> 443/TCP 46s
service/kong-gateway-admin ClusterIP None <none> 8444/TCP 46s
service/kong-gateway-manager NodePort 10.100.254.169 <none> 8002:30698/TCP,8445:30393/TCP 46s
service/kong-gateway-proxy NodePort 10.96.24.196 <none> 80:32001/TCP 46s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kong-controller 1/1 1 1 45s
deployment.apps/kong-gateway 1/1 1 1 45s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kong-controller-68cddcbcb7 1 1 1 45s
replicaset.apps/kong-gateway-687c5b78db 1 1 1 45s
You will see that there is a Management UI service exposed via a NodePort. This will not work as it expects to be able to see the Admin API. As we are installing a DB-less installation, the only use of the Management UI is to look at the configuration.
You can test your deployment by curling the proxy address from a node in the cluster:
curl localhost:32001
You should get the response:
{
  "message": "no Route matched with those values",
  "request_id": "7fc9db053e3029105581890e81effe12"
}
The request_id is unique to this transaction and if you run your curl command again, you will see a different value. This is added by Kong and allows you to trace requests through your system. Pretty neat eh?
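Since the response body is plain JSON, picking out the request_id for correlation with Kong's logs is straightforward. A small sketch in Python, using the sample body shown above:

```python
import json

# Sample body as returned by Kong when no route matches
# (taken from the curl output above).
body = (
    '{"message":"no Route matched with those values",'
    '"request_id":"7fc9db053e3029105581890e81effe12"}'
)

payload = json.loads(body)
# The request_id changes on every request; logging it alongside your own
# correlation IDs lets you match client-side errors to Kong's log entries.
print(payload["request_id"])
```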
You are now ready to configure your new API Gateway to route requests to the test services you created earlier.
Adding routes
A route is added to your Gateway resource through an HTTPRoute resource. There are other resource types for different levels of routing. We will now create one of these to connect worlds.com/world1 to our hello-world-1-svc and worlds.com/world2 to hello-world-2-svc.
I will describe one HTTPRoute resource and I will let you create the other. Create the resource file:
hello-world-1-route.yml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-1
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong-gateway
      namespace: kong
  hostnames:
    - worlds.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /world1
      backendRefs:
        - name: hello-world-1-svc
          port: 80
          kind: Service
In this file, we have added a Kong-specific annotation, konghq.com/strip-path: 'true', which will strip the incoming, matched path from the request to the southbound Service. Other lines include:
- A definition of the Gateway to use (in parentRefs), referenced by name and namespace
- An optional reference to the hostname to match to the appropriate listener in the Gateway
- The rules that match the incoming request to this route
- The backendRefs that define the service to route the request to for this match (note that the name is the internal DNS name of the service and the port is the unmapped ClusterIP port for the service)
In this route, the match is the prefix of /world1, which is then stripped off before being passed to the service.
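The effect of the strip-path annotation can be sketched as a simple function. This is a rough illustration of the behaviour, not Kong's actual implementation:

```python
def strip_path(path: str, prefix: str) -> str:
    """Illustrate what konghq.com/strip-path does: remove the matched
    prefix from the path before the request reaches the Service."""
    if path == prefix or path.startswith(prefix + "/"):
        stripped = path[len(prefix):]
        return stripped or "/"  # an empty remainder becomes the root path
    return path

print(strip_path("/world1", "/world1"))             # -> "/"
print(strip_path("/world1/index.html", "/world1"))  # -> "/index.html"
```

This matters for our test services: because they serve only the root path, forwarding /world1 unstripped would produce a 404.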
We now create this route with:
kubectl apply -f hello-world-1-route.yml
As we have created a Gateway with a NodePort Service, we can now test our service. The NodePort Service is available on all nodes in the cluster. I generally use the k8s-master node but any can be used. You can now test with:
curl -H "Host: worlds.com" <k8s-master IP address>:32001/world1
You can see I have set the hostname to worlds.com as a Host header to allow the routing to take place. You should see your test service response come back.
You can now add the second HTTPRoute resource to manage requests to the second service.
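If you want a starting point, a sketch of the second route might look like this (assuming your second Service is named hello-world-2-svc and listens on port 80, as in the examples above):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-2
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong-gateway
      namespace: kong
  hostnames:
    - worlds.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /world2
      backendRefs:
        - name: hello-world-2-svc
          port: 80
          kind: Service
```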
Now we have our services accessible on the nodes in the cluster, we can carry out our final step — configuration of the gw server.
Configuring the ingress point
If we were on AWS, Azure or Google Cloud, we would have the option to leave the Gateway service as a LoadBalancer and have an ingress point automatically be created for us. As we are on Binary Lane, we must create our own.
If you have been following my articles, you know that I have a gw server acting as an ingress point from the Internet to the cluster. This is a simple deployment of NGINX.
We will configure this to route all requests using a round robin load balancer to each and every node in the cluster.
Log in to the gw server and, as the root user, update the following file (remember to replace the fields in < > with your own values):
/etc/nginx/sites-available/worlds.conf
upstream k8s_cluster {
    server <k8s-master>:32001;
    server <k8s-node1>:32001;
    server <k8s-node2>:32001;
}

server {
    listen 80;
    listen [::]:80;

    server_name worlds.com;

    location / {
        proxy_pass http://k8s_cluster;
        include proxy_params;
    }
}
Typically the proxy parameters are set up in a separate file:
/etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
This passes through the Host header and other details, which we need for the routes to be effective.
Now enable the site, test the config and then restart NGINX:
ln -s /etc/nginx/sites-available/worlds.conf /etc/nginx/sites-enabled/
nginx -t
systemctl restart nginx
You can now test this with the following (replace < > fields with your own values):
curl -H "Host: worlds.com" <gw server public IP address>/world1
You should get the response from your server.
Congratulations, you have now managed to install Kong and configure it to connect to your services.
Note: this set up is not production ready. We have not implemented any secure TLS connections, neither to our gw server nor within our cluster. That is another project. I have also not described how to create your own domain name and link it to the public address of the gw server. If you do this, remember to change the worlds.com reference everywhere.
Debugging Kong
If things go wrong with Kong, it can be difficult to debug. Here are a few pointers from my own journey installing Kong:
- Use kubectl describe on the GatewayClass, Gateway and controller/gateway pods and carefully look at the result; I have been very stuck in places until I realised that this was pointing to the solution
- View the controller and gateway logs with kubectl logs <pod name> -n kong
- Increase the logging level using the kong-values.yml file (uncomment the lines shown earlier)
- Access the Management UI (you will need to get the NodePort address using kubectl get svc -n kong) using the HTTP port, then port forward the Admin API with the following (note the addition of the --address option to bind the service externally):
kubectl port-forward -n kong <gateway pod name> 8001:8001 --address <k8s-master IP address>
- After port forwarding port 8001, access the Admin API via a REST API tool such as Postman
- There is a debug port that you can access with:
kubectl port-forward -n kong <controller pod name> 10256:10256 --address <k8s-master IP address>
Summary
This has been a long article but necessary due to all the steps that need to come together to install an API Gateway like Kong.
In this article we:
- Reviewed our network topology
- Looked at how Services help access our services
- Saw how Kong works with Kubernetes
- Created some test services to use
- Installed and configured the
GatewayClass
andGateway
resources - Installed and configured Kong
- Configured our own external load balancer
- Considered ways to debug problems with the API Gateway installation
By the end we were able to access our test services from the Internet by way of our Kong API Gateway.
If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them in the comments section.