Implementing Kong Gateway on k8s
How to leverage Kong as a safe solution to expose your cluster of micro-services in Kubernetes
At project44, we grew from a single monolithic backend service to a cluster of micro-services in a short amount of time! We found ourselves in need of a robust solution to replace our existing reverse-proxy, and at the top of our list of requirements were the following points:
- Single Source of Truth; maintaining a standard for auth and security should be simple to achieve.
- Simplified Ingress Creation; onboarding a new service should be a breeze!
- Modular Extendability; exceptions are not exceptional, and plenty of custom additions are going to be required.
In this article, we’ll build a small proof-of-concept which will meet all of the criteria above and leave the reader with a configured instance of Kong ready to be extended and customized further for their own cluster of micro-services.
Requirements
- Helm v3
- A local Kubernetes distribution, e.g. minikube, kind, or microk8s.
We’ll use minikube in this article, but any will do!
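Before diving in, it may help to confirm the tooling is in place; the exact versions below are illustrative, and any recent releases should do:

```shell
# Sanity-check the required tooling before starting
helm version --short       # expect a v3.x release
minikube version           # or your preferred local Kubernetes
kubectl version --client   # kubectl ships with most local distributions
```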
While this article is written for the developer who is absolutely new to Kong, it is not a tutorial on all of the tools listed here. Some experience with Kubernetes and Helm would benefit the reader.
What is Kong Gateway?
Kong is an open-source, multi-platform reverse-proxy for RESTful APIs which we deployed to our Kubernetes cluster at project44 because it allowed us to achieve all of the above:
DB-less configuration offers us the ability to implement Kong configured-as-code.
The Kong Kubernetes Ingress Controller (KIC) makes service-onboarding as simple as writing one small k8s manifest, used by Kong to route a user’s request by its URL path.
Bundled and Custom Plugin support allows us to configure the behavior of Kong as our backend gateway by manipulating or otherwise acting upon inbound and outbound requests and responses.
How To Implement Kong in a Cluster
See the final result of this project housed in this GitHub repository.
It’s important to note that Kong isn’t just for Kubernetes.
Kong can be deployed into a system which depends on docker-compose, or even onto bare metal. For the purposes of this article, however, we will discuss just one method of implementing Kong in a Kubernetes cluster: building Kong into a custom Helm Chart, so we can save this configuration and apply it to any cluster, locally or remotely, time and time again.
Starting up with Kong using Helm
We will run Kong without any distractions, alone in a cluster, and see how we can check on its vitals.
Begin with a directory within a new project, ./helm/kong/, and create a new Chart which depends on a recent version of Kong with its applicable values…
```yaml
# ./helm/kong/Chart.yaml
apiVersion: v2
name: kong
version: "1.0.0"
dependencies:
  - name: kong
    version: 2.8.0
    repository: https://charts.konghq.com
```
We’re going to use the official Kong Helm Charts as a foundation upon which to build our implementation. The `repository` listed above refers to the location where Kong keeps its default configuration; specifically, these are the variables and defaults found in the `values.yaml` of the Kong Helm Charts repository.
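If you'd like to see everything that can be overridden before writing your own `values.yaml`, Helm can print the chart's defaults for you; one way to do so (the output filename here is arbitrary):

```shell
# Add the Kong chart repository and dump its default values for reference
helm repo add kong https://charts.konghq.com
helm repo update
helm show values kong/kong > kong-default-values.yaml
```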
```yaml
# ./helm/kong/values.yaml
kong:
  # DB-less mode
  env:
    database: "off"

  # Prepare for k8s Ingress manifests...
  # The `Ingress` manifests in our applications
  # won't work without the Kong Ingress Controller
  ingressController:
    enabled: true
    ingressClass: kong

  # There's an HTTP2 bug in Kong which creates
  # an excess of noise in the proxy logs.
  # The fix will be in Kong 2.8.2
  # https://github.com/Kong/kong/pull/8690
  admin:
    tls:
      parameters: []
```
The `values.yaml` file is our primary configuration space for Kong in Kubernetes.
In the project root, start your cluster and install your Chart using Helm…
```shell
$ minikube start
$ helm dep up ./helm/kong
$ helm install kong ./helm/kong
```
Helm does a lot of the heavy lifting in preparing the Kong Chart into manifests which Kubernetes will understand. Check out the Helm Docs for details on `helm dep up` and `helm install` specifically.
And that’s it! Kong is running live; you may inspect it through your Kubernetes UI…
Then open a tunnel to your cluster and make a request…
```shell
$ minikube tunnel
$ curl -X GET localhost
> {"message":"no Route matched with those values"}
```
Your preferred Kubernetes tool may have a different equivalent; `minikube tunnel` is just one of several methods within minikube to allow access to your cluster applications.
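For example, `kubectl port-forward` works with any local distribution; note the proxy Service name below assumes a Helm release named `kong` as installed above, so adjust it if yours differs:

```shell
# List the Services created by the chart to find the proxy...
kubectl get services
# ...then forward a local port to it (name assumes a release called "kong")
kubectl port-forward service/kong-kong-proxy 8080:80
# In another terminal:
curl -X GET localhost:8080
```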
At this stage, our Kong Gateway is alive, but it routes requests to nowhere. This is neat, but not useful yet. Feel free to clean up your cluster so we can return to it with a fresh slate…
```shell
$ helm uninstall kong
```
Routing to a Backend Service
The power in Kong lies in its ease of service onboarding. At this point, Kong is configured in your project repository, but it doesn't serve any backend applications just yet; to keep the whole project in one complete unit, we're going to define a service alongside the Kong Helm Chart.
In a real-world scenario, it’s much more likely you may be deploying your backend service separately from Kong. It’s all the same to Kong so long as the gateway and your service end up in the same cluster!
We are going to deploy an echo-server to our cluster to demonstrate the ease of onboarding; one might view this step as a stand-in for onboarding any real micro-service.
To add this new service, there are three new files to create; all of them are standard manifests expected of a service deployed into a Kubernetes cluster.
```yaml
# ./helm/kong/templates/echo/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - image: ealen/echo-server:latest
          imagePullPolicy: IfNotPresent
          name: echo
          ports:
            - containerPort: 80
          env:
            - name: PORT
              value: "80"
          resources: {}
```
The Kubernetes Deployment stands as a declarative state for our application; this is where we define the image which we would like to deploy into our cluster.
```yaml
# ./helm/kong/templates/echo/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP
  selector:
    app: echo
```
The Kubernetes Service is responsible for network access to our application within the cluster.
```yaml
# ./helm/kong/templates/echo/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /echo
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
```
The Kubernetes Ingress manages network access coming from outside of the cluster.
We have minimal components in our Ingress manifest, but it's enough to get the job done! Here are the most important parts we've added:

- `ingressClassName`: Recall back to the creation of `values.yaml`, where we specified our `ingressController.ingressClass` as "kong"; this flags to the Kong Ingress Controller that this Ingress is meant to be managed by that controller in particular.
- `paths`: Requests passed through Kong are routed by path, and this specification instructs Kong to take any request whose path matches the `Prefix` `/echo` and pass it through to the backend service specified within.
- `backend`: Finally, which application running in the cluster should handle the request after Kong is done with it?
There are plenty of other annotations to add for further customization of behavior, but for now, we can start with this minimal model.
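As one hypothetical example, the Kong Ingress Controller reads `konghq.com/*` annotations on an Ingress; stripping the matched prefix before the request reaches the upstream would look like this sketch (not part of this demo):

```yaml
# Sketch only: per-route behavior via Kong annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    # Without this, the echo service receives the full "/echo/..." path;
    # with it, Kong strips the matched prefix before proxying upstream.
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  # ...rules as above
```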
With these additions complete, let’s install our Kong implementation into our cluster once more…
```shell
$ helm install kong ./helm/kong
```
Give the cluster a beat to settle, then open your tunnel and make a request to your echo service at the desired path, `localhost/echo`…
```shell
$ minikube tunnel
$ curl -X GET localhost/echo
```
Kong should be returning a response from the echo-server which begins like this:
```json
{
  "host": {
    "hostname": "localhost",
    "ip": "<any IP>",
    "ips": []
  },
  "http": {
    "method": "GET",
    "baseUrl": "",
    "originalUrl": "/echo",
    "protocol": "http"
  },
  "request": {
    "params": {
      "0": "/echo"
    },
    "query": {},
    "cookies": {},
    "body": {},
    "headers": {
      "host": "localhost",
      ...
```
Throw whatever you can at it; extend the path to `localhost/echo/more/UUID-HERE`, add on some parameters and headers. The service we chose to deploy into our cluster should mirror the entire request back out to you inside the response body, confirming Kong is sending the entire payload over to the service we are requesting at `/echo`.
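For instance, a request with a body, a query parameter, and a custom header should all come back mirrored; the path segments and names below are arbitrary:

```shell
# Everything attached to the request should appear in the echoed response
curl -X POST 'localhost/echo/more/1234?verbose=true' \
  -H 'X-Demo-Header: hello' \
  -H 'Content-Type: application/json' \
  -d '{"ping": "pong"}'
```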
Clean up your cluster so we can return to it with a fresh slate and make it a little more interesting…
```shell
$ helm uninstall kong
```
Adding Modular Plugins
The primary reason we liked Kong so much was the robust support of modular plugins and integrations. The Kong Hub has plenty of useful plugins available off-the-shelf; applying bundled plugins is just as easy as routing an application.
In this demo, we’re going to apply a rate-limiting plugin across all requests into Kong with a rate reachable by manual human speed: 5 requests/min.
We’re going to add a new file to our project…
```yaml
# ./helm/kong/templates/plugins/rate-limiting.yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-rate-limiting
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
config:
  minute: 5
  policy: local
plugin: rate-limiting
```
Some line items of note to flag:

- `kind: KongClusterPlugin`: Kong's Custom Resource Definition for plugins which apply to the entire cluster.
- `config`: All custom configuration which applies to this plugin; the possible values are found in the plugin documentation. Most significantly, the `local` policy counts requests in each node's local memory; we could opt for a more robust solution using an external memory cache instead, but we'll keep our process simple for now.
- `plugin: rate-limiting`: The official name of the plugin to apply in the cluster.
- `metadata.name: global-rate-limiting`: An identifiable, custom name for this particular instance of the plugin. We could have called it anything.
This is all we need to add! Reinstall your implementation of Kong…
```shell
$ helm dep up ./helm/kong
$ helm install kong ./helm/kong
```
Let the cluster get settled before tunneling in, then make your request to the primary service at `localhost/echo`…
```shell
$ minikube tunnel
$ curl -X GET localhost/echo
```
And you should have the same response as before. Make a few more requests — see what happens when you exceed 5 requests in a minute…
```json
{
  "message": "API rate limit exceeded"
}
```
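The plugin also reports its state on every response; the exact header names vary by Kong version, but asking curl for the response headers should show something along these lines:

```shell
# -i prints response headers alongside the body
curl -i localhost/echo
# Headers to look for (names vary slightly by Kong version):
#   RateLimit-Limit / RateLimit-Remaining
#   X-RateLimit-Limit-Minute / X-RateLimit-Remaining-Minute
```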
We've chosen to make a global rate-limit, but we have the opportunity to create distinct limits for different classes of users, based on identity, using the Kong concept of a Consumer if we so desire.
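A sketch of that direction, assuming we later pair it with an authentication plugin so Kong can tell consumers apart; the name here is invented for illustration:

```yaml
# Hypothetical: declare a consumer Kong can attach per-identity plugins to
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: premium-user
  annotations:
    kubernetes.io/ingress.class: kong
username: premium-user
```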
Lastly, don’t forget to clean up!
```shell
$ helm uninstall kong
$ minikube stop
```
Conclusion
Through this tutorial, we’ve successfully deployed to our cluster an instance of Kong as a Gateway, using configuration-as-code, to proxy a backend service behind a global rate-limit. And we did it all in a totally replicable fashion in less than 100 lines of code!
It’s a decent start to a gateway which may grow far beyond this modest example! Authentication and custom routing are all basic manifests away, and running a local instance of this gateway is a breeze as the entire configuration is saved as a set of Kubernetes manifests.
Join us in our next Kong article where we dive deep into the creation of custom plugins written in Python!