Running services with Knative & Kong

Sep 23 · 7 min read
Knative & Kong was a match made in heaven … for us!

Here at Direktiv we use Knative EXTENSIVELY … because it is truly an amazing project!

Most developers and sysadmins view Knative as a serverless framework, but it does so much more (the serverless component is just a small part of it).

It abstracts service deployment and management by defining each service with a single YAML file, while offering additional features like header-based routing and traffic splitting … all of this allows us to run different versions of the SAME service in Direktiv. It’s a pivotal part of our live-migration feature!

In this post I will show how to use the Kong ingress controller with Knative to get even more out of it!

Setting it up

Firstly, you’ll need a Kubernetes environment (obviously). Pick a service from your favourite cloud provider or install k3s (I prefer the latter).

In case you have never used Knative, we will start with the installation. Knative can easily be installed with the provided YAML files:

# kubectl apply -f <Knative serving-crds.yaml URL>
# kubectl apply -f <Knative serving-core.yaml URL>

This installs the serving part of Knative, but it also requires a network component to handle the routes. The installation instructions suggest Kourier, Ambassador, Contour and Istio. All of these are of course valid options and work perfectly fine, but there is another option not mentioned there: Kong for Kubernetes.

Kong provides an ingress controller which supports Knative out of the box. All services can be automatically routed through Kong and appropriately managed.

The benefits (for us) are obvious. Using the full feature-set of Kong makes it easier for Knative services to focus on business functionality while Kong is handling supporting functionality like authentication, response transformation and request limiting.

Have a look at the following diagram (it shows services accessed from the outside and inside of a cluster):

Service accessibility from outside of cluster to the inside

Knative differentiates between external services, accessible from outside the cluster, and private services, for cluster internal use only. To support both we will install two instances of Kong in our Kubernetes cluster.

Kong comes with Helm charts, so we will use those for the installation. Please look at the Kong Helm chart GitHub page for all configuration options; we will use a basic approach for this example.

To ensure the two instances are separate, we configure each with its own namespace: kong-internal and kong-external. The installation is almost identical except for the type of Kubernetes service: Kong for external services uses a LoadBalancer, whereas the internal one uses a ClusterIP (and is therefore not accessible from outside the cluster).

To differentiate between the two we are assigning a different ingress class as well.

Let’s install Kong for the external services:

# helm repo add kong <Kong Helm chart repository URL>
# helm repo update

# kubectl create namespace kong-external
# helm install -n kong-external kong-external kong/kong

Let’s install Kong for the internal services:

# kubectl create namespace kong-internal
# helm install -n kong-internal --set proxy.type=ClusterIP --set ingressController.ingressClass=kong-internal kong-internal kong/kong

The services will be available within a few seconds and all pods in both namespaces should be in running state.

# kubectl get pods -n kong-internal
NAME                                 READY   STATUS    RESTARTS   AGE
kong-internal-kong-77dc9c4cf-tb2b5   2/2     Running   0          24s

# kubectl get pods -n kong-external
NAME                                   READY   STATUS    RESTARTS   AGE
svclb-kong-external-kong-proxy-9cbcl   2/2     Running   0          30s
kong-external-kong-79668fdf45-rk7s2    2/2     Running   0          30s

To use external services in a production environment, DNS changes are required as outlined in the Knative documentation. For testing purposes, a magic DNS setup is sufficient. Knative provides a Kubernetes Job called default-domain that configures Knative Serving to use a wildcard DNS service as the default DNS suffix:

# kubectl apply -f <default-domain.yaml URL>

After all components are set up and running, we need to “glue” them together.

Sounds hard, but it’s actually very easy … we just need to tell Knative to use Kong as its ingress controller.

This is done with a simple command that modifies one of Knative’s configmaps. It defines the default ingress class, which can be overridden per service. In our case, all requests are routed via the external Kong instance unless specified otherwise.

kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"kong"}}'

Configuration & Testing Services

Time to put this setup into action and see how it works. The first step is installing services: we want one internal and one external service to work with Kong.

For this I suggest using the two service examples below:

Internal Service for Kong
External Service for Kong
Applying service configurations for Kong

As you can see, they are almost identical, except that the internal service has an additional label and annotations. But what do they do?

  • networking.knative.dev/visibility: cluster-local — this label marks the service as cluster-local, i.e. only accessible from inside the cluster.
  • autoscaling.knative.dev/minScale: "1" — if a service scales from zero, it takes a few seconds to get the first response. With this annotation there is always at least one instance of the service running.
  • networking.knative.dev/ingress.class: kong-internal — during installation, the internal Kong instance was configured with the ingress class kong-internal. This annotation ensures requests are routed through the internal Kong instance instead of the external one.
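Putting those annotations together, a minimal sketch of what the internal service definition could look like (the service name and the sample image from the Knative docs are illustrative; the external service is the same minus the label and annotations):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-internal
  namespace: default
  labels:
    networking.knative.dev/visibility: cluster-local     # cluster-internal only
  annotations:
    networking.knative.dev/ingress.class: kong-internal  # route via the internal Kong
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"            # keep one instance warm
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go    # sample hello-world image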

The configuration of the service itself (container name, environment variables, secrets etc.) is similar to a default Kubernetes deployment and is basically a subset of Kubernetes’ PodSpec.

Knative creates and deploys the services when the YAML is applied and marks them as ready once that process finishes successfully. Deploying a service for the first time can take longer, in particular if its images haven’t been downloaded to the Kubernetes node yet. If this process succeeds, the services are ready to be consumed:

# kubectl get ksvc
NAME                  URL                                                    LATESTCREATED               LATESTREADY                 READY
helloworld-internal   http://helloworld-internal.default.svc.cluster.local   helloworld-internal-00001   helloworld-internal-00001   True
helloworld-external   <external URL>                                         helloworld-external-00001   helloworld-external-00001   True

Good ol’ curl

What would software troubleshooting be without curl?

The external service can be called with a simple curl command. Because the default-domain job is installed, which mimics DNS, the call would look like:

# curl <external service URL>
Hello Go Sample v1!

The second service is an internal service with a Kubernetes internal URL http://helloworld-internal.default.svc.cluster.local. To call the internal service we need to run an interactive pod with curl and call the service from there:

# kubectl run test-service --rm -i --tty --image curlimages/curl -- sh
# curl http://helloworld-internal.default.svc.cluster.local

Using Plugins

At the beginning we mentioned Kong’s plugins and how useful they have been in Direktiv. In this example, we’re going to attach a plugin (the rate-limiting plugin) to a Knative service.

Installing or applying a plugin to a service is a two-step process:

Step 1: Create the plugin. This example uses Kong’s rate-limiting plugin, so we need to create it first. It will limit requests to a maximum of 5 per minute.
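A minimal sketch of such a KongPlugin resource; the resource name rl-by-minute is our own choice, while rate-limiting is the Kong plugin being configured:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-minute        # hypothetical name, referenced later via annotation
  namespace: default
config:
  minute: 5                 # allow at most 5 requests per minute
  policy: local             # count requests locally on each Kong node
plugin: rate-limiting
```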

Step 2: Annotate the service with the name of the created plugin. This can be done during service creation or afterwards. We will annotate the external service with the rate-limiting plugin; repeated calls to the service will now return an error once the limit is exceeded.

# kubectl annotate ksvc helloworld-external konghq.com/plugins=<plugin name>

All done — rate limit plugin deployed!

The second configuration shows how to route an external request to an internal Knative service based on the requested path in the URL. We are leveraging Knative’s “Host”-header-based routing: when a request arrives at the gateway (Kong in this case), Knative routes it to the backend service specified in the request’s “Host” header. To enable path-based routing, Kong can add this header automatically via a request-transformer plugin.

Path Based Routing example

The service to route to needs to be deployed with the same ingress class as the ingress controller (i.e. there needs to be a cluster-local service behind the external Kong ingress controller). As seen before, a simple label deploys a Knative service as “cluster-local”:

Kong cluster-local installation
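A sketch of such a service, assuming the hypothetical name helloworld-path: the cluster-local label is set, but there is no ingress-class override, so the default (external) Kong instance handles it:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-path                                 # hypothetical name
  namespace: default
  labels:
    networking.knative.dev/visibility: cluster-local    # not reachable from outside
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample hello-world image
```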

The next step is to configure the plugin that adds the header. The request-transformer plugin needs to be installed in the ingress controller’s namespace, kong-external:

Kong external cluster install
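A sketch of what the request-transformer plugin could look like, assuming a hypothetical plugin name add-host-header and a cluster-local target service named helloworld-path; the “Host” header always exists on a request, so it is replaced rather than added:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-host-header                 # hypothetical name
  namespace: kong-external              # must live in the external Kong's namespace
config:
  replace:
    headers:
      - "Host: helloworld-path.default.svc.cluster.local"  # target Knative service
plugin: request-transformer
```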

After successfully creating the plugin, there needs to be a corresponding ingress that uses it.

The following YAML adds a route /hello. This route uses the previous plugin to add the host header to the request and routes the request back to the Kong proxy, which returns the result of the Knative service. This can be tested with a simple curl request:

“hello” route added for request
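A sketch of such an ingress, assuming the hypothetical plugin name add-host-header and the proxy service name that the Helm release kong-external produces:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-route
  namespace: kong-external
  annotations:
    konghq.com/plugins: add-host-header      # attach the request-transformer plugin
spec:
  ingressClassName: kong                     # handled by the external Kong instance
  rules:
    - http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: kong-external-kong-proxy   # route back into the Kong proxy
                port:
                  number: 80
```

Testing it is then a matter of `curl http://<load balancer IP>/hello`.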

Final thoughts

Knative enables easy, version-controlled deployments, and Kong adds a rich set of features around APIs and routing. Unfortunately, Knative is not getting the appreciation it deserves, because it is so much more than just a serverless solution (these are my thoughts ONLY). Together, the two tools can significantly simplify microservice deployment and management in Kubernetes environments.

As always … happy to answer any questions!

From Confusion to Clarification

Nerd For Tech

NFT is an Educational Media House. Our mission is to bring the invaluable knowledge and experiences of experts from all over the world to the novice. To know more about us, visit


Written by

Direktiv is a cloud native, event-driven serverless workflow engine. It is easy to use, powerful enough for the enterprise, and open source!
