Inner API Gateway for Cloud With Kong

Danuka Praneeth
Published in The Startup
4 min read · Apr 26, 2020


Kong from the film King Kong

Hope you are not confused by the thumbnail of this article. This is not about the Kong from the film King Kong, but a different Kong for your cloud-native platform. I will walk you through the steps to deploy and configure an inner API gateway for your environment with an open-source product called Kong.

What is this Kong?

Kong is an API gateway/platform for multi-cloud and hybrid systems. It is an open-source product, with more advanced features available in the enterprise version. All the features of this product are available as plugins, and those can be easily integrated into API requests via an annotation in a configuration file. The Kong gateway is available in different deployment patterns depending on your requirements, and the available plugins will vary accordingly. Here I am discussing the simplest of those deployment patterns, which requires minimum resources (db-less mode), to use Kong as an ingress controller with advanced plugins for your container platform.
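In db-less mode, Kong reads its entire configuration from a declarative YAML file instead of a database. As a rough sketch of what that looks like (the service name and URL here are hypothetical; when Kong runs as an ingress controller, this configuration is generated for you from your Ingress resources rather than written by hand):

```yaml
# kong.yml - declarative configuration loaded at startup in db-less mode
_format_version: "1.1"

services:
  - name: orders-service        # hypothetical upstream service
    url: http://orders.default.svc.cluster.local:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting     # plugins attach declaratively too
        config:
          minute: 60
```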

Kong API Gateway

Furthermore, I am selecting a few plugins from the many available to cover the key domains of any API ecosystem, and will walk through the installation steps. Since some of those plugins are available only in the enterprise version, I will be installing Kong for Kubernetes Enterprise as below.

Deploying Kong in your container platform.

The commands I am using here are for the OpenShift container platform. As a best practice for managing resources, let's create a new namespace for the Kong resources.

$ oc create namespace ingress-controller

Then save the license file (named license) as a secret in our namespace, along with the Docker registry credentials for pulling the enterprise images.

$ oc create secret generic kong-enterprise-license --from-file=./license -n ingress-controller
$ oc create secret -n ingress-controller docker-registry kong-enterprise-k8s-docker \
--docker-server=kong-docker-kong-enterprise-k8s.bintray.io \
--docker-username=<your-bintray-username@kong> \
--docker-password=<your-bintray-api-key>

To install using YAML manifests,

$ oc apply -f https://bit.ly/k4k8s-enterprise

To install using Helm charts (helm3),

$ helm install demo kong/kong \
--namespace ingress-controller \
--values https://l.yolo42.com/k4k8s-enterprise-helm-values \
--set ingressController.installCRDs=false

Now the below components should have been created in your environment: an ingress controller pod, a service and a deployment. If you get errors due to user permissions, you can update the default user ID in the securityContext of the deployment configuration.
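For example, if the pod fails to start under OpenShift's restricted security context constraints, the relevant fragment of the deployment spec can be adjusted as below. The user ID shown is only a placeholder; use a UID permitted by your project's SCC:

```yaml
# Fragment of the Kong deployment spec - adjust the user ID to one
# permitted by your namespace's security context constraints
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000   # placeholder; pick a UID from the allowed range
```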

Kong Resources

Integrate your application with the Kong Ingress Controller

Now you need to create an ingress resource for your microservice to expose it via the Kong ingress controller. Any ingress resource you deploy in your OpenShift cluster will be automatically detected by the Kong ingress controller, which will start processing its API requests. You can use the below sample template: change the namespace, resource name, API context path, service name and service port, then deploy it using the command provided.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{unique resource name}}
  namespace: {{name space of the micro service}}
  annotations:
    konghq.com/strip-path: "true"
    configuration.konghq.com: "https-only"
spec:
  rules:
  - http:
      paths:
      - path: /{{context path}}
        backend:
          serviceName: {{OpenShift service name}}
          servicePort: {{OpenShift service port}}

If you need to use any KongClusterPlugins/KongPlugins, you just need to list the names of the required plugins as an annotation in your ingress resource. The simplicity of adding and removing any plugin of your choice on any API is the coolest feature of this product for me.

annotations:
  konghq.com/plugins: 'bot-detect, CORS, ip-restrict'
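Each name in that annotation refers to a KongPlugin (or KongClusterPlugin) resource you have created beforehand. As a hedged sketch, a rate-limiting plugin resource could look like this (the resource name and the limit value are illustrative, not from the original setup):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5rpm          # the name you reference in the annotation
  namespace: {{name space of the micro service}}
plugin: rate-limiting            # built-in Kong plugin to apply
config:
  minute: 5                      # allow at most 5 requests per minute
```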

$ oc apply -f <<file name>>

I will discuss these Kong plugins separately in my next article.

So now, with this Kong API gateway acting as the ingress controller for all your external API traffic, your container platform design will look like the below.

Updated container platform design

Now you can deploy a fallback service to handle API requests without a valid URI.

$ oc apply -f https://bit.ly/fallback-svc

Create an ingress rule to redirect invalid API requests to the fallback service.

$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fallback
spec:
  backend:
    serviceName: fallback-svc
    servicePort: 80
" | oc apply -f -

If you need to integrate API endpoints from external systems outside of your container platform, then use an ExternalName service in Kubernetes.

$ echo "
kind: Service
apiVersion: v1
metadata:
  name: proxy-to-external
spec:
  type: ExternalName
  externalName: www.github.com
" | oc create -f -

Now create an ingress rule to expose the above service via the Kong ingress as an API endpoint.

$ echo '
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-from-k8s-to-ext
  annotations:
    konghq.com/strip-path: "true"
    konghq.com/protocols: https
spec:
  rules:
  - http:
      paths:
      - path: /v1/getcommits
        backend:
          serviceName: proxy-to-external
          servicePort: 80
' | oc create -f -
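You can then send a test request through the Kong proxy to confirm the routing works. The hostname below is a placeholder for your Kong proxy service's external address; since strip-path is enabled, the /v1/getcommits prefix is removed before the request is forwarded to the external service:

```
$ curl -i https://<kong-proxy-host>/v1/getcommits
```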

Now our API gateway deployment is ready, and in the next article we can configure the plugins to utilize the full functionality of this product.

Kong is a continually evolving product, and you can read more on their official web page.

Thanks!
