Deploying Kong API gateway in Kubernetes — All the what, why, where and (some) how questions

Francisco Bobadilla
IoTOps
4 min read · Aug 4, 2020
Kong GW image showing some plugins

What is Kong?

Kong is an API platform that can be deployed on multiple infrastructures, whether in the cloud, on premises, or on IoT edge devices, and it connects all your microservices regardless of where they run. It is currently one of the most widely used API gateways available.

In Kubernetes it can replace the Ingress controller, allowing for more flexibility and control over each published endpoint, with either a single global configuration or a per-service approach.
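For instance, once the Kong ingress controller is running, a standard Ingress resource can be routed through Kong just by selecting the kong ingress class. This is a minimal sketch; demo-service and the /demo path are placeholders for your own workload:

```shell
# Route /demo through Kong to a backend Service.
# "demo-service" is a placeholder; replace it with your own Service name.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
EOF
```

On older clusters (or older controller versions) the same selection is done with the kubernetes.io/ingress.class: kong annotation instead of ingressClassName.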

It can also be deployed as part of a service mesh, but in this article we will focus on deploying Kong in Kubernetes as an API gateway and Ingress controller.

Its footprint is minimal.

What is it used for?

Kong is built with modern architectures in mind, and as such it solves most of the problems we are likely to encounter while deploying our microservices.

Here are some of the out-of-the-box features listed by Kong HQ:

  • Declarative configuration
  • Governance improvements
  • Interconnection improvements
  • End-to-end automation
  • Increased compliance and security
  • A large set of plugins already available, such as:
    • Authentication
    • Analytics
    • Transformations
    • Serverless

Is anyone using it already?

Yes, here is a list of Kong’s customers. In addition, it is open source software, so there are also a lot of small companies and startups leveraging it to deliver value in a timely manner.

You can browse the Kong Nation community forum to see that there are already a lot of folks using it.

Is it open source? Is it expensive?

It is licensed under Apache 2.0, so it is open source and can be used for personal and commercial projects. They also provide an enterprise version with even more features and support.

Where can it be deployed?

Kong Gateway can be deployed on an instance from the binary; it can also run as a Docker image started with a docker run command or a Docker Compose file. And last but not least, it can be deployed in Kubernetes, whether on your custom-made cluster or on any cluster running at a CSP (Cloud Service Provider).
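As a rough sketch, a single DB-less Kong container can be started along these lines. The local kong.yml path is a placeholder for your own declarative config file, and the environment variable names follow Kong's documented Docker setup:

```shell
# Start Kong in DB-less mode; it reads its routes and services from a
# declarative config file instead of a database.
# ./kong.yml is a placeholder path to your declarative config.
docker run -d --name kong \
  -e "KONG_DATABASE=off" \
  -e "KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
  -v "$(pwd)/kong.yml:/kong/declarative/kong.yml" \
  -p 8000:8000 \
  -p 8001:8001 \
  kong
```

Port 8000 is the proxy (where your API traffic goes in) and 8001 is the Admin API; in production you would not expose the Admin API publicly like this.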

Throughout this article and the ones that will follow, we will focus on the Kubernetes deployment, with the ingress controller used as the API gateway.

If you are looking for another way of deploying it, it is very well documented in their Docs section.

Kong GW is also very light and supports ARM64-based architectures, so it can also be deployed on an IoT edge device or across your hybrid multi-cluster k8s setup to interconnect all your µServices.

How can it be deployed?

In Kubernetes it can be deployed with two main approaches.

If you are starting out with Kong GW, the Helm chart is probably the best place to begin, as it exposes all the available features and lets you experiment with different sets of options.
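The Helm-based install looks roughly like this; the chart repository URL and value keys are taken from Kong's chart documentation, so double-check them against the current chart before running:

```shell
# Add Kong's Helm repository and install the chart into its own namespace.
helm repo add kong https://charts.konghq.com
helm repo update

# env.database=off selects DB-less mode, which is what we use here;
# the chart also deploys the ingress controller alongside the gateway.
helm install kong kong/kong \
  --namespace kong --create-namespace \
  --set ingressController.enabled=true \
  --set env.database=off
```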

If you already know what you are after, you can go ahead and tailor the YAML manifests to meet your needs.

In both cases it can be deployed with or without a database. For the remainder of this article and future articles, we will focus on the DB-less mode.

At the time of this writing they do not provide an installer like kongctl, the way fluxcd does for instance. I do not rule out that this may change in the near future.

How to configure it?

Kong GW can be configured in three ways; do remember that we are deploying Kong in Kubernetes without a database. The options are:

  1. RESTful calls to the Kong Admin API
  2. A custom config YAML file placed in a specific location for Kong to pick up at startup time, or
  3. Kubernetes CRD YAML manifests
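To make the third option concrete, here is a sketch of a KongPlugin CRD enabling the bundled rate-limiting plugin; the name and the limits are just example values:

```shell
# Declare a rate-limiting policy as a Kong CRD; the ingress controller
# picks it up and pushes it into Kong automatically.
kubectl apply -f - <<EOF
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example
plugin: rate-limiting
config:
  minute: 5
  policy: local
EOF
```

The plugin is then attached to a Service or Ingress by annotating it with konghq.com/plugins: rate-limit-example.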

In all cases the changes are applied instantly, cluster-wide or even across multiple clusters.

The second option does not really apply to us. We could put that file in a ConfigMap and mount it on the pod, but this is absolutely not recommended.

The first option is a good place to start testing your configurations and checking that they are applied properly. Also, if you have the Kong custom YAML file from the second method, you can feed that file through the Kong Admin API.
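In DB-less mode the Admin API is mostly read-only, but it exposes a /config endpoint that accepts a full declarative file. So the file from the second method can be pushed like this, assuming the Admin API is reachable on localhost:8001 and kong.yml is your declarative config:

```shell
# Push a declarative config file to a DB-less Kong via the Admin API.
# kong.yml is the declarative file described in option 2.
curl -sS -X POST http://localhost:8001/config \
  -F config=@kong.yml
```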

As you probably guessed, the third approach is the one we like, and there are a lot of reasons for it:

  • Kubernetes friendly
  • Declarative approach
  • Can be version controlled
  • Can be automated
  • Easier and faster rollbacks

Once more, throughout the following articles we will focus on this way of configuring Kong.

Can it be extended if so how?

Kong GW is built on top of NGINX and OpenResty, so it is coded in the Lua language.

As mentioned above, Kong is a batteries-included type of application, and as such it comes with a lot of plugins already available that will suit most use cases; if none fits, it can be extended. You can write your own plugin and distribute it as you like.

To be continued…

There are more Kong GW related articles coming, so stay tuned, and feel free to leave a comment if you want to read about a particular subject.

Resources and links of interest



Head of DevOps @ ThirdPartyTrust. Full time father and husband. Outdoor enthusiast. And passionate about open source solutions.