Kong API Gateway - From Zero to Production

Let’s start by exploring the API gateway architecture pattern and then dive deep into the details of running a production-grade Kong API gateway.

Arun Ramakani
The Startup
7 min read · May 22, 2020


In many organizations, backend APIs are consumed by multiple front-facing applications such as mobile apps, web apps, and kiosk applications. In addition, many other internal and external integrators may need to consume these APIs. To support these requirements, we end up applying some of the architecture characteristics below to every individual API. That is a lot of work.

  1. Authentication/Authorization
  2. Monitoring
  3. Logging
  4. Traffic control
  5. Caching
  6. Audit and Security
  7. API Administration

API Gateway Architecture Pattern

The API gateway architecture pattern attempts to take all of these cross-cutting concerns out of the individual APIs and manage them in a single plane. This provides several architecture advantages:

  • A unified way to apply cross-cutting concerns
  • Out-of-the-box plugins to apply cross-cutting concerns quickly
  • A framework for building custom plugins
  • Managing security in a single plane
  • Reduced operation complexity
  • Easy governance of 3rd-party developers and integrators
  • Finally, savings in development and operations cost

Kong API Gateway

Kong is one of the popular open-source API gateways; it can help us manage APIs deployed anywhere, from a simple infrastructure to a complex multi-cloud environment. Kong’s ability to handle different protocols such as REST, gRPC, and GraphQL enables us to manage almost all of our APIs. Kong strikes a good balance between its open-source and enterprise offerings.

If you are an organization focused on open source, Kong comes in an open-source version with very good community support, along with a solid base framework for extending Kong with your own plugins. If you are an enterprise looking for support and additional features for large enterprise needs, the Kong Enterprise offering comes to the rescue.

Kong Anatomy

Kong involves two important components:

a) The Kong data plane, which handles the actual API proxying, applying all the cross-cutting concerns configured for the given API. It is built on top of Nginx.

b) The Kong control plane, which receives the configuration on how to proxy an API and persists it. Kong comes with two different persistence models: 1) DB-less mode 2) DB mode.

DB vs DB-less Mode

Kong DB mode persists all the configuration in a database, with two choices: Cassandra or PostgreSQL.

I personally prefer PostgreSQL. Operating PostgreSQL is simple, and managed offerings are available from most cloud providers. You can apply your own database trade-off analysis.

Cassandra has the advantage of horizontal scaling to match Kong’s own horizontal scalability; you are the best judge of whether you operate at that scale. It is also important to note that Kong keeps all the configuration in memory for better performance; the DB is reached mostly to refresh the config on change.

The recommended way to maintain a single source of truth for the configuration is to keep it all in Git. This enables us to use GitOps, scale Kong nodes easily, and swap the DB with minimal effort. We will look at this in a later section of this blog.

To reduce complexity and enable more flexible deployment patterns, Kong 1.1 shipped with a DB-less mode. In this mode, the entire configuration is kept in memory, loaded from a declarative configuration file. This enables horizontal scaling and works well with continuous delivery/deployment pipelines.
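As a sketch of what this looks like in practice (the container name and config file path here are illustrative), a DB-less Kong container is started by pointing it at a declarative config file:

```shell
# Sketch: run Kong in DB-less mode; no Postgres/Cassandra required.
# /tmp/kong.yml is an illustrative path to your declarative config file.
docker run -d --name kong-dbless \
  -v /tmp/kong.yml:/kong/declarative/kong.yml \
  -e "KONG_DATABASE=off" \
  -e "KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -p 8000:8000 -p 127.0.0.1:8001:8001 \
  kong:latest
```

With `KONG_DATABASE=off`, the node reads its whole configuration from the mounted file at startup instead of reaching a database.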

Note that some plugins, like rate limiting, will not work fully in DB-less mode.

Data Plane, Control Plane Segregation & Security

With the latest versions of Kong, it is possible to separate the control and data planes in a Kong cluster.

The control plane is where operators access Kong to push configs and fetch logs, whereas the data plane carries the traffic that is actually being proxied.

On each Kong node, one port is exposed to serve the API traffic (data plane), and another for operators to configure Kong (control plane). The ability to enable and disable the Kong data/control plane on a node gives us the flexibility of:

  • Making a node control-plane only for operators
  • Making a node data-plane only for API traffic

This makes it possible to proxy API traffic through one network segment and operate Kong from a different network segment, providing better persistence-layer isolation and security.
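For example (a sketch; the ports are Kong’s defaults, and `off` is a standard value for Kong’s listen directives), the two node roles can be expressed with the following environment configuration:

```shell
# Control-plane-only node for operators: turn the proxy listener off.
KONG_PROXY_LISTEN=off
KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl

# Data-plane-only node for API traffic: turn the admin listener off.
KONG_ADMIN_LISTEN=off
KONG_PROXY_LISTEN=0.0.0.0:8000, 0.0.0.0:8443 ssl
```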

Kong Installation

Kong comes with a wide range of installation options. Some are directly supported by Kong, while others are supported by the community. Take a look at the different installation options in the Kong documentation. I personally use Docker-based deployment because it is portable. Let’s look at a single-node Kong setup with PostgreSQL.

Step 1: Create a Docker network in which Kong and PostgreSQL can reach each other

docker network create kong-net

Step 2: Start the PostgreSQL container on the “kong-net” network

docker run -d --name kong-database --network=kong-net -p 5432:5432 -e "POSTGRES_USER=kong" -e "POSTGRES_DB=kong" -e "POSTGRES_PASSWORD=kong" postgres:9.6

Step 3: Run the migration script on the Postgres DB to get it ready for Kong

docker run --rm --network=kong-net -e "KONG_DATABASE=postgres" -e "KONG_PG_HOST=kong-database" -e "KONG_PG_PASSWORD=kong" kong:latest kong migrations bootstrap

Step 4: Start the actual Kong container

docker run -d --name kong --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PG_PASSWORD=kong" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
  -p 8000:8000 -p 8443:8443 \
  -p 127.0.0.1:8001:8001 -p 127.0.0.1:8444:8444 \
  kong:latest

You can check if kong is up and running with the following command.

curl -i http://localhost:8001/

Kong For Kubernetes

If all of your APIs live within a Kubernetes cluster, the best way to deploy Kong is the Kong Ingress Controller. This installs a couple of containers in a Pod, one acting as the control plane and the other as the data plane. The advantage of this model is that it is Kubernetes-native and automatically discovers APIs through the API server.

Installing Kong into Kubernetes is pretty simple; run the kubectl command below.

kubectl apply -f https://bit.ly/kong-ingress-dbless

This will set up a few CRDs and a Pod running both the controller and the Kong proxy, with the configuration persisted as Kubernetes resources (backed by etcd).

Kong Configuration

Let’s quickly go over a few terms needed to configure Kong:

  • Service — The Kong object that binds the upstream API to Kong

curl -i -X POST --url http://localhost:8001/services/ --data 'name=example-service' --data 'url=http://mockbin.org'

In the above example, we create a service named "example-service" pointing to the upstream API "http://mockbin.org". "http://localhost:8001/services/" is the Kong admin endpoint for creating a service object.

  • Route — The Kong object that binds a service to a route path for API consumers

curl -i -X POST --url http://localhost:8001/services/example-service/routes --data 'hosts[]=example.com' --data 'paths[]=/mockbin'

In the above example, we create a route for the "example-service" service pointing to the path "/mockbin". "http://localhost:8001/services/{service-name}/routes" is the Kong admin endpoint for creating a route object under a specific service.

Now hit the proxy to check that it is working. Since the route above matches on the "example.com" host, pass that host explicitly:

curl -i http://localhost:8000/mockbin --header "Host: example.com"

There are many nuances to configuring APIs; see the detailed Kong documentation.

If you are using Kong for Kubernetes, the concepts remain the same, but the configuration is written as Kubernetes resource YAML.
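For instance, the service-and-route pair created above through the admin API could instead be expressed as an Ingress resource (a sketch; the resource names and the backend Service are illustrative, and the `kubernetes.io/ingress.class: kong` annotation tells the Kong Ingress controller to pick it up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /mockbin
            pathType: Prefix
            backend:
              service:
                name: example-service   # a Kubernetes Service assumed to exist
                port:
                  number: 80
```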

The Plugins

Plugins are extensions we can use to extend Kong. There are many open-source plugin implementations of cross-cutting concerns, such as Basic Authentication, JWT, LDAP Authentication, IP Restriction, Rate Limiting, Prometheus, and Zipkin. The Enterprise version of Kong comes with many more plugins. Plugins are a big topic by themselves; read more about the individual plugins in the Kong plugin documentation.
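As an illustration (a sketch, assuming the "example-service" created earlier and the admin API on port 8001), enabling the rate-limiting plugin on a single service is one admin API call:

```shell
# Enable the rate-limiting plugin on "example-service":
# allow at most 5 requests per minute, counted locally on each node.
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=5" \
  --data "config.policy=local"
```

Note that the `local` counting policy is the one that keeps working in DB-less mode, at the cost of per-node rather than cluster-wide counters.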

GitOps With decK

Traditionally, Kong configuration is done through the admin panel or the admin APIs. Both have their own problems:

  1. With the admin panel, we make manual changes; there is no single source of truth or history trail.
  2. Using the admin APIs directly means storing the configuration as HTTP/HTTPS requests, as in the samples above, which is not a convenient way to store configuration.

DB-less mode and Kong for Kubernetes already manage configuration in a declarative fashion, which can easily be version-controlled in Git. For DB mode, we can use a tool named decK to manage declarative configuration.

decK is a CLI tool to configure Kong declaratively using YAML files.

A declarative service and route configuration pointing to an upstream HTTPS service will look like the YAML below.
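For example (a sketch in decK’s declarative format; the names and the upstream URL are illustrative):

```yaml
_format_version: "1.1"
services:
  - name: example-service
    url: https://mockbin.org
    routes:
      - name: example-route
        hosts:
          - example.com
        paths:
          - /mockbin
```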

Take a look at the demo of how Kong works with decK.

In addition to declarative config management, decK can help with drift detection to identify any manual changes to the Kong cluster.

The decK CLI picks up changes to the YAML and applies them to the cluster. If decK finds configuration in the Kong setup beyond what we have in Git, it can raise a drift alert.
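A typical workflow looks like this (a sketch; it assumes decK is installed, kong.yaml is tracked in Git, and the admin API is reachable on its default address):

```shell
deck dump -o kong.yaml   # export the current Kong configuration to YAML
deck diff -s kong.yaml   # detect drift between the Git-tracked YAML and the cluster
deck sync -s kong.yaml   # apply the YAML to the cluster
```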

Let’s front our APIs with a gateway. See you in an upcoming article🏄


#ContinuousDevOps #Kubernetes #Microservices #CloudNativeApps #DevOps #Agile