KONG — The Microservice API Gateway

What is Kong?

Kong is a microservice API gateway. It provides a flexible abstraction layer that securely manages communication between clients and microservices via API. It is also known as an API gateway, API middleware or, in some cases, a service mesh. It has been available as an open-source project since 2015, and its core values are high performance and extensibility.

Kong is a Lua application running in Nginx and made possible by the lua-nginx-module.

The question is: why Kong?

If you are building for web, mobile or IoT (Internet of Things) you will likely end up needing common functionality to run your actual software. Kong can help by acting as a gateway (or a sidecar) for microservices requests while providing load balancing, logging, authentication, rate-limiting, transformations, and more through plugins.

In short, Kong gives you a ready-made foundation that accelerates development time: it supports configurable plugins, it is backed by an active community that keeps it stable, and you don't need to reinvent the wheel.

There are many authentication plugins to choose from: Basic Authentication, JWT and LDAP, up to the most widely used, OAuth2.

Security plugins add extra security layers such as ACL, CORS, Dynamic SSL and IP Restriction.

Traffic control plugins are very useful for keeping costs under control, with rate limiting, request size limiting, response rate limiting and others.

Analytics and monitoring plugins visualise, inspect and monitor API traffic, for example Prometheus, Datadog and Runscope.

Transformation plugins transform requests and responses on the fly, such as Request Transformer and Response Transformer.

Logging plugins log request and response data using the best transport for your infrastructure: TCP, UDP, HTTP, StatsD, Syslog and others.

So, in this article we will walk through the basics of setting up and using Kong.

Install Kong Community Edition

Kong can be installed in multiple operating environments. The easiest installation is with Docker, so this tutorial requires basic knowledge of Docker. If you are new to it, please follow a basic Docker installation and usage tutorial first.

Now let's install Kong.

First — create a Docker network. This network will be used by Kong and our API server:

$ docker network create kong-net
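
Before moving on, you can confirm the network was created (a quick sanity check, not strictly required):

```shell
# List Docker networks filtered by name; kong-net should appear in the output
docker network ls --filter name=kong-net
```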

Second — start a database for Kong. There are two options: Postgres or Cassandra. For now we use Postgres:

$ docker run -d --name kong-database \
--network=kong-net \
-p 5555:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.6
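
Postgres may take a few seconds before it accepts connections, and the migration in the next step will fail if the database is not ready yet. One way to wait for it, using the pg_isready utility that ships inside the postgres image:

```shell
# Poll the Postgres container until it reports it is accepting connections
until docker exec kong-database pg_isready -U kong; do
  echo "waiting for postgres..."
  sleep 2
done
```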

Third — prepare the database by running the migrations with a Kong container:

$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations up

Fourth — start Kong. Once the migrations have run and your database is ready, start a Kong container:

$ docker run -d --name kong \
--network=kong-net \
-e "KONG_LOG_LEVEL=debug" \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 9000:8000 \
-p 9443:8443 \
-p 9001:8001 \
-p 9444:8444 \
kong:latest
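
To double-check how the container ports are mapped to the host, docker port lists them; the proxy (8000/8443) and Admin API (8001/8444) should appear mapped to host ports 9000/9443 and 9001/9444:

```shell
# Show the Kong container's port mappings on the host
docker port kong
```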

Fifth — check the Kong instance:

$ curl -i http://localhost:9001

The response should be:

HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Wed, 18 Jul 2018 03:58:57 GMT
Content-Type: application/json
Connection: keep-alive
Access-Control-Allow-Origin: *

Now Kong is up and ready to be used. The next step is to prepare the API server that contains the service routes and can be accessed like any ordinary REST API.

Set up API server routing using Node.js

Now prepare the API server. For this tutorial we are going to use Node.js.

To keep it simple, please clone the code from GitHub: faren/NodeJS-API-KONG.

The directory should look like this in the terminal:

$ ls -l
total 48
-rw-r--r--  1 farendev staff   186 Jul 18 11:37 Dockerfile
-rw-r--r--@ 1 farendev staff 31716 Jul 16 10:36 Kong.postman_collection.json
-rw-r--r--  1 farendev staff   100 Jul 18 11:37 README.md
-rw-r--r--  1 farendev staff   878 Jul 18 11:37 index.js
-rw-r--r--  1 farendev staff   307 Jul 18 11:37 package.json

Let's build the Docker image and run it by executing the instructions below:

$ docker build -t node_kong .

$ docker run -d --name=node_kong --network=kong-net node_kong

Check that all the containers are running:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13586f83e52 node_kong "npm start" 2 minutes ago Up 2 minutes 10000/tcp node_kong
41156cad5c86 kong:latest "/docker-entrypoint.…" 6 days ago Up 6 days 0.0.0.0:9000->8000/tcp, 0.0.0.0:9001->8001/tcp, 0.0.0.0:9443->8443/tcp, 0.0.0.0:9444->8444/tcp kong
f794a0e9506c postgres:9.6 "docker-entrypoint.s…" 6 days ago Up 6 days 0.0.0.0:5555->5432/tcp kong-database

Check the API server by calling its API. We need the container's IP on the kong-net Docker network; then we open a shell inside the Kong container and call the API from there.

Execute this command in the terminal:

$ docker network inspect kong-net


"Containers": {
"41156cad5c864af4ad8615c051fac8da7f683238a6c8cc42267f02813f14810f": {
"Name": "kong",
"EndpointID": "fe1cec9f6f31a015ab29a100fdd54b609abea11bbfa00f5e9ca67cc6175d7b2f",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"d13586f83e52df8866b9879ba0537d58c21fc1b95978dde0580b017ce1a7b418": {
"Name": "node_kong",
"EndpointID": "5677f7588b7daef391cf8cecec6a3ede0155f99f7d86e0e14dd5970ff0570924",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"f794a0e9506c7330f1cc19c5c390f745823c29dd4603e0d727dae4e8a68caa8d": {
"Name": "kong-database",
"EndpointID": "51737ca4e2a4b0e30d25db86e197e653a81e6206893588f4dae7b4a0a50e2799",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},

Note the IPv4Address of node_kong above (here 172.19.0.4) and use that IP in the curl below. It will almost certainly be different on your machine.
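
Instead of reading the JSON by eye, the container IP can also be extracted directly with a docker inspect format string (a convenience; the output should match the IPv4Address shown above, minus the subnet suffix):

```shell
# Print node_kong's IP address on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' node_kong
```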

$ docker exec -ti kong sh
/ # curl -i 172.19.0.4:10000/api/v1/customers
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 110
ETag: W/"6e-Tf3vAGLC3XH0dFR2pCIzWdG8/5c"
Date: Wed, 18 Jul 2018 10:09:32 GMT
Connection: keep-alive
[{"id":5,"first_name":"Dodol","last_name":"Dargombez"},{"id":6,"first_name":"Nyongot","last_name":"Gonzales"}]

The response above shows that the Node.js API server is alive and serves the REST endpoint GET /api/v1/customers.

Set up Kong as an API gateway to the API server

Now that the Kong engine and the Node.js API service are both running, we can register our API with Kong. The flow will be:

Routes are entry points into Kong and define rules to match client requests. Once a route is matched, Kong proxies the request to its associated service, and the service forwards it to the upstream API server that is ready to serve it.

For example (warning: the IP may be different on every machine):

The API server is live at http://172.19.0.4:10000/api/v1/customers.

We set the route path to /api/v1/customers.

And we set the service host to http://172.19.0.4:10000 and the path to /api/v1/customers.

So when a client sends a request to Kong (in this case Kong listens on localhost:9000) with the route path /api/v1/customers — the full client request being http://localhost:9000/api/v1/customers — Kong will proxy it to 172.19.0.4:10000/api/v1/customers.

To start, please import the Postman collection file kong.postman_collection.json from the GitHub repo NodeJS-API-KONG (https://github.com/faren/NodeJS-API-KONG).

So let's see this in practice, taking a look at the Postman collection that has been imported.

For this tutorial, we want the scenario above: REST endpoints for customers and clients.

First of all, we need to register the customer service and then register the matching routes. Keep in mind that the IP used in this tutorial may be different on your machine; see above for how to get the node_kong IP from the Docker network.

Add service: customers

Find collection Kong, folder Services, POST Services — Create:

POST: localhost:9001/services/

Headers: Content-Type:application/json
Body:
{
"name": "api-v1-customers",
"url": "http://172.19.0.4:10000/api/v1/customers"
}
Response:
{
"host": "172.19.0.4",
"created_at": 1531989815,
"connect_timeout": 60000,
"id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
"protocol": "http",
"name": "api-v1-customers",
"read_timeout": 60000,
"port": 10000,
"path": null,
"updated_at": 1531989815,
"retries": 5,
"write_timeout": 60000
}
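
If you prefer the terminal over Postman, the same service can be registered against Kong's Admin API with curl (remember to substitute your own node_kong IP for 172.19.0.4):

```shell
# Create the service via the Admin API on port 9001
curl -i -X POST http://localhost:9001/services/ \
  -H 'Content-Type: application/json' \
  -d '{"name": "api-v1-customers", "url": "http://172.19.0.4:10000/api/v1/customers"}'
```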

List services

Find collection Kong, folder Services, GET Services — List:

GET: localhost:9001/services/

Response:
{
"next": null,
"data": [
{
"host": "172.19.0.4",
"created_at": 1531989815,
"connect_timeout": 60000,
"id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213",
"protocol": "http",
"name": "api-v1-customers",
"read_timeout": 60000,
"port": 10000,
"path": null,
"updated_at": 1531989815,
"retries": 5,
"write_timeout": 60000
}
]
}

Now that we have created the customers service, next we create routes for it.

Add routes: customers

Find collection Kong, folder Routes, POST Routes — Create (note the API below):

POST: localhost:9001/services/api-v1-customers/routes/

Headers: Content-Type:application/json
Body:
{
"hosts": ["api.ct.id"],
"paths": ["/api/v1/customers"]
}
Response:
{
"created_at": 1531991052,
"strip_path": true,
"hosts": [
"api.ct.id"
],
"preserve_host": false,
"regex_priority": 0,
"updated_at": 1531991052,
"paths": [
"/api/v1/customers"
],
"service": {
"id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
},
"methods": null,
"protocols": [
"http",
"https"
],
"id": "4d9503c3-d826-43e3-9063-ed434a949173"
}
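
The equivalent curl call for creating the route, again against the Admin API:

```shell
# Attach a route (host + path match) to the api-v1-customers service
curl -i -X POST http://localhost:9001/services/api-v1-customers/routes/ \
  -H 'Content-Type: application/json' \
  -d '{"hosts": ["api.ct.id"], "paths": ["/api/v1/customers"]}'
```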

List routes

Find collection Kong, folder Routes, GET Routes — List:

GET: localhost:9001/routes/

Response:
{
"next": null,
"data": [
{
"created_at": 1531991052,
"strip_path": true,
"hosts": [
"api.ct.id"
],
"preserve_host": false,
"regex_priority": 0,
"updated_at": 1531991052,
"paths": [
"/api/v1/customers"
],
"service": {
"id": "d28c20e4-94d3-4c3b-9a0d-688ac8dbf213"
},
"methods": null,
"protocols": [
"http",
"https"
],
"id": "4d9503c3-d826-43e3-9063-ed434a949173"
}
]
}

Now we can access the customers API through Kong (http://localhost:9000/api/v1/customers):

GET: localhost:9000/api/v1/customers

Headers: Host:api.ct.id
Response:
[
{
"id": 5,
"first_name": "Dodol",
"last_name": "Dargombez"
},
{
"id": 6,
"first_name": "Nyongot",
"last_name": "Gonzales"
}
]
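
The same request from the terminal; the Host header is what matches the route we created:

```shell
# Call the API through Kong's proxy port (9000); the Host header selects the route
curl -i -H 'Host: api.ct.id' http://localhost:9000/api/v1/customers
```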

Conclusion

Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong runs in front of any RESTful API and is extended through Plugins, which provide extra functionality and services beyond the core platform.

To better understand the system, this is a typical request workflow of an API that uses Kong:

Once Kong is running, every request being made to the API will hit Kong first, and then it will be proxied to the final API. In between requests and responses Kong will execute any plugin that you decided to install, empowering your APIs. Kong effectively becomes the entry point for every API request.

In the next article, we will enable OAuth2 on Kong.
