Hybrid API management with Kong

Ivan Rylach
Checkr Engineering
Jun 9, 2020

What is an API Gateway?

An API Gateway is a common architectural pattern for managing access to APIs. It sits between your clients and API providers and acts as a single entry point for traffic into the system. This centralized solution allows organizations to set up a governance model that defines how traffic is routed to servers (L4 or L7 of the OSI model, load balancing) and what security requirements apply to it (TLS cipher suites, request authentication, IP whitelisting). It also helps you enforce service level agreements (rate limiting and quotas) and provides auditing and analytics (request logging and metering). The challenge arises when the organization grows to the point where, on one hand, teams should follow the established governance model and, on the other, they should be able to work and iterate on their solutions rapidly, ideally in a self-service manner.

At Checkr we chose Kong as our API Gateway.

https://konghq.com/

Imperative management

Engineering teams had to interact with a GUI to make changes to routing.

Originally, when Kong was introduced into the system, the organization was quite small and imperative management was enough for one team to handle the API Gateway. If you wanted to make a change (like adding a new route or a rate limit), you could simply do it through a GUI.

At some point we had to introduce a user acceptance testing (UAT) environment so that customers and partners could evaluate Checkr APIs ahead of the main launch. Almost immediately, engineering teams ran into diverging API Gateway configurations: teams would enable some functionality in production and miss it in UAT. We had to rethink the whole approach.
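To illustrate the problem, here is a minimal sketch in Python of what such an imperative change looks like when made against Kong's Admin API (which a GUI typically drives under the hood); the route name and rate limit are hypothetical. The same call has to be repeated by hand against every environment's Admin API, which is exactly how production and UAT drift apart.

import requests

ADMIN_URL = "http://localhost:8001"  # Kong Admin API of one environment

# Hypothetical change: attach a rate-limiting plugin to the "web-api" route.
# Repeating this manually for production, UAT, etc. is error-prone.
response = requests.post(
    f"{ADMIN_URL}/routes/web-api/plugins",
    json={"name": "rate-limiting", "config": {"minute": 100}},
)
response.raise_for_status()
print("created plugin", response.json()["id"])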

Declarative management

It was clear that we needed a solution that would:
1. allow us to deploy the system without manual steps,
2. allow us to view the history of changes,
3. require a review and approval from other members of the team.
A declarative way of managing configurations was the winner.

Luckily, there are tools that allow us to switch from imperative management to a declarative one. Since we had already been using Kubernetes and Helm to handle our workloads, we decided to pick the Kong Kubernetes Ingress Controller to help us with this challenge.

Helm supports inheritance of declarations, which allows us to create Kubernetes Ingress rules, apply Kong plugins to them once in the parent file, and then override only the target domain for different environments.

Here is an example of what such a setup looks like.

1. KongPlugin resource to enable JWT authentication

# Source: jwt.yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: jwt
plugin: jwt

2. Service and Ingress resources with the konghq.com/plugins annotation

# Source: values.yaml
services:
  web-service:
    ports:
      - port: 3000
        targetPort: 3000
    selector: web
ingresses:
  web-api:
    annotations:
      konghq.com/plugins: jwt

3. We augment the web-api ingress in the declaration for a specific environment (staging in this case).

# Source: staging.yaml
ingresses:
  web-api:
    hosts:
      - host: api.staging.com
        serviceName: web-service
        servicePort: 3000

4. When Helm renders the template, with the environment-specific values overriding the shared ones, we get the following result:

# Source: ingresses.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-api
  annotations:
    konghq.com/plugins: jwt
  labels:
    app: web-api
    chart: microservice
    release: checkr
spec:
  rules:
    - host: api.staging.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-service
              servicePort: 3000

Dealing with Kong consumers

As mentioned previously, we also leverage Kong to authenticate incoming requests by validating an API key or an authorization bearer token. Every API key must be associated with a Kong consumer, so that we can enable rate limiting per application identity. Since new developers and partners can sign up for the platform in a self-service manner, we cannot manage API keys and consumers using a declarative approach: by design, that would require manual intervention, which would negatively impact the user experience. Developers should be able to invoke our APIs as soon as they sign up.
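For illustration, here is a minimal sketch in Python of how a consumer and its credentials could be provisioned programmatically through Kong's Admin API when a new app signs up; the function name, app identifier, and rate limit are hypothetical, and error handling is omitted.

import requests

KONG_ADMIN_URL = "http://localhost:8001"  # Kong Admin API (control plane)

def provision_app_identity(app_id: str, api_key: str, requests_per_minute: int) -> None:
    # 1. Create the Kong consumer that represents the application identity.
    requests.post(
        f"{KONG_ADMIN_URL}/consumers",
        json={"username": app_id},
    ).raise_for_status()

    # 2. Attach the API key as a key-auth credential of that consumer.
    requests.post(
        f"{KONG_ADMIN_URL}/consumers/{app_id}/key-auth",
        json={"key": api_key},
    ).raise_for_status()

    # 3. Apply a rate limit scoped to this consumer.
    requests.post(
        f"{KONG_ADMIN_URL}/consumers/{app_id}/plugins",
        json={"name": "rate-limiting", "config": {"minute": requests_per_minute}},
    ).raise_for_status()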

Additionally, during the migration to declarative configurations, we had to upgrade to a newer version of Kong with zero downtime, which required running multiple Kong deployments concurrently with the same set of API keys and consumers.

Checkr has been using Kafka for a while, so to guarantee eventual consistency between multiple Kong deployments we implemented a total order broadcast.

We use Kafka consumer groups to implement a total order broadcast and make sure that multiple Kong deployments operate with the same set of consumers, their API keys, and rate-limiting settings.

Here, the App Identity API service publishes all app changes to a Kafka topic. The Consumers Controller, which is responsible for the management of Kong consumers, has one deployment per corresponding Kong Control Plane (Admin API). Kafka guarantees the order of messages within a partition, so, to make sure that all changes for a given app are processed in the same order in which they occurred, the App Identity API uses the app's unique identifier as the topic partition key.
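A minimal sketch of both sides of this pipeline in Python (using kafka-python) might look as follows; the topic name, group id, and the apply_change_to_kong helper are hypothetical stand-ins for the actual services.

import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "app-identity-changes"  # hypothetical topic name

# Producer side (App Identity API): key every event by the app's unique id,
# so all changes for one app land on the same partition and keep their order.
producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_app_change(app_id: str, change: dict) -> None:
    producer.send(TOPIC, key=app_id, value=change)
    producer.flush()

# Consumer side (Consumers Controller): one deployment per Kong Control Plane,
# each with its own consumer group, so every control plane sees the full,
# ordered stream of changes.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=["kafka:9092"],
    group_id="consumers-controller-us-east-1",  # distinct per Kong deployment
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
)

for message in consumer:
    apply_change_to_kong(message.value)  # e.g. the Admin API calls sketched above
    consumer.commit()  # commit the offset only after Kong has been updated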

Multi-region availability

The option to propagate Kong consumer changes through Kafka also helps us run multi-region deployments of the system, giving us a global control plane for app identities. We leverage Kafka MirrorMaker to replicate the Kafka topic into another region, where we deploy a local Consumers Controller.

Cross-region propagation of Kong resources.

Putting it all together

As a result, we have ended up with a hybrid system: routing configurations, which do not change frequently and require auditing and manual validation, are stored in git, while consumers, their API keys, and rate-limiting settings are propagated into Kong automatically with guarantees of eventual convergence.

A word of gratitude

This project required a lot of effort from our Platform and SRE teams. Many thanks to Arturo Contreras, Brandon Hsieh, Michael Bonifacio, Ravi Ambati, Renjie Xu, Saso Matejina, Stefan Liedle, Zhuojie Zhou, and everybody else who was involved in this journey.

If you are interested in building a fairer future, take a look at Checkr’s careers page and reach out to us!
