Kong Gateway Enterprise 3.4 and Amazon Elastic Kubernetes Service (EKS) 1.27

Hybrid Mode Overview

Claudio Acquaviva
15 min read · Aug 28, 2023

One of the most powerful capabilities provided by Kong Gateway Enterprise is the support for Hybrid deployments. In other words, it implements distributed API Gateway clusters with multiple instances running in several environments at the same time.

In this sense, Kong Gateway Enterprise provides a topology option, named Hybrid Mode, with a total separation of the Control Plane (CP) and Data Plane (DP).

The Control Plane (CP) is where admins apply configuration and where the Admin API is served. The Data Plane (DP) serves the proxy traffic. Each DP node is connected to one of the CP nodes. Instead of accessing the database contents directly, as in the traditional deployment method, the DP nodes maintain a connection with the CP nodes and receive the latest configuration from them.
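
Just to illustrate the split, the roles boil down to a handful of Kong configuration properties. In this tutorial we set them through Helm values rather than editing kong.conf directly, so the snippet below is only a conceptual sketch (host names and paths are placeholders):

# Control Plane (conceptual kong.conf excerpt)
role = control_plane
cluster_cert = /path/to/cluster.crt
cluster_cert_key = /path/to/cluster.key

# Data Plane (conceptual kong.conf excerpt)
role = data_plane
database = off
cluster_control_plane = <control-plane-host>:8005
cluster_telemetry_endpoint = <control-plane-host>:8006
cluster_cert = /path/to/cluster.crt
cluster_cert_key = /path/to/cluster.key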

Kong and AWS Reference Architecture

Both Control Plane and Data Plane run on an Elastic Kubernetes Service (EKS) Cluster in different namespaces.

  • The communication between the Control Plane and the Data Planes is based on mTLS tunnels. The Control Plane publishes APIs and policies across all existing Data Planes using a specific tunnel. On the other hand, using another tunnel, each Data Plane reports metrics about API request processing back to the Control Plane.
  • The PostgreSQL database sits behind the CP and is deployed in the same namespace as the CP. The database is used as the CP metadata repository and is required for some specific Kong Gateway Enterprise capabilities such as Kong Developer Portal, Kong Vitals, etc. For production-ready deployments we recommend consuming an external Amazon RDS for PostgreSQL infrastructure.
  • Kong Data Planes do not require a database as they are connected to the Kong Control Plane.

And a distributed and hybrid mode architecture, supporting both Cloud and On-Premises workloads, would look like this:

  • The Control Plane runs on an EKS cluster in AWS Cloud. It is used by admins to create APIs, policies and API documentation based on Swagger, OpenAPI, etc.
  • Data Plane #1 runs on an on-prem EKS Anywhere cluster to expose the services and microservices deployed in all environments we may have, including application servers, legacy systems and EKS Anywhere clusters.
  • Data Plane #1 leverages AWS services like Cognito for OIDC-based authentication processes, OpenSearch for log processing, etc. to implement policies to make sure the microservices or services are being safely consumed.
  • The architecture includes Data Plane #2, which is running on the AWS Cloud along with the Control Plane, to support the microservices and services that have been migrated from the on-prem environment or new microservices developed in cloud environments like ECS, EC2/ASG, etc.

This tutorial is intended for labs and PoCs only. There are many aspects and processes typically implemented in production sites that are not described here, for example digital certificate issuing, cluster monitoring, etc. For a production-ready deployment, refer to the Kong on AWS CDK Constructs, available here.

This blog post will focus on the simpler deployment, running both Control Plane and Data Plane on the same Cluster in different namespaces.

Amazon Elastic Kubernetes Services (EKS)

First of all, let's create our EKS cluster with the following eksctl command:

eksctl create cluster --name kong34-eks127 --version 1.27 \
--region us-west-1 \
--nodegroup-name standard-workers \
--node-type t3.2xlarge \
--nodes 1 \
--with-oidc \
--max-pods-per-node 200
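
Cluster creation usually takes 15-20 minutes. When it finishes, eksctl updates your local kubeconfig automatically, so a quick sanity check is enough to confirm access (we asked for a single t3.2xlarge worker node):

kubectl get nodes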

Since we're going to install our own PostgreSQL in the same namespace as the Kong CP, we need to enable the EBS CSI driver. Create a new IAM role for the CSI driver first:

eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster kong34-eks127 \
--region us-west-1 \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole

Now, use the role for the CSI add-on. Replace the account ID below with your own AWS account ID when running the command:

eksctl create addon --name aws-ebs-csi-driver \
--cluster kong34-eks127 \
--region us-west-1 \
--service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKS_EBS_CSI_DriverRole
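
Before moving on, you can confirm the driver pods are up and that a StorageClass is available for the PostgreSQL volume (a quick check; the driver pods take a minute to appear):

kubectl get pods -n kube-system | grep ebs-csi
kubectl get storageclass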

Kong Gateway Enterprise Setup

Now, we are ready to deploy Kong Gateway Enterprise. We can summarize the setup in the following steps:

  • Generate the certificates required to implement mTLS between the Kong Data Planes (DP) and the Kong Control Plane (CP).
  • Use Helm to install the Kong CP and DP.
  • Scale the Kong DP nodes on EKS using the Horizontal Pod Autoscaler.
  • Access the Kong DP and CP through an AWS Load Balancer.

mTLS Setup

Mutual TLS, or mTLS for short, is a method for mutual authentication. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private key. The information within their respective TLS certificates provides additional verification.

mTLS is often used in a Zero Trust security framework to verify users, devices, and servers within an organization. It can also help keep APIs secure.

Zero Trust means that no user, device, or network traffic is trusted by default, an approach that helps eliminate many security vulnerabilities.

The communication between the Control plane and the Data planes is based on mTLS tunnels. The Control plane publishes APIs and policies across all existing Data Planes using a specific tunnel. On the other hand, using another tunnel, each Data Plane reports back the Control Plane with metrics regarding API request processing.

Before using Hybrid mode, you need a certificate/key pair. Kong Gateway provides two modes for handling certificate/key pairs:

  • Shared mode (default): use the Kong CLI to generate a certificate/key pair, then distribute copies across nodes. The certificate/key pair is shared by both CP and DP nodes.
  • PKI mode: provide certificates signed by a central certificate authority (CA). Kong validates both sides by checking if they are from the same CA. This eliminates the risks associated with transporting private keys.

Create Certificate/key pair

To keep the deployment simple, we're going to use Shared Mode and OpenSSL to issue the pair. The command below creates two files, cluster.key and cluster.crt.

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout ./cluster.key -out ./cluster.crt \
-days 1095 -subj "/CN=kong_clustering"
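
If you want to double-check the pair before distributing it, inspect the certificate's subject and validity:

openssl x509 -in ./cluster.crt -noout -subject -dates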

Create namespaces for the Kong Control Plane and Kong Data Plane

kubectl create namespace kong
kubectl create namespace kong-dp

Note that we need to create two namespaces here, as we are going to install the CP and DP in the same EKS cluster, but in different namespaces.

Create a Kubernetes secret with the pair

For Control Plane namespace

kubectl create secret tls kong-cluster-cert --cert=./cluster.crt --key=./cluster.key -n kong

For Data Plane namespace

kubectl create secret tls kong-cluster-cert --cert=./cluster.crt --key=./cluster.key -n kong-dp
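
Optionally, confirm the same certificate is now stored as a secret in both namespaces:

kubectl get secret kong-cluster-cert -n kong
kubectl get secret kong-cluster-cert -n kong-dp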

Mount the license key as a Kubernetes Secret

You will need a Kong license to enable all the enterprise capabilities. You can get one by contacting Kong Sales here.

If you don't supply a Kong Enterprise license key, you can only run it in "Free Mode", where all the enterprise features will be disabled.

The license is provided as JSON. Save the license text provided to you in a file named license.json.

For Control Plane namespace:

kubectl create secret -n kong generic kong-enterprise-license --from-file=license=./license.json

For Data Plane namespace:

kubectl create secret -n kong-dp generic kong-enterprise-license --from-file=license=./license.json

As secrets are namespaced resources, we mount license.json as a secret in both namespaces.

Kong Control Plane Installation

Install Helm chart

Add Kong’s Helm repository

helm repo add kong https://charts.konghq.com
helm repo update

Install the Control Plane

Install the helm chart using the following command.

helm install kong kong/kong -n kong \
--set ingressController.enabled=true \
--set ingressController.installCRDs=false \
--set ingressController.image.repository=kong/kubernetes-ingress-controller \
--set ingressController.image.tag=2.11 \
--set image.repository=kong/kong-gateway \
--set image.tag=3.4 \
--set env.database=postgres \
--set env.role=control_plane \
--set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
--set cluster.enabled=true \
--set cluster.tls.enabled=true \
--set cluster.tls.servicePort=8005 \
--set cluster.tls.containerPort=8005 \
--set clustertelemetry.enabled=true \
--set clustertelemetry.tls.enabled=true \
--set clustertelemetry.tls.servicePort=8006 \
--set clustertelemetry.tls.containerPort=8006 \
--set proxy.enabled=false \
--set admin.enabled=true \
--set admin.http.enabled=true \
--set admin.type=LoadBalancer \
--set enterprise.enabled=true \
--set enterprise.portal.enabled=false \
--set enterprise.rbac.enabled=false \
--set enterprise.smtp.enabled=false \
--set enterprise.license_secret=kong-enterprise-license \
--set manager.enabled=true \
--set manager.type=LoadBalancer \
--set secretVolumes[0]=kong-cluster-cert \
--set postgresql.enabled=true \
--set postgresql.postgresqlUsername=kong \
--set postgresql.postgresqlDatabase=kong \
--set postgresql.postgresqlPassword=kong \
--set admin.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
--set manager.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"

From the data plane communication perspective, the most important settings are:

  • env.role=control_plane to configure this Kong Gateway instance as the control plane
  • cluster.tls.servicePort=8005 as the API and policy publication mTLS tunnel port
  • clustertelemetry.tls.servicePort=8006 as the tunnel port the data plane will use to report metrics back to the control plane

Validating the Installation

kubectl get all -n kong

The Kong Control Plane installation may take 2-3 minutes to complete. You should see output similar to the following, with the pods in the Running state.

Expected Output

NAME                                  READY   STATUS      RESTARTS   AGE
pod/kong-kong-74557454fc-wlhjs        2/2     Running     0          42s
pod/kong-kong-init-migrations-27dwk   0/1     Completed   0          42s
pod/kong-postgresql-0                 1/1     Running     0          42s

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                         AGE
service/kong-kong-admin              LoadBalancer   10.100.91.71     a7be2711e233545bea1f782b8b9a2309-264700326.us-east-1.elb.amazonaws.com   8001:31289/TCP,8444:31214/TCP   42s
service/kong-kong-cluster            ClusterIP      10.100.146.81    <none>                                                                    8005/TCP                        42s
service/kong-kong-clustertelemetry   ClusterIP      10.100.139.225   <none>                                                                    8006/TCP                        42s
service/kong-kong-manager            LoadBalancer   10.100.138.99    aa1f5a3f441674a23b5badcea19fda4a-668602520.us-east-1.elb.amazonaws.com   8002:32533/TCP,8445:30547/TCP   42s
service/kong-kong-portal             NodePort       10.100.34.148    <none>                                                                    8003:30389/TCP,8446:30193/TCP   42s
service/kong-kong-portalapi          NodePort       10.100.158.236   <none>                                                                    8004:31747/TCP,8447:30134/TCP   42s
service/kong-postgresql              ClusterIP      10.100.47.150    <none>                                                                    5432/TCP                        42s
service/kong-postgresql-hl           ClusterIP      None             <none>                                                                    5432/TCP                        42s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-kong   1/1     1            1           43s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-kong-74557454fc   1         1         1       43s

NAME                               READY   AGE
statefulset.apps/kong-postgresql   1/1     43s

NAME                                  COMPLETIONS   DURATION   AGE
job.batch/kong-kong-init-migrations   1/1           32s        43s

Checking the Kong Gateway Enterprise REST Admin API port

Use the Load Balancer created during the deployment

export CONTROL_PLANE_LB=$(kubectl get service kong-kong-admin --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' -n kong)

curl -s http://$CONTROL_PLANE_LB:8001 | jq .version

Expected Output

"3.4.0.0"

Configuring Kong Manager Service

Kong Manager is the Control Plane's Admin GUI. It needs the Admin API URI configured, pointing to the same Load Balancer address:

kubectl patch deployment -n kong kong-kong -p "{\"spec\": { \"template\" : { \"spec\" : {\"containers\":[{\"name\":\"proxy\",\"env\": [{ \"name\" : \"KONG_ADMIN_API_URI\", \"value\": \"$CONTROL_PLANE_LB:8001\" }]}]}}}}"
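
The patch triggers a rolling restart of the Control Plane pod; you can wait for it to complete before opening Kong Manager:

kubectl rollout status deployment kong-kong -n kong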

Logging in to Kong Manager

Log in to Kong Manager using its specific load balancer:

export KONG_MANAGER=$(kubectl get svc -n kong kong-kong-manager --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')

echo $KONG_MANAGER:8002

Open the address from the output above in a local browser and you should see the Kong Manager landing page.

Kong Data Plane Installation

Install Helm chart

helm install kong-dp kong/kong -n kong-dp \
--set ingressController.enabled=false \
--set image.repository=kong/kong-gateway \
--set image.tag=3.4 \
--set env.database=off \
--set env.role=data_plane \
--set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
--set env.lua_ssl_trusted_certificate=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_control_plane=kong-kong-cluster.kong.svc.cluster.local:8005 \
--set env.cluster_telemetry_endpoint=kong-kong-clustertelemetry.kong.svc.cluster.local:8006 \
--set proxy.enabled=true \
--set proxy.type=LoadBalancer \
--set enterprise.enabled=true \
--set enterprise.license_secret=kong-enterprise-license \
--set enterprise.portal.enabled=false \
--set enterprise.rbac.enabled=false \
--set enterprise.smtp.enabled=false \
--set manager.enabled=false \
--set portal.enabled=false \
--set portalapi.enabled=false \
--set secretVolumes[0]=kong-cluster-cert \
--set proxy.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"

Again, the most important settings are:

  • env.role=data_plane to configure this Kong Gateway instance as a Data Plane.
  • env.database=off because, unlike the Control Plane, the Data Plane does not require a database to store its metadata; instead, it gets all API and policy definitions through the mTLS tunnel it builds with the Control Plane.
  • env.cluster_control_plane=kong-kong-cluster.kong.svc.cluster.local:8005 referring to the Control Plane's Kubernetes FQDN to get the Data Plane connected to it.
  • env.cluster_telemetry_endpoint=kong-kong-clustertelemetry.kong.svc.cluster.local:8006 referring to the Control Plane's telemetry service FQDN.
  • proxy.type=LoadBalancer to define how to expose the Data Plane to the API consumers.
  • proxy.annotations to ask for an AWS NLB.

Checking the Installation

kubectl get all -n kong-dp

Sample Output

NAME                               READY   STATUS    RESTARTS   AGE
pod/kong-dp-kong-b98c776fc-87qtf   1/1     Running   0          28s

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
service/kong-dp-kong-proxy   LoadBalancer   10.100.210.146   ab1f04a70e5fe4b7fac778cfff4840ec-1485985339.us-east-1.elb.amazonaws.com   80:32280/TCP,443:32494/TCP   29s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-dp-kong   1/1     1            1           29s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-dp-kong-b98c776fc   1         1         1       29s

Checking the Data Plane from the Control Plane

curl $CONTROL_PLANE_LB:8001/clustering/status

Expected Output

{
  "43fccc25-1a05-4dce-bf04-dc8841cf8091": {
    "config_hash": "df22b8971e544f31f20e46209f04b6fe",
    "hostname": "kong-dp-kong-b98c776fc-87qtf",
    "ip": "192.168.56.203",
    "last_seen": 1672841370
  }
}

Checking the Data Plane Proxy

Use the Load Balancer created during the deployment

export DATA_PLANE_LB=$(kubectl get svc -n kong-dp kong-dp-kong-proxy --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')

curl $DATA_PLANE_LB

This step could take 2-3 minutes to work, as the Kong Data Plane connects to the Kong Control Plane and the load balancer finishes provisioning. You may receive curl: (6) Could not resolve host. If you see that message, wait 2-3 minutes and retry.

Expected Output

{
  "message": "no Route matched with those values"
}
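
If the hostname keeps failing to resolve, a small retry loop (just a convenience sketch) can poll until the NLB is reachable and then repeat the request:

until curl -s --max-time 5 http://$DATA_PLANE_LB >/dev/null; do
  echo "waiting for the Data Plane NLB to become reachable..."
  sleep 15
done
curl $DATA_PLANE_LB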

Data Plane Elasticity

One of the most important capabilities provided by Kubernetes is the ability to easily scale out a Deployment. With a single command we can create or terminate pod replicas in order to optimally support a given throughput.

This capability is especially interesting for Kubernetes applications like Kong for Kubernetes Ingress Controller.

Here’s our deployment before scaling it out:

kubectl get service -n kong-dp

Sample Output

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
kong-dp-kong-proxy   LoadBalancer   10.100.210.146   ab1f04a70e5fe4b7fac778cfff4840ec-1485985339.us-east-1.elb.amazonaws.com   80:32280/TCP,443:32494/TCP   6m59s

Notice that, at this point in the workshop, there is only one pod taking data plane traffic.

kubectl get pod -n kong-dp -o wide

Sample Output

NAME                           READY   STATUS    RESTARTS   AGE     IP               NODE                             NOMINATED NODE   READINESS GATES
kong-dp-kong-b98c776fc-87qtf   1/1     Running   0          7m51s   192.168.56.203   ip-192-168-35-145.ec2.internal   <none>           <none>

Manual Scaling Out

Now, let's scale the deployment out, creating 3 replicas of the pod:

kubectl scale deployment.v1.apps/kong-dp-kong -n kong-dp --replicas=3

Check the Deployment again and now you should see 3 replicas of the pod.

kubectl get pod -n kong-dp -o wide

Sample Output

NAME                           READY   STATUS    RESTARTS   AGE     IP               NODE                             NOMINATED NODE   READINESS GATES
kong-dp-kong-b98c776fc-6gwgh   1/1     Running   0          12s     192.168.46.35    ip-192-168-35-145.ec2.internal   <none>           <none>
kong-dp-kong-b98c776fc-87qtf   1/1     Running   0          8m22s   192.168.56.203   ip-192-168-35-145.ec2.internal   <none>           <none>
kong-dp-kong-b98c776fc-8q9bg   1/1     Running   0          12s     192.168.52.71    ip-192-168-35-145.ec2.internal   <none>           <none>

As we can see, the 2 new Pods have been created and are up and running. If we check our Kubernetes Service again, we will see it has been updated with the new IP addresses. That allows the Service to implement Load Balancing across the Pod replicas.

kubectl describe service kong-dp-kong-proxy -n kong-dp

Sample Output

Name:                     kong-dp-kong-proxy
Namespace:                kong-dp
Labels:                   app.kubernetes.io/instance=kong-dp
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=kong
                          app.kubernetes.io/version=3.1
                          enable-metrics=true
                          helm.sh/chart=kong-2.14.0
Annotations:              meta.helm.sh/release-name: kong-dp
                          meta.helm.sh/release-namespace: kong-dp
Selector:                 app.kubernetes.io/component=app,app.kubernetes.io/instance=kong-dp,app.kubernetes.io/name=kong
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.210.146
IPs:                      10.100.210.146
LoadBalancer Ingress:     ab1f04a70e5fe4b7fac778cfff4840ec-1485985339.us-east-1.elb.amazonaws.com
Port:                     kong-proxy  80/TCP
TargetPort:               8000/TCP
NodePort:                 kong-proxy  32280/TCP
Endpoints:                192.168.46.35:8000,192.168.52.71:8000,192.168.56.203:8000
Port:                     kong-proxy-tls  443/TCP
TargetPort:               8443/TCP
NodePort:                 kong-proxy-tls  32494/TCP
Endpoints:                192.168.46.35:8443,192.168.52.71:8443,192.168.56.203:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  8m46s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   8m44s  service-controller  Ensured load balancer

Reduce the number of Pods back to 1, as we will now turn on the Horizontal Pod Autoscaler.

kubectl scale deployment.v1.apps/kong-dp-kong -n kong-dp --replicas=1

HPA — Horizontal Pod Autoscaler

HPA (Horizontal Pod Autoscaler) is the Kubernetes resource that automatically controls the number of Pod replicas. With HPA, Kubernetes is able to keep up with the requests produced by consumers while maintaining a given service level.

Based on CPU utilization or custom metrics, HPA starts and terminates Pod replicas and updates the Service endpoints, so load balancing keeps covering all the replicas.

HPA is described at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ . Also, there’s a nice walkthrough at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

Kubernetes defines its own units for CPU and memory. You can read more about them at: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ . We use these units to configure our Deployment for HPA.
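
For reference, the autoscaling values we will pass to Helm below correspond roughly to an HPA object like the following. This is only a sketch of the resulting resource; the Helm chart creates it for us, so there is nothing to apply manually:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-dp-kong
  namespace: kong-dp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-dp-kong
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75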

Install Metrics Server

Installation of the metrics server is required for HPA to work. Install the metrics server as follows:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

And test it as follows

kubectl get pod -n kube-system

Now you should see a new metrics-server pod in the Running state.

Sample Output

NAME                                  READY   STATUS    RESTARTS   AGE
aws-node-6cv6p                        1/1     Running   0          20h
coredns-79989457d9-bwbzr              1/1     Running   0          20h
coredns-79989457d9-rm2fk              1/1     Running   0          20h
ebs-csi-controller-59c8fb5d88-bqpnj   6/6     Running   0          28m
ebs-csi-controller-59c8fb5d88-qgms2   6/6     Running   0          28m
ebs-csi-node-r72f8                    3/3     Running   0          28m
kube-proxy-sg8mb                      1/1     Running   0          20h
metrics-server-679799879f-wvlns       1/1     Running   0          35s
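
You can also confirm the metrics pipeline is serving data; it may take a minute after the metrics-server pod starts before numbers appear:

kubectl top pod -n kong-dp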

Turn HPA on

Still using Helm, let's upgrade our Data Plane deployment to include new, HPA-specific settings:

....
--set resources.requests.cpu="300m" \
--set resources.requests.memory="300Mi" \
--set resources.limits.cpu="1200m" \
--set resources.limits.memory="800Mi" \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=1 \
--set autoscaling.maxReplicas=5 \
--set autoscaling.metrics[0].type=Resource \
--set autoscaling.metrics[0].resource.name=cpu \
--set autoscaling.metrics[0].resource.target.type=Utilization \
--set autoscaling.metrics[0].resource.target.averageUtilization=75

The new settings define the amount of CPU and memory each Pod should request and may consume at most. At the same time, the autoscaling settings tell HPA how to decide when to instantiate new Pod replicas.

Here’s the final Helm command:

helm upgrade kong-dp kong/kong -n kong-dp \
--set ingressController.enabled=false \
--set image.repository=kong/kong-gateway \
--set image.tag=3.4 \
--set env.database=off \
--set env.role=data_plane \
--set env.cluster_cert=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_cert_key=/etc/secrets/kong-cluster-cert/tls.key \
--set env.lua_ssl_trusted_certificate=/etc/secrets/kong-cluster-cert/tls.crt \
--set env.cluster_control_plane=kong-kong-cluster.kong.svc.cluster.local:8005 \
--set env.cluster_telemetry_endpoint=kong-kong-clustertelemetry.kong.svc.cluster.local:8006 \
--set proxy.enabled=true \
--set proxy.type=LoadBalancer \
--set enterprise.enabled=true \
--set enterprise.license_secret=kong-enterprise-license \
--set enterprise.portal.enabled=false \
--set enterprise.rbac.enabled=false \
--set enterprise.smtp.enabled=false \
--set manager.enabled=false \
--set portal.enabled=false \
--set portalapi.enabled=false \
--set secretVolumes[0]=kong-cluster-cert \
--set proxy.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
--set resources.requests.cpu="300m" \
--set resources.requests.memory="300Mi" \
--set resources.limits.cpu="1200m" \
--set resources.limits.memory="800Mi" \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=1 \
--set autoscaling.maxReplicas=5 \
--set autoscaling.metrics[0].type=Resource \
--set autoscaling.metrics[0].resource.name=cpu \
--set autoscaling.metrics[0].resource.target.type=Utilization \
--set autoscaling.metrics[0].resource.target.averageUtilization=75

Checking HPA

After submitting the command, check the Deployment again. Since we're not consuming the Data Plane yet, we should see a single Pod running. In the next sections we're going to send requests to the Data Plane, and new Pods will be created to handle them.

kubectl get pod -n kong-dp

Sample Output

NAME                           READY   STATUS    RESTARTS   AGE
kong-dp-kong-889d59d8f-qfmxl   1/1     Running   0          44s

You can check the HPA status with:

kubectl get hpa -n kong-dp

Sample Output

NAME           REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
kong-dp-kong   Deployment/kong-dp-kong    0%/75%    1         5         1          2m24s
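
Once the /httpbin Route is in place (created in the next sections), a crude loop like this is enough to generate traffic and watch the HPA add replicas; it's only a sketch, not a proper load test:

# generate continuous traffic against the Data Plane proxy
while true; do
  curl -s -o /dev/null http://$DATA_PLANE_LB/httpbin/get
done

# in another terminal, watch the HPA react
kubectl get hpa -n kong-dp -w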

Kong Service and Route

In order to define an API to Kong, we’ll first need to add a Service. A Kong Service refers to the upstream APIs and microservices it manages.

Before we can start making requests against the Kong Service, you will need to add a Kong Route to it. Routes specify how (and if) requests are sent to their Services after they reach Kong. There can be multiple Routes to a Service.

The following Kong Service is based on the HTTPbin service deployed in the same kong-dp namespace:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: kong-dp
  labels:
    app: httpbin
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: kong-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kong/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 8000
EOF

With HTTPbin deployed, create the Kong Service through the Admin API:

http $CONTROL_PLANE_LB:8001/services \
name=httpservice \
url='http://httpbin.kong-dp.svc.cluster.local:8000'

The following Kong Route exposes the previously created Kong Service with the /httpbin path:

http $CONTROL_PLANE_LB:8001/services/httpservice/routes \
name='httpbinroute' \
paths:='["/httpbin"]'

The Kong Control Plane is responsible for publishing any construct defined, including Kong Services and Routes, to the Kong Data Plane. So, both should be available for consumption:

http $DATA_PLANE_LB/httpbin/get

Kong Plugins

After creating Services and Routes, we can start defining policies to protect and control the upstream APIs. Kong provides an extensive list of ready-to-use plugins. Each plugin is responsible for implementing specific functionality. For example:

  • Authentication/Authorization: plugins to implement all sorts of security mechanisms such as OIDC (OpenID Connect), Basic Authentication, LDAP, Mutual TLS (mTLS), API Key, OPA (Open Policy Agent) based access control policies, etc.
  • Rate Limiting: to limit how many HTTP requests can be made in a given period of time.
  • Serverless: integration with AWS Lambda Functions.
  • Log Processing: to externalize all requests processed by the Gateway to 3rd party infrastructures.
  • Analytics and Monitoring: to provide metrics to external systems including Datadog and Prometheus.
  • Traffic Control: plugins to implement Canary Releases, Mocking endpoints, Routing policies based on request headers, etc.
  • Proxy Caching: to cache commonly requested responses in the Gateway.
  • Transformations: plugins to transform requests before routing them to the upstreams and plugins to transform their responses before returning to the Consumers, transform GraphQL upstreams into a REST API, etc.

As an example, the following command defines and applies a Rate Limiting policy:

http -f post $CONTROL_PLANE_LB:8001/plugins \
name=rate-limiting \
instance_name=rl1 \
config.minute=5 \
config.policy=local
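
Since the policy allows 5 requests per minute, a quick way to see it in action is to send a few requests in a row and watch the status codes; after the fifth request within the same minute you should start seeing HTTP 429 responses:

# send 7 requests and print only the HTTP status code of each
for i in $(seq 1 7); do
  curl -s -o /dev/null -w "request $i -> %{http_code}\n" http://$DATA_PLANE_LB/httpbin/get
done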

You can delete the Kong Plugin, Route and Service with:

http delete $CONTROL_PLANE_LB:8001/plugins/rl1
http delete $CONTROL_PLANE_LB:8001/services/httpservice/routes/httpbinroute
http delete $CONTROL_PLANE_LB:8001/services/httpservice

Kong Ingress Controller

For Kubernetes deployments, Kong provides KIC (Kong Ingress Controller). Using KIC, you can manage Kong objects through standard Kubernetes Ingresses and Kong-specific Custom Resource Definitions (CRDs), all with Kubernetes-native tooling.

For example, the following declaration creates an Ingress, which will be processed by KIC, to expose the same Kubernetes service httpbin with path /httpbin. The declaration can be submitted to the Kubernetes cluster as a regular kubectl command.

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbinroute
  namespace: kong-dp
  annotations:
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /httpbin
        pathType: ImplementationSpecific
        backend:
          service:
            name: httpbin
            port:
              number: 8000
EOF

You should get the same results consuming the Ingress:

http $DATA_PLANE_LB/httpbin/get

Kong Ingress Controller CRDs

KIC also provides specific CRDs to manage Kong objects like plugins. For example, here's the declaration that applies the same Rate Limiting policy we used before.

cat <<EOF | kubectl apply -f -
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl1
  namespace: kong-dp
config:
  minute: 5
  policy: local
plugin: rate-limiting
EOF

You should add an annotation to the Ingress to enable the plugin:

kubectl annotate ingress httpbinroute -n kong-dp konghq.com/plugins=rl1
