Multi Amazon VPC Kong Konnect Data Plane 3.7 deployment with EKS 1.30 and AWS Transit Gateway

Claudio Acquaviva
Jun 7, 2024 · 16 min read


Introduction

Native hybrid deployment capability is one of the most critical benefits Kong brings to the table. Having the Control Plane fully managed by Kong while we decide how, when, and where the Data Planes are deployed provides a powerful and highly flexible solution for the Kong Gateway Cluster implementation.

Considering a fully AWS-based deployment, one very compelling topology we usually face is the multi-VPC scenario. In such a situation, the services and applications run on one VPC while Kong, protecting these workloads, runs on a second VPC. Of course, in a deployment like this we should work with private IPs, since exposing the applications with public ELBs is not recommended.

Kong Konnect Data Plane and AWS Transit Gateway

AWS provides several options to solve VPC-to-VPC connectivity, including VPC Peering, AWS PrivateLink, etc. For Kong Gateway Cluster implementations, AWS Transit Gateway is the recommended solution.

AWS Transit Gateway implements a hub-and-spoke architecture where VPCs can be attached and start communicating with each other. The following diagram depicts the architecture where the Konnect Data Plane Nodes and the Upstream Services run on different VPCs and connect through AWS Transit Gateway. Note that the Upstream Services are exposed with an Internal Network Load Balancer (NLB), meaning that, normally, only components running in the same VPC may consume them. Through AWS Transit Gateway, they also become consumable by components running on other VPCs attached to it. That is the case of the Konnect Data Plane Nodes.

Kong Konnect Dedicated Cloud Gateways

Recently, Kong launched Dedicated Cloud Gateways. With them, you can have your Data Plane Nodes fully managed by Kong in Konnect. Dedicated Cloud Gateways provide some very important benefits:

  • Kong handles gateway upgrades automatically.
  • You can deploy your Dedicated Cloud Gateway in public or private modes.
  • Dedicated Cloud Gateways can automatically manage the Data Plane auto-scaling.

This blog post describes how to deploy your self-managed Data Plane Nodes and get them connected to your Upstream Services running on a different VPC by leveraging AWS Transit Gateway.

In the second part of the blog post we will examine the Dedicated Cloud Gateway capability provided by Konnect to deploy fully managed Data Planes.

AWS Networking highlights

An Amazon VPC is a logically isolated virtual network where we deploy resources like EC2s, EKS Clusters, etc. The following diagram was taken from the AWS VPC documentation page:

Typically, a VPC is defined in an AWS region and has multiple Subnets. A Subnet is a range of IP addresses in the VPC and must reside in a single Availability Zone.

An Availability Zone (AZ) is an isolated location defined in an AWS Region. A Region has multiple Availability Zones. For example, here are the AZs of the “us-east-2” (Ohio) AWS Region:

% aws ec2 describe-availability-zones --region us-east-2 | jq '.AvailabilityZones[].ZoneName'
"us-east-2a"
"us-east-2b"
"us-east-2c"

The Availability Zones are connected via low-latency links to provide replication and fault tolerance.

A Subnet that is connected to the Internet is called a Public Subnet; otherwise, it is a Private Subnet. You connect a Subnet to the Internet with an Internet Gateway: after creating the Internet Gateway and attaching it to the VPC, you add a route to the Subnet's Route Table that directs internet-bound traffic to the Internet Gateway.
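For illustration only, here is a minimal sketch of that wiring with the AWS CLI, using placeholder resource Ids. Note that eksctl performs all of these steps for us in the next section:

# Create an Internet Gateway and attach it to the VPC (placeholder Ids).
aws ec2 create-internet-gateway --region us-east-2
aws ec2 attach-internet-gateway --region us-east-2 \
--internet-gateway-id igw-0123456789abcdef0 \
--vpc-id vpc-0123456789abcdef0

# Add a route in the Public Subnet's Route Table sending internet-bound
# traffic (0.0.0.0/0) to the Internet Gateway.
aws ec2 create-route --region us-east-2 \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block "0.0.0.0/0" \
--gateway-id igw-0123456789abcdef0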

Create the EKS Cluster for the Upstream Services

Let’s create two EKS Clusters: one for the Upstream Services and a second one where the Konnect Data Planes are going to run. To create them, we are going to use eksctl, the official CLI for Amazon EKS. By default, eksctl creates each EKS Cluster on a separate VPC. So, after getting the clusters running, we are going to configure AWS Transit Gateway to get both VPCs connected.

Here is the eksctl command. To get better control of the VPC, it explicitly defines the CIDR (Classless Inter-Domain Routing) block the cluster should use for its IP addresses.

eksctl create cluster -f - <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: upstream-eks130
  region: us-east-2
  version: "1.30"

vpc:
  cidr: 11.10.0.0/16

managedNodeGroups:
- name: node-upstream
  instanceType: c5.2xlarge
  minSize: 1
  maxSize: 8
EOF

You can delete the cluster with:

eksctl delete cluster --name upstream-eks130 --region us-east-2

Checking the VPC

By default, eksctl defines a new VPC. You can get its Id and explore it with the following commands.

% aws eks describe-cluster --name upstream-eks130 --region us-east-2 | jq '.cluster.resourcesVpcConfig.vpcId'
"vpc-05beced77af54aca4"

aws ec2 describe-vpcs --region us-east-2 \
--vpc-ids vpc-05beced77af54aca4

Subnets

You can check the Subnets created by eksctl for your VPC. Note that:

  • The Subnets are tagged following the EKS Cluster name.
  • The Name tag has a suffix that indicates whether the Subnet is Public or Private, as well as the Availability Zone where it was created.
% aws ec2 describe-subnets --region us-east-2 \
--filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/*" \
| jq -r '.Subnets[] |
"\(.SubnetId)" +
" - subnet Name: " +
"\(.Tags[] | select(.Key=="Name") | .Value)"'
subnet-002c0622f2852512f - subnet Name: eksctl-upstream-eks130-cluster/SubnetPublicUSEAST2A
subnet-0befef218a0868335 - subnet Name: eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2A
subnet-07cf922431516ec1e - subnet Name: eksctl-upstream-eks130-cluster/SubnetPublicUSEAST2B
subnet-02fe464b49b111955 - subnet Name: eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2C
subnet-058ccefcb5417a67c - subnet Name: eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2B
subnet-05092380f6ede6b32 - subnet Name: eksctl-upstream-eks130-cluster/SubnetPublicUSEAST2C

Internet Gateway

Similarly to the Subnets, you can check the Internet Gateway, also created by eksctl and attached to the VPC:

% aws ec2 describe-internet-gateways --region us-east-2 \
--filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/*" \
| jq -r '.InternetGateways[] |
"\(.InternetGatewayId)" +
" - " +
"\(.Attachments[] | .VpcId)"'
igw-0ba77379ed9ce111b - vpc-05beced77af54aca4

Route Tables

Also by default, eksctl creates one Route Table per Private Subnet and one Route Table for the Public Subnets:

% aws ec2 describe-route-tables --region us-east-2 \
--filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/*" \
| jq -r '.RouteTables[].Tags[] |
select(.Key=="Name") | .Value'
eksctl-upstream-eks130-cluster/PrivateRouteTableUSEAST2A
eksctl-upstream-eks130-cluster/PrivateRouteTableUSEAST2B
eksctl-upstream-eks130-cluster/PublicRouteTable
eksctl-upstream-eks130-cluster/PrivateRouteTableUSEAST2C

For example, here are the subnets associated with the PublicRouteTable:

% aws ec2 describe-route-tables --region us-east-2 \
--filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/PublicRouteTable" \
| jq -r '.RouteTables[].Associations[].SubnetId'
subnet-07cf922431516ec1e
subnet-002c0622f2852512f
subnet-05092380f6ede6b32

Again, they are called Public Subnets because the Route Table they are associated with has a Route to the Internet Gateway:

% aws ec2 describe-route-tables --region us-east-2 \
--filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/PublicRouteTable" | jq -r '.RouteTables[].Routes'
[
  {
    "DestinationCidrBlock": "11.10.0.0/16",
    "GatewayId": "local",
    "Origin": "CreateRouteTable",
    "State": "active"
  },
  {
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": "igw-0ba77379ed9ce111b",
    "Origin": "CreateRoute",
    "State": "active"
  }
]

AWS Load Balancer Controller

As mentioned before, the Upstream Service will be exposed with an Internal NLB. To simplify the deployment, let’s install the AWS Load Balancer Controller in the cluster. The AWS Load Balancer Controller supports several installation processes. The process described next is the recommended one, based on IRSA (IAM Roles for Service Accounts). Please refer to the official documentation to learn more about the Controller.

IAM OIDC provider

The Controller requires the OIDC provider for the cluster. You can turn that on with another eksctl command:

eksctl utils associate-iam-oidc-provider \
--cluster upstream-eks130 \
--region us-east-2 \
--approve

IAM Policy

The Controller also requires a collection of permissions to provision Load Balancers. The following iam_policy.json file has all the needed permissions. Download the file and create the IAM Policy with the following commands:

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.8.0/docs/install/iam_policy.json

aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam-policy.json

Kubernetes Service Account and IAM Role

Now, we need to create and associate both a Kubernetes Service Account and an IAM Role that references the IAM Policy previously created. Again, eksctl does the job. The following command creates:

  • The aws-load-balancer-controller Kubernetes Service Account inside the kube-system namespace.
  • The upstream-eks130-role IAM Role with the AWSLoadBalancerControllerIAMPolicy policy attached.
eksctl create iamserviceaccount \
--cluster=upstream-eks130 \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--override-existing-serviceaccounts \
--role-name upstream-eks130-role \
--attach-policy-arn=arn:aws:iam::<your_aws_account>:policy/AWSLoadBalancerControllerIAMPolicy \
--region us-east-2 \
--approve

Install AWS Load Balancer Controller

The last step uses the Helm Charts to install the Controller:

helm repo add eks https://aws.github.io/eks-charts

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
--set clusterName=upstream-eks130 \
--set region=us-east-2 \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller

Upstream

The Upstream Service is based on the Go Bench Suite application, useful for benchmarking. Create the namespace first and then apply the declaration. Note the Service uses the AWS Load Balancer Controller annotation to request an Internal NLB.

Also, as an exercise, to get a better understanding of and control over the deployment, we are going to deploy the NLB in the Private Subnet of the same Availability Zone where our EKS Cluster’s Node has been created.

First, check the Availability Zone where our EKS Cluster Node is running:

% kubectl get node -o json | jq -r '.items[].metadata.labels."topology.kubernetes.io/zone"'
us-east-2b

Although it’s not strictly needed, you can also get the Private Subnet Id associated with that Availability Zone:

aws ec2 describe-subnets --region us-east-2 --filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2B" --query "Subnets[0].SubnetId"
"subnet-058ccefcb5417a67c"

Now, deploy the application. The Load Balancer Controller, by default, deploys an NLB. Note that the Service has another annotation to specify that the NLB should be deployed in the same Availability Zone where the Kubernetes Node is running:

kubectl create namespace upstream

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upstream
  namespace: upstream
spec:
  replicas: 1
  selector:
    matchLabels:
      app: upstream
  template:
    metadata:
      labels:
        app: upstream
        version: v1
    spec:
      containers:
      - name: upstream
        image: mangomm/go-bench-suite:latest
        command: ["./go-bench-suite", "upstream"]
---
apiVersion: v1
kind: Service
metadata:
  name: upstream
  namespace: upstream
  labels:
    run: upstream
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "true"
    "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip"
    "service.beta.kubernetes.io/aws-load-balancer-subnets": "eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2B"
spec:
  type: LoadBalancer
  loadBalancerClass: "service.k8s.aws/nlb"
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30080
  selector:
    app: upstream
EOF

Check the Deployment

% kubectl get pod -o wide -n upstream
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
upstream-bc448c694-7m8v5 1/1 Running 0 17h 11.10.72.131 ip-11-10-73-142.us-east-2.compute.internal <none> <none>

Check the NLB

Get the NLB Domain Name with:

% kubectl get service -n upstream
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
upstream LoadBalancer 10.100.137.113 k8s-upstream-upstream-52709624f3-942476a572c8902a.elb.us-east-2.amazonaws.com 8000:30080/TCP 32s

Check the Load Balancer Type and Scheme. Note that the NLB name is part of its Domain Name:

% aws elbv2 describe-load-balancers --region us-east-2 \
--name "k8s-upstream-upstream-52709624f3" | jq ".LoadBalancers[].Type"
"network"

% aws elbv2 describe-load-balancers --region us-east-2 \
--name "k8s-upstream-upstream-52709624f3" | jq ".LoadBalancers[].Scheme"
"internal"

The "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip" annotation specifies that traffic is routed directly to the Pod IPs behind the Kubernetes Service. Check the AWS Load Balancer Controller documentation to get more familiar with the annotations. You can verify the targets with the following commands:

Get the NLB Arn first:

% aws elbv2 describe-load-balancers --region us-east-2 \
--name "k8s-upstream-upstream-52709624f3" | jq ".LoadBalancers[].LoadBalancerArn"
"arn:aws:elasticloadbalancing:us-east-2:<your_aws_account>:loadbalancer/net/k8s-upstream-upstream-52709624f3/942476a572c8902a"

Now get the Target Group Arn:

% aws elbv2 describe-target-groups --region us-east-2 \
--load-balancer-arn "arn:aws:elasticloadbalancing:us-east-2:<your_aws_account>:loadbalancer/net/k8s-upstream-upstream-52709624f3/942476a572c8902a" | jq '.TargetGroups[].TargetGroupArn'
"arn:aws:elasticloadbalancing:us-east-2:<your_aws_account>:targetgroup/k8s-upstream-upstream-903ec36ecd/8bbf2dddae509eef"

And now check the Target pointing to the Pod Address and Port:

% aws elbv2 describe-target-health --region us-east-2 \
--target-group-arn "arn:aws:elasticloadbalancing:us-east-2:<your_aws_account>:targetgroup/k8s-upstream-upstream-903ec36ecd/8bbf2dddae509eef"
{
    "TargetHealthDescriptions": [
        {
            "Target": {
                "Id": "11.10.72.131",
                "Port": 8000,
                "AvailabilityZone": "us-east-2b"
            },
            "HealthCheckPort": "8000",
            "TargetHealth": {
                "State": "healthy"
            }
        }
    ]
}

Consume the Upstream Service

Since it’s an Internal NLB, you won’t be able to consume it externally, for example, from your laptop. Deploy a Pod inside the cluster to consume the NLB and the Upstream Service:

kubectl run --rm=true -i --tty ubuntu --image=claudioacquaviva/ubuntu-awscli:0.4 -- /bin/bash

# http k8s-upstream-upstream-52709624f3-942476a572c8902a.elb.us-east-2.amazonaws.com:8000/json/valid
HTTP/1.1 200 OK
Content-Length: 67
Content-Type: text/plain; charset=utf-8
Date: Sat, 01 Jun 2024 22:26:31 GMT
Server: fasthttp
{
    "time": "2024-06-01 22:26:31.68602284 +0000 UTC m=+182.160841052"
}

Exit the Pod to delete it.

Create the EKS Cluster for the Konnect Data Plane

Similarly to what we’ve done, create another EKS Cluster, this time for the Konnect Data Planes. Again, we explicitly define the CIDR for the cluster.

eksctl create cluster -f - <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: kong37-eks130
  region: us-east-2
  version: "1.30"

vpc:
  cidr: 12.10.0.0/16

managedNodeGroups:
- name: node-kong
  instanceType: c5.2xlarge
  minSize: 1
  maxSize: 8
EOF

You can delete the cluster with:

eksctl delete cluster --name kong37-eks130 --region us-east-2

Try to consume the Upstream Service

As expected, if you try to consume the same Upstream Service, this time from the new EKS Cluster, the request will hang:

kubectl create namespace kong

kubectl run -n kong --rm=true -i --tty ubuntu --image=claudioacquaviva/ubuntu-awscli:0.4 -- /bin/bash

# http k8s-upstream-upstream-52709624f3-942476a572c8902a.elb.us-east-2.amazonaws.com:8000/json/valid
http: error: Request timed out (0s).

Create and Configure AWS Transit Gateway

Create TGW

With the two EKS Clusters and respective VPCs in place, it’s time to create the Transit Gateway and attach the VPCs to it. The following command creates it:

aws ec2 create-transit-gateway --region us-east-2 \
--tag-specification 'ResourceType=transit-gateway,Tags=[{Key="Name",Value="tgw1"}]'

Check if it’s available with:

aws ec2 describe-transit-gateways --region us-east-2 \
--filters "Name=tag:Name,Values=tgw1" --query "TransitGateways[].State"

And get the Transit Gateway Id with:

aws ec2 describe-transit-gateways --region us-east-2 \
--filters "Name=tag:Name,Values=tgw1" --query "TransitGateways[].TransitGatewayId"
[
    "tgw-05300030c4f2b9fec"
]

Attach the Upstream VPC to the Transit Gateway

Make sure your Transit Gateway is available before starting to attach the VPCs to it. There are many options to do it but, basically, we need to specify which Subnets should be used by the Transit Gateway. We are going to restrict it to the Subnet where the NLB is deployed. At least one Subnet must be selected, and at most one Subnet per Availability Zone. Please read the Transit Gateway Attachment documentation to learn more about it.

Get the Upstream Cluster’s VPC Id with:

aws eks describe-cluster --name upstream-eks130 --region us-east-2 --query "cluster.resourcesVpcConfig.vpcId"
"vpc-05beced77af54aca4"

To attach the VPC to the Transit Gateway, we need to specify the VPC’s Subnet Ids. Get the Private Subnet Id where the application and NLB are deployed. That’s the Subnet the Transit Gateway will use to route traffic.

aws ec2 describe-subnets --region us-east-2 --filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/SubnetPrivateUSEAST2B" --query "Subnets[].SubnetId"
[
    "subnet-058ccefcb5417a67c"
]

Attach the VPC using its Id, Subnets and the Transit Gateway Id:

aws ec2 create-transit-gateway-vpc-attachment --region us-east-2 \
--transit-gateway-id tgw-05300030c4f2b9fec \
--vpc-id vpc-05beced77af54aca4 \
--subnet-ids "subnet-058ccefcb5417a67c" \
--tag-specification 'ResourceType=transit-gateway-attachment,Tags=[{Key="Name",Value="tgw1-upstream"}]'

Check if the attachment is available:

aws ec2 describe-transit-gateway-vpc-attachments --region us-east-2 --filters "Name=tag:Name,Values=tgw1-upstream" --query "TransitGatewayVpcAttachments[].State"

Attach the Konnect Data Plane VPC to the Transit Gateway

Repeat the same process for the second VPC. However, we are going to use three existing Public Subnets, where the Deployment and Pods run by default.

Get the VPC Id:

aws eks describe-cluster --name kong37-eks130 --region us-east-2 --query "cluster.resourcesVpcConfig.vpcId"
"vpc-04e3b5a5f9de06d12"

Now get the three Public Subnet Ids:

aws ec2 describe-subnets --region us-east-2 --filters "Name=tag:Name,Values=eksctl-kong37-eks130-cluster/SubnetPublic*" --query "Subnets[].SubnetId"
[
    "subnet-056ff1b1c971313aa",
    "subnet-0929867562e51469b",
    "subnet-083b2f9e57ed6e273"
]

Attach the VPC similarly to what we did previously:

aws ec2 create-transit-gateway-vpc-attachment --region us-east-2 \
--transit-gateway-id tgw-05300030c4f2b9fec \
--vpc-id vpc-04e3b5a5f9de06d12 \
--subnet-ids "subnet-056ff1b1c971313aa" "subnet-0929867562e51469b" "subnet-083b2f9e57ed6e273" \
--tag-specification 'ResourceType=transit-gateway-attachment,Tags=[{Key="Name",Value="tgw1-kong"}]'

Check if your attachment is available:

aws ec2 describe-transit-gateway-vpc-attachments --region us-east-2 --filters "Name=tag:Name,Values=tgw1-kong" --query "TransitGatewayVpcAttachments[].State"

VPC Route Tables

There’s one last and very important step left: we have to tell the VPCs when and how to route the network traffic to the Transit Gateway. We do that by adding new routes to the VPC’s route tables.

Konnect Data Plane Cluster Routes

Let’s start with the Konnect Data Plane Cluster’s Route Tables. By default, eksctl tags the Route Tables with some predefined names. If you get the PublicRouteTable, you see there are two Routes defined:

  • A local route for VPC-internal traffic
  • A route to the Internet Gateway
% aws ec2 describe-route-tables --region us-east-2 --filters "Name=tag:Name,Values=eksctl-kong37-eks130-cluster/PublicRouteTable" --query "RouteTables[].RouteTableId"
[
    "rtb-08efb2bda04c12687"
]

% aws ec2 describe-route-tables --region us-east-2 --route-table-ids rtb-08efb2bda04c12687 --query "RouteTables[].Routes[]"
[
    {
        "DestinationCidrBlock": "12.10.0.0/16",
        "GatewayId": "local",
        "Origin": "CreateRouteTable",
        "State": "active"
    },
    {
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": "igw-0396932251e4d591e",
        "Origin": "CreateRoute",
        "State": "active"
    }
]

The new Route tells the VPC to redirect traffic destined for the CIDR we used when creating the Upstream EKS Cluster to the Transit Gateway:

aws ec2 create-route --region us-east-2 \
--route-table-id rtb-08efb2bda04c12687 \
--destination-cidr-block "11.10.0.0/16" \
--transit-gateway-id tgw-05300030c4f2b9fec

Upstream Cluster Routes

Again, we are going to work with the Route Table defined for the same Private Subnet. Get the Route Table Id:

aws ec2 describe-route-tables --region us-east-2 --filters "Name=tag:Name,Values=eksctl-upstream-eks130-cluster/PrivateRouteTableUSEAST2B" --query "RouteTables[].RouteTableId"
[
    "rtb-07fbc75dbdd76d2c5"
]

Add a new Route that sends traffic destined for the Kong EKS Cluster’s CIDR to the Transit Gateway:

aws ec2 create-route --region us-east-2 \
--route-table-id rtb-07fbc75dbdd76d2c5 \
--destination-cidr-block "12.10.0.0/16" \
--transit-gateway-id tgw-05300030c4f2b9fec

Try to consume the Upstream Service again

You should be able to consume the Upstream Service now:

# http k8s-upstream-upstream-52709624f3-942476a572c8902a.elb.us-east-2.amazonaws.com:8000/json/valid
HTTP/1.1 200 OK
Content-Length: 70
Content-Type: text/plain; charset=utf-8
Date: Sun, 02 Jun 2024 14:49:59 GMT
Server: fasthttp
{
    "time": "2024-06-02 14:49:59.790928843 +0000 UTC m=+59190.265747067"
}

Kong Konnect Data Plane deployment

In this next step we are going to deploy the self-managed Konnect Data Plane in its EKS Cluster.

AWS Load Balancer Controller

Repeat the process we did for the Upstream EKS Cluster to install the AWS Load Balancer Controller, including:

  • Turning the IAM OIDC Provider on.
  • Creating the Kubernetes Service Account and IAM Role with the eksctl create iamserviceaccount command (since we’ve used the same AWS account to create both Clusters, you can reuse the same IAM Policy). Name them accordingly.
  • Installing the AWS Load Balancer Controller with Helm. A condensed sketch of the three steps follows this list.
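For reference, here is a condensed sketch of those three steps applied to the kong37-eks130 cluster. It reuses the AWSLoadBalancerControllerIAMPolicy created earlier; the kong37-eks130-role name is just an illustrative choice:

# Turn the IAM OIDC Provider on for the Kong cluster.
eksctl utils associate-iam-oidc-provider \
--cluster kong37-eks130 \
--region us-east-2 \
--approve

# Create the Kubernetes Service Account and IAM Role (illustrative role name).
eksctl create iamserviceaccount \
--cluster=kong37-eks130 \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--override-existing-serviceaccounts \
--role-name kong37-eks130-role \
--attach-policy-arn=arn:aws:iam::<your_aws_account>:policy/AWSLoadBalancerControllerIAMPolicy \
--region us-east-2 \
--approve

# Install the Controller with Helm.
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
--set clusterName=kong37-eks130 \
--set region=us-east-2 \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller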

Digital Certificate and Private Key pair issuing

First of all, we need to create the Private Key and Digital Certificate that both the Konnect Control Plane and Data Plane use to build the mTLS connection.

For the purpose of this blog post, the secure communication will be based on PKI Mode. You can use several tools to issue the pair, including a simple OpenSSL command like this:

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout ./kongcp1.key \
-out ./kongcp1.crt \
-days 1095 \
-subj "/CN=konnect_cp1" \
-addext "extendedKeyUsage=serverAuth,clientAuth"

The -addext "extendedKeyUsage=serverAuth,clientAuth" option states that the certificate can be used for both server and client authentication, meaning that, from the Konnect standpoint, it can be used by both the Control Plane and the Data Plane.
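If you want to double check the extension, a quick inspection of the certificate (a simple sketch, assuming OpenSSL is available locally) looks like this:

# Print the certificate details and show the Extended Key Usage section.
openssl x509 -in ./kongcp1.crt -noout -text | grep -A1 "Extended Key Usage"

The output should list "TLS Web Server Authentication, TLS Web Client Authentication".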

Control Plane creation

The first thing to do is create the new Konnect Control Plane (CP). You need to have a Konnect PAT (Personal Access Token) in order to send requests to Konnect. Read the Konnect PAT documentation page to learn how to generate one.

Create a Konnect Control Plane with the following command. It configures PKI Mode for the CP and DP communication, meaning we are going to use the same Public Key for both CP and DP.

Create an environment variable with your PAT:

PAT=kpat_eIYnOh7L8HSRM….

Create the Control Plane with:

curl -X POST \
https://us.api.konghq.com/v2/control-planes \
--header "Authorization: Bearer $PAT" \
--header 'Content-Type: application/json' \
--header 'accept: application/json' \
--data '{
  "name": "cp1",
  "description": "Control Plane 1",
  "cluster_type": "CLUSTER_TYPE_HYBRID",
  "labels": {},
  "auth_type": "pki_client_certs"
}'

Get the CP Id with:

CP_ID=$(curl -s https://us.api.konghq.com/v2/control-planes \
--header "Authorization: Bearer $PAT" | jq -r '.data[] | select(.name=="cp1") | .id')

Get the CP’s Endpoints with:

% curl -s https://us.api.konghq.com/v2/control-planes/$CP_ID \
--header "Authorization: Bearer $PAT" | jq -r ".config"
{
  "control_plane_endpoint": "https://9863135b6c.us.cp0.konghq.com",
  "telemetry_endpoint": "https://9863135b6c.us.tp0.konghq.com",
  "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
  "auth_type": "pki_client_certs",
  "cloud_gateway": false,
  "proxy_urls": []
}

Now we need to add the Digital Certificate. Use the CP Id in your request:

cert="{\"cert\": $(jq -sR . ./kongcp1.crt)}"

curl -X POST https://us.api.konghq.com/v2/control-planes/$CP_ID/dp-client-certificates \
--header "Authorization: Bearer $PAT" \
--header 'Content-Type: application/json' \
--header 'accept: application/json' \
--data "$cert"

Konnect Data Plane

Now, inject the Digital Certificate and Private Key pair into a Kubernetes Secret:

kubectl create secret tls kong-cluster-cert -n kong --cert=./kongcp1.crt --key=./kongcp1.key

Here’s the declaration we’re going to use. The main point here is that we are asking AWS to deploy another NLB, this time for the Konnect Data Plane. Please check the documentation to learn about all the options to deploy the Data Plane:

cat > values.yaml << 'EOF'
image:
  repository: kong/kong-gateway
  tag: "3.7"

secretVolumes:
- kong-cluster-cert

admin:
  enabled: false

manager:
  enabled: false

proxy:
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "false"

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 9863135b6c.us.cp0.konghq.com:443
  cluster_server_name: 9863135b6c.us.cp0.konghq.com
  cluster_telemetry_endpoint: 9863135b6c.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 9863135b6c.us.tp0.konghq.com
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false
EOF

Deploy the Konnect Data Plane with a Helm command:

helm install kong kong/kong -n kong --values ./values.yaml

You should see the Data Plane running:

% kubectl get pod -n kong
NAME READY STATUS RESTARTS AGE
kong-kong-6755865687-t8hfk 1/1 Running 0 63s

You can also check the Data Plane Node in the Konnect GUI.
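If you prefer the command line, the Konnect API also exposes the connected Data Plane Nodes; assuming the node-listing endpoint is available for your region, something like this should show the new node:

# List the Data Plane Nodes connected to the Control Plane (raw JSON output).
curl -s https://us.api.konghq.com/v2/control-planes/$CP_ID/nodes \
--header "Authorization: Bearer $PAT" | jq .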

Kong Gateway Service and Route

Now, let’s create a new Kong Service and Route. You can use the Konnect GUI if you like or, again, the Konnect RESTful API:

Kong Gateway Service

The Kong Gateway Service will refer to the Upstream EKS Cluster’s NLB. Due to the Transit Gateway attachments, the Kong Data Plane should be able to consume it.

http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services name=service1 \
url='http://k8s-upstream-upstream-52709624f3-942476a572c8902a.elb.us-east-2.amazonaws.com:8000' \
Authorization:"Bearer $PAT"

Get your new Gateway Service Id with:

SERVICE_ID=$(http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services/service1 \
Authorization:"Bearer $PAT" | jq -r ".id")

Kong Route

Use the Service Id to define the Kong Route:

http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services/$SERVICE_ID/routes name='route1' paths:='["/route1"]' Authorization:"Bearer $PAT"

Consume the Route

Get the Load Balancer DNS name

% kubectl get service kong-kong-proxy -n kong -o json | jq -r ".status.loadBalancer.ingress[].hostname"
k8s-kong-kongkong-29bdf46bcc-5a7c65d4d83660fe.elb.us-east-2.amazonaws.com

Consume the Kong Route. Again, since both VPCs are attached to the Transit Gateway, the Data Plane should be able to consume the NLB.

http k8s-kong-kongkong-29bdf46bcc-5a7c65d4d83660fe.elb.us-east-2.amazonaws.com/route1/json/valid
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 70
Content-Type: text/plain; charset=utf-8
Date: Sun, 02 Jun 2024 17:35:52 GMT
Server: fasthttp
Via: kong/3.7.0.0-enterprise-edition
X-Kong-Proxy-Latency: 1
X-Kong-Request-Id: c5347455eae58a3b06952c05edfaa633
X-Kong-Upstream-Latency: 4
{
    "time": "2024-06-02 17:35:52.751754738 +0000 UTC m=+69143.226572949"
}

Conclusion

This blog post described a Kong Konnect Data Plane deployment that connects to an Upstream Service running on a different VPC through AWS Transit Gateway.

In the second part of this series we are going to explore how to deploy a fully managed Dedicated Cloud Gateway, which also leverages AWS Transit Gateway in a multi-account scenario.

Kong Konnect simplifies API management and improves security for all services infrastructure. Try it for free today!
