Kong Konnect Data Plane 3.5 Deployment and Plugin Configuration with AWS Secrets Manager, IRSA and EKS 1.28

Claudio Acquaviva
10 min read · Dec 11, 2023

Introduction

With the launch of Kong Gateway 3.5, consuming AWS Secrets Manager through IRSA, for both Data Plane deployment and Plugin configuration, has become much easier.

In fact, with Kong Gateway 3.5, AWS Secrets Manager can be configured in multiple ways. To access secrets stored in the AWS Secrets Manager, Kong Gateway needs to be configured with an IAM Role that has sufficient permissions to read the required secret values. Kong Gateway can automatically fetch IAM role credentials based on your AWS environment, observing the following precedence order:

  • Fetch from credentials defined in environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  • Fetch from profile and credential file, defined by AWS_PROFILE and AWS_SHARED_CREDENTIALS_FILE.
  • Fetch from an ECS container credential provider.
  • Fetch from EKS IRSA (IAM Roles for Service Accounts).
  • Fetch from EC2 IMDS metadata. Both v1 and v2 are supported.
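As a sketch of the first two options in the precedence chain, these are the environment variables Kong Gateway inspects (the values below are placeholders, never real credentials):

```shell
# Option 1: static credentials (highest precedence). Placeholder values only.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"

# Option 2: a shared credentials profile, checked if option 1 is absent.
export AWS_PROFILE="kong-dp"
export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/credentials"
```

In an EKS deployment with IRSA you would set none of these, letting Kong fall through to the web-identity credentials injected into the Pod.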

Kong Gateway 3.5 also supports role assumption, which allows you to use a different IAM role to fetch secrets from AWS Secrets Manager. Check the documentation to learn more about Kong Gateway Secret Management with AWS.

This blog post describes how to leverage IRSA to:

  • Deploy a Konnect Data Plane in EKS using Digital Certificates and Private Keys stored in AWS Secrets Manager.
  • Configure a Kong Plugin with secrets also stored in AWS Secrets Manager. As an example, we’re going to explore the Request Transformer Advanced Plugin configuration.

Kong Konnect Control Plane / Data Plane connection and Secrets

A Kong Konnect Data Plane (DP) deployment establishes an mTLS connection with the Konnect Control Plane (CP). The CP and DP follow the Hybrid Mode deployment and implement a secure connection with a Digital Certificate and Private Key pair. The encrypted tunnel is used to publish any API definitions and policies created on the CP to all connected DPs.

AWS Secrets Manager, IRSA (IAM Roles for Service Accounts) and External Secrets Operator

AWS Secrets Manager helps you create and manage the lifecycle of your secrets. Many AWS services can store and use secrets in Secrets Manager, including Amazon EKS clusters.

EKS clusters can consume secrets stored in AWS Secrets Manager through IRSA (IAM Roles for Service Accounts). IRSA is a general AWS mechanism that allows applications running in EKS to access AWS services (including AWS Secrets Manager) in a controlled manner, based on permissions defined in AWS IAM (Identity and Access Management) roles and temporary AWS credentials issued by AWS STS (Security Token Service).

The following diagram shows a high-level overview of the architecture:

To learn more about Kong Konnect and IRSA please refer to the previous blog post “Kong Konnect Data Plane 3.4 and Amazon EKS 1.28 Deployment with AWS Secrets Manager, IRSA and External Secrets Operator”.

Kong Konnect Data Plane Deployment Plan

Let’s get started with the typical “values.yaml” file we use for a Kong Konnect Data Plane. Please check the Konnect documentation to learn more about the Data Plane Deployment:

image:
  repository: kong/kong-gateway
  tag: "3.5"

secretVolumes:
  - kong-cluster-cert

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 1234567890.us.cp0.konghq.com:443
  cluster_server_name: 1234567890.us.cp0.konghq.com
  cluster_telemetry_endpoint: 1234567890.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 1234567890.us.tp0.konghq.com
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false

The “cluster_cert” and “cluster_cert_key” variables define the Digital Certificate and Private Key pair the Data Plane (DP) should use to connect to its Konnect Control Plane. In fact, the standard Konnect Data Plane deployment process, available on the “Self-Managed Hybrid Data Plane Node” page, provides a button to generate the pair, which is then injected into the Kubernetes cluster as a secret.

Please keep in mind this is the configuration file generated by the Konnect Data Plane deployment process. For a production-ready environment you might want to consider other variables to get your Data Plane running. Check the Configuration for Kong Gateway page to learn more about them.

Before running the Helm command, we would have to create a Kubernetes secret holding the certificate and key pair and reference it in our “values.yaml” file.

The idea is to implement a new process where:

  • The pair would be stored in AWS Secrets Manager.
  • The “values.yaml” would refer to new secrets.

AWS Secrets Manager

Digital Certificate and Private Key pair issuing

First of all, we need to create the Private Key and Digital Certificate that both the Konnect Control Plane and Data Plane use to build the mTLS connection.

For the purpose of this blog post, the secure communication will be based on the default “pinned mode”. You can use several tools to issue the pair including simple OpenSSL commands like this:

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout ./kongcp1.key \
-out ./kongcp1.crt \
-days 1095 \
-subj "/CN=konnect-cp1"
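Before uploading the pair, it is worth checking that the certificate carries the expected CN and that the key and certificate actually match. A sketch (it regenerates a pair under /tmp so the original files stay untouched, and writes the EC parameters to a file to avoid bash-only process substitution):

```shell
# Generate the EC parameters to a file, then issue the pair (same settings as above).
openssl ecparam -name secp384r1 -out /tmp/ecparam.pem
openssl req -new -x509 -nodes -newkey ec:/tmp/ecparam.pem \
  -keyout /tmp/kongcp1.key -out /tmp/kongcp1.crt \
  -days 1095 -subj "/CN=konnect-cp1" 2>/dev/null

# The subject should show the CN we set above.
openssl x509 -in /tmp/kongcp1.crt -noout -subject

# Public keys derived from the cert and the private key must be identical.
openssl x509 -in /tmp/kongcp1.crt -noout -pubkey > /tmp/pub-from-crt.pem
openssl pkey -in /tmp/kongcp1.key -pubout > /tmp/pub-from-key.pem
cmp -s /tmp/pub-from-crt.pem /tmp/pub-from-key.pem && echo "pair matches"
```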

Secrets creation

You can create your secrets in AWS Secrets Manager by running the following commands:

aws secretsmanager create-secret --name kongcp1-crt --region us-west-1 --secret-string "{\"cert\": \"$(cat ./kongcp1.crt)\"}"

aws secretsmanager create-secret --name kongcp1-key --region us-west-1 --secret-string "{\"key\": \"$(cat ./kongcp1.key)\"}"
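Note that PEM files span multiple lines, and depending on your shell the naive string interpolation above can embed raw newlines into the JSON, producing an invalid secret string. A safer sketch is to let jq do the escaping and pass the result to `--secret-string` (the demo file below is a stand-in for ./kongcp1.crt):

```shell
# Stand-in for ./kongcp1.crt: any multi-line PEM-like content.
printf 'line1\nline2\n' > /tmp/demo.crt

# jq escapes the newlines correctly inside the JSON string.
crt_json=$(jq -n --arg cert "$(cat /tmp/demo.crt)" '{cert: $cert}')

# Round-trip check: the stored value parses back to the original content.
echo "$crt_json" | jq -r .cert
```

You would then run `aws secretsmanager create-secret --name kongcp1-crt --region us-west-1 --secret-string "$crt_json"` (and the same pattern for the key).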

IRSA (IAM Roles for Service Accounts)

IRSA is a feature that allows you to assign an IAM role to a Kubernetes Service Account. It leverages the Kubernetes support for Dynamic Admission Control, more precisely the Mutating Admission Webhook, preinstalled in any EKS cluster.

The webhook comes into play when a new Pod that refers to a Service Account is scheduled for creation. The Service Account should have an annotation with an AWS IAM Role ARN. As the name implies, the webhook mutates the Pod, injecting the temporary AWS credentials, issued by AWS STS, needed to call the AWS services allowed by the annotated Role. The webhook also injects a new Service Account Token into the Pod.

When a Pod is being created, the webhook calls the Kubernetes API Server, more precisely the OIDC Issuer, to generate a JWT token for the Service Account used in the Pod declaration. A volume is mounted for the new Pod with the Token, similarly to what happens with the default Service Account volume.
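Concretely, after the mutation the container sees the projected token plus two well-known environment variables that AWS SDKs (and Kong Gateway) pick up automatically. A sketch of what to look for, using a sample fragment of a mutated Pod spec (values are illustrative; the real check would be `kubectl get pod <pod> -n kong -o json` against your cluster):

```shell
# Sample of the env section the webhook injects (shape only, values illustrative).
cat > /tmp/mutated-pod.json <<'EOF'
{
  "spec": {
    "containers": [{
      "name": "proxy",
      "env": [
        {"name": "AWS_ROLE_ARN",
         "value": "arn:aws:iam::<your_aws_account>:role/kong35-eks128-role"},
        {"name": "AWS_WEB_IDENTITY_TOKEN_FILE",
         "value": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"}
      ]
    }]
  }
}
EOF

# These two variables drive the sts:AssumeRoleWithWebIdentity call.
jq -r '.spec.containers[0].env[].name' /tmp/mutated-pod.json
```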

EKS Cluster Creation

Create an EKS cluster with eksctl with a command like this:

eksctl create cluster --name kong35-eks128 --version 1.28 --region us-west-1 --nodegroup-name kong-workers --node-type t3.large --nodes 1

Create a Policy with permissions to list the existing Secrets

In order to get our Pod consuming the secrets stored in AWS Secrets Manager, we need a policy defining the permission to access them. The policy will be attached to a Role which will be assumed by the Pod during the deployment.

aws iam create-policy \
  --policy-name list-secrets-policy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "secretsmanager:ListSecrets",
          "secretsmanager:GetSecretValue"
        ],
        "Resource": "*"
      }
    ]
  }'
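The policy above grants access to every secret in the account ("Resource": "*"). For a production environment you would typically scope it to the specific secret ARNs instead; a sketch (the account id is a placeholder, and the trailing `-*` accounts for the random suffix Secrets Manager appends to secret ARNs):

```shell
# A narrower policy document scoped to the two Konnect secrets.
# <your_aws_account> is a placeholder -- substitute your account id.
cat > /tmp/kong-secrets-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": [
        "arn:aws:secretsmanager:us-west-1:<your_aws_account>:secret:kongcp1-crt-*",
        "arn:aws:secretsmanager:us-west-1:<your_aws_account>:secret:kongcp1-key-*"
      ]
    }
  ]
}
EOF

# Sanity-check the JSON before running:
#   aws iam create-policy --policy-name kong-secrets-policy \
#     --policy-document file:///tmp/kong-secrets-policy.json
jq -e '.Statement[0].Action | index("secretsmanager:GetSecretValue")' \
  /tmp/kong-secrets-policy.json
```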

Associate the IAM OIDC Identity Provider

We can do that using a specific eksctl command:

eksctl utils associate-iam-oidc-provider --cluster kong35-eks128 --region=us-west-1 --approve

Create the IAM Service Account

Now, let’s create the Service Account we are going to use for our Pod deployment. eksctl provides a command to create IAM Service Accounts. In fact, the following command creates:

  • The EKS Service Account in your namespace
  • The IAM Role with our Policy attached.

Create the namespace first:

kubectl create namespace kong

Now run the eksctl command:

eksctl create iamserviceaccount \
  --name kong35-eks128-sa \
  --namespace kong \
  --cluster kong35-eks128 \
  --region us-west-1 \
  --approve \
  --role-name kong35-eks128-role \
  --override-existing-serviceaccounts \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`list-secrets-policy`].Arn' --output text)

Check the Service Account

If you check the Service Account, you will see it has the required annotation referring to the Role created by the iamserviceaccount command.

% kubectl describe sa kong35-eks128-sa -n kong
Name:                kong35-eks128-sa
Namespace:           kong
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<your_aws_account>:role/kong35-eks128-role
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Check the Role and Policy

You can check the Role and Policy also. iamserviceaccount adds two conditions for the “sub” and “aud” claims:

% aws iam get-role --role-name kong35-eks128-role
{
    "Role": {
        "Path": "/",
        "RoleName": "kong35-eks128-role",
        "RoleId": "AROAXAKB57VPX7BMUHHKE",
        "Arn": "arn:aws:iam::<your_aws_account>:role/kong35-eks128-role",
        "CreateDate": "2023-10-25T17:58:28+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity",
                    "Condition": {
                        "StringEquals": {
                            "oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B:sub": "system:serviceaccount:kong:kong35-eks128-sa",
                            "oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B:aud": "sts.amazonaws.com"
                        }
                    }
                }
            ]
        },
        "Description": "",
        "MaxSessionDuration": 3600,
        "Tags": [
            {
                "Key": "alpha.eksctl.io/cluster-name",
                "Value": "kong35-eks128"
            },
            {
                "Key": "alpha.eksctl.io/iamserviceaccount-name",
                "Value": "kong/kong35-eks128-sa"
            },
            {
                "Key": "alpha.eksctl.io/eksctl-version",
                "Value": "0.163.0-dev+e4222edab.2023-10-24T06:23:06Z"
            },
            {
                "Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
                "Value": "kong35-eks128"
            }
        ],
        "RoleLastUsed": {}
    }
}

As expected the Role refers to our Policy:

% aws iam list-attached-role-policies --role-name kong35-eks128-role
{
    "AttachedPolicies": [
        {
            "PolicyName": "list-secrets-policy",
            "PolicyArn": "arn:aws:iam::<your_aws_account>:policy/list-secrets-policy"
        }
    ]
}

Kong Konnect Data Plane deployment

Finally, we are ready to deploy the Konnect Data Plane (DP) in the EKS Cluster. Since we have issued the Private Key and Digital Certificate pair and have stored them in AWS Secrets Manager, the first thing to do is create the new Konnect Control Plane (CP). You need to have a Konnect PAT (Personal Access Token) in order to send requests to Konnect. Read the Konnect PAT documentation page to learn how to generate one.

Create a Konnect Control Plane with the following command. It configures pinned mode for the CP/DP communication, meaning the CP will only accept connections from Data Planes presenting the exact client certificate we pin.

curl -X POST \
  https://us.api.konghq.com/v2/control-planes \
  --header 'Authorization: Bearer <your_pat>' \
  --header 'Content-Type: application/json' \
  --header 'accept: application/json' \
  --data '{
    "name": "cp1",
    "description": "Control Plane 1",
    "cluster_type": "CLUSTER_TYPE_HYBRID",
    "labels": {},
    "auth_type": "pinned_client_certs"
  }'

Get the CP Id with:

curl -s https://us.api.konghq.com/v2/control-planes \
--header 'Authorization: Bearer <your_pat>' | jq -r '.data[] | select(.name=="cp1") | .id'
<your_cp_id>

Get the CP’s Endpoints with:

% curl -s https://us.api.konghq.com/v2/control-planes/<your_cp_id> \
--header 'Authorization: Bearer <your_pat>' | jq -r ".config"
{
"control_plane_endpoint": "https://1234567890.us.cp0.konghq.com",
"telemetry_endpoint": "https://1234567890.us.tp0.konghq.com",
"cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
"auth_type": "pinned_client_certs"
}

Now we need to pin the Digital Certificate. Use the CP Id in your request:

cert="{\"cert\": $(jq -sR . ./kongcp1.crt)}"

curl -X POST https://us.api.konghq.com/v2/control-planes/<your_cp_id>/dp-client-certificates \
  --header 'Authorization: Bearer <your_pat>' \
  --header 'Content-Type: application/json' \
  --header 'accept: application/json' \
  --data "$cert"

Konnect Vault

Kong Konnect provides Secrets Management capabilities supporting environment-variable-based secrets and cloud secrets managers, including AWS Secrets Manager.

We have to create a Konnect Vault to tell our Konnect Control Plane and its Data Plane we are storing our secrets in AWS Secrets Manager in a specific AWS region:

curl -X POST \
  https://us.api.konghq.com/v2/control-planes/<your_cp_id>/core-entities/vaults \
  --header 'Authorization: Bearer <your_pat>' \
  --header 'Content-Type: application/json' \
  --header 'accept: application/json' \
  --data '{
    "prefix": "aws-secrets",
    "name": "aws",
    "config": { "region": "us-west-1" }
  }'

Konnect Data Plane

With the CP and Vault created, let’s deploy the DP. As discussed at the beginning of this post, we take the typical “values.yaml” declaration and change it to use the Digital Certificate and Private Key pair fetched through IRSA.

The two updates to the “values.yaml” are:

  • To use IRSA, we reference the Kubernetes Service Account created earlier with the “eksctl create iamserviceaccount” command.
  • Inside the “env” section, we replace the “cluster_cert” and “cluster_cert_key” settings with references to the secrets stored in AWS Secrets Manager.

Also, use your CP endpoints for the cluster settings.

image:
  repository: kong/kong-gateway
  tag: "3.5"

deployment:
  serviceAccount:
    create: false
    name: kong35-eks128-sa

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 1234567890.us.cp0.konghq.com:443
  cluster_server_name: 1234567890.us.cp0.konghq.com
  cluster_telemetry_endpoint: 1234567890.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 1234567890.us.tp0.konghq.com
  cluster_cert: "{vault://aws/kongcp1-crt/cert}"
  cluster_cert_key: "{vault://aws/kongcp1-key/key}"
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false
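The vault references follow the pattern `{vault://<backend>/<secret-name>/<key>}`: the backend (`aws` here), the AWS Secrets Manager secret name, and the JSON key inside the secret value. A quick sketch pulling the pieces apart with plain POSIX parameter expansion:

```shell
ref='{vault://aws/kongcp1-crt/cert}'

# Strip the wrapper, then split on "/": backend, secret name, JSON key.
body=${ref#"{vault://"}
body=${body%"}"}
backend=${body%%/*}
rest=${body#*/}
secret=${rest%%/*}
key=${rest#*/}

echo "backend=$backend secret=$secret key=$key"
# prints: backend=aws secret=kongcp1-crt key=cert
```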

Deploy the Konnect Data Plane with a Helm command:

helm install kong kong/kong -n kong --values ./values.yaml

You should see the Data Plane running:

% kubectl get pod -n kong
NAME                         READY   STATUS    RESTARTS   AGE
kong-kong-6755865687-t8hfk   1/1     Running   0          63s

You can also check the new Data Plane node in the Konnect GUI.

Plugin Configuration

We can also use Kong Secrets Management along with AWS Secrets Manager to configure Kong Plugins. As an example, let’s enable the Request Transformer Advanced Plugin with another secret stored in AWS Secrets Manager.

Kong Gateway Service and Route

First, let’s create a new Kong Service and Route. You can use the Konnect GUI if you like or, again, the Konnect RESTful API:

Kong Gateway Service

http https://us.api.konghq.com/v2/control-planes/<your_cp_id>/core-entities/services name=service1 \
url='http://httpbin.org' \
Authorization:"Bearer <your_pat>"

Get your new Gateway Service Id with:

http https://us.api.konghq.com/v2/control-planes/<your_cp_id>/core-entities/services \
Authorization:"Bearer <your_pat>" | jq -r '.data[] | select (.name == "service1") | .id'
<your_service_id>

Kong Route

Use the Service Id to define the Kong Route:

http https://us.api.konghq.com/v2/control-planes/<your_cp_id>/core-entities/services/<your_service_id>/routes name='route1' paths:='["/route1"]' Authorization:"Bearer <your_pat>"

Consume the Route

Get the Load Balancer DNS name

% kubectl get service kong-kong-proxy -n kong -o json | jq -r ".status.loadBalancer.ingress[].hostname"
a19339c88829d4bdf91fcf9382a93cba-433883132.us-west-1.elb.amazonaws.com

Consume the Kong Route

http a19339c88829d4bdf91fcf9382a93cba-433883132.us-west-1.elb.amazonaws.com/route1/get

Request Transformer Advanced Plugin

First, create a specific secret to be used by the plugin:

aws secretsmanager create-secret --region=us-west-1 --name kong-secret --secret-string "kong_key:kong_secret"

Configure the Plugin with the following command:

curl -X POST https://us.api.konghq.com/v2/control-planes/<your_cp_id>/core-entities/services/<your_service_id>/plugins \
  --header 'Authorization: Bearer <your_pat>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "request-transformer-advanced",
    "instance_name": "rta",
    "config": {
      "add": {
        "headers": ["x-header-1:123", "{vault://aws/kong-secret}"]
      }
    }
  }'
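Since the vault reference travels as an ordinary header string and is only resolved by the Data Plane at request time, it can help to lint the JSON body locally before POSTing it. A sketch with jq (the file path is just a scratch location):

```shell
# Write the plugin payload to a file and validate its shape before sending.
cat > /tmp/rta-plugin.json <<'EOF'
{
  "name": "request-transformer-advanced",
  "instance_name": "rta",
  "config": {
    "add": {
      "headers": ["x-header-1:123", "{vault://aws/kong-secret}"]
    }
  }
}
EOF

# jq -e fails (non-zero exit) if the JSON is malformed or the check is false.
jq -e '.config.add.headers | length == 2' /tmp/rta-plugin.json
```

You can then pass it to curl with `--data @/tmp/rta-plugin.json`.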

If you consume the Route again, you should see two new headers as configured with the plugin

% http a19339c88829d4bdf91fcf9382a93cba-433883132.us-west-1.elb.amazonaws.com/route1/get
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 666
Content-Type: application/json
Date: Sun, 10 Dec 2023 21:31:53 GMT
Server: gunicorn/19.9.0
Via: kong/3.5.0.1-enterprise-edition
X-Kong-Proxy-Latency: 1
X-Kong-Request-Id: 5ed0c64fb9492f67b91bf04dbf33ae28
X-Kong-Upstream-Latency: 124
{
    "args": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "Kong-Key": "kong_secret",
        "User-Agent": "HTTPie/3.2.2",
        "X-Amzn-Trace-Id": "Root=1-65762e49-531a499c619785a4100f0315",
        "X-Forwarded-Host": "a19339c88829d4bdf91fcf9382a93cba-433883132.us-west-1.elb.amazonaws.com",
        "X-Forwarded-Path": "/route1/get",
        "X-Forwarded-Prefix": "/route1",
        "X-Header-1": "123",
        "X-Kong-Request-Id": "5ed0c64fb9492f67b91bf04dbf33ae28"
    },
    "origin": "192.168.8.200, 13.57.30.171",
    "url": "http://a19339c88829d4bdf91fcf9382a93cba-433883132.us-west-1.elb.amazonaws.com/get"
}

Conclusion

Kong Konnect simplifies API management and improves security for your entire service infrastructure. Try it for free today!

This blog post described how a Kong Konnect Data Plane deployment can:

  1. Take advantage of the flexible Konnect Data Plane deployment model to integrate with AWS IRSA, restricting and controlling access to AWS services.
  2. Externalize Konnect Data Plane secrets to AWS Secrets Manager for a safer deployment, leveraging AWS Identity and Access Management (IAM) roles.
  3. Configure Kong Plugins with secrets stored in AWS Secrets Manager.
