Kong Konnect Data Plane 3.7 Deployment and Plugin Configuration with EKS 1.30 Pod Identity and AWS Secrets Manager

Claudio Acquaviva
25 min read · May 28, 2024


Introduction

In Kubernetes, a Secret is an object that stores sensitive information, such as passwords, tokens, digital certificates, and encryption keys.
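
For example, a generic Secret can be created and read back directly with kubectl (the names here are illustrative):

```shell
# Store a username/password pair as a Kubernetes Secret
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='s3cr3t'

# Read the (base64-encoded) value back
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d
```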

At the same time, Secrets lifecycle processes should be implemented to address several challenges, including:

  • Sharing secrets with users and applications
  • Providing strong secret storage, instead of plain-text or unencrypted data
  • Enforcing a secret policy with rules for reuse, duration, special characters, etc.
  • Managing fine-grained policies to access the secrets
  • Auditing and monitoring secret usage
  • Implementing and automating secret rotation

In summary, the Secrets lifecycle should be externalized from the Kubernetes cluster, while still allowing secrets to be consumed by new deployments as if they were locally created.

With Kong Gateway 3.7, AWS Secrets Manager can be configured in multiple ways. To access secrets stored in the AWS Secrets Manager, Kong Gateway needs to be configured with an IAM Role that has sufficient permissions to read the required secret values. Check the documentation to learn more about Kong Gateway Secret Management with AWS.

In November 2023, Amazon introduced EKS Pod Identity, a new feature that simplifies how an EKS deployment can obtain temporary AWS credentials to consume AWS services, based on IAM roles. EKS Pod Identity is an alternative to the existing IRSA (IAM Roles for Service Accounts) mechanism, also used to securely consume AWS services. The Amazon EKS documentation has a good comparison of the two mechanisms, EKS Pod Identity and IRSA.

This blog post describes how to leverage AWS EKS Pod Identity capability to:

  • Deploy a Konnect Data Plane in EKS using Digital Certificates and Private Keys stored in AWS Secrets Manager.
  • Configure a Kong Plugin with secrets also stored in AWS Secrets Manager. As an example, we’re going to explore the Request Transformer Advanced Plugin configuration.
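
As a preview, Kong resolves vault references of the form {vault://aws/<secret-name>/<key>} at runtime. A hypothetical Request Transformer Advanced configuration in declarative-config style (the secret name, field, and header below are illustrative, not from this post's setup) might look like:

```yaml
plugins:
- name: request-transformer-advanced
  config:
    add:
      headers:
      - "x-api-key:{vault://aws/my-api-key/value}"
```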

Kong Konnect Control Plane / Data Plane connection

A Kong Konnect Data Plane (DP) deployment establishes an mTLS connection with the Konnect Control Plane (CP). The CP and DP follow the Hybrid Mode deployment and implement a secure connection with a Digital Certificate and Private Key pair. The encrypted tunnel is used to publish any API definition and policies created on the CP to all connected DPs.

AWS Secrets Manager and Amazon EKS Pod Identity

AWS Secrets Manager helps you create and maintain secrets across their entire lifecycle. Many AWS services can store and use secrets in Secrets Manager, including Amazon EKS clusters.

EKS clusters can consume secrets stored in AWS Secrets Manager through Pod Identity. Pod Identity is a general AWS framework that allows applications running in EKS to access AWS services (including AWS Secrets Manager) in a controlled manner, based on permissions defined in AWS IAM (Identity and Access Management) roles and temporary AWS credentials issued by AWS STS (Security Token Service). The AWS EKS Pod Identity Agent abstracts, from the Pod's perspective, all the moving parts responsible for issuing the credentials.

The following diagram shows a high-level overview of the architecture:

Kong Konnect Data Plane Deployment Plan

Let’s get started with the typical “values.yaml” file we use for a Kong Konnect Data Plane. Please check the Konnect documentation to learn more about the Data Plane Deployment:

image:
  repository: kong/kong-gateway
  tag: "3.7"

secretVolumes:
  - kong-cluster-cert

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 1234567890.us.cp0.konghq.com:443
  cluster_server_name: 1234567890.us.cp0.konghq.com
  cluster_telemetry_endpoint: 1234567890.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 1234567890.us.tp0.konghq.com
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false

The “cluster_cert” and “cluster_cert_key” variables define the Digital Certificate and Private Key pair the Data Plane (DP) should use to connect to its Konnect Control Plane. In fact, the standard Konnect Data Plane deployment process, available in the “Self-Managed Hybrid Data Plane Node” page, shown below, provides a button to generate the pair, which is supposed to be injected into the Kubernetes cluster as a Secret.

Please keep in mind this is the configuration file generated by the Konnect Data Plane deployment process. For a production-ready environment you might want to consider other variables to get your Data Plane running. Check the Configuration for Kong Gateway page to learn more about them.

Before running the Helm command, we should create a Kubernetes Secret holding the Digital Certificate and Private Key, and reference them through “cluster_cert” and “cluster_cert_key” in our “values.yaml” file.
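
For reference, the conventional flow looks something like the following sketch, assuming the generated pair was saved locally as tls.crt and tls.key (the release name is illustrative; kong/kong is the official Kong Helm chart):

```shell
# Create the Secret referenced by "secretVolumes" in values.yaml
kubectl create secret tls kong-cluster-cert -n kong \
  --cert=./tls.crt --key=./tls.key

# Install the Data Plane with the official Kong Helm chart
helm repo add kong https://charts.konghq.com
helm repo update
helm install my-kong kong/kong -n kong --values ./values.yaml
```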

The idea is to implement a new process where:

  • The pair would be stored in AWS Secrets Manager.
  • The “values.yaml” would refer to new secrets.
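
For illustration, if the pair is stored in AWS Secrets Manager as the kongcp1-crt and kongcp1-key secrets created later in this post, the relevant “values.yaml” entries could use Kong vault references instead of a mounted Kubernetes Secret (a sketch; the exact configuration depends on how the AWS Vault backend is set up):

```yaml
env:
  cluster_cert: "{vault://aws/kongcp1-crt/cert}"
  cluster_cert_key: "{vault://aws/kongcp1-key/key}"
```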

AWS Secrets Manager

Digital Certificate and Private Key pair issuing

First of all, we need to create the Private Key and Digital Certificate that both the Konnect Control Plane and the Data Plane use to build the mTLS connection.

For the purpose of this blog post, the secure communication will be based on the PKI Mode. Please check the documentation to learn more about the secure communication modes. You can use several tools to issue the pair, including simple OpenSSL commands like this:

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout ./kongcp1.key \
-out ./kongcp1.crt \
-days 1095 \
-subj "/CN=konnect_cp1" \
-addext "extendedKeyUsage=serverAuth,clientAuth"

The -addext “extendedKeyUsage=serverAuth,clientAuth” option indicates that the certificate can be used for both server and client authentication, meaning that, from the Konnect standpoint, it can be used by both the Control Plane and the Data Plane.
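
To confirm the extension made it into the certificate, you can inspect it with OpenSSL 1.1.1 or later (shown here together with an equivalent issuing command that avoids bash process substitution, in case your shell lacks it):

```shell
# Issue the pair (same parameters as above, without process substitution)
openssl req -new -x509 -nodes -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 \
  -keyout ./kongcp1.key -out ./kongcp1.crt -days 1095 -subj "/CN=konnect_cp1" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Print only the Extended Key Usage extension of the certificate
openssl x509 -in ./kongcp1.crt -noout -ext extendedKeyUsage
```

The output should list both “TLS Web Server Authentication” and “TLS Web Client Authentication”.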

Secrets creation

You can create your secrets in AWS Secrets Manager by running the following commands:

aws secretsmanager create-secret --name kongcp1-crt --region us-west-1 --secret-string "{\"cert\": \"$(cat ./kongcp1.crt)\"}"
aws secretsmanager create-secret --name kongcp1-key --region us-west-1 --secret-string "{\"key\": \"$(cat ./kongcp1.key)\"}"
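
The stored SecretString is itself a small JSON document, so jq can unwrap the embedded field when you read it back (the sample response below mimics the shape of a GetSecretValue result):

```shell
# Sample GetSecretValue response shape: the SecretString holds embedded JSON
response='{"SecretString":"{\"cert\":\"-----BEGIN CERTIFICATE-----...\"}"}'

# Unwrap the embedded "cert" field
printf '%s' "$response" | jq -r '.SecretString | fromjson | .cert'
```

Against the real service, the equivalent would be: aws secretsmanager get-secret-value --secret-id kongcp1-crt --region us-west-1 | jq -r '.SecretString | fromjson | .cert'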

Try to consume the AWS Secrets Manager from an EKS Pod

Create the EKS Cluster

First of all, create an EKS cluster with eksctl with a command like this:

eksctl create cluster --name kong37-eks130 --version 1.30 --region us-west-1 --nodegroup-name kong-node --node-type t3.large --nodes 1

Deploy a Pod

Now, deploy a Pod running Ubuntu with the AWS CLI already installed, in a “kong” namespace. The Pod plays the same role as any Kubernetes deployment interested in consuming an external AWS Service:

kubectl create namespace kong
kubectl run -n kong --rm=true -i --tty ubuntu --image=claudioacquaviva/ubuntu-awscli:0.4 -- /bin/bash

# aws --version
aws-cli/2.15.46 Python/3.11.8 Linux/5.10.215-203.850.amzn2.x86_64 exe/x86_64.ubuntu.24 prompt/off

Try to consume the AWS Secrets Manager Service

Inside the Pod, if you try to consume the AWS Secrets Manager secrets you get an error:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
An error occurred (AccessDeniedException) when calling the ListSecrets operation: User: arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-mvzLB2DaFncO/<eks_node_ec2_instance_id> is not authorized to perform: secretsmanager:ListSecrets because no identity-based policy allows the secretsmanager:ListSecrets action

This is because, from the AWS STS perspective, the caller, in our case the Pod, is assuming a Role (eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-mvzLB2DaFncO, the standard EKS Node Instance Role) which does not have any permission to consume the AWS Service. You can check that by running:

# aws sts get-caller-identity
{
"UserId": "<aws_roleid>:<eks_node_ec2_instance_id>",
"Account": "<your_aws_account>",
"Arn": "arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-mvzLB2DaFncO/<eks_node_ec2_instance_id>"
}

To solve that, again, we need Pod Identity to request temporary AWS credentials based on IAM Role and Policies, so the Pod can consume AWS Secrets Manager.

AWS STS is King

You may have noticed we mentioned AWS STS (Security Token Service) in the previous section. So, what is it?

It’s important to note that, in order to consume any AWS Service (including AWS Secrets Manager), we need AWS credentials such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION and AWS_SESSION_TOKEN in place. Of course, you can use your existing ones. However, embedding long-term AWS credentials in applications is not recommended. As a best practice, we should request temporary AWS credentials dynamically from AWS STS. The supplied temporary credentials should map to an AWS Role that has only the permissions needed to perform the required tasks.

The fundamental AWS STS request is an “AssumeRole” request like this:

# aws sts assume-role --role-arn arn:aws:iam::<your_aws_account>:role/kong-assumerole --role-session-name app1

And the response would look like this:

{
  "Credentials": {
    "AccessKeyId": "ASIAXAKB57VP43NLLVP6",
    "SecretAccessKey": "ZovakNbXLX+wpV50flUMapFfZLfq0IYoORPUH1aj",
    "SessionToken": "IQoJb3JpZ2lu….",
    "Expiration": "2024-05-26T16:53:56+00:00"
  },
  "AssumedRoleUser": {
    "AssumedRoleId": "AROAXAKB57VPSCQ5VNOIR:app1",
    "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong-assumerole/app1"
  }
}

The request refers to critical elements:

  • We are asking AWS STS to let the caller assume an IAM Role.
  • The IAM Role is “kong-assumerole”.

Of course, for our use case, the IAM Role is expected to grant access to AWS Secrets Manager. The corresponding response presents the temporary credentials we should use. Setting the environment variables with the temporary credentials allows us to consume AWS Secrets Manager, since the required permissions are attached as policies to the IAM Role the caller assumed.

For example:

export AWS_ACCESS_KEY_ID=ASIAXAKB57VP43NLLVP6
export AWS_SECRET_ACCESS_KEY=ZovakNbXLX+wpV50flUMapFfZLfq0IYoORPUH1aj
export AWS_DEFAULT_REGION=us-west-1
export AWS_SESSION_TOKEN=IQoJb3JpZ2lu….

Ultimately, that’s what Pod Identity does behind the scenes. Of course, there are several other components automating the process to make it transparent from the Kubernetes perspective, but that’s what we fundamentally need.

Amazon EKS Pod Identity

EKS Pod Identity Agent

As a best practice, we should request temporary AWS credentials dynamically from the EKS Pod Identity Agent. The supplied temporary credentials then map to an AWS Role that has only the permissions needed to perform the required tasks.

The EKS Pod Identity Agent is implemented as a regular EKS add-on and runs as a DaemonSet on every worker node. As such, you can install it with:

eksctl create addon --cluster kong37-eks130 --region us-west-1 --name eks-pod-identity-agent

You can check it later with:

$ eksctl get addon --cluster kong37-eks130 --region us-west-1
2024-05-20 11:51:58 [ℹ] Kubernetes version "1.30" in use by cluster "kong37-eks130"
2024-05-20 11:51:58 [ℹ] getting all addons
2024-05-20 11:51:59 [ℹ] to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME                    VERSION            STATUS  ISSUES  IAMROLE  UPDATE AVAILABLE  CONFIGURATION VALUES
eks-pod-identity-agent  v1.2.0-eksbuild.1  ACTIVE  0

To get temporary credentials, the Pod should send a request to a specific endpoint exposed by the Pod Identity Agent. The endpoint is available at http://169.254.170.23/v1/credentials and it requires a Kubernetes Service Account Token as an “Authorization” header.

You can check the Kubernetes deployment with:

% kubectl get pod -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
aws-node-v9c4b                 2/2     Running   0          6h51m
coredns-7fc597d7cd-4hq5l       1/1     Running   0          6h57m
coredns-7fc597d7cd-cmmtv       1/1     Running   0          6h57m
eks-pod-identity-agent-lkltl   1/1     Running   0          4h52m
kube-proxy-2vfvw               1/1     Running   0          6h51m

% kubectl get daemonset -n kube-system
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aws-node                 1         1         1       1            1           <none>          6h58m
eks-pod-identity-agent   1         1         1       1            1           <none>          4h54m
kube-proxy               1         1         1       1            1           <none>          6h58m

EKS Auth API

The Agent, in turn, sends a request to the new EKS Auth API which provides the AssumeRoleForPodIdentity action. The actual EKS Auth API response defines the IAM Role the caller (in our case, the Pod) assumes as well as the temporary credentials it should use to consume the AWS Services described in the IAM Role.

We have to send a signed HTTP request to consume the AssumeRoleForPodIdentity action provided by the EKS Auth API. The request should be signed with a regular AWS SigV4 signature. However, just as the AWS SDKs and CLI abstract the signature process for regular AWS requests, the new aws eks-auth assume-role-for-pod-identity CLI command and the eksauth SDK support the EKS Auth API.

We have to instruct the Agent how to behave when it receives a request like this though. We do it by creating a Pod Identity Association where we relate an IAM Role with the Kubernetes Service Account.

The final Agent result should be the temporary AWS Credentials your Pod should use to consume the AWS Service:

{
  "AccessKeyId": "ASIAXAKB57V….",
  "SecretAccessKey": "HWGtRMXswA….",
  "Token": "IQoJb3JpZ2….",
  "AccountId": "<your_aws_account_id>",
  "Expiration": "2024-05-25T01:06:19Z"
}

In summary, the process goes like this:

  1. You specify an IAM policy defining the AWS Services the Pod is allowed to consume.
  2. You create a Pod Identity Association relating the Kubernetes Service Account to the IAM Role which refers to the IAM policy.
  3. The Pod sends a request to EKS Pod Identity Agent requesting temporary AWS credentials passing the Service Account Token related to the Kubernetes Service Account as a parameter.
  4. The Agent sends an AssumeRoleForPodIdentity request to the EKS Auth API endpoint to exchange the token for temporary AWS credentials issued by AWS STS.
  5. The Pod sets environment variables with the new and temporary AWS credentials.
  6. The Pod is able to consume the AWS Service.

Kubernetes Service Account and Service Account Token

First of all, if we try to send a request, from inside the Pod we created, to the EKS Agent like this, we’ll get the following message:

# curl http://169.254.170.23/v1/credentials
Service account token cannot be empty

What is this token? In Kubernetes, identities are managed by Service Accounts. In fact, any K8s Namespace has a “default” Service Account and all deployments have access to a token related to that “default” Service Account. The token, called the Service Account Token, is a JWT available to each Pod.

Let’s check both the Service Account and its Token. Outside the Pod run:

% kubectl get sa default -n kong -o json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "creationTimestamp": "2024-05-20T14:03:10Z",
    "name": "default",
    "namespace": "kong",
    "resourceVersion": "12490",
    "uid": "876f6dae-7eea-414d-9f88-9099f3738abe"
  }
}

Assuming you still have the Pod running, you can see it is using the “default” Service Account and has a volume mounted at a specific path:

% kubectl get pod ubuntu -n kong -o json | jq -r ".spec.serviceAccount"
default

% kubectl get pod ubuntu -n kong -o json | jq ".spec.containers[].volumeMounts"
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kube-api-access-sk89h",
    "readOnly": true
  }
]

This is a Projected Volume added by the Kubernetes ServiceAccount Admission Controller when we deployed the Pod. Read the Projected Volume documentation to learn more.

You can check what’s inside the directory with:

% kubectl exec ubuntu -n kong -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

The volume is created for Kubernetes API access and it has three sources:

  • ConfigMap with the Kubernetes Cluster Certificate Authority data.
  • Namespace with the namespace where the Pod has been deployed.
  • Service Account Token with the actual token related to the Service Account.

You can decode the token with:

% kubectl exec ubuntu -n kong -- cat /var/run/secrets/kubernetes.io/serviceaccount/token | jwt decode -
Token header
------------
{
  "alg": "RS256",
  "kid": "22fbc2943ab5017bd8fb425c54c9e6a384c26fc8"
}

Token claims
------------
{
  "aud": [
    "https://kubernetes.default.svc"
  ],
  "exp": 1747766659,
  "iat": 1716230659,
  "iss": "https://oidc.eks.us-west-1.amazonaws.com/id/51BD9333C83333F6049A8D6F851A6F11",
  "kubernetes.io": {
    "namespace": "kong",
    "pod": {
      "name": "ubuntu",
      "uid": "e94a0380-8d5c-40b0-924a-cd54fefb4db4"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "876f6dae-7eea-414d-9f88-9099f3738abe"
    },
    "warnafter": 1716234266
  },
  "nbf": 1716230659,
  "sub": "system:serviceaccount:kong:default"
}

Accessing Kubernetes API with the Service Account Token

The Service Account Token has the kube-apiserver address (“https://kubernetes.default.svc”) as the audience (“aud”), meaning the token is supposed to be used inside the cluster to reach the Kubernetes API Server.

You can try it yourself. From a local terminal, define a role with rules to access the pods inside the “kong” namespace:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: kong
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["serviceaccounts/token"]
  verbs: ["create"]
EOF

Grant the Role to the Service Account:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: kong
subjects:
- kind: ServiceAccount
  name: default
  namespace: kong
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
EOF

Now, inside the Pod, get the token and use it in a Kubernetes API request:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# curl -sk https://kubernetes.default.svc/api/v1/namespaces/kong/pods --header "Authorization: Bearer $TOKEN" | jq -r ".items[].metadata.name"
ubuntu

Use STS to consume the AWS Service

As an exercise, let’s request temporary credentials to STS and try to consume AWS Secrets Manager.

Create a Policy with permissions to list the existing Secrets

In order for our Pod to consume the secrets stored in AWS Secrets Manager, we need a policy defining the permission to access them. The policy will be attached to a Role which the Pod will assume during the deployment.

aws iam create-policy \
  --policy-name list-secrets-policy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "secretsmanager:ListSecrets",
          "secretsmanager:GetSecretValue"
        ],
        "Resource": "*"
      }
    ]
  }'

Create an IAM Role allowing the Pod to assume a Role

We don’t have any Role defined to be used with the AWS STS “AssumeRole” request yet. As a reminder, the request is responsible for issuing temporary AWS credentials to a caller. The caller assumes an IAM Role which has Policies with the permissions the caller should have.

As a reminder, here’s the current Caller Identity of our Pod. The “Arn” says it assumed the EKS NodeGroup role.

# aws sts get-caller-identity
{
"UserId": "<user_id>:<eks_node_ec2_instance_id>",
"Account": "<your_aws_account>",
"Arn": "arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-OiWWuhZnTMcd/<eks_node_ec2_instance_id>"
}

We use the “Arn” to create the new Role:

aws iam create-role --role-name kong-assumerole --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-OiWWuhZnTMcd/<eks_node_ec2_instance_id>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}'

The new Role says that the caller, in our case the Role assumed by the EKS NodeGroup, is allowed to assume other Roles, along with all permissions attached to them. Now attach the Policy we previously created to the new Role:

aws iam attach-role-policy \
--role-name kong-assumerole \
--policy-arn arn:aws:iam::<your_aws_account>:policy/list-secrets-policy

Assume the New Role and consume AWS Secrets Manager

Let the Pod assume the new Role, which allows it to consume AWS Secrets Manager:

# aws sts assume-role --role-arn arn:aws:iam::<your_aws_account>:role/kong-assumerole --role-session-name app1 > ./temp_credentials.json

Check temporary credentials:

# cat ./temp_credentials.json
{
"Credentials": {
"AccessKeyId": "ASIAXAKB57VP43NLLVP6",
"SecretAccessKey": "ZovakNbXLX+wpV50flUMapFfZLfq0IYoORPUH1aj",
"SessionToken": "IQoJb3JpZ2luX2VjEEgaCXVzLXdlc3QtMSJIMEYCIQClsy7yudHUxcLQR4s7BO9gnCh+3z4EPXtD88wtgFVkMAIhAJV7FgTbJtSScJILf59fYCg0kkOEJIFo8RB5ysrwbW89KpoCCMH//////////wEQARoMNDgxNzExNDg4MzUxIgzSz4HRbMyosR4kNuAq7gFKtshLQkfUfK7jAlz5a3INvptE4HbHi/Xm5uaPx0B4+M2/MA5h4hLPLikDyN+zcSE1usaUGxyAZAfOOnyrC43lFghZeFnkyyA/sUCeZMIxF6GQialaPeViPc9gZSjL7RqH+7nnCHM1nxyj3UNNQns4PzladcCZ7Qhj6e0tWrJgD/wtlsDNfwYSTxkvkTbuvA7lP13+hnATvHDr5XwOV6QArwi7MxiW4rMQdnbsfX3Ch0RNVouWGJa7CnH7qvJ+XZIeLwBEUkChx4ZNk4R8/zkw/ybXpteheWSI5vb34sANconAonEYyRVa9QCoFWUnMJS2zbIGOpwBrt5YjheGcOMAnzlCtTxUUpdfbZKDpPtfg9drk5ViSt2VU6C3A1kZyRCkD3XY6AYkW76yyNqfdYcX9ygYxNuYibD9vVtFUob7X/TR30ofnVz3Cp2HXL114J4K02tCGyaNBF/4wTiH16KddVpDXdY9TBXCqjcYOyI/FEdJcbzy2IKXnBUSYnXa9vRDuphd8wHcm+EY3Jig0nMisrQc",
"Expiration": "2024-05-26T16:53:56+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROAXAKB57VPSCQ5VNOIR:app1",
"Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong-assumerole/app1"
}
}

Set the AWS environment variables

export AWS_ACCESS_KEY_ID=$(cat temp_credentials.json | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(cat temp_credentials.json | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(cat temp_credentials.json | jq -r .Credentials.SessionToken)
export AWS_DEFAULT_REGION=us-west-1

Check your caller again:

# aws sts get-caller-identity
{
"UserId": "<user_id>:app1",
"Account": "<your_aws_account>",
"Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong-assumerole/app1"
}

You should now be able to see the secrets:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
kongcp1-key
kongcp1-crt

Unset the environment variables and delete the IAM Role:

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
unset AWS_DEFAULT_REGION

aws iam detach-role-policy \
--role-name kong-assumerole \
--policy-arn arn:aws:iam::<your_aws_account>:policy/list-secrets-policy

aws iam delete-role --role-name kong-assumerole

Use EKS Auth API to consume the AWS Service

Let’s continue the exercise, consuming AWS Secrets Manager with the default ServiceAccount’s token and the EKS Auth API. The EKS Auth API abstracts the AWS STS relationship from the Pod’s perspective.

Try to assume a Role with the existing Token

First, check the token with the following command, from inside the Pod.

# cat /var/run/secrets/kubernetes.io/serviceaccount/token | jwt -show -
Header:
{
  "alg": "RS256",
  "kid": "22fbc2943ab5017bd8fb425c54c9e6a384c26fc8"
}
Claims:
{
  "aud": [
    "https://kubernetes.default.svc"
  ],
  "exp": 1747763732,
  "iat": 1716227732,
  "iss": "https://oidc.eks.us-west-1.amazonaws.com/id/51BD9333C83333F6049A8D6F851A6F11",
  "kubernetes.io": {
    "namespace": "kong",
    "pod": {
      "name": "ubuntu",
      "uid": "e94a0380-8d5c-40b0-924a-cd54fefb4db4"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "876f6dae-7eea-414d-9f88-9099f3738abe"
    },
    "warnafter": 1716231339
  },
  "nbf": 1716227732,
  "sub": "system:serviceaccount:kong:default"
}

As a reminder, you can also check the STS caller identity responsible for the AWS Service consumption:

# aws sts get-caller-identity
{
"UserId": "<aws_roleid>:<eks_node_ec2_instance_id>",
"Account": "<your_aws_account>",
"Arn": "arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong37-eks130-nodegroup-kon-NodeInstanceRole-mvzLB2DaFncO/<eks_node_ec2_instance_id>"
}

Now, to make it easier, store the Service Account token in an environment variable.

# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

Since the Pod’s image has the AWS CLI installed we can make the eks-auth assume-role-for-pod-identity call using the existing token. The call uses the same STS AssumeRole action we invoked directly before.

Note that the EKS Auth API complains about the “exp” claim. In fact, the API is expecting a different token, carrying Pod Identity-related claims.

# aws eks-auth assume-role-for-pod-identity --cluster-name kong37-eks130 --region us-west-1 --token $TOKEN
An error occurred (InvalidTokenException) when calling the AssumeRoleForPodIdentity operation: The token included in the request has claim 'exp' which is outside the validity range [2024-02-20T17:41:54.745170Z - 2024-08-18T17:41:54.745170Z].

If you want, you can manage the Kubernetes API permissions, as we did before, to issue new tokens supported by the EKS Auth API. However, to get it done faster, let’s use the kubectl command externally to the Pod instead. Note we are using a specific option to define the right audience for the token:

kubectl create token default --bound-object-kind Pod --bound-object-name ubuntu -n kong --audience pods.eks.amazonaws.com
eyJhbGciOiJSUzI1NiIsImtpZCI6IjIyZmJjMjk0M2FiNTAxN2JkOGZiNDI1YzU0YzllNmEzODRjMjZmYzgifQ.eyJhdWQiOlsicG9kcy5la3MuYW1hem9uYXdzLmNvbSJdLCJleHAiOjE3MTYyMzE0MTMsImlhdCI6MTcxNjIyNzgxMywiaXNzIjoiaHR0cHM6Ly9vaWRjLmVrcy51cy13ZXN0LTEuYW1hem9uYXdzLmNvbS9pZC81MUJEOTMzM0M4MzMzM0Y2MDQ5QThENkY4NTFBNkYxMSIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoia29uZyIsInBvZCI6eyJuYW1lIjoidWJ1bnR1IiwidWlkIjoiZTk0YTAzODAtOGQ1Yy00MGIwLTkyNGEtY2Q1NGZlZmI0ZGI0In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiODc2ZjZkYWUtN2VlYS00MTRkLTlmODgtOTA5OWYzNzM4YWJlIn19LCJuYmYiOjE3MTYyMjc4MTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprb25nOmRlZmF1bHQifQ.qOVO-ae62qPeiR-ur_IFfThbqLTZNrrZfOkDOA8oG2-YPV-lHzKanQtnYcp0wCj-Vk33OtCxRAzJkN_HcAGWlQx1FsV9fdvJzu1SF-qVzFucsB0dY9b4jT-iKjlaPHYMchWyVA6aaF8tK57t4Klap9C8qBCcj0gIHZoVuP3KLdfItwiIBaj-ZmGiQzoWv6iYlXRSHMUq6Z34J2eYKDvLeubIgtEJ9q69rqJ9cYTsUVQLYPubMRWGOVBwiYGcblp5cEXgBUYOlqPMmSv93NEZTF4FgvfSMN2PfcElKE7TLzFqx_YKyOgAJFYZ2nVAkVDJL1tNFSnAgGCOMV2UoNLoMA

Now, if you copy the new token to the same environment variable, you should get another error message:

aws eks-auth assume-role-for-pod-identity --cluster-name kong37-eks130 --region us-west-1 --token $TOKEN
An error occurred (ResourceNotFoundException) when calling the AssumeRoleForPodIdentity operation: The token included in the request has no service account role association for it.

This time the EKS Auth API is not able to find any IAM Role associated with the token and, therefore, cannot issue the temporary AWS credentials.

Pod Identity Association

To solve the problem we need to get our Kubernetes Service Account (and its token) associated with an IAM Role. We can do it by defining a Pod Identity Association, supported by both AWS CLI and eksctl. Here’s the eksctl command. It creates a new IAM Role named kong37-eks130-role, based on the existing list-secrets-policy, and associates it to the default Service Account.

eksctl create podidentityassociation \
--cluster kong37-eks130 \
--region us-west-1 \
--namespace kong \
--service-account-name default \
--role-name kong37-eks130-role \
--permission-policy-arns="arn:aws:iam::<your_aws_account>:policy/list-secrets-policy"

Check the Pod Identity Association with:

% eksctl get podidentityassociation \
--cluster kong37-eks130 \
--region us-west-1 \
--namespace kong
ASSOCIATION ARN NAMESPACE SERVICE ACCOUNT NAME IAM ROLE ARN
arn:aws:eks:us-west-1:<your_aws_account>:podidentityassociation/kong37-eks130/a-hk9hmjd4kroueqcet kong default arn:aws:iam::<your_aws_account>:role/kong37-eks130-role

Check the IAM Role with the following command. Note the Role is similar to what we manually created before. The main difference is the newly-introduced Service Principal pods.eks.amazonaws.com.

% aws iam get-role --role-name kong37-eks130-role
{
  "Role": {
    "Path": "/",
    "RoleName": "kong37-eks130-role",
    "RoleId": "<role_id>",
    "Arn": "arn:aws:iam::<your_aws_account>:role/kong37-eks130-role",
    "CreateDate": "2024-05-25T17:57:59+00:00",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "pods.eks.amazonaws.com"
          },
          "Action": [
            "sts:AssumeRole",
            "sts:TagSession"
          ]
        }
      ]
    },
    "Description": "",
    "MaxSessionDuration": 3600,
    "Tags": [
      {
        "Key": "alpha.eksctl.io/cluster-name",
        "Value": "kong37-eks130"
      },
      {
        "Key": "alpha.eksctl.io/podidentityassociation-name",
        "Value": "kong/default"
      },
      {
        "Key": "alpha.eksctl.io/eksctl-version",
        "Value": "0.176.0-dev+5b33f073a.2024-04-25T09:34:19Z"
      },
      {
        "Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
        "Value": "kong37-eks130"
      }
    ],
    "RoleLastUsed": {}
  }
}

The Role says that the caller, in our case the Pod, is allowed to assume the Role, along with all permissions attached to it, when presenting a token issued for the pods.eks.amazonaws.com audience. Since we used the --audience pods.eks.amazonaws.com option in our kubectl create token command, our new token should do the job.

The caller assumes an IAM Role which has Policies with the permissions the caller should have. You can check the policy attached:

% aws iam list-attached-role-policies --role-name kong37-eks130-role
{
  "AttachedPolicies": [
    {
      "PolicyName": "list-secrets-policy",
      "PolicyArn": "arn:aws:iam::<your_aws_account>:policy/list-secrets-policy"
    }
  ]
}

Get the Temporary AWS Credentials

Finally, we’re ready to get our credentials. Run the same command again, this time saving the output to a file.

aws eks-auth assume-role-for-pod-identity --cluster-name kong37-eks130 --region us-west-1 --token $TOKEN > ./temp_credentials.json

If you check the file, you’ll see relevant information such as “audience”, “podIdentityAssociation”, “assumedRoleUser”, and, of course, the actual temporary credentials:

cat temp_credentials.json
{
"subject": {
"namespace": "kong",
"serviceAccount": "default"
},
"audience": "pods.eks.amazonaws.com",
"podIdentityAssociation": {
"associationArn": "arn:aws:eks:us-west-1:<your_aws_account>:podidentityassociation/kong37-eks130/a-hk9hmjd4kroueqcet",
"associationId": "a-hk9hmjd4kroueqcet"
},
"assumedRoleUser": {
"arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong37-eks130-role/eks-kong37-eks-ubuntu-f0a94ddd-dab7-4cc4-a4de-3d0d0e968ad8",
"assumeRoleId": "AROAXAKB57VPZLO7NLO6K:eks-kong37-eks-ubuntu-f0a94ddd-dab7-4cc4-a4de-3d0d0e968ad8"
},
"credentials": {
"sessionToken": "IQoJb3JpZ2luX2VjELr//////////wEaCXVzLXdlc3QtMSJGMEQCIEd7popf2zSNAuWCLqlFFSB+03zU8Eb0tGZCnEMlRQqjAiB11n3n5oTAjItZxkgL5XsM7FtdhnRVey5w372tqFHe8CqzBAgzEAEaDDQ4MTcxMTQ4ODM1MSIMFdYgzKbo0HsAqiaVKpAEVc7UHUYYpN0/4GpV5Bk1IL2kPe2WO5uDEaN+T711h0pcupQlBY9oVBZbYfSw23G85xIp9xMltc0mnA73NP3R3jAWx0u13+YyOP98dkeVL5BUagiCh53rfqcp+lzI4CHbwC++wO/2E3DyaK00WWS5vRiV0pL+JguSoFJcMJRtWAg4r+g7GH4JqS7Yz99E0wmfCXIlzdP/in/Z5ArMa01kjF6LpBT22N36YogoEZ43yKzR0rMl1I+gVgDmu8k7/+sFQIcVIup6JxTzguh0d4641vxi0vYQsMNdi1PM0VPI40eybHDKwicamTloe5RA1mGhrFTim1Cvgk4Yws7L8JYb0B9uohSYfRGlDLs2zd6K+BSbJyUT0rXd/FD6C6lPc7xPVLj4nmkLz0ULyygsMJmllWRtAQk3RzTnAR2Hfq1WSvhIEYfJKvyY8lmkaDtrqIuJ1Bn5o08dvUtyJcC5s568I4fOzcpNcelEim9GOxnIkhPjVS4K++ePO7zXl8H1SciHMSDUL65RiqCHb4U4HG81RE+0T+C/c6GjgJuX1SRm3bsaUxs/SReP+Io7BG7YRt82t7sxpHd1RgfeLLN1Xt7AtQb3TiYrjeGHeXi05rvaR0JxksuFGaM8gC/OJIfOYOfK6oZbK445CkaQEUa4yX583OPmgTQx9noPzqtC7BtMmC1odl0ZdwuDXvSw/p1p/m3wMMeerrIGOo8Br0gn0K1dR7n4uLJZ5te4Pr81BdSiV4PUfD/hXF7WKPvDq6lLh1o35G9NAxF+hx1xGXp44FMFMkswdECxUjG6a9jyufOWD7QFBWarKr1UmxpD02uaSM9onGZj2ZhI8SWjRmjobJAE5X9gfIdwATkcUh+INrljcgyc5/BQkmClVtD2XpHONCZTmtB3iiPYNGo=",
"secretAccessKey": "0wbo2YXqupxvwa8QjRxnLn1Ts/x2p3hKALnj9N+I",
"accessKeyId": "ASIAXAKB57VPVLA7GS6X",
"expiration": "2024–05–25T20:58:31–03:00"
}
}

Now use the file to set the environment variables related to the AWS credentials.

export AWS_ACCESS_KEY_ID=$(cat temp_credentials.json | jq -r .credentials.accessKeyId)
export AWS_SECRET_ACCESS_KEY=$(cat temp_credentials.json | jq -r .credentials.secretAccessKey)
export AWS_SESSION_TOKEN=$(cat temp_credentials.json | jq -r .credentials.sessionToken)
export AWS_DEFAULT_REGION=us-west-1

Consume AWS Secrets Manager

First, you may want to check the new STS caller identity:

# aws sts get-caller-identity
{
  "UserId": "<userid>:eks-kong37-eks-ubuntu-455b59ea-8d24-431d-8dd5-58456ce208a1",
  "Account": "<your_aws_account>",
  "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong37-eks130-role/eks-kong37-eks-ubuntu-455b59ea-8d24-431d-8dd5-58456ce208a1"
}

With the Role assumed, we are free to consume the AWS service:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
kongcp1-key
kongcp1-crt

Use EKS Pod Identity Agent to consume the AWS Service

As we stated before, the EKS Auth API, with its AssumeRoleForPodIdentity action, is typically consumed by the EKS Pod Identity Agent. In fact, the Agent exposes itself through the http://169.254.170.23/v1/credentials endpoint, so the following command should return temporary AWS credentials as well. Note that we are using the same token we issued previously to consume the EKS Auth API directly.

# curl -s http://169.254.170.23/v1/credentials -H "Authorization: $TOKEN" > temp_credentials.json

# cat temp_credentials.json | jq
{
  "AccessKeyId": "ASIAXAKB57VPURSP4AZ3",
  "SecretAccessKey": "WYaaGeL65qnW2Qxht/GMBGc635yFct+4cSyIhItX",
  "Token": "IQoJb3JpZ2luX2VjEC4aCXVzLXdlc3QtMSJHMEUCIQCCJ7d8UeTsX0ysmWa3PVFa+g2X6jIMvqKVXbs+wXc/SwIgMHYliZvdhloTaLNFisUJS9KM1HoNMkSkZwUlDDQZWPUqvAQIp///////////ARABGgw0ODE3MTE0ODgzNTEiDF5Gj5XZKereVod5aCqQBLyyg0E6mX704o9TXTAQS8hkv/3sUBv0SAiYnXBywjUKPSKN1+EbHiTKbzIjEzqK1mG63bwSGFF7FCa/vgA+R4ZX12O2Ct6T+hz0ZnsMj4DWbNmSviTE2q/DRMAYq88vvW06IgYdjTBCL15+q2Um6RPP7yyGkr3FD3Q+iLAmHjPjCcfa06P1FtVk3buD7JuWNQ/5Mq/OkIo06iyfMjMqXiQx59Ka3RWoiBS+eMIwvAwMoFFtT+ytpLGB2Z2HxEJwDflGOhqa1xVc8IvDiE3kSXYujd8KxA6HgW3NCusDAGyGRA9dXGJ1KnBg9yquKxAQchtntq1aZsGlznzukRATLWEPTFckKCyuZ1634ZFpYrJqn7Gs1hCcW16T6FbMVC07ttbQ/kS7FE0ykCjcBlh2ygi9+2kXkUIR/q717I2z9bbjF7Th+9X9QXXvvnthQFAdIRKvSPRjZrU/CBZfSPUJEV1RdjM05lWdpNalCHCP9/17DE8nux3HSytwar04B2eLbVYvvM+57WsiLUxt4Xjk/dxRy5HREBDpKEU5ZmJ2ek/SpcNY0NiNkA7bswlqI6fXu13yEIUZPhnML0CcCcQ1gDR0hMdyS+/wK4656gysV1gSZzbk+1afmyyhaRGgapbcvYHVYIUgiPh8UH6yPt+gtTU8b5g4c47W5FMKxna85fSzWP8gsQVQnbi5QnAT7oBiHTCX0seyBjqOAZk18Z244dXvL9KuYKZ5aFf/kqn8wvAu+10xOCU5SLu0BKxVgagZmgxG8eXI1gFDTNxzj7lgqGZEkZ+rp/Quzx06touqC1W0+cQrQg4ZJq37tobSe+ou2eL6q6GRZNyEyiQ6gzhSLIA/HAzL4Brzx8dEZWZvdbsrxnK3rsI7Mmb/Y8uyl9xjHYpPtWKfXOc=",
  "AccountId": "<your_aws_account>",
  "Expiration": "2024-05-25T19:35:19Z"
}

Then use the file again to set the same AWS environment variables. You should be able to consume AWS Secrets Manager the same way we did before.

export AWS_ACCESS_KEY_ID=$(cat temp_credentials.json | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(cat temp_credentials.json | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(cat temp_credentials.json | jq -r .Token)
export AWS_DEFAULT_REGION=us-west-1

If you want to get back to the original credentials, unset the variables:

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
unset AWS_DEFAULT_REGION
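With the variables unset, the CLI falls back to the next option in the Credential Provider Chain. You can confirm the temporary role is no longer in use (assuming the environment has another working credential source, such as the node's instance role):

```shell
# After unsetting the temporary credentials, the ARN returned should no
# longer reference the kong37-eks130-role assumed role.
aws sts get-caller-identity | jq -r .Arn
```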

Delete the EKS Pod Identity Association

You can delete the association and IAM Role with:

eksctl delete podidentityassociation \
--cluster kong37-eks130 \
--region us-west-1 \
--namespace kong \
--service-account-name default

The EKS Pod Identity webhook

Pod Identity is an EKS feature that allows you to assign an IAM role to a Kubernetes Service Account. So far, we have consumed AWS Secrets Manager with manual configurations. This is good for understanding what happens behind the scenes but, for real-world scenarios, EKS Pod Identity automates the process with the EKS Pod Identity webhook.

Every time we use an AWS SDK to consume an AWS service, we have to be authenticated. The authentication process sequentially tries a list of options called the “Credential Provider Chain”. For example, someone can be authenticated with AWS access keys or through IAM Identity Center.

EKS Pod Identity uses the “Container Credential Provider” option, which has been created specifically for containerized applications. In fact, this mechanism is highly recommended for EKS deployments. The Container Credential Provider relies on two environment variables: AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE. These variables define exactly the configuration we need to call the EKS Pod Identity Agent.
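As a concrete sketch, setting those two variables by hand is all the SDK or CLI needs to authenticate through the Agent (this assumes you are running inside a pod where the Agent endpoint is reachable and the token file is mounted):

```shell
# The Container Credential Provider needs only these two variables; the
# SDK/CLI then fetches temporary credentials from the Pod Identity Agent.
export AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
export AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token

# No static access keys in the environment: authentication flows
# through the Agent transparently.
aws sts get-caller-identity
```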

That’s what the EKS Pod Identity webhook does automatically for us, leveraging the Kubernetes support for Dynamic Admission Control, more precisely the Mutating Admission Webhook, preinstalled in any EKS cluster.

The webhook gets to work when a new Pod that refers to a Service Account is scheduled for creation. As the name implies, the webhook mutates the Pod, injecting both environment variables:

  • AWS_CONTAINER_CREDENTIALS_FULL_URI is set with the EKS Pod Identity Agent endpoint, http://169.254.170.23/v1/credentials
  • AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE is set with the Service Account token related to the Kubernetes Service Account we use in our Pod deployment.

When a Pod is being created, the webhook calls the Kubernetes API Server to generate a JWT token for the Service Account used in the Pod declaration. A volume is mounted for the new Pod with the Token, similarly to what happens with the default Service Account volume.

After the AWS IAM role is associated with the service account, any newly created pods using that service account will be intercepted by the EKS Pod Identity webhook.
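From a local terminal, you can look for the webhook registration itself; this is a hedged sketch, since the configuration is EKS-managed and its exact name may vary across versions:

```shell
# List the cluster's mutating webhook configurations and look for the
# Pod Identity one (the name shown is version-dependent; assumption).
kubectl get mutatingwebhookconfigurations | grep -i identity
```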

Let’s see it in action now:

Create a new Kubernetes Service Account and the EKS Pod Identity Association

This section creates a new Pod Identity Association with the kong37-eks130-sa Kubernetes Service Account and the kong37-eks130-role IAM Role.

Create the Service Account first:

kubectl create sa kong37-eks130-sa -n kong

You can check it with:

% kubectl describe sa kong37-eks130-sa -n kong
Name:                kong37-eks130-sa
Namespace:           kong
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Now, create the association to link the IAM Role, also created by the command, to the Service Account we are going to use for our Pod deployment. As before, the Role has the same Policy attached.

eksctl create podidentityassociation \
--cluster kong37-eks130 \
--region us-west-1 \
--namespace kong \
--role-name kong37-eks130-role \
--service-account-name kong37-eks130-sa \
--permission-policy-arns="arn:aws:iam::<your_aws_account>:policy/list-secrets-policy"
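If you want to double-check the result, eksctl can list the associations for the cluster (output format may vary across eksctl versions):

```shell
# The new association should show the kong namespace, the
# kong37-eks130-sa Service Account and the kong37-eks130-role IAM Role.
eksctl get podidentityassociation \
  --cluster kong37-eks130 \
  --region us-west-1 \
  --namespace kong
```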

Deploy the Pod with the Service Account

After creating the Service Account, the IAM Role, and the EKS Pod Identity Association, we are ready to deploy our Pod with the Service Account injected. Delete your current Pod and redeploy it, adding the Service Account in the “spec” section:

kubectl delete pod ubuntu -n kong

kubectl run -n kong \
--overrides='{ "spec": { "serviceAccountName": "kong37-eks130-sa" } }' \
--rm=true \
-i --tty ubuntu \
--image=claudioacquaviva/ubuntu-awscli:0.4 -- /bin/bash

This time, you should be able to consume the AWS Secrets Manager, requesting the secrets:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
kongcp1-key
kongcp1-crt

Check the Pod deployment

Service Account

It’s important to check the main updates Pod Identity has made to our deployment. From a local terminal, we can see the Service Account inside the “spec” section:

% kubectl get pod ubuntu -n kong -o json | jq -r ".spec.serviceAccount"
kong37-eks130-sa

Environment Variables

Similarly, you should be able to see the AWS environment variables. Note that Pod Identity sets both AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE variables.

% kubectl exec ubuntu -n kong -- env | grep AWS
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=us-west-1
AWS_REGION=us-west-1
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token

You can also check them with a kubectl command like this:

% kubectl get pod ubuntu -n kong -o json | jq ".spec.containers[].env[]"

STS Caller Identity

Inside the Pod, you can check the STS caller identity:

# aws sts get-caller-identity
{
  "UserId": "<userid>:eks-kong37-eks-ubuntu-3509616a-7326-41dc-ba4e-b9b8ef6e8e04",
  "Account": "<your_aws_account>",
  "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong37-eks130-role/eks-kong37-eks-ubuntu-3509616a-7326-41dc-ba4e-b9b8ef6e8e04"
}

Token

Just like the standard Pod Service Account Token, Pod Identity stores the new token in a volume mounted under the “pods.eks.amazonaws.com” directory:

% kubectl get pod ubuntu -n kong -o json | jq ".spec.containers[].volumeMounts"
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kube-api-access-hqhng",
    "readOnly": true
  },
  {
    "mountPath": "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount",
    "name": "eks-pod-identity-token",
    "readOnly": true
  }
]

You can check what’s inside the directory and decode the token with:

% kubectl exec ubuntu -n kong -- ls /var/run/secrets/pods.eks.amazonaws.com/serviceaccount
eks-pod-identity-token

% kubectl exec ubuntu -n kong -- cat /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token | jwt decode -
Token header
------------
{
  "alg": "RS256",
  "kid": "22fbc2943ab5017bd8fb425c54c9e6a384c26fc8"
}

Token claims
------------
{
  "aud": [
    "pods.eks.amazonaws.com"
  ],
  "exp": 1716323882,
  "iat": 1716237482,
  "iss": "https://oidc.eks.us-west-1.amazonaws.com/id/51BD9333C83333F6049A8D6F851A6F11",
  "kubernetes.io": {
    "namespace": "kong",
    "pod": {
      "name": "ubuntu",
      "uid": "f5b19d60-f752-41e0-bbfb-746b48884808"
    },
    "serviceaccount": {
      "name": "kong37-eks130-sa",
      "uid": "b517e0d4-b27e-45ee-a3e5-d11e964600f1"
    }
  },
  "nbf": 1716237482,
  "sub": "system:serviceaccount:kong:kong37-eks130-sa"
}

Note the token has the same pods.eks.amazonaws.com audience we manually specified with the kubectl create token command previously.

If you like, you can make the same call the AWS SDK does to get the temporary credentials when trying to access an external AWS service. For example, inside the Pod, run:

# curl -s $AWS_CONTAINER_CREDENTIALS_FULL_URI -H "Authorization: $(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)" | jq
{
  "AccessKeyId": "ASIAXAKB57VP5I4BFUFY",
  "SecretAccessKey": "e6KJCooTF4M0zEcgL8xyx7416Q02XGDM9nVO3ae/",
  "Token": "IQoJb3JpZ2luX2VjEOb…..",
  "AccountId": "<your_aws_account>",
  "Expiration": "2024-05-22T20:01:40Z"
}

Kong Konnect Data Plane deployment

Finally, we are ready to deploy the Konnect Data Plane (DP) in the EKS Cluster. Since we have issued the Private Key and Digital Certificate pair and have stored them in AWS Secrets Manager, the first thing to do is create the new Konnect Control Plane (CP). You need to have a Konnect PAT (Personal Access Token) in order to send requests to Konnect. Read the Konnect PAT documentation page to learn how to generate one.

Create a Konnect Control Plane with the following command. It configures the PKI Mode for the CP and DP communication, meaning we are going to use the same Public Key for both CP and DP.

Create an environment variable with your PAT:

PAT=kpat_f7wifwatp…

Create the Control Plane with:

curl -X POST \
  https://us.api.konghq.com/v2/control-planes \
  --header "Authorization: Bearer $PAT" \
  --header 'Content-Type: application/json' \
  --header 'accept: application/json' \
  --data '{
    "name": "cp1",
    "description": "Control Plane 1",
    "cluster_type": "CLUSTER_TYPE_HYBRID",
    "labels":{},
    "auth_type": "pki_client_certs"
  }'

Get the CP Id with:

CP_ID=$(curl -s https://us.api.konghq.com/v2/control-planes \
--header "Authorization: Bearer $PAT" | jq -r '.data[] | select(.name=="cp1") | .id')

Get the CP’s Endpoints with:

% curl -s https://us.api.konghq.com/v2/control-planes/$CP_ID \
--header "Authorization: Bearer $PAT" | jq -r ".config"
{
  "control_plane_endpoint": "https://9816cc07fe.us.cp0.konghq.com",
  "telemetry_endpoint": "https://9816cc07fe.us.tp0.konghq.com",
  "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
  "auth_type": "pki_client_certs",
  "cloud_gateway": false,
  "proxy_urls": []
}

Now we need to add the Digital Certificate. Use the CP Id in your request:

cert="{\"cert\": $(jq -sR . ./kongcp1.crt)}"

curl -X POST https://us.api.konghq.com/v2/control-planes/$CP_ID/dp-client-certificates \
--header "Authorization: Bearer $PAT" \
--header 'Content-Type: application/json' \
--header 'accept: application/json' \
--data "$cert"
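To confirm the certificate was registered, you can list the DP client certificates for the Control Plane (the GET counterpart of the POST above):

```shell
# The response should include the certificate we just uploaded.
curl -s https://us.api.konghq.com/v2/control-planes/$CP_ID/dp-client-certificates \
  --header "Authorization: Bearer $PAT" | jq
```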

Konnect Vault

Kong Konnect provides Secrets Management capabilities supporting environment variable based secrets and Cloud Services based Secret Managers including AWS Secrets Manager.

We have to create a Konnect Vault to tell our Konnect Control Plane and its Data Plane that we are storing our secrets in AWS Secrets Manager in the us-west-1 AWS region:

curl -X POST \
  https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/vaults \
  --header "Authorization: Bearer $PAT" \
  --header 'Content-Type: application/json' \
  --header 'accept: application/json' \
  --data '{
    "prefix": "aws-secrets",
    "name": "aws",
    "config":{ "region": "us-west-1" }
  }'
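You can verify the Vault was created by listing the Control Plane's vault entities; the new entry should show the "aws-secrets" prefix:

```shell
# List the vault entities and print their prefixes.
curl -s https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/vaults \
  --header "Authorization: Bearer $PAT" | jq -r '.data[].prefix'
```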

Konnect Data Plane

With the CP and Vault created, let’s deploy the DP. As we discussed at the beginning of this post, we are taking the typical “values.yaml” declaration and changing it to use the Digital Certificate and Private Key pair we get through EKS Pod Identity.

The two updates for the “values.yaml” are:

  • In order to use EKS Pod Identity, we have to add the same Kubernetes Service Account previously created.
  • Inside the “env” section we change both “cluster_cert” and “cluster_cert_key” settings with references to the secrets created.

In addition, use the CP endpoints for the corresponding settings.

image:
  repository: kong/kong-gateway
  tag: "3.7"

deployment:
  serviceAccount:
    create: false
    name: kong37-eks130-sa

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 9876543210.us.cp0.konghq.com:443
  cluster_server_name: 9876543210.us.cp0.konghq.com
  cluster_telemetry_endpoint: 9876543210.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 9876543210.us.tp0.konghq.com
  cluster_cert: "{vault://aws/kongcp1-crt/cert}"
  cluster_cert_key: "{vault://aws/kongcp1-key/key}"
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false
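The "cluster_cert" and "cluster_cert_key" references resolve against AWS Secrets Manager: in {vault://aws/kongcp1-crt/cert}, "kongcp1-crt" is the secret name and "cert" is the JSON key inside it. For the references to resolve, the secrets must have been stored as JSON objects with matching keys, along these lines (a sketch of the earlier step that created them; adjust file names as needed):

```shell
# Each secret is a JSON object whose key matches the last segment of the
# vault reference ("cert" and "key" respectively).
aws secretsmanager create-secret --region us-west-1 --name kongcp1-crt \
  --secret-string "{\"cert\": $(jq -sR . ./kongcp1.crt)}"
aws secretsmanager create-secret --region us-west-1 --name kongcp1-key \
  --secret-string "{\"key\": $(jq -sR . ./kongcp1.key)}"
```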

Deploy the Konnect Data Plane with a Helm command:

helm install kong kong/kong -n kong --values ./values.yaml

You should see the Data Plane running:

% kubectl get pod -n kong
NAME                         READY   STATUS    RESTARTS   AGE
kong-kong-6755865687-t8hfk   1/1     Running   0          63s

You can also check the Data Plane in the Konnect GUI.

Plugin Configuration

We can also use Kong Secrets Management along with AWS Secrets Manager to configure Kong Plugins. As an example, let’s enable the Request Transformer Advanced Plugin with another secret stored in AWS Secrets Manager.

Kong Gateway Service and Route

First, let’s create a new Kong Service and Route. You can use the Konnect GUI if you like or, again, the Konnect RESTful API:

Kong Gateway Service

http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services name=service1 \
url='http://httpbin.org' \
Authorization:"Bearer $PAT"

Get your new Gateway Service Id with:

SERVICE_ID=$(http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services/service1 \
Authorization:"Bearer $PAT" | jq -r ".id")

Kong Route

Use the Service Id to define the Kong Route:

http https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services/$SERVICE_ID/routes name='route1' paths:='["/route1"]' Authorization:"Bearer $PAT"

Consume the Route

Get the Load Balancer DNS name

% kubectl get service kong-kong-proxy -n kong -o json | jq -r ".status.loadBalancer.ingress[].hostname"
ac5dd8c9832b84a1db05fb1d5a7ff597-919408190.us-west-1.elb.amazonaws.com

Consume the Kong Route

http ac5dd8c9832b84a1db05fb1d5a7ff597-919408190.us-west-1.elb.amazonaws.com/route1/get

Request Transformer Advanced Plugin

First, create a new secret to be used by the plugin:

aws secretsmanager create-secret --region=us-west-1 --name kong-secret --secret-string "kong_key:kong_secret"
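You can read the secret back to confirm it was stored as expected:

```shell
# Retrieve the plain secret string we just created.
aws secretsmanager get-secret-value --region us-west-1 \
  --secret-id kong-secret | jq -r .SecretString
```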

Configure the Plugin with the following command:

curl -X POST https://us.api.konghq.com/v2/control-planes/$CP_ID/core-entities/services/$SERVICE_ID/plugins \
  --header "Authorization: Bearer $PAT" \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "request-transformer-advanced",
    "instance_name": "rta",
    "config": {
      "add": {
        "headers": ["x-header-1:123", "{vault://aws/kong-secret}"]
      }
    }
  }'

If you consume the Route again, you should see two new headers, as configured in the plugin:

% http ac5dd8c9832b84a1db05fb1d5a7ff597-919408190.us-west-1.elb.amazonaws.com/route1/get
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 669
Content-Type: application/json
Date: Mon, 20 May 2024 21:44:37 GMT
Server: gunicorn/19.9.0
Via: kong/3.7.0.0-enterprise-edition
X-Kong-Proxy-Latency: 2
X-Kong-Request-Id: 66e27a4f827a1034edfb7c601ff62859
X-Kong-Upstream-Latency: 148
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "Kong-Key": "kong_secret",
    "User-Agent": "HTTPie/3.2.2",
    "X-Amzn-Trace-Id": "Root=1-664bc445-777b896f7a51da3f4d32a04b",
    "X-Forwarded-Host": "ac5dd8c9832b84a1db05fb1d5a7ff597-919408190.us-west-1.elb.amazonaws.com",
    "X-Forwarded-Path": "/route1/get",
    "X-Forwarded-Prefix": "/route1",
    "X-Header-1": "123",
    "X-Kong-Request-Id": "66e27a4f827a1034edfb7c601ff62859"
  },
  "origin": "192.168.63.142, 54.219.148.201",
  "url": "http://ac5dd8c9832b84a1db05fb1d5a7ff597-919408190.us-west-1.elb.amazonaws.com/get"
}

Conclusion

Kong Konnect simplifies API management and improves security across your entire service infrastructure. Try it for free today!

This blog post described how a Kong Konnect Data Plane deployment can:

  1. Take advantage of the inherently flexible Konnect Data Plane deployment to integrate with AWS EKS Pod Identity, restricting and controlling access to AWS services.
  2. Externalize Konnect Data Plane secrets to AWS Secrets Manager for a safer deployment, leveraging AWS Identity and Access Management (IAM) roles.
  3. Configure Kong Plugins with secrets stored inside AWS Secrets Manager.
