Kong Konnect Data Plane 3.4 and Amazon EKS 1.28 Deployment with AWS Secrets Manager, IRSA and External Secrets Operator

Claudio Acquaviva
21 min read · Nov 16, 2023


Introduction

In Kubernetes, a Secret is an object that stores sensitive information. The most common types of secrets are passwords, tokens, digital certificates, encryption keys, etc.

At the same time, a Secrets lifecycle process should be implemented to address several challenges, including:

  • Sharing secrets with users and applications
  • Providing strong secret storage, avoiding plain-text or unencrypted data
  • Defining a Secret policy with rules for reuse, duration, special characters, etc.
  • Managing fine-grained policies to access the secrets
  • Auditing and monitoring secrets usage
  • Implementing and automating secrets rotation

In summary, the Secrets lifecycle should be externalized from the Kubernetes cluster, while still allowing the secrets to be consumed by new deployments as if they were locally created.

Kong Konnect Data Plane and Secrets

A Kong Konnect Data Plane (DP) deployment establishes an mTLS connection with the Konnect Control Plane (CP). The CP and DP follow the Hybrid Mode deployment and secure the connection with a Digital Certificate and Private Key pair. The encrypted tunnel is used to publish any API definitions and policies created on the CP to all connected DPs.

AWS Secrets Manager, IRSA (IAM Role for Service Account) and External Secrets Operator

AWS Secrets Manager helps you create and maintain your Secrets lifecycles. Many AWS services can store and use secrets in Secrets Manager, including Amazon EKS clusters.

EKS clusters can consume AWS Secrets Manager to store and support secrets through IRSA (IAM Roles for Service Accounts). IRSA is a general AWS framework that allows applications running in EKS to access AWS services (including AWS Secrets Manager) in a controlled manner, based on permissions defined in AWS IAM (Identity and Access Management) Roles and temporary AWS credentials issued by AWS STS (Security Token Service).

Lastly, the External Secrets Operator (ESO) is responsible for exposing the secrets stored in AWS Secrets Manager, and controlled by IRSA, to EKS deployments. In this sense, Kong Data Plane deployments will access the secrets through the Kubernetes abstraction created by ESO.

The following diagram shows a high-level overview of the architecture:

This blog post describes how AWS services (Secrets Manager, IRSA, IAM and STS) can be used to support Kong Data Plane deployments in EKS.

Kong Konnect Data Plane Deployment Plan

Let’s get started with the typical “values.yaml” file we use for a Kong Konnect Data Plane. Please check the Konnect documentation to learn more about the Data Plane Deployment:

image:
  repository: kong/kong-gateway
  tag: "3.4"

secretVolumes:
- kong-cluster-cert

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 1234567890.us.cp0.konghq.com:443
  cluster_server_name: 1234567890.us.cp0.konghq.com
  cluster_telemetry_endpoint: 1234567890.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 1234567890.us.tp0.konghq.com
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false

The “cluster_cert” and “cluster_cert_key” variables define the Digital Certificate and Private Key pair the Data Plane (DP) should use to connect to its Konnect Control Plane. In fact, the standard Konnect Data Plane deployment process, available on the “Self-Managed Hybrid Data Plane Node” page, provides a button to generate the pair, which is supposed to be injected into the Kubernetes cluster as a secret.

Please keep in mind this is the configuration file generated by the Konnect Data Plane deployment process. For a production-ready environment you might want to consider other variables to get your Data Plane running. Check the Configuration for Kong Gateway page to learn more about them.

Before running the Helm command, we should create a Kubernetes secret holding both the “cluster_cert” and “cluster_cert_key” contents and reference it in our “values.yaml” file.
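For reference, assuming the generated pair was saved locally as “tls.crt” and “tls.key” and the “kong” namespace already exists, the secret expected by the “kong-cluster-cert” secret volume above could be created like this:

kubectl create secret tls kong-cluster-cert -n kong --cert=./tls.crt --key=./tls.key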

The idea is to implement a new process where:

  • The pair is stored in AWS Secrets Manager.
  • The “values.yaml” refers to new secrets injected by the External Secrets Operator (ESO).

AWS Secrets Manager

To better understand the situation, let’s try to consume secrets stored in AWS Secrets Manager from a basic Pod deployment, using the AWS CLI. The Pod plays the same role the Konnect Data Plane does in an actual deployment.

Digital Certificate and Private Key pair issuing

First of all, we need to create the Private Key and Digital Certificate that both the Konnect Control Plane and Data Plane use to build the mTLS connection.

For the purpose of this blog post, the secure communication will be based on the default “pinned mode”. You can use several tools to issue the pair, including a simple OpenSSL command like this:

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout ./kongcp1.key \
-out ./kongcp1.crt \
-days 1095 \
-subj "/CN=konnect-cp1"

Secrets creation

You can create your secrets in AWS Secrets Manager by running the following commands:

aws secretsmanager create-secret --name kongcp1-crt --region us-west-1 --secret-string "{\"cert\": \"$(cat ./kongcp1.crt)\"}"

aws secretsmanager create-secret --name kongcp1-key --region us-west-1 --secret-string "{\"key\": \"$(cat ./kongcp1.key)\"}"
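You can verify the secrets were stored correctly by reading one of them back:

aws secretsmanager get-secret-value --secret-id kongcp1-crt --region us-west-1 | jq -r ".SecretString"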

Try to consume AWS Secrets Manager from an EKS Pod

Create the EKS Cluster

First of all, create an EKS cluster with eksctl with a command like this:

eksctl create cluster --name kong34-eks128 --version 1.28 --region us-west-1 --nodegroup-name standard-workers --node-type t3.large --nodes 1

Deploy a Pod

Now, deploy a Pod running the Ubuntu Operating System, with the AWS CLI already installed, in a “kong” namespace:

kubectl create namespace kong

kubectl run -n kong --rm=true -i --tty ubuntu --image=claudioacquaviva/ubuntu-awscli:0.1 -- /bin/bash

Try to consume the AWS Secrets Manager Service

Inside the Pod, if you try to consume the AWS Secrets Manager secrets, you get an error:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
An error occurred (AccessDeniedException) when calling the ListSecrets operation: User: arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong34-eks128-nodegroup-sta-NodeInstanceRole-qnMUEeyrmgX2/i-0daf6ca843594ab20 is not authorized to perform: secretsmanager:ListSecrets because no identity-based policy allows the secretsmanager:ListSecrets action

This is because, from the STS perspective, the caller (in our case, the Pod) is assuming a Role (“eksctl-kong34-eks128-nodegroup-sta-NodeInstanceRole-qnMUEeyrmgX2”, the standard EKS Node Instance Role) which does not have any permission to consume the AWS service. You can check that by running:

# aws sts get-caller-identity
{
  "UserId": "<aws_roleid>:<eks_node_ec2_instance_id>",
  "Account": "<your_aws_account>",
  "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/eksctl-kong34-eks128-nodegroup-sta-NodeInstanceRole-qnMUEeyrmgX2/<eks_node_ec2_instance_id>"
}

To solve that, we need IRSA to request temporary AWS credentials, based on IAM Roles and Policies, so the Pod can consume AWS Secrets Manager.

AWS STS is King

It’s important to note that, in order to consume any AWS service (including AWS Secrets Manager), we have to have AWS credentials like AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION and AWS_SESSION_TOKEN in place. Of course, you can use your existing ones. However, embedding long-term AWS credentials in applications is not recommended. As a best practice, we should request temporary AWS credentials dynamically from AWS STS. The supplied temporary credentials should then map to an AWS Role that has only the permissions needed to perform the required tasks.

The fundamental AWS STS request is an “AssumeRoleWithWebIdentity” request like this:

curl "https://sts.amazonaws.com/?Action=AssumeRoleWithWebIdentity
&Version=2011–06–15
&RoleSessionName=app1
&RoleArn=arn:aws:iam::<your_aws_account>:role/kong-assumerole-withwebidentity
&WebIdentityToken=<your_web_token>"

And the response would look like this:

<AssumeRoleWithWebIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
  <AssumedRoleUser>
    <AssumedRoleId>AROAXAKB57VPYHBMMNINJ:app1</AssumedRoleId>
    <Arn>arn:aws:sts::<your_aws_account>:assumed-role/kong-assumerole-withwebidentity/app1</Arn>
  </AssumedRoleUser>
  <Provider>cognito-identity.amazonaws.com</Provider>
  <Credentials>
    <AccessKeyId>ASIAXAKB57VPY34MMKVI</AccessKeyId>
    <SecretAccessKey>LMas8gc0JvirDaSVFGohNXCE9vQwK+4c6hYmKQOl</SecretAccessKey>
    <SessionToken>IQoJb3JpZ…</SessionToken>
    <Expiration>2023-10-16T18:51:09Z</Expiration>
  </Credentials>
</AssumeRoleWithWebIdentityResponse>

The request refers to critical elements:

  • We are asking AWS STS to let the caller assume an IAM Role.
  • The IAM Role is “kong-assumerole-withwebidentity”.
  • The assumption is based on a WebIdentityToken presented by the caller. The WebIdentityToken should be generated by an OIDC-based Identity Provider, like Cognito, for example.

Of course, for our use case, the IAM Role is expected to grant access to AWS Secrets Manager. The corresponding response presents the temporary credentials we should use. Setting the environment variables with the temporary credentials allows us to consume AWS Secrets Manager, whose permissions are included as Policies in the IAM Role the caller assumed.

For example:

export AWS_ACCESS_KEY_ID=ASIAXAKB57VPY34MMKVI
export AWS_SECRET_ACCESS_KEY=LMas8gc0JvirDaSVFGohNXCE9vQwK+4c6hYmKQOl
export AWS_DEFAULT_REGION=us-west-1
export AWS_SESSION_TOKEN=IQoJb3JpZ…

Ultimately, that’s what IRSA does behind the scenes. Of course, there are several other components automating the process to make it transparent from the Kubernetes perspective, but that’s fundamentally what we need.

AWS IAM OIDC Identity Provider

To properly process the AssumeRoleWithWebIdentity call, AWS STS relies on AWS IAM to manage the requested Role and on the IAM Identity Provider capability to validate the token.

AWS IAM Identity Providers support external OpenID Connect (OIDC) based Identity Providers (IdPs) that manage user identities outside of AWS, including, for example, Amazon, Facebook, Okta and Salesforce. To use one, you create an IAM Identity Provider with the external IdP configuration to establish trust with the OIDC-based IdP.

You can check the AWS IAM documentation to learn more about IAM Identity Provider and how to create a new one based on an external OIDC IdP.

With the AWS IAM Identity Provider created, the AWS Service consumption process would look like this:

  1. An application, running inside a Pod, implements the consumer authentication process with the external IdP.
  2. The IdP issues the related Access, Session and Identity Tokens. For the purpose of AWS service consumption, we should use the Identity Token.
  3. The Pod sends an “AssumeRoleWithWebIdentity” request to AWS STS with the Identity Token and the Role it wants to assume. The Role should include Policies allowing the caller (in this case, the Pod) to consume the required AWS service.
  4. AWS STS asks AWS IAM to validate the token and the requested Role.
  5. Using the previously established trust relationship with the external IdP, AWS IAM validates the token.
  6. AWS IAM tells AWS STS the request is OK. AWS STS returns the temporary AWS credentials to the Pod.
  7. The Pod sets the AWS environment variables with the temporary AWS credentials.
  8. The Pod can consume the AWS service.

EKS OIDC Issuer

On the other hand, every Amazon EKS cluster hosts a public OIDC issuer URL associated with it. That means AWS IAM can rely on it just like it does with the external IdPs we showed before.

The diagram would be slightly different though:

To get IRSA working properly, an IAM Identity Provider should exist for the EKS cluster’s OIDC issuer URL. With that, any identity created in EKS can then assume a Role and consume the AWS Services defined in the Policies attached to the Role.

Kubernetes Service Account and Service Account Token

How about the token? In Kubernetes, identities are managed by Service Accounts. In fact, every Kubernetes Namespace has a “default” Service Account, and all deployments have access to a token related to that “default” Service Account. The token, called the Service Account Token, is a JWT available to each Pod.

The IAM OIDC Identity Provider is used by AWS IAM to establish trust with the Kubernetes API Server, which is responsible for issuing the Service Account Tokens. From the OIDC standard perspective, the Kubernetes API Server plays the Identity Provider role, while AWS IAM is the Relying Party.

With the IAM OIDC Identity Provider linked to the EKS OIDC Issuer, we can use the Service Account Token to assume IAM Roles via AWS STS.

curl "https://sts.amazonaws.com/?Action=AssumeRoleWithWebIdentity
&Version=2011–06–15
&RoleSessionName=app1
&RoleArn=arn:aws:iam::<your_aws_account>:role/kong-assumerole-withwebidentity
&WebIdentityToken=<service_account_token>"

Let’s check both the Service Account and its Token.

% kubectl get sa default -n kong -o json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "creationTimestamp": "2023-10-23T15:26:19Z",
    "name": "default",
    "namespace": "kong",
    "resourceVersion": "5422",
    "uid": "16a69c44-dac9-40c9-9aa7-222e30a90176"
  }
}

Assuming you still have the Pod running, you can see it is using the “default” Service Account and has a volume mounted at a specific path:

% kubectl get pod ubuntu -n kong -o json | jq -r ".spec.serviceAccount"
default

% kubectl get pod ubuntu -n kong -o json | jq ".spec.containers[].volumeMounts"
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kube-api-access-rtpbs",
    "readOnly": true
  }
]

This is a Projected Volume added by the Kubernetes ServiceAccount Admission Controller when we deployed the Pod. Read the Projected Volume documentation to learn more.
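If you want to look at the volume definition itself, you can filter the Pod spec (the volume name is generated by Kubernetes, so yours will differ from “kube-api-access-rtpbs”):

% kubectl get pod ubuntu -n kong -o json | jq '.spec.volumes[] | select(.name | startswith("kube-api-access"))'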

You can check what’s inside the directory with:

% kubectl exec ubuntu -n kong -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

The volume is created for Kubernetes API access and has three sources:

  • ConfigMap, with the Kubernetes Cluster Certificate Authority data.
  • Namespace, with the namespace where the Pod has been deployed.
  • Service Account Token, with the actual token related to the Service Account.

You can decode the token to check its claims:

% kubectl exec ubuntu -n kong -- cat /var/run/secrets/kubernetes.io/serviceaccount/token | jwt decode -
Token header
------------
{
  "alg": "RS256",
  "kid": "eac61c096e0992a14d324de3e9060e64b20ad5e9"
}

Token claims
------------
{
  "aud": [
    "https://kubernetes.default.svc"
  ],
  "exp": 1729779172,
  "iat": 1698243172,
  "iss": "https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B",
  "kubernetes.io": {
    "namespace": "kong",
    "pod": {
      "name": "ubuntu",
      "uid": "2ae26245-4e8f-45fa-a6a4-4b68ab491e07"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "16a69c44-dac9-40c9-9aa7-222e30a90176"
    },
    "warnafter": 1698246779
  },
  "nbf": 1698243172,
  "sub": "system:serviceaccount:kong:default"
}

Accessing Kubernetes API with the Service Account Token

The Service Account Token has the kube-apiserver address (“https://kubernetes.default.svc”) as the audience (“aud”), meaning the token is supposed to be used inside the cluster to reach the Kubernetes API Server.

You can try it yourself. From a local terminal, define a role with rules to access the pods inside the “kong” namespace:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: kong
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF

Grant the Role to the Service Account:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: kong
subjects:
- kind: ServiceAccount
  name: default
  namespace: kong
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

Now, inside the Pod, get the token and use it in a Kubernetes API request:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# curl -sk https://kubernetes.default.svc/api/v1/namespaces/kong/pods --header "Authorization: Bearer $TOKEN" | jq -r ".items[].metadata.name"
ubuntu

EKS Cluster OIDC Issuer and IAM Identity Provider

So far, AWS IAM does not know anything about our EKS Cluster. We need to define a new IAM Identity Provider with a trust relationship with it.

EKS OIDC Issuer

As stated previously, all EKS clusters have an OIDC Issuer available. You can check it out with:

% aws eks describe-cluster --name kong34-eks128 --region us-west-1 | jq -r ".cluster.identity.oidc.issuer"
https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B

You can also hit its standard OIDC endpoint:

% curl -s https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B/.well-known/openid-configuration | jq
{
  "issuer": "https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B",
  "jwks_uri": "https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B/keys",
  "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "claims_supported": [
    "sub",
    "iss"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}

The OIDC Issuer provides a specific endpoint with the keys the IAM Identity Provider should use to connect and establish the trust relationship with it:

curl -s https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B/keys
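The response is a standard JWKS (JSON Web Key Set) document carrying the public keys used to validate the Service Account Token signatures. Its general shape looks like this (values abbreviated and illustrative):

{
  "keys": [
    {
      "kty": "RSA",
      "kid": "eac61c096e0992a14d324de3e9060e64b20ad5e9",
      "use": "sig",
      "alg": "RS256",
      "n": "…",
      "e": "AQAB"
    }
  ]
}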

Using the OIDC Provider and Pod Service Account Token

With no IAM Identity Provider defined, let’s see what happens when we try to use the Pod Service Account Token to get the temporary AWS credentials.

AssumeRoleWithWebIdentity Role

We don’t have any Role defined to be used with the AWS STS “AssumeRoleWithWebIdentity” request yet. As a reminder, the request is responsible for issuing temporary AWS credentials to a caller presenting a token. The caller assumes an IAM Role which has Policies with the permissions the caller should have.

Here’s the Role declaration:

aws iam create-role --role-name kong-assumerole-withwebidentity --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}'

The Role says that the caller, in our case the Pod, will be allowed to assume the Role, and all permissions attached to it, when presenting a token issued by the EKS OIDC Issuer.

Now, what happens if we try to send a request to AWS STS with the Pod Service Account Token? Note we are giving a name (“app1”) to the Role Session.

Inside the Pod run:

# cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImt….

# curl "https://sts.amazonaws.com/?Action=AssumeRoleWithWebIdentity&Version=2011-06-15&&RoleSessionName=app1&RoleArn=arn:aws:iam::<your_aws_account>:role/kong-assumerole-withwebidentity&WebIdentityToken=eyJhbGciOiJSUzI1NiIsImt…."
<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<Error>
<Type>Sender</Type>
<Code>InvalidIdentityToken</Code>
<Message>No OpenIDConnect provider found in your account for https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B</Message>
</Error>
<RequestId>727c353b-6034–4d04–825d-7ea3442e2061</RequestId>
</ErrorResponse>

As you can see, AWS STS says that, since there is no IAM OIDC Identity Provider in the account for the EKS Issuer referenced in the Role, it cannot process the request. So let’s create an IAM OIDC Identity Provider and associate it with the EKS Issuer.

Create the IAM OIDC Identity Provider

We can do that using a specific eksctl command:

eksctl utils associate-iam-oidc-provider --cluster kong34-eks128 --region=us-west-1 --approve

You can check the Provider with:

% aws iam list-open-id-connect-providers
{
  "OpenIDConnectProviderList": [
    {
      "Arn": "arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B"
    }
  ]
}

You can also check the “ClientIDList” to see that it accepts only tokens with the audience set to “sts.amazonaws.com”.

% aws iam get-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B | jq -r ".ClientIDList"
[
  "sts.amazonaws.com"
]

Try to get the AWS Credentials again

Send the same request again:

# curl "https://sts.amazonaws.com/?Action=AssumeRoleWithWebIdentity&Version=2011-06-15&&RoleSessionName=app1&RoleArn=arn:aws:iam::<your_aws_account>:role/kong-assumerole-withwebidentity&WebIdentityToken=eyJhbGciOiJSUzI1NiIsImt…."
<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<Error>
<Type>Sender</Type>
<Code>InvalidIdentityToken</Code>
<Message>Incorrect token audience</Message>
</Error>
<RequestId>8c20101a-c3de-4c0b-9d2f-8ef87c9d7f40</RequestId>
</ErrorResponse>

Now STS is saying we presented a token with an incorrect audience. That is expected, since we are using the Pod Service Account Token, which has “https://kubernetes.default.svc” as the standard audience (“aud”), while the IAM OIDC Identity Provider accepts tokens with the audience defined as “sts.amazonaws.com”.

There are two ways to solve this problem:

  • Issue a new Kubernetes token for our Service Account with the right audience and use it in the request:

kubectl create token default -n kong --audience sts.amazonaws.com

  • Or add a new audience to the Identity Provider’s Client ID List:

aws iam add-client-id-to-open-id-connect-provider \
--open-id-connect-provider-arn arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B \
--client-id "https://kubernetes.default.svc"

However, a second Service Account Token, used just for AWS service consumption, is a better and recommended solution. This second token would be injected into our Pod just like EKS did for the default one. That’s exactly what IRSA does for us in an automatic way.

Before jumping to IRSA, delete the Role we created before. We are going to create another one during the IRSA process described next.

aws iam delete-role --role-name kong-assumerole-withwebidentity

IRSA (IAM Role for Service Accounts)

IRSA is a feature that allows you to assign an IAM role to a Kubernetes Service Account. It leverages the Kubernetes support for Dynamic Admission Control, more precisely the Mutating Admission Webhook, preinstalled in any EKS cluster.

The webhook gets to work when a new Pod, referring to a Service Account, is scheduled to be created. The Service Account should have an annotation with an AWS IAM Role. As the name implies, the webhook mutates the Pod, injecting what it needs to obtain the temporary AWS credentials, issued by AWS STS, required to call the AWS services allowed by the annotated Role. The webhook also injects a new Service Account Token into the Pod.

When a Pod is being created, the webhook calls the Kubernetes API Server, more precisely the OIDC Issuer, to generate a JWT for the Service Account used in the Pod declaration. A volume is mounted in the new Pod with the token, similarly to what happens with the default Service Account volume.
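Conceptually, the result is equivalent to adding a projected Service Account Token volume to the Pod spec. A minimal sketch of what the webhook injects (the volume name matches what we will see later; the expiration value is illustrative):

volumes:
- name: aws-iam-token
  projected:
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com
        expirationSeconds: 86400
        path: token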

Create a Policy with permissions to list and read the existing Secrets

In order to get our Pod consuming the secrets stored in AWS Secrets Manager, we need a Policy defining the permissions to access them. The Policy will be attached to a Role, which will be assumed by the Pod during the deployment.

aws iam create-policy \
--policy-name list-secrets-policy \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:ListSecrets",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "*"
    }
  ]
}'

Create the IAM Service Account

Now, let’s create the Service Account we are going to use for our Pod deployment. eksctl provides a command to create IAM Service Accounts. In fact, the following command creates:

  • The EKS Service Account in your namespace.
  • The IAM Role with our Policy attached.

eksctl create iamserviceaccount \
--name kong34-eks128-sa \
--namespace kong \
--cluster kong34-eks128 \
--region us-west-1 \
--approve \
--role-name kong34-eks128-role \
--override-existing-serviceaccounts \
--attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`list-secrets-policy`].Arn' --output text)

Check the Service Account

If you check the Service Account, you will see it has the required annotation referring to the Role created by the iamserviceaccount command.

% kubectl describe sa kong34-eks128-sa -n kong
Name:                kong34-eks128-sa
Namespace:           kong
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<your_aws_account>:role/kong34-eks128-role
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Check the Role and Policy

You can also check the Role and Policy. Note the Role is quite similar to the one we created manually before; “iamserviceaccount” adds two Conditions for the “sub” and “aud” claims:

% aws iam get-role --role-name kong34-eks128-role
{
  "Role": {
    "Path": "/",
    "RoleName": "kong34-eks128-role",
    "RoleId": "AROAXAKB57VPX7BMUHHKE",
    "Arn": "arn:aws:iam::<your_aws_account>:role/kong34-eks128-role",
    "CreateDate": "2023-10-25T17:58:28+00:00",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::<your_aws_account>:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B:sub": "system:serviceaccount:kong:kong34-eks128-sa",
              "oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B:aud": "sts.amazonaws.com"
            }
          }
        }
      ]
    },
    "Description": "",
    "MaxSessionDuration": 3600,
    "Tags": [
      {
        "Key": "alpha.eksctl.io/cluster-name",
        "Value": "kong34-eks128"
      },
      {
        "Key": "alpha.eksctl.io/iamserviceaccount-name",
        "Value": "kong/kong34-eks128-sa"
      },
      {
        "Key": "alpha.eksctl.io/eksctl-version",
        "Value": "0.163.0-dev+e4222edab.2023-10-24T06:23:06Z"
      },
      {
        "Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name",
        "Value": "kong34-eks128"
      }
    ],
    "RoleLastUsed": {}
  }
}

As expected, the Role refers to our Policy:

% aws iam list-attached-role-policies --role-name kong34-eks128-role
{
  "AttachedPolicies": [
    {
      "PolicyName": "list-secrets-policy",
      "PolicyArn": "arn:aws:iam::<your_aws_account>:policy/list-secrets-policy"
    }
  ]
}

Deploy the Pod with the Service Account

After creating the Service Account and IAM Role, we are ready to deploy our Pod with the Service Account injected. Delete your current Pod and redeploy it, adding the Service Account in the “spec” section:

kubectl delete pod ubuntu -n kong

kubectl run -n kong \
--overrides='{ "spec": { "serviceAccountName": "kong34-eks128-sa" } }' \
--rm=true \
-i --tty ubuntu \
--image=claudioacquaviva/ubuntu-awscli:0.1 -- /bin/bash

This time, you should be able to consume AWS Secrets Manager and request the secrets:

# aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
kongcp1-key
kongcp1-crt

Check the Pod deployment

Service Account

It’s important to check the main updates IRSA has made to our deployment. From a local terminal, we can see the Service Account inside the “spec” section:

% kubectl get pod ubuntu -n kong -o json | jq -r ".spec.serviceAccount"
kong34-eks128-sa

Environment Variables

Similarly, you should be able to see the AWS environment variables. Note that IRSA, differently from what we did previously, sets the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN variables instead. The reason is that the AWS SDKs and the AWS CLI use them implicitly to hit AWS STS and get the temporary credentials. You can learn more about it in the AWS SDK documentation for Python or JavaScript, for example.

% kubectl exec ubuntu -n kong -- env | grep AWS
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=us-west-1
AWS_REGION=us-west-1
AWS_ROLE_ARN=arn:aws:iam::<your_aws_account>:role/kong34-eks128-role
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

STS Caller Identity

Inside the Pod, you can check the STS caller identity:

# aws sts get-caller-identity
{
  "UserId": "AROAXAKB57VPX7BMUHHKE:botocore-session-1698439532",
  "Account": "<your_aws_account>",
  "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong34-eks128-role/botocore-session-1698439532"
}

Token

Just like the standard Pod Service Account Token, IRSA stores the new token in a specific volume, available under the “eks.amazonaws.com” directory:

% kubectl get pod ubuntu -n kong -o json | jq ".spec.containers[].volumeMounts"
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kube-api-access-c9r92",
    "readOnly": true
  },
  {
    "mountPath": "/var/run/secrets/eks.amazonaws.com/serviceaccount",
    "name": "aws-iam-token",
    "readOnly": true
  }
]

You can check what’s inside the directory and decode the token with:

% kubectl exec ubuntu -n kong -- ls /var/run/secrets/eks.amazonaws.com/serviceaccount
token

% kubectl exec ubuntu -n kong -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token | jwt decode -
Token header
------------
{
  "alg": "RS256",
  "kid": "eac61c096e0992a14d324de3e9060e64b20ad5e9"
}

Token claims
------------
{
  "aud": [
    "sts.amazonaws.com"
  ],
  "exp": 1698349285,
  "iat": 1698262885,
  "iss": "https://oidc.eks.us-west-1.amazonaws.com/id/B1399294EFF1F7B7024B9ABDE384194B",
  "kubernetes.io": {
    "namespace": "kong",
    "pod": {
      "name": "ubuntu",
      "uid": "28b12d5b-9f08-400e-957c-aab427751c32"
    },
    "serviceaccount": {
      "name": "kong34-eks128-sa",
      "uid": "89bc9c5e-ba2d-452a-977a-a7ecc2deb729"
    }
  },
  "nbf": 1698262885,
  "sub": "system:serviceaccount:kong:kong34-eks128-sa"
}

If you like, you can invoke AWS STS yourself with the token and get new temporary AWS credentials. For example, inside your Pod, start by unsetting the IRSA variables:

unset AWS_STS_REGIONAL_ENDPOINTS
unset AWS_DEFAULT_REGION
unset AWS_REGION
unset AWS_ROLE_ARN
unset AWS_WEB_IDENTITY_TOKEN_FILE

Get your token:

TOKEN=$(cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)

Send a request to STS:

curl "https://sts.amazonaws.com/?Action=AssumeRoleWithWebIdentity&Version=2011-06-15&RoleSessionName=app1&RoleArn=arn:aws:iam::<your_aws_account>:role/kong34-eks128-role&WebIdentityToken=$TOKEN" > envvars.xml

Set your new environment variables using xmllint:

export AWS_ACCESS_KEY_ID=$(xmllint --xpath "//*[local-name()='AccessKeyId']/text()" envvars.xml)
export AWS_SECRET_ACCESS_KEY=$(xmllint --xpath "//*[local-name()='SecretAccessKey']/text()" envvars.xml)
export AWS_SESSION_TOKEN=$(xmllint --xpath "//*[local-name()='SessionToken']/text()" envvars.xml)
export AWS_DEFAULT_REGION=us-west-1

Check the Caller Identity

# aws sts get-caller-identity
{
  "UserId": "<your_user_id>:app1",
  "Account": "<your_aws_account>",
  "Arn": "arn:aws:sts::<your_aws_account>:assumed-role/kong34-eks128-role/app1"
}

Consume AWS Secrets Manager

aws secretsmanager list-secrets --region us-west-1 | jq -r ".SecretList[].Name"
kongcp1-key
kongcp1-crt
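Since the attached Policy also allows “secretsmanager:GetSecretValue”, the Pod can read the actual secret values too, which is exactly what ESO will do for us next. For example:

# aws secretsmanager get-secret-value --secret-id kongcp1-crt --region us-west-1 --query SecretString --output text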

To go back to the IRSA setup, unset the manual credentials and set the original environment variables again:

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_DEFAULT_REGION
unset AWS_SESSION_TOKEN
export AWS_STS_REGIONAL_ENDPOINTS=regional
export AWS_DEFAULT_REGION=us-west-1
export AWS_REGION=us-west-1
export AWS_ROLE_ARN=arn:aws:iam::<your_aws_account>:role/kong34-eks128-role
export AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

External Secrets Operator

Up to this point, IRSA has solved a big challenge for us: it automatically provides temporary AWS credentials to Kubernetes deployments, abstracted with Service Accounts.

In our case, we are leveraging IRSA to consume AWS Secrets Manager. We need another component to provide the secrets we are getting from AWS Secrets Manager as local Kubernetes secrets. That’s the role of the External Secrets Operator.

The External Secrets Operator (ESO) integrates the Kubernetes cluster with external secret management systems like AWS Secrets Manager and others. The operator reads information from AWS Secrets Manager and automatically injects the values into Kubernetes Secrets.

You can install ESO with Helm:

helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
external-secrets/external-secrets \
-n external-secrets \
--create-namespace \
--set installCRDs=true
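Before moving on, you can confirm the operator is running:

kubectl get pods -n external-secrets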

ESO provides its own CRDs to create objects such as SecretStores and ExternalSecrets. The first object we should create is a SecretStore, representing AWS Secrets Manager. One very important setting is the “auth” configuration, referring to the Service Account previously created. That is what allows ESO to hit AWS Secrets Manager through IRSA:

cat <<EOF | kubectl apply -f -
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager
  namespace: kong
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-1
      auth:
        jwt:
          serviceAccountRef:
            name: kong34-eks128-sa
EOF

You can check the SecretStore with:

% kubectl get secretstore aws-secretsmanager -n kong -o json | jq -r ".status"
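A healthy store reports a “Ready” condition in its status, similar to this illustrative output:

{
  "conditions": [
    {
      "message": "store validated",
      "reason": "Valid",
      "status": "True",
      "type": "Ready"
    }
  ]
}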

Now, let’s create two ExternalSecrets: one for the Digital Certificate and a second one for the Private Key. The ExternalSecret declaration refers to the JSON structure we used when creating the secrets. For example, the first declaration creates an ExternalSecret named “ext-kongcp1-crt” as well as a regular Kubernetes Secret with the same name.

The ExternalSecret declaration has a reference to the AWS Secrets Manager secret’s key and property (“remoteRef”) and another reference to the actual Kubernetes Secret to be created (“target”).

cat <<EOF | kubectl apply -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: ext-kongcp1-crt # Name of the ExternalSecret to be created
  namespace: kong
spec:
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: ext-kongcp1-crt # Name of the Secret to be created
  data:
  - secretKey: cert
    remoteRef:
      key: kongcp1-crt # AWS Secrets Manager secret
      property: cert # AWS Secrets Manager secret's property
EOF

The new Kubernetes Secret is available for you:

% kubectl get secret ext-kongcp1-crt -n kong -o json | jq -r ".data.cert" | base64 --decode
-----BEGIN CERTIFICATE-----
MIIBvTCCAUSgAwIBAgIUYuXkE+EdyJKi7tfAJFBbLw5Xzl8wCgYIKoZIzj0EAwIw
FjEUMBIGA1UEAwwLa29ubmVjdC1jcDEwHhcNMjMxMDIzMTkyMjEzWhcNMjYxMDIy
….
-----END CERTIFICATE-----

Create a second ExternalSecret for the Private Key:

cat <<EOF | kubectl apply -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: ext-kongcp1-key
  namespace: kong
spec:
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: ext-kongcp1-key
  data:
  - secretKey: key
    remoteRef:
      key: kongcp1-key
      property: key
EOF

And here it is:

% kubectl get secret ext-kongcp1-key -n kong -o json | jq -r ".data.key" | base64 --decode
-----BEGIN PRIVATE KEY-----
MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDCfA6POGkQwAa07Ydia
OxkdavB0gk9gvgWm32dbIUKdu6ICL/xkNZ88bZv9R+mrLdyhZANiAAQHZGEZmkmR
….
-----END PRIVATE KEY-----

Kong Konnect Data Plane deployment

Finally, we are ready to deploy the Konnect Data Plane (DP) in the EKS cluster. Since we have already issued the Private Key and Digital Certificate pair and stored them in AWS Secrets Manager, the first thing to do is create the new Konnect Control Plane (CP). You need a Konnect PAT (Personal Access Token) in order to send requests to Konnect. Read the Konnect PAT documentation page to learn how to generate one.

Create a Konnect Control Plane with the following command. It configures the Pinned Mode for the CP and DP communication, meaning the same Digital Certificate is used by both CP and DP.

curl -X POST \
https://us.api.konghq.com/v2/control-planes \
--header 'Authorization: Bearer <your_pat>' \
--header 'Content-Type: application/json' \
--header 'accept: application/json' \
--data '{
  "name": "cp1",
  "description": "Control Plane 1",
  "cluster_type": "CLUSTER_TYPE_HYBRID",
  "labels":{},
  "auth_type": "pinned_client_certs"
}'

Get the CP Id with:

curl -s https://us.api.konghq.com/v2/control-planes \
--header 'Authorization: Bearer <your_pat>' | jq -r '.data[] | select(.name=="cp1") | .id'
<your_cp_id>

Get the CP’s Endpoints with:

% curl -s https://us.api.konghq.com/v2/control-planes/<your_cp_id> \
--header 'Authorization: Bearer <your_pat>' | jq -r ".config"
{
  "control_plane_endpoint": "https://1234567890.us.cp0.konghq.com",
  "telemetry_endpoint": "https://1234567890.us.tp0.konghq.com",
  "cluster_type": "CLUSTER_TYPE_CONTROL_PLANE",
  "auth_type": "pinned_client_certs"
}

Now we need to pin the Digital Certificate. Use the CP Id in your request:

cert="{\"cert\": $(jq -sR . ./kongcp1.crt)}"

curl -X POST https://us.api.konghq.com/v2/runtime-groups/<your_cp_id>/dp-client-certificates \
--header 'Authorization: Bearer <your_pat>' \
--header 'Content-Type: application/json' \
--header 'accept: application/json' \
--data "$cert"
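To confirm the certificate was pinned, you can list the Data Plane client certificates back with a GET request to the same endpoint:

curl -s https://us.api.konghq.com/v2/runtime-groups/<your_cp_id>/dp-client-certificates \
--header 'Authorization: Bearer <your_pat>'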

With the CP created, let’s deploy the DP. As we discussed at the beginning of this post, we are taking the typical “values.yaml” declaration and changing it to use the Digital Certificate and Private Key pair we get from the External Secrets Operator through IRSA.

The two updates to the “values.yaml” are:

  • In order to use IRSA, we add the Kubernetes Service Account created with the “eksctl create iamserviceaccount” command we ran before.
  • Inside the “env” section, we change both the “cluster_cert” and “cluster_cert_key” settings to reference the secrets created with ESO.

Besides that, use your own CP endpoints in the corresponding settings.

image:
  repository: kong/kong-gateway
  tag: "3.4"

deployment:
  serviceAccount:
    create: false
    name: kong34-eks128-sa

admin:
  enabled: false

env:
  role: data_plane
  database: "off"
  cluster_mtls: pki
  cluster_control_plane: 1234567890.us.cp0.konghq.com:443
  cluster_server_name: 1234567890.us.cp0.konghq.com
  cluster_telemetry_endpoint: 1234567890.us.tp0.konghq.com:443
  cluster_telemetry_server_name: 1234567890.us.tp0.konghq.com
  cluster_cert:
    valueFrom:
      secretKeyRef:
        name: ext-kongcp1-crt
        key: cert
  cluster_cert_key:
    valueFrom:
      secretKeyRef:
        name: ext-kongcp1-key
        key: key
  lua_ssl_trusted_certificate: system
  konnect_mode: "on"
  vitals: "off"

ingressController:
  enabled: false
  installCRDs: false

manager:
  enabled: false

Deploy the Konnect Data Plane with a Helm command:

helm install kong kong/kong -n kong --values ./values.yaml

You should see the Data Plane running:

% kubectl get pod -n kong
NAME                         READY   STATUS    RESTARTS   AGE
kong-kong-66d8759f59-hjcqg   1/1     Running   0          73s
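You can also check the proxy Service created by the Helm chart. With this release name, the chart is expected to name it “kong-kong-proxy”:

% kubectl get service kong-kong-proxy -n kong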

You can also check the Data Plane node in the Konnect GUI.

Conclusion

Kong Konnect simplifies API management and improves security for your entire service infrastructure. Try it for free today!

This blog post described a Kong Konnect Data Plane deployment that:

  1. Takes advantage of the inherently flexible capabilities provided by the Konnect Data Plane deployment to integrate with AWS IRSA and restrict and control access to AWS services.
  2. Externalizes the Konnect Data Plane secrets to AWS Secrets Manager, implementing a safer deployment that leverages AWS Identity and Access Management (IAM) Roles.
