Cross-account IAM roles on Amazon EKS

Justin Plute
Jan 27

In this tutorial, we will deploy a simple Node.js application built with the Express web framework onto an Amazon EKS cluster in Account A. The application will use presigned S3 URLs to fetch items from, and upload items to, an S3 Bucket in a different AWS account, Account B.

Having Pods assume cross-account IAM roles is a practical use case for multi-tenant Kubernetes clusters. In an enterprise environment, the tenants of a cluster are distinct teams within the organization. Typically, each tenant has a corresponding namespace, and the teams working in that namespace often want their applications to access resources in another AWS account.

In addition to discussing how Amazon EKS clusters support IAM Roles for Service Accounts (IRSA), we will dive into provisioning the necessary AWS IAM and S3 Bucket resources with Terraform in Account B. Then, we will explain how the Node.js application interacts with S3 using presigned URLs. And finally, we will demonstrate passing the correct annotations to the Kubernetes service account when deploying our Node.js app via Helm to the EKS cluster in Account A, so that it can retrieve the temporary AWS role credentials from Account B.

The full source code is available on GitHub.


Overview of IRSA

AWS Identity and Access Management (IAM) is AWS’s access control system for managing authentication and authorization for AWS resources. You use AWS IAM to grant users access to Amazon EKS and other AWS resources, such as Amazon S3 and Athena. Fascinatingly enough, AWS IAM permissions can work right alongside Kubernetes Role-Based Access Control (RBAC), which provides granular access controls for specific objects in a cluster or namespace.

But how do they actually work together?

With IAM Roles for Service Accounts (IRSA) on Amazon EKS clusters, you can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses it. With this feature, you no longer need to grant extended permissions to the worker node IAM role just so that pods on that node can call AWS APIs. Not only does this follow the Principle of Least Privilege, but it also allows for auditability and credential isolation.

The secret here is OIDC federation, which allows your application to assume an IAM role via the AWS Security Token Service (STS) by authenticating with an OIDC provider. When STS receives a valid OIDC JSON Web Token (JWT), it exchanges it for temporary IAM role credentials.

This works because Amazon EKS now hosts a public OIDC discovery endpoint per cluster containing the signing keys for Projected Service Account Tokens, which are valid OIDC JWTs for Kubernetes service accounts. In turn, this allows AWS IAM to validate and accept the Kubernetes-issued OIDC tokens. And because the OIDC JWT also contains the Kubernetes service account identity, we are able to restrict IAM roles to only certain Pods.

It’s an ideal solution and the AWS Container Services team deserves major props — primarily because this feature eliminates the need for third-party solutions, such as kiam or kube2iam. And if you’re an experienced Cluster Administrator, you know it’s somewhat of a hassle to get those IAM add-on modules production-ready.

Deploying the AWS Resources via Terraform

We’ll use Terraform, an Infrastructure as Code (IaC) offering from HashiCorp, to create the following AWS resources in AWS Account B:

  • An IAM OIDC Identity Provider (IdP). This is needed for the IAM role’s trust policy.
  • An S3 Bucket for our Node.js app to upload and fetch files.
  • An IAM role for our Node.js app with the proper S3 permissions.

If your cluster supports IRSA, it will have an OpenID Connect Issuer URL associated with it. You can view this URL in the Amazon EKS console, or you can use the following AWS CLI command to retrieve it.
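A sketch of that command, with cluster_name as a placeholder for your cluster’s name:

aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text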

Example Output:
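https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D30413

(The region and ID shown are illustrative; yours will differ.)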

NOTE: You must use at least version 1.16.232 of the AWS CLI to receive the proper output from this command.

With this OpenID Connect Issuer URL, we can now create the IAM OIDC Identity Provider (IdP) in Account B. Even if you already created an Identity Provider for your EKS cluster in Account A, you still need to create an IdP in every AWS account in which you want Pods on your EKS cluster to assume cross-account IAM roles. The IdP provisioned in Account B is what the trust policy of Account B’s IAM role references, not the one created in Account A.

We will provide the output from the above AWS command as the url value in the Terraform IdP resource below.

NOTE: The thumbprint of the Root CA for EKS OIDC, valid until 2037, is statically added as there is an open bug retrieving the OIDC provider thumbprint.
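A minimal sketch of that resource, assuming the issuer URL from the previous step is supplied via a hypothetical Terraform variable named eks_oidc_issuer_url:

resource "aws_iam_openid_connect_provider" "eks" {
  url            = var.eks_oidc_issuer_url
  client_id_list = ["sts.amazonaws.com"]

  # Root CA thumbprint for EKS OIDC, valid until 2037 (see note above)
  thumbprint_list = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"]
}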

The Amazon Resource Name (ARN) is how AWS uniquely identifies resources. We will need to grab the ARN of the IdP we just created in Account B. It will be in this format:

arn:aws:iam::AWS_ACCOUNT_ID_B:oidc-provider/EKS_OIDC_PROVIDER

This value is used as the principal in the IAM role’s trust policy. To further restrict the IAM role to only the Node.js pod on the EKS cluster, we will create an IAM condition on the Kubernetes namespace that contains our hosted Node.js application and its associated Kubernetes service account.

The value of the EKS_OIDC_PROVIDER variable below is the EKS OIDC URL from earlier, but with the https:// omitted, e.g., oidc.eks.region.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D30413.

For this tutorial, we will deploy the Node.js application to the nodejs namespace on our EKS cluster and name the Kubernetes service account nodejs-sa.
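Here is a sketch of that role in Terraform. The role name nodejs-app is our choice for this tutorial, and the hypothetical variable eks_oidc_provider holds the issuer URL with the https:// prefix omitted, as described above:

resource "aws_iam_role" "nodejs_app" {
  name = "nodejs-app"

  # Trust policy: only the nodejs-sa service account in the nodejs
  # namespace may assume this role via the OIDC IdP created above.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.eks.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "${var.eks_oidc_provider}:sub" = "system:serviceaccount:nodejs:nodejs-sa"
        }
      }
    }]
  })
}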

We must now create an IAM policy that specifies the permissions that we would like the containers in our pod to have. For the IAM role we create for our Kubernetes service account, this is the policy we will use to access S3 Bucket resources in Account B:
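A sketch of the bucket and policy in Terraform, assuming the bucket is managed in the same configuration (the bucket name nodejs-app-uploads is hypothetical):

resource "aws_s3_bucket" "nodejs_app" {
  bucket = "nodejs-app-uploads" # hypothetical name; bucket names are globally unique
}

resource "aws_iam_policy" "nodejs_app_s3" {
  name = "nodejs-app-s3"

  # Allow fetching and uploading objects in the Account B bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      Resource = "${aws_s3_bucket.nodejs_app.arn}/*"
    }]
  })
}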

NOTE: We must create an IAM role for our Kubernetes service account to use before we associate it with the service account. The trust relationship is scoped to our cluster and service account so that each cluster and service account combination requires its own role.
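Finally, we attach the policy to the role (resource names as assumed above):

resource "aws_iam_role_policy_attachment" "nodejs_app_s3" {
  role       = aws_iam_role.nodejs_app.name
  policy_arn = aws_iam_policy.nodejs_app_s3.arn
}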

Writing the Node.js Application

The sample Node.js app will have two REST endpoints that return S3 presigned URLs so that we can upload and retrieve files stored in the S3 Bucket in Account B. The snippet below is boilerplate code to generate a presigned URL with getObject permissions. The upload (“put”) presigned URL function is almost identical to the fetch (“get”) function, except that it passes putObject as the first parameter instead of getObject.

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

// creates a presigned URL with getObject permissions for the given
// params ({ Bucket, Key, Expires }) and wraps it in a response object
function getSignedUrlResponse(params) {
  const url = s3.getSignedUrl('getObject', params)

  return {
    statusCode: 200,
    body: JSON.stringify({ url }),
  }
}
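And for completeness, a sketch of the upload variant under the same assumptions:

function putSignedUrlResponse(params) {
  // identical to the fetch function, but with putObject as the first argument
  const url = s3.getSignedUrl('putObject', params)

  return {
    statusCode: 200,
    body: JSON.stringify({ url }),
  }
}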

When the application is deployed on the EKS cluster, we will be able to pass an S3 object key to the endpoint of our hosted application:
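For example, assuming the fetch route is named signed-url-get and the app is reachable at a hypothetical hostname:

curl "https://nodejs.example.com/signed-url-get?key=photo.jpg"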

And with the URL returned from the response, we can then download that file directly from the S3 Bucket on the client-side.

Instead of having the application contain logic to upload and process file chunks, the presigned URL returned to the client via the signed-url-put endpoint can be used to upload to the AWS S3 service directly.
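For example, with curl (PRESIGNED_PUT_URL standing in for the URL returned by that endpoint):

curl --upload-file ./photo.jpg "$PRESIGNED_PUT_URL"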

Adding Annotations to the Service Account

In the Helm Chart that we use to deploy our Node.js application to the EKS cluster, the section we’ll primarily focus on is the annotations passed to the Kubernetes service account, particularly eks.amazonaws.com/role-arn. The Amazon EKS Pod Identity Webhook on the cluster watches for pods that are associated with service accounts with this annotation and applies the following environment variables to them.
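The two key variables are (the role ARN shown is the role we created in Account B):

AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID_B:role/nodejs-app
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

The webhook also mounts a projected service account token into the pod at that file path.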

The AWS SDK or AWS CLI in the container uses the token file to assume the cross-account IAM role. The latest versions of the AWS SDK contain a new credential provider that calls sts:AssumeRoleWithWebIdentity, exchanging the Kubernetes-issued OIDC token for AWS role credentials.

Our Helm Chart will pass the Kubernetes service account annotations via .Values.serviceAccount.annotations.
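A sketch of the relevant section of values.yaml, assuming the chart also exposes serviceAccount.create and serviceAccount.name:

serviceAccount:
  create: true
  name: nodejs-sa
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID_B:role/nodejs-app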

Once generated, the Kubernetes service account template deployed to our cluster will look something like this:
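apiVersion: v1
kind: ServiceAccount
metadata:
  name: nodejs-sa
  namespace: nodejs
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID_B:role/nodejs-app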

And that’s it! We can deploy the Helm Chart to our EKS cluster and the application should be able to interact with the S3 Bucket in Account B. With IRSA, you can also use chained AssumeRole operations. Although this approach does not require creating an IdP in another AWS account, there is more work inside the container for it to assume a cross-account IAM role.

Known Limitations: In order for this solution to work, the containers in your pods must use an AWS SDK version that supports assuming an IAM role via an OIDC web identity token file. Moreover, the IAM roles for service accounts feature is available only on new Amazon EKS Kubernetes version 1.14 clusters, and clusters that were updated to versions 1.14 or 1.13 on or after September 3rd, 2019.
