Using AWS KMS for application secrets in Kubernetes

Michael Treacher
4 min read · Sep 17, 2017


This article makes a few assumptions about the reader:

  • You’re running a Kubernetes cluster on AWS.
  • You’ve used the AWS CLI before.
  • You’re familiar with building and deploying Docker images.
  • You’ve run a CloudFormation template before.

In Kubernetes there are a few different ways to store application secrets. One of the most common is to use Kubernetes Secret objects, but I have a few concerns with that approach:

  • It’s very easy to see all the secrets in plain text when you have access to the cluster through the dashboard.
  • Secrets are not currently encrypted at rest by default. (Kubernetes 1.7 supports encryption at rest, but the feature is alpha and takes some work to set up.)
  • There are complications around having to encode your secrets as base64 values without newlines.
  • Coupling application secrets to Kubernetes makes it harder to install applications on other clusters.
  • If you’re already using KMS to encrypt your application secrets, migrating to Kubernetes is easier when you don’t have to switch to Kubernetes Secrets.

An option which I’ve been using recently is encrypting secrets with the AWS KMS service. Some of the tools used to get this working on Kubernetes include:

  • kube2iam (tool for allowing pods to assume IAM Roles)
  • shush (tool which makes encrypting and decrypting KMS values easy)

We have a common scenario: a list of environment variables that we’d like to encrypt and have the application decrypt securely at runtime. Here is our list of environment variables:
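The original variable list isn’t reproduced here; as an illustration, assume a plain KEY=VALUE file called secrets.txt with placeholder names and values like these:

```
# secrets.txt: placeholder names and values for illustration only
DATABASE_PASSWORD=super-secret-password
API_TOKEN=abc123def456
```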

Before we can encrypt these values we need to run some CloudFormation to create our KMS key and set up appropriate access to it:
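The template itself isn’t reproduced here, so the following is a minimal sketch of what it might contain: a KMS key, the alias/kube-kms-example alias used later in the article, and a role under the /k8s/ path that is allowed to decrypt with the key. The resource and parameter names are placeholders of my own, not the originals.

```yaml
# Illustrative CloudFormation sketch: KMS key, alias, and a decrypting role under /k8s/.
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  NodeRoleArn:
    Type: String
    Description: ARN of the Kubernetes nodes' IAM role, which may assume the pod role
Resources:
  SecretsKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key for encrypting application secrets
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowAccountAdministration
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: kms:*
            Resource: '*'
  SecretsKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/kube-kms-example
      TargetKeyId: !Ref SecretsKey
  PodRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /k8s/
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Ref NodeRoleArn
            Action: sts:AssumeRole
      Policies:
        - PolicyName: decrypt-app-secrets
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: kms:Decrypt
                Resource: !GetAtt SecretsKey.Arn
```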

Next we’ll need to give our Kubernetes nodes the ability to assume roles. This can be achieved by adding the following statement to our nodes’ IAM role (here we’re limiting the scope of the roles the cluster can assume with the /k8s/ path):
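As a sketch, the statement would look something like this (the wildcard account ID is illustrative; you would normally scope the Resource to your own account):

```json
{
  "Effect": "Allow",
  "Action": "sts:AssumeRole",
  "Resource": "arn:aws:iam::*:role/k8s/*"
}
```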

Now that we’ve got our key set up and our nodes can assume roles, we need to encrypt our secrets. This requires logging in to AWS on the command line as a user with permission to perform KMS encryption. I wrote the following script to make it a bit easier to encrypt multiple KMS values using shush:
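The original script isn’t shown here; a minimal sketch that matches how it’s invoked below might look like this, assuming a KEY=VALUE input file and an output ConfigMap named app-secrets (both of those details are my assumptions):

```bash
#!/usr/bin/env bash
# feed-secrets-to-shush.sh (illustrative sketch): reads KEY=VALUE pairs from a
# file, encrypts each value with shush, and emits a ConfigMap whose keys are
# prefixed with KMS_ENCRYPTED_.
set -euo pipefail

SECRETS_FILE="$1"   # e.g. secrets.txt
KEY_ALIAS="$2"      # e.g. alias/kube-kms-example

cat <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-secrets
data:
EOF

while IFS='=' read -r name value; do
  # Skip blank lines and comments.
  [[ -z "$name" || "$name" == \#* ]] && continue
  ciphertext="$(shush encrypt "$KEY_ALIAS" "$value")"
  echo "  KMS_ENCRYPTED_${name}: \"${ciphertext}\""
done < "$SECRETS_FILE"
```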

We can then run ./feed-secrets-to-shush.sh secrets.txt alias/kube-kms-example > encrypted-secrets.yaml, which will create the following file:
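The generated file isn’t reproduced here, but based on the script sketch above it would look roughly like this (the ciphertexts are placeholders, not real KMS output):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-secrets
data:
  KMS_ENCRYPTED_DATABASE_PASSWORD: "AQICAHj...EXAMPLEBASE64CIPHERTEXT...=="
  KMS_ENCRYPTED_API_TOKEN: "AQICAHj...EXAMPLEBASE64CIPHERTEXT...=="
```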

The environment variable names have been prefixed with KMS_ENCRYPTED_. The reason for this is that we’ll be using shush exec as the entry point for our container, which does the following:

  • Gathers the environment variables prepended with KMS_ENCRYPTED_.
  • Decrypts the values of the variables.
  • Injects the variables into the process running in the container, with KMS_ENCRYPTED_ stripped from the name, e.g. KMS_ENCRYPTED_FOO -> FOO.
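You can see this behaviour locally, assuming you have AWS credentials with kms:Decrypt and a real ciphertext (the values below are placeholders carried over from the illustration above):

```
$ export KMS_ENCRYPTED_DATABASE_PASSWORD="AQICAHj...=="
$ shush exec -- env | grep DATABASE_PASSWORD
DATABASE_PASSWORD=super-secret-password
```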

One of the benefits of this approach is that the decrypted secrets are held in memory rather than as environment variables on the container, which makes it difficult to retrieve the values when you’re on the host machine.

Next we’ll create a basic Docker image that will print out the secrets so that we can test whether or not this strategy has worked:
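The original Dockerfile isn’t shown here; a minimal sketch might look like the following. The shush release URL and version are assumptions on my part, so fetch the binary however suits your build; the application here is just env, which prints the environment so we can verify decryption.

```dockerfile
# Illustrative Dockerfile sketch: installs shush and uses `shush exec` as the
# entrypoint so that KMS_ENCRYPTED_* variables are decrypted before the command runs.
FROM alpine:3.6

# ca-certificates is needed so shush can talk to KMS over HTTPS.
# The release URL/version below is an assumption; adjust as needed.
RUN apk add --no-cache ca-certificates \
 && wget -O /usr/local/bin/shush \
      https://github.com/realestate-com-au/shush/releases/download/v1.3.0/shush_linux_amd64 \
 && chmod +x /usr/local/bin/shush

ENTRYPOINT ["/usr/local/bin/shush", "exec", "--"]

# Print the environment so we can check that the secrets were decrypted.
CMD ["env"]
```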

To allow our pods to assume the role we defined earlier, we’ll need to install kube2iam into our cluster. The following configuration worked on my baseline kops cluster, but if you’re running a CNI other than kubenet you’ll need to change the host-interface setting; details can be found in the kube2iam documentation.
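The original manifest isn’t reproduced here; a minimal DaemonSet sketch might look like this (RBAC omitted, image tag left as latest for illustration):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
  labels:
    app: kube2iam
spec:
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      hostNetwork: true
      containers:
        - name: kube2iam
          image: jtblin/kube2iam:latest
          args:
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            # kubenet matches a default kops cluster; check the kube2iam docs
            # for the right value for your CNI (e.g. cali+ for Calico).
            - "--host-interface=kubenet"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true
```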

We can deploy the above by running kubectl apply -f kube2iam.yaml.

Next we’ll deploy our application to the Kubernetes cluster using the configuration below, by running kubectl apply -f deployment.yaml.
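The original deployment.yaml isn’t reproduced here; a sketch might look like the following, assuming the ConfigMap generated earlier (encrypted-secrets.yaml) has also been applied, and with the image name as a placeholder:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kms-secrets-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kms-secrets-example
      annotations:
        # kube2iam will serve credentials for this role to the pod.
        iam.amazonaws.com/role: "arn:aws:iam::{AccountId}:role/k8s/{RoleName}"
    spec:
      containers:
        - name: app
          image: "<your-registry>/kms-secrets-example:latest"
          envFrom:
            # Inject the KMS_ENCRYPTED_* values from the ConfigMap.
            - configMapRef:
                name: app-secrets
```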

Here we’re storing our encrypted secrets in a ConfigMap and we’re including the ConfigMap as environment variables for our Deployment spec.

The iam.amazonaws.com/role annotation lets the kube2iam proxy know that when this pod calls the AWS metadata endpoint to retrieve credentials, the role we want credentials for is arn:aws:iam::{AccountId}:role/k8s/{RoleName}. Please note that the AccountId and RoleName have been left out for privacy reasons.

When we look at the logs of our application running in our cluster we can see the following:
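The original log output isn’t reproduced here; with the placeholder secrets used in the sketches above, and trimmed to the relevant lines, it would look roughly like this:

```
$ kubectl logs <pod-name>
...
DATABASE_PASSWORD=super-secret-password
API_TOKEN=abc123def456
...
```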

Success! We’ve proved that our encrypted secrets are decrypted within the container process. I’ve made all of the code required to get the above working available on GitHub. Thanks for reading, and I hope this helps.



Michael Treacher

DevOps Engineer with a current focus on Kubernetes and Golang