Secret management in Kubernetes and GCP, the journey

Pedro Brochado · Kudos Engineering · Jul 17, 2018
Photo by Cristina Gottardi on Unsplash

Managing configuration values is always an issue. The first question is “Will this value ever be different across development/live/etc.?” If so, it shouldn’t be hard-coded.

Then you face another question, “What is the best way to manage those values?”

- YAML, JSON, XML, or TOML files

- hard-coded configuration classes (remember JavaEE? 🤢)

Eventually one of these configuration values is an application secret, so you decide to use environment variables because you can set them on the server. Several tools can help with that, e.g. direnv, autoenv, dotenv…
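With direnv, for example, per-project values live in an .envrc file that is loaded whenever you enter the directory (the variable names below are purely illustrative):

```bash
# .envrc — loaded automatically by direnv when you cd into the project
# (placeholder values, not real credentials)
export API_BASE_URL="https://api.example.com"
export DATABASE_PASSWORD="super-secret"  # the kind of value that soon becomes a problem
```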

However, the plot thickens if you deploy to multiple servers. Managing environment variables becomes laborious, and forgetting to update one of them on one of the servers is only a matter of time.

And above all, what do you do when your CI/CD pipeline requires “everything” (including secrets) to be code?

The real problem:

How to deploy several microservices to a Kubernetes cluster without forgetting to update the secrets.

Solution: Kubernetes secrets

As the documentation says, the steps to add them to your service configuration are pretty easy to follow:

1. Create a secret file for your secret environment variable

2. Tell your service to use that secret

3. Finally, create it on Kubernetes with kubectl create -f ./secret.yaml (see the sketch of the first two steps after this list)
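The first two steps might look roughly like this (the names my-service-secrets, DB_PASSWORD and the image path are illustrative, not from a real deployment):

```yaml
# secret.yaml — values must be base64-encoded (echo -n 'password' | base64)
apiVersion: v1
kind: Secret
metadata:
  name: my-service-secrets
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=
```

```yaml
# deployment.yaml (excerpt) — expose the secret to the container as an env var
containers:
  - name: my-service
    image: gcr.io/my-project/my-service:latest
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-service-secrets
            key: DB_PASSWORD
```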

At this point things are great: you can check in your configuration and Kubernetes stores your secrets for you! 😎💯

Downsides:

Eventually you realise you still have configuration (the environment variables themselves) that isn’t checked in with the service configuration because… secrets… Worse than that, one of them can change, and forgetting to update the secrets on Kubernetes is still only a matter of time.

Every time a new secret is added, several steps across different phases must be executed (a sketch of steps 2–4 follows the list):

  1. Add the new secret to the code
  2. Get the current secrets
  3. Edit the file and remember to encode the secret using base64
  4. Apply the secret file
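Done by hand, steps 2–4 might look something like this (the secret name is the illustrative one from before):

```bash
# 2. Get the current secrets from the cluster
kubectl get secret my-service-secrets -o yaml > secret.yaml

# 3. Base64-encode the new value and add it under "data" in secret.yaml
echo -n 'new-secret-value' | base64

# 4. Apply the updated secret file
kubectl apply -f secret.yaml
```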

Solution: Keep everything in the code!

Encrypt the original secrets file, push it alongside the code and service configurations, and add pipeline steps that interact with Kubernetes secrets via kubectl.
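Encrypting the file with Google KMS could look something like this (the key ring and key names are placeholders):

```bash
# Encrypt the plaintext secrets file; only the .enc file is committed to the repository
gcloud kms encrypt \
  --location=global \
  --keyring=my-keyring \
  --key=secrets-key \
  --plaintext-file=secret.yaml \
  --ciphertext-file=secret.yaml.enc
```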

Later on, add a decrypt step to your CI/CD to apply the secrets on k8s. If you use Google Cloud Build (GCB) and Google KMS, it is straightforward: just two extra build steps.

Decrypt your secrets during the deployment on GCB:
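A minimal sketch of that step, assuming the key ring and key from the example above:

```yaml
# cloudbuild.yaml (excerpt) — decrypt the committed secrets file with Google KMS
steps:
  - name: gcr.io/cloud-builders/gcloud
    args:
      - kms
      - decrypt
      - --location=global
      - --keyring=my-keyring
      - --key=secrets-key
      - --ciphertext-file=secret.yaml.enc
      - --plaintext-file=secret.yaml
```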

Apply them using kubectl on GCB:
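And a sketch of the follow-up step, using the kubectl builder (the cluster name and zone are placeholders):

```yaml
# cloudbuild.yaml (excerpt, continuing the steps list) — apply the decrypted secrets
  - name: gcr.io/cloud-builders/kubectl
    args: ['apply', '-f', 'secret.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=europe-west1-b'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```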

Extra feature:

Kubernetes secrets work for files too; see Using Secrets as Files from a Pod for more detail. This can be very useful for mounting SSL certificates on a reverse proxy. Below is a simplified example of a Kubernetes configuration YAML file that creates a volume with secrets and then mounts them in the container:
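A sketch of what that manifest might look like (the container name, secret name and mount path are illustrative):

```yaml
# pod spec (excerpt) — mount TLS certificates from a secret into the proxy container
spec:
  containers:
    - name: reverse-proxy
      image: nginx
      volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs
          readOnly: true
  volumes:
    - name: tls-certs
      secret:
        secretName: proxy-tls-certs
```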

Downsides:

You need to find a way to store the key used to encrypt your secrets somewhere; in our case, using Google KMS was almost a no-brainer. Also, when using GCB as your CI/CD it means two extra build steps. Finally, the secret stays decrypted on GCB for the length of the build; clean-up steps could be introduced to shorten that window.

If you like what you read and think you could contribute to our team at Kudos then take a look at our careers page to see what roles we currently have open.
