Application Secrets Encryption in Kubernetes and Anthos Products

Sergey Shcherbakov
Google Cloud - Community

--

Here is Yet Another Summary of the tips and tricks for keeping application secret values encrypted at rest in Kubernetes applications. This time covering Google Anthos product flavors as well.

Definitions and Introduction

Application Secret is a piece of sensitive information, such as passwords, API keys, or other credentials, that an application needs to access resources or services. The secrets need to be stored securely.

Let’s define an application secret as anything about an application that its developers want to keep secret: passwords, private TLS certificate keys, credentials for accessing other services, and anything else that only the application process must be able to read at runtime.

Secrets in Kubernetes Applications

In Kubernetes, all resource definitions and metadata are stored in Etcd, a distributed key-value database in the control plane. This includes the contents of Kubernetes Secrets, ConfigMaps, and virtually all other Kubernetes resources defined in the cluster.

Kubernetes applications typically access application secrets through Kubernetes Secret resources. These get either mounted into the application pod’s file system or exposed as environment variables.
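For example, a pod might consume the same Kubernetes Secret both ways (all names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    env:
    - name: DB_PASSWORD              # secret exposed as an environment variable
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - mountPath: /etc/secrets        # secret mounted into the file system
      name: db-secret
      readOnly: true
  volumes:
  - name: db-secret
    secret:
      secretName: db-credentials
```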

Expectations

What would an organization with high security requirements expect from application secrets in Kubernetes?

  • Resistance to physical access to the storage medium — the data needs to be stored encrypted at rest so that nobody who gains physical access to the storage device can retrieve the information
  • Encryption with custom keys — it should be possible to provide and manage your own generated encryption keys
  • Encryption enforcement — secrets encryption should be enforceable so that application developers accessing the cluster cannot switch the protection off
  • Seamless encryption — encryption should not put additional burden and requirements on the applications running on the cluster
  • Multi-tenant access segregation — one application should not be able to access the secrets of another application running on the same cluster
  • Convenient access to secret values — when needed, authorized administrators should be able to conveniently access and inspect the contents of the secrets
  • Key rotation — last but often not least, standard security recommendations prescribe regular rotation (that is, replacement) of the encryption keys. Automated rotation also helps ensure that, in case of a credential leak, processes are in place to rotate the credential without risking production downtime

Default Encryption

How does Kubernetes, and Anthos products in particular, encrypt secrets?

Default data encryption in Anthos products

In GKE on Google Cloud, everything persisted is encrypted automatically and by default by the Google infrastructure. I recommend reviewing the Google data-at-rest encryption whitepaper for the details of that promise.

Anthos is a container platform for running modern apps anywhere, consistently and at scale.

An Anthos on bare metal cluster is configured by default to encrypt Etcd database contents using the standard Kubernetes encryption.

In Anthos on VMware there are several other options available.

First, VMware vSphere can be configured to encrypt virtual machines with vSphere VM encryption altogether.

“Always-on secrets encryption” automatically generates keys that are used to encrypt application secrets before they are persisted in the Etcd database. The encryption keys are stored on the admin cluster data disk in the case of Anthos control plane version 1, and on the user cluster control plane node in the case of control plane version 2.

There is also an option of encrypting the Etcd database using “HSM-based secrets encryption” backed by an on-prem hardware HSM appliance. One HSM vendor is supported, and the feature is in public preview at the moment.

In the next sections we will go through other options that can help meet higher security expectations.

What if we would like to:

  • Use our own generated encryption keys and be able to revoke access to the secrets whenever we’d like
  • Require teams to encrypt their application secrets without adding extra headache for them
  • Allow application team admins to access their secrets, but only their secrets, and not those of a neighboring application?

Open Source Tools

Here are a couple of Open Source Tools that can help with those questions.

External Secrets

There is a rather well known open source project External Secrets Operator.

It is a Kubernetes operator that can synchronize secret values from different sources, including the Google Cloud Secret Manager service, into Kubernetes Secret objects. These can then be consumed by applications as usual.
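For illustration, a minimal setup could look like the following sketch (the store, project, and secret names are hypothetical):

```yaml
# A SecretStore pointing at Google Secret Manager.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-store
spec:
  provider:
    gcpsm:
      projectID: my-project
---
# An ExternalSecret that syncs one GSM secret into a Kubernetes Secret.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-store
    kind: SecretStore
  target:
    name: db-credentials        # resulting Kubernetes Secret name
  data:
  - secretKey: password         # key inside the Kubernetes Secret
    remoteRef:
      key: db-password          # secret name in Google Secret Manager
```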

This approach provides some benefits, but the secret values eventually land in the Etcd database of the cluster as well, leaving several questions open.

External Secrets is an open source tool that is not covered by Google support or SLA.

External Secrets Operator

Pros:

  • Kubernetes native widely used tool
  • Supports Workload Identity

Cons:

  • Secrets are replicated to Kubernetes Secrets and land in the cluster Etcd database
  • Doesn’t provide fine-grained Audit Access Logs
  • Open source driver and controller that needs to be installed and maintained
  • No Google Cloud support and no SLA

Berglas

Another well-known open source tool that can help keep application secrets secret is Berglas. It was written by a Google engineer and utilizes Google Cloud services such as Cloud KMS, Cloud Storage and, optionally, Google Secret Manager to store the actual application secret content.

In this case secrets are stored securely in Google Cloud and do not leave a footprint in the Kubernetes cluster.

The Cloud KMS service stores the encryption keys and provides a rich set of features, such as bring-your-own custom keys and automatic key rotation.

An interesting feature of Berglas is that it can organize application secrets into “folders”, which makes it possible to set up fine-grained access permissions.
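As a sketch of the workflow (the bucket, key, project, and secret names below are hypothetical), a secret can be created and read with the berglas CLI:

```shell
# Bootstrap a Cloud Storage bucket and KMS key for Berglas (one-time setup).
berglas bootstrap --project my-project --bucket my-secrets-bucket

# Create a secret in a "folder", encrypted with a Cloud KMS key.
berglas create my-secrets-bucket/team-a/db-password "s3cr3t" \
  --key projects/my-project/locations/global/keyRings/berglas/cryptoKeys/berglas-key

# Grant a service account access to this secret only.
berglas grant my-secrets-bucket/team-a/db-password \
  --member serviceAccount:app@my-project.iam.gserviceaccount.com

# Read the secret back.
berglas access my-secrets-bucket/team-a/db-password

# In Kubernetes, the secret can then be referenced in an environment
# variable as, e.g., berglas://my-secrets-bucket/team-a/db-password
```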

On the negative side: no Google support or SLA, and no way to enforce that application teams actually use it.

https://insights.project-a.com/a-painless-way-to-manage-secrets-in-google-kubernetes-engine/

Pros:

  • Integrates with Google services (Google Cloud Storage, Google Secret Manager)
  • Integrates with GKE and Anthos
  • Advanced IAM management with “secret folders”

Cons:

  • Advanced cluster setup
  • No Google support and no SLA

Google Secret Manager

Google Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud.

It is a convenient storage for all kinds of application credentials, with access via the Google API and gcloud, Terraform support, and a rich UI.

It allows setting up fine-grained access permissions on secrets, and it is fully integrated into the Google Cloud ecosystem, including Cloud Logging and Monitoring and customer-managed key encryption with Cloud KMS.

There are several ways to utilize Secret Manager for Kubernetes secrets encryption.

Direct API Call

The first and straightforward way to use Secret Manager is by accessing its Google API from the Kubernetes application directly.

The Google Cloud SDK library is a convenient means to implement Secret Manager access in the application.
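For instance, a workload with a service account identity can read a secret version with a direct REST call; a sketch, assuming a GKE pod with Workload Identity or a node service account (project and secret names are placeholders):

```shell
# Obtain an access token from the metadata server.
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  | sed -n 's/.*"access_token": *"\([^"]*\)".*/\1/p')

# Access the latest version of the secret; the payload comes back base64-encoded.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://secretmanager.googleapis.com/v1/projects/my-project/secrets/db-password/versions/latest:access" \
  | sed -n 's/.*"data": *"\([^"]*\)".*/\1/p' | base64 -d
```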

With this approach

  • Secrets remain encrypted in the Google Cloud at all times and only show up in the application process memory when they are needed
  • Fine grained access permissions and audit logs are provided by the Secret Manager

On the negative side is the need to modify the application to make use of this approach.

It doesn’t cover overall Kubernetes cluster data encryption.

Direct API Call to Google Secret Manager

Pros:

  • Secrets remain in the pod memory. Calls to the GSM API are HTTPS-protected
  • Supports Workload Identity for fine-grained access and audit logs

Cons:

  • Hard to adopt for existing applications
  • More code to maintain

Kubernetes Secret Store CSI Driver

There is a Kubernetes CSI driver for Google Secret Manager.

Using this Open Source mechanism, Google Secret Manager (GSM) secrets can be mounted into the application pod’s file system directly.

That is a more portable approach that does not require application modification.

You can still have secret access isolation and fine grained access and audit logs for your secrets.
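A minimal setup could look like the following sketch (the project, secret, service account, and class names are hypothetical):

```yaml
# SecretProviderClass telling the GCP provider which secret to mount.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: "projects/my-project/secrets/db-password/versions/latest"
        path: "db-password.txt"
---
# Pod mounting the secret through the CSI driver.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  serviceAccountName: app-ksa    # bound to a GCP service account via Workload Identity
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /var/secrets
      name: secrets
      readOnly: true
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-secrets
```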

There is no Google support and no SLA for this open source component, but the project is actively monitored by Google engineers.

Kubernetes Secret Store CSI Driver

The GSM CSI driver can also synchronize Secret Manager secret values from Google Cloud into Kubernetes Secret objects inside the cluster. This is similar to the External Secrets Operator discussed above and relies upon the driver’s “Sync as Kubernetes Secret” feature. It is also possible to update the value of a secret mounted by the CSI driver when there is a corresponding update in Google Secret Manager; that capability relies upon the driver’s “Secret Auto Rotation” feature, which is still in Alpha.

Pros:

  • Kubernetes native way of fetching secrets
  • Abstracting secret store from the app makes applications more portable
  • Separation of duties possible
  • Workload Identity for fine grained access and audit logs

Cons:

  • Open source driver and controller that needs to be installed and maintained
  • No Google support and no SLA
  • Increased attack surface, e.g. the risk of directory traversal attacks

Init-Container

Another trick to get Secret Manager secrets into Kubernetes application pods is to fetch them during pod start-up, in an init container, via a call to the Secret Manager API.

This can be done, for example, by scripting a standard gcloud call in a Cloud SDK container image and saving the secret value to the pod’s ephemeral file system. Please check this project for an implementation example.
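A sketch of this pattern (the image, secret, project, and service account names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init-secret
spec:
  serviceAccountName: app-ksa           # Workload Identity-enabled service account
  initContainers:
  - name: fetch-secret
    image: google/cloud-sdk:slim
    command: ["sh", "-c"]
    args:
    - gcloud secrets versions access latest
      --secret=db-password --project=my-project
      > /secrets/db-password
    volumeMounts:
    - mountPath: /secrets
      name: secrets
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /secrets
      name: secrets
      readOnly: true
  volumes:
  - name: secrets
    emptyDir:
      medium: Memory                    # keep the secret off the node disk
```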

This approach is rather straightforward, and enjoys other benefits that Secret Manager offers.

The drawback is that

  • secret values stay visible in the pod’s file system (directory traversal risk)
  • another container needs to be maintained
  • there may be difficulties in service mesh environments
  • an additional mechanism is needed to ensure that the application gets updated secret values at runtime

Init-Container

Pros:

  • Straightforward implementation
  • Supports Workload identity

Cons:

  • Needs to be added into each team manifest
  • Another container to scan and secure
  • Secrets can be stored in the file system
  • Secrets are cached and it’s hard to control their lifetime
  • Could be incompatible with Anthos Service Mesh

Kubernetes KMS Plugin

Let’s now check the standard mechanism that Kubernetes provides for customizing the encryption of the Etcd database content.

KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS plugin. The KMS plugin, which is implemented as a gRPC server and deployed on the same host(s) as the Kubernetes control plane, is responsible for all communication with the remote KMS.

Kubernetes KMS Plugin

The Kubernetes KMS encryption provider uses a so-called envelope encryption mechanism to encrypt data in Etcd.

That is, two keys participate in the actual data encryption. The key encryption key (KEK) is always stored in a remote KMS provider and never leaves it. The data encryption key (DEK) performs the actual data encryption and is kept encrypted by the remote key when stored locally together with the data that it encrypts.
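To make the scheme concrete, here is a local sketch of envelope encryption using plain openssl commands (all file names are illustrative; in Kubernetes the KEK lives in the remote KMS, and only the encrypted DEK is stored alongside the data in Etcd):

```shell
# Simulate the KEK; in a real cluster this key stays in the remote KMS.
openssl rand -hex 32 > kek.hex

# Generate a fresh DEK for the data.
openssl rand -hex 32 > dek.hex

# Encrypt the secret data with the DEK.
echo "top-secret-value" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.hex -in secret.txt -out secret.enc

# Encrypt the DEK with the KEK; only the encrypted DEK is stored
# locally, next to the data it protects.
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek.hex -in dek.hex -out dek.enc
rm dek.hex secret.txt   # only ciphertext remains at rest

# To read the data back: first recover the DEK using the KEK, then
# decrypt the data with the DEK.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek.hex -in dek.enc -out dek.hex
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.hex -in secret.enc   # → top-secret-value
```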

Kubernetes KMS plugin is an additional control plane component that kube-apiserver contacts to decrypt the DEK.

Whenever the kube-apiserver stores secrets, or any other resources, to the Etcd, it first encrypts them with the data encryption key DEK.

And to decrypt the DEK, the kube-apiserver reaches out to the KMS plugin.

The KMS plugin can then call any remote KMS provider to perform decryption or encryption.

The plugin is typically a container image that can either

  • be started as a pod on the control plane node
  • or even be added as a sidecar to the kube-apiserver

HSM-based Secrets Encryption in Anthos on VMware

Anthos on VMware clusters can be configured with HSM-based secrets encryption (this feature is still in preview).

In that case, the cluster Etcd data is effectively encrypted by keys stored in an on-prem hardware HSM appliance.

The HSM vendor delivers this functionality via the Kubernetes KMS plugin mechanism as well.

A significant advantage of using the KMS plugin is that the data in Etcd is always encrypted, and there is no way to bypass that without reconfiguring the entire cluster.

HSM keys also provide a high degree of security and compliance and remain in full customer control.

On the downside, an HSM appliance is an expensive system to maintain.

Also, when encrypted with a KMS plugin, the Etcd data of all applications is encrypted with the same key.

Pros:

  • Mandatory encryption of all cluster secrets at rest
  • Customer Managed hardware-backed encryption keys

Cons:

  • Requires on-prem HSM hardware appliance
  • Not possible to separate tenant access permissions
  • Only available in Anthos on VMware

Kubernetes KMS Plugin for Cloud KMS

The Google Cloud KMS service can be an encryption backend for Kubernetes Etcd data.

There is even a KMS plugin for Google Cloud KMS service.

It is an open source component that can be used in on-prem Anthos clusters to encrypt Etcd data on the control plane nodes with the key encryption keys stored in the Cloud.

We get the same advantages of mandatory secrets encryption and full customer control over the actual encryption keys.

And the same disadvantage of missing isolation by application.

With a reliable network connection to the Google Cloud such a combination can provide similar or sometimes even better reliability guarantees than on-prem HSM systems.

It should also be mentioned that this open source component is not covered by Google support and SLA.

Kubernetes KMS Plugin for Cloud KMS

Pros:

  • Mandatory encryption of all cluster secrets at rest
  • Customer Managed Encryption Keys

Cons:

  • Not possible to separate tenant access permissions
  • Advanced cluster setup
  • No Google support and no SLA

Here is an example of a configuration that turns on Cloud KMS encryption of an on-prem cluster’s Etcd database.

# Copyright 2023 Google LLC.
# SPDX-License-Identifier: Apache-2.0

sudo mkdir -p /etc/kubernetes/user_patches
sudo bash
cat <<EOF > /etc/kubernetes/user_patches/kube-apiserveruser500+strategic.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kube-apiserver
    volumeMounts:
    - mountPath: /var/run/kmsplugin
      name: socket-volume
  volumes:
  - hostPath:
      path: /var/kms-plugin
      type: DirectoryOrCreate
    name: socket-volume
EOF
exit
# Copyright 2023 Google LLC.
# SPDX-License-Identifier: Apache-2.0

sudo vi /etc/kubernetes/pki/encryption_config.yaml

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  - configmaps
  providers:
  - kms:
      name: cloudKmsPlugin
      endpoint: unix:///var/run/kmsplugin/socket.sock
      cachesize: 100
  - aescbc:
      keys:
      - name: "etcd_key"
        secret: "07kZOy4Y5cm6/EYm4aFayNvMDvIWvXRrThO58t/Gs+E="
  - identity: {}

We basically need to

  • Deploy the plugin to the control plane nodes
  • Let the kube-apiserver know about the plugin
  • Turn it on in the Kubernetes encryption configuration
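Once configured, one can verify that new secrets are actually stored encrypted; a sketch, assuming typical kubeadm-style certificate paths on a control plane node (paths may differ in your cluster):

```shell
# Create a test secret through the API server.
kubectl create secret generic test-secret -n default --from-literal=key=value

# Read the raw record from Etcd on a control plane node.
sudo ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/etcd/server.crt \
  --key    /etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/test-secret | hexdump -C | head

# With the KMS provider active, the stored value starts with the
# "k8s:enc:kms:" prefix instead of the plain-text secret.
```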

Kubernetes KMS Plugin + Secret Manager

Application secrets encryption is not a single-choice decision.

The KMS plugin and other application secret encryption mechanisms can be combined.

Together they can help reach our original goals:

  • Mandatory encryption of all secrets in the cluster.
  • Fine grained access control and isolation of secrets of different applications
  • All that while keeping full customer control over the encryption keys that protect the data

Kubernetes KMS Plugin + Secret Manager

Pros:

  • Mandatory encryption of all cluster secrets at rest
  • Customer Managed Encryption Keys
  • Workload Identity for fine-grained access and audit logs

Cons:

  • Advanced cluster setup

Summary

Let’s now summarize our overview.

Application Secrets Encryption Options Summary

We went through the default application secrets encryption mechanisms available in GKE and Anthos Kubernetes clusters out of the box.

We covered two common open source tools that help keep application secrets outside of the Kubernetes clusters.

Discussed three ways of accessing application secrets in Google Secret Manager directly from the applications.

We have also explored the Kubernetes KMS provider and plugin mechanism for the cluster data encryption in the Kubernetes control plane.

And now to the takeaways:

Google Anthos products can support the security requirements of advanced customers.

Google Cloud offers services that help customers harden Kubernetes security and simplify reaching compliance goals.

Sergey Shcherbakov
Google Cloud - Community

Strategic Cloud Engineer at Google. Consulting software and cloud architectures for over 10 years. "Silicon Valley" comedy series fan.