Automated Sync Between GCP Secrets & GKE Workloads

Prajakta Shete
Google Cloud - Community
Dec 9, 2022 · 10 min read

In this article we are going to integrate GCP Secret Manager and GKE (K8s) Secrets using External Secrets Operator, an open-source Kubernetes operator.
We are also going to cover a very common requirement you might run into at least once while working with Kubernetes and its Secrets: whenever we update a Secret that has been passed to a pod as an environment variable, we need to restart the pods to pick up the change. We will address this issue by using a controller called Reloader.

Execution steps:

Requirement 1: Integrate GCP Secret Manager with K8s Secrets

1. 1. Create a new private GKE cluster
1. 2. Create a secret in GCP Secret Manager
1. 3. Enable Workload Identity on the GKE cluster
1. 4. Install External Secrets Operator using Helm
1. 5. Deploy manifest files
1. 6. Verification

Requirement 2: Rolling upgrade on secret update

2. 1. Stakater Reloader installation
2. 2. Update manifest files to use the Stakater Reloader annotation
2. 3. Verification

Let’s try to understand the two requirements in detail now.

Requirement 1: Integrate GCP Secret Manager with K8s Secrets

We need the secrets stored in GCP Secret Manager to be usable inside the GKE pods. To complete this requirement we'll use the External Secrets Operator. So what is it? How do we implement it? Check it out below!

External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.

ESO is a collection of custom API resources (ExternalSecret, SecretStore and ClusterSecretStore) that provide a user-friendly abstraction over the external API and manage the lifecycle of the secrets for you.

Now that we have understood our first requirement, let's get it done.
We'll start with the prerequisites we need for this:

1. 1. Create a new private GKE cluster
Use the gcloud commands below to create the cluster and get its credentials.

$ gcloud container clusters create dev-private-cluster-01 \
--network <vpc-name> \
--subnetwork <subnet-name> \
--cluster-secondary-range-name <cluster-secondary-range-name> \
--services-secondary-range-name <services-secondary-range-name> \
--enable-private-nodes \
--enable-ip-alias \
--master-ipv4-cidr 172.16.0.16/28 \
--enable-master-global-access \
--zone us-central1-b \
--num-nodes=1 \
--project=<PROJECT_ID> \
--no-enable-master-authorized-networks \
--workload-pool=<PROJECT_ID>.svc.id.goog

The --workload-pool flag ensures Workload Identity is enabled on the GKE cluster.

Get credentials for your cluster:

$ gcloud container clusters get-credentials CLUSTER_NAME

Replace CLUSTER_NAME with the name of your cluster that has Workload Identity enabled.
Make sure you have Cloud NAT configured, which provides outbound connectivity for the private nodes.
For more details, check out the Cloud NAT overview:
https://cloud.google.com/nat/docs/overview

1. 2. Create a secret in GCP Secret Manager
From the console, create the secret that our pods running inside GKE need to access:

Click on the CREATE SECRET button.
Fill in the details of the secret.

Note: We'll verify the value of the secret by getting into the GKE pods in later steps; for now we have a sample value, "sample-secret-value".
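If you prefer the CLI, the same step can be done with gcloud. A minimal sketch, assuming the secret is named gcp-secret-key-testsecret (the name our ExternalSecret references later):

$ gcloud secrets create gcp-secret-key-testsecret --replication-policy="automatic"
$ echo -n "sample-secret-value" | gcloud secrets versions add gcp-secret-key-testsecret --data-file=-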

Now that we have the cluster ready and GCP secret ready, we’ll proceed to install the ESO and have some workload deployed to access the secrets.

1. 3. Enable Workload Identity on GKE cluster

Enable and configure the Workload Identity on your Google Kubernetes Engine (GKE) cluster. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services. To learn more about how Workload Identity works, see Workload Identity.

GKE pods will need access to GCP Secret Manager, and to authenticate with this service we will use Workload Identity. Note that in our case this was already enabled by the --workload-pool flag in the initial cluster creation command; now let's configure it.

Configure Workload Identity

The following steps show you how to configure your applications to use Workload Identity if it is enabled on the cluster.

Create a namespace to use for the Kubernetes service account. You can also use the default namespace or any existing namespace.

$ kubectl create namespace NAMESPACE

Create a Kubernetes service account for your application to use. You can also use the default Kubernetes service account in the default or any existing namespace.

$ kubectl create serviceaccount KSA_NAME \
--namespace NAMESPACE

Replace the following:

KSA_NAME: the name of your new Kubernetes service account. (workload-identity-ksa)
NAMESPACE: the name of the Kubernetes namespace for the service account. (dev)

Create an IAM service account for your application or use an existing IAM service account instead. You can use any IAM service account in any project in your organization. For Config Connector, apply the IAMServiceAccount object for your selected service account.

To create a new IAM service account using the gcloud CLI, run the following command.

Note: If you’re using an existing IAM service account with the gcloud CLI, skip this step.

$ gcloud iam service-accounts create GSA_NAME \
--project=GSA_PROJECT

Replace the following:

GSA_NAME: the name of the new IAM service account.
GSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.

For information on authorizing IAM service accounts to access Google Cloud APIs, see Understanding service accounts.

Ensure that your IAM service account has the required roles. You can grant additional roles using the following command:

$ gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
--role "ROLE_NAME"

Replace the following:

PROJECT_ID: your Google Cloud project ID.
GSA_NAME: the name of your IAM service account.
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.
ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer.
Make sure the service account has roles/secretmanager.secretAccessor so it can fetch secrets from GCP Secret Manager.
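In our walkthrough that concretely means running (a sketch, reusing the placeholders above):

$ gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
--role "roles/secretmanager.secretAccessor"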

Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. This binding allows the Kubernetes service account to act as the IAM service account.

$ gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

Annotate the Kubernetes service account with the email address of the IAM service account.

$ kubectl annotate serviceaccount KSA_NAME \
--namespace NAMESPACE \
iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com

1. 4. Installation of External Secrets Operator using Helm

Below is the helm command to install the ESO.

Note: To automatically install and manage the CRDs as part of your Helm release, you must add the --set installCRDs=true flag to your Helm installation command.

$ helm repo add external-secrets https://charts.external-secrets.io
$ helm install external-secrets \
external-secrets/external-secrets \
-n external-secrets --create-namespace \
--set installCRDs=true

Now that we have Workload Identity configured and External Secrets Operator installed, let's understand where we are going to use them. We have two main components when working with the External Secrets Operator: the SecretStore and the ExternalSecret. These are Kubernetes object kinds under the apiVersion external-secrets.io/v1beta1.
We'll look into each of them and configure them as we move along.

1. 5. Deploy Manifest files

SecretStore

The idea behind the SecretStore resource is to separate the concerns of authentication/access from the actual Secret and the configuration needed by workloads. The ExternalSecret specifies what to fetch; the SecretStore specifies how to access it. This resource is namespaced.

SecretStore.yaml
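A minimal sketch of what SecretStore.yaml can look like for GCP Secret Manager with Workload Identity authentication (the store name gcp-secret-store is an assumption; the cluster, namespace, and service-account names are the ones created earlier):

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-secret-store            # assumed name, referenced by the ExternalSecret below
  namespace: dev
spec:
  provider:
    gcpsm:
      projectID: <PROJECT_ID>       # project that holds the secret
      auth:
        workloadIdentity:
          clusterLocation: us-central1-b
          clusterName: dev-private-cluster-01
          serviceAccountRef:
            name: workload-identity-ksa   # the annotated KSA from step 1.3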

Let’s deploy the secret-store with below command:

$ kubectl apply -f SecretStore.yaml

ExternalSecret

An ExternalSecret declares what data to fetch. It has a reference to a SecretStore which knows how to access that data. The controller uses that ExternalSecret as a blueprint to create secrets.

External-secrets runs within your Kubernetes cluster as a deployment resource. It utilizes CustomResourceDefinitions to configure access to secret providers through SecretStore resources and manages Kubernetes secret resources with ExternalSecret resources.

ExternalSecret.yaml
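A minimal sketch of ExternalSecret.yaml (the resource name gcp-external-secret is an assumption; the target Secret workload-secret, the data key testsecret, and the remote GCP secret gcp-secret-key-testsecret are the names used in the verification steps below):

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gcp-external-secret         # assumed name
  namespace: dev
spec:
  refreshInterval: 1m               # how often ESO re-reads the GCP secret
  secretStoreRef:
    name: gcp-secret-store
    kind: SecretStore
  target:
    name: workload-secret           # the Kubernetes Secret ESO creates
  data:
    - secretKey: testsecret         # key inside workload-secret
      remoteRef:
        key: gcp-secret-key-testsecret   # name of the secret in GCP Secret Manager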

Create the ExternalSecret with below command:

$ kubectl apply -f ExternalSecret.yaml

Once created make sure external secret is in SecretSynced and Ready state:

$ kubectl get externalsecret -n dev

Note: It is the job of the ExternalSecret to create a Kubernetes Secret named workload-secret with the key testsecret, which references the GCP secret named gcp-secret-key-testsecret.
It's time to pass the secret to our pods and verify that the value stored in GCP Secret Manager is properly reflected in the GKE pods.

Below is the sample deployment file:

deploy.yaml
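A minimal sketch of deploy.yaml (nginx image with the ESO-managed secret injected as the WORKLOAD_SA environment variable; the label app: myapp is an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx
          env:
            - name: WORKLOAD_SA
              valueFrom:
                secretKeyRef:
                  name: workload-secret   # the Secret created by the ExternalSecret
                  key: testsecret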

Create the sample deployment named myapp with the nginx image in the dev namespace by executing the command below.
Note: We are passing the secret as an environment variable named WORKLOAD_SA.

$ kubectl apply -f deploy.yaml

1. 6. Verification
Verify the secret data by executing the command below; the output should match our GCP secret value, i.e. sample-secret-value:

$ kubectl get secret workload-secret -n dev -o jsonpath='{.data.testsecret}' | base64 -d

We know that Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod.
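For reference, mounting the same Secret as a volume would look like the pod-spec fragment below (a sketch; unlike environment variables, files under the mount path are refreshed by the kubelet when the Secret changes, except for subPath mounts, as noted later):

    spec:
      containers:
        - name: myapp
          image: nginx
          volumeMounts:
            - name: secret-volume        # assumed volume name
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: secret-volume
          secret:
            secretName: workload-secret  # the ESO-managed Secret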

Here we have stored the secret value in the ENV variable WORKLOAD_SA; let's verify that our newly created pod has the GCP secret value with the commands below:

$ kubectl get pod -n dev
$ kubectl exec -i -t myapp-7d579d9894-zgwxc -n dev -- /bin/sh -c 'echo $WORKLOAD_SA'

Yeah!! We have successfully achieved our first requirement. Sounds good, right? No time to stop, buddy! Let's understand our second objective as well, and then we'll proceed to achieve that too.

Requirement 2: Rolling upgrade on secret update

At this point we have a successful integration of GCP Secret Manager with GKE (K8s) Secrets, but what if someone updates the secret in GCP Secret Manager? Will the pods pick up the change automatically? Unfortunately, NO. Since the secret is passed as an environment variable, the only way to make the change effective is to restart the pods, which doesn't sound good to a DevOps person: it is not feasible if we have hundreds of pods that need to be restarted after every secret update. There are third-party solutions for triggering restarts whenever a secret changes.

Note: A container using a Secret as a subPath volume mount does not receive automated Secret updates.

The solution we are going to use to address this issue is Reloader.

Stakater Reloader

Reloader can watch for changes in ConfigMaps and Secrets and do rolling upgrades on Pods via their associated DeploymentConfigs, Deployments, DaemonSets, StatefulSets and Rollouts.

By default, Reloader gets deployed in the default namespace and watches for changes to Secrets and ConfigMaps in all namespaces.

2. 1. Stakater Reloader installation
You can add the Stakater public chart repository to Helm and deploy Reloader using the commands below.

$ helm repo add stakater https://stakater.github.io/stakater-charts
$ helm repo update
$ helm install reloader stakater/reloader # For Helm 3, add the --generate-name flag or set the release name

To perform a rolling upgrade only when specific secrets change, use the annotation below.

2. 2. Update Manifest files to use stakater reloader annotation

For a Deployment called myapp using the secret called workload-secret that we created in the previous steps, add this annotation to the main metadata of your Deployment.

Use a comma-separated list to define multiple secrets.

kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "foo-secret,bar-secret,baz-secret"
spec:
  template:
    metadata:

Reloader will then trigger the rolling upgrade upon modification of any ConfigMap or Secret referenced like this (in our case the Secret is managed by the ExternalSecret).

refreshInterval: 1m in the ExternalSecret refreshes the secret every minute; you can set the interval as per your requirement.
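Applied to our walkthrough, the metadata of deploy_after_reloader.yaml could look like this (a sketch; only the annotation differs from the earlier deployment):

kind: Deployment
metadata:
  name: myapp
  namespace: dev
  annotations:
    # Reloader restarts this Deployment whenever workload-secret changes
    secret.reloader.stakater.com/reload: "workload-secret"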

Update the deployment and the external secret with the commands below:

$ kubectl apply -f deploy_after_reloader.yaml
$ kubectl apply -f ExternalSecret_reloader.yaml

2. 3. Verification
Our secrets are now synced from GCP to Kubernetes. Let's verify by updating the secret value in the GCP console:

Click on + NEW VERSION.
Update the secret value.

Verify that the pod has been restarted and the secret value updated with the commands below:

$ kubectl get pod -n dev
$ kubectl exec -i -t myapp-845ffcddc7-2bslz -n dev -- /bin/bash -c 'echo $WORKLOAD_SA'

Awesome!! We have successfully updated secrets inside pods that were mounted as environment variables from GCP Secret Manager, with a rolling upgrade, as per our second requirement.

Finally, we have automated the synchronisation between GCP secrets and GKE workloads.

References

GitHub

https://github.com/prajaktashete7/External-Secrets-and-Reloader

Questions?

If you have any questions, I'll be happy to read them in the comments. Follow me on Medium or LinkedIn.
