GCP Secret Manager with self-hosted Kubernetes

Jacek J. Łakis
Apr 4, 2023


My development server

I host a simple Kubernetes distro (K3S) on a small server sitting on my shelf. Last Saturday, I started rebuilding my GitHub Actions setup for the 99th time and ran into this well-known problem: what to do with secrets that I don't want to push to git? Let's put them into a cloud-native secret management tool and then bring them to my on-shelf cluster!

I decided to try GCP Secret Manager with Workload Identity Federation. I had an opportunity to deal with a similar concept in Azure, so it seemed quite interesting!

The plan was to create secrets in Secret Manager and use a workload identity pool to trust the on-premises cluster. Then, install a workload identity federation webhook server and use External Secrets Operator to sync GCP secrets into Kubernetes namespaces.

To learn more about Workload Identity on Kubernetes, check out the documentation from Azure.

Here are the steps I took:

  • Expose OIDC configuration endpoint from the on-premises cluster
  • Create secrets and service account in GCP
  • Configure Workload Identity Pool in GCP
  • Install gcp-workload-identity-federation-webhook
  • Install ExternalSecretsOperator

Important: Do not repeat these steps in a production environment! This article is just a set of highlights from research work done on a development cluster and aims to familiarize the reader with the idea.

1. Enable the Service Account discovery endpoint on our on-premises cluster and expose it to the public

See: Configuring self-managed cluster (Azure Workload Identity) for a more production-ready configuration that covers key rotation and hosting your own OIDC configuration.

First, check whether the cluster has the OIDC configuration endpoint enabled (see: Service account issuer discovery). In addition, the jwks_uri and issuer URLs have to be accessible from the public internet.

Let’s extract client certificates and examine the current configuration:

$ export APISERVER=$(cat $KUBECONFIG | yq '.clusters[0].cluster.server')
$ cat $KUBECONFIG | yq '.clusters[0].cluster.certificate-authority-data' | base64 -d > ca.crt
$ cat $KUBECONFIG | yq '.users[0].user.client-certificate-data' | base64 -d > client.crt
$ cat $KUBECONFIG | yq '.users[0].user.client-key-data' | base64 -d > client.key
$ curl -s --cert client.crt --key client.key --cacert ca.crt -k $APISERVER/.well-known/openid-configuration | jq -r
{
  "issuer": "https://kubernetes.default.svc.cluster.local",
  "jwks_uri": "https://192.168.0.227:6443/openid/v1/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}

Looks like discovery is enabled, but the URLs are not publicly available. Let's change them to a domain that we control: g0.mycluster.eu. To do so, the --service-account-issuer and --service-account-jwks-uri API server flags need to be configured. On my K3S distro, I had to edit the K3S systemd unit file so that ExecStart looks like the following:

ExecStart=/usr/local/bin/k3s \
    server \
        '--kube-apiserver-arg' \
        '--service-account-issuer=https://g0.mycluster.eu' \
        '--kube-apiserver-arg' \
        '--service-account-jwks-uri=https://g0.mycluster.eu/openid/v1/jwks'
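
After editing the unit file, reload systemd and restart K3S so the API server picks up the new flags (this assumes the default k3s service name created by the K3S install script):

$ sudo systemctl daemon-reload
$ sudo systemctl restart k3s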

With this change, my OpenID Connect configuration looks as expected:

{
  "issuer": "https://g0.mycluster.eu",
  "jwks_uri": "https://g0.mycluster.eu/openid/v1/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}

Now, time to expose the discovery endpoints to the public. The easiest way for me was to do it through an ingress, as I already had an ingress controller, cert-manager, etc. configured on the cluster. I also didn't like the idea of exposing the Kubernetes API, so I went a bit unconventional: serve the configuration as static files via nginx after retrieving them with an initContainer. The Deployment I used looks like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
        - name: get-openid-configuration
          image: curlimages/curl
          command:
            - sh
            - -c
            - |
              mkdir -p /data/.well-known /data/openid/v1
              curl -s --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" "https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration" > /data/.well-known/openid-configuration
              curl -s --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" "https://kubernetes.default.svc.cluster.local/openid/v1/jwks" > /data/openid/v1/jwks
          volumeMounts:
            - name: openid-discovery
              mountPath: /data
      containers:
        - name: nginx
          image: nginx:1.23.4
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: openid-discovery
              mountPath: /usr/share/nginx/html
      volumes:
        - name: openid-discovery
          emptyDir: {}

See the complete manifests. The tricky part here is that I can just use the default service account token to access the OIDC endpoint of the Kubernetes API.

I can now use https://g0.mycluster.eu as the OIDC issuer URL.
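
As a quick sanity check (assuming DNS and the Ingress in front of the nginx Deployment are already in place), both documents should now be reachable anonymously from the internet:

$ curl -s https://g0.mycluster.eu/.well-known/openid-configuration | jq -r .issuer
https://g0.mycluster.eu
$ curl -s https://g0.mycluster.eu/openid/v1/jwks | jq -r '.keys[].kid'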

2. Create secrets and service accounts in GCP

I have used Terraform to create secrets in GCP:

resource "google_secret_manager_secret" "key" {
secret_id = "key"

labels = {
type = "secret"
}

replication {
automatic = true
}
}

resource "google_secret_manager_secret_version" "key" {
secret = google_secret_manager_secret.key.id
secret_data = "Super secret information!!!"
}
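
This assumes the Secret Manager API is already enabled for the project; if it isn't, the apply will fail until it is. The API (together with the STS and IAM Credentials APIs, which the federation flow relies on later) can be enabled up front, for example with gcloud:

$ gcloud services enable \
    secretmanager.googleapis.com \
    sts.googleapis.com \
    iamcredentials.googleapis.com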

I also needed a GCP service account that has read access to the secret. I will later impersonate it via workload identity:

resource "google_service_account" "secret_reader" {
account_id = "secretreader"
display_name = "Secret Reader GCP SA"
}

resource "google_secret_manager_secret_iam_binding" "binding" {
project = google_secret_manager_secret.key.project
secret_id = google_secret_manager_secret.key.secret_id
role = "roles/secretmanager.secretAccessor"
members = [
google_service_account.secret_reader.member
]
}
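
To double-check that the binding landed where expected (assuming gcloud is authenticated against the same project), the secret's IAM policy can be inspected directly; it should list the secretreader account under roles/secretmanager.secretAccessor:

$ gcloud secrets get-iam-policy key
bindings:
- members:
  - serviceAccount:secretreader@<project-id>.iam.gserviceaccount.com
  role: roles/secretmanager.secretAccessor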

3. Create Workload Identity in GCP

See: Configure workload identity federation with AWS or Azure and terraform-google-kubernetes-engine/workload-identity for more insight.

We need to configure GCP to trust our OIDC provider. To do that, we are going to create a workload identity pool with a provider that points to our cluster. I also configured attribute mapping: in this example, the sub claim and the kubernetes.io namespace claim from our cluster's JWT are mapped to the subject and kubernetes_namespace attributes of the GCP token. Then, attribute_condition is configured to accept workloads from the eso namespace only:

resource "google_iam_workload_identity_pool" "onprem-cluster" {
workload_identity_pool_id = "onprem-cluster"
}

resource "google_iam_workload_identity_pool_provider" "onprem-cluster" {
workload_identity_pool_id = google_iam_workload_identity_pool.onprem-cluster.workload_identity_pool_id
workload_identity_pool_provider_id = "onprem-cluster"
display_name = "Onprem Kubernetes Cluster"
oidc {
issuer_uri = "https://g0.mycluster.eu"
allowed_audiences = ["sts.googleapis.com"]
}
attribute_mapping = {
"google.subject" = "assertion.sub"
"attribute.kubernetes_namespace" = "assertion[\"kubernetes.io\"][\"namespace\"]"
}
attribute_condition = "attribute.kubernetes_namespace==\"eso\""
}

The secretreader SA needs to be connected to the pool, so that every external identity that belongs to the pool (every token signed by our cluster) is able to impersonate this SA:

resource "google_service_account_iam_member" "main" {
service_account_id = google_service_account.secret_reader.name
role = "roles/iam.workloadIdentityUser"
member = "principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.onprem-cluster.name}/*"
}
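
The provider's full resource name will be needed in a moment as a ServiceAccount annotation on the cluster side. Rather than assembling it by hand, it can be read straight from Terraform state (or exposed as a Terraform output):

# the "name" attribute holds the value for the workload-identity-provider annotation used below
$ terraform state show google_iam_workload_identity_pool_provider.onprem-cluster | grep " name "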

4. Install gcp-workload-identity-federation-webhook on the cluster and test token exchange

See: pfnet-research/gcp-workload-identity-federation-webhook

Unfortunately, Google doesn't seem to provide an official webhook server (like Azure does), so I had to rely on this unofficial admission webhook server. I installed it from its Helm chart, set the container version in Chart.yaml's appVersion to 0.3.1, and created a small values file that deals with token permissions:

$ cat values.override.yaml
controllerManager:
  manager:
    args:
      - --token-default-mode=0444
$ helm upgrade --install gcpwi -f values.override.yaml .
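
Before testing, it's worth making sure the chart actually registered its mutating admission webhook and that the controller pod is running (the exact object and pod names depend on the release name, gcpwi here):

$ kubectl get mutatingwebhookconfigurations | grep -i gcp
$ kubectl get pods | grep gcpwi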

To validate that this works as expected, we can create a service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: identity-sa
  namespace: eso
  annotations:
    cloud.google.com/workload-identity-provider: "projects/<project-id>/locations/global/workloadIdentityPools/onprem-cluster/providers/onprem-cluster" # "name" field from terraform state show google_iam_workload_identity_pool_provider.onprem-cluster
    cloud.google.com/service-account-email: "secretreader@<project-id>.iam.gserviceaccount.com"

And a pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: identity-pod
  name: identity-pod
  namespace: eso
spec:
  serviceAccountName: identity-sa
  containers:
    - args:
        - sleep
        - infinity
      image: google/cloud-sdk:slim
      name: identity-pod

When the pod is created, the admission webhook injects an initContainer and some useful environment variables that handle the token exchange. Let's exec into the pod and see if we can read our secret:


$ kubectl -neso exec -it identity-pod -- bash
root@identity-pod:/# gcloud secrets versions access latest --secret=key
Super secret information!!!

Looks good!
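
It's also instructive to look at what the webhook actually changed: compared to the manifest above, the mutated pod spec now contains an injected initContainer plus credential-related environment variables and volume mounts:

$ kubectl -neso get pod identity-pod -o yaml | yq '.spec.initContainers[].name'
$ kubectl -neso get pod identity-pod -o yaml | yq '.spec.containers[0].env'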

5. Install and configure External Secrets Operator

We want the ESO (External Secrets Operator) pod to have the workload identity assigned, so we need to annotate its service account. We can do that by setting values in the ESO Helm chart:

$ cat values.eso.yaml
serviceAccount:
  annotations:
    cloud.google.com/workload-identity-provider: "projects/<project-id>/locations/global/workloadIdentityPools/onprem-cluster/providers/onprem-cluster"
    cloud.google.com/service-account-email: "secretreader@<project-id>.iam.gserviceaccount.com"
    cloud.google.com/gcloud-run-as-user: "65534"

$ helm repo add external-secrets https://charts.external-secrets.io
$ helm repo update
$ helm -neso upgrade --install eso external-secrets/external-secrets -f values.eso.yaml

Once ESO is installed, we create a SecretStore:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-secrets
  namespace: eso
spec:
  provider:
    gcpsm:
      projectID: <project-id>

And an ExternalSecret resource where we specify the secrets to sync:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gcp-secrets
  namespace: eso
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: gcp-secrets
  target:
    name: synced-creds
    creationPolicy: Owner
  data:
    - secretKey: first-secret
      remoteRef:
        key: "key"

External Secrets Operator now reconciles secrets, and here we are!

$ kubectl -neso get secret synced-creds -oyaml | yq .data.first-secret | base64 -d
Super secret information!!!%

If this helped you, you enjoyed it or have any questions or recommendations — leave a comment! Feel free to contact me on LinkedIn as well.
