Dynamic secrets on Kubernetes pods using Vault

Following Armon Dadgar's (HashiCorp CTO) Twitter thread and blog post on why we need dynamic secrets, I wanted to run an experiment: a web application (Ruby on Rails, specifically) running on Kubernetes, backed by Vault to generate database secrets for each pod.

I wanted to create an automatic and secure process using both Vault's Kubernetes authentication backend and its database secrets store to authenticate pods by their service account, so that they would be able to request credentials on init, renew database credentials while the pod is running, and revoke their credentials once they go down.

To request secrets on pod initialization I used Init Containers, secrets were renewed with the Sidecar Container pattern, and secrets were revoked with the preStop Pod lifecycle handler.


Prerequisites

Even though the main focus of this post is showing how to create, renew and revoke secrets dynamically using Kubernetes primitives, I'll give a quick guide on how to set up a Minikube cluster for this experiment.

I'm using a Mac for this demonstration, so the setup process might differ a bit on other OSes; more documentation is available here.

First we need to install minikube, virtualbox, helm, kubectl, consul client and vault client.
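On macOS, one way to get all of these is via Homebrew (a sketch; the exact package and cask names are assumptions and may differ depending on your Homebrew version):

```shell
# CLI tooling for the cluster, Vault and Consul
brew install kubectl kubernetes-helm consul vault
# the VM driver and Minikube itself (cask names may vary by Homebrew version)
brew cask install virtualbox minikube
```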

Next we initialize the cluster and install Helm on it,
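The cluster initialization step can be sketched as follows (assuming the VirtualBox driver and Helm v2, whose `helm init` installs the in-cluster Tiller component):

```shell
# start a local single-node cluster using the VirtualBox driver
minikube start --vm-driver=virtualbox
# install Tiller, Helm v2's in-cluster component
helm init
# wait until Tiller is ready before installing charts
kubectl -n kube-system rollout status deploy/tiller-deploy
```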

And finish by installing Consul, Vault and PostgreSQL to demonstrate a secrets backend that will be used by Vault.
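A sketch of the chart installs, with values matching the connection string used later in this post (`root:root` and a `rails_development` database); the chart names and value keys are assumptions from the stable/incubator Helm repos of the time and may have moved since:

```shell
# add the incubator repo for the Vault chart
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
# Consul as a (currently unused) storage backend, Vault in dev mode, and PostgreSQL
helm install stable/consul
helm install incubator/vault --set vault.dev=true
helm install stable/postgresql \
    --set postgresUser=root,postgresPassword=root,postgresDatabase=rails_development
```

Helm generates random release names (e.g. `errant-mandrill` and `intended-moth` seen later in this post), so your service names will differ.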

Notice that Vault is running in dev mode, so Consul is not actually needed and Vault's data will be reset every time Minikube is brought up again. I've added Consul to the mix so it would be easy to take Vault off dev mode and experiment further, but that is not part of this post.

Setting up Vault

In production an operator would need to preconfigure Vault, enabling the Kubernetes authentication and PostgreSQL database backends, before we can start issuing secrets to pods:

VAULT_POD=$(kubectl get pods --namespace default -l "app=vault" -o jsonpath="{.items[0].metadata.name}")
export VAULT_TOKEN=$(kubectl logs $VAULT_POD | grep 'Root Token' | cut -d' ' -f3)
export VAULT_ADDR=http://127.0.0.1:8200
# run this on a second terminal
kubectl port-forward $VAULT_POD 8200
echo $VAULT_TOKEN | vault login -
vault status

Vault will now return some cluster details. Notice that we are now using the Vault root credentials to configure Vault.

vault status output

We will now enable the database secrets backend using the PostgreSQL plugin, which will connect to our database with credentials that can create a new role with specific grants on a specific database, with a default time to live of 1h.

That means our sidecar container will have to renew the credentials approximately every 60 minutes, otherwise our pod will no longer be able to issue queries against the database.
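As a sketch, the sidecar's renewal job can be as simple as a shell loop that calls Vault's lease renewal endpoint at half the TTL (the `sys/leases/renew` path matches the policy created later in this post; the environment variables are illustrative and assumed to be provided to the container):

```shell
# illustrative sidecar loop: renew the database lease every 30 minutes,
# well inside the 1h default_ttl
# assumes VAULT_ADDR, X_VAULT_TOKEN and LEASE_ID are set in the environment
while true; do
  curl --silent --request PUT \
    --header "X-Vault-Token: $X_VAULT_TOKEN" \
    --data "{\"lease_id\": \"$LEASE_ID\"}" \
    "$VAULT_ADDR/v1/sys/leases/renew"
  sleep 1800
done
```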

vault secrets enable database
vault write database/config/postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="postgres-role" \
    connection_url="postgresql://root:root@intended-moth-postgresql.default.svc.cluster.local:5432/rails_development?sslmode=disable"
vault write database/roles/postgres-role \
    db_name=postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="24h"
expected output of all commands

This is a good time to check that an authenticated vault user (at the time being, the root user) can issue read, renew and revoke commands to Vault and receive credentials.

vault read -format json database/creds/postgres-role
vault lease renew <LEASE ID>
vault lease revoke <LEASE ID>
issuing read, renew and revoke commands against our newly created PostgreSQL secrets backend

Next we will create a Vault policy that allows only these commands, create a Kubernetes service account that pods can use to authenticate with Vault via the service account's JWT token, and create a Vault role binding the policy to that service account.

This is explained in more depth in the documentation for the Vault Kubernetes auth method, the Kubernetes TokenReview API and Kubernetes service account tokens.

Creating a Vault policy for a specific role:

$ cat > postgres-policy.hcl <<EOF
path "database/creds/postgres-role" {
  capabilities = ["read"]
}
path "sys/leases/renew" {
  capabilities = ["create"]
}
path "sys/leases/revoke" {
  capabilities = ["update"]
}
EOF
$ vault policy write postgres-policy postgres-policy.hcl
Success! Uploaded policy: postgres-policy

Creating a Kubernetes service account and a cluster role binding that allows that service account to authenticate with the TokenReview API:

$ cat > postgres-serviceaccount.yml <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: postgres-vault
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres-vault
EOF
$ kubectl apply -f postgres-serviceaccount.yml
clusterrolebinding "role-tokenreview-binding" created
serviceaccount "postgres-vault" created

In order to enable the Kubernetes auth backend we need to extract the token reviewer JWT, the Kubernetes CA certificate and the Kubernetes host:

export VAULT_SA_NAME=$(kubectl get sa postgres-vault -o jsonpath="{.secrets[*]['name']}")
export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
export K8S_HOST=$(kubectl exec consul-consul-0 -- sh -c 'echo $KUBERNETES_SERVICE_HOST')

Next we can enable the Kubernetes authentication backend and create our Vault role that is attached to our service account.

vault auth enable kubernetes
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="https://$K8S_HOST:443" \
    kubernetes_ca_cert="$SA_CA_CRT"
vault write auth/kubernetes/role/postgres \
    bound_service_account_names=postgres-vault \
    bound_service_account_namespaces=default \
    policies=postgres-policy \
    ttl=24h
enabling, configuring and writing the postgresql vault role

If everything went well, we can now try creating a new pod on our cluster with the postgres-vault service account, then authenticate and interact with Vault.

# outside of the pod
# get the name of the vault service
$ kubectl get svc -l app=vault -o jsonpath="{.items[0].metadata.name}"
errant-mandrill-vault
$ kubectl get svc -l app=postgresql -o jsonpath="{.items[0].metadata.name}"
intended-moth-postgresql
# create a temporary pod
$ kubectl run tmp --rm -i --tty --serviceaccount=postgres-vault --image alpine

Once we're inside the pod we'll fetch the service account token, log into Vault, keep the token Vault issues us, and use it to fetch PostgreSQL credentials.

# some prerequisites
$ apk update
$ apk add curl postgresql-client jq
# fetch the vault token of this specific pod
$ KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

With our KUBE_TOKEN we can now log into Vault:

$ VAULT_K8S_LOGIN=$(curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://errant-mandrill-vault:8200/v1/auth/kubernetes/login)
$ echo $VAULT_K8S_LOGIN | jq
{
  "request_id": "c307a1de-7475-0c07-2226-a7f360fa0fe4",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "e1cde2be-7259-0dfe-c54d-7506e187ed22",
    "accessor": "b04bce5e-4244-49ef-90ae-6c7e27d52494",
    "policies": [
      "default",
      "postgres-policy"
    ],
    "metadata": {
      "role": "postgres",
      "service_account_name": "postgres-vault",
      "service_account_namespace": "default",
      "service_account_secret_name": "postgres-vault-token-tb6x4",
      "service_account_uid": "8231b01a-1f26-11e8-96b2-080027de9cbb"
    },
    "lease_duration": 86400,
    "renewable": true,
    "entity_id": "cfa77561-2569-3737-e9bb-56af39e85791"
  }
}
# keep the vault token to issue commands
$ X_VAULT_TOKEN=$(echo $VAULT_K8S_LOGIN | jq -r '.auth.client_token')

Next we'll request PostgreSQL credentials from Vault:

$ POSTGRES_CREDS=$(curl --header "X-Vault-Token: $X_VAULT_TOKEN" http://errant-mandrill-vault:8200/v1/database/creds/postgres-role)
$ echo $POSTGRES_CREDS | jq
{
  "request_id": "9c017247-0965-8c5f-2534-0a46dad8f893",
  "lease_id": "database/creds/postgres-role/7b7d9915-a7e9-1fe5-438d-56a93ebb11af",
  "renewable": true,
  "lease_duration": 3600,
  "data": {
    "password": "A1a-ssr0q19x8x7v8q1q",
    "username": "v-kubernet-postgres-y8v8zs4tur4qp6304x25-1521200550"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

We now have everything we need to issue commands against our database:

PGUSER=$(echo $POSTGRES_CREDS | jq -r '.data.username')
export PGPASSWORD=$(echo $POSTGRES_CREDS | jq -r '.data.password')
psql -h intended-moth-postgresql -U $PGUSER rails_development -c 'SELECT * FROM pg_catalog.pg_tables;'

If all went well, we’ll now see output from our PostgreSQL database,

pg output

Creating a deployment with dynamic secrets

Up until now we've just set up Vault and Kubernetes roles and service accounts and verified that we can actually automate everything; now it's time to create a Kubernetes Deployment that takes all those pieces and turns them into a fully automated process.

For this deployment I created a Ruby on Rails application with 3 replicas that prints its credentials on screen. The code is available over at https://github.com/gmaliar/vault-k8s-dynamic-secrets/tree/master/app.

The deployment first creates an initContainer and mounts one volume used to share the application credentials and another volume for the Vault token.
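A sketch of the relevant parts of such a deployment (the container images, mount paths and script names are placeholders; the real manifest lives in the repository linked above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-dymanic-secrets-rails
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vault-dymanic-secrets-rails
  template:
    metadata:
      labels:
        app: vault-dymanic-secrets-rails
    spec:
      serviceAccountName: postgres-vault
      volumes:
      - name: creds        # shared database credentials
        emptyDir: {}
      - name: vault-token  # shared Vault client token and lease ID
        emptyDir: {}
      initContainers:
      - name: vault-init   # logs into Vault, fetches DB creds, writes them to /creds
        image: vault-init:placeholder
        volumeMounts:
        - {name: creds, mountPath: /creds}
        - {name: vault-token, mountPath: /vault}
      containers:
      - name: rails        # the application, reading credentials from /creds
        image: rails-app:placeholder
        volumeMounts:
        - {name: creds, mountPath: /creds}
        lifecycle:
          preStop:         # revoke the lease when the pod shuts down
            exec:
              command: ["/bin/sh", "-c", "/scripts/revoke-lease.sh"]
      - name: vault-renewer  # sidecar that renews the lease before the 1h TTL expires
        image: vault-renewer:placeholder
        volumeMounts:
        - {name: vault-token, mountPath: /vault}
```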

Once we apply this deployment with kubectl apply -f app-deployment.yml and wait for the pods to come up, we will access all three pods that were created and validate that they do in fact use unique credentials.

We will port forward into all three containers and assign them to different ports outside of the Kubernetes cluster.

Getting all the pod names,

$ kubectl get po -l app=vault-dymanic-secrets-rails -o wide
getting all pod names

And port-forwarding them locally for testing,
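A sketch of the port-forwarding step, mapping each replica's Rails port (3000) to a distinct local port (assumes a bash shell and the pod label from the previous command):

```shell
# collect the three pod names into a bash array
PODS=($(kubectl get po -l app=vault-dymanic-secrets-rails -o jsonpath='{.items[*].metadata.name}'))
# forward each pod's port 3000 to a different local port, in the background
kubectl port-forward ${PODS[0]} 3001:3000 &
kubectl port-forward ${PODS[1]} 3002:3000 &
kubectl port-forward ${PODS[2]} 3003:3000 &
```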

And accessing http://localhost:3001, http://localhost:3002, http://localhost:3003 will show us three distinct credentials.

localhost:3001
localhost:3002
localhost:3003

Conclusion

We were able to create a web application deployment that requests its own database credentials at init time, renews them using a sidecar container, and revokes them on shutdown.

We can further extend this to AWS IAM roles, additional databases supported by Vault, or any other secrets backend that Vault supports.