HashiCorp Vault’s Dynamic Secrets for Temporary Access

Shashwat Singh
7 min read · Jul 4, 2020


Introduction

Security is an important concern for any organisation, and secret management, i.e. creating secrets and granting access to users and applications, is one of its most challenging areas. HashiCorp's Vault is one solution for managing secrets and access. In this article, we will go over Vault's dynamic secrets feature, which allows a user or application to get temporary access to a system (a MySQL database in this case). Dynamic secrets are not stored in Vault ahead of time; they are created on demand when a read call is made to Vault. We will install Vault with a Consul storage backend and a MySQL database onto a Minikube cluster, and see how an application running inside a pod can query Vault to get temporary credentials to access the database.

Architecture Diagram

Setting up the environment

These instructions are for macOS.

Start by installing vault, kubectl and kubernetes-helm.

brew install vault kubectl kubernetes-helm

Now install virtualbox and minikube.

brew cask install virtualbox
brew cask install minikube

Start the Minikube cluster.

minikube start --memory 4096

Deploy Vault and Consul on Minikube

Start by adding the Helm incubator chart repository (hosted on Google Cloud Storage) to your local Helm configuration; the vault chart below is installed from it.

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
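
The consul chart used in the next step comes from HashiCorp's own chart repository, so add that as well (this assumes the standard HashiCorp Helm repository URL):

helm repo add hashicorp https://helm.releases.hashicorp.com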

Install Consul to Minikube.

helm install consul hashicorp/consul \
--set global.name=consul \
--set server.replicas=1 \
--set client.enabled=true \
--set server.bootstrapExpect=1
  • global.name=consul: Setting the global name to consul.
  • server.replicas=1: Limiting the number of server pods to 1 (default: 3).
  • client.enabled=true: The chart will install all the resources necessary for a Consul client on every Kubernetes node.
  • server.bootstrapExpect=1: The number of servers to wait for before performing the initial leader election and bootstrap of the cluster.

Install Vault to Minikube

helm install vault incubator/vault \
--set vault.dev=true \
--set vault.config.storage.consul.address="consul-server-0:8500",vault.config.storage.consul.path="vault"
  • vault.dev=true: Running vault in dev mode.
  • vault.config.storage.consul.address="consul-server-0:8500": Address of the consul storage backend pod.
  • vault.config.storage.consul.path="vault": Path name in consul storage.

Vault and Consul are now installed and configured on your Minikube cluster. You can run kubectl get pods to check whether the pods are running.
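
For example, listing the pods in the default namespace should show the consul and vault pods in a Running state:

kubectl get pods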

Deploy MySQL Database on Minikube

Add bitnami charts to local helm repo.

helm repo add bitnami https://charts.bitnami.com/bitnami

Install the MySQL helm chart.

helm install mysql bitnami/mysql \
--set root.password=password \
--set image.tag='5.7.27'
  • root.password=password: Setting the root user password to "password".
  • image.tag='5.7.27': Installing image version 5.7.27.

Configure Port Forwarding for Vault Server Pod

  • Get vault pod name.
export VAULT_POD=$(kubectl get pods --namespace default -l "app=vault" -o jsonpath="{.items[0].metadata.name}")
  • Get the vault root token.
export VAULT_TOKEN=$(kubectl logs $VAULT_POD | grep 'Root Token' | cut -d' ' -f3)
  • Set the Vault address to the locally forwarded port.
export VAULT_ADDR=http://127.0.0.1:8200
  • Forward the vault pod port (8200).
kubectl port-forward $VAULT_POD 8200
  • Keep this terminal running and open a new terminal.
  • Export the above VAULT_POD, VAULT_TOKEN and VAULT_ADDR variables again in the new terminal.
  • If you are not running Vault in dev mode, run the command below to log in to the Vault server.
echo $VAULT_TOKEN | vault login -

Check your vault CLI configuration by running vault status.

vault status

You have now forwarded the port successfully and can run commands against your Vault server pod from your machine.

Enable vault backend and configure DB connection for user creation

Vault's database secrets backend is not enabled by default. We need to enable it before we can configure the connection to the database.

vault secrets enable database

We will now configure the connection that Vault will make to the DB.

vault write database/config/mysql-database \
plugin_name=mysql-database-plugin \
connection_url="{{username}}:{{password}}@tcp(mysql-slave.default.svc.cluster.local:3306)/" \
allowed_roles="mysql-role" \
username="root" \
password="password"

We are using the mysql-database-plugin together with the root user and password created above to connect to the database. allowed_roles specifies which Vault roles are allowed to use this connection. mysql-slave.default.svc.cluster.local:3306 is the address of the MySQL DB service.
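
You can optionally verify the stored connection configuration; Vault returns the plugin name, connection details and allowed roles (the password itself is not returned):

vault read database/config/mysql-database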

Now, we will create the Vault role that will use the database connection configured above to create temporary DB users.

vault write database/roles/mysql-role \
db_name=mysql-database \
creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
default_ttl="1h" \
max_ttl="24h"

db_name refers to the DB connection that we created above. creation_statements describes the SQL commands used to create the temporary user. default_ttl describes the lease period of the credentials and max_ttl sets the maximum period for which the lease can be extended.
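
Similarly, you can inspect the role definition that Vault has stored:

vault read database/roles/mysql-role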

The database plugin and the connection are now configured. If you now read credentials from the mysql-role we created, Vault will use the mysql-database-plugin to connect to the DB, create a new user and return the username, password and lease-id for the newly created credentials. This lease-id is used to renew or revoke the credentials.

vault read database/creds/mysql-role

We are now able to read DB credentials from the Vault server CLI over the forwarded port.
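
For reference, the lease can later be renewed or revoked from the CLI as well; <lease_id> below stands for the full lease_id value returned with the credentials (e.g. database/creds/mysql-role/...):

vault lease renew <lease_id>
vault lease revoke <lease_id>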

But our aim is to allow an application running in a different pod to call Vault's database backend and get its own set of temporary credentials.

For this, our application must first authenticate with Vault and have the appropriate access to call the backend and fetch the credentials. We will create a Kubernetes service account to handle authentication, and a Vault policy that will be attached to this service account to authorize our application with Vault and its database backend.

Creating a vault policy

Create a mysql-policy.hcl file and add the following contents to it.

mysql-policy.hcl
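
A rough sketch of the policy, assuming the database backend is mounted at database/ as configured above and that leases are managed through the standard sys/leases endpoints:

# allow generating credentials from the mysql-role
path "database/creds/mysql-role" {
  capabilities = ["read"]
}

# allow renewing and revoking the credential leases
path "sys/leases/renew" {
  capabilities = ["update"]
}

path "sys/leases/revoke" {
  capabilities = ["update"]
}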

The above-described policy grants read access to the mysql-role and renew and revoke access to the credential leases.

vault policy write mysql-policy mysql-policy.hcl

This command creates the mysql-policy as described in the mysql-policy.hcl file.

Create a service account in Kubernetes

Create a mysql-serviceaccount.yml file and add the following contents to it.

mysql-serviceaccount.yml
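
A rough sketch of the manifest, assuming the default namespace and the built-in system:auth-delegator cluster role (the binding name itself is arbitrary):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-vault
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mysql-vault-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: mysql-vault
    namespace: default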

Run the below command.

kubectl apply -f mysql-serviceaccount.yml

This command creates the service account with name mysql-vault and grants it the auth-delegator role.

Enabling and configuring Vault’s Kubernetes auth backend

Start by enabling Vault's Kubernetes auth backend.

vault auth enable kubernetes

We will need to export a few parameters derived from the service account created above (VAULT_SA_NAME: the name of the service account's token secret, SA_JWT_TOKEN: the service account JWT token, SA_CA_CRT: the service account CA certificate, and K8S_HOST: the Kubernetes host); these will be used to configure the Kubernetes auth backend.

export VAULT_SA_NAME=$(kubectl get sa mysql-vault -o jsonpath="{.secrets[*]['name']}")
export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
export K8S_HOST=$(kubectl exec consul-server-0 -- sh -c 'echo $KUBERNETES_SERVICE_HOST')

Now run the below command to configure the auth backend.

vault write auth/kubernetes/config \
token_reviewer_jwt="$SA_JWT_TOKEN" \
kubernetes_host="https://$K8S_HOST:443" \
kubernetes_ca_cert="$SA_CA_CRT"

Next, we will attach the policy created above (mysql-policy) to the service account (mysql-vault); this allows workloads using that service account to call Vault's database backend to read, renew or revoke credentials.

vault write auth/kubernetes/role/mysql \
bound_service_account_names=mysql-vault \
bound_service_account_namespaces=default \
policies=mysql-policy \
ttl=24h

The Vault Kubernetes auth backend is now configured with the Kubernetes service account.
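
You can confirm the binding by reading the role back:

vault read auth/kubernetes/role/mysql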

To check that an application running in a different pod in the cluster can obtain temporary credentials from Vault, we will use a pod running the alpine image and call the Vault APIs with curl.

Create temporary credentials from an application running on a different pod in the cluster

Create a pod in the cluster running the alpine image and attach the mysql-vault service account created above to it.

kubectl run tmp --rm -i --tty --serviceaccount=mysql-vault --image alpine

Update the package index and install the basic utilities required to call the Vault APIs.

apk update
apk add curl mysql-client jq

Get the service account token that’ll be used to authenticate with the vault.

KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

Log in to Vault and get the client token (X_VAULT_TOKEN) that we will use to call the mysql-role.

VAULT_K8S_LOGIN=$(curl --request POST \
--data '{"jwt": "'"$KUBE_TOKEN"'", "role": "mysql"}' \
http://vault:8200/v1/auth/kubernetes/login)
X_VAULT_TOKEN=$(echo $VAULT_K8S_LOGIN | jq -r '.auth.client_token')

Now use this token to call the mysql-role that will create a new user in the DB and respond with the username, password and lease-id for the credentials.

MYSQL_CREDS=$(curl --header "X-Vault-Token:$X_VAULT_TOKEN" http://vault:8200/v1/database/creds/mysql-role)
echo $MYSQL_CREDS | jq

You will get output similar to that shown below.

Getting creds from Vault

You can verify the connection with the command below.

mysql -u v-kubernetes-mysql-role-Q10Mc5bP -h mysql-slave.default.svc.cluster.local -pA1a-QTbUvmN1etJ5KscQ

Replace the username and password with the values that you got from Vault.

Getting access to mysql DB

As you can see, we have access to the MySQL database using the newly created temporary credentials, from an application running in a different pod.
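
Since the policy also allows renewing and revoking leases, the application can revoke its credentials as soon as it no longer needs them. A rough sketch of that call, reusing the $MYSQL_CREDS response captured above:

# extract the lease id that was returned with the credentials
LEASE_ID=$(echo $MYSQL_CREDS | jq -r '.lease_id')
# revoke the lease; Vault deletes the temporary MySQL user right away
curl --header "X-Vault-Token:$X_VAULT_TOKEN" \
--request PUT \
--data '{"lease_id": "'"$LEASE_ID"'"}' \
http://vault:8200/v1/sys/leases/revoke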

Further Reading
