Adding Vault to your development Kubernetes cluster using Kind

Previously I have written about how to set up a Kubernetes cluster for development and test purposes using Kind. This article looks at adding Vault to that cluster for use with solutions that require it.

Martin Hodges
Apr 3, 2024

In my article on creating a Kubernetes cluster using Kind for local development and testing, I created a 3 node Kind cluster and added an Istio service mesh, Grafana/Loki/Prometheus monitoring stack and a Postgres database cluster.

This is a minimal setup for application development. If you want to develop against a Kafka-based, event-driven architecture, you will need to add a Kafka cluster to this Kind cluster. For that, you need a four node Kubernetes cluster.

When I came to develop a Spring Boot application to utilise this stack, I realised that I would be using Vault in my production environment. Vault is a secrets and credential management solution. I have written previously on how to introduce Vault into your cloud cluster but, in this article, I add it to your Kind cluster so you can develop against it.

I use a Mac for my development and so the instructions you see in my articles are for macOS.

Create your Kind cluster

Following the instructions from my earlier article, you should be familiar with how to create a Kind Kubernetes cluster to run on your local development machine.

We are going to deploy Vault as a cluster of 3 instances so that you can see how it works and test what happens when instances die.

For this reason, we are going to start out with a 4 node Kind cluster with the ports opened for all the applications we are going to use.

If required, you can reduce Vault to a single instance.

Create your Kind cluster configuration file:

kind-config.yml


apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30092
    hostPort: 30092
  - containerPort: 31321
    hostPort: 31321
  - containerPort: 31300
    hostPort: 31300
  - containerPort: 31400
    hostPort: 31400
- role: worker
- role: worker
- role: worker

This will create one control-plane node (master) and 3 worker nodes.

I have selected 3 worker nodes so that each of our Vault instances can sit on a different node.

You can also see that we expose a set of network ports on our host machine. This is because Kind implements its Kubernetes nodes as Docker containers and we need to expose any NodePort services to our local machine. These ports are for Vault, our Postgres database and our Grafana console.

We can now start up our Kubernetes cluster with:

kind create cluster --config kind-config.yml --name my-cluster

The name is optional. If you are only using one cluster, it is easier to leave it off. In this article, I will assume you have not used a name.

It takes a minute or two to create the cluster. Once up and running you can confirm the 4 nodes are up and Kubernetes is running with:

kubectl get nodes
kubectl get pods -A

Other applications

You can now add Istio (if required), Grafana / Loki and Postgres. We will use Postgres to test the Vault provisioning of secrets to our database and use Grafana to monitor everything.

I explain how to install these applications in this article on using Kind as a development and test environment. Add the applications and then come back to install Vault.

Installing Vault

We will install Vault using Helm. If you need to install Helm, you can find instructions in this article.

There are three ways to deploy Vault:

  1. As a standalone version for development and exploration
  2. As a single instance
  3. As a cluster

We will deploy it as a cluster. First we need to add the Helm repository to our local environment:

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

Next we will create a namespace to hold our Vault resources:

kubectl create namespace vault

Note that this namespace is not labelled for Istio sidecar injection. If you are using Istio, you will need to enable injection to secure connections into Vault. My previous article shows how to do this.
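If you are using Istio, sidecar injection is enabled with a single label on the namespace. A minimal sketch (see my Istio article for the full set up):

kubectl label namespace vault istio-injection=enabled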

Configure Vault deployment

Create a values file for configuring the Helm chart:

vault-config.yml

global:
  enabled: true
  tlsDisable: true
  namespace: vault
ui:
  enabled: true
  serviceType: NodePort
  serviceNodePort: 31400
server:
  dataStorage:
    storageClass: standard
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        cluster_name = "vault-integrated-storage"

        storage "raft" {
          path = "/vault/data/"
        }

        listener "tcp" {
          address = "0.0.0.0:8200"
          cluster_address = "0.0.0.0:8201"
          tls_disable = "true"
        }

        service_registration "kubernetes" {}

This creates a cluster of 3 replicas and includes a NodePort on port 31400. As this is only a development environment, we are not enabling TLS as that requires additional configuration.

You can see that we also use the standard storageClass, which tells Kind to provision the persistent storage for us.
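If you want to confirm that the standard storage class exists before deploying, you can list the storage classes (on a default Kind cluster, standard is backed by the local-path provisioner):

kubectl get storageclass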

Deployment

Now that we have configured Vault, we need to deploy it. This is done using Helm with the following command:

helm install vault hashicorp/vault -f vault-config.yml -n vault

This installs the Helm chart with the configuration file we just created.

When we check the pods (kubectl get pods -n vault), we will see:

NAME                                   READY   STATUS    RESTARTS   AGE
vault-0                                0/1     Running   0          45s
vault-1                                0/1     Running   0          45s
vault-2                                0/1     Running   0          45s
vault-agent-injector-6b448847d-bhnz6   1/1     Running   0          45s

You will see that the pods are Running but not ready (0/1). This is because each Vault instance has not yet been initialised and is still sealed.

Now check the services with:

kubectl get svc -n vault

You should see the following:

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
vault                      ClusterIP   10.96.113.171   <none>        8200/TCP,8201/TCP   8m22s
vault-active               ClusterIP   10.96.26.76     <none>        8200/TCP,8201/TCP   8m22s
vault-agent-injector-svc   ClusterIP   10.96.92.58     <none>        443/TCP             8m22s
vault-internal             ClusterIP   None            <none>        8200/TCP,8201/TCP   8m22s
vault-standby              ClusterIP   10.96.153.16    <none>        8200/TCP,8201/TCP   8m22s
vault-ui                   NodePort    10.96.157.190   <none>        8200:31400/TCP      8m22s

You should now be able to access the Vault console using a browser at http://localhost:31400. Rather than using the console, we will initialise and unseal from the command line.
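If you prefer to check from the command line first, Vault's unauthenticated health endpoint is available through the same NodePort. Note that Vault deliberately returns a non-200 HTTP status (such as 501) while it is uninitialised:

curl -s http://localhost:31400/v1/sys/health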

Initialising the Vault

Each instance of the Vault needs to be initialised and unsealed. Starting with the first instance, get a command line prompt and check the Vault status:

kubectl exec -it vault-0 -n vault -- sh
vault status

If you look at the status, it will say:

Key                Value
---                -----
Seal Type          shamir
Initialized        false
Sealed             true
...

This shows that the Vault is not yet initialised and is sealed.

We must now initialise the Vault with:

vault operator init -n 1 -t 1

This initialisation creates a single unseal key (-n 1) and requires only one key to unseal the Vault (-t 1). You can choose other numbers, such as 5 key shares with a threshold of 3, as required (note that the threshold -t cannot be greater than the number of keys -n).

Take a note of the root token and the unseal keys that it will display as you will need these in the next steps.
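If you would rather capture these values in a file than copy them from the terminal, the init command can emit JSON. A sketch run from your host machine, assuming jq is installed locally (the file name vault-init.json is just an example):

kubectl exec vault-0 -n vault -- vault operator init -n 1 -t 1 -format=json > vault-init.json
jq -r '.unseal_keys_b64[0]' vault-init.json   # the unseal key
jq -r '.root_token' vault-init.json           # the initial root token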

Unseal the Vault leader (the instance you are exec’d into) with:

vault operator unseal <unseal key from previous command>

If you have chosen a threshold higher than 1, you will need to repeat this with different unseal keys until the threshold is met.

The status will now show:

Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
...

Now you need to join the other instances to the cluster and unseal them. Repeat this for each of the remaining two instances:

kubectl exec -it vault-1 -n vault -- sh
vault operator raft join http://vault-active:8200
vault operator unseal <unseal key from earlier command>
exit

kubectl exec -it vault-2 -n vault -- sh
vault operator raft join http://vault-active:8200
vault operator unseal <unseal key from earlier command>
exit
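To confirm that all three instances have formed a Raft cluster, log in on any instance with the root token and list the peers. You should see three nodes, typically with vault-0 as the leader and the other two as followers:

kubectl exec -it vault-0 -n vault -- sh
vault login <root token from the init command>
vault operator raft list-peers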

If you have a single node Vault deployment, you can initialise it from the console.

Congratulations, you now have an unsealed Vault ready for use!

So, let’s use it.

Adding dynamic credentials to Postgres

I am assuming that you have installed the Postgres database from my previous article. We will now add a secret in our Vault deployment that will allow dynamic usernames and passwords to be created for that database.

Now that the Vault is initialised and unsealed, you can access the console at http://localhost:31400.

Sign in using the Token method. Use the initial root token that you recorded earlier.

Once you log in, you can now configure and store secrets for your systems.

Connecting Vault to Postgres

We will now connect Vault to our database instance. This will allow Vault to manage all of the credentials for all users of the database including the root user, postgres.

To do this we need to tell Vault:

  1. What type of secrets we want
  2. How to connect to the database (where it is and the initial credentials to connect to it)
  3. How to create a user
  4. To give us a password

We will now do this for our Postgres instance.

Step 1: Create Secrets Engine

So that Vault knows how to manage a secret, every secret is associated with a Secrets Engine. We are managing database credentials, so we need the Databases engine.

All secrets are referenced by a path. All secrets engines are created at a root path, which you can select. Any credentials created by that engine will then appear below that path.

  • In the console, on the left menu panel, select Secrets Engines.
  • Click Enable New Engine +
  • Select Databases and click Next
  • Click Enable Engine
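If you prefer the command line, the equivalent of this step is a single command, run inside a Vault pod after vault login (this mounts the engine at the default database/ path):

vault secrets enable database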

Step 2: Create a connection

Now we will tell Vault how to connect to our database.

Within the database engine path you just created:

  • Click Create connection +
  • Select the PostgreSQL database plugin
  • Give it a URL friendly name (eg: my-secure-db)
  • Add in the connection string:
    postgresql://{{username}}:{{password}}@db-cluster-rw.pg:5432/postgres
    (note that this assumes your database is in the same Kubernetes cluster in the pg namespace)
  • Add the root postgres username and password (super-secret if you have been following along the previous installation instructions)
  • Click Save

It may suggest that you rotate the root password but, for development purposes, you probably should not, as Vault will rotate it to a password that you will not know and cannot retrieve.
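As a rough CLI equivalent of Step 2, assuming the connection name, namespace and credentials used in this article, the connection can be created with a single write (allowed_roles="*" permits any role to use this connection):

vault write database/config/my-secure-db \
    plugin_name=postgresql-database-plugin \
    allowed_roles="*" \
    connection_url="postgresql://{{username}}:{{password}}@db-cluster-rw.pg:5432/postgres" \
    username="postgres" \
    password="super-secret"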

Step 3: Create a user

In Postgres, users are known as roles. We will now create a dynamic role. This is one that has a random name, password and a set time to live.

Within your database connection:

  • Click Add role +
  • Give it a name, eg: myAppAdmin
  • If not selected, select the connection name (ie: my-secure-db)
  • Choose a dynamic role type
  • For the creation statement enter:
    CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT ALL ON ALL TABLES IN SCHEMA myapp TO "{{name}}";
    Note that this grants access to the myapp schema only
  • Click Save

Step 4: Get new credentials

Once the role has been created, we can use it to get temporary credentials to access the database.

  • Within your database engine, click on Roles at the top of the screen.
  • Click on the role you created
  • Click Generate credentials

This will now show you a hidden username and password. You can reveal them or copy them. If you navigate away from this page, you cannot get back to them; if you lose the credentials, you will need to generate a new set.

Command line option

Note that all of the above can be done via the command line within a Vault pod.

To use the command line, you will first need to log in to Vault (with an access token) and then execute a command something like:

vault login
vault write database/roles/myrole \
    db_name="my-secure-db" \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="24h"
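
Once the role exists, generating credentials from the CLI is just a read. Each call returns a new, short-lived username and password tied to a lease:

vault read database/creds/myrole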

You now have a secure way of accessing your database with short lived credentials. You can also use Vault to create and rotate static credentials that your applications can use. More on that in another article when we connect a Spring Boot App to our database and get the credentials from Vault.

Summary

In this article we created a 4 node Kubernetes cluster with a Postgres database. We then installed a multi-node Vault cluster, initialised and unsealed it.

Once we had that working, we then used it to manage the credentials for our database user.

This now gives us a development and test environment where we can use Vault secrets.

If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them in the comments section.
