Hashicorp Vault Cluster On The AWS Elastic Kubernetes Service using Helm

Prithu Adhikary
23 min read · Sep 11, 2022


What are we gonna do?

So, without any further ado 😛

Why Hashicorp Vault

We don’t store credentials/passwords in plain configuration. They should always come from a secure location, a vault, to which only the application can authenticate and fetch the credentials it needs: database credentials, API keys, encryption keys, private keys, HMAC keys, and so on. The moral of the story is that such things must not be stored as part of the configuration and should be supplied or fetched when the application starts, or as and when the need arises. This is also in line with one of the 12 factors (namely, config) that modern stateless applications tend to satisfy to successfully qualify as microservices.

Vault also provides APIs for cryptographic operations, so that the keys remain inside vault and don’t leak into the application memory. For heavily regulated environments where compliance matters (HIPAA, PCI DSS etc.) and keys are mandated to be stored inside tamper resistant Hardware Security Modules (e.g. CloudHSM), Vault Enterprise has integration support. Vault also provides myriad authentication mechanisms, which we will look into later on.

Helm Charts

In simple words, helm is a nice dependency management cum templating engine for kubernetes descriptor files, where a helm chart describes a bunch of kubernetes artifacts such as deployments, replicasets, services, secrets, config maps, volumes, jobs, cron jobs and service accounts: basically anything and everything that can exist in a kubernetes cluster. Since it is a templating engine, the templating language supports constructs like variables, conditionals, loops and dynamic values pulled from a yaml file. The magic does not stop here: helm provides functions for string manipulation, lists, associative arrays (maps) and more. It even provides functions related to cryptography, such as generation of CA certificates, loading of existing certificates, signing of certificates, encryption and message digests, to name a few. The benefit of this is that you can write a chart that deploys everything required in a kubernetes cluster to start up an instance of an RDBMS (such as postgres?), publish it to a repository from where people can fetch it, override some properties and deploy it for their own use as part of the infrastructure their application runs on, just like a library that you would add to your project.

And soon you will realise that a helm chart for postgresql already exists ;) such as the one provided by bitnami:

https://bitnami.com/stack/postgresql/helm

The best way to reuse such a dependency is to create a helm chart of your own, declare the postgresql chart as a dependency and override its properties, along with the bits and pieces constituting your application deployment.

Similarly, Hashicorp has also published a Helm Chart for Vault, the source for which can be found at:

https://github.com/hashicorp/vault-helm

Now let us create a helm chart, declare a dependency on the vault chart and override some properties. We will also deploy a simple spring boot application that will pull some configuration from vault. And we will be using the AWS Elastic Kubernetes Service to deploy our workload.

Generating A Helm Chart

It’s simple:

$ helm create vault-cluster

The result being:

❯ tree vault-cluster
vault-cluster
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Well, there is already a bunch of files. Let’s understand at least the important bits.

charts/: This directory contains the dependencies, which themselves are helm charts too! But gzipped. This folder can also contain so-called subcharts. A subchart is not gzipped and exists as part of the parent helm chart, in this case, the vault-cluster. The subcharts also get deployed along with the parent. One thing to note: the dependencies will not get downloaded automatically. You will have to cd into the parent chart directory and issue the following command so that helm downloads the gzips from their respective repositories.

$ cd vault-cluster
$ helm dep update

Chart.yaml: This is like the pom file of a java project or the package.json of a nodejs project. It contains metadata such as the chart name, version, maintainers, dependencies and even conditional dependencies, which get deployed only when a specific condition holds true.

templates/: A directory that contains, well, as the name goes, templates! That is where you use the magic that I talked about in the previous section. These templates contain kubernetes object definitions along with the template code, which can include variables pulled from the values.yaml file, conditionals, loops etc.

values.yaml: A yaml file containing the properties that will be referenced in the templates.

Deploying A Chart

It’s simple too! In its simplest form, if you wanted to deploy the chart you just created, i.e. the vault-cluster chart, you would issue the following command:

$ RELEASE_NAME="my-vault-cluster"
$ helm install "$RELEASE_NAME" vault-cluster

This will process all the templates present in the parent, the dependencies and any subcharts present inside the charts directory, executing the template code and substituting values as required by referring to the values defined in values.yaml, and will dynamically render one big yaml document that can be passed on to the kube api server so that it can create the objects and schedule the pods across the kubernetes cluster nodes. I have declared the RELEASE_NAME variable just to denote what helm deploys: it deploys a release! And a release has a revision (or version). So, evidently, helm releases are versioned.

But if you change something in the configuration, for example an environment variable that is being passed to a container, and try to run helm install again, it will fail stating that the release already exists. There is a separate command to upgrade an existing release, which also increments the revision number:

$ helm upgrade "$RELEASE_NAME" vault-cluster

And even better, if you are automating stuff through a shell script and just want an upsert kind of behaviour, i.e. install the release if it is not already installed and upgrade it otherwise, you can issue the following command:

$ helm upgrade --install "$RELEASE_NAME" vault-cluster

Now Let’s Talk Business! Shall We!

As part of setting up the vault cluster, comprising three pods, and the demo application that will pull its configuration from vault, we will:

- Generate An RSA Keypair and a Root CA Certificate

Yes. We would need it because of the following two reasons:

  • The vault helm chart will boot up three pods running the vault server. The pods are created as part of a stateful set. Since it is a cluster, the pods will communicate with each other over TLS to synchronise their data using the RAFT protocol. And where there is TLS, there will be a certificate exchange followed by certificate validation against a trust chain and so on and so forth. So, we will need a root CA certificate and the corresponding private key to sign the CSRs for generating a certificate for each of the pods. To simplify stuff, we will generate a single certificate for all three pods, with their host names as SANs, or Subject Alternative Names. We know the hostnames of the pods in advance because they are part of a stateful set, and even the subdomain, because the vault pods declare one. More to come on this later.
  • Secondly, our spring boot application will also need to communicate with the vault service over TLS. So, yes, the CA certificate of the vault server needs to exist in the truststore of our demo application. To make things more interesting, we will configure mTLS, or mutual authentication, where the vault server will not only validate our demo application’s client certificate, but will also grant access to secrets based on the policies tied to that certificate.
    You will see that the configuration also takes in the client cert’s CA certificate. We will use the same root CA for both vault’s and our demo app’s identity.

- Create ConfigMap With The PEM Encoded Root CA Certificate

So that we can mount it as a volume and add it to a JKS truststore, to be supplied as the default SSL context truststore parameter. For this, we will write a Kubernetes Job which will generate this file and store it in an NFS share mounted on each of the demo app pods, so that it can be loaded.

- Create A Secret With an RSA Key Pair and The Vault Server Certificate Signed By The Root CA.

So that we can mount it as a volume in the vault pods. The vault server will pick it up from the volume and start listening for TLS connections on port 8200 (the default port the vault server starts on). We will ensure that proper SANs exist for all the vault nodes as well as for the service name we will use to access vault.
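For reference, the listener stanza that consumes such a mounted key pair typically looks like the sketch below. The parameter names are standard vault tcp listener options; the mount paths are assumptions (vault-helm mounts extra volumes under /vault/userconfig/&lt;name&gt;), not something the article prescribes.

```hcl
# Illustrative listener config; file paths are assumed mount points.
listener "tcp" {
  address         = "[::]:8200"
  cluster_address = "[::]:8201"
  tls_cert_file   = "/vault/userconfig/vault-cert/tls.crt"
  tls_key_file    = "/vault/userconfig/vault-cert/tls.key"
  # For mTLS, point vault at the client CA trust chain as well.
  tls_client_ca_file = "/vault/userconfig/vault-cert/ca.crt"
}
```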

- Create A Secret With an RSA Key Pair and The Demo Application Certificate

Again, we will mount this somewhere. Previously we talked about a Job that will write the truststore to the NFS share. Well, the application also needs the keystore, which will contain the private key and the client certificate it will present to vault for authentication. The certificate acts as the identity of the application, verifiable against the root CA certificate.

- Configure AWS Auto-Unseal For Vault

Initialisation of the vault cluster by itself does not open it up for modification or access; it stays in what is called the sealed state. Think of it as a metal vault or safe: you have to dial the knobs, set the combination and unseal it before you can put in or take out stuff.

So, we unseal the vault. That requires us to run an unseal command multiple times and supply the unseal keys (at least 3 if initialised with the default configuration). What happens under the hood is that vault recreates the root key from the unseal key shares, uses it to decrypt another encryption key, and that key in turn encrypts or decrypts the data. In other words, if there is data already present in the vault storage, vault won’t be able to decrypt it unless you unseal it so that it can recreate the root key.

Furthermore, if due to some reason the server(or a pod in our case) gets restarted, it boots up to an initialised but sealed state. And your application won’t be able to authenticate and pull up the configuration or invoke any of the cryptographic operations.

Soon you realise that you could write a cron job (or a k8s CronJob) that keeps checking whether the vault cluster is in the sealed state and runs the unseal command in case it is. This also means it needs access to the unseal keys somehow. That is a bad idea.

Or, you can use AWS auto unseal, which encrypts the root key with an AWS KMS key when proper credentials are provided. We will also look into the IAM user and the attached policy (kept simple for brevity).

- Write A Vault Initialiser Job And Supporting Shell Script

When a vault server starts up for the first time, it is in an uninitialised state. At this stage, it is not operational and can’t store or read anything, because it does not have the keys necessary to encrypt the secrets or do any cryptographic operation.

So, we initialise it, which results in the generation of a root key, a bunch of unseal keys (5 by default) and an eternally valid root token.

This works similarly in the case of a cluster: we initialise the cluster, typically by connecting to the active node (a pod in our case).

We will write a shell script, save it in a configmap (relatively easily, using helm functions) and mount it as a volume so that the Vault Initialiser Job has access to it and can execute it.
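As a sketch of what “saving it in a configmap using helm functions” can look like: the .Files.Get and indent functions are standard helm, but the template file name and the scripts/vault-init.sh path inside the chart are assumptions here.

```yaml
# Hypothetical templates/script-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-script
data:
  vault-init.sh: |
{{ .Files.Get "scripts/vault-init.sh" | indent 4 }}
```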

- Enable Certificate Based Authentication On Vault

We will enable certificate based authentication. When we start up the vault cluster, we provide the client certificate CA trust chain, which ensures that vault validates the client certificate presented by the client during the TLS handshake against that trust chain. But for vault to authorise access to secrets, we need to import the client certificate and associate it with policies that grant capabilities on specific paths, for example, read access to the path secrets/demo-config.

- Shell Script And A Job To Create A Keystore and a Truststore

For our spring boot application, we will create a PKCS12 keystore containing the demo application’s client certificate and the corresponding RSA private key. We will also create a truststore containing the CA certificate we generated at the beginning. We will execute it as part of a Job.

So, let us begin

We will start with the generation of the CA certificate.

Generation Of RSA Keypair and a Root CA Certificate

Since it is a CA certificate, let’s go ahead with a validity of 5 years. That will give us some time before we have to swap the CA certificate. A 4096 bit RSA key will also provide enough (maybe an overkill 😛) security inside our private network.

But for a CA certificate to be a CA certificate, there are a couple of properties it ought to exhibit:

Basic Constraints: It must have the basic constraints extension, which contains a cA boolean flag and a pathLenConstraint. The cA flag states whether the public key contained within the certificate may be used to validate certificate signatures. The pathLenConstraint is an integer giving the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path. Since we are not going to have any intermediate certificates between our CA certificate and the vault and application certificates, this must be set to 0.

KeyUsage Extension: The keyUsage extension specifies the usage of the public key contained within the certificate. It may contain one or more enum constants specifying usages such as key encipherment, data encipherment and many more. CA certificates must have keyCertSign set, which states that the key is used for validating certificate signatures. We will go ahead with that. There is another one, cRLSign, which means the public key can also be used to validate Certificate Revocation List signatures. Setting up CRLs or OCSP responders may make sense when setting up a CA for a large corporation, where hundreds or thousands of internal certificates may need to be issued, but for our small use case it is just overkill.

Authority Key Identifier And Subject Key Identifier: These are SHA1 digests of the issuer’s public key and the subject’s public key respectively. But root CA certificates are always self signed (i.e. a tree’s root doesn’t have a root). So, we will see that the AKI and SKI are the same.

Let’s use OpenSSL to generate the key pair and the CA certificate. But to do that, we would need a small config file for the openssl to know what extensions to add to the certificate. Something bare minimum like:

# The Certificate Signing Request Properties
[ req ]
distinguished_name = dn_fields
default_md = sha256
x509_extensions = ca_extensions
# The Subject Distinguished Name RDNs
[ dn_fields ]
countryName = Country Name (2 letter code)
stateOrProvinceName = State or Province Name
0.organizationName = Organization Name
organizationalUnitName = Organizational Unit Name
commonName = Common Name
# The bare minimum CA extensions we want to add.
[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = keyCertSign

So, let’s quickly generate a 4096 bit RSA Private Key:

$ openssl genrsa -out ca.key 4096

And generate a Root CA certificate containing the corresponding public key:

$ openssl req -new -x509 -key ca.key -out ca.crt -config openssl.cnf

The nice thing about this command is, it will ask you for the CSR fields and then generate the certificate, containing the extensions we specified in the openssl.cnf. In our case, the output is a PEM encoded Root CA certificate stored in the ca.crt file.

If you quickly inspect the generated certificate by running our favorite:

$ openssl x509 -in ca.crt -text

You will see the extensions in place:

CA X509 Certificate Extensions

Note: AKI and SKI are the same 40 nibbles or 20 bytes (SHA1 digest’s length).
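If you want to check these claims end to end, the self-contained snippet below repeats the generation with a throwaway key under /tmp (the file names and the Test Root CA subject are scratch material, not the real CA) and inspects the resulting extensions:

```shell
# Write a minimal non-interactive config with the same CA extensions.
cat > /tmp/ca-ext.cnf <<'EOF'
[ req ]
distinguished_name = dn_fields
default_md = sha256
x509_extensions = ca_extensions
prompt = no
[ dn_fields ]
CN = Test Root CA
[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = keyCertSign
EOF
# Throwaway key and self-signed CA certificate.
openssl genrsa -out /tmp/test-ca2.key 2048
openssl req -new -x509 -key /tmp/test-ca2.key -out /tmp/test-ca2.crt -config /tmp/ca-ext.cnf
# The SKI (and, being self-signed, the AKI) should show the same digest.
openssl x509 -in /tmp/test-ca2.crt -noout -text | grep -A1 'Subject Key Identifier'
```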

Let’s also encrypt the private key with a passphrase.

$ openssl pkcs8 -topk8 -in ca.key -out encrypted-ca.key -iter 1000

Key in the passphrase and you now have the encrypted-ca.key. Make sure to remove the ca.key file as it is no longer needed.

Note: If you are doing a production deployment, you are better off keeping the encrypted-ca.key in some safe place, not checked in along with the source code.
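A quick way to convince yourself the encrypted key is intact is a decrypt round-trip. The snippet below uses a throwaway key so the real ca.key is not involved, and the -passout/-passin flags only keep the example non-interactive (interactively you would just type the passphrase):

```shell
# Throwaway key standing in for ca.key.
openssl genrsa -out /tmp/test-ca.key 2048
# Encrypt it the same way as in the article.
openssl pkcs8 -topk8 -in /tmp/test-ca.key -out /tmp/test-ca.enc.key \
  -iter 1000 -passout pass:changeit
# Decrypt it back; identical moduli mean the same key survived.
openssl pkcs8 -in /tmp/test-ca.enc.key -out /tmp/test-ca.dec.key \
  -passin pass:changeit
[ "$(openssl rsa -in /tmp/test-ca.key -noout -modulus)" = \
  "$(openssl rsa -in /tmp/test-ca.dec.key -noout -modulus)" ] && echo "keys match"
```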

Preparing the Chart For The Next Steps

We created the vault-cluster helm chart previously. Let us clear out the templates directory, leaving only the _helpers.tpl behind.

Let’s quickly add the vault helm chart as a dependency in Chart.yaml. The final result looks something like this:

apiVersion: v2
name: vault-cluster
description: A Helm chart for deploying a vault cluster.
type: application
version: 0.1.0
appVersion: "1.16.0"

dependencies:
- name: vault
  version: 0.21.0
  repository: https://helm.releases.hashicorp.com

But, just adding this dependency won’t enable us to deploy the cluster. We will have to do a dep update.

$ helm dep update

And now the charts directory contains the downloaded vault chart archive.

Now, we are ready to write some templates and do some property overriding in the values.yaml.

Create ConfigMap With The PEM Encoded Root CA Certificate

Let’s create a file named configmaps.yaml under the templates folder. This, as the name goes, will store the definitions of the configmaps we will use. Since we require the contents of the CA certificate during the creation of the truststore, we can save the CA certificate in a configmap and mount it later as a volume in the pods. The definition would look something like this:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca
data:
  tls.crt: |
{{ .Values.caCert | indent 4 }}

We will pass the caCert as a command line argument to the helm upgrade command, using the --set-file option.

Note: We could have used .Files.Get to load the certificate, but that mandates that the resources be kept inside the helm chart directory. That would lead to duplication if we wanted to create a separate helm chart using the same root CA.

Create A Secret With an RSA Key Pair and The Vault Server Certificate Signed By The Root CA

Let us create a file called secrets.yaml under the templates directory. This file will contain the definitions of the secrets that we will later on use or mount as volumes.

To create a TLS Secret, we would need to pass the following to the kube-api:

---
apiVersion: v1
kind: Secret
metadata:
  name: <secret name>
type: kubernetes.io/tls
data:
  tls.crt: <base64 encoded pem encoded certificate>
  tls.key: <base64 encoded pkcs8 unencrypted rsa private key>

But, to obtain the vault certificate, we would first need to load the CA using the buildCustomCert template function:

$ca := buildCustomCert "base64-encoded-ca-crt" "base64-encoded-ca-key"

This is beneficial in two ways:

  • It detects if the key and the certificate correspond.
  • The loaded certificate object can then be passed to another template function called genSignedCert to generate a new certificate signed by the passed in CA certificate object.

This is how you would use the genSignedCert template function:

$cert := genSignedCert "demo-app" nil nil 365 $ca

So, if we want to load and use the CA certificate for signing other certificates, we would do it like this.

{{ $ca := buildCustomCert (b64enc .Values.caCert) (b64enc .Values.caKey) }}

We will pass the value of caKey as a command line argument to the helm upgrade command as well, using the --set option. We will write a shell script that prompts for the passphrase, decrypts the encrypted private key and sets it as the argument.
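Such a wrapper might look like the sketch below. Apart from the --set-file/--set mechanics and the file names from earlier (encrypted-ca.key, ca.crt), everything here is an assumption: the release name, namespace and chart path are illustrative, and you would run it against your own cluster.

```shell
# Hypothetical deploy.sh: prompt for the CA key passphrase, decrypt the
# key in memory and hand both PEMs to helm.
printf 'CA key passphrase: '
stty -echo; read -r PASSPHRASE; stty echo; echo
CA_KEY=$(openssl pkcs8 -in encrypted-ca.key -passin "pass:$PASSPHRASE")
helm upgrade --install vault-cluster ./vault-cluster \
  --namespace vault --create-namespace \
  --set-file caCert=ca.crt \
  --set caKey="$CA_KEY"
```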

Let’s also generate a new certificate to be used by the vault pods. For simplicity’s sake, we will generate a single certificate for the pods as well as for the vault-active service. But to generate the certificate, we need to know the Subject and the Subject Alternative Names that should go in, so that TLS communication among the vault pods, and between the demo application and the vault-active service, succeeds.

The vault chart deploys a service called vault-internal, which acts as the subdomain for the vault stateful set pods. So, the FQDNs of the three vault pods will take the form:

<release-name>-<pod index>.<release-name>-internal.<namespace>.svc.<cluster domain>

The rest is pretty easy to figure out, but to find the cluster domain, let’s spin up an alpine pod and cat the contents of its /etc/resolv.conf, with the pod definition stored in a file alpine.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - name: alpine
    image: library/alpine:latest
    command: [ "/bin/sh" ]
    args:
    - -c
    - >-
      cat /etc/resolv.conf

If you apply the alpine.yaml,

kubectl apply -f alpine.yaml

and then check the logs of the alpine pod by running:

kubectl logs alpine

You should see something like:

nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5

And voila! You have the cluster domain now, which is “cluster.local”.

So, now we know what the FQDNs of our vault pods will be (provided we deploy the cluster under the namespace vault and name the release vault-cluster):

vault-cluster-0.vault-cluster-internal.vault.svc.cluster.local
vault-cluster-1.vault-cluster-internal.vault.svc.cluster.local
vault-cluster-2.vault-cluster-internal.vault.svc.cluster.local

Similarly, since the demo app will access the active vault pod through the vault-active service, the certificate must also have the FQDN of the vault-active service either as the Subject or in SANs. And the FQDN would be

<release-name>-active.<namespace>.svc.<cluster domain>

Which translates to

vault-cluster-active.vault.svc.cluster.local

So, let’s do this

Subject: CN=vault-cluster-active.vault.svc.cluster.local
Subject Alternative Names:
- DNS:vault-cluster-0.vault-cluster-internal.vault.svc.cluster.local
- DNS:vault-cluster-1.vault-cluster-internal.vault.svc.cluster.local
- DNS:vault-cluster-2.vault-cluster-internal.vault.svc.cluster.local
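The naming pattern can be sanity-checked with a plain shell loop that mirrors the same printf scheme the chart template uses (release, namespace and cluster domain values match the example deployment above):

```shell
# Rebuild the pod FQDNs from their parts; purely illustrative.
RELEASE=vault-cluster; NS=vault; DOMAIN=cluster.local; REPLICAS=3
SANS=""
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  FQDN=$(printf '%s-%d.%s-internal.%s.svc.%s' "$RELEASE" "$i" "$RELEASE" "$NS" "$DOMAIN")
  SANS="$SANS $FQDN"
  echo "$FQDN"
  i=$((i + 1))
done
```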

So, the final contents of the secrets.yaml along with the function calls to generate the vault certificate and save it as a secret would look something like

# Load the CA certificate along with the private key.
{{ $ca := buildCustomCert (b64enc .Values.caCert) (b64enc .Values.caKey) }}

# Declare $subject of the vault certificate.
{{ $subject := (printf "%s-active.%s.svc.%s" .Release.Name .Release.Namespace .Values.clusterDomain) }}

# Generate a list of SANs for the vault certificate.
{{ $sans := list }}
{{ range $i, $e := until (.Values.vault.server.ha.replicas | int) }}
{{ $sans = append $sans (printf "%s-%d.%s-internal.%s.svc.%s" $.Release.Name $i $.Release.Name $.Release.Namespace $.Values.clusterDomain) }}
{{ end }}

# Issue the vault certificate using the $ca as the issuer.
{{ $vaultCert := genSignedCert $subject nil $sans 365 $ca }}

# Create Secret for the vault-cert.
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-cert
type: kubernetes.io/tls
data:
  tls.crt: {{ b64enc $vaultCert.Cert }}
  tls.key: {{ b64enc $vaultCert.Key }}

Create A Secret With an RSA Key Pair and The Demo Application Certificate

This is going to be a brief one. It is actually pretty similar to what we did for the vault certificate, but simpler. Assuming CN=demo-app as the subject of our certificate, the call to generate the signed certificate will be:

{{ $appCert := genSignedCert "demo-app" nil nil 365 $ca }}

And then we create a tls secret using the $appCert certificate, like so:

apiVersion: v1
kind: Secret
metadata:
  name: app-cert
data:
  tls.crt: {{ b64enc $appCert.Cert }}
  tls.key: {{ b64enc $appCert.Key }}

So, the final source of secrets.yaml would look like:

# Load the CA certificate along with the private key.
{{ $ca := buildCustomCert (b64enc .Values.caCert) (b64enc .Values.caKey) }}

# Declare $subject of the vault certificate.
{{ $subject := (printf "%s-active.%s.svc.%s" .Release.Name .Release.Namespace .Values.clusterDomain) }}

# Generate a list of SANs for the vault certificate.
{{ $sans := list }}
{{ range $i, $e := until (.Values.vault.server.ha.replicas | int) }}
{{ $sans = append $sans (printf "%s-%d.%s-internal.%s.svc.%s" $.Release.Name $i $.Release.Name $.Release.Namespace $.Values.clusterDomain) }}
{{ end }}

# Issue the vault certificate using the $ca as the issuer.
{{ $vaultCert := genSignedCert $subject nil $sans 365 $ca }}

# Issue the app certificate using the $ca as the issuer.
{{ $appCert := genSignedCert "demo-app" nil nil 365 $ca }}

# Create Secret for the vault-cert.
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-cert
type: kubernetes.io/tls
data:
  tls.crt: {{ b64enc $vaultCert.Cert }}
  tls.key: {{ b64enc $vaultCert.Key }}
---
# Create Secret for the app-cert.
apiVersion: v1
kind: Secret
metadata:
  name: app-cert
data:
  tls.crt: {{ b64enc $appCert.Cert }}
  tls.key: {{ b64enc $appCert.Key }}

Configure AWS Auto-Unseal For Vault

We have already read about why it is important. Let us just look at the configuration that makes this happen.

seal "awskms" {
}

Actually, it is this short because we are supplying the required AWS specific parameters as extra environment variables:

vault:
  server:
    extraEnvironmentVars:
      AWS_REGION: us-east-1
      AWS_ACCESS_KEY_ID: <redacted>
      AWS_SECRET_ACCESS_KEY: <redacted>
      VAULT_AWSKMS_SEAL_KEY_ID: <redacted>

We will look into the complete vault configuration shortly, but this is the bit that makes AWS auto unseal work like a charm.
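For completeness, the same parameters can also live directly in the seal stanza instead of environment variables. The parameter names below are the documented awskms seal options; the values are placeholders:

```hcl
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "<your kms key id>"
}
```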

Write A Vault Initialiser Job And Supporting Shell Script And Enable Cert Based Authentication For Demo Application

The Vault Initialiser Job is going to be a simple Kubernetes Job object with a pod definition that waits till the vault-0 pod is reachable and then executes a shell script to initialise it.

Let’s have a look at the shell script:

# The exit code of `vault status` reflects the seal status:
#
#   0 - unsealed
#   1 - error
#   2 - sealed

unset VAULT_STATUS
unset RET_VAL
# Loop until vault responds with a definite seal status.
while : ; do
  VAULT_STATUS=$(vault status)
  RET_VAL=$?
  [ $RET_VAL -ne 1 ] && break
  sleep 3
done

# If sealed, initialise it. AWS auto unseal not kicking in
# means the cluster is uninitialised.
if [ $RET_VAL -eq 2 ]; then
  unset INIT_OUTPUT
  INIT_OUTPUT=$(vault operator init)

  # NOTE: You can capture and email the initialisation
  # output for rekeying or just to obtain the root token.

  # Extract the root token and log in with it.
  VAULT_TOKEN=$(echo "$INIT_OUTPUT" | grep 'Initial Root Token' | awk '{ print $4 }')
  export VAULT_TOKEN
  vault login "$VAULT_TOKEN"

  # Write vault policies (readable from a configmap mounted as a volume).
  vault policy write read-secret /vault-policies/read-secret.hcl
  vault policy write create-token /vault-policies/create-token.hcl

  # Create a role named read-secret.
  vault write auth/token/roles/read-secret allowed_policies=read-secret

  # Enable the cert auth method if it is not already enabled.
  # (POSIX sh has no [[ =~ ]], so grep does the pattern check.)
  ENABLED_AUTH_METHODS=$(vault auth list)
  if ! echo "$ENABLED_AUTH_METHODS" | grep -q 'cert'; then
    vault auth enable cert
  fi

  # Register the demo-app certificate for cert based authentication.
  vault write auth/cert/certs/demo-app display_name=demo-app policies="create-token,read-secret" certificate="@$APP_CERT/tls.crt"

  # Disable the v2 secrets engine as it is not compatible with spring-cloud-starter-vault-config.
  vault secrets disable secret

  # Enable the v1 secrets engine.
  vault secrets enable -path=secret -version=1 kv

  # Write something to the demo-app secret.
  vault kv put secret/demo-app message="I am Groot!"
fi

The script is pretty self explanatory, but in brief: we wait for the vault status to say it is sealed. If it is sealed, it is also uninitialised, because if it were initialised, AWS auto unseal would have unsealed it automatically. Then we write some policies, create a role and associate the policies with the application cert we generated. That’s it!

The above script will be executed from within a vault based container, so the vault binary is available. The Job definition that runs this container as a pod looks like the following:

apiVersion: batch/v1
kind: Job
metadata:
  name: vault-initialiser-job
spec:
  template:
    metadata:
      labels:
        app: vault-initialiser
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: app-cert
        secret:
          secretName: app-cert
      - name: ca
        configMap:
          name: ca
      - name: vault-script
        configMap:
          name: vault-script
      containers:
      - name: vault-initialiser
        image: vault:latest
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app-cert
          name: app-cert
        - mountPath: /ca
          name: ca
        - mountPath: /vault-script
          name: vault-script
        env:
        - name: APP_CERT
          value: /app-cert
        - name: VAULT_ADDR
          value: https://{{.Release.Name}}-0.{{.Release.Name}}-internal.{{.Release.Namespace}}.svc.{{.Values.clusterDomain}}:8200
        - name: VAULT_CACERT
          value: /ca/tls.crt
        command: [ "/bin/sh" ]
        args:
        - -c
        - >-
          sh /vault-script/vault-init.sh

This also is pretty self explanatory. We spin up a pod and give vault certain environment variables to facilitate connectivity, such as VAULT_ADDR, and VAULT_CACERT pointing to vault’s CA certificate so that the client can establish a valid certification path and we are spared the godforsaken SSL handshake error. You will also see a few volumes mounted, such as the app-cert, so that we can enable cert based authentication for the application’s tls certificate. We also write a small key-value pair which our app will pull when it starts up.

After the Job has finished successfully, vault is ready for the application to connect and load its configuration.

Vault Initialiser — Completed!

Shell Script And A Job To Create A Keystore and a Truststore

Now, before we can finally boot up the spring boot application, we need the keystore and the truststore that spring-cloud-starter-vault-config will use to connect to the vault service. The spring-cloud-starter-vault-config is yet another spring boot starter dependency, one that auto-configures a spring boot application to pull its configuration from vault.

Let’s now explore the script the job will run:

#!/bin/bash

openssl pkcs12 -export -out /app-secrets/keystore.p12 -inkey /app-cert/tls.key \
-in /app-cert/tls.crt -name key -password pass:changeit

chmod ugo+rx /app-secrets/keystore.p12

if [ -f "/app-secrets/truststore.jks" ]; then
keytool -delete -alias ca -keystore /app-secrets/truststore.jks \
-keypass changeit -storepass changeit
fi

keytool -importcert -trustcacerts -file /ca/tls.crt \
-keystore /app-secrets/truststore.jks -alias ca -noprompt \
-keypass changeit -storepass changeit

So, we just create a PKCS12 keystore named keystore.p12 containing the RSA private key and the associated certificate, which we get by mounting the app-cert secret as a volume.

We also create a JKS truststore named truststore.jks and import our CA certificate.

If you look at the path these files are written to, you will see that it is where we have mounted the app-secrets EFS share.

So, let’s inspect the Job definition, especially the volume mounts.

---
apiVersion: batch/v1
kind: Job
metadata:
  name: certificate-writer
spec:
  template:
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: app-cert
        secret:
          secretName: app-cert
      - name: app-secrets
        persistentVolumeClaim:
          claimName: app-secrets-claim
      - name: ca
        configMap:
          name: ca
      containers:
      - name: certificate-writer
        image: eclipse-temurin:latest
        volumeMounts:
        - mountPath: /app-cert
          name: app-cert
        - mountPath: /app-secrets
          name: app-secrets
        - mountPath: /ca
          name: ca
        command: ["/bin/bash"]
        args:
        - -c
        - >-
          sh /scripts/write-keystore.sh

Pretty straightforward. We mount the app-cert secret and the ca configmap to read the inputs from, and the app-secrets persistent volume to write the generated files to. Note that we use the eclipse-temurin image because it ships with both of the tools the job needs: openssl and keytool.

The Demo Application

Wow! That was a hell of a lot of information, and yet it is not complete. For the complete configuration, please refer to the GitHub repo mentioned at the end of this entry. So, let's quickly jump into our demo application.

First, we add the dependencies necessary:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>

As stated earlier, we need spring-cloud-starter-vault-config to auto-configure our application to fetch its config from Vault. We also need spring-cloud-starter-bootstrap, which, as the name suggests, enables the bootstrap context that the spring-cloud-starter-* dependencies rely on; without it, the Vault config simply won't be loaded. We also require the web starter dependency and a few others. Just look at the GitHub repo to see what else!
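Note that these starters take their versions from the Spring Cloud BOM, so the pom also needs an import along these lines (the version property is a placeholder here; the repo pins the actual release train):

```
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```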

Next, let us just write a small controller and inject a value that we have already written to the vault during initialisation.

@RestController
public class MessageController {

    @Value("${message}")
    private String message;

    @GetMapping("/groot-says")
    public String grootSays() {
        return this.message;
    }
}

We also provide the bootstrap configuration in bootstrap.yml.

spring:
  application:
    name: demo-app
  cloud:
    vault:
      host: ${VAULT_SERVER}
      scheme: https
      kv:
        enabled: true
        application-name: demo-app
        default-context: demo-app
      enabled: true
      authentication: CERT
      ssl:
        key-store: file://${KEYSTORE}
        trust-store: file://${TRUSTSTORE}
        key-store-password: ${KEYSTORE_PASSWORD}
        trust-store-password: ${TRUSTSTORE_PASSWORD}
        key-store-type: PKCS12
        cert-auth-path: cert

Don’t worry, we will be passing all those environment variables in our deployment template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-deployment
spec:
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      volumes:
        - name: app-secrets
          persistentVolumeClaim:
            claimName: app-secrets-claim
      containers:
        - name: demo-app
          image: 117826729006.dkr.ecr.us-east-1.amazonaws.com/demo-app:1.0
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /app-secrets
              name: app-secrets
          env:
            - name: KEYSTORE
              value: /app-secrets/keystore.p12
            - name: TRUSTSTORE
              value: /app-secrets/truststore.jks
            - name: KEYSTORE_PASSWORD
              value: changeit
            - name: TRUSTSTORE_PASSWORD
              value: changeit
            - name: VAULT_SERVER
              value: {{.Release.Name}}-active.{{.Release.Namespace}}.svc.{{.Values.clusterDomain}}

We also need to create a service to target the pod running as part of the above deployment.

apiVersion: v1
kind: Service
metadata:
  name: demo-app-service
spec:
  selector:
    app: demo-app
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP

Well, that's it. Once you deploy the Helm chart, you should have a Vault cluster running, along with an application that pulls its configuration from Vault.
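Deploying the chart is a single command; the release name and chart path below are placeholders (keep in mind that the release name feeds into the {{.Release.Name}}-active service name used above):

```
$ helm install vault ./vault-cluster-chart -n vault --create-namespace
```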

You can quickly look up the services:

kubectl get services -n vault

And since our service is a LoadBalancer service, you will see an output like this:

demo-app-service         LoadBalancer   172.20.115.192   a4fa3d1d7ddd54c26a4c201cae1f6c4a-1839542571.us-east-1.elb.amazonaws.com   80:30587/TCP        3m47s

And there you have the DNS name and the port. Just note that, as in the example above, the service is also exposed on a NodePort; that is how EKS implements LoadBalancer services. Alternatively, you can create an Ingress, which leads to the creation of an ALB, and access the service through that instead.
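For completeness, a minimal sketch of such an Ingress, assuming the AWS Load Balancer Controller is installed in the cluster (the annotations below belong to that controller; the service name and port match the manifest above):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  namespace: vault
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app-service
                port:
                  number: 80
```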

And if you port-forward and then curl the endpoint, you will see the message we initially saved as a secret in Vault.

kubectl port-forward service/demo-app-service -n vault 8080:80

And

curl http://localhost:8080/groot-says
I am Groot!

Well, thanks for sticking around through this long and tiresome ordeal. But, you gotta do what you gotta do!

Last but not least. The GitHub Repo: https://github.com/prithuadhikary/vault-cluster-spring-boot-helm-demo

Adios!
