External Vault to Kubernetes clusters integration

Igor Kanshyn
6 min read · Jun 22, 2023


This article describes how to integrate an external Vault with a Kubernetes cluster.

Such an integration gives the ability to inject secrets into Kubernetes pods. The secret values are injected into a file available to the pods. The file format is customizable, and it is possible to render a shell script that defines the secret values as environment variables in a Kubernetes pod.
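For illustration, an injected file rendered as a shell script might look like the sketch below. The path and variable names here are hypothetical; the real path and template are configured later in this article, and a temp directory stands in for the pod's secret mount so the sketch runs anywhere:

```shell
# Hypothetical rendering of an injected secret file as a shell script.
# In a real pod the Vault agent would write it under /vault/secrets/.
mkdir -p /tmp/vault-demo
cat > /tmp/vault-demo/config.sh <<'EOF'
export DB_USERNAME='giraffe'
export DB_PASSWORD='salsa'
EOF

# An application entrypoint could source the file before starting:
. /tmp/vault-demo/config.sh
echo "$DB_USERNAME"
```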

Prerequisites

  • We will utilize Vault Agent containers here. We assume that Vault is installed on Kubernetes, but on a different cluster than the one we are integrating.
  • We need a working, unsealed Vault cluster.
  • We need a Kubernetes cluster (or several) to integrate.
  • The kubectl, git, and helm tools need to be installed.
  • A Linux or Mac computer is used to run the tools.

Set up Kubernetes alias

Execute the following command:

alias k=kubectl

Getting access to Vault

We assume that Vault is installed on its own Kubernetes cluster and it is a high availability (HA) installation.

Checking Vault deployment

Let’s connect to your Vault Kubernetes cluster and check what pods we have:

k get po

We assume here that Vault is running in the default namespace.

You should get something like this in the result:

NAME                         READY   STATUS    RESTARTS   AGE
vault-0                      1/1     Running   0          24h
vault-1                      1/1     Running   0          24h
vault-2                      1/1     Running   0          24h
vault-agent-injector-7d866   1/1     Running   0          24h
vkjapp-7f8f975d-r59r9        2/2     Running   0          23h

As you can see, we have an HA Vault installation with three Vault instances and an agent injector pod. They should all be running.

This Vault was installed following this tutorial: https://learn.hashicorp.com/tutorials/vault/kubernetes-google-cloud-gke?in=vault/kubernetes

Checking vault status

k exec -it vault-0 -- vault status

It is essential to make sure that your Vault is initialized and NOT sealed (Sealed: false):

Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
...
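If you want to check the seal status in a script, `vault status` supports JSON output. The sketch below parses a canned sample with jq so it runs without a cluster; against a live cluster you would capture the real output instead:

```shell
# Canned sample of `vault status -format=json` output (abridged).
# With a live cluster: STATUS_JSON=$(k exec vault-0 -- vault status -format=json)
STATUS_JSON='{"type":"shamir","initialized":true,"sealed":false}'

SEALED=$(echo "$STATUS_JSON" | jq -r '.sealed')
echo "sealed=$SEALED"
```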

Insert test secrets into Vault

Let’s add a test secret to validate the connectivity later.

Exec into the Vault pod:

k exec --stdin=true --tty=true vault-0 -- /bin/sh

While still in the Vault shell session, let’s enable the kv-v2 secrets engine at the path secret. Execute the following:

vault secrets enable -path=secret kv-v2

Next, create a secret at path secret/devwebapp/config with a username and password:

vault kv put secret/devwebapp/config username='giraffe' password='salsa'

Finally, verify that the secret is defined at the path secret/data/devwebapp/config.

vault kv get secret/devwebapp/config

You should get the following output:

====== Metadata ======
Key              Value
---              -----
created_time     2020-12-11T19:14:05.170436863Z
deletion_time    n/a
destroyed        false
version          1
====== Data ======
Key         Value
---         -----
password    salsa
username    giraffe
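The same secret can also be read over Vault's HTTP API. The sketch below parses a canned kv-v2 response with jq so it runs offline; note that kv-v2 nests the key/value payload under .data.data:

```shell
# Canned kv-v2 read response; with a live Vault the call would be:
#   curl -s -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/data/devwebapp/config"
RESPONSE='{"data":{"data":{"username":"giraffe","password":"salsa"},"metadata":{"version":1}}}'

# kv-v2 wraps the key/value pairs in .data.data
USERNAME=$(echo "$RESPONSE" | jq -r '.data.data.username')
PASSWORD=$(echo "$RESPONSE" | jq -r '.data.data.password')
echo "$USERNAME / $PASSWORD"
```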

Optional. Make the Vault UI and API publicly accessible.

The Vault I installed did not have an external IP or domain configured for access.

At this step, we will create a LoadBalancer service to get an external IP address so that Vault can be accessed from other Kubernetes clusters or the internet. Note that this access URL is plain HTTP and should not be used with real applications.

We will utilize a helper repository from GitHub here.

Clone the vault-kubernetes-java repository:

git clone https://github.com/grenader/vault-kubernetes-java.git
cd vault-kubernetes-java/vault

Create a LoadBalancer to access Vault externally:

k apply -f vault-load-balancer.yml

Wait until the load balancer gets an external IP, then print it:

k get svc vault-lb
k get svc vault-lb -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

Store this IP and the access URL in environment variables:

EXTERNAL_VAULT_ADDR=$(k get svc  vault-lb -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
VAULT_ADDR=http://$EXTERNAL_VAULT_ADDR:8200

Since we will work with several clusters, we will store this Vault cluster’s kubeconfig in a separate file for future use:

cp ~/.kube/config ~/.kube/config-vault-ha

We have stored our config in the config-vault-ha file.

Preparing another Kubernetes cluster to integrate with Vault

The following steps were partially derived from the https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault#deploy-service-and-endpoints-to-address-an-external-vault tutorial.

At this point, we will switch to another Kubernetes (k8s) cluster. Let’s call it the “client cluster”. This cluster will use our Vault cluster.

Since I am using Google Cloud Platform (GCP) to run the k8s cluster, I will use something like this:

gcloud container clusters get-credentials <cluster-name> ...

On a Tanzu Kubernetes Grid Integrated Edition (TKGI) cluster, we need to use:

tkgi get-credentials <cluster-name>

These commands will replace the ~/.kube/config file with the new cluster’s connection info.

Check that you are on the right cluster, for instance:

k cluster-info

And observe the results.

Create a Kubernetes service account named internal-app. We will use it later.

k create sa internal-app

Optional. Test direct connectivity to Vault

This is an optional step. We will call an application that connects to Vault directly, so we need to supply a Vault token.

Set your Vault Access Token into the environment variable:

VAULT_ACCESS_TOKEN=<your vault access token>

Make sure that EXTERNAL_VAULT_ADDR is set correctly. If not, look at the previous steps.

echo $EXTERNAL_VAULT_ADDR

Create a testing application pod

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: direct-vault
  labels:
    app: direct-vault
spec:
  serviceAccountName: internal-app
  containers:
    - name: app
      image: burtlo/devwebapp-ruby:k8s
      env:
        - name: VAULT_ADDR
          value: "http://$EXTERNAL_VAULT_ADDR:8200"
        - name: VAULT_TOKEN
          value: "$VAULT_ACCESS_TOKEN"
EOF
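The manifest above works because the heredoc delimiter (EOF) is unquoted, so the shell substitutes $EXTERNAL_VAULT_ADDR and $VAULT_ACCESS_TOKEN before kubectl ever sees the YAML. A minimal demonstration with an example address:

```shell
EXTERNAL_VAULT_ADDR=203.0.113.10   # example value, not a real cluster IP
# Unquoted delimiter: the shell expands variables inside the heredoc.
RENDERED=$(cat <<EOF
value: "http://$EXTERNAL_VAULT_ADDR:8200"
EOF
)
echo "$RENDERED"
# Quoting the delimiter (<<'EOF') would keep the $... text literal instead.
```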

Call the application to see the secret from Vault

k exec direct-vault -- curl -s localhost:8080 ; echo

The results should be:

{"password"=>"salsa", "username"=>"giraffe"}

Note that this application expects a secret at “secret/devwebapp/config” only.

We have validated that the Vault secret can be correctly accessed externally.

Note that this “direct connection” does not work for Vault servers running on HTTPS.

Installing the Vault client

Install Vault Agent via helm

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set "injector.externalVaultAddr=http://$EXTERNAL_VAULT_ADDR:8200"

Check the helm installation status:

helm status vault

See the new Vault agent deployment:

k get deploy

The output should be:

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
vault-agent-injector   1/1     1            1           87s

Prepare environment variables with the client cluster details

Execute the commands listed below to collect info about our current (client) Kubernetes cluster:

VAULT_HELM_SECRET_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME --output='go-template={{ .data.token }}' | base64 --decode)
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')

Check that those environment variables have values:

echo $TOKEN_REVIEW_JWT ; echo $KUBE_CA_CERT ; echo $KUBE_HOST
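Before wiring this token into Vault, it can be useful to peek at the JWT’s issuer claim, since it must match the issuer configured in Vault later. The sketch below builds a canned token so it runs without a cluster; with a real cluster, skip the first two lines and use your actual $TOKEN_REVIEW_JWT:

```shell
# Build a canned service-account JWT (header.payload.signature) for the demo.
PAYLOAD=$(printf '{"iss":"https://kubernetes.default.svc.cluster.local"}' | base64 | tr -d '=\n' | tr '+/' '-_')
TOKEN_REVIEW_JWT="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD}.sig"

# JWT payloads are base64url without padding; restore padding before decoding.
SEG=$(echo "$TOKEN_REVIEW_JWT" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
ISS=$(echo "$SEG" | base64 --decode | jq -r '.iss')
echo "$ISS"
```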

Set up Kubernetes Authentication in Vault

At this step, we will connect to our Vault, which runs on the other Kubernetes cluster, and use the environment variables that we set above.

Connect to Vault POD:

kubectl --kubeconfig ~/.kube/config-vault-ha exec -it vault-0 -- /bin/sh

While still in the vault shell session, enable Vault-Kubernetes authentication:

vault auth enable kubernetes

Note that if you have already enabled Kubernetes authentication on your Vault, this command will fail with “* path is already in use at kubernetes/”. That is OK. Also, if you have not logged in to this Vault for a while, you might see a “… * missing client token” error message. In that case, use “vault login” to log in first and then repeat the command.

Configure Kubernetes-Vault authentication

Exit the “vault-0” shell session and execute:

kubectl --kubeconfig ~/.kube/config-vault-ha exec -it vault-0 -- vault write auth/kubernetes/config \
    token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$KUBE_CA_CERT" \
    issuer="https://kubernetes.default.svc.cluster.local"

Create a Vault policy that allows read access to the secret/data/devwebapp/config path.

This is where we keep our test secret:

kubectl --kubeconfig ~/.kube/config-vault-ha exec -i vault-0 -- vault policy write devwebapp - <<EOF
path "secret/data/devwebapp/config" {
  capabilities = ["read"]
}
EOF
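Note the data/ segment in the policy path: kv-v2 exposes secrets on the API at &lt;mount&gt;/data/&lt;path&gt;, even though the CLI accepts the shorter secret/devwebapp/config form. A tiny helper (hypothetical, purely for illustration) makes the mapping explicit:

```shell
# Map a kv-v2 mount and logical path to the API/policy path (illustrative helper).
kv2_policy_path() {
  mount="$1"
  rest="$2"
  echo "${mount}/data/${rest}"
}

kv2_policy_path secret devwebapp/config
```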

Create a Vault role that binds this policy to the internal-app service account:

kubectl --kubeconfig ~/.kube/config-vault-ha exec -it vault-0 -- vault write auth/kubernetes/role/devweb-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=devwebapp \
    ttl=24h

Check Kubernetes-Vault integration

Deploy a testing application and its service

If you have not done this earlier, fetch the testing git repo:

git clone https://github.com/grenader/vault-kubernetes-java.git
cd vault-kubernetes-java/vault

Create a deployment and a service

k apply -f show-secret-app-ha.yml

Wait until the load balancer gets an external IP, then print it:

k get svc vkjservice
k get svc vkjservice -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

Store the external IP address in a variable:

SHOW_SECRET_SERVICE_IP=$(k get svc vkjservice -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

This command will open a browser page that presents the secret value:

open http://$SHOW_SECRET_SERVICE_IP/secret

Alternatively, you can access the testing service with curl:

curl http://$SHOW_SECRET_SERVICE_IP/secret

The expected result:

username=giraffe;password=salsa

Configuration review

Quickly review the show-secret-app-ha.yml file to understand the required configuration:

cat show-secret-app-ha.yml

The spec.template.metadata.annotations section defines that:

  • the Kubernetes “devweb-app” role is used,
  • “secret/data/devwebapp/config” is set as the secrets path in Vault,
  • a database-config.txt file will be created with the secrets data,
  • “username=$$$;password=$$$” is used as a template to format the secret data before storing it in the database-config.txt file,
  • “/vault/secrets/” is the default location where the Vault agent creates secret files.
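Based on the bullets above, the annotations section of show-secret-app-ha.yml presumably looks roughly like the sketch below. The annotation names are the standard Vault agent injector ones; verify the details against the actual file:

```yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "devweb-app"
        # The suffix after agent-inject-secret- becomes the file name
        # under /vault/secrets/
        vault.hashicorp.com/agent-inject-secret-database-config.txt: "secret/data/devwebapp/config"
        vault.hashicorp.com/agent-inject-template-database-config.txt: |
          {{- with secret "secret/data/devwebapp/config" -}}
          username={{ .Data.data.username }};password={{ .Data.data.password }}
          {{- end -}}
```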

At this point, we should have an established integration between the external Vault and the Kubernetes cluster.
