Deploying Vault with etcd backend in Kubernetes

Jack Lei
Jun 30, 2019 · 7 min read

I needed a secrets management tool that is highly available, on-premise, cloud-native, light on resources, and easily orchestrated to make my life easier. Is that so much to ask for?

Kubernetes is easily the most popular container orchestration platform, proven in production both in the public cloud and on-premise.

HashiCorp Vault doesn’t really have any competition as a secrets management tool, especially considering we don’t want to lock into a cloud vendor.

HashiCorp recommends using Consul’s key-value store as Vault’s storage backend. Unfortunately, Consul bundles all of its features together. The official Vault Reference Architecture recommends an m5.large on AWS or an n1-standard-4 on GCE. No thank you, just give me the kv store. I’ll go with etcd, a simple and reliable key-value store.

I want this build to be as close as possible to what I would deploy to production. That means I need to think about the lifecycle of the tools, and that’s where Operators come in. An operator oversees the installation, updates, and lifecycle management of the service it manages across the cluster. That is awesome. We will be using the CoreOS etcd operator and the BanzaiCloud Vault operator.

Prerequisites

A few things are assumed before we can start deploying our storage backend and Vault.

A Kubernetes cluster. You can always spin up a local Kubernetes cluster with minikube; installation instructions can be found in the minikube documentation.

minikube start
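
Once minikube is up, confirm that kubectl is pointed at the new cluster before going further:

minikube status
kubectl get nodes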

Helm Tiller initialized.

helm init --upgrade --wait
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
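
Tiller can take a moment to become ready. To block until it is, you can wait on its deployment (tiller-deploy is the deployment name helm init creates in kube-system):

kubectl -n kube-system rollout status deploy/tiller-deploy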

Installation

The Vault operator includes an option to install the etcd operator. There is nothing wrong with that implementation, but I prefer to deploy the storage backend myself.

etcd

The CoreOS etcd-operator Helm chart is stable, and the etcd operator is maintained and supports the latest version of etcd.

Create etcd_operator_values.yaml with the following contents. These are the Helm chart values for the etcd operator.

# etcdOperator
etcdOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.4
# backup spec
backupOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.4
  spec:
    storageType: S3
    s3:
      s3Bucket:
      awsSecret:
# restore spec
restoreOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.4
  spec:
    s3:
      # The format of "path" must be: "<s3-bucket-name>/<path-to-backup-file>"
      # e.g: "etcd-snapshot-bucket/v1/default/example-etcd-cluster/3.2.10_0000000000000001_etcd.backup"
      path:
      awsSecret:
## etcd-cluster specific values
etcdCluster:
  name: etcd-cluster
  size: 3
  version: 3.4.0
  image:
    repository: quay.io/coreos/etcd
    tag: v3.4.0
  enableTLS: false
  # TLS configs
  tls:
    static:
      member:
        peerSecret: etcd-peer-tls
        serverSecret: etcd-server-tls
      operatorSecret: etcd-client-tls

Working combinations:

  • As of initial write-up: v0.9.3 of etcd-operator and v3.3.13 of etcd
  • As of Sept 2019: v0.9.4 of etcd-operator and v3.4.0 of etcd

I will cover backing up and restoring in another write-up. Install the chart.

helm upgrade --install etcd-operator stable/etcd-operator -f etcd_operator_values.yaml

It should take a few seconds; you can run helm status etcd-operator to check.
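
If you prefer watching the pods directly, the chart labels its resources with the release name. This selector is an assumption based on the stable chart conventions of the time, so adjust it if your chart version labels differently:

kubectl get pods -l release=etcd-operator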

Once the etcd operator is running, deploy the etcd cluster.

cat <<EOF | kubectl apply -f -
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "etcd-cluster-vault"
  annotations:
    etcd.database.coreos.com/scope: clusterwide
spec:
  size: 3
  version: "3.4.0"
EOF

Verify that the etcd pods are running with kubectl get po -l app=etcd. At this point, you are done configuring etcd for Vault.
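
Before moving on, you can also ask etcd itself whether the cluster is healthy. This one-off pod runs etcdctl against the client service; the etcd-cluster-vault-client service name follows the etcd-operator convention of <cluster-name>-client, so adjust it if yours differs:

kubectl run etcdctl --rm -it --restart=Never \
  --image=quay.io/coreos/etcd:v3.4.0 \
  --env=ETCDCTL_API=3 \
  --command -- etcdctl --endpoints=http://etcd-cluster-vault-client:2379 endpoint health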

Vault

The CoreOS Vault operator was the go-to when it was still maintained, but that hasn’t been the case for over a year. The BanzaiCloud Vault operator shows great potential.

Let’s begin, add the BanzaiCloud Helm repository if you haven’t already.

helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com

Deploy the Vault operator without the included etcd operator configuration. You may need to override the image tag to use the latest version.

helm upgrade --install --set image.tag=0.4.17 --set etcd-operator.enabled=false vault-operator banzaicloud-stable/vault-operator

Verify that the operator is running with helm status vault-operator and check the pods with kubectl get pods -l app=vault-operator. Then deploy the service account, role, and role bindings.

kubectl apply -f https://raw.githubusercontent.com/banzaicloud/bank-vaults/master/operator/deploy/rbac.yaml

In case the URL changes, use the following; the contents are copied from the BanzaiCloud bank-vaults rbac.yaml.

cat <<EOF | kubectl apply -f -
kind: ServiceAccount
apiVersion: v1
metadata:
  name: vault
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vault-secrets
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vault-secrets
roleRef:
  kind: Role
  name: vault-secrets
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: vault
---
# This binding allows the deployed Vault instance to authenticate clients
# through Kubernetes ServiceAccounts (if configured so).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: default
EOF
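
Whichever route you take, a quick check that the RBAC objects landed in the default namespace:

kubectl get serviceaccount vault
kubectl get role,rolebinding vault-secrets
kubectl get clusterrolebinding vault-auth-delegator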

Deploy the Vault cluster. Make sure to specify the correct address for etcd; api_addr is set to 127.0.0.1 for testing.

cat <<EOF | kubectl apply -f -
apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  size: 2
  image: vault:1.1.3
  bankVaultsImage: banzaicloud/bank-vaults:0.5.2
  # Specify the ServiceAccount where the Vault Pod and the Bank-Vaults configurer/unsealer is running
  serviceAccount: vault
  # Specify how many nodes you would like to have in your etcd cluster
  # NOTE: -1 disables automatic etcd provisioning
  etcdSize: -1
  # Support for distributing the generated CA certificate Secret to other namespaces.
  # Define a list of namespaces or use ["*"] for all namespaces.
  caNamespaces:
  - "vswh"
  # Describe where you would like to store the Vault unseal keys and root token.
  unsealConfig:
    kubernetes:
      secretNamespace: default
  # A YAML representation of a final vault config file.
  # See https://www.vaultproject.io/docs/configuration/ for more information.
  config:
    storage:
      etcd:
        address: http://etcd-cluster-vault:2379
        ha_enabled: "true"
    listener:
      tcp:
        address: "0.0.0.0:8200"
        tls_cert_file: /vault/tls/server.crt
        tls_key_file: /vault/tls/server.key
    api_addr: https://127.0.0.1:8200
    telemetry:
      statsd_address: localhost:9125
    ui: true
EOF

Verify that the Vault pods are up with kubectl get po -l app=vault.
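
The operator takes a little while to spin up the pods and unseal them. If you would rather block until they are ready, kubectl wait does the trick (this assumes the pods carry the same app=vault label used above):

kubectl wait --for=condition=Ready pod -l app=vault --timeout=300s

Installation is complete; now let’s get into the castle.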

Let’s get in

Retrieve the keys to the castle: root token, unseal keys, and generated certificate.

Get the decoded root token and save it to the VAULT_TOKEN variable.

export VAULT_TOKEN=$(kubectl get secrets vault-unseal-keys -o json | jq -r '.data["vault-root"]' | base64 --decode)
echo $VAULT_TOKEN

The unseal keys are not needed here, but this is how you can retrieve them, still base64-encoded.

kubectl get secrets vault-unseal-keys -o json | jq -r '.data | to_entries[] | select(.key | startswith("vault-unseal")) | .value'
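
If your jq is 1.6 or newer, the @base64d builtin can decode them in the same pipeline:

kubectl get secrets vault-unseal-keys -o json | jq -r '.data | to_entries[] | select(.key | startswith("vault-unseal")) | .value | @base64d'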

Get the Vault certificate, save it to disk, and set the VAULT_CACERT variable.

sudo mkdir -p /etc/vault/tls
kubectl get secrets vault-tls -o json | jq -r '.data["ca.crt"]' | base64 --decode | sudo tee /etc/vault/tls/ca.pem
export VAULT_CACERT=/etc/vault/tls/ca.pem
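
To confirm the certificate decoded cleanly, inspect it with openssl:

openssl x509 -in /etc/vault/tls/ca.pem -noout -subject -dates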

Connect to Vault

For simplicity’s sake, open a new terminal and port-forward the vault-0 pod.

kubectl port-forward vault-0 8200:8200
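
The Vault CLI defaults to https://127.0.0.1:8200, which is exactly where the port-forward listens, but setting the address explicitly guards against a leftover VAULT_ADDR from another session:

export VAULT_ADDR=https://127.0.0.1:8200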

You will need the Vault command line tool. If you don’t have it, download it. Try out vault status. You should get something like this.

> vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    5
Threshold       3
Version         1.1.3
Cluster Name    vault-cluster-f98b1f48
Cluster ID      68028c10-0f18-691d-56fa-8c046459cae1
HA Enabled      true
HA Cluster      https://172.17.0.17:8201
HA Mode         active

Alternatively, you can navigate to https://127.0.0.1:8200 in a browser.
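
Before tearing anything down, a quick smoke test proves Vault is actually serving secrets out of etcd. Log in with the root token retrieved earlier, enable a KV engine, and round-trip a secret; the secret/demo path is just an example:

vault login $VAULT_TOKEN
vault secrets enable -path=secret kv-v2
vault kv put secret/demo hello=world
vault kv get secret/demo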

Clean up

kubectl delete Vault vault
kubectl delete -f https://raw.githubusercontent.com/banzaicloud/bank-vaults/master/operator/deploy/rbac.yaml
helm delete --purge vault-operator
kubectl delete EtcdCluster etcd-cluster-vault
helm delete --purge etcd-operator

Next Steps

Vault is up and running; the next step is to configure Vault to authenticate clients and serve secrets. Depending on your infrastructure and your application design, this process can vary. Here is my opinionated approach, implementing the Kubernetes authentication method and the database secrets engine.

If you want to see the potential Vault can have for your applications, here is how I implement CI/CD with Kubernetes and Vault. Features include database credential rotation, non-Vault-aware application containers, and mitigating risk when anything is compromised.

Jack Lei

Currently a Sr. Site Reliability Engineer. Previously a Sr. Software Developer and Sr. DevOps Engineer. https://www.linkedin.com/in/jack-lei