Run HashiCorp Vault on Docker with Filesystem and Consul Backends

Happy devSecOps

(λx.x)eranga
Effectz.AI
8 min read · Nov 12, 2022


HashiCorp Vault

Secrets are tokens used to authenticate and authorize systems: database credentials, SSL certificates, SSH keys, usernames and passwords, AWS IAM credentials, API tokens, Social Security numbers, etc. A given system may contain many such secrets. Most of the time these secrets are managed in an ad-hoc way, stored in plain text in various places such as config files, environment variables, and GitHub repositories. Vault addresses this issue by providing a way to manage secrets centrally. Clients (e.g. client applications) can read/write secrets in Vault. The secrets are stored in Vault's storage backend; various storage backends are supported, such as Consul, Filesystem, In-Memory, PostgreSQL, and S3. Secrets in Vault are encrypted at rest (inside the Vault storage) and in transit (between Vault and the clients that access them). Vault provides fine-grained access control by defining permissions (e.g. who can access which secrets), and it keeps an audit trail showing who accessed which secrets and when.

Vault Dynamic Secrets

Another main feature provided by Vault is Dynamic Secrets. Instead of handing an application a static, long-lived credential, Vault issues short-lived, ephemeral credentials (e.g. a database credential valid for 30 days). Since the credential is only valid temporarily, the risk is reduced if the application is compromised and the credential is leaked. Another advantage is that Dynamic Secrets can assign different credentials to different applications, rather than sharing the same credentials between them. For example, assume there are 10 web applications that access a database. With static credentials, all 10 apps share the same credentials. With Dynamic Secrets, each web app gets its own short-lived credentials; if one app is compromised, it is easy to identify it and revoke the credentials of that single app. Another main feature of Vault is encryption as a service. Read more about Vault's other features here.
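As a sketch, dynamic database credentials are issued through Vault's database secrets engine; the connection and role names below are hypothetical, and the TTL mirrors the 30-day example above:

```shell
# Enable the database secrets engine
vault secrets enable database

# Configure a PostgreSQL connection (hypothetical names/credentials)
vault write database/config/my-postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@postgres:5432/mydb" \
    allowed_roles="web-app" \
    username="vault" password="vault-password"

# Define a role whose credentials expire after 30 days (720h)
vault write database/roles/web-app \
    db_name=my-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=720h max_ttl=720h

# Each application reads its own short-lived credentials
vault read database/creds/web-app
```

Each `vault read database/creds/web-app` call generates a fresh username/password pair with its own lease, which can be revoked individually.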

Vault Key Management

As mentioned above, secrets are stored in Vault's storage backend and are encrypted at rest. Two main keys handle this encryption: the data encryption key (DEK) and the key encryption key (KEK), also known as the Master Key. The secrets in the storage backend are encrypted with the DEK. The DEK itself is also stored in the storage backend, alongside all the other secrets, and is encrypted with the Master Key. To secure the Master Key, Vault uses Shamir's secret sharing algorithm: the Master Key is split into n key shares, and we need k of n (k being the threshold) to reconstruct the Master Key, decrypt the DEK from the storage backend, and bring it into Vault's memory. When initializing Vault, we can specify the number of Master Key shares. By default Vault splits the Master Key into 5 shares, any 3 of which are required to reconstruct the plaintext Master Key.
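The share count and threshold can be set explicitly when initializing Vault; the values below simply restate the defaults:

```shell
# Initialize Vault with 5 key shares, any 3 of which can unseal it (the defaults)
vault operator init -key-shares=5 -key-threshold=3
```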

The process of reconstructing and obtaining the plaintext Master Key is known as Unsealing the Vault. Unsealing is done by running vault operator unseal or via the API. When unsealing, the shares are added one at a time (in any order) until enough are present to reconstruct and decrypt the Master Key. The decrypted Master Key is then held in memory to encrypt/decrypt the DEK. When a Vault server starts, it starts in a Sealed state. Once a Vault node is unsealed, it remains unsealed until one of these things happens: 1) Vault is resealed via the API, 2) the Vault server is restarted, or 3) Vault's storage layer encounters an unrecoverable error. There is also an API to seal the Vault; sealing throws away the Master Key in memory and requires another unseal process to restore it.
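With the default configuration the seal/unseal cycle looks like this; each unseal command takes one of the key shares printed by vault operator init:

```shell
# Repeat with three different key shares to reach the threshold
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>

# Check whether the node is sealed
vault status

# Throw away the in-memory Master Key and seal Vault again
vault operator seal
```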

Unsealing the Vault with Shamir's key shares is a manual process (manual unsealing) and is the default unsealing method. It works quite well, but it can be challenging when you have many Vault clusters, as there are then many different key holders with many different keys. Orchestrating the unsealing of a Vault node that happened to restart, for example, requires a lot of coordination and isn't ideal in an automated world. For that reason, Vault provides ways to automate the unsealing process (Auto Unseal) using a cloud provider's KMS solution or a hardware HSM. Currently Vault supports auto unseal with AliCloud KMS, AWS KMS, Azure Key Vault, Google Cloud KMS, and OCI KMS. This feature lets operators delegate the responsibility of securing the unseal key from users to a trusted device or service. At startup, Vault connects to the device or service implementing the seal and asks it to decrypt the Master Key read from storage.

In the rest of the post I'm going to discuss installing and running Vault. I have installed Vault with Docker using the Filesystem and Consul backends. The deployments related to this post are available in gitlab. Please clone the repo and follow along.

Vault with Filesystem Backend

The Filesystem storage backend stores Vault's data on the filesystem using a standard directory structure. It can be used for durable single-server situations, or for local development where durability is not critical. Following is the Dockerfile to dockerize Vault with the Filesystem backend.
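The original Dockerfile is not reproduced here; a minimal sketch, assuming the official vault base image and a config file named vault-config.hcl, could look like this:

```dockerfile
# Sketch of a Vault Dockerfile using a Filesystem backend (image tag is an assumption)
FROM vault:1.12.1

# Copy the Vault config into the image
COPY vault-config.hcl /vault/config/vault-config.hcl

# Directory used by the Filesystem backend
RUN mkdir -p /vault/data

EXPOSE 8200

ENTRYPOINT ["vault", "server", "-config=/vault/config/vault-config.hcl"]
```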

Here I'm using the following Vault config file, which defines the Filesystem backend configuration. I configured Vault to use the Filesystem backend, defined the listener for Vault, disabled TLS, and enabled the Vault UI. Read more about configuring Vault in the docs.
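The config file itself is not shown above; a minimal sketch matching the description (Filesystem backend, TLS disabled, UI enabled — the storage path is an assumption) would be:

```hcl
# Filesystem storage backend; Vault data is written under /vault/data
storage "file" {
  path = "/vault/data"
}

# Listener with TLS disabled (for local experiments only)
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

# Enable the web UI
ui = true
```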

Following is the docker-compose.yaml deployment to deploy Vault. The container attempts to lock memory to prevent sensitive values from being swapped to disk, and as a result must be given the IPC_LOCK capability (--cap-add=IPC_LOCK with docker run). Since the Vault binary runs as a non-root user, setcap is used to give the binary the ability to lock memory. VAULT_API_ADDR defines the HTTP API address of Vault.
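The compose file is not included above; a sketch consistent with the description (image name and volume paths are assumptions) might be:

```yaml
version: '3'

services:
  vault:
    build: .
    image: vault-fs:latest          # assumed image name
    ports:
      - "8200:8200"
    environment:
      VAULT_API_ADDR: "http://0.0.0.0:8200"
    cap_add:
      - IPC_LOCK                    # allow Vault to mlock memory
    volumes:
      - ./data:/vault/data          # persist Filesystem backend data
```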

Following is how to run the Vault Docker container with docker-compose.yaml and interact with it. Here I have initialized Vault, unsealed it, and added secrets. When adding secrets, I used Vault's kv secrets engine, which stores arbitrary secrets within the configured physical storage. Key names must always be strings.
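A sketch of that workflow (the secret path and values are illustrative):

```shell
# Start the Vault container
docker-compose up -d

# Point the Vault CLI at the server
export VAULT_ADDR='http://127.0.0.1:8200'

# Initialize: prints 5 unseal keys and the initial root token
vault operator init

# Unseal with any 3 of the 5 key shares
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>

# Log in with the root token, then write and read a kv secret
vault login <root-token>
vault kv put secret/db username=admin password=s3cr3t
vault kv get secret/db
```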

Vault features a web-based user interface (UI) that enables you to unseal Vault, authenticate, and manage policies and secrets engines. Following are some functions of the Vault web UI. Here I have authenticated to the web UI with the root token. The UI can be accessed at http://<docker host ip>:8200 from the host machine; in my scenario it's http://192.168.64.75:8200, where 192.168.64.75 is the Docker host IP.

Vault with Consul Backend

The Vault Filesystem backend will not scale beyond a single server, so it cannot take advantage of Vault's high availability. There are a number of other storage backends, like the Consul backend, designed for distributed systems. The Consul storage backend persists Vault's data in Consul's key-value store. In addition to providing durable storage, this backend also registers Vault as a service in Consul with a default health check. With a Consul cluster, we can run a highly available Vault service.

To run Vault with the Consul storage backend, I first dockerized Consul. Following is the Dockerfile I used, along with the Consul config file consul-config.json, which defines the configuration of Consul.
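Neither file is reproduced above; a minimal sketch, assuming the official consul image and a single-node server, could be:

```dockerfile
# Sketch of a Consul Dockerfile (image tag is an assumption)
FROM consul:1.13.3

# Copy the Consul configuration into the image
COPY consul-config.json /consul/config/consul-config.json

EXPOSE 8500

ENTRYPOINT ["consul", "agent", "-config-dir=/consul/config"]
```

with a consul-config.json along these lines, for a single-node server with the UI enabled:

```json
{
  "datacenter": "dc1",
  "data_dir": "/consul/data",
  "server": true,
  "bootstrap_expect": 1,
  "ui_config": { "enabled": true },
  "client_addr": "0.0.0.0"
}
```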

Next I dockerized Vault with the Consul storage backend configs. Following are the Dockerfile and the Vault config file, which defines the Consul storage backend configuration. The path key in the config file defines the path in Consul's key/value store where the Vault data will be stored. "address": "consul:8500" is the address of Consul (there is a depends_on attribute referencing consul in the docker-compose entry for Vault).
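The config file is not shown above; a sketch matching the description (the KV path is an assumption) would be:

```hcl
# Consul storage backend; "consul" resolves to the Consul container on the compose network
storage "consul" {
  address = "consul:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

ui = true
```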

I built the Docker images of Consul and Vault and ran them with a docker-compose deployment. Following is the docker-compose.yaml used to deploy Consul and Vault.
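A sketch of such a compose file (service build paths and image names are assumptions):

```yaml
version: '3'

services:
  consul:
    build: ./consul
    image: consul-vault:latest       # assumed image name
    ports:
      - "8500:8500"

  vault:
    build: ./vault
    image: vault-consul:latest       # assumed image name
    ports:
      - "8200:8200"
    environment:
      VAULT_API_ADDR: "http://0.0.0.0:8200"
    cap_add:
      - IPC_LOCK
    depends_on:
      - consul                       # Vault's storage backend points at consul:8500
```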

Once Vault is deployed with the Consul backend, I can interact with it as discussed previously (in the Filesystem backend section): initializing Vault, unsealing it, adding secrets, etc.

Similar to Vault, Consul also provides a web interface, where we can view the Vault health status and more. Until Vault is initialized and unsealed, its status is shown as standby. The Consul web UI can be accessed at http://<docker host ip>:8500 from the host machine; in my scenario it's http://192.168.64.75:8500, where 192.168.64.75 is the Docker host IP.

Reference

  1. https://www.melvinvivas.com/secrets-management-using-docker-hashicorp-vault
  2. https://www.bogotobogo.com/DevOps/Docker/Docker-Vault-Consul.php
  3. https://www.contino.io/insights/hashicorp-vault
  4. https://aws.amazon.com/blogs/apn/securing-and-managing-secrets-with-hashicorp-vault-enterprise/
  5. https://www.kloia.com/blog/comparison-of-unseal-options-in-hashicorp-vault
  6. https://www.bogotobogo.com/DevOps/Docker/Docker_Kubernetes_Vault_Consul_minikube_Auto_Unseal_Vault_Transit.php
  7. https://johansiebens.dev/posts/2020/12/installing-hashicorp-vault-on-digitalocean-with-hashi-up/
