Access AWS DocumentDB using dynamic secrets with HashiCorp Vault

Amie Wei
HashiCorp Solutions Engineering Blog
8 min read · Nov 20, 2023

Many thanks to Narek Gevorgyan, Technical Product Manager for DocumentDB at AWS, for his input and collaboration. Amazon DocumentDB is a fully managed, MongoDB-compatible document database service.

What is a dynamic secret and why is it important?

Data breaches can cause major disruptions for enterprises, including reputational damage, financial loss, and legal ramifications. Based on the 2023 HashiCorp State of the Cloud Survey, secret leakage is now the most common security threat. In the same year, the global average cost of a data breach reached 4.45 million U.S. dollars.

Traditional methods of managing secrets, like static secrets, require rotation and manual upkeep, which introduces human error and increases the risk of leakage. For instance, it’s not uncommon for someone to accidentally commit a secret into a code repository, and removing such secrets from Git history can be challenging. In addition, no matter how secure your static secret store is, there is nothing stopping a user from writing the secret down on a post-it note.

Look Ma, I made the news! Oh wait…

With HashiCorp Vault’s database secrets engine, applications can retrieve just-in-time credentials with role-specific permissions, ensuring they only access the data they need in DocumentDB. This reduces the risk of data breaches and adheres to the principle of least privilege.

“Using dynamic secrets: When an application starts it could request its database credentials, which when dynamically generated, will be provided with new credentials for that session. Dynamic secrets should be used where possible to reduce the surface area of credential re-use.”

— OWASP Secrets Management Cheat Sheet

What will we create today?

TL;DR: To generate dynamic secrets for DocumentDB, we will be performing the following:

Act 1 — Deploy AWS Networking + HCP Vault in AWS

Act 2 — Deploy DocumentDB with bastion host

Act 3 — Insert sample record into DocumentDB

Act 4 — Configure Vault to generate just-in-time credentials restricted to a read-only role

Act 5 — Test out the credentials by accessing the sample records in DocumentDB

Read on for instructions to deploy the above architecture. See the full code in the repo on GitHub. ⚠️ The code is not meant for production use!

Prerequisites

Have the following readily accessible. See the links for free sign-up, and let the performance begin!

HashiCorp Cloud Platform*
Terraform CLI
Vault CLI
AWS Account
Docker

*HashiCorp Cloud Platform (HCP) enables access to Terraform Cloud and HCP Vault.

Act 1 — Deploy networking layer and HCP Vault

We are deploying the following networking and Vault resources using Terraform Cloud (TFC); a rough sketch of the underlying HCL follows the list:

  • HashiCorp Virtual Network (HVN) and HashiCorp Cloud Platform (HCP) Vault cluster in AWS
  • AWS VPC, used later to deploy our DocumentDB cluster and bastion host
  • VPC peering between HVN and AWS VPC
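
The full resource definitions live in the repo’s terraform/network-vault directory. In the sketch below, resource names, CIDR ranges, and the Vault tier are illustrative assumptions rather than the repo’s actual values:

# Illustrative sketch only — see terraform/network-vault in the repo for the real code.
resource "hcp_hvn" "main" {
  hvn_id         = "vault-demo-hvn"
  cloud_provider = "aws"
  region         = "us-east-1"
  cidr_block     = "172.25.16.0/20"
}

resource "hcp_vault_cluster" "main" {
  cluster_id = "vault-docdb-demo"
  hvn_id     = hcp_hvn.main.hvn_id
  tier       = "dev"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Peer the HVN with the AWS VPC so Vault can reach resources inside the VPC
resource "hcp_aws_network_peering" "peer" {
  hvn_id          = hcp_hvn.main.hvn_id
  peering_id      = "hvn-to-vpc"
  peer_vpc_id     = aws_vpc.main.id
  peer_account_id = aws_vpc.main.owner_id
  peer_vpc_region = "us-east-1"
}

resource "aws_vpc_peering_connection_accepter" "peer" {
  vpc_peering_connection_id = hcp_aws_network_peering.peer.provider_peering_id
  auto_accept               = true
}

# An hcp_hvn_route (plus VPC route table entries) is also needed so that traffic
# between the HVN and the VPC is actually routed.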

1.1. Update terraform config

In the terraform.tf file inside the terraform/network-vault directory, update the cloud block with your Terraform Cloud organization and workspace name.

Terraform Cloud can manage the lifecycle of your infrastructure and keep your state file secure and versioned.


cloud {
  organization = "your-tfc-org-name"
  workspaces {
    name = "aws-network-hcp-vault"
  }
}

1.2. Run terraform commands to deploy infrastructure to AWS

Export your AWS access keys and HCP Service Principal keys:

export HCP_CLIENT_ID=<your-client-id>
export HCP_CLIENT_SECRET=<your-client-secret>
export AWS_ACCESS_KEY_ID=<your-aws-key-id>
export AWS_SECRET_ACCESS_KEY=<your-aws-access-key>

Run Terraform commands to deploy the infrastructure:

cd terraform/network-vault
terraform init
terraform plan
terraform apply --auto-approve

1.3. VPC Peering with HashiCorp Virtual Network (HVN)

View the HCP Vault console to confirm that the peering connection has been created. Peering allows your database and HCP Vault to reach each other so that credentials can be created and retrieved.

For manual steps, see the HVN Quick Peering guide in the HashiCorp developer docs.

⚠️ During the peering process, make sure you select the correct region where your AWS VPC is deployed.

Act 2 — Deploy DocumentDB cluster with bastion host

To deploy the database securely, we provision a DocumentDB cluster in a private subnet that can be accessed via a bastion host in a public subnet.
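
For context, the core resources in terraform/documentdb look roughly like the sketch below. Names, instance sizes, and credentials are illustrative assumptions, and the referenced variables, security groups, and AMI lookup are assumed to be declared elsewhere in the module:

# Illustrative sketch only — see terraform/documentdb in the repo for the real code.
resource "aws_docdb_subnet_group" "main" {
  name       = "docdb-demo"
  subnet_ids = var.private_subnet_ids   # private subnets declared elsewhere
}

resource "aws_docdb_cluster" "main" {
  cluster_identifier     = "my-docdb-cluster"
  engine                 = "docdb"
  master_username        = "root"
  master_password        = var.master_password
  db_subnet_group_name   = aws_docdb_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.docdb.id]
  skip_final_snapshot    = true
}

resource "aws_docdb_cluster_instance" "main" {
  identifier         = "my-docdb-instance"
  cluster_identifier = aws_docdb_cluster.main.id
  instance_class     = "db.t3.medium"
}

# Bastion host in a public subnet, using the key pair generated in step 2.1
resource "aws_key_pair" "bastion" {
  key_name   = "docdb-bastion-key"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "aws_instance" "bastion" {
  ami                         = data.aws_ami.ubuntu.id   # Ubuntu AMI lookup declared elsewhere
  instance_type               = "t3.micro"
  subnet_id                   = var.public_subnet_id
  key_name                    = aws_key_pair.bastion.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.bastion.id]
}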

2.1. Generate SSH key pair

Generate an SSH key pair and save your key pair locally. You will be prompted to enter a file path to save the key (/Users/yourusername/.ssh/id_rsa). Press “Enter” to accept the default location (~/.ssh/id_rsa).

ssh-keygen -t rsa -b 4096

2.2. Update terraform.tf

In the terraform.tf file inside the terraform/documentdb directory, update the cloud block with your Terraform Cloud organization name and workspace name.

We will manage this set of infrastructure in a separate TFC workspace to keep the design and state modular and promote better change management.


cloud {
  organization = "your-tfc-org-name"
  workspaces {
    name = "aws-documentdb"
  }
}

2.3. Run terraform commands to deploy infrastructure to AWS

cd terraform/documentdb
terraform init
terraform plan
terraform apply --auto-approve

2.4. Get the ssh command from the terraform output

# Example Terraform Output:
ssh_command = "ssh -L 27017:my-docdb-cluster.cluster-abcdefg.us-east-1.docdb.amazonaws.com:27017 ubuntu@1.123.123.123 -i ~/.ssh/id_rsa"

The command is constructed from the DocumentDB cluster address and the bastion host’s public IP. Update the path to your SSH private key depending on where you stored it.

⚠️ You need to run Terraform locally to reference the SSH key stored on your machine. If you choose remote execution in Terraform Cloud, you will need to handle storing and passing the SSH key to Terraform Cloud securely.

Act 3 — Connect to DocumentDB and Insert Records

Let’s validate that we can access DocumentDB and create sample records for testing our dynamic credentials later.

3.1. Access the bastion host and establish ssh tunnel

Use the ssh_command Terraform output from the step above.

ssh -L 27017:<documentdb-cluster-address>:27017 ubuntu@<bastion-public-ip> -i <path-to-your-ssh-private-key>

3.2. Connect to DocumentDB using mongosh

mongosh is the MongoDB shell and can be used with DocumentDB. Please note that although DocumentDB has MongoDB compatibility, not all MongoDB and mongosh functionality is available.

mongosh "mongodb://<username>:<password>@<documentdb-cluster-address>:27017/?ssl=true&retryWrites=false" --tls --tlsCAFile=<path/to/global-bundle.pem>

# Example:
mongosh "mongodb://root:rootpassword@my-docdb-cluster.cluster-abcdefg.us-east-1.docdb.amazonaws.com:27017/?ssl=true&retryWrites=false" --tls --tlsCAFile=global-bundle.pem

⚠️ Use retryWrites=false, as retryable writes are not supported in DocumentDB as of November 2023.

3.3. Once connected using mongosh, create a collection and insert a document

// Switch to (and implicitly create) a database named 'testdb'
use testdb

// Create a new collection named 'collaboration'
db.createCollection('collaboration')

// See what collections have been created
db.getCollectionNames()

// Insert a document into the collection
db.collaboration.insertOne({'partners':'HashiCorp & AWS'})

// See what documents are in the 'collaboration' collection
db.collaboration.find()

Act 4 — Configure the Vault Database secrets engine

Now that the database is set up, we can enable the database secrets engine in Vault and create a dynamic read-only role for DocumentDB. We will be using hvac, a Python client library for Vault.
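
The hvac script baked into the container does the actual configuration. For reference, a roughly equivalent setup expressed with the Terraform Vault provider is sketched below. Only the database mount path and the docdb-read-only-role name are taken from the steps that follow; the connection name, creation statements, and TTLs are assumptions, and the repo’s script remains the source of truth:

# Illustrative sketch of the same Vault configuration using the Terraform Vault provider.
resource "vault_mount" "db" {
  path = "database"
  type = "database"
}

resource "vault_database_secret_backend_connection" "docdb" {
  backend       = vault_mount.db.path
  name          = "docdb"
  allowed_roles = ["docdb-read-only-role"]

  # DocumentDB speaks the MongoDB wire protocol, so the mongodb plugin is used.
  # TLS/CA settings are omitted for brevity; the repo's script handles DocumentDB's CA bundle.
  mongodb {
    connection_url = "mongodb://{{username}}:{{password}}@${var.db_cluster_addr}:27017/admin?tls=true&retryWrites=false"
    username       = "root"                  # DocumentDB master credentials
    password       = var.docdb_root_password
  }
}

resource "vault_database_secret_backend_role" "read_only" {
  backend = vault_mount.db.path
  name    = "docdb-read-only-role"
  db_name = vault_database_secret_backend_connection.docdb.name

  # Grant only the built-in MongoDB 'read' role on testdb
  creation_statements = [
    "{ \"db\": \"admin\", \"roles\": [{ \"role\": \"read\", \"db\": \"testdb\" }] }"
  ]

  default_ttl = 3600  # 1 hour
  max_ttl     = 86400 # 24 hours
}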

4.1. Build Docker image

cd vault-config/
docker build -t db-configure-vault:latest .

4.2. Export Vault environment variables and DocumentDB cluster address

You can find the Vault environment values (cluster address, namespace, and admin token) in the HCP Vault console.

export VAULT_ADDR=<vault-cluster-address>
export VAULT_NAMESPACE=admin
export VAULT_TOKEN=<very-secret-auth-token>
export DB_CLUSTER_ADDR=<endpoint-from-terraform-output>

4.3. Run container

docker run --name db-configure-vault --rm -e VAULT_ADDR -e VAULT_TOKEN -e VAULT_NAMESPACE -e DB_CLUSTER_ADDR db-configure-vault:latest

⚠️ With the --rm flag, the container is removed automatically once it stops running.

Act 5 — Test your new credentials!

Let’s test out the Vault-generated credentials by accessing the sample records in DocumentDB.

5.1. Generate dynamic credentials from Vault

vault read database/creds/docdb-read-only-role

5.2. Access DocumentDB with the dynamic credentials

Within the established SSH tunnel, log in to DocumentDB using your new credentials.

mongosh "mongodb://<username>:<password>
@<documentdb-cluster-address>:27017/admin?ssl=true&retryWrites=false" --tls --tlsCAFile=global-bundle.pem

# example
mongosh "mongodb://v-token-hcp-root-docdb-read-only-ABCDE-1234567:abcdefgHIJK@my-docdb-cluster.cluster-abcdefg.us-east-1.docdb.amazonaws.com:27017/admin?ssl=true&retryWrites=false" --tls --tlsCAFile=global-bundle.pem

Once connected, see all users, including the new user that was created by Vault.

db.getUsers()

Read the collection and document that was inserted.

use testdb
db.collaboration.find()

Attempt to delete a document in testdb. Spoiler alert: you won’t be allowed, as the read-only role does not have delete permissions.

db.collaboration.deleteOne({})

Curtain close

6.1. Summary

We deployed HCP Vault and DocumentDB in AWS and used the database secrets engine to generate just-in-time credentials. We also restricted the permissions tied to those credentials to read-only access. Following the principle of least privilege, Vault helps ensure applications can only perform the allowed actions and only access data when needed.

6.2. Clean Up

The infrastructure we deployed will cost money if we leave it running, so make sure to remove the resources!

  • Exit out of any SSH and database connections
  • Run terraform destroy from each of the following directories in this order: terraform/documentdb/, terraform/network-vault/
  • Note: deleting the DocumentDB cluster may take ~10 minutes depending on cluster size

Additional considerations

I’m using static secrets at the moment. How do I know whether secrets have been leaked in my organization?

  • Use a secret scanning tool to conduct periodic scanning and develop subsequent remediation steps. Note that secrets may not just be hiding in code repositories; they could also be in ticketing tools, emails, etc.
  • HashiCorp recently announced the HCP Vault Radar secrets scanning tool (currently in alpha) to address secret sprawl.

I’m concerned about whether performance can scale with the number of requests sent to Vault, as my organization has many apps that require database credentials.

  • Consider using Vault Agent or Vault Proxy. Both allow client-side caching of responses containing newly created tokens and leased secrets. They also handle renewal of the cached tokens and leases, which reduces the I/O burden on Vault clusters while securing access to leased secrets for the life of a valid token. A minimal Agent configuration with caching enabled is sketched below.
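
This is an illustrative sketch only; the auth method, file paths, and listener address are assumptions to adapt to your environment:

# vault-agent.hcl — illustrative only
vault {
  address = "https://<your-hcp-vault-cluster-address>:8200"
}

auto_auth {
  method "approle" {
    namespace = "admin"   # HCP Vault's top-level namespace
    config = {
      role_id_file_path   = "/etc/vault/role_id"
      secret_id_file_path = "/etc/vault/secret_id"
    }
  }
}

# Cache tokens and leased secrets client-side and keep renewing them
cache {
  use_auto_auth_token = true
}

# Applications talk to this local listener instead of the Vault cluster directly
listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}

An application can then send its requests (for example, reads of database/creds/docdb-read-only-role) to the local listener and let the Agent handle authentication, caching, and lease renewal.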
