Manage your Database Accounts with Spring Cloud Vault Config

Marco Amann
Digital Frontiers — Das Blog
Jun 11, 2021 · 11 min read

Spring Cloud Vault Config allows your Spring application to retrieve configuration data from HashiCorp Vault. In addition, you can let Vault manage the database accounts for your application.

This is achieved with a secrets backend called ‘database’ that allows Vault to create database accounts on the fly for each instance requesting access. In this article we discuss why this might be something you want and how to use it with Spring Cloud Vault Config.

Why you should care

Credentials are sensitive information, regardless of the way they are managed. Therefore, reducing the impact of compromised credentials and increasing the burden on an attacker trying to obtain them are amongst the few ways we can approach the problems related to credentials.

Really nice approaches that work for human accounts, such as interactive confirmations or multi-factor authentication, are hard to implement for accounts used by machines.

One way to decrease the impact of lost credentials is to limit their lifetime: Stolen credentials whose lifetime has expired are worthless (hopefully, looking at you, password rotation requirements and ‘Password-7’, ‘Password-8’, …).

In addition to limiting the time credentials can live, one can limit the number of permitted uses. If a one-time token is already spent, stealing it afterwards is pointless.

The other stated option, increasing the difficulty for the attacker to gain access to credentials, can be approached in many ways. Saving credentials in files may be riskier than keeping them only in memory: not because it is impossible for an attacker to poke around in a running application’s memory, but because it is way easier to send around a file full of credentials. Or, god forbid, to push it to GitHub or Pastebin.

So wouldn’t it be nice to have a way to automatically create a separate account for each application instance that has credentials only living in memory?

In the next section, we have a look at how Vault tries to solve this, and then we walk through a sample application to learn how you can avoid ending up on such a list.

How Vault tries to solve this

Vault has the concept of Secrets Engines: components that manage secrets on its behalf. These encompass an integrated CA, a key-value store and (what we will use here) a database secrets engine.

When an authenticated entity, say an instance of your backend application, requests database access and is authorised to do so, this secrets engine creates a database user and password with the corresponding lifetimes and access rights. This request and creation process is visualized in the following graphic.

Ask Vault for a database account

If configured, the credentials have a limited lifetime and expire quickly if not renewed. This renewal process exists for the tokens used to communicate with Vault as well as for the database credentials. These refresh mechanisms are shown below.

Token refresh mechanism

This initial authentication is normally achieved by providing some sort of token, which we will discuss later. The important part here is that this token is used to get access to other components in the system, allowing for tight control over what each component is allowed to access.

Another nice concept worth mentioning here is the token hierarchy: if a token is considered compromised, it can be revoked, and this also revokes all other tokens it spawned.

So imagine one of your CI/CD systems deploying the backend application has been breached. Using the token hierarchy, you can revoke all tokens created by this system, for example the tokens the instances use to initially authenticate with Vault. Their child tokens get revoked as well, so all access the affected instances have, including the database access, is gone within a few seconds.
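For illustration, revoking such a token, and with it every child token it created, is a single CLI call; the token value here is just a placeholder:

# Revokes the given token and, by default, all of its child tokens
vault token revoke s.XXXXXXXXXXXXXXXX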

Of course, this requires you to be able to handle the consequences of a part of your production environment losing access to everything.


Let’s build this

We need access to a Vault installation and a database. Both our demo application and Vault need to be able to access the database.

Since running Vault in dev mode is trivial, we will go that route. Be aware that dev mode does not persist your settings.

Infrastructure

I provided a Docker Compose file that sets up Vault and Postgres for you.

Link
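If you prefer to write it yourself, a roughly equivalent Compose file might look like the following sketch; image tags and values are placeholders and the file in the repo may differ. The service name db and the password have to match the Vault connection config used later:

version: "3"
services:
  vault:
    image: vault            # the official image starts in dev mode; older/other images may need the command "server -dev"
    cap_add:
      - IPC_LOCK
    environment:
      VAULT_DEV_LISTEN_ADDRESS: "0.0.0.0:8200"
    ports:
      - "8200:8200"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: supersecure   # referenced in the Vault connection JSON below
    ports:
      - "5432:5432"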

Simply bring up the docker environment with

docker compose up

and watch Docker do its thing. When everything is running, Vault will have printed a root token. Write that down somewhere, as you will need it later.

If you want, you can log in to the web UI at http://127.0.0.1:8200/ using this token. Further, the database should now be accessible from the host; if you have a Postgres client installed, go ahead and test it with

psql -h 127.0.0.1 -U postgres

and the password set in the docker compose file.

That’s it for the infrastructure. Please don’t restart Vault, as all data is stored in memory only.

Setting up Vault

We need to tell Vault about the Postgres database, so we need to create a secrets engine of type ‘database’.

Instead of using the clumsy UI, let’s use the CLI. Before you can interact with Vault, you need to log in. Set the Vault address and log in with the root token I told you to write down earlier.

export VAULT_ADDR='http://127.0.0.1:8200'
vault login ████████████████████████

Now we can create the Secrets Engine. You can find the full files in the sample repo under infra/vault. The JSON used is the following:

{ ...
"allowed_roles": "quotes_readonly",
"connection_url": "postgresql://{{username}}:{{password}}@db:5432/postgres?sslmode=disable",
"username": "postgres",
"password": "supersecure",
... }

allowed_roles defines the roles (created later) that are allowed to use this database connection. The account used here has to be able to create new accounts with the privileges required by the allowed roles. We further disable TLS for the connection, since certificate management is out of scope for this post.
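At a minimum, the config also needs a plugin_name telling Vault which database plugin to use. A complete file along these lines might look as follows; the version in the repo may contain additional fields:

{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "quotes_readonly",
  "connection_url": "postgresql://{{username}}:{{password}}@db:5432/postgres?sslmode=disable",
  "username": "postgres",
  "password": "supersecure"
}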

Enable and create the secrets engine with the following commands.

vault secrets enable database
vault write database/config/postgresql @vault_postgres_connection.json

Now we can create the roles mentioned above.

{
  "db_name": "postgresql",
  "creation_statements": "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON public.quotes TO \"{{name}}\";",
  "default_ttl": "2m",
  "max_ttl": "0"
}

The creation statement defines exactly what Vault should do in the database; the same is possible for revocation and renewal. Here, we define the database permissions that accounts created with this role will have. Feel free to be more or less restrictive.

The default_ttl is the TTL of the credentials minted for this role. These can be renewed up until max_ttl is reached. Pay attention: if you set a max_ttl, your application will eventually have to request completely new credentials. This is not supported by Spring Cloud Vault Config and will lead to your application not being able to access the database after some time. Nasty bugs ensue.

Create the role named “quotes_readonly”; this name will later be used by our service.

vault write database/roles/quotes_readonly @vault_postgres_role.json
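If you want to sanity-check the role before wiring up Spring, you can ask Vault for a set of credentials manually (your current token, here still the root token, needs read access to this path). The response contains a generated username and password together with a lease_id and the lease duration:

vault read database/creds/quotes_readonly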

Now we need to define policies to allow the Spring application to use the quotes_readonly database role.

If you are impatient and just want to see things run, skip the rest of this section and use the root token for authentication. Don’t do this for serious deployments for obvious reasons.

We create two policies: a deployment policy that is allowed to create new tokens and a quote_service policy that is allowed to access the database.

The deployment policy is quite simple:

path "auth/token/create" { capabilities = ["create", "update"] }

The quote_service policy is equally simple; we state that tokens with this policy attached are allowed to use the quotes_readonly role.

path "database/creds/quotes_readonly" { capabilities = ["read"] }

After you have created the policies with the following commands, we can mint some tokens.

vault policy write quote_service quote_service_policy.hcl
vault policy write deployment deployment_policy.hcl

First, let’s create a token that would be used in your deployment mechanism:

vault token create -policy=deployment -policy=quote_service

This token has both policies attached to it. This is due to a requirement of Vault: a token can only create child tokens whose policies are a subset of its own. So if we want this token to be able to create quote_service tokens, we need to give it that policy as well. This does not compromise security: if we can create tokens for a policy, there is no point in not having that policy ourselves (even more so since Vault does not force you onto a concept of user or service accounts).

Note down the created token somewhere for later use.

Using the deployment token, we can then create one-time tokens to bootstrap the Spring application. These tokens are wrapped, so that Vault can make sure a token has not been spent yet. Therefore, the application needs to exchange the initial bootstrap token for a “real” one as a first step. This is shown below.

Initial token exchange before using the database
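If you are curious what this exchange looks like outside of Spring, you can unwrap such a wrapping token manually with the CLI; note that this spends the one-time token, so the application can no longer use it afterwards:

vault unwrap s.XXXXXXXXXXXXXXXX   # the wrapping token, a placeholder here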

Now let us move on to the other components.

Setting up Postgres

In infra/postgres I provide a script to create the required tables. Don’t forget to run it, since the Spring application won’t have the privileges to do so for you.

Writing the Spring Application

Some boilerplate
In the repo I created an entity, a JpaRepository for it, as well as a RestController using it to later show how this works. Feel free to pursue another approach.
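For orientation, here is a minimal sketch of what that boilerplate might look like, written in Kotlin; the class, column and endpoint names are my assumptions and the code in the repo may differ. It only relies on the quotes table that the quotes_readonly role is granted SELECT on:

import javax.persistence.Entity
import javax.persistence.Id
import javax.persistence.Table
import org.springframework.data.jpa.repository.JpaRepository
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

// Maps the public.quotes table the role is granted SELECT on
@Entity
@Table(name = "quotes")
class Quote(
    @Id val id: Long = 0,
    val text: String = ""
)

// Spring Data generates the implementation at runtime
interface QuoteRepository : JpaRepository<Quote, Long>

// Exposes the quotes via HTTP so we can verify the database connection later
@RestController
class QuoteController(private val repository: QuoteRepository) {
    @GetMapping("/quotes")
    fun quotes(): List<Quote> = repository.findAll()
}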

Spring Cloud Vault Config dependencies
I use the following two dependencies to enable automatic configuration of our datasource using Vault.

implementation("org.springframework.cloud:spring-cloud-starter-vault-config:3.0.2")
implementation("org.springframework.cloud:spring-cloud-vault-config-databases:3.0.2")

Pay attention that spring-cloud-starter-vault-config conflicts with spring-vault-core (not used here, but available and tempting) in the default configuration: it tries to register some beans twice.

For Postgres, add the driver:

implementation("org.postgresql:postgresql")

Now only one file is left: the configuration. The rest of the magic happens under the hood.

Configuration
Let’s start with the JPA configuration. To prevent Spring from complaining about missing privileges, we disable usage of automatic DDL features here.

The only important part is the datasource section. We set the URL of the Postgres database, but we do not set the username and password parameters. These are injected by Spring Cloud Vault Config later.

jpa:
  hibernate:
    ddl-auto: none
  database-platform: org.hibernate.dialect.PostgreSQLDialect
datasource:
  url: "jdbc:postgresql://127.0.0.1:5432/postgres"
  # username injected by cloud config
  # password injected by cloud config

Now let’s have a look at the juicy part, the configuration for Vault.

Host, port and scheme are boring but necessary to tell Spring where to find Vault. In production environments you should obviously not use http.

authentication defines the authentication type; we use CUBBYHOLE to have one-time tokens. The token to use is defined by the token field, and you most likely want to inject this property into the file from somewhere else. More on that later.

We disable the KV store to prevent Vault from complaining about permissions and enable the database secrets engine.

The important part here is the name of the role, which coincides with the one used earlier. The properties defined at the end map to those left blank in the configuration above.

cloud.vault:
  host: 127.0.0.1
  port: 8200
  scheme: http
  authentication: CUBBYHOLE
  token: s.████████████████████████
  kv:
    enabled: false
  database:
    enabled: true
    role: quotes_readonly
    backend: database
    username-property: spring.datasource.username
    password-property: spring.datasource.password
config.import: vault://
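As mentioned above, you probably do not want to hard-code the token in the file. One option (an assumption on my side, not the only way) is to leave the property out and supply it through an environment variable, which Spring’s relaxed binding maps onto spring.cloud.vault.token; assuming the Gradle setup from above, that could look like:

SPRING_CLOUD_VAULT_TOKEN=s.XXXXXXXXXXXXXXXX ./gradlew bootRun   # placeholder wrapping token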

Take it for a test drive

To start the application (or rather to have it start correctly), we need to have a valid token.

I hope you saved the deployment token somewhere; we can now use it to log in to Vault.
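Logging in works the same way as before, only with the deployment token instead of the root token (placeholder shown):

vault login s.XXXXXXXXXXXXXXXX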

Now we can create a new one-time token for the application to start. For this, we define the required policy and a TTL within which the application has to exchange the token for a new one (done by Spring Cloud Vault Config for you).

vault token create -wrap-ttl="5m" -policy=quote_service

Just plug the wrapping token value into the token field of the configuration and you are good to go. Using the provided HTTP endpoint, you should be able to verify that the connection was successful.
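Assuming the controller from the earlier sketch and the default Spring Boot port, that verification could be as simple as:

curl http://127.0.0.1:8080/quotes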

Behind the scenes
Let’s have a quick look behind the scenes, shall we?
First, let’s inspect the users in the database. Using \du, we can see a user created for our application. It has a password with limited validity, and you will notice that it vanishes shortly after you kill the service.

Postgres users (I really hate screenshots of text, but Medium won’t let me format this correctly)
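If you prefer text over a screenshot, the same listing can be produced directly from the host, for example with:

psql -h 127.0.0.1 -U postgres -c '\du'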

If you are interested in the actions performed on Vault, e.g. the auto-renewal by Spring, you can enable and observe the audit log.

For this, execute the following in the Vault container:

vault audit enable file file_path=/vault/logs/vault_audit.log
apk add jq
tail -f /vault/logs/vault_audit.log | jq '.request.path'

We use jq here to get rid of the verbose JSON logging and focus on the relevant part of the JSON objects.

If you want, you can even manually revoke the lease in the CLI (or web UI) to cause a deauthentication of the database connection. Note however that this does not cause the Spring app to be deauthenticated completely, since it authenticated with a different token. If we were to revoke that one, all access would be cut off.
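From the CLI, one way (among others) to do that is to revoke all leases under the role’s path, which makes Vault run the corresponding revocation on the database side:

vault lease revoke -prefix database/creds/quotes_readonly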

One step further

So how do we get the initial token into the application? You can select from a large array of auth methods, like letting Kubernetes inject it for you or letting EC2 solve it. There will probably be a follow-up post regarding these options.

Conclusion

Using Spring Cloud Vault Config for our database access has an intriguing benefit: credentials that only live in the running application. This makes accidentally leaking them quite improbable, but of course it does not guarantee an unbreakable system.

It is worth keeping in mind that you have to trust Vault absolutely, from the perspective of both the application and the database. Further, relying on Vault to keep your accounts alive means that if your Vault goes down, your production environment comes to a halt. This may be undesirable.

Be aware of how Vault implements high availability before using it in production, and always have a way (or someone) to restart and unseal it at 4am.

For your reference, all interactions are shown below.

Thanks for reading! If you have any questions, suggestions or critique regarding the topic, feel free to respond or contact me. You might be interested in the other posts published in the Digital Frontiers blog, announced on our Twitter account.
