Public Key Infrastructure Integration with Vault Secrets Manager

Michael Sutiono
Cermati Group Tech Blog
Nov 18, 2019

In the previous article, we discussed how we improved Cermati’s credentials management workflow by implementing our own public key infrastructure. In one of its sections, we mentioned Vault as one of the edge services we use for certificate validation. In this article, we’re going to share our experience integrating Vault into our public key infrastructure.

Illustration of vault number 111 from the Fallout game series.

There are quite a lot of things popularly named “vault”, such as the vaults from Fallout game series or even Ansible Vault — which we also use for some of our locally-stored secrets. The Vault we’re talking about in this article is HashiCorp Vault. It is a service that manages secrets (e.g. database credentials, cloud console access, sensitive environment variables, etc.) by securely storing (using encryption) and strictly controlling access to them.

Vault can manage various types of secrets through components known as secrets engines. Among others, it can manage Active Directory accounts, public cloud provider console access, databases, key/value storage, RabbitMQ, and even SSH.
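For illustration, here is how a key/value secrets engine can be enabled and used. The mount path and secret values below are made up for this example, and a running, unsealed Vault with a suitably privileged token is assumed:

```shell
# Enable a key/value secrets engine at a custom mount path
# (the path "team/kv" is illustrative).
vault secrets enable -path=team/kv kv

# Write and read back a secret under that mount.
vault kv put team/kv/app db_password=s3cr3t
vault kv get team/kv/app
```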

This secrets manager also supports several authentication methods to control access to secrets engines, such as AppRole, public cloud-provided identity, JWT, Kubernetes, GitHub, LDAP, Okta, RADIUS, username-password, tokens, and TLS certificates. Most of those authentication methods are used for acquiring a Vault access token during the initial connection. Once this token has been acquired, it will be used for further Vault access.

Currently, we’re only enabling the TLS certificates authentication method on our setup. We use the Common Name field of the accessor’s certificate to identify who or which service is accessing the Vault. Its value can be either an email address of a team member or a service namespace.

The Architecture

The architecture of Cermati’s Vault production setup.

For the production environment, we set up our Vault to use etcd as its storage back-end, enabled certificate verification for both Vault and etcd access, and ran them in high availability mode. We put all of this behind a TCP load balancer instead of an HTTP load balancer so that the accessor’s certificate can be forwarded to Vault, where it is validated and used as an identifier of the accessor. The accessor of our Vault can be a team member or a service running on a container or VM.

Authentication and Authorization in Vault

Before configuring the Vault authentication method, we defined some important fundamentals first: the Vault role names along with their access policies, and the list of certificate Common Names that are allowed to log in as each particular Vault role.

We divide our Vault roles into two main roles, admin and staff. The admin role is granted to team leads and above, while the staff role is granted to the rest of the team members. The staff role only has read and write permissions to the secrets engine, while the admin role can also maintain secrets engines, i.e. create a new one or update the configuration of an existing one, under a certain predefined path for their team to use.

Once authenticated, a client is mapped to a role and granted access to specific secrets engines with a particular access level, based on the policy definitions attached to that role. A secrets engine in Vault is mounted on a URL-like path, and this path is what a policy definition refers to. By default, a Vault policy denies all access in the system, so an empty policy grants no access permission. Below is a sample policy definition that grants all access to any secrets engine mounted under a path called secret.

policy.hcl

path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

Configuration Case Study

To give a clearer picture of how we configure our Vault access, in this section we’ll create the configurations needed to administer access to a secrets engine based on the convention we currently use in Cermati.

This case study assumes that the Vault server has been configured with TLS enabled, and that it has been initialized and unsealed. The secrets engine configuration is also assumed to be properly set up.

Suppose we are granting access to a new PostgreSQL database called application hosted on AWS RDS to a new team called avengers, and that database is going to be used by the masterfinance product in the prod environment. The avengers team is led by Steve, and his team members are Tony, Bruce, Clint, and Natasha.

We can start by defining the Vault role names as cermati-avengers-masterfinance-prod and cermati-avengers-admin. We also use those names as policy names to simplify things and map each role name to the respective policy name. Multiple policies may be attached to a role, but in most cases we only attach one policy per role.

Then, we define who can log into each of those roles. For the cermati-avengers-masterfinance-prod role, we can include all the team member email addresses (embedded as Common Name in the team member certificate) and the masterfinance application namespace (service-cermati-avengers-masterfinance-prod), while for cermati-avengers-admin we only include Steve’s email address.

The last thing to define is the policy configurations. Below are the configuration samples for each policy.

cermati-avengers-masterfinance-prod.hcl

# For more detailed information, refer to these Vault documentations:
# - https://www.vaultproject.io/docs/concepts/policies.html
# - https://www.vaultproject.io/api/secret/databases/index.html
# - https://www.vaultproject.io/api/system/index.html

# Database-specific configuration.
# As of Vault 1.1, '+' denotes any number of characters bounded within
# a single path segment (a per-segment wildcard).
path "sys/mounts/cermati/avengers/db/rds/postgres/masterfinance/+/prod" {
  capabilities = ["read", "list"]
}
path "cermati/avengers/db/rds/postgres/masterfinance/+/prod/config/*" {
  capabilities = ["read", "list"]
}
path "cermati/avengers/db/rds/postgres/masterfinance/+/prod/roles/*" {
  capabilities = ["read", "list"]
}
path "cermati/avengers/db/rds/postgres/masterfinance/+/prod/creds/*" {
  capabilities = ["read", "list"]
}

# Administration Transparency ACL
path "sys/auth" {
  capabilities = ["read", "list"]
}
path "sys/mounts" {
  capabilities = ["read", "list"]
}
path "sys/policies/acl" {
  capabilities = ["read", "list"]
}
path "sys/policies/acl/*" {
  capabilities = ["read"]
}
path "sys/policy" {
  capabilities = ["read", "list"]
}
path "sys/policy/*" {
  capabilities = ["read"]
}
path "sys/health" {
  capabilities = ["read"]
}

cermati-avengers-admin.hcl

# Policy Management
path "sys/policies/acl/cermati-avengers-*" {
  capabilities = ["read", "create", "update", "list"]
}
path "sys/policy/cermati-avengers-*" {
  capabilities = ["read", "create", "update", "list"]
}

# Certificate Authentication Role Management
path "auth/cert/certs/cermati-avengers-*" {
  capabilities = ["read", "create", "update", "list"]
}

# Secrets Engine Management
path "sys/mounts/cermati/avengers/*" {
  capabilities = ["read", "create", "update", "list"]
}

# Secrets Engine Access
path "cermati/avengers/*" {
  capabilities = ["read", "create", "update", "list"]
}

# Administration Transparency
path "sys/auth" {
  capabilities = ["read", "list"]
}
path "sys/mounts" {
  capabilities = ["read", "list"]
}
path "sys/policies/acl" {
  capabilities = ["read", "list"]
}
path "sys/policies/acl/*" {
  capabilities = ["read"]
}
path "sys/policy" {
  capabilities = ["read", "list"]
}
path "sys/policy/*" {
  capabilities = ["read"]
}
path "sys/health" {
  capabilities = ["read", "sudo"]
}

After everything is defined, we must configure several environment variables before uploading the configurations. For the first-time setup, we can log in to Vault using the root token acquired when initializing Vault.

export VAULT_ADDR=<vault_host_address>
export VAULT_CACERT=<ca_certificate_used_in_vault_server>
export VAULT_CLIENT_CERT=<member_certificate_issued_by_same_CA>
export VAULT_CLIENT_KEY=<private_key_file_of_VAULT_CLIENT_CERT>
export VAULT_TOKEN=<your_vault_root_token>
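With those variables exported, a quick sanity check could look like this, assuming the Vault server is reachable from the client machine:

```shell
# Verify that the client can reach Vault over TLS and see its
# seal state and HA mode.
vault status

# Verify that the token in VAULT_TOKEN is valid and inspect the
# identity and policies attached to it.
vault token lookup
```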

The first thing to configure is the policies. We need to save the previously defined policies as .hcl files, which will then be uploaded to the Vault server using the vault CLI tool or the Vault REST API. We’ll use the vault CLI to simplify things:

$ vault policy write cermati-avengers-masterfinance-prod cermati-avengers-masterfinance-prod.hcl
$ vault policy write cermati-avengers-admin cermati-avengers-admin.hcl

Next, we configure the authentication endpoint and the Vault roles. We can configure those by executing these commands:

# Enable the certificate authentication endpoint
# Run this once
$ vault auth enable cert

# Configure the authentication for cermati-avengers-masterfinance-prod
$ vault write auth/cert/certs/cermati-avengers-masterfinance-prod \
    display_name=cermati-avengers-masterfinance-prod \
    policies=cermati-avengers-masterfinance-prod \
    certificate=@our-internal-ca.pem \
    ttl=3600 \
    allowed_common_names="steve@cermati.com, tony@cermati.com, bruce@cermati.com, clint@cermati.com, natasha@cermati.com, service-cermati-avengers-masterfinance-prod"

# Configure the authentication for cermati-avengers-admin
$ vault write auth/cert/certs/cermati-avengers-admin \
    display_name=cermati-avengers-admin \
    policies=cermati-avengers-admin \
    certificate=@our-internal-ca.pem \
    ttl=3600 \
    allowed_common_names=steve@cermati.com

After all of the above configurations have been uploaded, we can use a certificate bundle that has its Common Name registered to interact with the secrets engine endpoint defined in the policy definition file.
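For example, a team member whose Common Name is registered could authenticate and fetch database credentials along these lines. The certificate file names and the readonly database role name are hypothetical, and the path follows the convention above with the application database filling the wildcard segment:

```shell
# Authenticate against the cert auth endpoint using a member certificate
# whose CN is listed in allowed_common_names.
vault login -method=cert \
  -client-cert=steve.crt -client-key=steve.key \
  name=cermati-avengers-masterfinance-prod

# Request dynamic PostgreSQL credentials from the secrets engine.
# "readonly" is a hypothetical database role name.
vault read cermati/avengers/db/rds/postgres/masterfinance/application/prod/creds/readonly
```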

Configuration Automation

There is quite a lot to do to manage access for just one service. We can see that several configurations can be turned into templates and integrated into a scaffolding workflow so they are generated automatically. For instance, the policy files share the same structure across services; the only difference between them should be the service name, which makes them a good candidate for templating. Another thing we can do to help with the automation is to put the list of allowed Common Names in a configuration file and upload it to a version control service like GitHub to make it easier to track.

We integrated this configuration automation to the pkictl tool that we mentioned in the previous article. The main idea of the automation is it will read the policy files and a configuration file, parse the setup information, then call the equivalent Vault REST API of the Vault CLI commands mentioned in the case study.
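As a sketch of what that automation calls under the hood, uploading a policy through the REST API is roughly equivalent to vault policy write. The host name below is a placeholder, and the inline policy body is a minimal example:

```shell
# Upload a policy via the REST API, the equivalent of `vault policy write`.
# The HCL policy body is sent as a JSON-escaped string.
curl --cacert ca.pem \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  --request PUT \
  --data '{"policy": "path \"secret/*\" { capabilities = [\"read\"] }"}' \
  https://vault.example.internal:8200/v1/sys/policies/acl/cermati-avengers-masterfinance-prod
```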

Conclusion

We integrated our public key infrastructure with the Vault secrets manager as its authentication method to further improve our credentials management. We created a convention based on a namespace that consists of an organization, team, and product name to identify a Vault role and secrets engine endpoint. After we agreed on using that convention, the access authorization configuration needs to be written to Vault in these consecutive steps:

  1. Create the policy definition files in Vault format and upload them to Vault either via Vault REST API or using Vault CLI.
  2. Enable the TLS certificate authentication method on Vault (we only need to do this once).
  3. Configure the TLS certificate authentication endpoint by specifying the name of the endpoint, the policies and CA certificate to use, and the TTL of the Vault access token generated when a user authenticates on that endpoint. In our case, we use the Common Name field of the certificate to determine which client certificates are authorized to get a Vault access token from this endpoint.

You can utilize other certificate fields (e.g. organizational unit name, DNS names included in the SAN) in the TLS certificate authentication endpoint configuration. You can refer to the Vault Project documentation for the available options.

As an effort to simplify those configuration steps, we’ve automated and integrated them into our existing pkictl CLI tool. It will read the policy files and a configuration file, parse the setup information, then call the respective Vault REST API to upload those configurations.

There are a lot more things that we need to develop and improve to help Cermati’s business and organization to scale. We’re also currently hiring more engineers to help us to keep improving our system.

Stay tuned for more tech articles from us!
