Just-in-time role-based access to databases with HashiCorp Boundary

--

If you have been following the HashiCorp Solutions Engineering blog, you may already be aware that Boundary is much more than PAM. You may also have learned how to leverage multi-hop to connect to targets in complex network topologies or how to integrate Vault as an SSH trusted CA to provide password-less connectivity with SSH at scale.

One of the many advantages of HashiCorp products is that they are built on top of REST APIs that allow seamless integration between them. In this blog, we are going to explore how to integrate HashiCorp Vault and HashiCorp Boundary to provide user access to databases leveraging Vault’s database secrets engine. As you probably know, the main advantages of the credentials served by these engines are:

  • Just-in-time: Created on demand (and so unique) by authorized clients
  • Time-bound: Ephemeral, with a specific time to live (TTL)
  • Least privilege: With granular permissions dependent on the user’s role

These dynamic secrets create a moving target that reduces the risk of a credential leak. By the time a credential might be used by an attacker, it most likely has expired. Furthermore, the fact that they are granted per user significantly simplifies access control and revocation.

The REST APIs of our products and their respective Terraform providers offer an easy way to express our configuration as code. On that basis, we provide a mock environment that can be deployed using the code from this repo.

Please do not take the code in this repo as a HashiCorp recommendation for:

1. Managing infrastructure as code with Terraform

2. Securing an enterprise environment and its components, such as databases.

Our objective is to provide a ready-to-use environment via the HashiCorp Cloud Platform (HCP). We will also explore Boundary role-based access control (RBAC) and identity provider (IdP) integrations with OIDC providers; in this case we will use Auth0.

Legacy patterns to access databases

Different organizations distinguish different kinds of roles when it comes to database management. For the sake of discussion we are going to identify four:

  • Administrator: Not tied to a specific person, and different depending on the type of database
  • Schema Admin: Can control table creation and deletion within its schema
  • Dev: Can add or delete data from registries (tables)
  • HelpDesk: Basic troubleshooting. Cannot modify data

Unlike the picture below, where we have a single RDS instance, a typical enterprise scenario comprises different environments (on-prem, cloud), different types of databases (Oracle, Postgres, MSSQL,…) and, for the most part, static (long-lived) credentials. How did each of those users keep track of the location of the databases and credentials to use? It is not an exaggeration to say that Excel sheets have been the typical way of tracking this information. This pattern exhibits clear issues, to name a few:

  • Manual processes:
    - Lack of credential rotation
    - Static catalogs
    - Difficult user on-boarding and even harder off-boarding
  • Extended network access to a large number of subnets, allowing for lateral movement

Boundary addresses all these issues, providing a consistent workflow to access and interact with the organization’s database estate based on the user’s role.

Mock Environment

As mentioned above, we are going to leverage HCP Vault and HCP Boundary, together with infrastructure deployed on AWS.

Our Terraform code is set to integrate with the respective public APIs of the different components (AWS included). We have complicated things a bit to show how to use the Vault private access capability. In this context, HCP Boundary controllers are instructed to reach Vault via a self-managed worker (or multi-hop workers) that has connectivity to Vault (this is the way you would connect a self-managed Vault instance whose API is not publicly accessible). Vault will use the peering connection and associated routes to connect to the database servers deployed within a private subnet. The AWS infrastructure is deployed in a single region, with a single VPC and a couple of subnets (one public and one private). Our target resources are deployed in the private subnet, and access to them will be proxied by the self-managed worker.

Boundary workers act as TCP proxies. In our topology the self-managed worker is configured to listen for incoming requests from Boundary clients on port 9202/TCP (and the AWS security group has been configured accordingly). This is the only incoming connection an ingress worker receives; the rest of the traffic egresses from the worker, such as the connectivity towards HCP Boundary (the Boundary controllers).
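As an illustration, a self-managed worker configuration along these lines would match this topology. This is a hedged sketch: the cluster ID, address, and storage path are placeholders, and the `type = ["worker1"]` tag is what the `worker_filter` expressions later in this post match on.

```hcl
# Sketch of a self-managed (ingress) worker configuration; values are placeholders
hcp_boundary_cluster_id = "<your-hcp-boundary-cluster-id>"

listener "tcp" {
  address = "0.0.0.0:9202" # Port the AWS security group must allow inbound
  purpose = "proxy"
}

worker {
  public_addr       = "<worker-public-ip>"
  auth_storage_path = "/etc/boundary.d/worker"
  tags {
    type = ["worker1"] # Matched by the worker_filter expressions in this post
  }
}
```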

Deploy Boundary and Vault clusters in HCP

HCP provides one-click deployments of HashiCorp offerings, fully managed by HashiCorp’s SRE teams, which can also be expressed with Terraform and the hcp provider. HashiCorp provides a $50 credit so you can try the platform for free.

Once you have your HCP account created, deploying a Vault and Boundary cluster in HCP just takes a few lines of code. For example, to deploy a Boundary cluster we need:

/*
The provider will ask for an interactive login via browser
to obtain a token to operate with the API
*/
provider "hcp" {
}

resource "hcp_boundary_cluster" "boundary" {
  cluster_id = var.boundary_cluster_id # Cluster ID
  username   = var.username            # Username of initial admin user
  password   = var.password            # Password of initial admin user
  tier       = var.boundary_tier       # HCP Boundary tier: Standard or Plus
}

With Vault it is slightly different, as we first need to deploy a HashiCorp Virtual Network (HVN): an abstraction of a VPC or VNet (depending on which cloud provider we choose to deploy our cluster) that can be connected with customer-owned infrastructure by means of peering or transit gateways. The example below shows how to create an HVN and connect it to a given AWS VPC by means of VPC peering.

data "aws_arn" "peer" {
  arn = aws_vpc.vpc.arn
}

# Create HVN
resource "hcp_hvn" "hvn" {
  hvn_id         = var.hvn_id         # A string to identify our HVN
  cloud_provider = var.cloud_provider # Cloud provider: aws or azure
  region         = var.region         # Region to deploy the HVN/Vault
}

# Create peering
resource "hcp_aws_network_peering" "peer" {
  hvn_id          = hcp_hvn.hvn.hvn_id
  peering_id      = var.peering_id
  peer_vpc_id     = aws_vpc.vpc.id
  peer_account_id = aws_vpc.vpc.owner_id
  peer_vpc_region = data.aws_arn.peer.region
}

# Attach a network route to the previous peering
resource "hcp_hvn_route" "peer_route" {
  hvn_link         = hcp_hvn.hvn.self_link
  hvn_route_id     = var.route_id
  destination_cidr = aws_vpc.vpc.cidr_block
  target_link      = hcp_aws_network_peering.peer.self_link
}

# Accept the peering connection on the given AWS VPC
resource "aws_vpc_peering_connection_accepter" "peer" {
  vpc_peering_connection_id = hcp_aws_network_peering.peer.provider_peering_id
  auto_accept               = true
}

Once we have created an HVN we can deploy a Vault cluster on it.

# Create Vault cluster with private and public endpoints and public UI access
resource "hcp_vault_cluster" "hcp_vault" {
  hvn_id          = hcp_hvn.hvn.hvn_id   # HVN to deploy Vault in
  cluster_id      = var.vault_cluster_id # String to identify the cluster
  tier            = var.vault_tier       # Vault cluster tier
  public_endpoint = true                 # Enable API access from the Internet
  proxy_endpoint  = "enabled"            # UI access via proxy
}

# Create admin token to operate with Vault
resource "hcp_vault_cluster_admin_token" "token" {
  cluster_id = var.vault_cluster_id
  depends_on = [hcp_vault_cluster.hcp_vault]
}
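The cluster’s endpoints are exposed as attributes of the `hcp_vault_cluster` resource. For instance, the private endpoint URL (which Boundary’s credential store will use later) can be surfaced as an output; a sketch, with an output name of our choosing:

```hcl
output "vault_private_url" {
  description = "Private endpoint of the HCP Vault cluster, reachable through the HVN peering"
  value       = hcp_vault_cluster.hcp_vault.vault_private_endpoint_url
}
```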

Configure Vault

Before we can use Boundary to control user sessions we need to set up Vault with a number of database secrets engines. In this context we are going to use an AWS RDS managed instance with the Postgres engine and a DocumentDB cluster, both only accessible from within the VPC.

Postgres database engine

Create a role that can be used to generate dynamic database credentials with these four steps:

  • Enable the database engine mount point

resource "vault_mount" "database" {
  path                      = "database"
  type                      = "database"
  description               = "Postgres DB Engine"
  default_lease_ttl_seconds = 3600
  max_lease_ttl_seconds     = 7200
}

  • Configure the connection from Vault to the database

resource "vault_database_secret_backend_connection" "postgres" {
  backend           = vault_mount.database.path
  name              = "boundarydemo"
  allowed_roles     = ["*"]
  verify_connection = false
  # Connection towards the private address of the RDS instance
  postgresql {
    connection_url       = "postgresql://{{username}}:{{password}}@${var.rds_address}:5432/postgres?sslmode=disable"
    username             = var.db_username
    password             = var.password
    max_open_connections = 5
  }
}
  • Create a role. We are going to simplify the roles from the initial example to just three categories: DBA, read/write and read-only. For example, the read-only role as configured in Vault takes this form.

resource "vault_database_secret_backend_role" "read_only" {
  backend = vault_mount.database.path
  name    = "readonly"
  db_name = vault_database_secret_backend_connection.postgres.name
  creation_statements = [
    "CREATE USER \"{{name}}\" WITH LOGIN ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT CONNECT ON DATABASE ${var.db_name} TO \"{{name}}\";",
    "GRANT USAGE ON SCHEMA public TO \"{{name}}\";",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
  ]
  revocation_statements = [
    "GRANT \"{{name}}\" to \"${var.db_username}\";",
    "REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL ON DATABASE ${var.db_name} FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public FROM \"{{name}}\";",
    "REASSIGN OWNED BY \"{{name}}\" to \"${var.db_username}\";",
    "DROP OWNED BY \"{{name}}\";",
    "DROP ROLE IF EXISTS \"{{name}}\";"
  ]
  default_ttl = 3600
  max_ttl     = 84000
}

As part of the creation_statements we grant read-only permissions to the user dynamically created by Vault.

As part of the northwind database creation process we have revoked all access to the database by running the following commands. This way the dynamic users created by the readonly role will only have the permissions explicitly granted to them.

REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE northwind FROM PUBLIC;

In the same vein, we can configure the readwrite and dba roles. Users associated with the readwrite role can read (SELECT) and write (INSERT, UPDATE, DELETE) data in tables.

resource "vault_database_secret_backend_role" "write_role" {
  backend = vault_mount.database.path
  name    = "write"
  db_name = vault_database_secret_backend_connection.postgres.name
  creation_statements = [
    "CREATE USER \"{{name}}\" WITH LOGIN ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT CONNECT ON DATABASE ${var.db_name} TO \"{{name}}\";",
    "GRANT USAGE ON SCHEMA public TO \"{{name}}\";",
    "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
    "GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";"
  ]
  revocation_statements = [
    "GRANT \"{{name}}\" to \"${var.db_username}\";",
    "REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL ON DATABASE ${var.db_name} FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public FROM \"{{name}}\";",
    "REASSIGN OWNED BY \"{{name}}\" to \"${var.db_username}\";",
    "DROP OWNED BY \"{{name}}\";",
    "DROP ROLE IF EXISTS \"{{name}}\";"
  ]
  default_ttl = 1800
  max_ttl     = 84000
}

Finally, users associated with the dba role will have rds_superuser, CREATEDB and ALL PRIVILEGES on tables.

resource "vault_database_secret_backend_role" "dba" {
  backend = vault_mount.database.path
  name    = "dba"
  db_name = vault_database_secret_backend_connection.postgres.name
  creation_statements = [
    "CREATE USER \"{{name}}\" WITH LOGIN ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT rds_superuser to \"{{name}}\";",
    "GRANT CONNECT ON DATABASE ${var.db_name} TO \"{{name}}\";",
    "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
    "GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
    "ALTER ROLE \"{{name}}\" WITH CREATEDB CREATEROLE;",
  ]
  revocation_statements = [
    "GRANT \"{{name}}\" to \"${var.db_username}\";",
    "REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL ON DATABASE ${var.db_name} FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\";",
    "REVOKE ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public FROM \"{{name}}\";",
    "REASSIGN OWNED BY \"{{name}}\" to \"${var.db_username}\";",
    "DROP OWNED BY \"{{name}}\";",
    "DROP ROLE IF EXISTS \"{{name}}\";"
  ]
  default_ttl = 3600
  max_ttl     = 84000
}

Note this is not a recommendation or best practice in terms of Postgres role definition.

The last step consists of creating the policy that provides read access to the different paths. For the sake of simplicity we are going to create a single policy that grants access to the three roles (paths) we have just created. In production, let the principle of least privilege drive your actions and configuration.

# Read permissions for RDS Postgres
path "database/creds/*" {
  capabilities = ["read"]
}

# Read permissions for DocumentDB
path "mongo/creds/*" {
  capabilities = ["read"]
}

Vault will work as a credential store for Boundary. To that end, Boundary requires a token with the permissions we defined in the previous step plus the following.

# boundary-controller policy
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
path "auth/token/renew-self" {
  capabilities = ["update"]
}
path "auth/token/revoke-self" {
  capabilities = ["update"]
}
path "sys/leases/renew" {
  capabilities = ["update"]
}
path "sys/leases/revoke" {
  capabilities = ["update"]
}
path "sys/capabilities-self" {
  capabilities = ["update"]
}
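Both policies can themselves be installed with Terraform via the `vault_policy` resource. A minimal sketch, assuming the policy documents are stored as local HCL files with these (hypothetical) paths:

```hcl
resource "vault_policy" "boundary_controller" {
  name   = "boundary-controller"
  policy = file("${path.module}/policies/boundary-controller.hcl")
}

resource "vault_policy" "database" {
  name   = "policy-database"
  policy = file("${path.module}/policies/policy-database.hcl")
}
```

The policy names match the ones attached to the Boundary token created in the next step.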

Once the policies have been installed, we can create a Vault token for Boundary to use. Again, consider least-privilege criteria when creating tokens to access Vault.

resource "vault_token" "boundary_token_db" {
  no_default_policy = true
  period            = "20m"
  policies = [
    "boundary-controller",
    "policy-database"
  ]
  no_parent       = true
  renewable       = true
  renew_min_lease = 43200
  renew_increment = 86400
  metadata = {
    "purpose" = "service-account-boundary-database"
  }
}

MongoDB engine

DocumentDB is a NoSQL database whose API is based on MongoDB, but it is not identical, so slight modifications to the standard process for the MongoDB secrets engine are required. To integrate with DocumentDB we need to trust its CA, which is available at this URL: https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem. The mongodb stanza within vault_database_secret_backend_connection does not support specifying a CA certificate as part of its connection configuration, so we need to add it manually after the connection has been provisioned.

# Add DB secrets engine mount point
resource "vault_mount" "database_mongo" {
  path                      = "mongo"
  type                      = "database"
  description               = "MongoDB Engine"
  default_lease_ttl_seconds = 3600
  max_lease_ttl_seconds     = 7200
}

# Define the connection as mongodb
resource "vault_database_secret_backend_connection" "mongo" {
  backend           = vault_mount.database_mongo.path
  name              = "demo-mongo"
  allowed_roles     = ["*"]
  verify_connection = false
  mongodb {
    connection_url = "mongodb://{{username}}:{{password}}@${data.terraform_remote_state.local_backend.outputs.docdb_cluster_endpoint}:27017/admin?tls=true&retryWrites=false"
    username       = var.db_username
    password       = var.password
    # Manually add this cert https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem as CA
  }
}

After running the Terraform code, log into the Vault UI, go to Secrets Engines > mongo > Connections and select the connection demo-mongo. Click Edit Configuration and add the certificate as the TLS CA. After this, simply save the change.

Or if you want to keep everything within the CLI, simply do the following:

wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
vault write /mongo/config/demo-mongo tls_ca=@global-bundle.pem

In terms of roles, we are going to keep the same structure, that is, three different roles. For simplicity, we are using built-in database roles.

resource "vault_database_secret_backend_role" "mongo_dba" {
  backend = vault_mount.database_mongo.path
  name    = "dba"
  db_name = vault_database_secret_backend_connection.mongo.name
  creation_statements = [<<-EOF
    { "db": "admin", "roles": [{"role": "userAdminAnyDatabase"}, {"role": "dbAdminAnyDatabase"}, {"role": "readWriteAnyDatabase"}]}
  EOF
  ]
  default_ttl = 3600
  max_ttl     = 84000
}

resource "vault_database_secret_backend_role" "mongo_readwrite" {
  backend = vault_mount.database_mongo.path
  name    = "read_write"
  db_name = vault_database_secret_backend_connection.mongo.name
  creation_statements = [<<-EOF
    { "db": "admin", "roles": [{"role": "readWriteAnyDatabase"}]}
  EOF
  ]
  default_ttl = 3600
  max_ttl     = 84000
}

resource "vault_database_secret_backend_role" "mongo_readonly" {
  backend = vault_mount.database_mongo.path
  name    = "read_only"
  db_name = vault_database_secret_backend_connection.mongo.name
  creation_statements = [<<-EOF
    { "db": "admin", "roles": [{"role": "readAnyDatabase"}]}
  EOF
  ]
  default_ttl = 3600
  max_ttl     = 84000
}

Configure Boundary

Boundary makes use of a Domain model that defines the relationships between the different elements that comprise its API. The image below shows how the different components relate to one another. For simplicity we are just representing the Postgres targets.

The logic behind each of those objects is described in detail in the documentation. We will provide some comments as we describe the Terraform configuration. The first objective is to create a list of targets that will later be used by the different users based on their roles.

Steps to create a target

  • Our first step is to create the Org and Project scopes that will host the rest of the objects.

resource "boundary_scope" "org" {
  scope_id                 = "global"
  name                     = "db-org-scope"
  description              = "Org for DB MGMT"
  auto_create_default_role = true
  auto_create_admin_role   = true
}

resource "boundary_scope" "project" {
  name                     = "db-org-project"
  description              = "Database MGMT"
  scope_id                 = boundary_scope.org.id
  auto_create_admin_role   = true
  auto_create_default_role = true
}
  • Then we create the credential store and the credential libraries

resource "boundary_credential_store_vault" "vault" {
  name          = "vault-credential-store"
  description   = "Vault for Credential Brokering"
  address       = var.vault_private_url
  token         = vault_token.boundary_token_db.client_token
  scope_id      = boundary_scope.project.id
  namespace     = "admin"
  worker_filter = "\"worker1\" in \"/tags/type\"" # Remove if direct connectivity
}

# Credential Library for dba
resource "boundary_credential_library_vault" "dba" {
  name                = "northwind dba"
  description         = "northwind dba"
  credential_store_id = boundary_credential_store_vault.vault.id
  path                = "database/creds/dba" # Associated Path
  http_method         = "GET"
}
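The libraries for the other roles follow the same pattern, each pointing at the corresponding Vault role path. For example, a read-only library might look like this (a sketch mirroring the dba library above):

```hcl
# Credential Library for readonly
resource "boundary_credential_library_vault" "read_only" {
  name                = "northwind readonly"
  description         = "northwind readonly"
  credential_store_id = boundary_credential_store_vault.vault.id
  path                = "database/creds/readonly" # Associated Path
  http_method         = "GET"
}
```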

The expression used in the worker_filter attribute matches one of the tags defined within the worker’s configuration. We are not covering worker configuration details as part of this blog; you can get more details on that matter in the documentation.

If you were to follow a least-privilege approach, three separate credential stores would have to be created, each with its own Vault token. Each credential library would then be associated with the credential store whose token carries the matching permissions.
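Sketching that least-privilege layout, each role would get its own store backed by a narrowly scoped token. The `boundary_token_dba` token (and the per-role policy behind it) is hypothetical here:

```hcl
# One credential store per role, each with a token scoped to a single creds path
resource "boundary_credential_store_vault" "dba" {
  name     = "vault-credential-store-dba"
  address  = var.vault_private_url
  token    = vault_token.boundary_token_dba.client_token # Hypothetical token limited to database/creds/dba
  scope_id = boundary_scope.project.id
}
```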

A target combines credentials with hosts and ports. We have defined the credentials; let’s now define our hosts.

  • Hosts are part of a host catalog. The host catalog contains hosts and host sets.

resource "boundary_host_catalog_static" "rds" {
  name        = "db-catalog"
  description = "DB catalog"
  scope_id    = boundary_scope.project.id
}

resource "boundary_host_static" "db" {
  name            = "postgres-host"
  host_catalog_id = boundary_host_catalog_static.rds.id
  address         = data.terraform_remote_state.local_backend.outputs.rds_hostname
}

resource "boundary_host_set_static" "db" {
  name            = "db-host-set"
  host_catalog_id = boundary_host_catalog_static.rds.id
  host_ids = [
    boundary_host_static.db.id
  ]
}
  • Once we have the host sets defined, we can create our targets using the previously created credential libraries. Furthermore, targets define session details such as connection limits, maximum session time and default ports.

resource "boundary_target" "dba" {
  type                     = "tcp"
  name                     = "DBA Access"
  description              = "DBA Target"
  ingress_worker_filter    = "\"worker1\" in \"/tags/type\""
  scope_id                 = boundary_scope.project.id
  session_connection_limit = 3600
  default_port             = 5432
  host_source_ids = [
    boundary_host_set_static.db.id
  ]
  brokered_credential_source_ids = [
    boundary_credential_library_vault.dba.id
  ]
}

At this point, if you were to log in to Boundary Desktop as an admin user, you would see the six targets just created and could start operating with the databases.

Establishing a session via UI.

This is not yet what we want or need. We need to segregate access based on the role of the user. Let’s get on with it.

Steps to create users and roles

Every organization has some form of identity provider holding a repository of the users, groups and other resources within the organization. Boundary can integrate natively with OIDC providers and LDAP/AD servers, as well as supporting built-in users as part of the local password auth method. This picture can be extended with Vault’s OIDC provider capabilities.

The HashiCorp Education team does a great job providing tutorials covering multiple scenarios. We have used the OIDC authentication with Auth0 tutorial to set up Auth0 as the IdP for Boundary. Now we need to connect Boundary with Auth0 by means of an authentication method and obtain accounts and users, which are finally matched to roles.

# OIDC Auth method
resource "boundary_auth_method_oidc" "provider" {
  name                 = "Auth0"
  description          = "OIDC auth method for Auth0"
  scope_id             = data.boundary_scope.org.id
  issuer               = "https://${data.auth0_tenant.tenant.domain}/"
  client_id            = data.auth0_client.boundary.id
  client_secret        = data.auth0_client.boundary.client_secret
  signing_algorithms   = ["RS256"]
  api_url_prefix       = var.boundary_public_url
  is_primary_for_scope = true
  state                = "active-public"
  max_age              = 0
}

# Config for DBA user [Account]
resource "boundary_account_oidc" "dba" {
  name           = auth0_user.dba.name
  description    = "DBA user from Auth0"
  auth_method_id = boundary_auth_method_oidc.provider.id
  issuer         = "https://${data.auth0_tenant.tenant.domain}/"
  subject        = auth0_user.dba.user_id
}

# Config for DBA user [User]
resource "boundary_user" "dba" {
  name        = boundary_account_oidc.dba.name
  description = "DBA user from Auth0"
  account_ids = [boundary_account_oidc.dba.id]
  scope_id    = data.boundary_scope.org.id
}

resource "boundary_role" "dba" {
  # Permissions limited to dba targets
  name          = "dba"
  description   = "Access to dba targets"
  principal_ids = [boundary_user.dba.id]
  grant_strings = [
    "ids=${var.rds_target_dba};actions=authorize-session",        # Permissions on RDS DBA target
    "ids=${var.documentDB_target_dba};actions=authorize-session", # Permissions on DocDB DBA target
    "ids=*;type=session;actions=read:self,cancel:self,list",
    "ids=*;type=*;actions=read,list"
  ]
  scope_id = data.boundary_scope.project.id
}

# Equivalent configuration applies for the read_write and read_only users

At this point our users can authenticate to Boundary using OIDC authentication with the credentials defined in our IdP and get access to the hosts with the proper permissions based on their roles.

Checking our workflow

At this point we have 3 users (plus a Boundary full admin user for fun) created in Auth0 that are integrated with Boundary via OIDC with their respective Accounts, Users and roles in place. Each of those users, after login, will be able to see 2 targets, one for each database with their corresponding role. If you wonder where to find the credentials, you can obtain them via a terraform output.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
auth_method_id = "amoidc_EqCLZ9JJuq"
boundary_authenticate_cli = "boundary authenticate oidc -auth-method-id amoidc_EqCLZ9JJuq"
password = "Passw0rd123!"
project-scope-id = "p_7NkPLCYckD"
user_dba_email = "dba@boundaryproject.io"
user_readonly_email = "readonly@boundaryproject.io"
user_readwrite_email = "readwrite@boundaryproject.io"

Below you can see the steps to log in via the CLI and access a host.

Similarly, using the Boundary Desktop client you can select the scope to log in to (on the left, at the Org scope level) and then simply click the “Sign in” button, which will redirect you to your default browser for login.

Continuing with the CLI, let’s verify that our users have different permissions, starting with the RDS instance.

Postgres RDS Access Workflow

As you can see in the text snippets below, each user establishes a connection to a local port. Based on the role of the user, the permissions vary.

boundary connect postgres authorizes a session to the given target and launches a local Postgres client.

DBA

  • Can CREATE database.
  • Can CREATE table in database northwind
  • Can INSERT
  • Can SELECT
> export BOUNDARY_ADDR=https://72a20d60-b9c3-438d-8664-dfcbaaaf0867.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_l17XJAoZXb
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=prJEtaNbo....
Authentication information:
Account ID: acctoidc_mhycKeWZdd
Auth Method ID: amoidc_l17XJAoZXb
Expiration Time: Tue, 13 Feb 2024 07:48:44 CET
User ID: u_W4zdXRaLQB
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
> boundary targets list -recursive
Target information:
ID: ttcp_f6iyFgSkzK
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: RDS DBA Access
Description: RDS DBA Permissions
Authorized Actions:
authorize-session
read
ID: ttcp_1Xky8n4iI8
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: DocumentDB DBA Access
Description: DocumentDB: DBA Permissions
Authorized Actions:
read
authorize-session
> boundary connect postgres -target-id ttcp_f6iyFgSkzK -dbname northwind
psql (14.10 (Homebrew), server 13.13)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
northwind=> \conninfo
You are connected to database "northwind" as user "v-token-to-dba-wNVIPCDSDa1RNUeLajDv-1707202187" on host "127.0.0.1" at port "57865".
...
# Create DB
northwind=> create database dbatest;
CREATE DATABASE
# Create Table
northwind=> CREATE TABLE "test" ("fullName" varchar(255), "isUser" varchar(255), "rating" varchar(255));
CREATE TABLE
# Insert Data
INSERT INTO "test" ("fullName", "isUser", "rating")
VALUES ('Almeda Shields', 'true', '⭐️⭐️'), ('Daniele Upward', 'false', '⭐️⭐️');
INSERT 0 2
# Select data
northwind=> select * from test;
fullName | isUser | rating
----------------+--------+--------
Almeda Shields | true | ⭐️⭐️
Daniele Upward | false | ⭐️⭐️
(2 rows)

ReadWrite

  • Cannot CREATE database.
  • Cannot CREATE table in database northwind
  • Can INSERT
  • Can SELECT
> export BOUNDARY_ADDR=https://72a20d60-b9c3-438d-8664-dfcbaaaf0867.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_l17XJAoZXb
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=prJEtaNbo9NqHLf7tjKeDM5GWfmI6amc&max_age=0&nonce=D4Jv2CXeWPDzOIe6ubMv&redirect_uri=https%3A%2F%2F72a20d60-b9c3-438d-8664-dfcbaaaf0867.boundary.hashicorp.cloud%2Fv1%2Fauth-methods%2Foidc%3Aauthenticate%3Acallback&response_type=co-...
Authentication information:
Account ID: acctoidc_tVPFQqd66p
Auth Method ID: amoidc_l17XJAoZXb
Expiration Time: Tue, 13 Feb 2024 10:54:01 CET
User ID: u_AWdADm7539
The token name "default" was successfully stored in the chosen keyring and is not displayed here.

> boundary targets list -recursive
Target information:
ID: ttcp_O4XOrZkkm2
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: DocumentDB Read/Write Access
Description: DocumentDB: readWriteAllDBs
Authorized Actions:
read
authorize-session
ID: ttcp_NJQ4NO87QJ
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: RDS Read/Write Access
Description: RDS: SELECT, INSERT, UPDATE, DELETE
Authorized Actions:
read
authorize-session

> boundary connect postgres -target-id ttcp_NJQ4NO87QJ -dbname northwind
psql (14.10 (Homebrew), server 13.13)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
northwind=> \conninfo
You are connected to database "northwind" as user "v-token-to-write-hcFLGfD40CeKPDpG4dSA-1707213290" on host "127.0.0.1" at port "59908".
...
# Create DB
northwind=> create database dbatest2;
ERROR: permission denied to create database
# Create Table
northwind=> CREATE TABLE "test2" ("fullName" varchar(255));
ERROR: permission denied for schema public
# Insert Data
northwind=> INSERT INTO "test" ("fullName", "isUser", "rating") VALUES ('Katie Kildea', 'false', '⭐️⭐️'), ('Micah Bonass', 'true', '⭐️⭐️⭐️⭐️'), ('Brigid Whitsey', 'true', '⭐️⭐️');
INSERT 0 3
# Select data
northwind=> select * from test;
fullName | isUser | rating
----------------+--------+--------
Almeda Shields | true | ⭐️⭐️
Daniele Upward | false | ⭐️⭐️
Katie Kildea | false | ⭐️⭐️
Micah Bonass | true | ⭐️⭐️⭐️⭐️
Brigid Whitsey | true | ⭐️⭐️
(5 rows)

ReadOnly

  • Cannot CREATE database.
  • Cannot CREATE table in database northwind
  • Cannot INSERT
  • Can SELECT
> export BOUNDARY_ADDR=https://72a20d60-b9c3-438d-8664-dfcbaaaf0867.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_l17XJAoZXb
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=prJEtaNbo9NqHLf7tjKeDM5GWfmI6amc&max_age=0&nonce=gTGh7INtRzT0G2hPN1WY&redirect_uri=https%3A%...
Authentication information:
Account ID: acctoidc_pAmhCujkNE
Auth Method ID: amoidc_l17XJAoZXb
Expiration Time: Tue, 13 Feb 2024 11:11:07 CET
User ID: u_h3xO0X79Og
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
> boundary targets list -recursive
Target information:
ID: ttcp_NyckluxxQp
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: RDS ReadOnly Access
Description: RDS: SELECT
Authorized Actions:
read
authorize-session
ID: ttcp_18raqgMmbq
Scope ID: p_MfRkfe58Bd
Version: 3
Type: tcp
Name: DocumentDB ReadOnly Access
Description: DocumentDB: readAllDBs
Authorized Actions:
authorize-session
read
> boundary connect postgres -target-id ttcp_NyckluxxQp -dbname northwind
psql (14.10 (Homebrew), server 13.13)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
northwind=> \conninfo
You are connected to database "northwind" as user "v-token-to-readonly-pLqGwK7sOeRJIl35s0JG-1707214382" on host "127.0.0.1" at port "60227".
...
# Create DB
northwind=> create database dbatest3;
ERROR: permission denied to create database
# Create Table
northwind=> CREATE TABLE "test3" ("fullName" varchar(255), "isUser" varchar(255), "rating" varchar(255));
ERROR: permission denied for schema public
# Insert data
northwind=> INSERT INTO "test" ("fullName", "isUser", "rating") VALUES ('Katie Kildea', 'false', '⭐️⭐️'), ('Micah Bonass', 'true', '⭐️⭐️⭐️⭐️'), ('Brigid Whitsey', 'true', '⭐️⭐️');
ERROR: permission denied for table test
# Select data
northwind=> select * from test;
fullName | isUser | rating
----------------+--------+--------
Almeda Shields | true | ⭐️⭐️
Daniele Upward | false | ⭐️⭐️
Katie Kildea | false | ⭐️⭐️
Micah Bonass | true | ⭐️⭐️⭐️⭐️
Brigid Whitsey | true | ⭐️⭐️
(5 rows)

DocumentDB Access Workflow

boundary connect authorizes a session and opens a local port for connection; a Mongo client must then be launched manually. An alternative workflow can be achieved by using boundary targets authorize-session.

In order to get an experience similar to the previous workflow, we are going to implement a little hack that maps the session token and the DocumentDB username and password into environment variables. That information can then be passed to a boundary connect -exec call that invokes the mongosh binary.

eval "$(boundary targets authorize-session -id <target-id> -format json | jq -r '.item | "export BOUNDARY_SESSION_TOKEN=\(.authorization_token) BOUNDARY_SESSION_USERNAME=\(.credentials[0].secret.decoded.username) BOUNDARY_SESSION_PASSWORD=\(.credentials[0].secret.decoded.password)"')"
boundary connect -exec mongosh -authz-token=$BOUNDARY_SESSION_TOKEN -- --tls --host {{boundary.addr}} --username $BOUNDARY_SESSION_USERNAME --password $BOUNDARY_SESSION_PASSWORD --tlsAllowInvalidCertificates --retryWrites false
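To see what the jq filter in the eval line above actually extracts, here is a minimal sketch that runs the same filter against a mock authorize-session response. The field names follow the shape of the real `boundary targets authorize-session -format json` output, but every value below is a hypothetical placeholder, not real Boundary output.

```shell
#!/usr/bin/env sh
# Mock of the JSON returned by `boundary targets authorize-session -format json`.
# Values are placeholders for illustration only.
cat > /tmp/authz_sample.json <<'EOF'
{
  "item": {
    "authorization_token": "mock-authz-token",
    "credentials": [
      {
        "secret": {
          "decoded": {
            "username": "v-token-token-dba-mock",
            "password": "mock-password"
          }
        }
      }
    ]
  }
}
EOF

# Same jq filter as in the workflow above, applied to the mock response:
# it emits a single `export VAR=... VAR=...` line that eval then executes.
eval "$(jq -r '.item | "export BOUNDARY_SESSION_TOKEN=\(.authorization_token) BOUNDARY_SESSION_USERNAME=\(.credentials[0].secret.decoded.username) BOUNDARY_SESSION_PASSWORD=\(.credentials[0].secret.decoded.password)"' /tmp/authz_sample.json)"

echo "token=$BOUNDARY_SESSION_TOKEN user=$BOUNDARY_SESSION_USERNAME"
```

With a live controller, the only difference is that the JSON comes from the boundary CLI instead of a file.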

I know that’s a lot of typing. For convenience, we have added these commands for each DocumentDB target as outputs of our second terraform apply. If you have logged in as a DBA, you can connect with this single command:

> eval "$(terraform output -state=../2_Config/terraform.tfstate -raw connect_documentDB_target_dba)"  
Proxy listening information:
Address: 127.0.0.1
Connection Limit: 3600
Expiration: Thu, 15 Feb 2024 18:28:22 CET
Port: 59769
Protocol: tcp
Session ID: s_jRYhEEAY9Z
Current Mongosh Log ID: 65cdd9378e4202508ce2c67b
Connecting to: mongodb://<credentials>@127.0.0.1:59769/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsAllowInvalidCertificates=true&retryWrites=false&appName=mongosh+2.1.3
Using MongoDB: 5.0.0
Using Mongosh: 2.1.3
mongosh 2.1.4 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
rs0 [direct: primary] test>

DBA

  • Can read, write, and perform administrative tasks such as dropping databases and listing users.
> export BOUNDARY_ADDR=https://7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_kFalRmMWZw
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=OHgDoLbq0K4H71o51CumJ2QtpYi6asF6&max_age=0&nonce=CV3vDQ7FIDJZPdCHRBMw&redirect_uri=https%3A%2F%2F7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud%2Fv1%2Fauth-methods%2Foidc%3Aauthenticate%3Acallback&response_type=code&scope=openid&state=NFSM7Lhse1G7kKVVzF3UHanpwUraXzyc367zZrMRr3K3jmAv5oQinNXkBZvDiC8H1wCBWkuk1Ekw8LjF3drJB9gbg8ZQpTqixQMAfsLfGQuz63ETR1qSFcH6PRMqfA1iDLnaSbDyzQVR4WpRc7g4EoMet1Di41QSeMwYknKfUzD9upicPJkUkMHCEFrnR5c6eUvqEEwVMJ2ph1NTAV7jfTMnxMrwuUYWVXUgSeNc4LjBKmxGsHwMujizBJ7e1EXeY9L14FsuBWYu9B8JPEwyEHpz3rPnvkgL1cVAsZLLbMSSiDzwM1gGBJFyqpjG5bBDCMCPDyaXBwHf7eLu26ctRGS38Pc8XeYesGBVNnfEXjMJfcoZXB5fjLLGR53yapBBADguegYu2P9bXzzRJCqvUtpyyQQ1iPDW6XMoExmNTypDvyjPc9CLjW1eFUPQjQpkTtBg
Authentication information:
Account ID: acctoidc_0eWZg22j0a
Auth Method ID: amoidc_kFalRmMWZw
Expiration Time: Thu, 22 Feb 2024 09:06:10 CET
User ID: u_3x6ierNAGn
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
> boundary targets list -recursive
Target information:
ID: ttcp_5KF7b2nemT
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: DocumentDB DBA Access
Description: DocumentDB: DBA Permissions
Authorized Actions:
authorize-session
read
ID: ttcp_UflbyKyPWb
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: RDS DBA Access
Description: RDS: DBA Permissions
Authorized Actions:
read
authorize-session
> eval "$(boundary targets authorize-session -id ttcp_5KF7b2nemT -format json | jq -r '.item | "export BOUNDARY_SESSION_TOKEN=\(.authorization_token) BOUNDARY_SESSION_USERNAME=\(.credentials[0].secret.decoded.username) BOUNDARY_SESSION_PASSWORD=\(.credentials[0].secret.decoded.password)"')
> boundary connect -exec mongosh -authz-token=$BOUNDARY_SESSION_TOKEN -- --tls --host {{boundary.addr}} --username $BOUNDARY_SESSION_USERNAME --password $BOUNDARY_SESSION_PASSWORD --tlsAllowInvalidCertificates --retryWrites false
Proxy listening information:
Address: 127.0.0.1
Connection Limit: 3600
Expiration: Thu, 15 Feb 2024 17:11:05 CET
Port: 58558
Protocol: tcp
Session ID: s_PK2t9h2KN5
Current Mongosh Log ID: 65cdc7297b18a74a9f2010b1
Connecting to: mongodb://<credentials>@127.0.0.1:58558/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsAllowInvalidCertificates=true&retryWrites=false&appName=mongosh+2.1.3
Using MongoDB: 5.0.0
Using Mongosh: 2.1.3
mongosh 2.1.4 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
test> db.runCommand({connectionStatus : 1})
{
authInfo: {
authenticatedUsers: [
{
user: 'v-token-token-dba-9NB04Eptj9j8J2z2UeJZ-1707244196',
db: 'admin'
}
],
authenticatedUserRoles: [
{ role: 'dbAdminAnyDatabase', db: 'admin' },
{ role: 'readWriteAnyDatabase', db: 'admin' },
{ role: 'userAdminAnyDatabase', db: 'admin' }
]
},
ok: 1,
operationTime: Timestamp({ t: 1707244136, i: 1 })
}
test> use mydatabase
switched to db mydatabase
mydatabase>
mydatabase> db.createCollection("mycollection")
{ ok: 1 }
mydatabase> db.mycollection.insertOne({
name: 'John Doe',
age: 30,
city: 'New York'
})
{
acknowledged: true,
insertedId: ObjectId('65c27da614fec5f6b5dedcff')
}
mydatabase> db.mycollection.find()
[
{
_id: ObjectId('65c27da614fec5f6b5dedcff'),
name: 'John Doe',
age: 30,
city: 'New York'
}
]
mydatabase> db.mycollection.drop()
true
mydatabase> db.dropDatabase()
{ ok: 1, dropped: 'mydatabase' }
mydatabase> use test
switched to db test
test> db.getUsers()
{
ok: 1,
users: [
{
_id: 'serviceadmin',
user: 'serviceadmin',
db: 'admin',
roles: [ { db: 'admin', role: 'root' } ]
},
{
_id: 'demo',
user: 'demo',
db: 'admin',
roles: [ { db: 'admin', role: 'root' } ]
},
{
_id: 'v-token-token-dba-9NB04Eptj9j8J2z2UeJZ-1707244196',
user: 'v-token-token-dba-9NB04Eptj9j8J2z2UeJZ-1707244196',
db: 'admin',
roles: [
{ db: 'admin', role: 'userAdminAnyDatabase' },
{ db: 'admin', role: 'dbAdminAnyDatabase' },
{ db: 'admin', role: 'readWriteAnyDatabase' }
]
}
],
operationTime: Timestamp({ t: 1707245099, i: 1 })
}

ReadWrite

  • Can read and write, but lacks administrative permissions such as dropping databases or listing users.
> export BOUNDARY_ADDR=https://7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_kFalRmMWZw
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=OHgDoLbq0K4H71o51CumJ2QtpYi6asF6&max_age=0&nonce=7OSh6XM77W0r052Aomoa&redirect_uri=https%3A%2F%2F7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud%2Fv1%2Fauth-methods%2Foidc%3Aauthenticate%3Acallback&response_type=code&scope=openid&state=NFSM7Lhse1G7kKVVzF3UHanpwUraXzyc367zZrMRr3K3jmAv5oQinNXkBZvDiC8H1wCBWkuk1Ekw8LjF3drJB9gbg8ZQpTqixQMAfsLfGQuz63ETR1qSFcH6KoQVkuiWLpHytjpw6a9pBUj2AH94qkv2R3tHVsS8faPTccBvbZheZHPc2kvYpQpahd5kiUz2DYSrHfBSGkNDfg3WD7sBqx1ab8riLDzviA8Pa7HxgzN8d12BEM6YX6CQuq6oxQfBXdyb1D9MAdokuFaod1R6UdivBVNRSNaGaKtpfwzJ7ZRJMuT9zCChyjNzV6JU9yko6ERXaKpqY9DVk2KAY5oxCWAgWza4LWk59XYav5LzBQdrC8gLTFUJzLTd9eMSX2f2n4qBcC6GgYZ7z97FpdR6w3y3WiHWKznvinkLTLj9BTBJwHm9fG6UFbjKScp5LK8KbM98
Authentication information:
Account ID: acctoidc_gfwbHESwrq
Auth Method ID: amoidc_kFalRmMWZw
Expiration Time: Thu, 22 Feb 2024 09:32:53 CET
User ID: u_5WmplVDem2
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
> boundary targets list -recursive
Target information:
ID: ttcp_ezWrfC1F8h
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: RDS Read/Write Access
Description: RDS: SELECT, INSERT, UPDATE, DELETE
Authorized Actions:
read
authorize-session
ID: ttcp_AV0272rFCb
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: DocumentDB Read/Write Access
Description: DocumentDB: readWriteAllDBs
Authorized Actions:
authorize-session
read
> eval "$(boundary targets authorize-session -id ttcp_AV0272rFCb -format json | jq -r '.item | "export BOUNDARY_SESSION_TOKEN=\(.authorization_token) BOUNDARY_SESSION_USERNAME=\(.credentials[0].secret.decoded.username) BOUNDARY_SESSION_PASSWORD=\(.credentials[0].secret.decoded.password)"')"
> boundary connect -exec mongosh -authz-token=$BOUNDARY_SESSION_TOKEN -- --tls --host {{boundary.addr}} --username $BOUNDARY_SESSION_USERNAME --password $BOUNDARY_SESSION_PASSWORD --tlsAllowInvalidCertificates --retryWrites false
Proxy listening information:
Address: 127.0.0.1
Connection Limit: 3600
Expiration: Thu, 15 Feb 2024 17:33:58 CET
Port: 58931
Protocol: tcp
Session ID: s_VJlsI25TKo
Current Mongosh Log ID: 65cdcc770e4edc0a8dd1ddb7
Connecting to: mongodb://<credentials>@127.0.0.1:58931/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsAllowInvalidCertificates=true&retryWrites=false&appName=mongosh+2.1.3
Using MongoDB: 5.0.0
Using Mongosh: 2.1.3
mongosh 2.1.4 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://docs.mongodb.com/mongodb-shell/

test> db.runCommand({connectionStatus : 1})
{
authInfo: {
authenticatedUsers: [
{
user: 'v-token-token-read_write-7fvTfU6B4kUh9acVkilx-1707287236',
db: 'admin'
}
],
authenticatedUserRoles: [ { role: 'readWriteAnyDatabase', db: 'admin' } ]
},
ok: 1,
operationTime: Timestamp({ t: 1707287426, i: 1 })
}
test> use mydatabase2
switched to db mydatabase2
mydatabase2> db.createCollection("mycollection2")
{ ok: 1 }
mydatabase2> db.mycollection.insertOne({
... name: 'John Doe',
... age: 30,
... city: 'New York'
... })
{
acknowledged: true,
insertedId: ObjectId('65c323da6f5259b2e84bf429')
}
mydatabase2> db.mycollection.find()
[
{
_id: ObjectId('65c323da6f5259b2e84bf429'),
name: 'John Doe',
age: 30,
city: 'New York'
}
]
mydatabase2> db.mycollection.drop()
true
mydatabase2> db.dropDatabase()
**MongoServerError: Authorization failure**
mydatabase2> use test
switched to db test
test> db.getUsers()
**MongoServerError: Authorization failure**

ReadOnly

  • Can read, but cannot create or modify anything within the databases or collections.
> export BOUNDARY_ADDR=https://7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud
> boundary authenticate oidc -auth-method-id amoidc_kFalRmMWZw
Opening returned authentication URL in your browser...
https://dev-q6ml3431eugrpfdc.us.auth0.com/authorize?client_id=OHgDoLbq0K4H71o51CumJ2QtpYi6asF6&max_age=0&nonce=Ir8lkiMGnYvH8qdAxQ3G&redirect_uri=https%3A%2F%2F7d018f9c-ee36-4ac4-90b6-10b465770d5c.boundary.hashicorp.cloud%2Fv1%2Fauth-methods%2Foidc%3Aauthenticate%3Acallback&response_type=code&scope=openid&state=NFSM7Lhse1G7kKVVzF3UHanpwUraXzyc367zZrMRr3K3jmAv5oQinNXkBZvDiC8H1wCBWkuk1Ekw8LjF3drJB9gbg8ZQpTqixQMAfsLfGQuz63ETR1qSFcH6Y5bB6kipx2DGf2YSwARq3moiPKMpHLCTb8xaV3D4GCS1bETL6rKBY65Zdk6HcKvpEccpGF4V2jTQY5RFVSTTattGPEvuy1nDkaqnYFufj67VYgbCXLTo5JQhBcTVeeo5msrL4UAHiyNp4rGbqNLuU4UyaF6LGLYvUYbDxBmb9JmEod1gjgv3T6QVV35PmusXyhYWjZ1UJSHyyEhjETDrwpgXCcokghSYEb2VBuW1LbFjXTgCy9gdazkC2kYMdSCLGTD5z9V4TdrgCUhk9tkYCaHtABFb9mAmNV7f9vgf4xV1a5C4ipVvr7rKDeBawRc9wHYcfCfhBGDL
Authentication information:
Account ID: acctoidc_dXQcGlVZXS
Auth Method ID: amoidc_kFalRmMWZw
Expiration Time: Thu, 22 Feb 2024 09:36:11 CET
User ID: u_X93PKT600F
The token name "default" was successfully stored in the chosen keyring and is not displayed here.
> boundary targets list -recursive
Target information:
ID: ttcp_cmdRTvMpOX
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: DocumentDB ReadOnly Access
Description: DocumentDB: readAllDBs
Authorized Actions:
read
authorize-session
ID: ttcp_y5a14LzaOT
Scope ID: p_3qnPearCT1
Version: 3
Type: tcp
Name: RDS ReadOnly Access
Description: RDS: SELECT
Authorized Actions:
read
authorize-session

> eval "$(boundary targets authorize-session -id ttcp_cmdRTvMpOX -format json | jq -r '.item | "export BOUNDARY_SESSION_TOKEN=\(.authorization_token) BOUNDARY_SESSION_USERNAME=\(.credentials[0].secret.decoded.username) BOUNDARY_SESSION_PASSWORD=\(.credentials[0].secret.decoded.password)"')"
> boundary connect -exec mongosh -authz-token=$BOUNDARY_SESSION_TOKEN -- --tls --host {{boundary.addr}} --username $BOUNDARY_SESSION_USERNAME --password $BOUNDARY_SESSION_PASSWORD --tlsAllowInvalidCertificates --retryWrites false

test> db.runCommand({connectionStatus : 1})
{
authInfo: {
authenticatedUsers: [
{
user: 'v-token-token-read_only-Tpyaf33LpBFFHesRysey-1707287972',
db: 'admin'
}
],
authenticatedUserRoles: [ { role: 'readAnyDatabase', db: 'admin' } ]
},
ok: 1,
operationTime: Timestamp({ t: 1707288068, i: 1 })
}
test> use mydatabase3
switched to db mydatabase3
mydatabase3> db.createCollection("mycollection3")
**MongoServerError: Authorization failure**
mydatabase3> use mydatabase2
switched to db mydatabase2
mydatabase2> db.mycollection.insertOne({
... name: 'John Doe',
... age: 30,
... city: 'New York'
... })
**MongoServerError: Authorization failure**
mydatabase2> db.runCommand(
... {
... listCollections: 1.0,
... authorizedCollections: true,
... nameOnly: true
... }
... )
{
waitedMS: Long('0'),
cursor: {
firstBatch: [ { name: 'mycollection2', type: 'collection' } ],
id: Long('0'),
ns: 'mydatabase2.$cmd.listCollections'
},
ok: 1,
operationTime: Timestamp({ t: 1707288297, i: 1 })
}
mydatabase2> db.mycollection2.find()
[
{
_id: ObjectId('65c327e8feed21cc028b8bd7'),
name: 'John Doe',
age: 30,
city: 'New York'
}
]
mydatabase2> db.mycollection2.drop()
**MongoServerError: Authorization failure**
mydatabase2> db.dropDatabase()
**MongoServerError: Authorization failure**
mydatabase2> db.getUsers()
**MongoServerError: Authorization failure**

Summary

We have seen how HashiCorp Boundary and HashiCorp Vault together can provide granular access to hosts with only the permissions each end user requires for their role, following the principle of least privilege, while keeping your network secure with a consistent workflow.

  • Boundary takes care of:
    - Providing a catalog of resources tailored to each user or group.
    - Keeping the network and targets secure: broad network visibility and lateral movement are not an option.
    - Controlling session duration and integrating with Vault for credential management.
  • Vault takes care of:
    - Creating dynamic credentials for specific roles, and controlling the lifecycle of those credentials and their associated leases.
