Configuring HashiCorp Boundary on Kubernetes to Access Databases

Dionathan Chrys
7 min read · Jan 11, 2024


HashiCorp Boundary

What is HashiCorp Boundary?

HashiCorp Boundary provides access to different systems through a unified user identity. With just one login, it’s possible to grant access to instances (SSH or RDP), databases, Kubernetes clusters, and even HTTP endpoints that require authentication.

Advantages

Imagine an environment where you have a PostgreSQL database, a Linux instance, and a Kubernetes cluster. Without unified identity management, you would need to create credentials for each of these systems. With Boundary, however, a single login can give you access to all of them once you create the necessary profiles and permissions.

Let’s get started!

I created a repository on GitHub where I implemented a Proof of Concept (POC) to access databases, thoroughly testing Boundary’s functionality.

https://github.com/dionathanchrys/poc-boundary

Now, let me explain how to use it on your machine, simulating three different environments. Yes, you read that correctly: we won’t use dev mode. Instead, we’ll set up three clusters (hope your machine can handle it 🤯).

Requirements (all of them are used in the steps below):

  • Docker
  • Kind
  • kubectl
  • Boundary CLI

After cloning the repository, execute the shell script that will create all the environments.

While in the repository’s root, run the “create-envs.sh” script located in the “app/kind-cluster/base” folder:

app/kind-cluster/base/create-envs.sh

It will perform the following actions:

  • Create a Docker network so the clusters can communicate with each other.
  • Create clusters using Kind.
  • Apply Kubernetes manifests, creating Boundary components (controller, workers, and database) and test databases (simulating an application called foo).

After execution, we’ll have three different clusters simulating service, development and production environments.

Note: Between applying the Deployments and the Jobs, the script runs “kubectl wait” to block until the resources are ready. It has a timeout of 120 seconds; feel free to increase it if needed.
To delete everything and run the script again without any issues, use the following command:
docker rm -f svc-cluster-control-plane dev-cluster-control-plane prd-cluster-control-plane && docker network rm cluster-kind

Confirm that the containers (clusters) are running using the command:

❯ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fb7d586e3bd kindest/node:v1.25.3 "/usr/local/bin/entr…" 13 minutes ago Up 12 minutes 127.0.0.1:44133->6443/tcp prd-cluster-control-plane
dca90dcb21be kindest/node:v1.25.3 "/usr/local/bin/entr…" 14 minutes ago Up 13 minutes 127.0.0.1:37027->6443/tcp dev-cluster-control-plane
133d9eb5909e kindest/node:v1.25.3 "/usr/local/bin/entr…" 14 minutes ago Up 14 minutes 127.0.0.1:44399->6443/tcp svc-cluster-control-plane

Check the terminal output of the Boundary migration job: it prints the admin password and also saves it in the repository. Open the following address in your browser and log in with the obtained credentials:

http://192.168.222.10:32200

Boundary Login Screen

After logging in, open the Workers menu; it should display the two authenticated workers.

Workers

Return to the Orgs menu and create a new organization (there’s already an example one for exploration). Click the ‘New Org’ button and give it a creative name.

Creating a new Org

Now, create projects within the organization (Org). Click the Projects menu and then New.

Creating a new Project

Create a project for each environment for our imaginary app foo: development (dev) and production (prd).
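For reference, the same scopes can also be created from the Boundary CLI instead of the web UI. This is only a sketch: the names are illustrative and the org ID placeholder must be replaced with the ID returned in your own installation.

```shell
# Authenticate first (see the CLI section later in this article).

# Create the org under the global scope; the name is illustrative.
boundary scopes create -scope-id global -name "foo-org"

# Create one project per environment inside the org.
# Replace o_REPLACE_ME with the org ID returned by the command above.
boundary scopes create -scope-id o_REPLACE_ME -name "foo-dev"
boundary scopes create -scope-id o_REPLACE_ME -name "foo-prd"
```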

New projects

Enter the dev project and go to Credential Store.

Insert the database credentials so that when connecting to the Target via Boundary’s CLI, it will use these credentials. (Don’t worry, we’ll see all this later.)

Creating Credential Store

Create a Static Credential Store.

Static Credential

After creating it, go to Credentials.

New credential

Insert a name, use type “Username & Password,” and enter the database credentials found in the Kubernetes ConfigMap:
app/foo-postgres/overlay/dev/kustomization.yaml (line 15)

Inserting credentials
Credentials created
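These two steps can also be scripted with the CLI. A sketch, assuming the project and store ID placeholders are replaced with your own values; the env:// reference for the password is how recent Boundary CLI versions read sensitive values without putting them in shell history.

```shell
# Create a static credential store in the dev project
# (replace p_REPLACE_ME with your dev project ID).
boundary credential-stores create static \
  -scope-id p_REPLACE_ME \
  -name "foo-dev-store"

# Add a username/password credential to the store
# (replace csst_REPLACE_ME with the store ID returned above).
# FOO_DB_PASSWORD holds the value from the ConfigMap mentioned above.
boundary credentials create username-password \
  -credential-store-id csst_REPLACE_ME \
  -name "foo-db-credentials" \
  -username postgres \
  -password env://FOO_DB_PASSWORD
```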

Now, go to the Targets menu and then New.

Targets

When adding the target, let’s get straight to the point!

  • Target Address: It’s shown as optional because you can configure addresses through Host Catalogs and Host Sets. However, let’s configure the address directly on the target. In this case, we’ll configure the Kubernetes database service name for dev. Run the following command to check:
❯ kubectl get services --context kind-dev-cluster -n foo-app
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo-postgres-svc ClusterIP 10.96.45.70 <none> 5432/TCP 25 minutes

The service name is “foo-postgres-svc”. Since we’re accessing it from one namespace to another within the same cluster, we need to provide the complete service address, composed as “service.namespace.svc.cluster.local”. Ours will look like this: foo-postgres-svc.foo-app.svc.cluster.local
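The fully qualified name is just those pieces concatenated; a quick shell sketch of the composition:

```shell
# Compose the in-cluster DNS name for the dev database Service.
SVC=foo-postgres-svc   # Kubernetes Service name
NS=foo-app             # namespace the Service lives in
DB_FQDN="${SVC}.${NS}.svc.cluster.local"
echo "${DB_FQDN}"      # foo-postgres-svc.foo-app.svc.cluster.local
```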

  • Default Port: Port for accessing our database, in our case, it’s 5432, the default for PostgreSQL.
Creating target

Further down, there’s the Workers configuration. As we have “multiple environments,” we need to tell the Controller which Worker should be used to connect to the Target. This is done through worker tags.

Our structure’s design is as follows, based on this documentation page:

Example accessing a target in the DEV environment

Returning to the Worker tag configuration within the Target, we need to include the Worker’s tag. The tags were configured on line 17 of the DEV file:

“app/boundary-worker/overlay/dev/config/pki-worker.hcl” in DEV:

tags {
env = ["dev"]
type = ["egress"]
}

Here in the documentation, you can find more details about worker tags.

In the filter field, use the expression shown below:

"dev" in "/tags/env"
Worker’s tags
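Putting the address, default port, and worker filter together, the whole target could also be created in one CLI call. A sketch, assuming the `-egress-worker-filter` flag available in recent Boundary versions; the project ID is a placeholder.

```shell
# Create the dev target with its address, port and egress worker filter
# (replace p_REPLACE_ME with your dev project ID).
boundary targets create tcp \
  -scope-id p_REPLACE_ME \
  -name "foo-db-dev" \
  -address foo-postgres-svc.foo-app.svc.cluster.local \
  -default-port 5432 \
  -egress-worker-filter '"dev" in "/tags/env"'
```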

After creating the target, go back to it and proceed to brokered credentials.

Adding Brokered Credentials

Associate them.

Associating Brokered Credentials
Brokered Credentials Associated
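The association can also be done from the CLI with `boundary targets add-credential-sources`. The IDs below are illustrative placeholders, not values from this POC:

```shell
# Attach the static credential to the target as a brokered credential
# (replace the IDs with your own target and credential IDs).
boundary targets add-credential-sources \
  -id ttcp_REPLACE_ME \
  -brokered-credential-source credup_REPLACE_ME
```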

Now, repeat the process for the PRD environment by creating credentials, the target, and associating them.

Now, let’s see it in action!

Now that everything is configured, let’s access it. We’ll use the Boundary CLI; we could use the desktop app instead, but then we couldn’t use the brokered credentials, which would defeat the purpose of this article. Of course, we could integrate with Vault and further improve password management, but that will be for another time.

Let’s go to the terminal:

boundary authenticate password \
-addr http://192.168.222.10:32200 \
-keyring-type secret-service \
-auth-method-id ampw_eSdigWzNW1 \
-scope-id o_KV9tApRJlT

Use the same credentials used to log in to the web interface.

To understand the parameters better, you can run the following command:
“boundary authenticate --help” or consult the documentation.

The IDs will change with each installation. You can look them up in the web UI: the auth method ID appears under the org’s Auth Methods, and the scope ID under Orgs.

All these parameters can be configured with environment variables, which speeds up the login process (I suggest you do this). To check the variable names, use the following commands:

  • boundary authenticate --help
  • boundary authenticate password --help
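For example, the flags used above map to environment variables. A sketch, assuming the variable names reported by `--help` on recent versions (confirm them on your own version, since names can change between releases; the IDs are from this POC and will differ in yours):

```shell
# These exports replace the corresponding CLI flags.
export BOUNDARY_ADDR=http://192.168.222.10:32200
export BOUNDARY_KEYRING_TYPE=secret-service
export BOUNDARY_AUTH_METHOD_ID=ampw_eSdigWzNW1   # your ID will differ
export BOUNDARY_SCOPE_ID=o_KV9tApRJlT            # your ID will differ

# Now a plain "boundary authenticate password" is enough.
```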

If you’ve logged in successfully, you’ll get the following output:

❯ boundary authenticate password \
-addr http://192.168.222.10:32200 \
-keyring-type secret-service \
-scope-id o_KV9tApRJlT \
-auth-method-id ampw_eSdigWzNW1
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):

Authentication information:
Account ID: acctpw_nIvOD6rkRR
Auth Method ID: ampw_eSdigWzNW1
Expiration Time: Tue, 16 Jan 2024 21:46:54 -03
User ID: u_d0hKJ0jcXH
The token was successfully stored in the chosen keyring and is not displayed here.

Now we can connect to the target. Let’s use the command:

boundary connect postgres \
-addr http://192.168.222.10:32200 \
-keyring-type secret-service \
-target-id ttcp_lwgSUO7dSG \
-dbname postgres

If you want to know more, use:
“boundary connect --help”

If everything is successful, you’ll get the following output:

❯ boundary connect postgres \
-addr http://192.168.222.10:32200 \
-keyring-type secret-service \
-target-id ttcp_lwgSUO7dSG \
-dbname postgres
psql (14.10 (Ubuntu 14.10-0ubuntu0.22.04.1), server 13.13 (Debian 13.13-1.pgdg120+1))
Type "help" for help.

postgres=#

This means you’re inside the database without having entered credentials during login, because Boundary used the ones we associated earlier. In a real scenario, the person accessing the database never comes into contact with the credentials; you’ll need to configure roles and groups so that they only have access to the appropriate targets.
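As a sketch of that last point, access can be restricted with roles and grants. The role name, grant string, and IDs below are illustrative, not part of this POC:

```shell
# Create a role in the dev project that only allows working with targets
# (replace p_REPLACE_ME with the project ID).
boundary roles create \
  -scope-id p_REPLACE_ME \
  -name "foo-dev-db-access"

# Allow listing targets and starting sessions, nothing else
# (replace r_REPLACE_ME with the role ID returned above).
boundary roles add-grants \
  -id r_REPLACE_ME \
  -grant 'ids=*;type=target;actions=list,authorize-session'

# Attach the role to a user
# (replace u_REPLACE_ME with the user ID).
boundary roles add-principals \
  -id r_REPLACE_ME \
  -principal u_REPLACE_ME
```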

Conclusion

We presented a Proof of Concept (POC) that simulates multiple environments in a simple way. However, in a real scenario, with OIDC for login and Vault to store passwords, it becomes much more interesting.

In the future, I plan to implement more things in this POC, I’ve listed them in the repository’s issues. If you have any suggestions or encounter any issues, feel free to open an issue on GitHub.

I hope I’ve helped you in some way, if you have any questions, leave a comment.


Dionathan Chrys

DevOps Analyst | AWS Cloud Practitioner Certified | Kubernetes | Docker | Azure Pipelines