Access the Vault — A Mission for Terraform Cloud Agents

Rob Jackson
HashiCorp Solutions Engineering Blog
6 min read · Dec 16, 2021

With the skyrocketing usage of Terraform Cloud for Business, the question has arisen more than once of how to use Terraform Cloud in conjunction with resources in a logically isolated and secure network, whether in a public cloud or a private data center. The beauty of the Terraform Cloud Agent is that it resides within your secured environment and reaches out to Terraform Cloud, using secure tokens for authentication and authorization. Therefore, Terraform Cloud doesn’t have to reach into your secure environment. But what if you are utilizing HashiCorp Vault to secure, or better yet dynamically generate, the secrets that Terraform requires to build infrastructure?

With Terraform Cloud Agents, we can manage our infrastructure with Terraform Cloud while consuming credentials provided by Vault on a private, secured network. Let’s look at how to do this.

Getting Cloud Agents to Talk to Vault

The use case we’re going to focus on here is the dynamic generation of credentials. After all, I recall hearing several times over my career and personal life that the best way to secure credentials is to ensure that they are short-lived. I’m also reminded of this as I’m periodically asked to change my password to something that I haven’t used in the last 600 months, with a minimum length of 32 characters, consisting of no less than 12 Pharaonic and Incan symbols. Of course, whatever you do, don’t write it down, and don’t check it into a public repository! Yup, that happens more often than you might think, and before you know it somebody is mining Bitcoin on your dime.

Due to my limited number of dimes, I’m going to have Vault dynamically create my AWS credentials. The HashiCorp documentation has some excellent examples of configuring Vault’s AWS secrets engine to provide short-lived dynamic secrets. I wanted to merge these two trains of thought in hopes that they weren’t on the same track… why can’t I use the Terraform Cloud Agent to talk to my local Vault to acquire those secrets? Well, you can, and that’s why I’m writing this.

Before we start, I need to note that I took some liberties with respect to AWS IAM roles and permissions in order to achieve success, so kids, if you try this at home, be sure to have an “adult” help you with the proper policy structure.

Creating an Agent Pool

We’re going to start by creating an agent pool in which our agents can swim. I’m going to walk you through what I’ve done, but feel free to refer to HashiCorp’s official documentation if you require something more professional.

Within your organizational settings, select ‘Agents’ and create an agent pool, as shown in figure 1.

Figure 1: Creating an Agent Pool

The first step in creating a pool is to provide that pool with a meaningful name. I chose “cenote” because those are some beautiful pools, but as the pool is associated with a Terraform Workspace the name should be meaningful in some sense.

Figure 2: Name the Agent Pool

The next step in the creation of the Agent pool is to create the agent tokens. These tokens are what the individual Agent instances deployed within their respective environments use to authenticate to Terraform Cloud. As we create each Agent token, Terraform Cloud provides us with the instructions for setting up our Agents with the corresponding tokens.

Figure 3: Agent Tokens, or Token Agents

If you delay the process of setting up the Agent, be sure to save these instructions in a secure location (such as Vault), because once you move on, the token will no longer be readable. If you do lose a token, though, it’s easy to revoke it and create a replacement through the TFC user interface.
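I created my pool and tokens by clicking through the UI, but if you’d rather codify this step too, the tfe provider can manage both. Here’s a minimal sketch under my assumptions — the organization name “my-org” and the token description are placeholders for your own values:

terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

# An agent pool within the organization; agents authenticate into this pool.
resource "tfe_agent_pool" "cenote" {
  name         = "cenote"
  organization = "my-org" # placeholder
}

# A token for one agent; the sensitive value is exposed as
# tfe_agent_token.nas.token, so treat it like any other secret.
resource "tfe_agent_token" "nas" {
  agent_pool_id = tfe_agent_pool.cenote.id
  description   = "qnap-nas"
}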

Figure 4: Using Your Agent Tokens

Although we only need one Agent for our demonstration, swimming in a pool alone is rarely as fun as swimming with friends. So I created several Agent tokens and started a couple of Agents in my environment. One Agent is running on a QNAP Network Attached Storage (NAS) device, and another Agent is running on my local MacBook.
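For reference, this is roughly how I start an Agent as a container — a sketch assuming Docker and the hashicorp/tfc-agent image, with the token placeholder standing in for the value Terraform Cloud displayed in figure 3:

# Start a Terraform Cloud Agent container; TFC_AGENT_TOKEN authenticates it
# into the pool, and TFC_AGENT_NAME is the label shown in the TFC UI.
docker run -d --name tfc-agent \
  -e TFC_AGENT_TOKEN="<token from figure 3>" \
  -e TFC_AGENT_NAME="qnap-nas" \
  hashicorp/tfc-agent:latest

Once a container connects, its Agent appears in the pool.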

Figure 5: Agents Swimming in the Pool

Running the Agents

Now that we have our Agents running, we need to assign them missions. As noted before, a pool of Agents is assigned to one or more Terraform Workspaces. This tells Terraform Cloud that infrastructure management will be executed by the remote Agents, with Terraform Cloud orchestrating the application of the code. This provides the necessary centralized management and policy control without requiring an instance of Terraform deployed within each operating environment.

Within our Workspace settings, we can select the proper “Execution Mode” for our Terraform Workspaces… an action which always seems so ominous to me. When we select “Agent”, we are telling Terraform Cloud to use a particular Agent pool for the execution of the code.
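If you manage your workspaces with the tfe provider, this setting can be codified as well. Another sketch, again with placeholder names, building on the pool resource from earlier:

# A workspace that delegates execution to the "cenote" agent pool.
resource "tfe_workspace" "agent_demo" {
  name           = "agent-demo" # placeholder
  organization   = "my-org"     # placeholder
  execution_mode = "agent"
  agent_pool_id  = tfe_agent_pool.cenote.id
}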

Figure 6: Assigning the Pool to a Workspace

At this point all of the necessary pieces of the Agent and the Workspace have been configured. Let’s take a quick look at the environment.

Figure 7: High Level Layout

We have Terraform Cloud hosted by HashiCorp, which I’m using to manage my infrastructure. I’ve deployed a couple of Terraform Cloud Agents into the local network in my office. One of those Agents is running as a container on my local MacBook, and another is running on my QNAP NAS. These Agents are swimming in a pool that has been assigned to my Workspace, which is configured for the Agent execution mode. I also have HashiCorp Vault running on my NAS, inaccessible from the outside world.

So now we’ll test our little arrangement with some very simple Terraform code. The main parts of the code we’ll review here focus on the AWS and Vault access. We’ll start with the AWS provider definition, which identifies the region as well as the Access Key and Secret Key used for accessing AWS. Note here that instead of explicitly including the keys in the code (which one should NEVER do), we’ll access those keys from Vault as a data source.

provider "aws" {
  region     = var.aws_region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

The Vault provider is pretty straightforward. I’m supplying my Vault access token through an environment variable (VAULT_TOKEN), so the provider block really only needs the address of the Vault system. As you can see, I’m accessing Vault at a local address that points to my NAS. Note that I’m using HTTP, not HTTPS; that’s purely because I don’t have a signed certificate for Vault on my local network.

provider "vault" {
  address = "http://jacknas.local:8200"
}

The last bit we need is the Vault data source from which our temporary AWS keys will be generated. The “backend” identifies the path where I have my AWS secrets engine mounted, and the “role” identifies the AWS policy that is associated with the newly created credentials. Per the TTL, the dynamic credentials will only last 15 minutes.

data "vault_aws_access_credentials" "creds" {
  backend = "aws-personal"
  role    = "deity"
  ttl     = "15m"
}
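Of course, the Vault side has to exist before this data source can read from it. I set mine up by hand following the HashiCorp docs, but for completeness, here’s a sketch of an equivalent configuration using the vault provider. The variable names are placeholders, and the AdministratorAccess policy ARN is one of those “liberties” I mentioned earlier — have that “adult” swap in a tighter policy:

# Mount the AWS secrets engine at the path the data source expects.
resource "vault_aws_secret_backend" "personal" {
  path       = "aws-personal"
  access_key = var.vault_aws_access_key # long-lived key Vault uses to mint users (placeholder)
  secret_key = var.vault_aws_secret_key # placeholder
}

# The "deity" role determines what the dynamically created IAM user can do.
resource "vault_aws_secret_backend_role" "deity" {
  backend         = vault_aws_secret_backend.personal.path
  name            = "deity"
  credential_type = "iam_user"
  policy_arns     = ["arn:aws:iam::aws:policy/AdministratorAccess"] # placeholder; tighten this
}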

After running the plan, I can see the newly created user within my AWS account, along with the specified policy.

Figure 8: Temporary User Credentials In AWS

Conclusion

There we have it! Utilizing Terraform Cloud Agents, we can manage our infrastructure with Terraform Cloud while consuming credentials provided by Vault on a private, secured network. Although this example focused on Vault, the plethora of providers and data sources available within Terraform lets Q equip our Agents with all sorts of capabilities. Hopefully this helped demonstrate how Terraform Cloud Agents can extend the functionality of Terraform Cloud into private networks. And if you’re using Terraform Enterprise, have no fear… HashiCorp now supports Cloud Agents on Terraform Enterprise as well, extending the reach of a centralized Terraform Enterprise throughout your entire organization!
