Deploy HCP Vault & AWS Transit Gateways via Terraform

Andrew Klaas
HashiCorp Solutions Engineering Blog
8 min read · Sep 2, 2021

Summary: Set up AWS Transit Gateways with HashiCorp Terraform to enable network connectivity to HCP Vault.

With HashiCorp Cloud Platform (HCP), organizations can consume HashiCorp Vault and Consul as a managed service. Vault and Consul clusters traditionally require 3–5 virtual machines to run in a highly available configuration. Organizations can bypass the need to manage, update, or patch those machines by using HCP, which allows you to quickly provision Vault and Consul clusters without having to operationally support the required infrastructure.

As organizations with on-premises infrastructure begin to adopt hybrid cloud approaches with security platforms like HashiCorp Vault, some want to take advantage of the operational savings of HCP Vault for those on-premises use cases. To do so, they may leverage AWS Transit Gateways with VPN or Direct Connect enabled for on-premises connectivity. In this article, I’ll walk through a sandbox demo that showcases how a transit gateway can connect HCP Vault, a HashiCorp Virtual Network (HVN), and an example EC2 instance using HashiCorp Terraform.

A Note About Best Practices & On-premises Services

The best practice is to deploy Vault Enterprise in each datacenter with replication enabled: co-locating Vault with production workloads improves availability and minimizes latency. However, if an organization is comfortable tolerating the increased risk, using transit gateways in combination with VPNs or AWS Direct Connect enables network connectivity from on-premises locations to HCP Vault.

Demo Overview

In this demo, you will use Terraform’s HCP and AWS providers to set up a HashiCorp Virtual Network (HVN) and deploy an HCP Vault cluster in that HVN. You will then attach that HVN to your AWS account’s Transit Gateway to provide network connectivity with an EC2 instance.

Note: Although an EC2 instance is used as a placeholder target here, you can also use this setup as a framework for building connectivity to on-premises infrastructure via the Transit Gateway.

AWS Transit Gateways provide an efficient method for routing network traffic between VPCs, VPNs, Direct Connect links, and more. They enable a hub-and-spoke networking model instead of complex peering architectures. Large organizations that consume HCP can use transit gateways to simplify their AWS networking infrastructure and provide both on-premises and cloud locations with network connectivity to HCP Vault.

Your EC2 instance will consume secrets from the HCP Vault cluster through your transit gateway. This diagram shows the demo network architecture between the EC2 instance and the HCP Vault cluster.

Let’s open up the demo repository and start reviewing the configuration that Terraform will use to provision this architecture.

Terraform Configuration Overview

While this section isn’t necessary to run the demo, it will help you understand how the AWS and HCP Terraform providers work together to build this demo infrastructure and give you a deeper understanding for future customization. You can skip to the Demo Deployment section if you’d like to start running the demo right away.

main.tf

Inspect the “main.tf” Terraform file.

provider "hcp" {}

provider "aws" {
  region = var.region

  default_tags {
    tags = {
      Name  = var.Name
      owner = var.owner
      TTL   = var.TTL
    }
  }
}

The first block (empty) is for the HCP provider, which is used to create the HashiCorp Virtual Network and HCP Vault cluster.

The second block configures the AWS provider, which is used to create AWS resources such as the transit gateway, VPC, subnets, routes, security groups, and virtual machines.

I’ve included a useful Terraform feature called default tags in the AWS provider block. This provides an easy way to apply a standard set of tags to every AWS resource created with Terraform.

HCP Configuration

To find the HCP configuration, open up HVN.tf.

A HashiCorp Virtual Network is defined there. An HVN is a fundamental abstraction that makes HashiCorp Cloud Platform (HCP) networking possible. An HVN allows you to delegate an IPv4 CIDR range to HCP, which the platform then uses to automatically create a VPC on AWS.

resource "hcp_hvn" "example_hvn" {
  hvn_id         = "${var.Name}-example-hvn"
  cloud_provider = "aws"
  region         = var.region
  cidr_block     = var.hvn_cidr
}

And here I’ve defined the HCP Vault cluster.

resource "hcp_vault_cluster" "example_vault_cluster" {
  hvn_id          = hcp_hvn.example_hvn.hvn_id
  cluster_id      = "${var.Name}-vault-cluster"
  public_endpoint = false
  tier            = "dev"
}

I’ve defined an AWS Transit Gateway with its default route table association and propagation settings in order to connect the HashiCorp HVN to your own AWS network. Other configuration options are listed here.

resource "aws_ec2_transit_gateway" "example" {
}
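The empty block relies on the AWS provider defaults. For reference, here is a sketch of the same resource with the relevant route table defaults written out explicitly (the description is my own addition, not part of the repo):

```hcl
resource "aws_ec2_transit_gateway" "example" {
  description                     = "HCP Vault demo transit gateway" # illustrative
  default_route_table_association = "enable"                         # AWS default
  default_route_table_propagation = "enable"                         # AWS default
}
```

With association and propagation enabled, new attachments (including the HVN attachment created below) are automatically wired into the gateway's default route table.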

I’ve also defined an AWS resource share so you can connect the HVN to your own AWS account’s infrastructure.

resource "aws_ram_resource_share" "example" {
  name                      = "example-resource-share"
  allow_external_principals = true
}

resource "aws_ram_principal_association" "example" {
  resource_share_arn = aws_ram_resource_share.example.arn
  principal          = hcp_hvn.example_hvn.provider_account_id
}

resource "aws_ram_resource_association" "example" {
  resource_share_arn = aws_ram_resource_share.example.arn
  resource_arn       = aws_ec2_transit_gateway.example.arn
}

resource "hcp_aws_transit_gateway_attachment" "example" {
  depends_on = [
    aws_ram_principal_association.example,
    aws_ram_resource_association.example,
  ]

  hvn_id                        = hcp_hvn.example_hvn.hvn_id
  transit_gateway_attachment_id = "${var.Name}-tgw-attachment"
  transit_gateway_id            = aws_ec2_transit_gateway.example.id
  resource_share_arn            = aws_ram_resource_share.example.arn
}

resource "aws_ec2_transit_gateway_vpc_attachment_accepter" "example" {
  transit_gateway_attachment_id = hcp_aws_transit_gateway_attachment.example.provider_transit_gateway_attachment_id
}

Next, I’ve defined an HVN route that sends traffic bound for your VPC through the transit gateway attachment.

resource "hcp_hvn_route" "route" {
  hvn_link         = hcp_hvn.example_hvn.self_link
  hvn_route_id     = "${var.Name}-hvn-to-tgw-attachment"
  destination_cidr = aws_vpc.example.cidr_block
  target_link      = hcp_aws_transit_gateway_attachment.example.self_link
}

Let’s look at the network & compute resources.

AWS Configuration

Open aws.tf. First, I’ve defined the VPC and subnet referenced in the HVN route setup earlier.

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "my_subnet" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = var.vpc_cidr
  availability_zone = var.az
}

I attached the subnet to the transit gateway from earlier.

resource "aws_ec2_transit_gateway_vpc_attachment" "example" {
  subnet_ids         = [aws_subnet.my_subnet.id]
  transit_gateway_id = aws_ec2_transit_gateway.example.id
  vpc_id             = aws_vpc.example.id

  depends_on = [
    aws_ec2_transit_gateway.example,
  ]
}

Then I defined routes so traffic flows to the HCP Vault cluster and HVN from your own VPC.

resource "aws_main_route_table_association" "main-vpc" {
  vpc_id         = aws_vpc.example.id
  route_table_id = aws_route_table.main-rt.id
}

resource "aws_route_table" "main-rt" {
  vpc_id = aws_vpc.example.id

  route {
    cidr_block         = var.hvn_cidr
    transit_gateway_id = aws_ec2_transit_gateway.example.id
  }

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.vpc-igw.id
  }

  depends_on = [
    aws_ec2_transit_gateway.example,
    aws_ec2_transit_gateway_vpc_attachment.example,
  ]
}

Last, I defined a security group, EC2 instance, and keypair.

resource "aws_instance" "test-instance" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.my_subnet.id
  vpc_security_group_ids      = [aws_security_group.main-vpc-sg.id]
  key_name                    = aws_key_pair.test-tgw-keypair.key_name
  associate_public_ip_address = true
  user_data                   = data.template_file.init.rendered
}

data "template_file" "init" {
  template = file("${path.module}/init.tpl")
}

resource "aws_key_pair" "test-tgw-keypair" {
  key_name   = "${var.Name}-keypair"
  public_key = var.public_key
}
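The security group (`aws_security_group.main-vpc-sg`) and internet gateway (`aws_internet_gateway.vpc-igw`) referenced above are defined in the repo but not shown here. A minimal sketch of what they might look like (the specific ingress and egress rules below are assumptions, not the repo's exact definitions):

```hcl
resource "aws_internet_gateway" "vpc-igw" {
  vpc_id = aws_vpc.example.id
}

resource "aws_security_group" "main-vpc-sg" {
  name   = "${var.Name}-sg"
  vpc_id = aws_vpc.example.id

  # SSH access to the test instance (open to the world for demo purposes only)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic, including Vault API calls to the HVN on port 8200
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

In a real deployment you would restrict the SSH ingress rule to your own CIDR range.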

Now that you have a better understanding of the configuration for the HCP and AWS Terraform providers, you can move on to the demo.

Note: This HashiCorp Learn tutorial walks through the underlying network setup described above via AWS CLI commands as opposed to Terraform.

Demo Deployment

To begin the demo, clone my GitHub repo, then sign up for an HCP account.

Set up billing or use the free default signup credits to get started. You will be using a “dev” tier Vault cluster in this scenario; this tier is not intended for production use. See the following link for pricing details.

Next, create a “service principal” and “service principal key” for authenticating Terraform to the HCP provider. Here are the instructions.

Once you have those credentials, set them as environment variables for Terraform. You will also need an access key and secret access key for the AWS account you will be provisioning in; set your AWS credentials as environment variables as well.

Never place access keys in code that could potentially be shared publicly.

export HCP_CLIENT_ID=K... 
export HCP_CLIENT_SECRET=9...
export AWS_ACCESS_KEY_ID=A...
export AWS_SECRET_ACCESS_KEY=v...
export AWS_SESSION_TOKEN=I...

git clone https://github.com/Andrew-Klaas/hcp-vault-demo.git
cd hcp-vault-demo

Copy the `terraform.tfvars.example` file to `terraform.tfvars` and fill in the values. The tags are not functionally necessary, but it is good practice to tag all resources.

region = "us-west-2"
az     = "us-west-2a"

//Instance Tags
Name  = "YOUR-NAME"
owner = "YOUR-EMAIL"
TTL   = 48

//Your public key will be uploaded to the machine for SSH access
public_key = "ssh-rsa AA...."

Your SSH public key will be uploaded to the provisioned EC2 instance. This allows you to SSH to the EC2 instance and connect to your Vault cluster.

Initialize Terraform. This downloads any necessary provider plugins.

$ terraform init

Once initialized, `terraform plan` should give feedback on the number of resources that will be provisioned.

$ terraform plan
. . .
Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ PUBLIC_IP = (known after apply)
+ VAULT_ADDR = (known after apply)

Run `terraform apply`. The deployment may take upwards of 10 minutes while the HCP Vault cluster is created.

$ terraform apply --auto-approve;
. . .
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Outputs:

PUBLIC_IP = "34.262.139.5"
VAULT_ADDR = "https://my-demo-vault-cluster.private.vault.2...3.aws.hashicorp.cloud:8200"

Note the output of the “terraform apply” command. The “PUBLIC_IP” value is the address of the Vault client EC2 instance you will use. The “VAULT_ADDR” output is the address of your new Vault HCP cluster. The cluster is private by default, so you will only be able to reach it from within your EC2 instance. Access can be configured here.

SSH into your new EC2 instance.

ssh -i ~/.ssh/id_rsa ubuntu@34.262.139.5

Once logged in, set the “VAULT_ADDR” environment variable using the URL that was provided in the outputs of your `terraform apply`.

export VAULT_ADDR="https://<VAULT_ADDR>"

Check Vault’s status.

ubuntu@ip-10-0-1-182:~$ vault status
Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.7.3+ent
Storage Type             raft
Cluster Name             vault-cluster-7cb9a968
Cluster ID               d5da950f-e503-a1fc-ea29-f657348f9f5a
HA Enabled               true
HA Cluster               https://172.25.17.129:8201
HA Mode                  active
Active Since             2021-07-29T14:20:10.716499428Z
Raft Committed Index     522
Raft Applied Index       522
Last WAL                 153

Congratulations! You now have a full working HCP Vault sandbox environment.

Navigate to your Vault cluster in the HCP portal and create an admin token so you can log in to your cluster. Full instructions are located here.

Copy the token to your EC2 instance and log in to Vault. The CLI will prompt you for the copied token.

$ vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                s.QRg9CTfjZ55XNZNw9sR8loiG.FPdUt
token_accessor       xLZEHbiOb85tRrv91jGG7mIV.FPdUt
token_duration       5h59m48s
token_renewable      false
token_policies       ["default" "hcp-root"]
identity_policies    []
policies             ["default" "hcp-root"]

You can now start interacting with your Vault cluster.
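As a first interaction, you could enable a KV version 2 secrets engine and store a secret, either from the Vault CLI on the EC2 instance or with Terraform's Vault provider. Here is a sketch of the Terraform approach, assuming VAULT_ADDR and VAULT_TOKEN are set in the provider's environment; the mount path, secret name, and values below are illustrative:

```hcl
provider "vault" {
  # Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment
}

# Enable a KV version 2 secrets engine at "secret/"
resource "vault_mount" "kv" {
  path = "secret"
  type = "kv"
  options = {
    version = "2"
  }
}

# Write an example secret (illustrative values only)
resource "vault_kv_secret_v2" "demo" {
  mount = vault_mount.kv.path
  name  = "demo-app"
  data_json = jsonencode({
    username = "demo"
    password = "example-password"
  })
}
```

The EC2 instance could then read the secret through the transit gateway with `vault kv get secret/demo-app`.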

Next Steps & Conclusion

So, what’s next? We’ve barely scratched the surface of what’s possible. Vault supports dozens of community plugins, auth methods, secret engines, and more.

As a next step, I recommend reviewing the Vault AWS authentication method. This integration uses built-in AWS IAM resources to authenticate AWS workloads to Vault.

There are also several valuable Vault guides and walkthroughs on the HashiCorp Learn platform including topics such as using Vault Agent with AWS.
