Terraform Cloud Project Bootcamp with Andrew Brown — Restructure Root Module

Gwen Leigh
5 min read · Sep 27, 2023


This article is part of my Terraform journey through the Terraform Bootcamp by Andrew Brown and Andrew Bayko, with Chris Williams and Shala.

My wild career adventure into Cloud Computing continues with the Andrews, and I highly, highly recommend you come join us if you are interested. Check out the Andrews’ free YouTube learning content. You can also buy their paid courses to support their good cause.

In this video, we refactor our root module into Terraform’s standard root module structure, which looks like the following:

PROJECT_ROOT
├── variables.tf       # stores the structure of input variables
├── providers.tf       # defines required providers and their configurations
├── outputs.tf         # stores the outputs
├── terraform.tfvars   # the variable values we want to load into our Terraform project
├── main.tf            # everything else
└── README.md          # required for root modules

From our current project structure, we need to generate the following files. Use the commands below:

code outputs.tf
code providers.tf
code terraform.tfvars
code variables.tf

At the moment, our Terraform code is one big clutter in main.tf (below). We will refactor it into the files we generated above.

terraform {
  # backend "remote" {
  #   hostname     = "app.terraform.io"
  #   organization = "gwenleigh"
  # }

  # workspaces {
  #   name = "terra-house"
  # }

  cloud {
    organization = "gwenleigh"

    workspaces {
      name = "terra-house"
    }
  }
}


provider "aws" {

}

provider "random" {
# keepers = {
# # Generate a new ID each time we switch to a new AMI ID.
# ami_id = var.ami_id
# }

# byte_length = 8
}

resource "random_string" "bucket_name" {
length = 16
special = false
lower = true
upper = false
# override_special = "!@#$%&*()-_=+[]{}<>:?"
}

resource "aws_s3_bucket" "example" {
bucket = random_string.bucket_name.result

tags = {
Name = "My bucket"
Environment = "Dev"
}
}

output "random_bucket_name_result" {
value = random_string.bucket_name.result
}
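After the refactor, the same code is spread across the new files roughly as follows. This is a sketch of the split, based on the file descriptions in the structure above; main.tf keeps the resources themselves:

```hcl
# providers.tf — the terraform block, cloud settings, and provider configurations
terraform {
  cloud {
    organization = "gwenleigh"

    workspaces {
      name = "terra-house"
    }
  }
}

provider "aws" {
}

provider "random" {
}

# outputs.tf — outputs only
output "random_bucket_name_result" {
  value = random_string.bucket_name.result
}
```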

Token expiration issues

In case you are having token issues (authentication failures, token expiration, etc.), you can get a new token issued on Terraform Cloud. The path is shown in the image below:

Generate a new token on Terraform Cloud.

Then copy the new token and store it in your Gitpod workspace or local machine using the following commands:

# Store the token in gitpod / local machine
export TERRAFORM_CLOUD_TOKEN="your_new_token"

# Gitpod only
gp env TERRAFORM_CLOUD_TOKEN="your_new_token"

Then generate a new Terraform credentials file with the token (./bin/generate_tfrc_credentials), then run terraform init.
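For reference, a script like generate_tfrc_credentials most likely just writes the Terraform CLI credentials file. A minimal sketch — the path and JSON layout follow Terraform's CLI credentials-file convention, but the real script's exact contents are an assumption:

```shell
# Sketch: write the Terraform CLI credentials file from the env var we set above.
# Path and layout follow Terraform's credentials.tfrc.json convention.
mkdir -p "$HOME/.terraform.d"
cat > "$HOME/.terraform.d/credentials.tfrc.json" <<EOF
{
  "credentials": {
    "app.terraform.io": {
      "token": "$TERRAFORM_CLOUD_TOKEN"
    }
  }
}
EOF
```

With this file in place, terraform init can authenticate against app.terraform.io without prompting for a token.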

User_uuid: No value for required variable

This error happens because our current backend is set to Terraform Cloud. Once we comment that out, with no cloud block and no remote backend configured, Terraform defaults back to the local backend. One point to note: defaulting back to the local backend requires a state “migration”.

Migrate to local backend state

First, we comment out the cloud configuration in providers.tf.

terraform {
  # backend "remote" {
  #   hostname     = "app.terraform.io"
  #   organization = "mariachiinajar"
  # }

-  cloud {
-    organization = "mariachiinajar"
-    workspaces {
-      name = "terra-house"
-    }
-  }

  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "3.5.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "5.16.2"
    }
  }
}

Then we comment out the user_uuid variables everywhere:

# main.tf
resource "aws_s3_bucket" "example" {
  bucket = random_string.bucket_name.result

  tags = {
-    UserUuid = var.user_uuid
  }
}

# variables.tf
- variable "user_uuid" {
-   description = "user's UUID"
-   type        = string
-
-   validation {
-     condition     = can(regex("^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-5][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}$", var.user_uuid))
-     error_message = "The user_uuid value is not a valid UUID."
-   }
- }

Then we want to run terraform destroy to make sure we tear down all the running resources prior to the migration.

  • terraform destroy --auto-approve

But we run into the following credential issue. This is because we migrated our state to Terraform Cloud: the state no longer lives on the local machine, so provisioning and deprovisioning happen via Terraform Cloud — but we haven’t granted Terraform Cloud any permissions to work with AWS resources. Hence the “No valid credential source found” error.

The solution for this is to feed Terraform Cloud our variables for AWS credentials. You can store the variables at the following path:

Terraform Cloud > Workspaces > Variables

When you store the variables, make sure to select ‘environment variable’ instead of ‘terraform variable’ and mark them as sensitive (this prevents the values from being displayed in the UI).

I’ve stored the three credential-related environment variables.
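For AWS, the three are most likely the standard credential variables the AWS provider reads. The names below follow the AWS convention; that these are the exact three used in the course is an assumption, and the values are placeholders:

```shell
# Standard environment variables the AWS provider picks up.
# In Terraform Cloud these go in Workspaces > Variables as environment
# variables; the values here are placeholders.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"
```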

Now, delete the file .terraform.lock.hcl and the folder .terraform.

So a quick summary here:

  • terraform { cloud {}} is gone.
  • .terraform.lock.hcl file and .terraform folder are removed.

This marks the disconnect between your local environment and Terraform Cloud. Now if you run Terraform, the state will be updated locally.

Now you are ready to start from scratch again. Run the following commands to initialize Terraform, then apply the plan. This time around, the newly generated .terraform.lock.hcl file and .terraform folder will be stored locally on your machine.

terraform init
tf plan -var user_uuid="your-uuid-from-exampro-platform"

Manually injecting the variable worked.

We have successfully migrated and provisioned resources locally with Terraform. This is the end of the tutorial. As usual, wrap up your work with the following workflow:

  • Commit changes to the feature branch
  • Merge the feature branch to main branch
  • Switch to main branch
  • Tag it (1.1.0)

Managing variables

You can store variables in the terraform.tfvars file or on Terraform Cloud. As for user_uuid, Andrew opts for Terraform Cloud, since we will need it later in the course.
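If you go the terraform.tfvars route instead, the file is just key = value assignments for declared variables, loaded automatically at plan/apply time. A sketch — the UUID below is a placeholder shaped to pass the validation regex shown earlier:

```hcl
# terraform.tfvars — values loaded automatically by terraform plan/apply
user_uuid = "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d"
```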


Gwen Leigh

Cloud Engineer to be. Actively building my profile, network and experience in the cloud space.