Getting Started with Terraform and AWS

Alvin Lucillo · Published in Nullify · Jun 11, 2023
Header image source: terraform-aws-modules

After earning an AWS Certified Solutions Architect — Associate certification, I became more interested in managing infrastructure resources. One thing that piqued my curiosity is the concept of IaC, or Infrastructure as Code. With the help of Terraform, you can manage your AWS resources in the form of code, which brings benefits such as version control. In this article, I’ll show you how to get started with managing AWS resources using Terraform and how to import existing resources into your project. Note that this is only a rudimentary look at the tool; there’s much more to unpack, which I’m looking forward to learning. I hope you are too! Let me know your experience and suggestions by commenting on this article.

You can find the repo that contains the Terraform files here:
https://github.com/alvinlucillo/terraform-getting-started

Setting up your environment

It goes without saying that you need an AWS account, but you don’t really need expert knowledge to get started. In the demo, we’ll just use S3 for simplicity.

  1. Install the AWS CLI and the Terraform CLI.
  2. Create (or pick) an existing S3 bucket to hold the state file. Update s3bucket in the Makefile with the bucket name.
  3. Inside the repo folder, run make init. This initializes your folder by preparing the initial plan file, downloading the Terraform provider plugin, and connecting to the backend (i.e., the S3 state bucket) using the credentials you set during the AWS CLI setup. You should see this on your terminal: Terraform has been successfully initialized!
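
For context, here is a minimal sketch of the backend and provider configuration that make init wires up. The bucket, key, and region values below are placeholders (assumptions on my part); check main.tf in the repo for the actual blocks.

# main.tf (backend and provider configuration -- illustrative sketch)

terraform {
  backend "s3" {
    # Remote state stored in S3. The backend block cannot use variables,
    # so the bucket is either a literal here or supplied at init time
    # (e.g., via -backend-config using the Makefile's s3bucket value).
    bucket = "your-terraform-state-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# AWS provider; the region comes from variables.tf / values.tfvars.
provider "aws" {
  region = var.region
}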

Project structure

Let’s take a glance at how the project is structured.

  1. Makefile — contains shortcut commands that execute Terraform operations (e.g., make apply). Check the file for an explanation of each command; a sketch of what such a Makefile might look like appears after this list.
  2. main.tf — defines the resource blocks and the backend and provider configuration. Resource blocks let you declare infrastructure resources and their properties.
  3. variables.tf — defines the variables used in the resource blocks, especially when there is iteration over objects.
  4. values.tfvars — assigns values to the variables.
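
The repo’s Makefile is the authoritative reference for the exact commands; the sketch below only approximates what the targets might look like, based on the commands echoed in the outputs later in this article. The s3bucket value and the use of -backend-config are assumptions.

# Makefile (illustrative sketch -- see the repo for the real targets)

s3bucket = your-terraform-state-bucket
TF_VARS  = -var="region=us-east-1" -var-file="values.tfvars"

init:
	# assumption: the state bucket is passed as a partial backend config
	terraform init -backend-config="bucket=$(s3bucket)"

validate:
	terraform validate

plan:
	terraform plan $(TF_VARS) -out=".state/terraform.plan"

apply:
	terraform apply .state/terraform.plan

import:
	# TF_ARGS holds the resource address and the AWS resource ID
	terraform import $(TF_VARS) $(TF_ARGS)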

Dive right in

This section demonstrates how to use the project and explains some basic Terraform concepts.

Importing an existing S3 bucket

The goal here is to allow Terraform to manage an existing S3 bucket.

1. In S3, we have an existing bucket with versioning disabled: terraform-project-files-test-1.

S3 bucket property ‘Bucket Versioning’ initially set as Disabled

2. In values.tfvars, the same bucket name is declared with versioning enabled. This means we want Terraform to enable the S3 bucket’s versioning. Notice that s3_map is defined in variables.tf, which we use to declare all the variables used in the Terraform project. Terraform files use HCL syntax. In the code snippet, we use the bucket name as the key to uniquely identify a resource. Later, we will see how this is used.

# values.tfvars

s3_map = {
  "terraform-project-files-test-1" = {
    versioning = "true"
  }
}

# variables.tf

variable "region" {
  type = string
}

variable "s3_map" {
  type = map(any)
}

3. Run make validate whenever you make changes to check that your HCL syntax is correct. If it’s successful, you should see: Success! The configuration is valid.

4. Now, let’s see what changes Terraform is going to make. Run make plan. You should see something like the output below. This command creates a plan (i.e., the set of changes to be made). The plan is based on what Terraform knows about your infrastructure from the state file (the current state) and what you declared in the Terraform files (the desired state).

# This is the 'make plan' output

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

  # aws_s3_bucket.s3_test["terraform-project-files-test-1"] will be created
  + resource "aws_s3_bucket" "s3_test" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = "terraform-project-files-test-1"
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)
    }

  # aws_s3_bucket_versioning.s3_bucket_versioning_test["terraform-project-files-test-1"] will be created
  + resource "aws_s3_bucket_versioning" "s3_bucket_versioning_test" {
      + bucket = (known after apply)
      + id     = (known after apply)

      + versioning_configuration {
          + mfa_delete = (known after apply)
          + status     = "Enabled"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Notice the following:

  • At the top, it says it will perform the create operation.
  • The bucket with the versioning property is listed.
  • At the end, it summarizes that it will add two resources: aws_s3_bucket and aws_s3_bucket_versioning

When you execute make apply:

  • It checks all the resource blocks. In the project, we have aws_s3_bucket and aws_s3_bucket_versioning.
  • Each resource has a name (e.g., s3_test) and can be referenced by other resources (e.g., the versioning block).
  • The s3_test resource reads the values of s3_map and iterates over each entry (see values.tfvars). The key is the bucket name.
  • The s3_bucket_versioning_test resource references s3_test and assigns a versioning configuration to each bucket, as shown in the main.tf snippet below.
# main.tf

resource "aws_s3_bucket" "s3_test" {
  for_each = var.s3_map

  bucket = "${each.key}"
}

resource "aws_s3_bucket_versioning" "s3_bucket_versioning_test" {
  for_each = aws_s3_bucket.s3_test

  bucket = each.value.id
  versioning_configuration {
    status = var.s3_map[each.key].versioning ? "Enabled" : "Suspended"
  }
}

5. Creating a resource is not the goal; we want to modify the existing S3 bucket’s versioning property. To do that, we need to import it. Run make import TF_ARGS="aws_s3_bucket.s3_test[\\\"terraform-project-files-test-1\\\"] terraform-project-files-test-1"

Your terminal output should have something like this:

# This is the 'make import ...' output

terraform import \
-var="region=us-east-1" -var-file="values.tfvars" \
aws_s3_bucket.s3_test[\"terraform-project-files-test-1\"] terraform-project-files-test-1
aws_s3_bucket.s3_test["terraform-project-files-test-1"]: Importing from ID "terraform-project-files-test-1"...
aws_s3_bucket.s3_test["terraform-project-files-test-1"]: Import prepared!
Prepared aws_s3_bucket for import
aws_s3_bucket.s3_test["terraform-project-files-test-1"]: Refreshing state... [id=terraform-project-files-test-1]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Let’s break down that command:

  • make import is the command in the Makefile
  • TF_ARGS is an environment variable where we put extra arguments
  • aws_s3_bucket is the resource type
  • s3_test is the resource name (the local name of the aws_s3_bucket resource block in main.tf that defines the S3 bucket)
  • terraform-project-files-test-1 (1st instance) is the instance key we want to assign to the S3 bucket we are importing. This key comes from s3_map, which the s3_test resource iterates over. Remember that we are informing Terraform that the S3 bucket we defined in values.tfvars already exists.
  • the double backslashes (\\) ensure a single backslash survives the shell’s string processing (the extra backslash ‘escapes’ a character inside the string literal). An equivalent command without this escaping is sketched after this list.
    \\ results in \
    \" results in "
    "aws_s3_bucket.s3_test[\\\"terraform-project-files-test-1\\\"] terraform-project-files-test-1" results in "aws_s3_bucket.s3_test[\"terraform-project-files-test-1\"] terraform-project-files-test-1"
  • terraform-project-files-test-1 (2nd instance) is the ID of the existing resource in AWS (for S3 buckets, the bucket name)
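
For reference, here is an equivalent direct command that skips the Makefile; it is only a sketch assuming a POSIX shell, where single quotes keep the brackets and inner double quotes intact:

terraform import \
  -var="region=us-east-1" -var-file="values.tfvars" \
  'aws_s3_bucket.s3_test["terraform-project-files-test-1"]' terraform-project-files-test-1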

In summary:

  • The import command retrieves the configuration of the aws_s3_bucket from AWS using terraform-project-files-test-1 (2nd instance) and records it in the project’s Terraform state under the s3_test resource with the key terraform-project-files-test-1

Your state file (terraform.tfstate) in S3 should contain something like the example below. Note that I redacted some properties, but you should see your existing bucket definition here; notice the key, the resource name (s3_test), and the versioning value (false). You can also inspect the state with Terraform’s state commands, shown after the example.

But why false? If you remember, the bucket’s versioning was initially set to that value (see the S3 bucket’s screenshot in step 1). This means that after the import, the state file reflects what currently exists in our AWS infrastructure. If you look at values.tfvars, we want it to be true, and that’s what we’re going to do next.

Example state file after the import (with redacted properties):

{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "s3_test",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "index_key": "terraform-project-files-test-1",
          "versioning": [
            {
              "enabled": false,
              "mfa_delete": false
            }
          ]
        }
      ]
    }
  ]
}
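
You don’t need to open the state file by hand to confirm the import; Terraform’s built-in state commands show the same information. For example, from the repo folder:

# List everything Terraform is now tracking
terraform state list

# Show the attributes recorded for the imported bucket
terraform state show 'aws_s3_bucket.s3_test["terraform-project-files-test-1"]'

terraform state show prints the recorded attributes, including the versioning information we just saw in the state file.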

6. Run make plan. Earlier, in step 4, Terraform informed us that it would create 2 resources. Now it’s down to just 1: the versioning resource. This is because Terraform already knows that the bucket we defined exists, but the versioning resource was not imported. By default, the bucket has versioning disabled. Your terminal should show something like the output below.

Notice that it takes the new status of the versioning (Enabled) and the bucket name.

# This is the 'make plan' output

terraform plan -var="region=us-east-1" -var-file="values.tfvars" -out=".state/terraform.plan"
aws_s3_bucket.s3_test["terraform-project-files-test-1"]: Refreshing state... [id=terraform-project-files-test-1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

  # aws_s3_bucket_versioning.s3_bucket_versioning_test["terraform-project-files-test-1"] will be created
  + resource "aws_s3_bucket_versioning" "s3_bucket_versioning_test" {
      + bucket = "terraform-project-files-test-1"
      + id     = (known after apply)

      + versioning_configuration {
          + mfa_delete = (known after apply)
          + status     = "Enabled"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

7. Run make apply. This time, it applies the plan and refreshes the state file. At this stage, it connects to AWS to verify whether there are any conflicts between your plan and the current state of your AWS infrastructure. Your bucket should now show versioning enabled, and your terminal should display something like the output below.

S3 bucket property ‘Bucket Versioning’ now set as Enabled
# This is the 'make apply' output

terraform apply .state/terraform.plan
aws_s3_bucket_versioning.s3_bucket_versioning_test["terraform-project-files-test-1"]: Creating...
aws_s3_bucket_versioning.s3_bucket_versioning_test["terraform-project-files-test-1"]: Creation complete after 4s [id=terraform-project-files-test-1]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

If you try to run make plan, guess what’s going to happen? It should indicate that there’s nothing to apply: No changes. Your infrastructure matches the configuration.

# This is the 'make plan' output after a 'make apply' run

terraform plan -var="region=us-east-1" -var-file="values.tfvars" -out=".state/terraform.plan"
aws_s3_bucket.s3_test["terraform-project-files-test-1"]: Refreshing state... [id=terraform-project-files-test-1]
aws_s3_bucket_versioning.s3_bucket_versioning_test["terraform-project-files-test-1"]: Refreshing state... [id=terraform-project-files-test-1]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

If you try to run make apply, nothing should be applied.

# This is the 'make apply' output

terraform apply .state/terraform.plan

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Summary

With the help of IaC tools like Terraform, managing cloud infrastructure on providers like AWS becomes less daunting and more flexible. Importing an existing resource is just one of the many things you can do with Terraform. I hope this article and the repo help you get started with Terraform on AWS. Feel free to drop any comments or suggestions.

Alvin Lucillo is a software engineer, writer, self-taught pianist, and a lifelong learner.