Terraform Modules — A Basic Example

william klier
7 min read · May 11, 2019


Edited August 9th 2019 to update for Terraform >= 0.12

So you were interested in Terraform. You went through the getting started guide, and you thought it was really easy to learn HCL, the HashiCorp Configuration Language. You wrote some basic resources of your own, and the first time you ran $ terraform apply you fell in love. But when you got to modules you thought, “I don’t need this,” or “I’ll learn that later.”

If this describes you, this guide will help you A) understand what modules are and how they work, B) write and use your own custom Terraform module, and C) learn a few tips and tricks along the way.

The Git repo can be found at bitbucket.org

Prior to using this guide, you should have a basic understanding of AWS and Terraform. Terraform and the AWS CLI should be installed and configured. Also, FYI, I’m working on a Mac.

Enough banter, pop open a terminal and let’s get started:

Set up and configure a Terraform project

Before we init Terraform, we’re gonna set up a project directory, and we’re gonna configure Terraform to use a state file on AWS s3. IMO this is the smart choice for any project where more than one person is editing the same Terraform code. If you have your own way and wanna skip ahead, go to “Create a Terraform Module” below. Otherwise, edit “CHANGE-THIS-BUCKET-NAME” below to the name of the bucket you will use to keep your Terraform state, and remember that name for the next step.

$ mkdir website-bucket-module
$ cd website-bucket-module
$ aws s3api create-bucket --bucket CHANGE-THIS-BUCKET-NAME --region us-east-1

In your text editor of choice, create a file named “terraform.tfvars” and edit the content below to reflect your own AWS access key ID and secret key. (This file should be ignored by Git in .gitignore.)

access_key = "BKXXXXXXXXXXXXXXXXXXXXXXB"
secret_key = "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
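Since terraform.tfvars holds your credentials, it’s worth setting up those ignore rules right away. A minimal .gitignore for this project might look something like this (the exact entries are up to you; beta.tf, which we’ll create next, also holds account-specific config, and local state plus the .terraform directory are commonly excluded too):

terraform.tfvars
beta.tf
.terraform/
*.tfstate
*.tfstate.backup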

Next we’ll create a file named “beta.tf”, which holds the config for our remote backend. Include the content below, editing it to reflect your info (this file should also be ignored by Git):

terraform {
  backend "s3" {
    bucket  = "CHANGE-THIS-BUCKET-NAME"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
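As an aside, if you’d rather not hard-code the bucket name in beta.tf, Terraform’s partial backend configuration lets you leave that argument out of the backend block and supply it at init time instead, something like:

$ terraform init -backend-config="bucket=CHANGE-THIS-BUCKET-NAME"

Either way works; for this guide we’ll keep it in the file.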

Now create a file named “alfa.tf” with the contents below. These two files make up the root Terraform module (we’ll add one more file here later), and we’ll use them to init and provision our Terraform project.

variable "access_key" {}
variable "secret_key" {}

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = "us-east-1"
}

resource "null_resource" "null_resource_provisioner" {
  provisioner "local-exec" {
    command = "echo null resources are now provisioned"
  }
}
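One optional tip: in Terraform 0.12 you can pin the AWS provider by adding a version constraint to the provider block above, so a future provider release doesn’t change behavior underneath you. The range below is just an example, pick whatever you’re actually running:

provider "aws" {
  version    = "~> 2.0" # example constraint, adjust to taste
  access_key = var.access_key
  secret_key = var.secret_key
  region     = "us-east-1"
}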

If everything is configured properly, the next step will create a terraform state file called “terraform.tfstate” and put it in the s3 bucket you made earlier. Let’s init with the config we just wrote, and then apply.

$ terraform init

If you saw green, Terraform is now initialized. Now let’s apply (at the prompt, type “yes”):

$ terraform apply

You should see a few lines of output letting you know your null resource has been provisioned. One resource was created (not an AWS resource, just a local-exec call to your shell: echo null resources are now provisioned). You can use null resources to make local-exec calls to your shell, which is super handy. Check the Terraform docs on null_resource for more info.
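As a quick aside (not needed for the rest of this guide), null_resource also accepts a triggers map, so the local-exec only re-runs when one of the trigger values changes. A minimal sketch with a made-up value:

resource "null_resource" "rerun_on_change" {
  # re-run the provisioner whenever this value changes
  triggers = {
    build_version = "1.0.0"
  }
  provisioner "local-exec" {
    command = "echo build_version changed, re-running"
  }
}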

You now have a working Terraform project up and running with an s3 backend, AWS, and null resources provisioned. Next we’ll create our module!

Create a Terraform Module

Terraform module files are just standard HCL resource declarations that take input variables and usually define output variables as well. You call them from your root TF file(s), passing in the variables they need; they create resources based on those variables and assign values to their output variables accordingly. The main thing to note here is that resources created by modules exist in the module’s namespace.

This confused me at first, but it’s easy to understand if we look at an example tfstate:

$ terraform state list
aws_s3_bucket.website_bucket
module.s3-website.aws_s3_bucket.website_bucket

As you can see, we have two s3 buckets: the first, called “website_bucket”, lives in the root module’s namespace, and the second, also called “website_bucket”, lives in the s3-website module’s namespace. These resources can have the same name because they’re in different namespaces. With this knowledge we can now write a simple Terraform module. Still in the project directory, create a subdirectory for the module code:

$ mkdir s3-website
$ cd s3-website

First, we’re gonna make a file named “variables.tf”. This is where we will declare our input variables. Add the following content:

variable "bucket_name" {
  description = "the name of the bucket"
  type        = string
}

Next, we’ll write the module itself. Name this file “main.tf”

# website bucket with versioning and policy
resource "aws_s3_bucket" "website_bucket" {
  # so we can delete it later
  force_destroy = true
  bucket        = var.bucket_name
  acl           = "public-read"

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "PUT", "POST", "HEAD"]
    allowed_origins = ["*"]
    max_age_seconds = 3000
  }

  versioning {
    enabled = true
  }

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.bucket_name}/*"
    }
  ]
}
POLICY

  website {
    index_document = "index.html"
    error_document = "index.html"
  }
}

Here we have a single resource declaration for an s3 bucket. We’re passing in our input variable for the name of the bucket, and using it again in the bucket policy’s Resource ARN. This is all pretty straightforward otherwise. We’ve enabled versioning, added a CORS rule, set the index and error docs, set the ACL to public-read, and set force_destroy to true so we can delete the bucket later.
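If you wanted to make the module more flexible later, you could turn some of these hard-coded values into optional inputs with defaults. For example, a hypothetical index_document variable (not used in the rest of this guide) would look like this in variables.tf and main.tf:

variable "index_document" {
  description = "the index document for the website"
  type        = string
  default     = "index.html"
}

# then inside the aws_s3_bucket resource in main.tf:
website {
  index_document = var.index_document
  error_document = var.index_document
}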

Finally, let’s set some output variables. Create a file named “outputs.tf” and input the following content:

output "website_endpoint" {
  value = aws_s3_bucket.website_bucket.website_endpoint
}

output "website_bucket_id" {
  value = aws_s3_bucket.website_bucket.id
}
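One small extra: output blocks can also take a description, which is nice documentation for anyone else using the module. For example:

output "website_endpoint" {
  description = "the s3 static website endpoint"
  value       = aws_s3_bucket.website_bucket.website_endpoint
}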

That’s it. You’ve created a module that will spin up s3 websites, now let’s see how to use it.

Change back into the project directory (the root module) with $ cd .. and then create one last file called “bucket.tf”. Side note: for the file names inside modules, Terraform recommends the ones we’re using (main.tf, variables.tf, outputs.tf). For the root module, the names are arbitrary: Terraform loads all the .tf files in the directory and THEN works out the dependencies between resources. Anyone interested can check HashiCorp’s style guide.

OK, so “bucket.tf”…

# making two website buckets
module "s3-website" {
  source      = "./s3-website"
  bucket_name = "delete-later-some-unique-bucket-name"
}

module "s3-website-2" {
  source      = "./s3-website"
  bucket_name = "delete-later-some-unique-bucket-name-again"
}

# making bucket objects
# you can see how we're using the module's output variables here
resource "aws_s3_bucket_object" "index-1" {
  bucket       = module.s3-website.website_bucket_id
  key          = "index.html"
  content      = "<html><head><title>it works</title></head><body><h1>Believe it or not I'm walking on air.</h1></body></html>"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "index-2" {
  bucket       = module.s3-website-2.website_bucket_id
  key          = "index.html"
  content      = "<html><head><title>this one works too</title></head><body><h1>I never thought i could feel so free-ee-ee!</h1></body></html>"
  content_type = "text/html"
}

# output website endpoints
# outputs from the module namespace
output "delete-1-endpoint" {
  value = module.s3-website.website_endpoint
}

output "delete-2-endpoint" {
  value = module.s3-website-2.website_endpoint
}

At the top of the file you’ll see where we provision two modules, both using the module we wrote as the source. This is where you pass in the value for the module’s input variable “bucket_name”.

Next we create an object for our index document (the website). Rather than uploading a file, we’re creating it with the “content” argument. You’ll notice for the bucket argument we’re using output variables from our module, “module.s3-website-2.website_bucket_id”.

Finally, we have our root output variables at the bottom. These variables, whose values come from our module, will print at the end of our next apply. Output variables declared inside the module itself won’t print after an apply unless they are re-exported as outputs in the root module like this. They’re also available after the apply by running $ terraform output.
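You can also grab a single value, e.g. $ terraform output delete-1-endpoint will print just that endpoint, which is handy in scripts.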

To run the code we first need to re-init Terraform so it picks up the new modules:

$ terraform init

You should see it initialize your modules. If so…

$ terraform apply

If everything went smoothly you will see the website endpoint output variables in green. If you pull up those URLs in your browser, you will see the two test pages we created.

Cleaning Up

To tear down everything we just built, either A) run terraform destroy, B) delete or change the extension of “bucket.tf” and run terraform apply, or C) edit bucket.tf, adding “/*” above the top line and “*/” below the bottom line, like this:

/*
# making two website buckets
module "s3-website" {
  source      = "./s3-website"
  bucket_name = "delete-later-some-unique-bucket-name"
}

module "s3-website-2" {
  source      = "./s3-website"
  bucket_name = "delete-later-some-unique-bucket-name-again"
}

# making bucket objects
# you can see how we're using the module's output variables here
resource "aws_s3_bucket_object" "index-1" {
  bucket       = module.s3-website.website_bucket_id
  key          = "index.html"
  content      = "<html><head><title>it works</title></head><body><h1>Believe it or not I'm walking on air.</h1></body></html>"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "index-2" {
  bucket       = module.s3-website-2.website_bucket_id
  key          = "index.html"
  content      = "<html><head><title>this one works too</title></head><body><h1>I never thought i could feel so free-ee-ee!</h1></body></html>"
  content_type = "text/html"
}

# output website endpoints
# outputs from the module namespace
output "delete-1-endpoint" {
  value = module.s3-website.website_endpoint
}

output "delete-2-endpoint" {
  value = module.s3-website-2.website_endpoint
}
*/

Then run: $ terraform apply and all resources will be deleted.
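If you want to preview exactly what will be removed before committing to it, $ terraform plan -destroy will show the planned deletions without applying them.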

To remove the s3 bucket you configured as the backend for your Terraform tfstate file:

$ aws s3 rb s3://CHANGE-THIS-BUCKET-NAME --force

I wrote this module and this guide in order to teach myself. Hope it helps other people as well. Cheers, Andy Klier
