26 Terraform Hacks for Effective Infrastructure Automation (With Examples)
A checklist for Cloud Engineers to live by
Terraform has emerged as a powerful tool for automating the provisioning and management of resources across various cloud providers. While many users start with the basics, there are numerous advanced techniques and hacks that can elevate your Terraform expertise to new heights.
In this article, we’ll explore 26 advanced Terraform hacks and strategies, complete with code snippets and real-world examples, to help you optimize your infrastructure provisioning process, improve efficiency, and reduce complexity.
1 — Utilize Terraform Modules for Reusability
One of the fundamental principles of Terraform is reusability. Creating custom modules that encapsulate resource configurations allows you to reuse code and simplify your infrastructure definitions. Let’s see an example of how to create a custom module for an AWS VPC:
# main.tf
module "my_vpc" {
source = "./modules/vpc"
cidr_block = "10.0.0.0/16"
region = "us-east-1"
}
# modules/vpc/main.tf
resource "aws_vpc" "main" {
cidr_block = var.cidr_block
tags = {
Name = "MyVPC"
}
}
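For the call above to work, the child module must declare the variables it receives and, typically, expose outputs for the caller. A minimal sketch (the vpc_id output name is illustrative):
# modules/vpc/variables.tf
variable "cidr_block" {
  type = string
}

variable "region" {
  type    = string
  default = "us-east-1"
}

# modules/vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id
}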
2 — Leverage Terraform Workspaces
Workspaces allow you to manage multiple environments (e.g., dev, staging, production) with the same Terraform codebase. This is particularly useful when you need to deploy similar infrastructure with slight variations. Create a new workspace with terraform workspace new <name> and switch between workspaces with terraform workspace select <name>.
$ terraform workspace new staging
$ terraform workspace select staging
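Inside your configuration, the current workspace name is available as terraform.workspace, which makes per-environment variation straightforward. A small sketch (the AMI and sizing values are illustrative):
resource "aws_instance" "app" {
  count         = terraform.workspace == "production" ? 3 : 1
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = terraform.workspace == "production" ? "t3.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace
  }
}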
3 — Use Terraform Data Sources for Interoperability
Data sources enable you to import information about existing resources into your configuration. This can be helpful when you want to reference attributes from existing resources, such as an AWS AMI ID, without creating them anew.
data "aws_ami" "example" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["my-ami-*"]
}
}
resource "aws_instance" "example" {
ami = data.aws_ami.example.id
instance_type = "t2.micro"
# Other instance configuration...
}
4 — Manage Remote State with Terraform Backends
By default, Terraform stores state locally in a terraform.tfstate file. However, this becomes impractical in collaborative environments. Remote backends like Amazon S3 or HashiCorp Consul let teams store, lock, and share state files securely.
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "terraform.tfstate"
region = "us-east-1"
}
}
NOTE: Always consider using remote backends with state locking, versioning, and encryption for improved collaboration and data protection.
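With the S3 backend, for example, that can look like the following sketch (the DynamoDB lock table name is illustrative and the table must already exist; versioning is enabled on the bucket itself):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                   # server-side encryption of the state object
    dynamodb_table = "terraform-state-lock" # enables state locking
  }
}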
5 — Utilize Terraform Remote State Data Source
To reference outputs from another Terraform state, use the terraform_remote_state data source. This is particularly useful when you want to read information from another infrastructure module or project.
data "terraform_remote_state" "other_module" {
backend = "s3"
config = {
bucket = "other-module-state"
key = "terraform.tfstate"
region = "us-east-1"
}
}
resource "aws_instance" "example" {
# Instance configuration...
subnet_id = data.terraform_remote_state.other_module.outputs.subnet_id
}
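This assumes the other configuration exposes the value as an output. Something like the following must exist in the code that writes other-module-state (the subnet resource name is illustrative):
output "subnet_id" {
  value = aws_subnet.main.id
}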
6 — Work with Terraform Import
Terraform import allows you to import existing infrastructure resources into your Terraform state. This is useful when you are migrating from manual setups to Terraform-managed infrastructure.
$ terraform import aws_instance.example i-1234567890abcdef0
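A matching resource "aws_instance" "example" block must already exist in your configuration before running the command. On Terraform 1.5 and later you can also declare the import in configuration instead of running the CLI command; a sketch:
import {
  to = aws_instance.example
  id = "i-1234567890abcdef0"
}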
7 — Implement Resource Dependencies
Resource dependencies ensure the correct order of resource creation. Declaring dependencies explicitly with depends_on guarantees that a resource is created only after the resources it depends on.
resource "aws_security_group" "web" {
# Security group configuration...
}
resource "aws_instance" "web_server" {
# Instance configuration...
depends_on = [aws_security_group.web]
}
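In many cases an implicit dependency is enough: referencing an attribute of the other resource creates the same ordering without depends_on (the AMI ID below is illustrative):
resource "aws_instance" "web_server_implicit" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Referencing the security group's ID makes Terraform create it first
  vpc_security_group_ids = [aws_security_group.web.id]
}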
8 — Use Dynamic Blocks for Resource Reusability
Dynamic blocks allow you to create multiple nested blocks dynamically. This is especially useful when configuring multiple rules within a single resource.
resource "aws_security_group" "web" {
# Security group configuration...
dynamic "ingress" {
for_each = var.ports
content {
from_port = ingress.value
to_port = ingress.value
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
}
# Variables
variable "ports" {
type = list(number)
default = [80, 443, 22]
}
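With the default ports list, the dynamic block above expands to the equivalent of one static ingress block per port; a sketch of the expanded form for the first port:
resource "aws_security_group" "web_static" {
  # Security group configuration...

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # ...plus matching blocks for 443 and 22
}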
9 — Employ Terraform Provisioners
Provisioners execute scripts on resources after they are created. Use them sparingly and prefer configuration management tools for complex tasks. An example of using a provisioner to install software on an AWS EC2 instance:
resource "aws_instance" "example" {
# Instance configuration...
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install -y nginx",
]
}
}
10 — Implement Terraform Count and For-Each
Terraform provides two ways to create multiple instances of the same resource: count and for_each. Use count when you need a fixed number of identical copies, and for_each when each instance is keyed by an element of a map or set.
# Using count
resource "aws_instance" "example" {
count = 3
# Instance configuration...
}
# Using for_each
resource "aws_instance" "example" {
for_each = var.instance_names
# Instance configuration...
}
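for_each expects a map or a set of strings, and each.key / each.value are available inside the resource. A minimal sketch of the variable the example assumes (the names are illustrative):
variable "instance_names" {
  type    = set(string)
  default = ["api", "worker", "scheduler"]
}

# Inside the for_each resource, each.value holds the current name, e.g.:
#   tags = { Name = each.value }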
11 — Utilize Terraform Sentinel Policies
Sentinel is HashiCorp's policy-as-code framework, available with Terraform Cloud and Terraform Enterprise, that helps enforce compliance, security, and governance policies. It allows you to prevent certain actions, enforce naming conventions, or perform other custom validations.
# Sentinel policy: ensure all AWS instances have tags
import "tfplan/v1"

main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      r.applied.tags is not null
    }
  }
}
12 — Make use of Terraform Graph and Plan Visualization
Terraform graph visualizes the resource dependencies in your infrastructure. This helps you understand the order of resource creation and identify potential issues. To generate and view the graph, run:
$ terraform graph | dot -Tpng > graph.png
Additionally, you can inspect plans in machine-readable form with terraform show -json and feed the output to external tools like terraform-visual for a better understanding of proposed changes.
13 — Utilize Dynamic Provider Configuration
In certain scenarios, you might need to use different cloud providers based on the environment or other factors. Terraform allows dynamic provider configurations to achieve this flexibility.
provider "aws" {
region = var.aws_region
}
provider "google" {
project = var.gcp_project_id
region = var.gcp_region
}
resource "aws_instance" "example" {
# AWS instance configuration...
}
resource "google_compute_instance" "example" {
# Google Cloud instance configuration...
}
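The provider blocks above assume these input variables are declared elsewhere in the configuration; a minimal sketch with illustrative defaults:
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "gcp_project_id" {
  type = string
}

variable "gcp_region" {
  type    = string
  default = "us-central1"
}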
14 — Use Custom Providers
Write your own custom Terraform provider, or pull in third-party providers, to manage resources that the official providers don't yet support. This lets you extend Terraform's capabilities and integrate with other APIs or services. For advanced provider development, refer to the official Terraform Plugin SDK documentation: https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
custom_provider = {
source = "example.com/myorg/customprovider" # full source address: hostname/namespace/type
version = "1.0.0"
}
}
}
resource "custom_provider_resource" "example" {
# Configuration for the custom provider's resource...
}
15 — Use Resource Overrides with Resource Targeting
Resource targeting lets you apply changes to a specific resource address without touching the rest of the configuration, and you can combine it with -var to adjust an input variable for just that run. Use it sparingly, as targeted applies can leave the rest of your configuration unapplied.
# Apply changes only to a specific resource address
terraform apply -target=aws_instance.example
# Also override an input variable for the targeted run
# (assumes the configuration declares variable "instance_type")
terraform apply -target=aws_instance.example -var="instance_type=t2.large"
16 — Implement Terraform Testing with Terratest
Terratest is a testing framework that allows you to write automated tests for your Terraform code. With Terratest, you can validate your infrastructure deployments and ensure they meet the desired state.
// main_test.go
package test
import (
"testing"
"github.com/gruntwork-io/terratest/modules/aws"
"github.com/gruntwork-io/terratest/modules/terraform"
"github.com/stretchr/testify/assert"
)
func TestTerraformExample(t *testing.T) {
t.Parallel()
terraformOptions := &terraform.Options{
// Set the path to the Terraform code that will be tested.
TerraformDir: "../examples/basic",
// Variables to pass to our Terraform code using -var options
Vars: map[string]interface{}{
"instance_type": "t2.micro",
"ami_id": "ami-0c55b159cbfafe1f0",
},
// Variables to pass to our Terraform code using TF_VAR_xxx environment variables
EnvVars: map[string]string{
"AWS_DEFAULT_REGION": "us-west-2",
},
}
// Clean up resources after test finishes
defer terraform.Destroy(t, terraformOptions)
// Deploy the infrastructure
terraform.InitAndApply(t, terraformOptions)
// Read outputs from the Terraform state
instanceID := terraform.Output(t, terraformOptions, "instance_id")
// Check that the instance exists and was assigned a public IP
publicIP := aws.GetPublicIpOfEc2Instance(t, instanceID, "us-west-2")
assert.NotEmpty(t, publicIP, "Instance has no public IP.")
}
17 — Manage Complex Configurations with HCL Functions
Harness the power of HCL functions to manage complex configurations. Functions like file(), jsondecode(), and yamldecode() allow you to read and process external files and data structures directly within your Terraform code.
variable "config_file" {
type = string
default = "config.json"
}
locals {
config = jsondecode(file(var.config_file))
}
resource "aws_instance" "example" {
instance_type = local.config.instance_type
# other instance configuration...
}
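The example assumes config.json defines an instance_type key. The same pattern works for YAML via yamldecode(); a sketch (config.yaml and its keys are hypothetical):
locals {
  yaml_config = yamldecode(file("${path.module}/config.yaml"))
}

resource "aws_instance" "from_yaml" {
  instance_type = local.yaml_config.instance_type
  # Other instance configuration...
}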
18 — Implement Dynamic Required Variables
Make nested blocks optional by deriving a local value from a boolean input variable: when the flag is false, the local is an empty map, so the dynamic block produces nothing and is omitted entirely.
variable "enable_extra_volume" {
  type    = bool
  default = false
}

variable "extra_volume_size" {
  type    = number
  default = 100
}

locals {
  # Empty map when disabled, so the dynamic block below is skipped entirely
  extra_volume = var.enable_extra_volume ? { main = var.extra_volume_size } : {}
}

resource "aws_instance" "example" {
  # Instance configuration...

  dynamic "ebs_block_device" {
    for_each = local.extra_volume
    content {
      device_name = "/dev/xvdb"
      volume_size = ebs_block_device.value
      encrypted   = true
    }
  }
}
19 — Use Terraform Interpolation and Dynamic Blocks
Use conditional expressions with the count meta-argument to include or exclude resources based on variable values.
variable "create_resources" {
type = bool
default = true
}
resource "aws_instance" "example" {
count = var.create_resources ? 2 : 0
ami = "ami-0c55b159cbfafe1f0"
# Instance configuration...
}
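When a resource is conditional via count, reference it elsewhere with a splat expression (or the one() function) so the expression stays valid even when zero instances exist:
output "instance_ids" {
  value = aws_instance.example[*].id
}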
20 — Use Terraform Sentinel Mocking
When writing Sentinel policies, you can supply mock data to the Sentinel CLI to test and simulate policy checks before enforcing them against real infrastructure. Mocks live in a test configuration rather than inline in the policy; a sketch of a test case (the directory and file names are illustrative, and the mock data file can be downloaded from a Terraform Cloud run):
# test/enforce-instance-tags/pass.hcl
mock "tfplan/v1" {
  module {
    source = "mock-tfplan-pass.sentinel"
  }
}

test {
  rules = {
    main = true
  }
}
Run the checks with sentinel test.
21 — Use External Data Sources
External data sources let you pull in data that Terraform does not manage, such as the result of an API call or script. The program you run must print a single JSON object whose values are strings, and those keys become available under the data source's result attribute.
data "external" "example" {
program = ["bash", "-c", "curl https://example.com/api/data"]
}
resource "aws_instance" "example" {
ami = data.external.example.result.ami_id
# Instance configuration...
}
22 — Define Terraform CLI Aliases
Terraform itself does not support command aliases, so define shell aliases for long and repetitive commands instead. This is especially useful for workspace-related tasks.
# ~/.bashrc or ~/.zshrc
alias tf='terraform'
alias tfp='terraform plan'
alias tfa='terraform apply'
alias tfw='terraform workspace'
alias tfws='terraform workspace select'
23 — Use Terraform Local Exec Provisioner with Environment Variables
Combine the local-exec provisioner with environment variables to run commands on your local machine after a resource is created; the variables you set are available to the command.
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
# Instance configuration...
provisioner "local-exec" {
command = "echo 'Instance IP: ${self.public_ip}' >> instance_ips.txt"
environment = {
TF_VAR_instance_ips_file = "instance_ips.txt"
}
}
}
24 — Customize Terraform Error Messages
Use validation blocks on input variables (Terraform 0.13+) to surface custom error messages when a supplied value does not meet your conditions; for resource-level checks, precondition and postcondition blocks (Terraform 1.2+) work the same way.
variable "num_instances" {
  type    = number
  default = 3

  validation {
    condition     = var.num_instances > 0 && var.num_instances <= 5
    error_message = "num_instances must be between 1 and 5."
  }
}

resource "aws_instance" "example" {
  count         = var.num_instances
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  # Other instance configuration...
}
25 — Configure Terraform Providers with Aliases
Assign aliases to provider blocks in your configuration to use multiple configurations of the same provider (for example, different AWS regions) side by side without conflicts.
provider "aws" {
alias = "primary"
region = "us-west-1"
}
provider "aws" {
alias = "secondary"
region = "us-east-1"
}
resource "aws_instance" "primary" {
provider = aws.primary
# Instance configuration...
}
resource "aws_instance" "secondary" {
provider = aws.secondary
# Instance configuration...
}
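Aliased provider configurations can also be passed into child modules explicitly through the providers meta-argument (the module path is illustrative):
module "dr_site" {
  source = "./modules/site"

  providers = {
    aws = aws.secondary
  }
}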
26 — Parametrize Resource Names
Parameterizing resource names with Terraform is a valuable technique that allows you to dynamically create multiple resources with meaningful and customizable names. This strategy makes your infrastructure code more flexible, maintainable, and easy to understand.
# Define variable (e.g. environment)
variable "environment" {
type = string
default = "dev"
}
# Create the resource (e.g. S3 Bucket)
resource "aws_s3_bucket" "example" {
bucket = "my-bucket-${var.environment}"
acl = "private"
# S3 bucket configuration...
}
# Override the variable at apply time for each environment
$ terraform apply -var "environment=dev"
$ terraform apply -var "environment=staging"
$ terraform apply -var "environment=production"
Conclusion
By incorporating these advanced Terraform hacks and strategies into your infrastructure-as-code toolkit, you can elevate your cloud provisioning and management capabilities to a whole new level. Terraform offers a plethora of features to optimize, automate, and secure your infrastructure, and with continuous learning and practice, you can become a true Terraform master.
Stay tuned for the next Cloud DevOps tip. Until then, happy hacking!