Terraform Workspaces. Terraform Basics: part-4

Venkat teja Ravi · Vitwit · Apr 18, 2020 · 7 min read

Welcome back to the series of blog posts on Terraform basics. So far we have discussed how to launch an EC2 instance using Terraform, how to launch an Auto Scaling group with remote state management using Amazon S3, and how to work with Terraform workspaces. In this blog post, we are going to cover the disadvantages of the Terraform workspaces concept and a directory-based alternative.

If you are not able to follow what we are discussing here, I highly recommend going through the past blogs first.

Before we jump into the alternative to Terraform workspaces, let us discuss the disadvantages we may face while working with them. In the previous blog, we launched two instances with different instance_types, t2.micro and t3.micro. One of them belongs to the workspace stage and the other to the workspace prod. Everything works fine until we want to extend the infrastructure. Managing tons of infrastructure with Terraform workspaces becomes a hectic process, because nothing stops us from mistakenly applying a change intended for the workspace stage while we are actually in the workspace prod. When that happens, the infrastructure deployed in the prod environment gets disturbed.
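To see how easy that mistake is, remember that the only way to know which workspace you are in is to ask Terraform explicitly. terraform workspace show is the real command; the scenario below is a hypothetical illustration:

$ terraform workspace show
prod
$ terraform apply   # we assumed we were still in stage, but prod gets modified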

To solve this problem, we can follow the directory convention below:

project_directory
├── global
│   ├── iam
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── s3
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── mgmt
│   ├── bastion-host
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── jenkins
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── prod
│   ├── data-stores
│   │   ├── mysql
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
│   │   └── redis
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   ├── services
│   │   └── web-server-cluster
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
└── stage
    ├── data-stores
    │   ├── mysql
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── redis
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── services
    │   └── web-server-cluster
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── vpc
        ├── main.tf
        ├── outputs.tf
        └── variables.tf

If you look at the file structure above, the configuration files for the stage and prod environments live in separate directories, stage and prod.

We separate files not only by environment but also by infrastructure component, such as vpc.

stage holds all the Terraform configuration files that belong to the stage environment. prod holds all the configuration files that belong to the production environment. mgmt holds the configuration files for management tooling such as the bastion-host and Jenkins; these tools are used for CI/CD deployment of the whole infrastructure.

global holds the Terraform configuration files for global services like IAM and S3. The directory is named global because these resources are used across all environments.
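For instance, global/s3 is the natural home for the S3 bucket and DynamoDB table that back remote state. The original series does not show this file, so the following is only a minimal sketch using the AWS provider v2 syntax of that era; the bucket and table names match the backend blocks we configure later in this post:

# global/s3/main.tf (a sketch, not code from the original series)
provider "aws" {
  region = "us-east-2"
}

# S3 bucket that stores the terraform.tfstate files for every directory
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-up-and-running-state-4567"

  # Keep a history of state files so we can recover from mistakes
  versioning {
    enabled = true
  }

  # Encrypt state at rest
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

# DynamoDB table used by the S3 backend for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-up-and-running-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}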

vpc holds all the Terraform configuration files for networking resources such as subnets, NAT gateways, and VPC endpoints.

services holds all the Terraform configuration files for computing resources such as EC2, ELB, and more.

data-stores holds all the Terraform configuration files for database resources such as Redis and MySQL.

Notice that there are main.tf, variables.tf, and outputs.tf configuration files in each and every directory.

That means each directory has its own terraform.tfstate file. These separate state files can also be stored in a remote backend such as Amazon S3.
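This layout also lets one directory read another directory's outputs through the terraform_remote_state data source. A minimal sketch, assuming the web-server cluster wants to read outputs published by stage/vpc; the key follows the same path convention we use later, and the subnet_ids output name is hypothetical:

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "terraform-up-and-running-state-4567"
    key    = "terraform_workspaces/stage/vpc/terraform.tfstate"
    region = "us-east-2"
  }
}

# Reference the other directory's outputs elsewhere, e.g.:
# subnet_ids = data.terraform_remote_state.vpc.outputs.subnet_ids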

Enough theory, let us get into the practicals. We are going to write the Terraform configuration for a cluster of web servers using an Auto Scaling group and an ELB. We did the same task in one of the previous blog posts; the only extra thing we are going to do here is create the same infrastructure in multiple environments with different instance_types.

Create the directories as per the above-mentioned file structure.

Write the following code in stage/services/web-server-cluster/main.tf:

provider "aws" {
region = "us-east-2"
profile = "Administrator"
}
data "aws_availability_zones" "all" {}resource "aws_security_group" "instance"{
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}resource "aws_launch_configuration" "example"{
image_id = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
security_groups = [aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "example"{
launch_configuration = aws_launch_configuration.example.id
availability_zones = data.aws_availability_zones.all.names
load_balancers = [aws_elb.example.name]
health_check_type = "ELB"
min_size = 2
max_size = 10
tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}resource "aws_security_group" "elb_sg" {
name = "terraform-example-elb-sg"
#Allow all outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
# Inbound HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "example" {
name = "terraform-asg-elb-example"
availability_zones = data.aws_availability_zones.all.names
security_groups = [aws_security_group.elb_sg.id]
# This adds a listener for incoming HTTP requests.
health_check {
target = "HTTP:${var.server_port}/"
interval = 30
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
listener {
lb_port = 80
lb_protocol = "http"
instance_port = var.server_port
instance_protocol = "http"
}
}
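main.tf references var.server_port, so the variables.tf in the same directory needs to declare it, and outputs.tf can expose the ELB's DNS name. A minimal sketch; the 8080 default and the output name are assumptions, not code from the original series:

# stage/services/web-server-cluster/variables.tf
variable "server_port" {
  description = "The port the web server uses for HTTP requests"
  type        = number
  default     = 8080
}

# stage/services/web-server-cluster/outputs.tf
output "elb_dns_name" {
  description = "The DNS name of the ELB"
  value       = aws_elb.example.dns_name
}

The same two files would sit next to the prod/services/web-server-cluster/main.tf shown later.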

Run the command terraform init to initialize the provider, and then run the command terraform plan.

We forgot the Terraform backend configuration. Add the following code to stage/services/web-server-cluster/main.tf:

terraform {
  backend "s3" {
    bucket = "terraform-up-and-running-state-4567"
    key    = "terraform_workspaces/stage/services/web-server-cluster/terraform.tfstate"
    region = "us-east-2"

    # DynamoDB table for state locking
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

Notice the argument key = "terraform_workspaces/stage/services/web-server-cluster/terraform.tfstate". The terraform.tfstate file that belongs to stage gets its own path in S3.

Run terraform init again to set up the S3 remote backend.

Run the command terraform apply and then check the state file in the S3 remote backend. Remember that we are working in the stage environment, so the instance_type in this launch_configuration will be t2.micro.

Check the AWS Console for the change that took place in S3.
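If you prefer the CLI to the console, you can list the state object directly, assuming the AWS CLI is configured with the same credentials:

$ aws s3 ls s3://terraform-up-and-running-state-4567/terraform_workspaces/stage/services/web-server-cluster/
# should list terraform.tfstate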

As you can see, we have created a separate terraform.tfstate file for the stage environment.

Now go to prod/services/web-server-cluster/main.tf and write the following code:

provider "aws" {
region = "us-east-2"
profile = "Administrator"
}
data "aws_availability_zones" "all" {}resource "aws_security_group" "instance"{
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}resource "aws_launch_configuration" "example"{
image_id = "ami-0c55b159cbfafe1f0"
instance_type = "t3.micro"
security_groups = [aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "example"{
launch_configuration = aws_launch_configuration.example.id
availability_zones = data.aws_availability_zones.all.names
load_balancers = [aws_elb.example.name]
health_check_type = "ELB"
min_size = 2
max_size = 10
tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}resource "aws_security_group" "elb_sg" {
name = "terraform-example-elb-sg"
#Allow all outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
# Inbound HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "example" {
name = "terraform-asg-elb-example"
availability_zones = data.aws_availability_zones.all.names
security_groups = [aws_security_group.elb_sg.id]
# This adds a listener for incoming HTTP requests.
health_check {
target = "HTTP:${var.server_port}/"
interval = 30
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
listener {
lb_port = 80
lb_protocol = "http"
instance_port = var.server_port
instance_protocol = "http"
}
}

To add the terraform.tfstate file for the prod environment in S3, add the following code:

terraform {
  backend "s3" {
    bucket = "terraform-up-and-running-state-4567"
    key    = "terraform_workspaces/prod/services/web-server-cluster/terraform.tfstate"
    region = "us-east-2"

    # DynamoDB table for state locking
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

Notice the argument key = "terraform_workspaces/prod/services/web-server-cluster/terraform.tfstate". The terraform.tfstate file that belongs to the production environment gets its own path in S3.

Run terraform init again to set up the S3 remote backend.

Run the command terraform apply and then check the state file in the S3 remote backend. Remember that we are working in the prod environment, so the instance_type in this launch_configuration will be t3.micro.

Check the AWS Console for the change that took place in S3.

As you can see, we have created a separate terraform.tfstate file for the prod environment.

Terraform commands run in one directory will not affect the configuration in other directories. For example, if I want to delete the infrastructure in the stage environment, the infrastructure in the prod environment will not be affected.
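A sketch of that teardown, run only if you actually want to destroy stage:

$ cd stage/services/web-server-cluster
$ terraform destroy   # operates only on the state under .../stage/..., prod is untouched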

Conclusion:

So far we have discussed how to separate environments using dedicated directories. By doing this, we also get a separate terraform.tfstate file for each environment.

If you need help with Terraform, DevOps practices, or AWS at your company, feel free to reach out to us at Vitwit.
