Working with Terraform Workspaces. Terraform Basics: Part 3

Venkat teja Ravi · Vitwit · Apr 11, 2020 · 6 min read

Source: HashiCorp

In the previous blog posts, we saw how to launch an EC2 instance, deployed a cluster of web servers using an Auto Scaling group and an Elastic Load Balancer, and looked at remote state management using S3.

In this blog post, we will discuss how to work with the same Terraform configuration file across multiple environments, such as stage and production.

Suppose you are working on a Terraform file and have already deployed the configured infrastructure. Now you want to test small tweaks to the same configuration, but you do not want to disturb the terraform.tfstate file of the deployed infrastructure, which lives in an AWS S3 bucket (read about how to store the Terraform state file in an S3 bucket).

To solve this problem, Terraform provides a feature called workspaces.

Workspaces let us create environment-specific terraform.tfstate files, so stage will have one state file and prod will have another.

To get a better idea of this, let us create an EC2 instance. Write the following code in the file main.tf:

provider "aws" {
region = "us-east-2"
}resource "aws_instance" "bastion_host" {
ami_id = ""
instance_type = "t3.micro"
}

Run the command terraform init to initialize the AWS provider, and then run terraform apply to create the aws_instance resource.
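For reference, these are the commands, run from the directory that contains main.tf:

terraform init     # download the AWS provider plugin
terraform apply    # create the aws_instance resource defined above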

Add the below code to main.tf to store the terraform.tfstate file in an AWS S3 bucket.

terraform {
  backend "s3" {
    # Replace this with your bucket name
    bucket = "terraform-up-and-running-state-4567"
    key    = "global/default-workspace/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

Here bucket is the name of the AWS S3 bucket and key is the path within the bucket where the terraform.tfstate file will be stored.

Run the command terraform init again to initialize the new backend.

After that, run terraform apply to confirm that the existing EC2 instance with instance_type = "t3.micro" is now tracked by the S3 backend.

The output is as follows:

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.57"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

As you can see, the state file terraform.tfstate is now in the S3 bucket. But I want to change the instance_type from "t3.micro" to "t2.micro" without disturbing the deployed EC2 instance. In other words, we want to work in a stage environment while leaving the production environment alone. In Git we can do this with branches; Terraform has a similar feature called workspaces.

Our main goal is to create multiple workspaces. First, check which workspace you are currently in by running the command terraform workspace show. The output will be default.

This tells us that we are working in the default workspace. Now I want to test the same configuration with instance_type = "t2.micro" without touching the deployed instance.

To do so, create one more workspace, say stage, by running the command terraform workspace new stage.

Output is as follows:

Created and switched to workspace "stage"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

This output says that a new workspace named stage has been created and that we have switched to it.
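To double-check where you are, you can list all workspaces; the asterisk marks the current one (the exact output may vary slightly):

$ terraform workspace list
  default
* stage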

Now go to main.tf and replace instance_type = "t3.micro" with instance_type = "t2.micro". Then run the command terraform plan to see the output.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
.........
............

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Notice the line Plan: 1 to add, 0 to change, 0 to destroy. It means Terraform wants to create another aws_instance with instance_type = "t2.micro". If we were still in the default workspace this would not be the case: Terraform would replace the existing instance with the new configuration. But because we are in a different workspace, with its own empty state, Terraform creates a brand new instance instead.

Now run the command terraform apply, then check the AWS console for the instances. You will see two instances up and running with different configurations.
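You can see the same isolation from the command line: each workspace tracks only its own copy of the resource. A rough illustration (your output may differ):

$ terraform workspace select default
$ terraform state list
aws_instance.bastion_host

$ terraform workspace select stage
$ terraform state list
aws_instance.bastion_host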

Now check the AWS S3 bucket for the terraform.tfstate file.

You can see that there is a new directory called env:. Open the directory and check for the terraform.tfstate file of the stage workspace.

Have a look at the path env:/stage/<key>, which in this example is env:/stage/global/default-workspace/terraform.tfstate.
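For comparison, the default workspace's state stays at the key configured in the backend block, while each named workspace is stored under the env:/ prefix (the S3 backend's default workspace_key_prefix). The bucket from this example would look roughly like this:

global/default-workspace/terraform.tfstate              # default workspace
env:/stage/global/default-workspace/terraform.tfstate   # stage workspace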

Run terraform workspace --help to see all of its subcommands.
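For quick reference, these are the ones you will use most often:

terraform workspace list          # list all workspaces
terraform workspace show          # print the current workspace
terraform workspace new NAME      # create a new workspace and switch to it
terraform workspace select NAME   # switch to an existing workspace
terraform workspace delete NAME   # delete an empty, non-current workspace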

Now, if you want to clean up the infrastructure in the stage workspace, run the command terraform destroy. Before doing so, verify that you are in the right workspace by running terraform workspace show.

After running the command, check the console.

As you can see, the instance with instance_type = "t2.micro", which is in the stage workspace, has been terminated.

The instance with instance_type = "t3.micro", which is in the default workspace, is still up and running.
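If you no longer need the stage workspace at all, you can also switch back to default and delete it. This is an optional cleanup step, not part of the walkthrough above; a workspace can only be deleted when it is empty and not currently selected:

terraform workspace select default   # move off the stage workspace
terraform workspace delete stage     # remove the now-empty workspace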

So this is how you work with different workspaces. But what if the configuration is large, covering a huge infrastructure? Editing the same code by hand every time you switch environments does not scale, so this method is not recommended for bigger setups; one way to cut down on the hand-editing is sketched below.
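For example, instead of editing instance_type by hand, you can derive it from the built-in terraform.workspace value. This is only a rough sketch on top of the example above, and the variable name instance_types is made up for illustration; it is not part of the original walkthrough:

variable "instance_types" {
  description = "Instance type to use per workspace"
  type        = map(string)
  default = {
    default = "t3.micro"
    stage   = "t2.micro"
  }
}

resource "aws_instance" "bastion_host" {
  # Replace with a valid AMI ID for your region
  ami           = ""
  instance_type = lookup(var.instance_types, terraform.workspace, "t3.micro")
}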

A cleaner way to handle this is to maintain a separate directory per environment. In the next blog post, we will discuss it.

So far in this blog post, we have discussed how to work with the same configuration files across multiple environments using Terraform workspaces. In the upcoming blog post, we will discuss how to address the disadvantages of this method.

If you need help with Terraform, DevOps practices, or AWS at your company, feel free to reach out to us at Vitwit.

Venkat teja Ravi
Vitwit

Software Engineer at Vitwit Technologies, a technology company helping businesses to transform, automate and scale with AI, Blockchain and Cloud computing.