Back To The Future With Terraform!

It’s never too late to retrofit Terraform into your infrastructure

Edoardo Nosotti
Jun 1 · 8 min read
Photo by Nenad Milosevic on Unsplash

Terraform came late into my life, but I quickly fell in love with it. It was 2018 already and I was still using "native", proprietary IaC tools to deploy and maintain my infrastructures. All of a sudden I needed to port a project to a multi-cloud environment, and the old tools did not apply anymore. That's when I met Terraform, wrote my first configuration and never looked back. Indeed, I decided to port all of my existing infrastructure projects to Terraform. Terraform is very easy to get started with and has a gentle learning curve if you are starting a new project from scratch. Yet, it took me a while to become confident enough to mess with my running workloads and make the switch without breaking either them or my own nerves. So I am glad to share a few tips to help you on your journey to the wonderful land of HashiCorp.

The good news is that you will not break your existing infrastructure with a Terraform import, unless you recklessly apply stuff without reviewing the proposed changes first. Stick to the plan and you will be fine!

On “state” and “import”

The state in Terraform is somewhat like "version control for infrastructures". Whenever you create or update resources with Terraform, it writes their current "state" into a state file. If you make changes to your configuration and apply them, Terraform compares the actual status of the resources against the state file to determine the incremental changes it needs to make to reflect the new configuration. In particular, Terraform needs a resource to be tracked in the state to be aware of its existence and to match the actual resource to its representation in the configuration. The state file also allows Terraform to detect and revert accidental changes made to the infrastructure (usually outside of Terraform itself).
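For reference, this is a trimmed sketch of what a (version 4) state file looks like; the resource and attribute values here are illustrative:

```json
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "incredible_bucket",
      "provider": "provider.aws",
      "instances": [
        {
          "attributes": {
            "bucket": "my-incredible-bucket",
            "acl": "private"
          }
        }
      ]
    }
  ]
}
```

You should rarely (if ever) touch this file by hand, but knowing its shape helps when reading diffs of it in a versioned backend.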

When you import a resource you tell Terraform to "map this existing cloud resource to this scripted resource". Terraform will pull information from the existing resource, save its current configuration into the state and mark it with the name of the scripted resource.
The next time you run $ terraform apply, it will apply any change you have made in the configuration to the existing resource.

State files are stored locally by default, but unless you are just practicing with the tool or creating resources for one-off tasks, storing state files remotely is the best practice for any scenario. Terraform supports backends to promote centralized "state version control" and collaboration. A positive side-effect of using backends based on cloud storage, such as s3 or gcs, is that you get built-in versioning support, which really comes in handy when you are working with complex import jobs. Terraform also offers the state rm command to clean up borked imports, and as a very last resort state files can be edited manually. It's just JSON after all, but I would not recommend that until you are familiar and confident with Terraform.
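As a sketch, a remote backend based on s3 is configured like this; the bucket name and key are hypothetical, and the bucket must already exist (ideally with versioning enabled) before you run $ terraform init:

```hcl
terraform {
  backend "s3" {
    # hypothetical, pre-existing bucket dedicated to state files
    bucket = "my-terraform-states"
    # path of this project's state file inside the bucket
    key    = "my-project/terraform.tfstate"
    region = "eu-west-1"
  }
}
```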
All of this is to say that the Terraform import is a safe enough and fault-tolerant process.

Hands-on: basic imports

Imagine that we have an AWS S3 bucket named my-incredible-bucket and we write a new Terraform configuration file containing this code:

provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "incredible_bucket" {
  bucket = "my-incredible-bucket"
  acl    = "private"
}

If we run $ terraform apply on this configuration straight away and confirm ("yes") the action, it will fail, because an S3 bucket named my-incredible-bucket already exists and Terraform does not yet know that it is the same bucket it is supposed to handle. Or, if you apply a new configuration featuring a resource not subject to unique naming constraints, it will create a new "copy".

To import the existing bucket into the state, we can run:
$ terraform import aws_s3_bucket.incredible_bucket my-incredible-bucket

The syntax of the command above is:

  • terraform import (the import command)
  • {resource_type}.{tf_resource_name} (the Terraform resource type and the resource name used in the configuration, joined by a dot)
  • {cloud_resource_name} (the actual resource name or ID on the cloud)

The my-incredible-bucket S3 bucket should now be bound to the resource "aws_s3_bucket" "incredible_bucket" block in the Terraform configuration.
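You can double-check the binding by printing the imported attributes from the state:

```
$ terraform state show aws_s3_bucket.incredible_bucket
```

If the import worked, this outputs the current attributes of the bucket as Terraform recorded them.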

Now, if we run $ terraform apply again, you would probably expect it to tell you that the infrastructure is already “up-to-date”.
Spoiler: most likely it would not.

It looked easy on TV…

Oftentimes, when you import a resource into a Terraform configuration and run $ terraform plan or $ terraform apply, it will show some changes to apply (or even lots of them!), changes that you DON'T want to happen at this time. This happens because your Terraform configuration does not reflect the exact configuration of the existing resource. Hold your panic: fixing this is easy, it just takes time and patience.

Here’s some Terraform output from my test:

A piece of the configuration is missing

First, Terraform noticed my bucket did not have a "canned ACL", so it suggests adding a safe private ACL (note the + acl directive). This is indeed a good point, so I will let Terraform make this change at the next apply.

Second, Terraform wants to remove a lifecycle directive I had set on the bucket to flush objects (files) older than 1 day. I forgot to add the lifecycle_rule directive to the Terraform configuration, so Terraform is trying to change my actual bucket to match the configuration, which does not have such a rule.
I don't want this, so I need to update my Terraform configuration to match the current, effective state of the bucket. The Terraform output comes in handy: I just have to copy the lifecycle_rule { ... } block from the output, paste it into my configuration file and clean it up:

  • remove the - (minus + white space) chars at the beginning of each line
  • remove the -> null chars at the end of each line
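For example, a hypothetical fragment of such plan output:

```
  - lifecycle_rule {
      - enabled = true                                   -> null
      - id      = "my_incredible_bucket_bucket_exp_rule" -> null
    }
```

becomes, after the cleanup, a block ready to paste into the resource:

```
lifecycle_rule {
  enabled = true
  id      = "my_incredible_bucket_bucket_exp_rule"
}
```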

So if I change the Terraform configuration as follows:

provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "incredible_bucket" {
  bucket = "my-incredible-bucket"
  acl    = "private"

  lifecycle_rule {
    abort_incomplete_multipart_upload_days = 0
    enabled                                = true
    id                                     = "my_incredible_bucket_bucket_exp_rule"
    tags                                   = {}

    expiration {
      days                         = 1
      expired_object_delete_marker = false
    }
  }
}

then run $ terraform apply again, it will just show a change in the ACL:

Looks good now

If I also aligned the acl configuration with the actual state of the resource, the $ terraform apply output would instead be:

Mission accomplished

Level-up: advanced imports

Being a powerful tool, Terraform offers count, for_each, for expressions and… splats to further automate and streamline the provisioning of resources. These come in particularly handy for subnets, firewall rules and multi-zone (or multi-region) deployments.

Consider the following Terraform configuration:

provider "aws" {
  region = "eu-west-1"
}

variable "subnets" {
  type    = number
  default = 2
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public_subnets" {
  count      = var.subnets
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.${count.index + 1}.0/24"
}

it iterates a number of times defined by a variable and creates subnets in a VPC. How do you import existing subnets into a resource that uses count?
It is as simple as adding an index to the imported resource name:

$ terraform import aws_subnet.public_subnets[0] {SUBNET_1_ID}
$ terraform import aws_subnet.public_subnets[1] {SUBNET_2_ID}

Please note the [0]…[1] sequence above. Just adding an incremental index will let you import existing resources into counted resources (depending on your shell, you may need to quote the resource address so the square brackets are not expanded). Caveat: the order matters. Should you apply the configuration above from scratch, Terraform would create the subnets in this order:

[0] => cidr_block = "10.0.1.0/24"
[1] => cidr_block = "10.0.2.0/24"

meaning that you have to import the existing resources following the same logic. Otherwise, Terraform will detect “changes” to be applied to the existing resources, messing up your infrastructure (some resources are also immutable, meaning that they need to be destroyed and replaced with new instances if their configuration is altered).
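Once imported, counted resources are also easy to reference in bulk with a splat expression. For instance, an output collecting the IDs of all the subnets created above:

```hcl
output "public_subnet_ids" {
  # [*] expands to the list of all instances created by count
  value = aws_subnet.public_subnets[*].id
}
```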

Imagine that you are automating the management of Projects on the Google Cloud Platform. With this provider, you have to explicitly enable the APIs (services) you want to use. You will often have a list of services to enable, and writing a google_project_service resource for each individual service is far from clean, DRY code. It is also hardly maintainable or reusable.

Consider the following Terraform configuration for the Google provider:

locals {
  google_apis = [
    # hypothetical examples, replace with the services you need
    "compute.googleapis.com",
    "storage.googleapis.com",
  ]
}

provider "google" {
  project = "my-incredible-project"
  region  = "europe-west1"
  zone    = "europe-west1-b"
}

resource "google_project_service" "services" {
  for_each = toset(local.google_apis)

  service = each.value
}

If you apply the above from scratch, you will get this output:

The output from commands often provides valuable advice and cut-and-paste ready code

Note the resource addresses in that output: Terraform is suggesting the naming convention it will use to store the state of such resources. With for_each, each instance is addressed by its key. Therefore, the correct import commands for such configuration are:

$ terraform import 'google_project_service.services["{SERVICE_NAME_1}"]' my-incredible-project/{SERVICE_NAME_1}
$ terraform import 'google_project_service.services["{SERVICE_NAME_2}"]' my-incredible-project/{SERVICE_NAME_2}

where:
  • my-incredible-project is the “Project ID” of my GCP Project.
  • {PROJECT_ID}/{SERVICE_NAME} is the name/ID of the resource (in this case, the project-service association) as defined on the actual cloud platform. You can find it documented in the Import section of each resource's documentation. Also, I wrapped the resource address in single quotes because it contains special characters.

Help! I borked the state… what now?

First of all, I suggest switching the backend to a versioned store from the very beginning. Should you use a bucket-based backend such as s3 or gcs, ensure that versioning is enabled. The state is indeed a JSON file, so you can restore a previous version if you mess it up really badly.

Terraform can also "forget" the state of a resource with the $ terraform state rm {TF_RESOURCE_NAME} command.

Looking at the examples above, you could undo an import with:

$ terraform state rm 'google_project_service.services["{SERVICE_NAME_1}"]'

and start all over, correcting any mistake.
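Should you need to roll the state back on a versioned s3 backend, the AWS CLI can retrieve a previous version of the file; the bucket and key below are hypothetical:

```
$ aws s3api list-object-versions --bucket my-terraform-states \
    --prefix my-project/terraform.tfstate
$ aws s3api get-object --bucket my-terraform-states \
    --key my-project/terraform.tfstate \
    --version-id {VERSION_ID} terraform.tfstate
```

The first command lists the available versions of the state file (with their version IDs), the second downloads the one you pick.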

Rounding up

Importing a whole workload into Terraform takes some time. Don’t rush it, import your resources in small, incremental steps and carefully review the progress with $ terraform plan after each step.
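To keep each review small, you can also scope a plan to the resource you have just imported with the -target flag:

```
$ terraform plan -target=aws_s3_bucket.incredible_bucket
```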

Start with simplified configurations and iterate with plan to verify and add the missing pieces (it will also write part of the code for you!).

Check the Import section of each resource's documentation page on the Terraform website for details.

If you break the state with an import (but did not apply anything), just keep calm. Your infrastructure is still there. Breathe, rinse and repeat ;)

Photo by Pascal Habermann on Unsplash

Output screenshots were created with Carbon.

