Terraform Remote State on Azure

Michael Hannecke
Bluetuple.ai
Aug 16, 2023 · 6 min read

Step-by-step guide to setting up a Terraform remote state file on Azure

Introduction

Cloud computing and infrastructure-as-code are transforming the way we manage and deploy resources in the IT world. Among the leading tools in this revolution is Terraform. In this post, we’ll delve into setting up Terraform with remote state management within an Azure subscription, ensuring a smooth and scalable infrastructure management experience.

Prerequisites

0. Source Code

All files can be found here as templates: https://github.com/bluetuple/terraform-azure/

1. Install Terraform

Before diving into Azure, ensure you have Terraform installed. Download the appropriate package from the Terraform website and install it.

Explanation: Terraform is a command-line tool, so you need to have it locally installed to run commands against your Azure resources.

2. Azure CLI Installation

Ensure the Azure Command-Line Interface (CLI) is installed. This tool interacts directly with Azure services.

Explanation: While Terraform will handle most tasks, the Azure CLI allows for additional configuration and verification tasks.

3. Authenticate Terraform with Azure

Before Terraform can manage resources in Azure, it needs permissions. Use the Azure CLI to log in:

az login

A new browser window will open where you have to log in to your Azure account. From the list of available subscriptions shown, note down the ID of the subscription you want to use.

Now set the subscription:

az account set --subscription "<subscription-id>"

You can verify the current setting with

az account show

4. Create a Service Principal

Now we have to create a Service Principal (SP). An SP is an application in Azure Active Directory (AAD) that provides the authorization tokens Terraform needs to perform actions on your behalf.

az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/<subscription-id>"
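The command returns JSON along the following lines (all values here are placeholders, and the generated display name will differ); note these fields, since `appId`, `password`, and `tenant` map to the `ARM_CLIENT_ID`, `ARM_CLIENT_SECRET`, and `ARM_TENANT_ID` environment variables in the next step:

```json
{
  "appId": "<app_id>",
  "displayName": "<generated-display-name>",
  "password": "<client secret>",
  "tenant": "<tenant-id>"
}
```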

It is recommended NOT to store these credentials in any Terraform script, so we will export them as environment variables and keep them in a hidden file read from the console.

Keep in mind that anyone with access to your local console could read these variables as well, so this is not a suitable approach for a production environment. For production I would recommend storing these values in a key vault, but that is out of scope for now.

5. Set Environment Variables

Set the following environment variables and store them in a `.secrets` file. You can load them afterwards with `source .secrets`.

Create a new hidden secrets file in the current folder:

nano .secrets

Copy the following exports into the new file, replace the placeholders with your actual values, and save the file (`Ctrl + X`, then `Y`):

export ARM_CLIENT_ID="<app_id>"
export ARM_CLIENT_SECRET="<client secret>"
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_TENANT_ID="<tenant-id>"

Activate the settings:

source .secrets
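As a quick sanity check, sourcing the file should leave all four `ARM_*` variables set in your shell. The sketch below demonstrates this with the placeholder values in a throwaway directory, so a real `.secrets` file is not touched:

```shell
#!/bin/sh
# Work in a temporary directory so an existing .secrets file is not overwritten
cd "$(mktemp -d)"

# Placeholder credentials, mirroring the .secrets template above
cat > .secrets <<'EOF'
export ARM_CLIENT_ID="<app_id>"
export ARM_CLIENT_SECRET="<client secret>"
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_TENANT_ID="<tenant-id>"
EOF

# Load the variables into the current shell session
. ./.secrets

# List the ARM_* variable names now present in the environment
env | grep '^ARM_' | cut -d= -f1 | sort
```

In your real session, `env | grep '^ARM_'` should likewise show all four names; if any is missing, check the `.secrets` file for typos.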

Terraform Configuration

We now have to create a couple of Terraform declaration files. In theory everything apart from the variable values could go into a single file, but for readability and modularity we will split the declarations across separate files. Create the following empty files with your preferred code editor:

main.tf

resourcegroups.tf

variables.tf

storage.tf

remotestate.tfvars

We now have to set up an initial `main.tf` file which holds the Terraform provider configuration. Place the following code in your `main.tf` file and save it:

# configuration of azure provider
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # version = "~>3.6.0"
    }
  }
  required_version = ">= 1.5.0"
}

provider "azurerm" {
  features {}
}

Next we will define a file for variable declaration and for resource groups. Let’s start with the variable definition.

Copy the following code into your `variables.tf` file:

# variables.tf
## All variable definitions will go here

variable "sbx_resgroup_name" {
  type        = string
  description = "The resource group for the state file storage account"
}

variable "sbx_default_location" {
  type        = string
  description = "default location"
}

variable "sbx_tf_storage_account" {
  type        = string
  description = "storage account used for terraform backend state"
}
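The `remotestate.tfvars` file from the list above holds the concrete values for these variables. A minimal sketch (the values are placeholders you need to replace; remember the storage account name must be globally unique):

```hcl
# remotestate.tfvars
sbx_resgroup_name      = "<your-resourcegroup-name>"
sbx_default_location   = "westeurope"
sbx_tf_storage_account = "<your-unique-storage-account>"
```

Pass the file explicitly via `-var-file=remotestate.tfvars` when running `plan` and `apply`; otherwise Terraform will prompt for the values interactively.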

Create a new file `resourcegroups.tf` and place in it the definition of the resource group that will hold the storage account we will use for remote state in the next steps. For a start, we only need one resource group to hold the storage account wherein we'll place the state file.

# All resourcegroup definitions will go here

# resource group for terraform remote state
resource "azurerm_resource_group" "sbx_tf_rg" {
  name     = var.sbx_resgroup_name
  location = var.sbx_default_location
}

Right after this, create a file named `storage.tf` and place the storage account details in there. Keep in mind that a storage account name in Azure must be globally unique.

# Create a storage account to store the central Terraform state.
resource "azurerm_storage_account" "tfstateac" {
  name                      = var.sbx_tf_storage_account # Name of the storage account, provided as a variable.
  resource_group_name       = var.sbx_resgroup_name      # Name of the resource group, provided as a variable.
  location                  = var.sbx_default_location   # Location where the storage account will be created.
  account_tier              = "Standard"                 # Specifies the performance tier of the storage account.
  account_replication_type  = "LRS"                      # Specifies the type of replication used for data redundancy.
  enable_https_traffic_only = true                       # Enforces HTTPS-only access to the storage account.

  # Note: Additional settings like network rules, encryption, etc. can be configured here.
}

# Define a blob container within the storage account to store the Terraform backend state.
resource "azurerm_storage_container" "tfstate" {
  name                  = "terraformstate"           # Name of the blob container for storing the Terraform state.
  storage_account_name  = var.sbx_tf_storage_account # Name of the storage account, provided as a variable.
  container_access_type = "private"                  # Specifies the access level for the blob container.

  depends_on = [azurerm_storage_account.tfstateac]
}
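As a side note: referencing the storage account resource directly, instead of repeating the raw variable, gives Terraform an implicit dependency between the two resources, so the explicit `depends_on` becomes unnecessary. A sketch of that variant:

```hcl
resource "azurerm_storage_container" "tfstate" {
  name                  = "terraformstate"
  storage_account_name  = azurerm_storage_account.tfstateac.name # implicit dependency
  container_access_type = "private"
}
```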

With everything in place we can now let Terraform download the Azure provider and create the resource group and storage account. Run the following commands, making sure you are in the same folder as the .tf files.

To shorten typing, it might be a good idea to set an alias for terraform first; being lazy at the right time isn't that bad…

alias tf=terraform

# this will download the azure provider:
tf init

# well formatted code is more readable and shiny:
tf fmt

# let's check for typos:
tf validate

# no errors, so we can ask terraform for a plan of the changes
# before implementing anything - safety first
tf plan -var-file=remotestate.tfvars

# if everything looks good, let terraform do the heavy lifting:
tf apply -var-file=remotestate.tfvars -auto-approve

# check the status:
tf show

Moving from local state to remote state

In the main.tf add the following lines:

  backend "azurerm" {
    resource_group_name  = "<your-resourcegroup-name>"
    storage_account_name = "<your-unique-storage-account>"
    container_name       = "terraformstate"
    key                  = "terraform/sbx/state"
  }

This instructs Terraform to store its state in the given storage account. `key` is the path (blob name) under which the state file is stored inside the container; you can use it to separate stages like `prd`, `qa`, and `dev`.

Ensure that the backend configuration is placed inside the `terraform {}` block, at the same level as `required_providers {}`.

The main.tf should look like this:

# configuration of azure provider
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # version = "~>3.6.0"
    }
  }
  required_version = ">= 1.5.0"

  backend "azurerm" {
    resource_group_name  = "<your-resourcegroup-name>"
    storage_account_name = "<your-unique-storage-account>"
    container_name       = "terraformstate"
    key                  = "terraform/sbx/state"
  }
}

provider "azurerm" {
  features {}
}

Unfortunately Terraform does not support variables in the backend configuration; maybe that will change in future versions. As a workaround, the values can be supplied at init time via `-backend-config` arguments (so-called partial configuration).

As a last step you now have to initialize Terraform again:

terraform init

Terraform will detect the backend change and ask whether to copy the existing state; approve with 'yes' to have your state file moved to the Azure backend.

That's it, you're done. Your Terraform state is now independent from your local machine. Other team members can now carry out Infrastructure-as-Code tasks with Terraform as well (given they have the proper access rights).

P.S.:

If you want to clean up the sandbox environment again:

tf destroy

Everything created from the files in the current folder (!) will be deleted…

Happy Infrastructure-as-Code(ing)!

Contact me on LinkedIn https://www.linkedin.com/in/michaelhannecke/

If you have read it to this point, thank you! You are a hero (and a Nerd ❤)! I try to keep my readers up to date with interesting happenings in the AI world.
