Terraform Zero to Hero

“Infrastructure as Code”

Prashanta Paudel
Mar 29 · 24 min read

Going through all of the content (text, videos, and courses), you will acquire about 80% of the Terraform knowledge required for daily DevOps tasks. The best way to learn is by practice, so try things yourself as you go through the content.

Cloud computing metaphor: the group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.

There are three main cloud service models:

  1. IaaS (Infrastructure as a Service), e.g. GCP, AWS, Azure, rackspace.com
  2. PaaS (Platform as a Service), e.g. App Engine, Force.com, Azure
  3. SaaS (Software as a Service), e.g. Gmail, Google Drive, Salesforce, Dropbox, Office 365

There are three cloud deployment models:

  1. Private cloud
  2. Public cloud
  3. Hybrid cloud

Infrastructure management and automation is a very hot topic in the context of cloud automation today. I have not used Terraform before, so I will be learning while noting important points as I work with it. We will go from the basics to server implementation using Terraform.

You can see the dynamics of DevOps in the picture below.

Taken from : https://thenewstack.io/want-devops-automation-its-people-before-pipelines/

Infrastructure as Code (IaC) is the process of managing and provisioning servers and data centers, in the cloud or in a private network, through machine-readable configuration files rather than physical hardware configuration. Many kinds of infrastructure, including bare-metal servers and virtual servers, can be configured through this method.

We can use either scripts or declarative definitions to define the infrastructure, and we can use a version control system to manage the versions of the infrastructure files we develop over time.

There are two approaches to IaC implementation: push and pull. In the push approach, configurations are pushed to the target system; in the pull approach, the target pulls its configurations from a configuration server.

Various tools for IaC

DevOps consists of various stages that software passes through before an implementation is ready.

PLAN

Plan is composed of two activities: “define” and “plan”. These refer to the business value and application requirements. Specifically, “Plan” activities include:

  • Production metrics, objects, and feedback
  • Requirements
  • Business metrics
  • Update release metrics
  • Release plan, timing and business case
  • Security policy and requirement

A combination of IT personnel will be involved in these activities: business application owners, software developers, software architects, continual release management, security officers, and the organization responsible for managing the production IT infrastructure. Some notable vendors and solutions that facilitate Plan include Atlassian, CA Technologies, iRise, and Jama Software.

CREATE

Create is composed of the building, coding, and configuring of the software development process. The specific activities are:

  • Design of the software and configuration
  • Coding including code quality and performance
  • Software build and build performance
  • Release candidate

Tools and vendors in this category often overlap with other categories. Because DevOps is about breaking down silos, this is reflected in the activities and product solutions.

Some notable solutions and vendors include Bitbucket, GitLab, GitHub, Electric Cloud, and CFEngine.

VERIFY

Verify is directly associated with ensuring the quality of the software release: activities designed to ensure code quality is maintained and only the highest quality is deployed to production. The main activities in this stage are:

  • Acceptance testing
  • Regression testing
  • Security and vulnerability analysis
  • Performance
  • Configuration testing

Notable vendors and solutions for verify related activities generally fall under four main categories: Test automation (ThoughtWorks, IBM, HP), Static analysis (Parasoft, Microsoft, SonarSource), Test Lab (Skytap, Microsoft, Delphix), and Security (HP, IBM, Trustwave, FlawCheck).

PACKAGING

Packaging refers to the activities involved once the release is ready for deployment, often also referred to as staging or preproduction (“preprod”). This often includes tasks and activities such as:

  • Approval/preapprovals
  • Package configuration
  • Triggered releases
  • Release staging and holding

Notable solutions for this include universal package managers such as JFrog’s Artifactory, Sonatype Nexus repository, and Inedo’s ProGet.

RELEASE

Release-related activities include scheduling, orchestrating, provisioning, and deploying software into production and targeted environments. The specific Release activities include:

  • Release coordination
  • Deploying and promoting applications
  • Fallbacks and recovery
  • Scheduled/timed releases

Solutions that cover this aspect of the toolchain include application release automation, deployment automation, and release management; specific vendors are Automic, Clarive, Inedo, BMC Software, IBM, Flexagon, VMware, and XebiaLabs.

CONFIGURE

Configure activities fall under the operations side of DevOps. Once software is deployed, there may be additional IT infrastructure provisioning and configuration activities required. Specific activities include:

  • Infrastructure storage, database and network provisioning and configuring
  • Application provision and configuration.

The main types of solutions that facilitate these activities are continuous configuration automation, configuration management, and infrastructure as code tools. Notable solutions include Ansible, Chef, Puppet, Otter, and Salt.

MONITORING

Monitoring is an important link in a DevOps toolchain. It allows an IT organization to identify issues with specific releases and to understand the impact on end users. A summary of Monitor-related activities:

  • Performance of IT infrastructure
  • End-user response and experience
  • Production metrics and statistics

Information from monitoring activities often impacts Plan activities required for changes and for new release cycles. Notable vendors are BigPanda, Ganglia, New Relic, Wireshark, and Plumbr.

Enough of the theory; let’s focus on Terraform and how it is implemented.

TERRAFORM

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision data center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, IBM Cloud (formerly Bluemix), Google Cloud Platform, Linode, Microsoft Azure, Oracle Cloud Infrastructure, and VMware vSphere, as well as OpenStack.

Wikipedia

Terraform is distributed as a binary file on all supported platforms.

Available binaries

INSTALLATION OF TERRAFORM

To install Terraform on your system, you first download the binary to your computer and then add its location to your system PATH.

You can follow the instructions here for all supported platforms.

I am using a Mac, so I will briefly show how to do it on macOS.

First you need a package manager, so install Homebrew or any other package installer. Then go to a bash shell and type:

$ brew install terraform

This installs the latest version of Terraform with the PATH already set up.

Check this video for a better understanding.

VERIFY THE INSTALL

After installing Terraform, you can verify the installation:

$ terraform --version
Terraform v0.11.13


Even though Terraform can be used to implement infrastructure on many platforms, this blog focuses only on AWS, as that is what I am working on.

If you haven’t created an AWS account, please create one now and stay within the free tier while setting up; otherwise you may be billed for usage, and I am not responsible in that case.

CONFIGURATION FILE FORMAT

The set of files used to describe infrastructure in Terraform is simply known as a Terraform configuration. Terraform uses its own configuration language, designed to allow concise descriptions of infrastructure. The Terraform language is declarative, describing an intended goal rather than the steps to reach that goal.

In Terraform we use HCL (HashiCorp Configuration Language). Configuration files can also be JSON, but JSON is recommended only when the configuration is generated by a machine.

The smallest part of a configuration is the argument. Arguments form blocks, and blocks form modules.

The folder structure could be as shown below.

Generally, we can think of the hierarchy as:

Arguments > Blocks > Modules > Terraform configuration

Resources and Modules

A Terraform configuration consists of a root module, where evaluation begins, along with a tree of child modules created when one module calls another.

The main purpose of the Terraform language is declaring resources. All other language features exist only to make the definition of resources more flexible and convenient.

A group of resources can be gathered into a module, which creates a larger unit of configuration. A resource describes a single infrastructure object, while a module might describe a set of objects and the necessary relationships between them in order to create a higher-level system.

So, in one Terraform root folder there can be many module folders, which are called by the root module.

main terraform folder
├── build
│   └── main.tf, auth.tf
├── test
│   └── main.tf, auth.tf
├── production
│   └── main.tf, auth.tf
└── modules
    └── ebs, sg, vpc

So, you get the idea: main.tf lives inside one folder, but it can call modules from anywhere, even a GitHub address.

Arguments, Blocks, and Expressions

The syntax of the Terraform language consists of only a few basic elements:

  • Blocks are containers for other content and usually represent the configuration of some kind of object, like a resource. Blocks have a block type, can have zero or more labels, and have a body that contains any number of arguments and nested blocks. Most of Terraform’s features are controlled by top-level blocks in a configuration file.
  • Arguments assign a value to a name. They appear within blocks.
  • Expressions represent a value, either literally or by referencing and combining other values. They appear as values for arguments, or within other expressions.
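These three elements can be seen together in a small sketch (the variable reference here is an illustrative assumption, not from the original post):

```hcl
# A block of type "resource" with two labels: the resource type and the name.
resource "aws_instance" "example" {
  # Arguments assign a value to a name inside the block body.
  ami           = "ami-b374d5a5"          # a literal expression
  instance_type = "${var.instance_type}"  # an expression referencing another value
}
```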

Code Organization

The Terraform language uses configuration files that are named with the .tf file extension. There is also a JSON-based variant of the language that is named with the .tf.json file extension.

Configuration files must always use UTF-8 encoding, and by convention are usually maintained with Unix-style line endings (LF) rather than Windows-style line endings (CRLF), though both are accepted.

A module is a collection of .tf or .tf.json files kept together in a directory. The root module is built from the configuration files in the current working directory when Terraform is run, and this module may reference child modules in other directories, which can in turn reference other modules, etc.

The simplest Terraform configuration is a single root module containing only a single .tf file. A configuration can grow gradually as more resources are added, either by creating new configuration files within the root module or by organizing sets of resources into child modules.

Identifiers

Argument names, block type names, and the names of most Terraform-specific constructs like resources, input variables, etc. are all identifiers.

Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.

For complete identifier rules, Terraform implements the Unicode identifier syntax, extended to include the ASCII hyphen character -.

Comments

The Terraform language supports three different syntaxes for comments:

  • # begins a single-line comment, ending at the end of the line.
  • // also begins a single-line comment, as an alternative to #.
  • /* and */ are start and end delimiters for a comment that might span over multiple lines.

The # single-line comment style is the default comment style and should be used in most cases. Automatic configuration formatting tools may automatically transform // comments into # comments, since the double-slash style is not idiomatic.
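All three comment styles in one sketch:

```hcl
# a single-line comment
// also a single-line comment (formatting tools may rewrite this to #)
/* a comment that
   spans multiple lines */
variable "region" {
  default = "us-east-1"
}
```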

Character Encoding and Line Endings

Terraform configuration files must always be UTF-8 encoded. While the delimiters of the language are all ASCII characters, Terraform accepts non-ASCII characters in identifiers, comments, and string values.

Terraform accepts configuration files with either Unix-style line endings (LF only) or Windows-style line endings (CR then LF), but the idiomatic style is to use the Unix convention, and so automatic configuration formatting tools may automatically transform CRLF endings to LF.


TERRAFORM AUTHENTICATION

Before you can apply anything in AWS, you need a key pair that authenticates Terraform to the AWS platform when you run apply.

The simple way to do this on a Mac (which I am using) is to store it inside the home folder and provide the path when using it.

So, authentication has two parts:

  1. key: stored inside the home folder (/home/User/) or wherever you wish
  2. AWS credentials file: ~/.aws/credentials, which stores the authentication information entered while running the aws configure command

which looks like:

[default]
aws_access_key_id = ashdkjkjdhkashdkjhakjdhkja
aws_secret_access_key = kaskjdkasdkhaskjdkjashkdhlldalkd

You can add another profile here manually and use it when working on another AWS project, or change the default profile every time you work on a new project.
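For example, a second profile (the name and keys below are placeholders of my own) can sit alongside default and be selected from Terraform with the AWS provider’s profile argument:

```hcl
# ~/.aws/credentials (placeholder profile):
#
#   [project2]
#   aws_access_key_id = ...
#   aws_secret_access_key = ...

# In Terraform, select the named profile instead of the default one:
provider "aws" {
  profile = "project2"
  region  = "us-east-1"
}
```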

OR

Use the key pair in Terraform itself. The entire configuration is shown below; we’ll go over each part after. Save the contents to a file named example.tf. Verify that there are no other *.tf files in your directory, since Terraform loads all of them.

This is a complete configuration that Terraform is ready to apply. The general structure should be intuitive and straightforward.
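A minimal example.tf along these lines (the access keys are placeholders; the AMI is the Ubuntu image referenced later in this guide):

```hcl
# example.tf
provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
}
```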

The provider block is used to configure the named provider, in our case "aws". A provider is responsible for creating and managing resources. Multiple provider blocks can exist if a Terraform configuration is composed of multiple providers, which is a common situation.

The resource block defines a resource that exists within the infrastructure. A resource might be a physical component such as an EC2 instance, or it can be a logical resource such as a Heroku application.

The resource block has two strings before opening the block: the resource type and the resource name. In our example, the resource type is “aws_instance” and the name is “example.” The prefix of the type maps to the provider. In our case “aws_instance” automatically tells Terraform that it is managed by the “aws” provider.

Within the resource block itself is configuration for that resource. This is dependent on each resource provider and is fully documented in the providers reference. For our EC2 instance, we specify an AMI for Ubuntu and request a “t2.micro” instance so we qualify for the free tier.

Initialization

The first command to run for a new configuration — or after checking out an existing configuration from version control — is terraform init, which initializes various local settings and data that will be used by subsequent commands.

Terraform uses a plugin based architecture to support the numerous infrastructure and service providers available. As of Terraform version 0.10.0, each “Provider” is its own encapsulated binary distributed separately from Terraform itself. The terraform init command will automatically download and install any Provider binary for the providers in use within the configuration, which in this case is just the aws provider.

$ terraform init

APPLY

Run terraform apply; Terraform first builds an execution plan. If the plan was created successfully, Terraform will pause and wait for approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure. If the plan looks acceptable, type yes at the confirmation prompt to proceed.

Executing the plan will take a few minutes since Terraform waits for the EC2 instance to become available:

$ terraform apply

DESTROY

We’ve now seen how to build and change infrastructure. Before we move on to creating multiple resources and showing resource dependencies, we’re going to go over how to completely destroy the Terraform-managed infrastructure.

Destroying your infrastructure is a rare event in production environments. But if you’re using Terraform to spin up multiple environments such as development, test, and QA, then destroying is a useful action.


Resources can be destroyed using the terraform destroy command, which is similar to terraform apply but it behaves as if all of the resources have been removed from the configuration.

$ terraform destroy
# ...
- aws_instance.example

The - prefix indicates that the instance will be destroyed. As with apply, Terraform shows its execution plan and waits for approval before making any changes.

Answer yes to execute this plan and destroy the infrastructure:

# ...
aws_instance.example: Destroying...

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.
# ...

Just like with apply, Terraform determines the order in which things must be destroyed. In this case there was only one resource, so no ordering was necessary. In more complicated cases with multiple resources, Terraform will destroy them in a suitable order to respect dependencies, as we’ll see later in this guide.


Now you know the most important parts of Terraform. Dig deeper by going through these materials.


It is also important to note that Terraform interacts with cloud vendors through their APIs, not directly. So, when we apply something, the code is first checked for integrity by Terraform, and then the desired state is requested via the API.

Change Configuration

When you update your Terraform code, depending on the change, the infrastructure may be updated in place or completely destroyed and rebuilt.

Terraform builds an execution plan that only modifies what is necessary to reach your desired state.

By using Terraform to change infrastructure, you can version control not only your configurations but also your state so you can see how the infrastructure evolved over time.

Let’s modify the ami of our instance. Edit the aws_instance.example resource in your configuration and change it to the following:

resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}

$ terraform apply

-/+ aws_instance.example
    ami:                      "ami-2757f631" => "ami-b374d5a5" (forces new resource)
    availability_zone:        "us-east-1a" => "<computed>"
    ebs_block_device.#:       "0" => "<computed>"
    ephemeral_block_device.#: "0" => "<computed>"
    instance_state:           "running" => "<computed>"
    instance_type:            "t2.micro" => "t2.micro"
    private_dns:              "ip-172-31-17-94.ec2.internal" => "<computed>"
    private_ip:               "172.31.17.94" => "<computed>"
    public_dns:               "ec2-54-82-183-4.compute-1.amazonaws.com" => "<computed>"
    public_ip:                "54.82.183.4" => "<computed>"
    subnet_id:                "subnet-1497024d" => "<computed>"
    vpc_security_group_ids.#: "1" => "<computed>"

The prefix -/+ means that Terraform will destroy and recreate the resource, rather than updating it in-place. While some attributes can be updated in-place (which are shown with the ~ prefix), changing the AMI for an EC2 instance requires recreating it. Terraform handles these details for you, and the execution plan makes it clear what Terraform will do.

Additionally, the execution plan shows that the AMI change is what required the resource to be replaced. Using this information, you can adjust your changes to possibly avoid destroy/create updates if they are not acceptable in some situations.

Once again, Terraform prompts for approval of the execution plan before proceeding. Answer yes to execute the planned steps:

# ...
aws_instance.example: Refreshing state... (ID: i-64c268fe)
aws_instance.example: Destroying...
aws_instance.example: Destruction complete
aws_instance.example: Creating...
    ami:                      "" => "ami-b374d5a5"
    availability_zone:        "" => "<computed>"
    ebs_block_device.#:       "" => "<computed>"
    ephemeral_block_device.#: "" => "<computed>"
    instance_state:           "" => "<computed>"
    instance_type:            "" => "t2.micro"
    key_name:                 "" => "<computed>"
    placement_group:          "" => "<computed>"
    private_dns:              "" => "<computed>"
    private_ip:               "" => "<computed>"
    public_dns:               "" => "<computed>"
    public_ip:                "" => "<computed>"
    root_block_device.#:      "" => "<computed>"
    security_groups.#:        "" => "<computed>"
    source_dest_check:        "" => "true"
    subnet_id:                "" => "<computed>"
    tenancy:                  "" => "<computed>"
    vpc_security_group_ids.#: "" => "<computed>"
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
# ...

As indicated by the execution plan, Terraform first destroyed the existing instance and then created a new one in its place. You can use terraform show again to see the new values associated with this instance.

Resource Dependencies

In Terraform we usually have more than one module, and modules may depend on other modules to fully build the infrastructure. In this way, one resource depends on another resource.

For example, let’s update the example to:

resource "aws_eip" "ip" {
  instance = "${aws_instance.example.id}"
}

This should look familiar from the earlier example of adding an EC2 instance resource, except this time we’re building an “aws_eip” resource type. This resource type allocates and associates an Elastic IP to an EC2 instance.

The only parameter for aws_eip is “instance” which is the EC2 instance to assign the IP to. For this value, we use an interpolation to use an attribute from the EC2 instance we managed earlier.

The syntax for this interpolation should be straightforward: it requests the “id” attribute from the “aws_instance.example” resource.

If we run

$ terraform apply

+ aws_eip.ip
    allocation_id:     "<computed>"
    association_id:    "<computed>"
    domain:            "<computed>"
    instance:          "${aws_instance.example.id}"
    network_interface: "<computed>"
    private_ip:        "<computed>"
    public_ip:         "<computed>"

+ aws_instance.example
    ami:                      "ami-b374d5a5"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"

Terraform will create two resources: the instance and the Elastic IP. In the “instance” value for the “aws_eip”, you can see the raw interpolation is still present. This is because this value won’t be known until the “aws_instance” is created. It will be replaced at apply time.
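Implicit dependencies through interpolation, as above, are the usual mechanism. When a dependency is not visible in any expression, it can be declared explicitly with depends_on; a sketch (the S3 bucket here is a made-up example, not part of this guide’s infrastructure):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "terraform-getting-started-guide"  # hypothetical bucket name
  acl    = "private"
}

resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"

  # This instance should not be created until the bucket exists.
  depends_on = ["aws_s3_bucket.example"]
}
```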

PROVISIONING

Provisioners let Terraform run scripts or other actions as part of creating a resource. Consider this example:

resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
  }
}

This adds a provisioner block within the resource block. Multiple provisioner blocks can be added to define multiple provisioning steps. Terraform supports multiple provisioners, but for this example we are using the local-exec provisioner.

The local-exec provisioner executes a command locally on the machine running Terraform. We’re using this provisioner versus the others so we don’t have to worry about specifying any connection info right now.
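For comparison, a remote-exec provisioner runs commands on the new resource itself and therefore needs connection info; a minimal sketch (the login user and key path are assumptions for an Ubuntu AMI):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = ["echo hello > /tmp/hello.txt"]

    # remote-exec needs to know how to reach the machine.
    connection {
      type        = "ssh"
      user        = "ubuntu"                    # assumed login user
      private_key = "${file("~/.ssh/id_rsa")}"  # assumed key path
    }
  }
}
```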

Input Variables

You now have enough Terraform knowledge to create useful configurations, but we’re still hard-coding access keys, AMIs, etc. To become truly shareable and version controlled, we need to parameterize the configurations. This page introduces input variables as a way to do this.

Defining Variables

Let’s first extract our access key, secret key, and region into a few variables. Create another file variables.tf with the following contents.

Note that the file can be named anything, since Terraform loads all files ending in .tf in a directory.

variable "access_key" {}
variable "secret_key" {}

variable "region" {
  default = "us-east-1"
}

This defines three variables within your Terraform configuration. The first two have empty blocks {}. The third sets a default. If a default value is set, the variable is optional. Otherwise, the variable is required. If you run terraform plan now, Terraform will prompt you for the values for unset string variables.

» Using Variables in Configuration

Next, replace the AWS provider configuration with the following:

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

This uses more interpolations, this time prefixed with var.. This tells Terraform that you’re accessing variables. This configures the AWS provider with the given variables.

» Assigning Variables

There are multiple ways to assign variables. The following is the descending order of precedence in which variable values are chosen.

» Command-line flags

You can set variables directly on the command-line with the -var flag. Any command in Terraform that inspects the configuration accepts this flag, such as apply, plan, and refresh:

$ terraform apply \
  -var 'access_key=foo' \
  -var 'secret_key=bar'
# ...

Once again, setting variables this way will not save them, and they’ll have to be input repeatedly as commands are executed.

» From a file

To persist variable values, create a file and assign variables within this file. Create a file named terraform.tfvars with the following contents:

access_key = "foo"
secret_key = "bar"

For all files which match terraform.tfvars or *.auto.tfvars present in the current directory, Terraform automatically loads them to populate variables. If the file is named something else, you can use the -var-file flag directly to specify a file. These files are the same syntax as Terraform configuration files. And like Terraform configuration files, these files can also be JSON.

We don’t recommend saving usernames and passwords to version control, but you can create a local secret variables file and use -var-file to load it.

You can use multiple -var-file arguments in a single command, with some checked in to version control and others not checked in. For example:

$ terraform apply \
  -var-file="secret.tfvars" \
  -var-file="production.tfvars"

» From environment variables

Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_access_key variable can be set to set the access_key variable.

Note: Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other mechanisms.

» UI Input

If you execute terraform apply with certain variables unspecified, Terraform will ask you to input their values interactively. These values are not saved, but this provides a convenient workflow when getting started with Terraform. UI Input is not recommended for everyday use of Terraform.

Note: UI Input is only supported for string variables. List and map variables must be populated via one of the other mechanisms.

» Variable Defaults

If no value is assigned to a variable via any of these methods and the variable has a default key in its declaration, that value will be used for the variable.

» Lists

Lists are defined either explicitly or implicitly:

# implicitly by using brackets [...]
variable "cidrs" { default = [] }

# explicitly
variable "cidrs" { type = "list" }

You can specify lists in a terraform.tfvars file:

cidrs = [ "10.0.0.0/16", "10.1.0.0/16" ]

» Maps

We’ve replaced our sensitive strings with variables, but we still are hard-coding AMIs. Unfortunately, AMIs are specific to the region that is in use. One option is to just ask the user to input the proper AMI for the region, but Terraform can do better than that with maps.

Maps are a way to create variables that are lookup tables. An example will show this best. Let’s extract our AMIs into a map and add support for the us-west-2 region as well:

variable "amis" {
  type = "map"

  default = {
    "us-east-1" = "ami-b374d5a5"
    "us-west-2" = "ami-4b32be2b"
  }
}

A variable can have a map type assigned explicitly, or it can be implicitly declared as a map by specifying a default value that is a map. The above demonstrates both.

Then, replace the aws_instance with the following:

resource "aws_instance" "example" {
  ami           = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
}

This introduces a new type of interpolation: a function call. The lookup function does a dynamic lookup in a map for a key. The key is var.region, which means the value of the region variable is used as the key.

While we don’t use it in our example, it is worth noting that you can also do a static lookup of a map directly with ${var.amis[“us-east-1”]}.
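A static lookup can appear anywhere an expression is allowed; for example, in an output (this output is an illustration, not part of the guide’s configuration):

```hcl
output "east_ami" {
  # Index the map with a literal key instead of calling lookup().
  value = "${var.amis["us-east-1"]}"
}
```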

» Assigning Maps

We set defaults above, but maps can also be set using the -var and -var-file values. For example:

$ terraform apply -var 'amis={ us-east-1 = "foo", us-west-2 = "bar" }'

# …

Note: Even if every key will be assigned as input, the variable must be established as a map by setting its default to {}.

Here is an example of setting a map’s keys from a file. Starting with these variable definitions:

variable "region" {}

variable "amis" {
  type = "map"
}

You can specify keys in a terraform.tfvars file:

amis = {
  "us-east-1" = "ami-abc123"
  "us-west-2" = "ami-def456"
}

And access them via lookup():

output "ami" {
  value = "${lookup(var.amis, var.region)}"
}

Like so:

$ terraform apply -var region=us-west-2

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

  ami = ami-def456

Output Variables

In the previous section, we introduced input variables as a way to parameterize Terraform configurations. On this page, we introduce output variables as a way to organize data to be easily queried and shown back to the Terraform user.

When building potentially complex infrastructure, Terraform stores hundreds or thousands of attribute values for all your resources. But as a user of Terraform, you may only be interested in a few values of importance, such as a load balancer IP, VPN address, etc.

Outputs are a way to tell Terraform what data is important. This data is outputted when apply is called, and can be queried using the terraform output command.

Defining Outputs

Let’s define an output to show us the public IP address of the elastic IP address that we create. Add this to any of your *.tf files:

output "ip" {
  value = "${aws_eip.ip.public_ip}"
}

This defines an output variable named "ip". The name of the variable must conform to Terraform variable naming conventions if it is to be used as an input to other modules. The value field specifies what the value will be, and almost always contains one or more interpolations, since the output data is typically dynamic. In this case, we're outputting the public_ip attribute of the elastic IP address.

Multiple output blocks can be defined to specify multiple output variables.
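As a sketch of multiple outputs, a configuration could expose several attributes of the example instance from earlier in the guide (the output names here are illustrative; the attributes are those exposed by the AWS provider):

```hcl
output "instance_ip" {
  value = "${aws_instance.example.public_ip}"
}

output "instance_id" {
  value = "${aws_instance.example.id}"
}
```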

Viewing Outputs

Run terraform apply to populate the output. This only needs to be done once after the output is defined. The apply output should change slightly. At the end you should see this:

$ terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

ip = 50.17.232.209

apply highlights the outputs. You can also query the outputs after apply-time using terraform output:

$ terraform output ip

50.17.232.209

This command is useful for scripts to extract outputs.
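For example, a wrapper script might capture an output into a shell variable (a minimal sketch; it assumes terraform apply has already run in the current directory):

```shell
#!/bin/sh
# Read the "ip" output from the current Terraform state
# and use it in a follow-up command.
IP="$(terraform output ip)"
echo "Elastic IP is ${IP}"
```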

Modules

Up to this point, we’ve been configuring Terraform by editing Terraform configurations directly. As our infrastructure grows, this practice has a few key problems: a lack of organization, a lack of reusability, and difficulties in management for teams.

Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group. Modules are used to create reusable components, improve organization, and to treat pieces of infrastructure as a black box.

This section of the getting started will cover the basics of using modules. Writing modules is covered in more detail in the modules documentation.

Warning! The examples on this page are not eligible for the AWS free tier. Do not try the examples on this page unless you’re willing to spend a small amount of money.

» Using Modules

If you have any instances running from prior steps in the getting started guide, use terraform destroy to destroy them, and remove all configuration files.

The Terraform Registry includes a directory of ready-to-use modules for various common purposes, which can serve as larger building-blocks for your infrastructure.

In this example, we’re going to use the Consul Terraform module for AWS, which will set up a complete Consul cluster. This and other modules can be found via the search feature on the Terraform Registry site.

Create a configuration file with the following contents:

provider "aws" {
  access_key = "AWS ACCESS KEY"
  secret_key = "AWS SECRET KEY"
  region     = "us-east-1"
}

module "consul" {
  source  = "hashicorp/consul/aws"
  version = "0.3.3"

  aws_region  = "us-east-1" # should match provider region
  num_servers = "3"
}

The module block begins with the example given on the Terraform Registry page for this module, telling Terraform to create and manage this module. This is similar to a resource block: it has a name used within this configuration (in this case, "consul") and a set of input values that are listed in the module's "Inputs" documentation.

(Note that the provider block can be omitted in favor of environment variables. See the AWS Provider docs for details. This module requires that your AWS account has a default VPC.)

The source attribute is the only mandatory argument for modules. It tells Terraform where the module can be retrieved. Terraform automatically downloads and manages modules for you.

In this case, the module is retrieved from the official Terraform Registry. Terraform can also retrieve modules from a variety of sources, including private module registries or directly from Git, Mercurial, HTTP, and local files.
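As illustrative sketches (the repository URL, tag, and local path here are hypothetical), the same module block syntax covers these other sources:

```hcl
# Module fetched from a Git repository, pinned to a tag
module "network" {
  source = "git::https://github.com/example-org/terraform-aws-network.git?ref=v1.0.0"
}

# Module loaded from a local directory relative to this configuration
module "storage" {
  source = "./modules/storage"
}
```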

The other attributes shown are inputs to our module. This module supports many additional inputs, but all are optional and have reasonable values for experimentation.

After adding a new module to the configuration, it is necessary to run (or re-run) terraform init to obtain and install the new module’s source code:

$ terraform init

# …

By default, this command does not check for new module versions that may be available, so it is safe to run multiple times. The -upgrade option will additionally check for any newer versions of existing modules and providers that may be available.
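For example, to have init also check for newer acceptable versions of already-installed modules and providers:

```shell
$ terraform init -upgrade
```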

Certification

As of this writing, HashiCorp does not offer an official Terraform certification, but many online courses are available.

GitHub

It is standard practice to use GitHub or another version control system to version Terraform code and share it with co-workers. Below is a standard environment for any Terraform project; you may need to modify zones, IPs, folders, etc. before plan runs without errors.

Please use the materials in both GitHub and AWS at your own risk; I am not responsible if you lose your data or kill your servers.

Online Courses

https://www.linkedin.com/learning/learning-terraform/next-steps

https://linuxacademy.com/amazon.../deploying-to-aws-with-ansible-and-terraform

Books

https://github.com/vtraida/books/blob/master/Terraform%20Up%20and%20Running.pdf

https://github.com/arpitjindal97/technology_books/blob/master/Terraform/Getting-Started-with-Terraform,2nd-Edition.pdf

Blogs

https://blog.scottlogic.com/2018/10/08/infrastructure-as-code-getting-started-with-terraform.html

https://www.hashicorp.com/blog/category/terraform (must follow)

Resources

https://www.hashicorp.com/resources

Course certificates

[Image: course completion certificate from LinkedIn Learning]
