Part 3 — HumanGov Application — Terraform-1: Infrastructure as Code (IaC) with Terraform

Cansu Tekin
11 min read · May 10, 2023

--

HumanGov is a software-as-a-service (SaaS) cloud company that will create a Human Resources Management SaaS application for the Department of Education across all 50 US states, hosting the application files and databases in the cloud. Whenever a new employee is hired, a new record will be created for that employee inside the application.

In this project series, we are going to transition the architecture from a traditional virtual-machine architecture to a modern container-based architecture using Docker containers and Kubernetes running on AWS. In addition, we will be responsible for automating the complete software delivery process with CI/CD pipelines using AWS services such as AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Finally, we will learn how to monitor and observe the cloud environment in real time using tools such as Prometheus and Grafana, and automate one-off cloud tasks using Python and the AWS SDK.

In this section, we are going to introduce the fundamentals of Terraform and practice Terraform with the AWS cloud provider before using it in the implementation of the HumanGov application. This is the 3rd part of the project series. Check Part 1 and Part 2 to follow along.

The Department of Education demanded compliance with the following rules while developing the application and its infrastructure:

  1. No pooled tenants are allowed by the states’ IT divisions. When developing a SaaS application we have two architecture options: pooled (shared) or siloed. In the pooled model, all users of the application (tenants) share the same infrastructure. In the siloed model, the application and all of its data for a given tenant run on dedicated infrastructure that serves only that tenant. Each US state will therefore have its own infrastructure serving its application.
  2. Containers are not yet approved (homologated) by the states’ IT divisions. The application architecture should not include any containerization technology such as Docker or Kubernetes. The first version of the application will not be hosted inside Docker; instead, it will be delivered using virtual machines until the states’ IT divisions approve containerization.

HumanGov will need to provision dedicated infrastructure for each state. Each state will have its own EC2 virtual machine instance, its own DynamoDB table to store its data, and its own S3 bucket to store employee identification files inside the AWS cloud.

We need 50 different environments, one per state/tenant, each containing three resources: an EC2 instance, a DynamoDB table, and an S3 bucket. In total, we need to provision 50 × 3 = 150 resources inside AWS. As DevOps engineers, we are responsible for provisioning all of these resources. Done manually, this is a cumbersome task that opens the door to human error. As a solution, we are going to provision all of these resources in a fully automated way using Terraform.
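As a rough illustration of how Terraform makes this scale, a single resource block with for_each can stamp out one resource per state from a list of states. The sketch below is illustrative only; the variable name, the three example states, and the bucket naming are assumptions, not the actual HumanGov configuration, which comes later in the series.

variable "states" {
  type    = set(string)
  default = ["california", "florida", "texas"] # ...one entry per state
}

# One S3 bucket per state, all from a single resource block
resource "aws_s3_bucket" "humangov_bucket" {
  for_each = var.states
  bucket   = "humangov-${each.key}-bucket"
}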

Terraform: Infrastructure as Code (IaC)

Anytime we provision resources in an on-premises environment, we tend to do it manually. That is expensive, hard to scale up and down, prone to human error, inconsistent, and slow to deploy. IaC lets us define infrastructure in machine-readable files and deploy, update, and destroy cloud resources in an automated way, usually by calling the cloud providers’ APIs. IaC is essential for automation: it is a fast, safe, consistent, reusable, and powerful way to improve the software delivery process and avoid human error. Additionally, we can store our IaC source files in a version control system.

IaC Tools:

Configuration Management Tools: Ansible, Puppet

Provisioning Tools: Terraform, AWS CloudFormation, Azure Resource Manager (ARM), GCP Deployment Manager, OCI Resource Manager

Configuration management tools configure resources that have already been provisioned. For example, Terraform creates EC2 instances, while Ansible creates folders, files, and users, changes permissions, and installs software packages inside those EC2 instances.

Terraform: Terraform is an IaC tool that lets you build, change, and version cloud and on-premises resources.

  • Terraform can provision infrastructure on all cloud providers like AWS, GCP, Microsoft Azure, and Oracle (cloud agnostic).
  • Terraform favors immutable infrastructure. Whenever you need a change in your infrastructure, you update the Terraform files and Terraform handles the change, replacing resources that cannot be updated in place.
  • Terraform is agentless and masterless. You do not need any agent software running inside the hosts you are provisioning; you only need the Terraform binary on the machine running the commands.
  • Terraform uses a declarative language, not an imperative programming language.

Terraform uses the user-friendly HashiCorp Configuration Language (HCL). HCL is a declarative language that defines infrastructure as blocks of code. HCL syntax is built around two key constructs: blocks and arguments. Files with the .tf extension are Terraform configuration files written in HCL.

Types of blocks: terraform, provider, resource, data, module, variable, output, and locals.
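A minimal sketch showing a few of these block types side by side (the names and values here are illustrative only, not part of the HumanGov configuration):

# variable block: declares an input value
variable "region" {
  type    = string
  default = "us-east-1"
}

# locals block: defines values computed once and reused
locals {
  common_tags = {
    project = "humangov"
  }
}

# data block: reads an existing object instead of creating one
data "aws_caller_identity" "current" {}

# output block: exposes a value after terraform apply
output "account_id" {
  value = data.aws_caller_identity.current.account_id
}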

Let's define a resource block as an example assuming we are going to provision a new EC2 instance on AWS named “webserver”:

# block keyword, resource type, and name; arguments go inside the braces
resource "<resource_type>" "<name>" {
  ami           = "ami-830c94e3" # argument 1
  instance_type = "t2.medium"    # argument 2
}

# Creating a resource on AWS
resource "aws_instance" "webserver" {
  ami           = "ami-830c94e3"
  instance_type = "t2.medium"
}

# Creating a resource on Microsoft Azure
resource "azurerm_resource_group" "my_rg" {
  name     = "tf-rg"
  location = "brazilsouth"
}
  • resource_type has two parts, providerName_resourceType; for example, aws_instance is the aws provider’s EC2 instance resource.
  • name can be anything you prefer. It is an identifier that the rest of the Terraform code uses to refer to this resource.
  • We have two arguments here: ami and instance_type. The ami refers to the Amazon Machine Image and defines the operating system of the instance. The instance_type defines the size of the instance (memory and CPU).

We can get most of this information from the Terraform documentation.
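For instance, the local name "webserver" is how the rest of the configuration refers to this resource and its attributes. A small sketch of such a reference (the output name is illustrative):

# Print the instance's public IP after apply
output "webserver_public_ip" {
  value = aws_instance.webserver.public_ip
}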

Working with Terraform Providers

  1. First, we need to create the Terraform configuration files: main.tf, resources.tf, and variables.tf. The main.tf file defines the provider in which you would like to provision resources, the resources.tf file defines the resources you would like to provision, and the variables.tf file declares the variables you would like to assign values to (a small illustrative sketch follows).
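Since variables.tf is not shown in the examples that follow, here is a minimal sketch of what it might contain (the variable name and default value are assumptions):

variable "instance_type" {
  description = "EC2 instance type for the web server"
  type        = string
  default     = "t3.medium"
}

# Referenced elsewhere as var.instance_type, for example:
# instance_type = var.instance_type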

Example of main.tf file to provision an EC2 instance on AWS:

terraform {
  required_version = "~> 1.4.0" # version constraint
  required_providers {
    aws = {
      version = "4.64.0"
      source  = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region     = "us-west-2"
  access_key = "xyz" # placeholder only; avoid hardcoding real credentials
  secret_key = "xyz"
}

resource "aws_instance" "webserver" {
  ami           = "ami-9283xc0"
  instance_type = "t3.medium"
}

We have three blocks here: a terraform block, a provider block, and a resource block. In the terraform block, we define the required versions of Terraform and the provider. In the provider block, we configure the AWS provider with additional information such as the region and credentials.

The Terraform version constraint "~> 1.4.0" means this code should work with any Terraform binary from 1.4.0 up to, but not including, 1.5.0 (i.e., any 1.4.x patch release). If the binary is upgraded to 1.5.0 or later, Terraform will refuse to run this configuration. We enforce the version constraint to make sure our Terraform code stays aligned with the Terraform binary; whenever Terraform or a provider releases a new version, test that the code still works properly before loosening the constraint.
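For reference, a few common version constraint styles (the values are illustrative):

terraform {
  # required_version = "= 1.4.0"  # exactly 1.4.0
  # required_version = ">= 1.4.0" # 1.4.0 or newer, including 1.5.x and later
  required_version = "~> 1.4.0"   # >= 1.4.0 and < 1.5.0 (patch releases only)
}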

2. Second, we need Terraform installed, or a tool that comes with Terraform pre-installed, such as AWS Cloud9. Whenever we execute Terraform, it creates a state file, terraform.tfstate, and records the state of the provisioned infrastructure inside it.

You can install Terraform for your operating system from the HashiCorp releases page; the commands below show a Linux example:

#Download terraform
wget https://releases.hashicorp.com/terraform/0.15.1/terraform_0.15.1_linux_amd64.zip

#Unzip terraform binaries
unzip terraform_0.15.1_linux_amd64.zip

#Add terraform to /usr/local/bin
sudo mv terraform /usr/local/bin

The first command to run is terraform init, which looks for configuration files such as main.tf inside the configuration directory. The configuration directory is the directory Terraform looks at whenever you run Terraform commands. The provider is AWS in this example, so terraform init downloads the Terraform AWS plugin: it goes to source = "hashicorp/aws" and installs version 4.64.0 so that this Terraform code can run successfully on the local machine. These plugins are stored in a hidden .terraform folder on your local machine.

We can split main.tf into two configuration files to stay better organized: terraform.tf and resources.tf.

terraform.tf

terraform {
  required_version = "~> 1.4.0" # version constraint
  required_providers {
    aws = {
      version = "4.64.0"
      source  = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region     = "us-west-2"
  access_key = "xyz" # placeholder only; avoid hardcoding real credentials
  secret_key = "xyz"
}

resources.tf

resource "aws_instance" "webserver"{
ami = "ami-9283xc0"
instance_type = "t3.medium"
}

We place these files inside the configuration directory. Make sure you are in the configuration directory when running Terraform commands. Run the following Terraform commands, in this order, to provision the AWS EC2 instance just as we did with the single main.tf file.

terraform init     # initialize Terraform and download provider plugins
terraform plan     # show what Terraform plans to create
terraform apply    # create the resources in the cloud
terraform destroy  # remove the resources once they are no longer needed

Example of a terraform.tf file for working with multiple cloud providers:

terraform {
  required_version = "~> 1.4.0" # version constraint
  required_providers {
    aws = {
      version = "4.64.0"
      source  = "hashicorp/aws"
    }
    azurerm = {
      version = "3.0.0"
      source  = "hashicorp/azurerm"
    }
    google = {
      version = "3.5.0"
      source  = "hashicorp/google"
    }
    oci = {
      source = "hashicorp/oci"
    }
  }
}

When we run the terraform init command, each provider’s plugin is downloaded into the hidden .terraform folder, and each cloud provider gets its own directory there. You can see all the details for each provider in the Terraform documentation.

Every time you create resources with Terraform, refer to the Terraform documentation, because Terraform and the cloud providers frequently change how resources are created.

We are going to create a sample VPC (Virtual Private Cloud) using the AWS provider. Go to the Terraform documentation, select the AWS provider, and click the Documentation button. We are going to use the same example from the documentation.

Step 1: Create a folder inside the humangov AWS Cloud9 environment we created before to store Terraform project files

We are going to use the cloud-based AWS Cloud9 environment throughout this project series, so there is no need to install Terraform; it comes pre-installed on the Cloud9 EC2 instance.

We are going to create a folder named humangov-terraform which will be our configuration directory.

Step 2: Create a main.tf file under the humangov-terraform configuration directory

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

Step 3: Initialize the Terraform configuration

At this point, our AWS account has only one VPC in the us-east-1 region.

Go to the configuration directory on AWS Cloud9 and initialize Terraform.

terraform init

The AWS plugin is installed inside the hidden .terraform directory.

terraform plan
terraform apply 

A VPC is created in AWS and a terraform.tfstate file is created in the configuration directory. Now we have 2 VPCs.

Step 4: Split the main.tf content into resources.tf and terraform.tf files and create a local file inside the configuration directory on Cloud9

terraform.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

resources.tf

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

Go to the Terraform documentation providers page. Select the local provider from the list and go to its documentation. It is used to manage local resources, such as creating files. Use its example to create a local file and add content to it.

Add the following resource from the documentation to resources.tf to create a file named notes.txt:

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

# Create a local file named notes.txt
resource "local_file" "notes" {
  content  = "This file is created by Terraform"
  filename = "notes.txt"
}

Every time we use a new provider, we run the terraform init command again. Here we are using the local provider, so Terraform needs to install the new provider’s plugin.

terraform init
terraform plan
terraform apply

Step 5: Use the random provider to create a random string

Go to the providers page in the Terraform documentation again, select the random provider, and open its documentation.

We are going to use the same example here. Add this resource to the resources.tf file. The file should look like this at this point:

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

# Create a local file named notes.txt
resource "local_file" "notes" {
  content  = "This file is created by Terraform"
  filename = "notes.txt"
}

# Create a random string with length 10
resource "random_string" "random" {
  length = 10
}

Initialize Terraform again and apply the configuration to create the resources.

terraform init
terraform plan
terraform apply

Terraform will show an id for the random string, and we can reference the generated value wherever we need it: as an identifier, a password, a file name, or part of a resource name.
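For example, the generated value can be referenced from other resources through the result attribute. A small sketch reusing the local_file resource type from above (the file name and content are illustrative):

# Write the generated string into a local file
resource "local_file" "random_note" {
  content  = "Generated suffix: ${random_string.random.result}"
  filename = "random-note.txt"
}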

Step 6: Add the Azure provider to the terraform.tf file and see how Terraform works with multiple cloud providers

The terraform.tf file should look like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

Initialize Terraform again and check the .terraform folder. You should see plugins for all the providers we have used so far (aws, azurerm, random, local).

We could now configure the Azure provider and provision resources in Azure just as we did for AWS. Also notice that no explicit authentication was required to provision resources in AWS, because Terraform running inside Cloud9 is already pre-authenticated with the AWS account. However, to provision Azure resources from the same configuration, we would need to authenticate with Azure, because Cloud9 is not pre-authenticated with Azure.
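If we did want to provision Azure resources, we would also need a provider block for azurerm, which requires an empty features block, plus credentials, typically supplied through the Azure CLI or environment variables rather than hardcoded. A minimal sketch:

provider "azurerm" {
  features {} # required by the azurerm provider, even when empty

  # Authentication typically comes from the Azure CLI (az login) or from
  # environment variables such as ARM_CLIENT_ID, ARM_CLIENT_SECRET,
  # ARM_SUBSCRIPTION_ID, and ARM_TENANT_ID.
}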

We are done with the first section of Terraform fundamentals. We can now destroy the resources we created with Terraform: go to the configuration directory and run the terraform destroy command.

terraform destroy

All three resources we created (the AWS VPC, notes.txt, and the random string) were destroyed successfully!

We will continue working on Terraform fundamentals, and once we are done we will apply our Terraform knowledge to configure the cloud infrastructure for the HumanGov application.

CONGRATULATIONS!!
