Terraform: Decoupling a Monolith Configuration

Brandi McCall
Women in Technology
Mar 31, 2023

In my previous tutorial I created a monolithic Terraform configuration that deployed infrastructure from a single main.tf file. In this tutorial, we will decouple the monolith and remove hard-coded values from the main.tf file, replacing them with variables. We will also create a separate file for the Jenkins installation and start script, a providers file, and an output file. In total, we will create all of the following:

  • main.tf
  • variables.tf
  • jenkins.sh
  • providers.tf
  • outputs.tf

Prerequisites:

  • Completion of my previous tutorial, “Terraform, Amazon EC2, and Jenkins”
  • An AWS Cloud9 environment
  • The AWS provider configured through the CLI with your AWS user access keys

Getting Started

If you have not already done so, or are not very familiar with the format of Terraform HCL files, read my previous tutorial “Terraform, Amazon EC2, and Jenkins” as this is a prerequisite for what is to come. Make sure that you have set up your Cloud9 environment and configured the AWS provider through the CLI with your AWS user access keys. Create a new directory through your CLI where you want to house this project and change to that directory.

Monolithic Terraform Configuration File

Let’s review the main.tf file from the last tutorial, where we used a single file to create infrastructure. The HCL code below is explained by the comments for each block, but in summary, this main.tf file uses AWS as the Terraform provider, creates an EC2 instance based on the stated AMI and instance type, bootstraps a script to install and start Jenkins on instance startup, creates a security group to open ports 22, 8080, and 443 for inbound traffic, and creates a private S3 bucket with a random number name extension that will be used by developers to house Jenkins artifacts.

#Configure the AWS Provider
provider "aws" {
  region = "us-east-2"
}

#Create EC2 Instance
resource "aws_instance" "instance1" {
  ami                    = "ami-0533def491c57d991"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

  tags = {
    Name = "jenkins_instance"
  }

  #Bootstrap Jenkins installation and start
  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
    sudo yum upgrade -y
    sudo amazon-linux-extras install java-openjdk11 -y
    sudo yum install jenkins -y
    sudo systemctl enable jenkins
    sudo systemctl start jenkins
  EOF

  user_data_replace_on_change = true
}

#Create security group
resource "aws_security_group" "jenkins_sg" {
  name        = "jenkins_sg"
  description = "Open ports 22, 8080, and 443"

  #Allow incoming TCP requests on port 22 from any IP
  ingress {
    description = "Incoming SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  #Allow incoming TCP requests on port 8080 from any IP
  ingress {
    description = "Incoming 8080"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  #Allow incoming TCP requests on port 443 from any IP
  ingress {
    description = "Incoming 443"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  #Allow all outbound requests
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "jenkins_sg"
  }
}

#Create S3 bucket for Jenkins artifacts
resource "aws_s3_bucket" "jenkins-artifacts" {
  bucket = "jenkins-artifacts-${random_id.randomness.hex}"

  tags = {
    Name = "jenkins_artifacts"
  }
}

#Make S3 bucket private
resource "aws_s3_bucket_acl" "private_bucket" {
  bucket = aws_s3_bucket.jenkins-artifacts.id
  acl    = "private"
}

#Create random number for S3 bucket name
resource "random_id" "randomness" {
  byte_length = 16
}

Decouple the Monolith

Although infrastructure can be built with a single main.tf file, it is a Terraform best practice to split infrastructure into smaller Terraform configuration files so they can be worked on by multiple teams. When a Terraform monolith is broken down into multiple files, this is called a loosely-coupled Terraform configuration. In this section, we will create several files to begin decoupling our monolith. We will copy pieces from our main.tf file created in the previous tutorial to our new main.tf file created in a new directory. When I refer to “previous main.tf file” it means the file created in the last tutorial. When I refer to “new main.tf file” it means the file created in this tutorial in your new directory.

Navigate to Cloud9 and create a new directory to house this project. Create the following files within the new directory:

  • main.tf
  • variables.tf
  • jenkins.sh
  • providers.tf
  • outputs.tf

Now we will start moving pieces of the previous monolithic main.tf file into these individual files. You cannot have two main.tf files in the same directory, which is why we created a new directory for this project; it lets you keep the main.tf file from the last tutorial separate. Your file tree should show all of the files you just created. You may not have the terraform.tfstate or terraform.tfstate.backup files yet, as these are created automatically when you run the terraform apply command later. Go ahead and copy the contents of your previous main.tf file from the last tutorial and paste them into the new main.tf file you just created. We will be editing the new main.tf file going forward.

Edit the New providers.tf File

We want to separate our provider configuration from our main code, so cut it out of the main.tf file and paste it into the providers.tf file. Notice that I have added an initial terraform block to specify a minimum Terraform version and an AWS provider version. This is optional.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.24.1"
    }
  }
  required_version = ">= 1.1.5"
}

#Configure the AWS Provider
provider "aws" {
  region = var.variables_region
}

This will be the complete contents of the providers.tf file so you can go ahead and save and close the file. Remember, this will no longer be in our main.tf file and will only be located in the providers.tf file.

Edit the New jenkins.sh File

Next we want to take our bootstrap script out of the main.tf file and put it in its own file. Because it is a bash script, the file extension must end in .sh so this file is called jenkins.sh. Cut the script from your main.tf file and paste it into the jenkins.sh file.

#!/bin/bash
sudo yum update -y
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum upgrade -y
sudo amazon-linux-extras install java-openjdk11 -y
sudo yum install jenkins -y
sudo systemctl enable jenkins
sudo systemctl start jenkins

Notice this looks a little different than it did in the main.tf file from the last tutorial (see the monolith listing above).

In the previous tutorial, we used Terraform’s heredoc syntax, with the <<-EOF and EOF delimiters, to embed a multi-line string. We do not need these delimiters in the separate jenkins.sh file, so make sure you do not include them. Also note that user_data_replace_on_change = true is not included in the jenkins.sh file because it is not part of the actual script; we will keep it in the main.tf file inside the EC2 instance resource block.

Go back to your jenkins.sh file and make sure it only includes the bash script above.

Save and close the file since we will not need to edit it again.

Now that we have our script in a separate file, we need to reference that file in our main.tf file. Navigate to your new main.tf file and add the following under the instance resource block:

user_data = file("jenkins.sh")

This tells Terraform the user data script can be found in the jenkins.sh file.

We will add a few other arguments to this resource block later. For now, go ahead and include associate_public_ip_address = true, which assigns a public IP address to the instance that will be created.

Edit the variables.tf File

In the previous tutorial, we hard-coded all of our data in the main.tf file (e.g., Region, AMI ID, instance type). A better practice is to avoid hard-coding data in your main.tf file and instead declare variables in a separate variables.tf file. You then reference these variables in the main.tf file. In this section, we are going to go through each resource block and create variables for all of our hard-coded data, listing them in the variables.tf file. To reference a variable declared in the variables.tf file, use the following syntax:

var.<variable_name>

Starting under the instance resource block, we can create variables for the Region (now located in the providers.tf file), AMI, instance type, instance name, and key name. The key name is a new argument we will add that assigns an existing AWS EC2 key pair to the instance that we will create, and can be used later to SSH into the instance. If you don’t have an existing key pair, you can create one in the EC2 console.

Type the Region, AMI, instance type, instance name, and key name variables into your variables.tf file, or copy from the code below. The description is optional, but each variable does need a type and a default. Replace my key name with your existing key pair name.

variable "variables_region" {
  description = "Region"
  type        = string
  default     = "us-east-2"
}

variable "variables_ami" {
  description = "AMI"
  type        = string
  default     = "ami-0533def491c57d991"
}

variable "variables_instance_type" {
  description = "Instance Type"
  type        = string
  default     = "t2.micro"
}

variable "variables_instance_name" {
  description = "Instance Name"
  type        = string
  default     = "Instance1"
}

variable "variables_key_name" {
  description = "EC2 Key Name"
  type        = string
  default     = "EC2-Ohio"
}

Moving on to the security group resource, we can create variables for the TCP protocol, the egress protocol, and the CIDR block used for ingress and egress. Type these in your variables.tf file or copy from the code below.

variable "variables_cidr" {
  description = "CIDR for All IPs"
  type        = string
  default     = "0.0.0.0/0"
}

variable "variables_tcp" {
  description = "TCP Protocol"
  type        = string
  default     = "tcp"
}

variable "variables_egress" {
  description = "Egress All"
  type        = string
  default     = "-1"
}

Now let’s delete the hard-coded data from the main.tf file and use the var.<variable_name> syntax to reference the variables we just created. Make sure to save your variables.tf file.

For the instance resource, the replacement should look like this:
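Taken from the complete main.tf shown later in this tutorial, the substituted arguments in the instance resource are:

```hcl
  ami           = var.variables_ami
  instance_type = var.variables_instance_type
  key_name      = var.variables_key_name

  tags = {
    Name = var.variables_instance_name
  }
```

The other arguments in the block (vpc_security_group_ids, user_data, and so on) stay as they are.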

For the security group resource, the replacement should look like this:
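Taken from the complete main.tf shown later in this tutorial, each ingress block keeps its ports but swaps the protocol and CIDR for variables, and the egress block does the same. For example, the SSH ingress rule and the egress rule become:

```hcl
  ingress {
    description = "Incoming SSH"
    from_port   = 22
    to_port     = 22
    protocol    = var.variables_tcp
    cidr_blocks = [var.variables_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = var.variables_egress
    cidr_blocks = [var.variables_cidr]
  }
```

The 8080 and 443 ingress blocks follow the same pattern.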

Notice that cidr_blocks requires a list of CIDRs so the variable is encased in square brackets [ ]. Save the main.tf file.

For the Region in the providers.tf file, the replacement should look like this:
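The provider block now references the region variable instead of a hard-coded Region:

```hcl
#Configure the AWS Provider
provider "aws" {
  region = var.variables_region
}
```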

Update S3 Bucket Privacy

In the last tutorial, we created an S3 bucket with a random number extension and set the “acl” as “private” to make the bucket private.

While this made the bucket private, it did not necessarily make the objects in it private. To tighten up security and make both the bucket and its objects private, we will define several more arguments in the form of variables. Notice that a link to the official Terraform documentation for this topic is listed as a comment in the code. Add the following variables to your variables.tf file:

variable "variables_block_public_acls" {
  description = "Block public ACLs"
  type        = bool
  default     = true
}

variable "variables_block_public_policy" {
  description = "Block public bucket policies"
  type        = bool
  default     = true
}

variable "variables_ignore_public_acls" {
  description = "Ignore public ACLs"
  type        = bool
  default     = true
}

variable "variables_restrict_public_buckets" {
  description = "Restrict public bucket policies"
  type        = bool
  default     = true
}

This is the completion of your variables.tf file so go ahead and save it. You should have 12 variables total (variables_region, variables_ami, variables_instance_type, variables_instance_name, variables_key_name, variables_cidr, variables_tcp, variables_egress, variables_block_public_acls, variables_block_public_policy, variables_ignore_public_acls, and variables_restrict_public_buckets).

Now let’s reference the new bucket privacy variables in the main.tf file. Add the following aws_s3_bucket_public_access_block resource block:

#Make S3 bucket and objects private
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_public_access_block
resource "aws_s3_bucket_public_access_block" "private-bucket" {
  bucket                  = aws_s3_bucket.jenkins-artifacts.id
  block_public_acls       = var.variables_block_public_acls
  block_public_policy     = var.variables_block_public_policy
  ignore_public_acls      = var.variables_ignore_public_acls
  restrict_public_buckets = var.variables_restrict_public_buckets
}

This will block all public access to the bucket and its objects. Because this block replaces our earlier ACL configuration, you can delete the entire aws_s3_bucket_acl resource from your main.tf file and use only the code you just added.

Create IAM Policy that Allows Read, Write, and List Access to Jenkins S3 Bucket

If you are familiar with IAM policies and roles, you know that permissions for instances and buckets are key. In the next few sections, we will create an IAM policy that allows reading, writing, and listing objects in the Jenkins artifacts S3 bucket. We will then create an IAM role that the EC2 Jenkins instance can assume and attach the policy to that role. Finally, we will wrap the role in an EC2 instance profile, so that the instance assumes the role and has permission to read objects, write objects, and list objects in the S3 bucket. First, let’s create the IAM policy.

The policy to allow read, write, and list privileges for the S3 Jenkins artifacts bucket will be a standard AWS policy that allows the s3:ListBucket, s3:GetObject, and s3:PutObject actions. We will not go into the details of how to create a bucket policy, but more information can be found in the official AWS documentation (the example policy is linked as a comment in the code below). Terraform uses jsonencode to encode a value to a string using JSON syntax. Add the following code to your main.tf file:

#Create IAM policy that allows read/write access to Jenkins artifacts S3 bucket
#https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
resource "aws_iam_policy" "S3-read-write-policy" {
  name        = "S3-read-write-policy"
  path        = "/"
  description = "Policy that allows read/write access to Jenkins artifacts S3 bucket"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Sid" : "ListObjectsInBucket",
        "Effect" : "Allow",
        "Action" : ["s3:ListBucket"],
        #Reference the bucket we created so the random name suffix is included
        "Resource" : [aws_s3_bucket.jenkins-artifacts.arn]
      },
      {
        "Sid" : "GetPutObjectsInBucket",
        "Effect" : "Allow",
        "Action" : ["s3:GetObject", "s3:PutObject"],
        "Resource" : ["${aws_s3_bucket.jenkins-artifacts.arn}/*"]
      }
    ]
  })
}

Your main.tf file should now contain the EC2 instance, security group, S3 bucket, random ID, public access block, and IAM policy resource blocks.

Create IAM Role for EC2 Jenkins Instance

Now we will use Terraform to create an IAM role that our EC2 Jenkins instance can assume. To create the role, we will use the aws_iam_role resource. Add the following code to your main.tf file:

#Create IAM role that can be assumed by EC2 instance (Jenkins server)
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
resource "aws_iam_role" "EC2-Jenkins-role" {
  name = "EC2-Jenkins-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })

  tags = {
    Name = "EC2-Jenkins-Role"
  }
}

Attach the IAM Policy to the Role

We now need to attach the IAM policy that we created to the role. To do this, we will use the aws_iam_policy_attachment resource, referencing the role name and policy ARN from earlier. Note that aws_iam_policy_attachment manages the policy’s attachment exclusively across all roles, users, and groups; to attach a policy to a single role without that behavior, aws_iam_role_policy_attachment is often preferred. Add the following code to your main.tf file:

#Attach the IAM policy to the role
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment
resource "aws_iam_policy_attachment" "policy-attachment" {
  name       = "policy-attachment"
  roles      = [aws_iam_role.EC2-Jenkins-role.name]    #Insert role name
  policy_arn = aws_iam_policy.S3-read-write-policy.arn #Insert policy name
}

Create an Instance Profile

An AWS instance profile is a container for an IAM role that lets you pass the role to an EC2 instance when the instance starts. This allows the EC2 Jenkins instance to assume the role we created previously, giving it permission to read, write, and list objects in the S3 bucket as defined in the policy. To create an instance profile with Terraform, use the aws_iam_instance_profile resource. Add the following code to your main.tf file:

#Create an instance profile
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile
resource "aws_iam_instance_profile" "EC2-Jenkins-profile" {
  name = "EC2-Jenkins-profile"
  role = aws_iam_role.EC2-Jenkins-role.name
}

After adding the code, we need to reference the instance profile in the instance resource block. Go back up to the top of your main.tf file and add the iam_instance_profile argument.
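Taken from the complete main.tf shown below, the argument references the instance profile by name:

```hcl
  iam_instance_profile = aws_iam_instance_profile.EC2-Jenkins-profile.name
```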

Review New main.tf File

Because we have been editing the new main.tf file piece by piece, let’s recap what should be in it. Here is the entire code that you should have in your new main.tf file:

#Create EC2 Instance
resource "aws_instance" "instance1" {
  ami                         = var.variables_ami
  instance_type               = var.variables_instance_type
  vpc_security_group_ids      = [aws_security_group.jenkins-sg.id]
  user_data                   = file("jenkins.sh")
  user_data_replace_on_change = true
  associate_public_ip_address = true
  iam_instance_profile        = aws_iam_instance_profile.EC2-Jenkins-profile.name
  key_name                    = var.variables_key_name

  tags = {
    Name = var.variables_instance_name
  }
}

#Create security group
resource "aws_security_group" "jenkins-sg" {
  name        = "jenkins-sg"
  description = "Open ports 22, 8080, and 443"

  #Allow incoming TCP requests on port 22 from any IP
  ingress {
    description = "Incoming SSH"
    from_port   = 22
    to_port     = 22
    protocol    = var.variables_tcp
    cidr_blocks = [var.variables_cidr]
  }

  #Allow incoming TCP requests on port 8080 from any IP
  ingress {
    description = "Incoming 8080"
    from_port   = 8080
    to_port     = 8080
    protocol    = var.variables_tcp
    cidr_blocks = [var.variables_cidr]
  }

  #Allow incoming TCP requests on port 443 from any IP
  ingress {
    description = "Incoming 443"
    from_port   = 443
    to_port     = 443
    protocol    = var.variables_tcp
    cidr_blocks = [var.variables_cidr]
  }

  #Allow all outbound requests
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = var.variables_egress
    cidr_blocks = [var.variables_cidr]
  }

  tags = {
    Name = "jenkins_sg"
  }
}

#Create S3 bucket for Jenkins artifacts
resource "aws_s3_bucket" "jenkins-artifacts" {
  bucket = "jenkins-artifacts-${random_id.randomness.hex}"

  tags = {
    Name = "jenkins_artifacts"
  }
}

#Create random number for S3 bucket name
resource "random_id" "randomness" {
  byte_length = 16
}

#Make S3 bucket and objects private
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_public_access_block
resource "aws_s3_bucket_public_access_block" "private-bucket" {
  bucket                  = aws_s3_bucket.jenkins-artifacts.id
  block_public_acls       = var.variables_block_public_acls
  block_public_policy     = var.variables_block_public_policy
  ignore_public_acls      = var.variables_ignore_public_acls
  restrict_public_buckets = var.variables_restrict_public_buckets
}

#Create IAM policy that allows read/write access to Jenkins artifacts S3 bucket
#https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
resource "aws_iam_policy" "S3-read-write-policy" {
  name        = "S3-read-write-policy"
  path        = "/"
  description = "Policy that allows read/write access to Jenkins artifacts S3 bucket"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Sid" : "ListObjectsInBucket",
        "Effect" : "Allow",
        "Action" : ["s3:ListBucket"],
        #Reference the bucket we created so the random name suffix is included
        "Resource" : [aws_s3_bucket.jenkins-artifacts.arn]
      },
      {
        "Sid" : "GetPutObjectsInBucket",
        "Effect" : "Allow",
        "Action" : ["s3:GetObject", "s3:PutObject"],
        "Resource" : ["${aws_s3_bucket.jenkins-artifacts.arn}/*"]
      }
    ]
  })
}

#Create IAM role that can be assumed by EC2 instance (Jenkins server)
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
resource "aws_iam_role" "EC2-Jenkins-role" {
  name = "EC2-Jenkins-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })

  tags = {
    Name = "EC2-Jenkins-Role"
  }
}

#Attach the IAM policy to the role
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment
resource "aws_iam_policy_attachment" "policy-attachment" {
  name       = "policy-attachment"
  roles      = [aws_iam_role.EC2-Jenkins-role.name]    #Insert role name
  policy_arn = aws_iam_policy.S3-read-write-policy.arn #Insert policy name
}

#Create an instance profile
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile
resource "aws_iam_instance_profile" "EC2-Jenkins-profile" {
  name = "EC2-Jenkins-profile"
  role = aws_iam_role.EC2-Jenkins-role.name
}

Make sure to save the file!

Edit the outputs.tf File

Outputs are optional but let you print data to the CLI after Terraform has created the infrastructure. Each output block needs a name and a value; the description is optional. In the code below, I’ve created three outputs that will give us the instance ID, the instance public IP address, and the name of the bucket created. You can add more outputs depending on your preference, but otherwise, copy the code below and paste it into your outputs.tf file.

#Define outputs
#https://developer.hashicorp.com/terraform/language/values/outputs
output "Jenkins-instance-id" {
  value       = aws_instance.instance1.id
  description = "Jenkins instance ID number"
}

output "Jenkins-public-ip" {
  value       = aws_instance.instance1.public_ip
  description = "Public IP of the Jenkins web server"
}

output "Jenkins-bucket-name" {
  value       = aws_s3_bucket.jenkins-artifacts.id
  description = "Name of the Jenkins S3 bucket"
}

Save and close the file.

Deploy Infrastructure with Terraform

Now that all of our files are complete, we are ready to deploy our infrastructure. First, let’s make sure the formatting and spacing in our files is consistent by running the following command:

terraform fmt

Now we will follow the IVPAD workflow (init, validate, plan, apply, destroy) to deploy with Terraform. Run the following commands:

terraform init

This initializes the backend and installs any provider plugins we need to deploy our infrastructure.

terraform validate

This validates that the HCL syntax is correct and that everything is referenced correctly. If you get errors, the output will tell you which line of code needs to be adjusted, so go to that line and work through the errors. Do not proceed until you get the green Success! output.

terraform plan 

This command lists out a plan of the infrastructure that Terraform will create based on the configuration files. If the plan looks good, proceed.

terraform apply -auto-approve

This command actually creates the infrastructure from the Terraform files and prints whatever outputs you defined in your outputs.tf file. It can take a few minutes to complete. You should get a green “Apply complete!” banner along with the outputs you requested.

Confirm Infrastructure was Created in AWS Console

We should be able to find all of our infrastructure in the AWS console, so first head to EC2 and find the instance that was created.

Next go to the VPC console and find the security group that was created. Confirm the inbound and outbound ports defined in the code were opened.

Navigate to the S3 console to find the bucket created and confirm it is private.

Verify that You Can Reach Your Jenkins Install via Port 8080 in Your Web Browser

Just like in the last tutorial, we want to confirm that we can reach the Jenkins landing page. We can start by curling the address from the CLI. You can find the instance IP address in the output that we created. Use the following command:

curl http://<instance_ip>:8080

You should see HTML output. Now let’s see if we can access the Jenkins instance in a web browser. In your web browser enter the following:

http://<instance_ip>:8080

You should see the Jenkins login page.

If you do not, make sure your browser didn’t redirect you to the HTTPS site. Another way to check that Jenkins was installed and started on the instance is to connect to the instance via EC2 Instance Connect and run the following command:

systemctl status jenkins

If the status shows active (running), Jenkins is installed and running.

Conclusion and Cleanup

This tutorial took a monolithic main.tf file created in my previous tutorial and decoupled it into multiple configuration files. We added variables and output and separated our providers from our main code. We then deployed the infrastructure with Terraform, confirmed the resources were created, and confirmed we could access our Jenkins server. To clean up everything you just created, use this command:

terraform destroy

Thank you so much for following along and keep watching for more DevOps tutorials!
