Creating a CI/CD pipeline with Terraform Cloud to deploy WordPress application infrastructure: Part 1

Aalok Trivedi
13 min read · Apr 19, 2023

Intro

Over the last couple of articles, I’ve delved deeper and deeper into Terraform and learned ways to implement Infrastructure as Code (IaC) more efficiently. If you haven't already, I highly recommend reviewing my past articles, as I go over the basics of Terraform in more detail.

As I continue my DevOps journey, I want to gain more insight and experience with Continuous Integration/Continuous Deployment (CI/CD) to simulate an application environment as best as I can.

In this article, we will use a CI/CD process to build and deploy a WordPress application infrastructure through Terraform.

What is CI/CD?

CI/CD is one of the core methods of development that allows organizations to build and ship software quickly and efficiently. By adding automation to the build and deploy process, we are able to continuously deliver code, test, and deploy, ensuring a constant flow of app features, upgrades, and bug fixes to customers. As DevOps engineers, it’s crucial to become familiar with CI/CD and constantly find ways to improve upon the deployment process.

https://www.synopsys.com/glossary/what-is-cicd.html

What we’re building

We are going to use GitHub and Terraform Cloud to deploy a highly-available WordPress application infrastructure through a simple CI/CD process.

The infrastructure includes:

  1. A VPC with two (2) public and private subnets.
  2. An Auto Scaling Group (ASG) that provisions EC2 instances with WordPress installed.
  3. An RDS MySQL database for WordPress to store data.

Terraform Cloud

Although there are tons of CI/CD tools out there (GitLab, CircleCI, AWS CodeBuild/CodeDeploy/CodePipeline, etc…), we’re going to use GitHub and Terraform Cloud, as they require the least amount of setup and provide a great introduction to CI/CD.

Terraform Cloud allows us to easily manage resources, state, secrets, and deployments all in one platform. It can even integrate with other CI/CD tools to fit your organization’s existing processes.

Prerequisites

  1. An AWS account with IAM user access.
  2. A GitHub and Terraform Cloud account.
  3. Foundational knowledge of Linux file systems and commands.
  4. Foundational knowledge of AWS resources, such as VPCs, subnets, security groups, Auto Scaling Groups, RDS, etc…
  5. Foundational knowledge of Terraform basics.
  6. Access to a terminal/command line tool.
  7. An IDE, such as VS Code or Cloud9.

GitHub repo

Here is my GitHub repo if you want to follow along ➡️

🚀 🚀 🚀 🚀 🚀 Let’s get started!

Configure GitHub & Terraform Cloud

Create a repository

Before we create our pipeline, we need to create a repository where we can store and version control our code. For this scenario, we’ll use GitHub and call the repo terraform_wordpress_dev. The repo will later be connected to our Terraform Cloud workspace, which allows the deployment to be triggered by pull or merge requests.

.gitignore: It’s important to have a .gitignore file so we don’t upload unnecessary .terraform and .tfstate files. Luckily, GitHub has a nice Terraform .gitignore template, so be sure to include that in the repo.
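As a rough sketch, the key entries in GitHub’s Terraform .gitignore template look something like this (an abbreviated excerpt; grab the full template from GitHub when creating the repo):

```gitignore
# Local .terraform directories (provider binaries, module cache)
**/.terraform/*

# State files, which can contain sensitive data
*.tfstate
*.tfstate.*

# Variable files, which are likely to contain secrets
*.tfvars
*.tfvars.json
```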

We’ll also enable some standard branch protection rules so proper merge and pull request protocols are set in place (Repo settings > Branches).

Create a project & workspace

After creating a Terraform Cloud account, we’ll create a new project called Wordpress App and a new workspace called wordpress_app_us_east_1_DEV.

Workspaces are like mini environments that allow teams to separate application flows and stages. We can use workspaces to separate our application into tiers, such as ‘networking,’ ‘database,’ ‘frontend,’ etc… or stages, such as ‘dev,’ ‘QA,’ and ‘prod.’

We’ll create a version control (VCS) workflow and connect our GitHub repo.

If we expand the Advanced options, we’ll see more options on how our VCS can control deployments.

We’ll select Manual apply to add an extra layer of protection before we let Terraform Cloud run and apply our code. This option means we (more likely a higher-up/senior) will have to manually approve and apply the planned deployment from Terraform Cloud.

We’ll see this in action later on and see what happens for each option.

Add workspace variables

Terraform Cloud allows us to create workspace-specific terraform and environment variables to use in our deployments. These variables can also be shared across multiple workspaces.

We’ll use workspace Environment variables to store our AWS credentials, such as our AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Be sure to mark them as Sensitive.

Alright! Our CI/CD pipeline is set up. Now we have to write the code for our WordPress app infrastructure!

Create the network base

Let’s start by creating the networking base for our app, which includes:

  1. A VPC with a CIDR block of 10.0.0.0/16.
  2. Two (2) public and private subnets.
  3. Public and private route tables.

First, let’s create a new branch called ‘networking’. Self-explanatory, but we should never, ever work on or commit directly to the main branch.

$ git checkout -b networking

Login

To use Terraform Cloud in the CLI, we first need to log in and generate an API token. Run the terraform login command in the terminal and follow the directions. We should be automatically taken to Terraform Cloud to generate the token.

Remote backend & providers

Since we’re using Terraform Cloud as our CI/CD tool, we can also use it as our remote backend to control the state. To configure the backend, we’ll need the organization and workspace name we set in our account.

terraform {
  cloud {
    organization = "ORGANIZATION NAME"
    workspaces {
      name = "wordpress_app_us_east_1_DEV"
    }
  }
}

We’ll also configure the AWS provider to assign the region and default tags. We can even add validation to variables to ensure proper values are used.

#--variables.tf--

# environment vars
#---------------------------------------
variable "environment" {
  type    = string
  default = "DEV"

  validation {
    condition     = contains(["DEV", "QA", "PROD"], upper(var.environment))
    error_message = "The 'environment' tag must be one of 'DEV', 'QA', or 'PROD'."
  }
  validation {
    condition     = upper(var.environment) == var.environment
    error_message = "The 'environment' tag must be all uppercase."
  }
}

variable "aws_region" {
  type        = string
  description = "AWS region"
  default     = "us-east-1"
}

variable "app_name" {
  type        = string
  description = "application name"
  default     = "wordpressApp"
}

variable "az_count" {
  type        = number
  description = "Number of availability zones to use"
  default     = 2
}
#--providers.tf--

terraform {
  cloud {
    organization = "ORGANIZATION NAME"
    workspaces {
      name = "wordpress_app_us_east_1_DEV"
    }
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.60.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
  default_tags {
    tags = {
      App         = var.app_name
      Environment = var.environment
      Terraform   = "True"
    }
  }
}

Let’s run terraform init to initialize the local directory and configure the backend.

Let’s also run a terraform apply just to initialize the state.

Whoops, just kidding! Because we connected our repo to our Terraform Cloud CI/CD workspace, it won’t let us apply our configuration locally. That’s a good thing! Being able to apply locally kind of defeats the purpose of automation, doesn’t it? Everything should run through the proper CI/CD workflow and be applied only when we get the proper approval. We won’t apply anything just yet.

Locals

In a new network.tf file, let’s establish some locals.

Locals are useful when you want to name the result of an expression and then re-use that result throughout your configuration. Common use cases include repeatable naming prefixes/conventions, common tags, and extracting data source information.

For our locals, we’ll establish a naming prefix (app_name-environment) and the names of the Availability Zones we want to use (us-east-1a, us-east-1b).

#--network.tf--

# Retrieve the list of AZs in the current AWS region
data "aws_availability_zones" "available" {}

locals {
  name_prefix = lower("${var.app_name}-${var.environment}")
  azs         = slice(data.aws_availability_zones.available.names, 0, var.az_count)
}

VPC module

Remember how, in my last article, we had to manually create a VPC, Internet Gateway, public/private subnets, NAT Gateway, Elastic IP, public/private route tables, and public/private route table associations…? Yeah, we don’t have to do that anymore…

This time, we’ll use the power of modules to quickly create multiple resources or common settings all in one go.

Modules provide the greatest amount of flexibility and reusability of common configurations across multiple projects without having to create each resource from scratch. Instead of creating a custom module for the network, we’ll use a great VPC module from the Terraform Registry. This VPC module allows us to create a VPC, subnets, Internet Gateway, NAT, and much more, all in one go!

I’ll admit, modules come with a bit of a learning curve, and it does take a ton of poring through the documentation to figure out all the options/inputs (there’s a lot…), but they are SUPER powerful once you get the hang of them.

In our network.tf file, we’ll call the vpc module with a module block.

#--network.tf--

module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "4.0.1"
}

The source argument defines where the module lives. If it’s a remote module, the source can be a Terraform Registry path, a Git repo link, etc… If it’s a local module, it will point to the module path within the main directory.
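For example, both forms look like this (the local path below is hypothetical, just to show the shape):

```hcl
# Remote module from the Terraform Registry
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "4.0.1"
}

# Local module, pointing at a path inside the project (hypothetical)
module "network_local" {
  source = "./modules/network"
}
```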

Here’s the full VPC module configuration we’ll use:

#--network.tf--

module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "4.0.1"

  name                 = local.name_prefix
  cidr                 = var.vpc_cidr
  enable_dns_hostnames = var.enable_dns_hostnames

  azs = local.azs

  # Public subnets
  public_subnets = [for key, value in local.azs : cidrsubnet(var.vpc_cidr, var.public_newbits, key + var.public_newnum)]
  public_subnet_tags = {
    "Tier" = "Web"
  }
  map_public_ip_on_launch = var.map_public_ip_on_launch

  # Private subnets
  private_subnets = [for key, value in local.azs : cidrsubnet(var.vpc_cidr, var.private_newbits, key + var.private_newnum)]
  private_subnet_tags = {
    "Tier" = "Database"
  }

  # NAT
  enable_nat_gateway = var.enable_nat_gateway
}
#--variables.tf--

# VPC
#----------------------------------------
variable "vpc_name" {
  type        = string
  description = "VPC name"
  default     = "vpc"
}
variable "vpc_cidr" {
  type        = string
  description = "VPC cidr"
  default     = "10.0.0.0/16"
}
variable "enable_dns_hostnames" {
  type        = bool
  description = "enable dns hostnames"
  default     = true
}

# Subnets
#----------------------------------------

# Public
variable "map_public_ip_on_launch" {
  type        = bool
  description = "enable auto-assign ipv4"
  default     = true
}
variable "public_newbits" {
  type        = number
  description = "number to add for public_subnet 'newbits' cidrsubnet() function"
  default     = 8
}
variable "public_newnum" {
  type        = number
  description = "number to add for public_subnet 'newnum' cidrsubnet() function"
  default     = 100
}

# Private
variable "private_newbits" {
  type        = number
  description = "number to add for private_subnet 'newbits' cidrsubnet() function"
  default     = 8
}
variable "private_newnum" {
  type        = number
  description = "number to add for private_subnet 'newnum' cidrsubnet() function"
  default     = 4
}

# NAT gateway
#----------------------------------------
variable "enable_nat_gateway" {
  type        = bool
  description = "enable NAT gateway"
  default     = false
}
variable "single_nat_gateway" {
  type        = bool
  description = "enable single NAT"
  default     = false
}
variable "one_nat_gateway_per_az" {
  type        = bool
  description = "enable NAT gateway in each AZ"
  default     = false
}

Let’s break this down a bit:

  • Subnets: for key, value in local.azs : cidrsubnet(var.vpc_cidr, var.public_newbits, key + var.public_newnum)
    The module takes public_subnets and private_subnets inputs, where we iterate over our established AZs (us-east-1a & us-east-1b) and create a public/private subnet for each zone. We also use the cidrsubnet() function to assign the CIDR block for each.
  • NAT Gateway: We don’t need a NAT for the RDS database, so we’ll set enable_nat_gateway to false.

The beauty of this module is that it will automatically create all the proper public/private route tables, routes to the IGW and/or NAT, and route table associations.
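If the cidrsubnet() arithmetic feels opaque, here’s a rough Python sketch (using the standard ipaddress module; this is an illustrative stand-in, not part of Terraform) of what the public/private subnet expressions compute with our defaults (vpc_cidr = 10.0.0.0/16, newbits = 8, public newnum starting at 100, private at 4):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Rough Python equivalent of Terraform's cidrsubnet() function:
    carve `prefix` into subnets `newbits` bits smaller, take the netnum-th one."""
    network = ipaddress.ip_network(prefix)
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

vpc_cidr = "10.0.0.0/16"
az_count = 2  # us-east-1a, us-east-1b

# Mirrors: [for key, value in local.azs : cidrsubnet(var.vpc_cidr, 8, key + 100)]
public_subnets = [cidrsubnet(vpc_cidr, 8, i + 100) for i in range(az_count)]

# Mirrors: [for key, value in local.azs : cidrsubnet(var.vpc_cidr, 8, key + 4)]
private_subnets = [cidrsubnet(vpc_cidr, 8, i + 4) for i in range(az_count)]

print(public_subnets)   # ['10.0.100.0/24', '10.0.101.0/24']
print(private_subnets)  # ['10.0.4.0/24', '10.0.5.0/24']
```

Each AZ gets its own /24 carved out of the /16, with the newnum offsets keeping the public and private ranges well apart.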

Outputs

Now that we have the networking module in place, let’s create a new outputs.tf file and output some resource information, such as the VPC ID and subnet IDs.

#--outputs.tf--

output "vpc_id" {
  value = module.network.vpc_id
}

output "public_subnets" {
  value = module.network.public_subnets
}

output "private_subnets" {
  value = module.network.private_subnets
}

Plan

We can’t apply the configuration, but we can still locally run a terraform plan to make sure the proper resources are created.

VPC
Public subnets
Private subnets
Route tables & associations

Yeah… Hell yeah!

Commit & Push

Now that we’ve completed the networking base of the app, let’s commit and push the code to the networking branch.

git commit -a -m "Create networking base."

git push -u origin networking

Create a pull request

If we go back to our GitHub repo, we can create a pull request to allow someone from our team to review and approve our code.

Review speculative plan

Once a pull request has been made, Terraform Cloud will create a speculative plan so the reviewer can look over the planned resources and approve the pull request or require changes.

Notice how GitHub will create a ‘check’ for the Terraform plan. If we click on the Details link, we’ll be taken to the plan.

Here, the reviewer can look over the plan and add any comments, if necessary. If the plan fails, the ‘check’ will also fail, and we won’t be able to merge the branch.

NOTE: By nature of the GitHub flow, the one who made the pull request can’t be the reviewer, so we’ll have to pretend we’re the reviewer, commit cardinal sin, and approve our own requests. Just know that we should be receiving all green checks before merging our branch.

Merge the networking branch to main

After reviewing the code and plan, we’ve decided everything is good to go. We’re ready to merge our networking branch with the main branch and apply the Terraform plan.

Again, just for this demo, we’ll commit cardinal sin and merge our own requests. Shhhhhhh

Review & apply

Once we’ve merged the branch into main, a new Terraform ‘Run’ will be triggered. Since we chose ‘Manual apply’ in our workspace settings, we’ll still need to go into Terraform Cloud and manually apply the plan.

If everything looks good, click on Apply and add a comment.

We should see a live view of all the resources being created.

VPC
Subnets
Route Tables

Change settings to ‘Auto apply’

Now let’s see what happens if we change the ‘Apply method’ setting to ‘Auto apply’.

Let’s create a new branch and make a small change: adjust the private_newnum variable from 4 to 8.

git checkout -b "private-cidr-fix"
The result should be a CIDR change from “10.0.4.0/24” | “10.0.5.0/24” → “10.0.8.0/24” | “10.0.9.0/24”.
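In variables.tf, the change amounts to a one-line edit to the default:

```hcl
variable "private_newnum" {
  type        = number
  description = "number to add for private_subnet 'newnum' cidrsubnet() function"
  default     = 8 # changed from 4
}
```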

Commit the changes, push to the branch, and create a new pull request.

Once the plan has succeeded, review the plan and merge to the main branch.

Before, we had to manually apply the plan. Now, the plan will automatically apply when we merge to main! Since we’re now automating the apply stage, we have to take extra care when reviewing and approving someone’s code (again, in the real world, this will probably be done by a senior team member).

Success!

Woo! We’ve just used our first CI/CD pipeline to deploy a networking base for our application!

That’s all for now. In Part 2, we’ll continue on and configure the ASG to provision the instances so they have WordPress properly installed and connected to an RDS database.

Stay tuned!

Thank you

Thank you for following me on my cloud engineering journey. I hope this article was helpful and informative. Please give me a like & follow as I continue my journey, and I will share more articles like this!

Feel free to connect with me on LinkedIn!
