Deploy Dynamic Web Apps on AWS using CI/CD Pipelines and GitHub Actions

Eugene Miguel
32 min read · Aug 26, 2023


Construct a fully automated pipeline to deploy any dynamic application on AWS.

What is CI/CD?

CI and CD stand for continuous integration and continuous delivery/continuous deployment. In very simple terms, CI is a modern software development practice in which incremental code changes are merged frequently and reliably into a shared repository, while CD automates delivering (or deploying) those changes to production.

Why is CI/CD important?

CI/CD allows organizations to ship software quickly and efficiently. CI/CD facilitates an effective process for getting products to market faster than ever before, continuously delivering code into production, and ensuring an ongoing flow of new features and bug fixes via the most efficient delivery method.

Objectives:

  • Setting up your local computer for the project
  • Setting up your AWS account and testing Terraform code
  • Starting to build the CI/CD pipeline in GitHub Actions
  • Creating a GitHub Actions Job to Build a Docker Image

Setting up Your Environment

For more details on setting up your local computer, AWS account, and testing your Terraform code for this tutorial, visit my previous project here and here where I explained it thoroughly. All of my files for this project can be found in my rentzone-github-actions-terraform-ecs-project GitHub repository.

The Terraform code that we will use to complete this project is the same code we used to deploy the rentzone application in my Terraform project. Hence, I strongly recommend that you complete that project first before jumping into our CI/CD project.

Setting up Your Local Computer for the project:

Install and set up the following tools on your computer to test your Terraform code locally. This will help you find and fix any problems with your Terraform code.

  • Installing Terraform on Your Computer
  • Signing Up for a Free GitHub Account
  • Installing Git on Your Computer
  • Generating Key Pairs for Secure Connections
  • Adding the Public SSH Key to GitHub
  • Installing Visual Studio Code on Your Computer
  • Installing the Extensions for Terraform in Visual Studio Code
  • Installing the AWS Command Line Interface (CLI) on Your Computer
  • Creating an IAM User in AWS
  • Generating an Access Key for the IAM User
  • Running the AWS Configure Command to create a profile

I set up my local computer in my previous projects; visit them here and here where I explained it thoroughly.
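For quick reference, creating a named AWS CLI profile looks like the sketch below. The profile name is just an example; you will be prompted for the access key ID and secret access key of the IAM user you created.

# Create a named AWS CLI profile for this project
aws configure --profile rentzone
# AWS Access Key ID [None]: <your access key ID>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json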

Setting Up Your AWS Account and Testing Terraform Code

Set up your AWS account and test your Terraform code.

Creating your GitHub repository and cloning it on your computer

For more details on creating and cloning our GitHub repository on your computer, visit my previous project here and here where I explained it thoroughly.

Updating the Gitignore file

Open your project folder (rentzone-github-actions-terraform-ecs-project) in Visual Studio Code, then click on the .gitignore file. Replace its contents with the .gitignore raw file from my GitHub repository, then save your work and push the update to your GitHub repository.

We are replacing its contents because GitHub's updated default Terraform .gitignore now ignores .tfvars and .tfvars.json. Since we are storing the values of our variables in the .tfvars file, Git would not commit our .tfvars file to our GitHub repository, which would break our pipeline while we are building it.

Adding the Terraform code into the repository

We will add the Terraform code that we will use to build our AWS infrastructure in our repository. Download the iac.zip from my GitHub repository. Open your project folder in Visual Studio Code.

Drag the unzipped folder to your project folder. Select Copy Folder in the prompt box.

If your backend.tf and terraform.tfvars show red errors later on, this is fine; we will update their values as we go along.

We now have the iac folder, which contains all the Terraform code that we will use to deploy our infrastructure in AWS. Push this code to your GitHub repository.

Creating an S3 bucket to store your Terraform state

For more details on creating an S3 bucket to store your Terraform State, visit my previous projects here and here where I explained it thoroughly.

You can read more information about Terraform State here.

Creating a DynamoDB table to lock the Terraform state

For more details on creating a DynamoDB table to lock your Terraform state, visit my previous projects here and here where I explained it thoroughly.

Creating Secrets in AWS Secrets Manager

We will add the values of our RDS database name, username, and password, and our ECR registry, as secrets in Secrets Manager. This is one of the changes I made to the Terraform code.

In the rds.tf file (lines 18–20), we are storing the values of our DB username, password, and DB name as secrets.
In the ecs.tf file (line 38), we are storing the value of our ECR registry as a secret in Secrets Manager.

Let’s go to our AWS console. Go to AWS Secrets Manager and select Store a new secret and enter the following information:

Step 1

Secret type: Other type of secret

The keys (you must provide your own values):

  1. rds_db_name
  2. username
  3. password
  4. ecr_registry

Ensure that you are in the us-east-1 region. Your ECR registry starts with your AWS account number and ends with amazonaws.com (for example, 123456789012.dkr.ecr.us-east-1.amazonaws.com).

Step 2

Provide your Secret name following the format above

Step 3

Leave the remaining settings at their defaults.

Step 4

Review the summary of settings then click Store.

We have successfully stored the secret rentzone-app-dev-secrets.

When you go to your secret name and click Retrieve secret value, you will see the secrets you currently have.
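If you prefer the AWS CLI over the console, a minimal sketch of the same step looks like this; every value below is a placeholder that you should replace with your own.

# Store all four key/value pairs in a single Secrets Manager secret
aws secretsmanager create-secret \
  --name rentzone-app-dev-secrets \
  --region us-east-1 \
  --secret-string '{"rds_db_name":"<your-db-name>","username":"<your-username>","password":"<your-password>","ecr_registry":"123456789012.dkr.ecr.us-east-1.amazonaws.com"}'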

Registering a domain name in Route 53

For more details on registering a domain name in Route 53, visit my tutorial here where I explained it thoroughly.

Updating the Terraform backend file with S3 and DynamoDB information

We are updating our backend.tf file to store our Terraform state file in S3 and lock it with a DynamoDB table.

Definition of values:

# store the terraform state file in s3 and lock with dynamodb
terraform {
  backend "s3" {
    bucket         = "h-terraform-remote-state"
    key            = "rentzone-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}

bucket — the name of the S3 bucket we created previously. This is where we want to store our state file.

key — the path to give your state file within the S3 bucket. The value rentzone-app/terraform.tfstate means that in this S3 bucket, Terraform will create a folder (rentzone-app) and store our state file (terraform.tfstate) inside it.

region — the region we want to deploy our application in.

dynamodb_table — the name of the DynamoDB table that we will use to lock our state file.

Your backend.tf file should look like this then save your work.

Filling out the values in the terraform.tfvars file

Add the following values in the terraform.tfvars file so that we can deploy our infrastructure in AWS.

Definition of some values:

vpc_cidr — You can provide your own CIDR block as long as the subnet CIDRs don't overlap with one another.

alternative_names — The subdomain name. This will allow us to request an SSL certificate for both our domain and subdomain.

env_file_bucket_name — We are creating a new bucket with a globally unique name.

env_file_name — The name and extension of the environment file that we want to store in this S3 bucket; it will contain our environment variables. Its name should be identical to the environment file that I created in the iac folder.

Your env-variables-file.env file in your project folder is empty at the moment, but when we build our CI/CD pipeline with GitHub Actions, we are going to pull our environment variables from our pipeline and update this file with those variables.

# environment variables
region = "us-east-1"
project_name = "rentzone"
environment = "dev"

# vpc variables
vpc_cidr = "10.0.0.0/16"
public_subnet_az1_cidr = "10.0.0.0/24"
public_subnet_az2_cidr = "10.0.1.0/24"
private_app_subnet_az1_cidr = "10.0.2.0/24"
private_app_subnet_az2_cidr = "10.0.3.0/24"
private_data_subnet_az1_cidr = "10.0.4.0/24"
private_data_subnet_az2_cidr = "10.0.5.0/24"

# secrets manager variables
secrets_manager_secret_name = "rentzone-app-dev-secrets"

# rds variables
multi_az_deployment = "false"
database_instance_identifier = "app-db"
database_instance_class = "db.t2.micro"
publicly_accessible = "false"

# acm variables
domain_name = "pinkastra.co.uk"
alternative_names = "*.pinkastra.co.uk"

# s3 variables
env_file_bucket_name = "h-rentzone-app-env-file-bucket"
env_file_name = "env-variables-file.env"

# ecs variables
architecture = "X86_64"
image_name = "rentzone-app"
image_tag = "latest"

# route-53 variables
record_name = "www"

Your terraform.tfvars file should look like this

Save your work and push your updates to your GitHub repository.


Running the terraform apply command to test the Terraform code

We are going to test our Terraform code to make sure that everything is working properly before we start building our CI/CD pipeline.

When you apply this Terraform code, it’s going to create the infrastructure that you see in this reference architecture in your AWS account.

This infrastructure has:

  • VPC with public and private subnets
  • Internet gateway
  • Route tables
  • Application load balancer
  • ECS service
  • RDS instance
  • S3 bucket and more
Reference Architecture

Open your integrated terminal and run terraform init to initialize.

Next, run terraform apply. It will show you the plan and the 47 resources it will create.

Terraform has successfully created all the resources in my AWS account.

We have exported some values that we want to use in our CI/CD pipeline. This allows our pipeline to reference these values dynamically so that we don't have to hard-code any of them; we won't know what some of these values will be until the resources are created. This is one of the use cases of Terraform output (you can refer to your output.tf file).

We confirmed that our Terraform code is working properly. Let's go ahead and run terraform destroy to destroy the resources.

Delete your .terraform directory and .terraform.lock.hcl file because we don't need them in our iac folder. Afterwards, you can close your terminal.

When we run terraform init, Terraform automatically downloads those two items into our iac directory so that it can create our resources in our AWS account. Since we're going to use our CI/CD pipeline to create our resources, Terraform will re-download them into the iac folder from within the pipeline.

Starting to Build the CI/CD Pipeline in GitHub Actions

We will learn how to create all the jobs we need to build our CI/CD pipeline. We will create all the jobs in the following reference architecture in GitHub Actions. These jobs will allow us to build a fully automated CI/CD pipeline to deploy any dynamic application on AWS.

A runner in GitHub Actions is a machine that runs a job. In this project we will use two types of runners to build our CI/CD pipeline.

GitHub-Hosted Runner — A cloud-based virtual machine provided by GitHub for running automated workflows. We will use this to run a job; afterwards, the machine goes away.

Self-Hosted Runner — A runner (a machine that we create ourselves, such as an EC2 instance) that is set up and maintained by the user on their own infrastructure for running GitHub Actions workflows.

Reference Architecture
  1. Configure AWS credentials
  2. Build AWS infrastructure with Terraform
  3. Create ECR repository
  4. Start self-hosted EC2 runner
  5. Build and push Docker image to ECR
  6. Create environment file and export to S3
  7. Migrate data into RDS database with Flyway
  8. Stop self-hosted EC2 runner
  9. Create new task definition revision
  10. Restart ECS Fargate service

Creating a GitHub Personal Access Token

We will create a personal access token. Docker will use it to clone the application code's repository when we build our Docker image.

Go to Developer settings under Settings page in your GitHub account.

Under Personal access tokens, go to Tokens (classic), select Generate new token, then Generate new token (classic).

Provide your Note and Expiration, select repo, and click Generate token. Copy the token and save it in a safe place, as you will not have access to it again after you leave this page.

Creating GitHub Repository Secrets

We will create the repository secrets that the GitHub Actions jobs need to build our CI/CD pipeline for this project.

Open the repository secrets.txt file from my GitHub repository. It contains the keys for the secrets we want to store in our repository secrets. Copy the reference file and paste it into your notepad.

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
ECR_REGISTRY
PERSONAL_ACCESS_TOKEN
RDS_DB_NAME
RDS_DB_PASSWORD
RDS_DB_USERNAME

Go to your GitHub repository for this project. Select Settings, Secrets and variables, then click Actions. Under Secrets tab, click New repository secret.

Provide the name and secret value for each key. Do this for each of the keys above, then click Add secret.

Once completed, it should look like this. This is all we need to do to create the secrets that our GitHub Actions jobs need to build our CI/CD pipeline.

Creating a GitHub Actions Workflow File

To start building the CI/CD pipeline to deploy our application, we must create a Workflow File.

Workflow files use YAML syntax, and must have either a .yml or .yaml file extension. You must store workflow files in the .github/workflows directory of your repository.

Open your repository for this project in Visual Studio Code. Create a new folder named .github then create another folder named workflows within this folder.

Right-click on your workflows folder, select New File, and name it deploy_pipeline.yml.

This is how you create the Workflow File that GitHub Actions will use to build the CI/CD pipeline.

Creating a GitHub Actions Job to configure AWS credentials

1. Configure AWS credentials

We will create the first job in our pipeline. This job will be responsible for configuring our IAM credentials to verify our access to AWS and authorize our GitHub actions job to create new resources in our AWS account.

Open your project folder in Visual Studio Code and select deploy_pipeline.yml to open it. Open the reference file that I created in my GitHub repository, copy and paste the syntax into your deploy_pipeline.yml file, then save your work. To learn more about the syntax we are using here, visit the GitHub Actions documentation.

The values below are filled in to match the repository secrets we created earlier; adjust the workflow name, branch, and region to your own setup.

name: Deploy Pipeline

on:
  push:
    branches: [main]

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: us-east-1

jobs:
  # Configure AWS credentials
  configure_aws_credentials:
    name: Configure AWS credentials
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

After creating the job that will configure our AWS credentials, our deploy_pipeline.yml should look like this

Go ahead and save your work and push the updates to your GitHub repository. Remember that our pipeline will trigger when we push our updates into the main branch.

In our GitHub repository, if we go to Actions we can see that the job has already run.
The name of our commit message and pipeline. For every run of the pipeline, the number increments by 1. You can click the commit message if you want to see this job.
We will see a visual representation of our pipeline and how the jobs depend on each other as we continue to add more jobs.

Creating a GitHub Actions Job to deploy AWS Infrastructure

2. Build AWS infrastructure with Terraform

The next job in our pipeline will use Terraform and the Ubuntu GitHub Hosted Runner to build our infrastructure in AWS. This job will apply our Terraform code and create the following:

  1. VPC with public and private subnets
  2. Internet gateway
  3. NAT gateways
  4. Security groups
  5. Application load balancer
  6. RDS instance
  7. S3 bucket
  8. IAM role
  9. ECS cluster
  10. ECS task definition
  11. ECS service and
  12. Record set in Route 53

Open the reference file that I created in my GitHub repository. On line 26, copy and paste it in your workflow file in your project folder in Visual Studio Code.

When you are working with YAML, indentation is important.
For all your jobs, such as those on lines 14 and 27, ensure that they are aligned with each other.

Let's set the environment variables for our deploy_aws_infrastructure job. On line 49, copy the environment key TERRAFORM_ACTION, add one line, and paste it below line 10. We will run the terraform apply command to deploy our infrastructure in AWS.

We will use this environment variable, TERRAFORM_ACTION, to decide whether we want to run terraform apply or terraform destroy.
actions/checkout@v3 checks out our code from the repository. By "check out our repository", we mean that when our GitHub runner starts, it clones our repository so it can use the Terraform code in the iac folder to deploy our infrastructure in AWS.

To know more about all the Actions we used in this job visit the documentation here and here.

GitHub Actions runs your commands from the root directory of the repository. In order to apply our Terraform code, we need to change from the root directory into the iac directory, which is where our Terraform code lives.
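As a rough sketch of what those steps look like in the workflow (the step names are illustrative, not necessarily the exact contents of my reference file):

- name: Checkout repository
  uses: actions/checkout@v3

- name: Run Terraform init and apply/destroy
  working-directory: ./iac
  run: |
    terraform init
    terraform ${{ env.TERRAFORM_ACTION }} -auto-approve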

This job will get the outputs from our Terraform deployment so that our GitHub Actions jobs can use them. In our Terraform project, we are using the output.tf file to export the output of our image name, domain name, RDS endpoint, image tag, and so on (see your output.tf file).

For every output we've listed in our output.tf file, we have to create a step in our GitHub Actions job to get that output.

We added the condition if: env.TERRAFORM_ACTION == 'apply' because when we run the terraform apply command, Terraform exports the values of the outputs we specified in the output.tf file after creating our AWS infrastructure, but when we run terraform destroy there are no outputs to export. That is why we only run this step when the Terraform action equals apply.
We are using an environment variable to capture the output of our image name (line 53); afterwards, we create another output (line 54) that mirrors the value captured on line 53 and export it to our GitHub Actions environment file ($GITHUB_ENV).

This is how you can get an output from your Terraform deployment and use it in your GitHub Actions job. The same concept applies to the rest of the values.
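For example, a minimal sketch of one such step, assuming an output named image_name exists in output.tf:

- name: Get Terraform output image name
  if: env.TERRAFORM_ACTION == 'apply'
  working-directory: ./iac
  run: |
    # Read the Terraform output and persist it for later steps
    IMAGE_NAME=$(terraform output -raw image_name)
    echo "IMAGE_NAME=$IMAGE_NAME" >> $GITHUB_ENV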

At the moment, our deploy_pipeline.yml should look like this

This step prints our GitHub Actions environment file to show its contents. I added it to verify that the outputs we exported from our Terraform deployment were properly placed in our GitHub Actions environment file.

To finish the GitHub Actions job that builds our AWS infrastructure, we will create an outputs option to export some environment variables from this job so that the following GitHub Actions jobs can use them.

Save your work. To deploy this GitHub Actions job to create our infrastructure in AWS, go ahead and commit and sync your changes to your GitHub repository. This is going to start our build pipeline.

In your project repository in your GitHub account, our latest commit is running.
Our pipeline running in order. It's now running the Build AWS infrastructure job.
The steps it has completed and is running.
It has finished running the terraform apply command, which created 47 resources. The rest of the steps ran successfully.
Environment variables and their values for the Build AWS infrastructure job. We exported these from our Terraform deployment.
Both jobs ran successfully.

You can verify that all the resources from the reference architecture have been properly applied to your AWS account.

The ECS service is going to fail for now because we haven't built our Docker image yet; hence, there isn't an image attached to it.

Creating a GitHub Actions Job to destroy AWS Infrastructure

2. Build AWS infrastructure with Terraform

This is the second part of the previous tutorial. We will use the GitHub Actions job we created in the last tutorial to destroy the resource we created in our AWS account.

Open your workflow file in Visual Studio Code. On line 11, replace apply with destroy.

Let's save our work, then push the update to our GitHub repository to trigger our pipeline. Afterwards, let's go to our GitHub account.

Our latest commit, destroy aws infrastructure.
It has run the steps in our workflow to destroy the resources we created in our AWS account with Terraform.
We don't have to export the values of resources we've already removed. That is why the steps that export our Terraform output did not run.

Creating a GitHub Actions Job to create an Amazon ECR repository

3. Create ECR repository

We will create the next job in our pipeline to create a repository in Amazon ECR, which we will use to store our Docker image for this project.

Copy the reference file from my GitHub repository then paste all of it on line 144 (ensure that your cursor is at the beginning before pasting) in your workflow file in Visual Studio Code.

This is the job that we will use to create our repository in ECR.

Let’s modify the steps.

needs — this job depends on the deploy_aws_infrastructure and configure_aws_credentials jobs.
The if condition means that if the deploy_aws_infrastructure job output (terraform_action) is not equal to destroy, we want to execute this job.

Line 156. We want to check whether the ECR repository we are creating already exists in our AWS account before we run the next command; otherwise the pipeline will fail.

Line 158. We're setting an environment variable IMAGE_NAME and getting its value from the Build AWS infrastructure job output IMAGE_NAME, as defined in our Terraform project (terraform.tfvars, line 34), dynamically referencing it in our CI/CD pipeline.

Line 159. We are running a CLI command to check whether our repository exists. If so, we get its name, store it in repo_name, and export it to our environment file in GitHub Actions.

We've now covered the whole job. Copy and paste the following into your workflow file.

# Create ECR repository
create_ecr_repository:
  name: Create ECR repository
  needs:
    - configure_aws_credentials
    - deploy_aws_infrastructure
  if: needs.deploy_aws_infrastructure.outputs.terraform_action != 'destroy'
  runs-on: ubuntu-latest
  steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    - name: Check if ECR repository exists
      env:
        IMAGE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.image_name }}
      run: |
        result=$(aws ecr describe-repositories --repository-names "${{ env.IMAGE_NAME }}" | jq -r '.repositories[0].repositoryName')
        echo "repo_name=$result" >> $GITHUB_ENV
      continue-on-error: true

    - name: Create ECR repository
      env:
        IMAGE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.image_name }}
      if: env.repo_name != env.IMAGE_NAME
      run: |
        aws ecr create-repository --repository-name ${{ env.IMAGE_NAME }}
Our workflow file, deploy_pipeline.yml, should look like this.

This is all we need to do to create the GitHub Actions job that we will use to create our repository in Amazon ECR. Don't forget to update your TERRAFORM_ACTION back to apply before saving your work. Commit and sync your changes to push your updates to your GitHub repository and trigger your pipeline.

The latest commit create ecr repo has already triggered our pipeline.
Our pipeline is now running the 2nd job Build AWS infrastructure.
It has completed all jobs.
The new repository we just created in our AWS account.

Creating a Self-Hosted Runner

4. Start self-hosted EC2 runner

For the next job in our pipeline, we will start a self-hosted EC2 runner in the private subnet. We will use this runner for two things in our pipeline.

  1. We will use it to build a Docker image and push it to the Amazon ECR repository we created previously.
  2. We will also use it to run our database migration with Flyway.

We are using a self-hosted runner for these jobs because launching an EC2 runner in our private subnet allows the runner to easily access the resources in that subnet.

To know more about the action that we will use to create our self-hosted runner, visit the How to start section in the action's documentation.

Creating Key Pairs

To create a .pem key pair, visit my tutorial here where I discussed it thoroughly.

Two keys will be generated: a public key and a private key. The key that is downloaded to your computer is the private key. Make sure to move it to the directory your PowerShell terminal opens to.

Launching an Amazon Linux 2 EC2 instance

To create an AMI that we will use to start our self hosted EC2 runner, we need to launch an EC2 instance in a public subnet.

Use these settings to launch your EC2 instance.

Name: GitHub Action Runner

AMI: Amazon Linux 2 AMI

Instance type: t2.micro

Key pair: Select your key pair

VPC: default

Security Group: SSH with your IP as a source

Successfully launched the EC2 instance

SSH into an EC2 instance for Windows and Mac users

Next, we will SSH into the EC2 instance we just created using PowerShell or Terminal.
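The command follows the usual pattern for Amazon Linux 2; the key file name and public IP below are placeholders.

# Run this from the directory that contains your private key
ssh -i "your-key-pair.pem" ec2-user@<public-ip-of-your-instance>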

Successfully SSHed into our EC2 instance.

Installing Docker and Git on the EC2 instance

We will install Docker, Git, and enable the Docker service on our EC2 instance.

Run these commands in your PowerShell/Terminal.

sudo yum update -y && \
sudo yum install docker -y && \
sudo yum install git -y && \
sudo yum install libicu -y && \
sudo systemctl enable docker

Successfully executed all the commands.
Docker and Git were successfully installed.

Creating an Amazon Machine Image (AMI) and terminating the EC2 instance

We're going to use this EC2 instance to create an AMI so that our GitHub Actions job can use that AMI to start our self-hosted runner.

Create an image from the EC2 instance using the information below:

Image name and description: GitHub Action Runner

Storage size: 20

Tag: GitHub Action Runner

GitHub Action Runner AMI has been created.
GitHub Action Runner snapshot has also completed.

Lastly, let’s terminate our EC2 instance.

GitHub Action Runner instance was successfully terminated.

Creating a GitHub Actions Job to start a Self-Hosted Runner

4. Start self-hosted EC2 runner

Open your workflow file (deploy_pipeline.yml) for this project in Visual Studio Code and copy the reference file from my GitHub repository. Paste it under your last job # Create ECR Repository.

The reference file that contains the job and the steps that we will use to start the self hosted runner.

We will start our self-hosted runner in the private subnet, and in order for that subnet to be available, we first need to deploy our AWS infrastructure. That is why this job depends on the deploy_aws_infrastructure job.

This job needs our configure_aws_credentials and deploy_aws_infrastructure jobs.

Copy the value from line 150 and paste it here.

If we run terraform apply, we want to run the following jobs. If we run terraform destroy, we are removing our resources, so we don't need to run this job.

This step checks whether we already have a running EC2 instance tagged ec2-github-runner. If so, it writes "runner-running=true" to our GitHub Actions environment file; otherwise it writes "runner-running=false".

steps:
  - name: Check for running EC2 runner
    run: |
      instances=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=ec2-github-runner" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].InstanceId' --output text)

      if [ -n "$instances" ]; then
        echo "runner-running=true" >> $GITHUB_ENV
      else
        echo "runner-running=false" >> $GITHUB_ENV
      fi

id — a way to tag this step so we can reference it in other steps or jobs.

uses — this step uses the action machulav/ec2-github-runner@v2. This is the action that launches the EC2 runner; we created the AMI beforehand so the action can use it. Visit here for more information.

github-token — your personal access token from the secrets in your repository.

ec2-image-id — the AMI ID we created in our AWS account.

subnet-id and security-group-id — we export their IDs from the deploy_aws_infrastructure job.
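Putting those pieces together, a hedged sketch of the start step looks like the following; the AMI ID is a placeholder, and the subnet and security group output names are assumptions that should match whatever you export in your output.tf.

- name: Start EC2 runner
  id: start-ec2-runner
  if: env.runner-running != 'true'
  uses: machulav/ec2-github-runner@v2
  with:
    mode: start
    github-token: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
    ec2-image-id: ami-0123456789abcdef0   # your GitHub Action Runner AMI ID
    ec2-instance-type: t2.micro
    subnet-id: ${{ needs.deploy_aws_infrastructure.outputs.private_data_subnet_az1_id }}
    security-group-id: ${{ needs.deploy_aws_infrastructure.outputs.runner_security_group_id }}
    aws-resource-tags: >
      [
        {"Key": "Name", "Value": "ec2-github-runner"}
      ]

The Name tag matches the filter used in the check step above, so a second pipeline run can detect the runner that is already up.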

Our deploy_pipeline.yml should look like this

This is all we need to do to create the job that will start the self-hosted EC2 runner. Save your work, then push the updates to your GitHub repository to trigger the pipeline.

The name of our commit; it has triggered our pipeline.
The entire pipeline. It is running the jobs to configure our AWS credentials and to build the infrastructure, ECR repository, and self-hosted EC2 runner.

Remember that we are storing our state file in S3, so when the Build AWS infrastructure job runs, it checks the Terraform state in S3, realizes that our state is already up to date, and moves on to the next job.

The entire pipeline. We are running the Create ECR repository and Start self-hosted runner jobs in parallel.

We should have a runner in our GitHub account and EC2 instance running in our AWS account.

In your project repository in GitHub, this shows the IP (10-0-4-134) of the runner.
The EC2 instance that the GitHub Actions job just created, with the same IP address as the GitHub runner.

This is how you create a self-hosted EC2 runner.

Creating a GitHub Actions Job to Build a Docker Image

5. Build and push Docker image to ECR

The next job in our pipeline will build the Docker image for our application and push the image to the Amazon ECR repository we created in the previous step.

Prerequisites:

  1. Set up a repository to store the application code.
  2. Add the application code to the repository.
  3. Create a Dockerfile that our job will use to build the Docker image for our application.
  4. Create the AppServiceProvider.php file that we’ll update in our application code to allow our application to redirect HTTP traffic to HTTPS.

We already covered all of these topics in my previous project, where we deployed this application in the AWS Management Console. You should complete that project first before starting this CI/CD project; it will help you gain a better understanding.

Creating a repository to store the application code

We will create a repository to store our application code then we will clone the repository on our computer.

For more details please visit here where I discussed it thoroughly.

Adding the application code to the GitHub repository

Next, we will add our application code to that repository and push the changes back to GitHub.

For more details please visit here where I discussed it thoroughly.

Creating the Dockerfile

We will create the Dockerfile that our build job will use to build the Docker image for our application.

Create a new file named Dockerfile in your root folder. This is our project folder that we are using to build our pipeline.

Copy the reference file from my GitHub repository and paste it in your Dockerfile then save your work. For more details please visit here where I discussed it thoroughly.

Creating the AppServiceProvider.php file

We will create the AppServiceProvider.php file that our application needs to redirect HTTP traffic to HTTPS. Create a new file in your project folder named AppServiceProvider.php, then paste in the reference file from my GitHub repository. This contains the code that we need to add to our AppServiceProvider.php file. Save your work. We won't push our updates to our GitHub repository yet.

Creating a GitHub Actions job to build and push a Docker Image into Amazon ECR

5. Build and push Docker image to ECR

Before we create the job that we will use to build and push our Docker image to Amazon ECR, we need to enter some environment variables. Access my reference file from my GitHub repository.

Open your deploy_pipeline.yml workflow file. We need to enter some environment variables before we create the job to build the Docker image.

Paste the reference file this way. Your variables should look like this.
These are the environment variables we need to build our Docker image.

Copy the second reference file that I created in my GitHub repository.

This contains the job that we will use to build the Docker image for our application and push it to Amazon ECR repository.

Paste the reference file this way just below outputs.
Our deploy_pipeline.yml should look like this after entering the environment variables that we need in this job.
Our workflow file should look like this after referencing our secrets and variables.
This step retags the image so that we can push it to Amazon ECR. We export the value of our image name from the deploy_aws_infrastructure job.
This step pushes the image to Amazon ECR; hence, we create an environment variable for our IMAGE_NAME similar to what we just did.
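As a hedged sketch of what those steps amount to: only a couple of the build arguments are shown (the real Dockerfile takes more), and IMAGE_NAME, IMAGE_TAG, and GITHUB_USERNAME are assumed to come from the environment variables and Terraform outputs set up earlier.

- name: Build Docker image
  run: |
    docker build \
      --build-arg PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }} \
      --build-arg GITHUB_USERNAME=${{ env.GITHUB_USERNAME }} \
      -t ${{ env.IMAGE_NAME }} .

- name: Retag and push Docker image to Amazon ECR
  run: |
    # Authenticate Docker against the ECR registry, retag the image, then push it
    aws ecr get-login-password --region ${{ env.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.ECR_REGISTRY }}
    docker tag ${{ env.IMAGE_NAME }} ${{ secrets.ECR_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}
    docker push ${{ secrets.ECR_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}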

This is all we need to do to create the build job that builds the Docker image for our application and pushes it to Amazon ECR. Save your work and push your updates to your GitHub repository to trigger the pipeline.

Our latest commit has triggered the pipeline
It has started the Build and push Docker image job.
Give your pipeline enough time to build and push the Docker Image to ECR and complete all jobs.
We verified that the repository rentzone-app is in our Amazon ECR repository.
We verified that the Docker image latest is in our rentzone-app repository.

This is how you create a GitHub Actions job to build a Docker image and push it to an Amazon ECR repository.

Creating a GitHub Actions Job to export the environment variables into the S3 bucket

6. Create environment file and export to S3

The next job that we will create in our pipeline stores all the build arguments we used to build the Docker image in a file; afterwards, the job copies the file to the S3 bucket so that the ECS Fargate containers can reference the variables stored in the file. Access my reference file from my GitHub repository and copy it to your workflow file.

Paste it this way just below our last job.

Below shows how we referenced the values. Similar to what we did on the previous steps.

# Create environment file and export to S3
export_env_variables:
  name: Create environment file and export to S3
  needs:
    - configure_aws_credentials
    - deploy_aws_infrastructure
    - start_runner
    - build_and_push_image
  if: needs.deploy_aws_infrastructure.outputs.terraform_action != 'destroy'
  runs-on: ubuntu-latest
  steps:
    - name: Export environment variable values to file
      env:
        DOMAIN_NAME: ${{ needs.deploy_aws_infrastructure.outputs.domain_name }}
        RDS_ENDPOINT: ${{ needs.deploy_aws_infrastructure.outputs.rds_endpoint }}
        ENVIRONMENT_FILE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.environment_file_name }}
      run: |
        echo "PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }}" > ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "GITHUB_USERNAME=${{ env.GITHUB_USERNAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "REPOSITORY_NAME=${{ env.REPOSITORY_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "WEB_FILE_ZIP=${{ env.WEB_FILE_ZIP }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "WEB_FILE_UNZIP=${{ env.WEB_FILE_UNZIP }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "DOMAIN_NAME=${{ env.DOMAIN_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_ENDPOINT=${{ env.RDS_ENDPOINT }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_NAME=${{ secrets.RDS_DB_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_USERNAME=${{ secrets.RDS_DB_USERNAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_PASSWORD=${{ secrets.RDS_DB_PASSWORD }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}

    - name: Upload environment file to S3
      env:
        ENVIRONMENT_FILE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.environment_file_name }}
        ENV_FILE_BUCKET_NAME: ${{ needs.deploy_aws_infrastructure.outputs.env_file_bucket_name }}
      run: aws s3 cp ${{ env.ENVIRONMENT_FILE_NAME }} s3://${{ env.ENV_FILE_BUCKET_NAME }}/${{ env.ENVIRONMENT_FILE_NAME }}

Let’s define the values below.

needs — As discussed, this is where we specify the jobs that the current job in our pipeline depends on.
This step exports the environment variables to a file. We've created environment variables to get the values of our domain name, RDS endpoint, and environment file name.
Line 285. The first command echoes PERSONAL_ACCESS_TOKEN= followed by the value ${{ }} of the personal access token into our environment file ${{ }}. The remaining commands do the same for the other variables.

After storing the key-value pairs above in our environment file, the next step uploads that file to our S3 bucket.

run — This runs the AWS CLI command aws s3 cp.
Your completed workflow file in Visual Studio Code should look like this. Mind each character and space.

Let's save our work and push the update to our GitHub repository to trigger our pipeline.

Our latest commit has triggered our pipeline. It is now running the job.
It has successfully created the environment file and exported it to S3.
We verified that the variables are properly stored in the environment file we just uploaded to our S3 bucket, rentzone-h-rentzone-app-env-file-bucket.
We defined our bucket name in our terraform.tfvars file.
We see the environment file inside our bucket.

Creating the SQL folder and adding the SQL script

The next job that we will create in our pipeline will use Flyway to migrate the SQL data for our application into the RDS database. First, we need to add the SQL script we want to migrate into our RDS database to our project folder. Create a new folder named sql in your root project folder, then access my reference file and download the SQL script.

Drag your downloaded script into your new sql folder and release.

Creating a GitHub Actions Job to Migrate Data into the RDS database with Flyway

7. Migrate data into RDS database with Flyway

In the next step of our pipeline, we will use Flyway to transfer the SQL data for our application into the RDS database. This involves setting up Flyway on our self-hosted runner and using it to move the data into the RDS database.

FLYWAY_VERSION — the environment variable we're adding for the Flyway version. This allows us to download any version we want.

You can use the latest Flyway version here.

Access my reference file from my GitHub repository and paste it this way in your project folder.

This reference file contains the job and steps we will use to migrate the data for our application into the RDS database.
needs — As discussed, this is where we specify the jobs that the current job in our pipeline depends on.
Under steps:, we've listed the steps we want to run to migrate our data into the RDS database.

This step will run the wget command to download Flyway on our self-hosted runner.

wget -qO- https://repo1.maven.org/maven2/org/flywaydb/flyway-commandline/${{ env.FLYWAY_VERSION }}/flyway-commandline-${{ env.FLYWAY_VERSION }}-linux-x64.tar.gz | tar xvz && sudo ln -s `pwd`/flyway-${{ env.FLYWAY_VERSION }}/flyway /usr/local/bin 

This is all we need to create the command that downloads Flyway onto our self-hosted runner, and the way we have written it allows us to dynamically change the version of Flyway we want to download.

When you download Flyway, the distribution ships with its own sql directory. We remove that default directory so that Flyway uses our project's sql folder instead.

For more information on basic Linux commands visit here.

The Flyway migrate command and its variables migrate the SQL script (V1_rentzone-db.sql) for our application into the RDS database.
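A minimal sketch of that step, assuming a MySQL database and that the rds_endpoint Terraform output already includes the port:

- name: Run Flyway migration
  env:
    RDS_ENDPOINT: ${{ needs.deploy_aws_infrastructure.outputs.rds_endpoint }}
  run: |
    # Point Flyway at the RDS instance and at the sql folder in this repository
    flyway -url="jdbc:mysql://${{ env.RDS_ENDPOINT }}/${{ secrets.RDS_DB_NAME }}" \
      -user="${{ secrets.RDS_DB_USERNAME }}" \
      -password="${{ secrets.RDS_DB_PASSWORD }}" \
      -locations="filesystem:sql" \
      migrate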

Let’s save our work and push our update to our GitHub project repository to trigger our pipeline.

Our latest commit and it has triggered our pipeline.
It has successfully completed all jobs including the Create environment file and export to S3 and Migrate data into RDS database with Flyway.

Terminating the Self-Hosted Runner in the AWS management console

8. Stop self-hosted EC2 runner

The next job that we will create in our pipeline will be used to stop the self-hosted runner. We launch the self-hosted runner to build our Docker image and migrate the data for our application into the RDS database; once those tasks are complete, we terminate the self-hosted runner immediately.

First, terminate the self-hosted runner that is currently running, via the AWS Management Console.

We are terminating this instance because when our pipeline runs again, the migrate-data job will try to download Flyway on the runner, and the job will fail because Flyway is already installed on it.

Creating a GitHub Actions Job to Stop the Self-Hosted Runner

8. Stop self-hosted EC2 runner

We will create the job that we will use to stop the self-hosted EC2 runner. Access my reference file in my GitHub repository and copy it into your workflow file just below the last job we created.

Our # Stop the self-hosted EC2 runner job should look like this after updating the values.

needs — This is where we specify all the jobs we want to run first before this job runs.
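For orientation, a hedged sketch of the whole job is below. The job names in needs mirror the earlier jobs in this tutorial and are assumptions about how you named them; the label and instance ID come from the start-runner job's outputs, as documented by the machulav/ec2-github-runner action.

# Stop the self-hosted EC2 runner
stop_runner:
  name: Stop self-hosted EC2 runner
  needs:
    - configure_aws_credentials
    - deploy_aws_infrastructure
    - start_runner
    - build_and_push_image
    - export_env_variables
    - migrate_data
  if: needs.deploy_aws_infrastructure.outputs.terraform_action != 'destroy'
  runs-on: ubuntu-latest
  steps:
    - name: Stop EC2 runner
      uses: machulav/ec2-github-runner@v2
      with:
        mode: stop
        github-token: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        label: ${{ needs.start_runner.outputs.label }}
        ec2-instance-id: ${{ needs.start_runner.outputs.ec2-instance-id }}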

This is all we need to do to create the job that will stop the self-hosted runner we used to build our Docker image and migrate our data into the RDS database. Let's save our work and push our updates to our GitHub repository to trigger our pipeline.

Our latest commit has triggered our pipeline.
It's now running the jobs. Give your pipeline some time because it'll start a new self-hosted EC2 runner.
It has completed all jobs and stopped the self-hosted EC2 runner.
AWS EC2 instance i-05f9427f8bdda667c is terminated
We verified that the EC2 instance i-05f9427f8bdda667c is terminated in our AWS account.

Creating a GitHub Actions Job to Create a new ECS task definition revision

9. Create new task definition revision

In the build-and-push-image job in our pipeline, we successfully built the Docker image for our application and pushed the image to Amazon ECR. In the next job that we will create in our pipeline, we will update the task definition of the ECS service hosting our application with the new image we pushed to Amazon ECR.

Access my reference file in my GitHub repository. Open your workflow file deploy_pipeline.yml and paste it on line 364 just below the last job we created.

needs — This is where we specify all the jobs we want to run first before this current job runs.

We only have one step in this job. In this step, we create a new task definition revision, and we've also created environment variables for it. This is how our step should look. Add ${{ }}/ on line 382.

steps:
  - name: Create new task definition revision
    env:
      ECS_FAMILY: ${{ needs.deploy_aws_infrastructure.outputs.task_definition_name }}
      ECS_IMAGE: ${{ secrets.ECR_REGISTRY }}/${{ needs.deploy_aws_infrastructure.outputs.image_name }}:${{ needs.deploy_aws_infrastructure.outputs.image_tag }}

The following names are specified in the deploy_aws_infrastructure job.

ECS_FAMILY — the name of the task definition we want to create a revision of.

ECS_IMAGE — the name of our container image.

The resulting ECS_IMAGE value is the URI of our Docker image in Amazon ECR.
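One common way to implement this step is to fetch the current task definition, swap in the new image URI with jq, and register the result as a new revision. The sketch below shows that pattern as a self-contained step; it is not necessarily the exact commands in my reference file.

- name: Create new task definition revision
  env:
    ECS_FAMILY: ${{ needs.deploy_aws_infrastructure.outputs.task_definition_name }}
    ECS_IMAGE: ${{ secrets.ECR_REGISTRY }}/${{ needs.deploy_aws_infrastructure.outputs.image_name }}:${{ needs.deploy_aws_infrastructure.outputs.image_tag }}
  run: |
    # Fetch the current task definition for the family
    TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition ${{ env.ECS_FAMILY }})
    # Swap in the new image URI and strip read-only fields before re-registering
    NEW_TASK_DEFINITION=$(echo "$TASK_DEFINITION" | jq --arg IMAGE "${{ env.ECS_IMAGE }}" \
      '.taskDefinition | .containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities, .registeredAt, .registeredBy)')
    # Register the modified definition as a new revision
    aws ecs register-task-definition --cli-input-json "$NEW_TASK_DEFINITION"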

This is all we need to do to create the job that creates the new task definition revision for our ECS service. Let's save our work, then push our new updates to our GitHub repository to trigger our pipeline.

Our latest commit has triggered our pipeline.
It has run the jobs, starting with configuring our AWS credentials. We are going to wait for all these jobs to run.
It has successfully completed the last job Create new Task Definition revision.
All jobs under Create New Task Definition Revision are completed.
Click the output to see all the commands we ran.
Our Task Definition for this project.
Our Task Definition revisions.
Our latest Task Definition revision.
Our current ECS cluster is still using the old task definition because we haven't created the job that updates the ECS service to use the latest task definition revision we just created.

Creating a GitHub Actions Job to Restart the ECS Fargate Service

10. Restart ECS Fargate service

We will create the final job in our pipeline, which restarts the ECS service and forces it to use the latest task definition revision we created previously.

Access my reference file in my GitHub repository. Open your workflow file deploy_pipeline.yml and paste it on line 405 just below the last job we created.

needs — This is where we specify all the jobs we want to run first before this current job runs.
The first step in our job updates the ECS service; env is where we specify the environment variables for that step. The second step waits for the ECS service to stabilize; its run command executes another AWS CLI command, aws ecs wait services-stable.
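A hedged sketch of those two steps is below; the cluster and service output names are assumptions that should match your output.tf.

- name: Update ECS service
  env:
    ECS_CLUSTER: ${{ needs.deploy_aws_infrastructure.outputs.ecs_cluster_name }}
    ECS_SERVICE: ${{ needs.deploy_aws_infrastructure.outputs.ecs_service_name }}
    ECS_FAMILY: ${{ needs.deploy_aws_infrastructure.outputs.task_definition_name }}
  run: |
    # Force a new deployment so the service picks up the latest task definition revision
    aws ecs update-service --cluster ${{ env.ECS_CLUSTER }} --service ${{ env.ECS_SERVICE }} --task-definition ${{ env.ECS_FAMILY }} --force-new-deployment

- name: Wait for ECS service to become stable
  env:
    ECS_CLUSTER: ${{ needs.deploy_aws_infrastructure.outputs.ecs_cluster_name }}
    ECS_SERVICE: ${{ needs.deploy_aws_infrastructure.outputs.ecs_service_name }}
  run: |
    aws ecs wait services-stable --cluster ${{ env.ECS_CLUSTER }} --services ${{ env.ECS_SERVICE }}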

This is all we need to do to create the job that will restart the ECS Fargate service. Let's save our work, then push our new updates to our GitHub repository to trigger our pipeline.

Our latest commit and it triggered our pipeline.
We can see the jobs and it started running Configure AWS credentials, Build AWS infrastructure, and so on.
Give your pipeline enough time to complete all jobs.
It has started running the job, Restart ECS Fargate service.
The job, Restart ECS Fargate service has finished running.
Click the output to review Update ECS Service.
The output shows that the ECS service is stable.
Amazon ECS in your AWS account shows your Cluster.
Under Cluster and Services, you should have 1 service running. The deployment's current state is running with an active status. There should be 1 task running and 1 healthy target.

This means that our ECS task has been successfully deployed and we can now access our application through our domain name.

The domain name that we registered for this project.

To access our application, open a new web browser and type your domain name in the address bar.

There you have it! We can now access our application.

Conclusion and Clean Up

This is how you create a full CI/CD project that walks you, from beginning to end, through deploying an application on AWS. This is an excellent project to present to hiring managers and employers because it covers many of the skills you need as a Cloud and DevOps Engineer.

This project covers:

  • How to deploy applications on AWS using core AWS services such as VPC, public and private subnets, NAT gateways, security groups, an application load balancer, RDS, ECS, ECR, an auto scaling group, Route 53, and more.
  • Containerization, showing your skills in building a Docker image and pushing it to Amazon ECR.
  • How to deploy an application on AWS with Infrastructure as Code (Terraform).
  • How to deploy a dynamic application on AWS using a CI/CD pipeline and GitHub Actions.

To prevent unnecessary AWS costs from running these services, on line 11 under Deploy Pipeline replace apply with destroy. Next, save your workflow and push the update to your GitHub repository to trigger the pipeline and destroy all the resources we've used for this project.

Congratulations!


Thank you for following along and stay tuned for my next project.

Build real-world projects with me here! Show your Hiring Manager and Organization that you are the right person for the job and stand out from the crowd!

Connect with me on LinkedIn and GitHub.


Eugene Miguel

Cloud DevOps Engineer • AWS Certified Solutions Architect