Deploy Dynamic Web Apps on AWS using CI/CD Pipelines and GitHub Actions
Construct a fully automated pipeline to deploy any dynamic application on AWS.
What is CI/CD?
CI and CD stand for continuous integration and continuous delivery/continuous deployment. In simple terms, CI is a modern software development practice in which incremental code changes are made frequently and reliably, while CD automates the release of those validated changes into production.
Why is CI/CD important?
CI/CD allows organizations to ship software quickly and efficiently. CI/CD facilitates an effective process for getting products to market faster than ever before, continuously delivering code into production, and ensuring an ongoing flow of new features and bug fixes via the most efficient delivery method.
Objectives:
- Setting up your local computer for the project
- Setting up your AWS account and testing Terraform code
- Starting to build the CI/CD pipeline in GitHub Actions
- Creating a GitHub Actions Job to Build a Docker Image
Setting up Your Environment
For more details on setting up your local computer, AWS account, and testing your Terraform code for this tutorial, visit my previous project here and here where I explained it thoroughly. All of my files for this project can be found in my rentzone-github-actions-terraform-ecs-project GitHub repository.
The Terraform code that we will use to complete this project is the same code that we used to deploy the rentzone application in my Terraform project. Hence, I strongly recommend that you complete that project first before jumping into this CI/CD project.
Setting up Your Local Computer for the project:
Install and set up the following tools on your computer to test your Terraform code locally. This will help you find and fix any problems with your Terraform code.
- Installing Terraform on Your Computer
- Signing Up for a Free GitHub Account
- Installing Git on Your Computer
- Generating Key Pairs for Secure Connections
- Adding the Public SSH Key to GitHub
- Installing Visual Studio Code on Your Computer
- Installing the Extensions for Terraform in Visual Studio Code
- Installing the AWS Command Line Interface (CLI) on Your Computer
- Creating an IAM User in AWS
- Generating an Access Key for the IAM User
- Running the AWS Configure Command to create a profile
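The `aws configure --profile <name>` command prompts for your access key, secret key, region, and output format, then writes them to local INI files. As a rough illustration of what ends up on disk, here is the shape of the credentials entry, written to a demo file (the profile name and key values are placeholders, not real credentials):

```shell
# What `aws configure --profile <name>` ultimately writes: an entry like this
# in ~/.aws/credentials (plus a matching region entry in ~/.aws/config).
# Written to a demo file here; the profile name and values are placeholders.
cat > demo-credentials <<'EOF'
[rentzone-profile]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
EOF
grep -c '^\[rentzone-profile\]' demo-credentials   # one profile section written
```

The AWS CLI and Terraform will both pick up a named profile like this when you reference it.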
I set up my local computer in my previous projects; visit them here and here, where I explained it thoroughly.
Setting Up Your AWS Account and Testing Terraform Code
Set up your AWS account and test your Terraform code.
Creating your GitHub repository and cloning it on your computer
For more details on creating and cloning our GitHub repository on your computer, visit my previous project here and here where I explained it thoroughly.
Updating the Gitignore file
Open your project folder (rentzone-github-actions-terraform-ecs-project) in Visual Studio Code, then click on the .gitignore file. Replace its contents with the raw .gitignore file from my GitHub repository, then save your work and push the update to your GitHub repository.
We are replacing its contents because GitHub's updated default Terraform .gitignore template now ignores .tfvars and .tfvars.json files. Since we are storing the values of our variables in the .tfvars file, Git would not commit our .tfvars file to our GitHub repository, which would break our pipeline while we are building it.
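If you prefer to edit the file from the command line instead of replacing it wholesale, a minimal sketch (the two pattern names come from GitHub's default Terraform template; the demo .gitignore below is illustrative):

```shell
# Demo .gitignore with the patterns from GitHub's default Terraform template.
printf '*.tfstate\n*.tfstate.*\n*.tfvars\n*.tfvars.json\n.terraform/\n' > .gitignore
# Strip the two patterns that would keep terraform.tfvars out of the repo.
sed -i.bak '/^\*\.tfvars$/d; /^\*\.tfvars\.json$/d' .gitignore
cat .gitignore   # the state-file and .terraform/ entries remain
```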
Adding the Terraform code into the repository
We will add the Terraform code that we will use to build our AWS infrastructure in our repository. Download the iac.zip from my GitHub repository. Open your project folder in Visual Studio Code.
If your backend.tf and terraform.tfvars files show red errors later on, this is fine; we will update their values as we go along.
We now have the iac folder, which contains all the Terraform code that we will use to deploy our infrastructure in AWS. Push this code to your GitHub repository.
Creating an S3 bucket to store your Terraform state
For more details on creating an S3 bucket to store your Terraform State, visit my previous projects here and here where I explained it thoroughly.
You can read more information about Terraform State here.
Creating a DynamoDB table to lock the Terraform state
For more details on creating a DynamoDB table to lock the Terraform state, visit my previous projects here and here where I explained it thoroughly.
Creating Secrets in AWS Secrets Manager
We will add the values for our RDS database name, username, password, and our ECR registry as secrets in Secrets Manager; this is one of the changes I made to the Terraform code.
Let’s go to our AWS console. Go to AWS Secrets Manager and select Store a new secret and enter the following information:
Step 1
Secret type: Other type of secret
The keys are listed below; you must provide your own values:
- rds_db_name
- username
- password
- ecr_registry
Review the summary of settings then click Store.
We have successfully stored the secret rentzone-app-dev-secrets.
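The same secret can also be created from the CLI instead of the console. A sketch with placeholder values (only the four key names come from this project):

```shell
# Build the secret payload as JSON with jq; all four values are placeholders.
secret_string=$(jq -n \
  --arg db   "applicationdb" \
  --arg user "admin" \
  --arg pass "changeme" \
  --arg reg  "123456789012.dkr.ecr.us-east-1.amazonaws.com" \
  '{rds_db_name: $db, username: $user, password: $pass, ecr_registry: $reg}')
echo "$secret_string"
# With AWS credentials configured, the payload could then be stored with:
# aws secretsmanager create-secret --name rentzone-app-dev-secrets --secret-string "$secret_string"
```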
Registering a domain name in Route 53
For more details on registering a domain name in Route 53, visit my tutorial here where I explained it thoroughly.
Updating the Terraform backend file with S3 and DynamoDB information
We are updating our backend.tf file to store our Terraform state file in S3 and lock it with a DynamoDB table.
Definition of values:
# store the terraform state file in s3 and lock with dynamodb
terraform {
  backend "s3" {
    bucket         = "h-terraform-remote-state"
    key            = "rentzone-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}
bucket — the name of the S3 bucket we created previously. This is where we want to store our state file.
key — the path to your state file inside the S3 bucket. Ending it with /terraform.tfstate means that, in this S3 bucket, Terraform will create a folder (rentzone-app) and store our state file (terraform.tfstate) inside it.
region — the region where we created the S3 bucket.
dynamodb_table — the name of the DynamoDB table that we will use to lock our state file.
Filling out the values in the terraform.tfvars file
Add the following values in the terraform.tfvars file so that we can deploy our infrastructure in AWS.
Definition of some values:
vpc_cidr — You can provide your own CIDR block, as long as the subnet CIDRs don't overlap with each other.
alternative_names — The subdomain name. This will allow us to request an SSL certificate for both our domain and subdomain.
env_file_bucket_name — The name of the new bucket we are creating; S3 bucket names must be unique.
env_file_name — The name and extension of the environment file that we want to store in this S3 bucket; it will contain our environment variables. Its name should be identical to the environment file that I created in the iac folder.
Your env-variables-file.env file in your project folder is empty at the moment, but when we build our CI/CD pipeline with GitHub Actions, we are going to pull our environment variables from our pipeline and update this file with those variables.
# environment variables
region = "us-east-1"
project_name = "rentzone"
environment = "dev"
# vpc variables
vpc_cidr = "10.0.0.0/16"
public_subnet_az1_cidr = "10.0.0.0/24"
public_subnet_az2_cidr = "10.0.1.0/24"
private_app_subnet_az1_cidr = "10.0.2.0/24"
private_app_subnet_az2_cidr = "10.0.3.0/24"
private_data_subnet_az1_cidr = "10.0.4.0/24"
private_data_subnet_az2_cidr = "10.0.5.0/24"
# secrets manager variables
secrets_manager_secret_name = "rentzone-app-dev-secrets"
# rds variables
multi_az_deployment = "false"
database_instance_identifier = "app-db"
database_instance_class = "db.t2.micro"
publicly_accessible = "false"
# acm variables
domain_name = "pinkastra.co.uk"
alternative_names = "*.pinkastra.co.uk"
# s3 variables
env_file_bucket_name = "h-rentzone-app-env-file-bucket"
env_file_name = "env-variables-file.env"
# ecs variables
architecture = "X86_64"
image_name = "rentzone-app"
image_tag = "latest"
# route-53 variables
record_name = "www"
Your terraform.tfvars file should look like this
Save your work and push your updates to your GitHub repository.
Push yourself!
Running the terraform apply command to test the Terraform code
We are going to test our Terraform code to make sure that everything is working properly before we start building our CI/CD pipeline.
When you apply this Terraform code, it’s going to create the infrastructure that you see in this reference architecture in your AWS account.
This infrastructure has:
- VPC with public and private subnets
- Internet gateway
- Route tables
- Application load balancer
- ECS service
- RDS instance
- S3 bucket and more
Open your integrated terminal and run terraform init to initialize. Next, run terraform apply; it will show you the plan and the 47 resources that it will create.
Terraform has successfully created all the resources in my AWS account.
We have exported some values that we want to use in our CI/CD pipeline. This is going to allow our pipeline to dynamically reference these values so that we don't have to hard-code any of them in the pipeline. We won't know what some of these values will be until we create our resources. This is one of the use cases of Terraform output; you can refer to your output.tf file.
We confirmed that our Terraform code is working properly. Let's go ahead and run terraform destroy to destroy the resources.
Delete your .terraform folder and .terraform.lock.hcl file because we don't need them in our iac folder. Afterwards, you can close your terminal.
When we run terraform init, Terraform automatically downloads those two files into our iac directory so that it can create our resources in our AWS account. Since we're going to use our CI/CD pipeline to create our resources, Terraform will re-download those files into the iac folder from within our pipeline.
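The cleanup step above can be sketched as follows, run from the project root (the demo files below stand in for what a previous terraform init left behind):

```shell
# Stand-ins for what a previous `terraform init` created.
mkdir -p iac/.terraform
touch iac/.terraform.lock.hcl
# Remove both; the pipeline's own `terraform init` will re-download them.
rm -rf iac/.terraform iac/.terraform.lock.hcl
ls iac   # neither entry remains
```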
Starting to Build the CI/CD Pipeline in GitHub Actions
We will learn how to create all the jobs we need to build our CI/CD pipeline, following the reference architecture below, in GitHub Actions. These jobs will allow us to build a fully automated CI/CD pipeline to deploy any dynamic application on AWS.
A runner in GitHub Actions is a machine that we use to run a job. In this project, we will use two types of runners to build our CI/CD pipeline.
GitHub-Hosted Runner — A cloud-based virtual machine provided by GitHub for running automated workflows. We will use this to run a job, after which the machine is discarded.
Self-Hosted Runner — A runner (a machine that we create ourselves, such as an EC2 instance) that is set up and maintained by the user on their own infrastructure for running GitHub Actions workflows.
1. Configure AWS credentials
2. Build AWS infrastructure with Terraform
3. Create ECR repository
4. Start self-hosted EC2 runner
5. Build and push Docker image to ECR
6. Create environment file and export to S3
7. Migrate data into RDS database with Flyway
8. Stop self-hosted EC2 runner
9. Create new task definition revision
10. Restart ECS Fargate service
Creating a GitHub Personal Access Token
We will create a personal access token. Docker will use it to clone the application code repository when we build our Docker image.
Go to Developer settings under Settings page in your GitHub account.
Under Personal access tokens, go to Tokens (classic), click Generate new token, and select Generate new token (classic).
Provide a Note and Expiration, select the repo scope, and click Generate token. Copy the token and save it in a safe place, as you will not have access to it again after you leave this page.
Creating GitHub Repository Secrets
We will create the repository secrets that the GitHub Actions jobs need to build our CI/CD pipeline for this project.
Open the repository secrets.txt file from my GitHub repository. This contains the keys for the secrets we want to store in our repository secrets. Copy the reference file and paste it into your notepad.
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
ECR_REGISTRY
PERSONAL_ACCESS_TOKEN
RDS_DB_NAME
RDS_DB_PASSWORD
RDS_DB_USERNAME
Go to your GitHub repository for this project. Select Settings, Secrets and variables, then click Actions. Under Secrets tab, click New repository secret.
Provide the name and secret value for each key. Do this for each of the keys above, then click Add secret.
Once completed, it should look like this. This is all we need to do to create the secrets that our GitHub Actions jobs need to build our CI/CD pipeline.
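If you'd rather script this than click through the UI, the GitHub CLI has a gh secret set command. Here is a sketch shown as a dry run that only prints the commands; the repository path is a placeholder, and you would need gh installed and authenticated (gh auth login) to run them for real:

```shell
# Dry run: build one `gh secret set` command per key and print them.
repo="<your-username>/rentzone-github-actions-terraform-ecs-project"
cmds=""
for key in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY ECR_REGISTRY \
           PERSONAL_ACCESS_TOKEN RDS_DB_NAME RDS_DB_PASSWORD RDS_DB_USERNAME; do
  cmds="${cmds}gh secret set ${key} --repo ${repo}
"
done
printf '%s' "$cmds"   # seven commands, one per repository secret
```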
Creating a GitHub Actions Workflow File
To start building the CI/CD pipeline to deploy our application, we must create a Workflow File.
Workflow files use YAML syntax and must have either a .yml or .yaml file extension. You must store workflow files in the .github/workflows directory of your repository.
Open your repository for this project in Visual Studio Code. Create a new folder named .github then create another folder named workflows within this folder.
Right-click on your workflows folder, select New File, and name it deploy_pipeline.yml.
This is how you create the Workflow File that GitHub Actions will use to build the CI/CD pipeline.
Creating a GitHub Actions Job to configure AWS credentials
We will create the first job in our pipeline. This job will be responsible for configuring our IAM credentials to verify our access to AWS and authorize our GitHub Actions jobs to create new resources in our AWS account.
Open your project folder in Visual Studio Code and select deploy_pipeline.yml to open it. Open the reference file that I created in my GitHub repository, copy the syntax, paste it into your deploy_pipeline.yml file, then save your work. To learn more about the syntax that we are using here, visit the GitHub Actions documentation.
name:
on:
  push:
    branches:
env:
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_ACCESS_KEY:
  AWS_REGION:
jobs:
  # Configure AWS credentials
  configure_aws_credentials:
    name: Configure AWS credentials
    runs-on:
    steps:
      - name:
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          aws-access-key-id:
          aws-secret-access-key:
          aws-region:
After creating the job that will configure our AWS credentials, our deploy_pipeline.yml should look like this
Go ahead and save your work and push the updates to your GitHub repository. Remember that our pipeline will trigger when we push our updates into the main branch.
Creating a GitHub Actions Job to deploy AWS Infrastructure
The next job in our pipeline will use Terraform and the Ubuntu GitHub Hosted Runner to build our infrastructure in AWS. This job will apply our Terraform code and create the following:
- VPC with public and private subnets
- Internet gateway
- NAT gateways
- Security groups
- Application load balancer
- RDS instance
- S3 bucket
- IAM role
- ECS cluster
- ECS task definition
- ECS service and
- Record set in Route 53
Open the reference file that I created in my GitHub repository. Copy it and paste it on line 26 of your workflow file in your project folder in Visual Studio Code.
Let's set the environment variables for our deploy_aws_infrastructure job. From line 49 of the reference file, copy the environment key TERRAFORM_ACTION. Add a new line and paste it below line 10. We will run the terraform apply command to deploy our infrastructure in AWS.
To know more about all the Actions we used in this job visit the documentation here and here.
This job will get the outputs from our Terraform deployment so that our GitHub Actions jobs can use them. In our Terraform project, we use the output.tf file to export outputs such as our image name, domain name, RDS endpoint, image tag, and so on (see your output.tf file).
For every output we've listed in our output.tf file, we have to create a step in our GitHub Actions job to capture that output.
This is how you can get the output from your Terraform deployment and use it in your GitHub Actions job. The same concept applies to the rest of the following values.
At the moment, our deploy_pipeline.yml should look like this
To finish our GitHub Actions job for building our AWS infrastructure, we will create an outputs block to export some environment variables from this job so that the following GitHub Actions jobs can use them.
Save your work. To deploy this GitHub Actions job to create our infrastructure in AWS, go ahead and commit and sync your changes to your GitHub repository. This is going to start our build pipeline.
You can verify that all the resources from the reference architecture have been properly applied to your AWS account.
The ECS service is going to fail for now because we haven't built our Docker image yet; hence, there isn't an image attached to it.
Creating a GitHub Actions Job to destroy AWS Infrastructure
This is the second part of the previous tutorial. We will use the GitHub Actions job we created in the last tutorial to destroy the resources we created in our AWS account.
Open your workflow file in Visual Studio Code. On line 11, replace apply with destroy. Let's save our work, then push our update to our GitHub repository to trigger our pipeline. Afterwards, let's go to our GitHub account.
Creating a GitHub Actions Job to create an Amazon ECR repository
We will create the next job in our pipeline to create a repository in Amazon ECR which we will use to store our Docker image for this project.
Copy the reference file from my GitHub repository, then paste all of it at line 144 (ensure that your cursor is at the beginning of the line before pasting) in your workflow file in Visual Studio Code.
Let’s modify the steps.
Line 156: we want to check whether the ECR repository we are creating already exists in our AWS account before we run the next command; otherwise, the pipeline will fail.
Line 158: we're setting an environment variable IMAGE_NAME, getting its value from the build-AWS-infrastructure job output IMAGE_NAME as defined in our Terraform project (terraform.tfvars line 34), and dynamically referencing it in our CI/CD pipeline.
Line 159: we are running a CLI command to check whether our repository exists. If so, we get its name, store it in repo_name, and export it to our GitHub Actions environment file.
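To make the check easier to follow without calling AWS, here is the same jq logic run against a canned describe-repositories response (the sample JSON is illustrative; in the job, the value comes from the real aws ecr describe-repositories call):

```shell
IMAGE_NAME="rentzone-app"
# Canned response in the shape `aws ecr describe-repositories` returns.
sample='{"repositories":[{"repositoryName":"rentzone-app"}]}'
repo_name=$(echo "$sample" | jq -r '.repositories[0].repositoryName')
if [ "$repo_name" != "$IMAGE_NAME" ]; then
  echo "would run: aws ecr create-repository --repository-name $IMAGE_NAME"
else
  echo "repository $repo_name already exists; skipping creation"
fi
```

Since the names match here, the create step would be skipped, which is exactly what the `if: env.repo_name != env.IMAGE_NAME` condition does in the job.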
We've completed the job that we need. Copy and paste this into your workflow file.
# Create ECR repository
create_ecr_repository:
  name: Create ECR repository
  needs:
    - configure_aws_credentials
    - deploy_aws_infrastructure
  if: needs.deploy_aws_infrastructure.outputs.terraform_action != 'destroy'
  runs-on: ubuntu-latest
  steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    - name: Check if ECR repository exists
      env:
        IMAGE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.image_name }}
      run: |
        result=$(aws ecr describe-repositories --repository-names "${{ env.IMAGE_NAME }}" | jq -r '.repositories[0].repositoryName')
        echo "repo_name=$result" >> $GITHUB_ENV
      continue-on-error: true

    - name: Create ECR repository
      env:
        IMAGE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.image_name }}
      if: env.repo_name != env.IMAGE_NAME
      run: |
        aws ecr create-repository --repository-name ${{ env.IMAGE_NAME }}
This is all we need to do to create the GitHub Actions job that we will use to create our repository in Amazon ECR. Don't forget to update your TERRAFORM_ACTION back to apply before saving your work. Commit and sync your changes to push your updates to your GitHub repository and trigger your pipeline.
Creating a Self-Hosted Runner
For the next job in our pipeline, we will start a self-hosted EC2 runner in the private subnet. We will use this runner for two things in our pipeline.
- We will use it to build a Docker image and push it to the Amazon ECR repository we created previously.
- We will also use it to run our database migration with Flyway.
The reason we are using a self-hosted runner to complete these jobs is that launching an EC2 runner in our private subnet allows the runner to easily access the resources in that subnet.
To know more about the Actions that we will use to create our self-hosted runner, visit the How to start section in the action's documentation.
Creating Key Pairs
To create a .pem keypair visit my tutorial here where I discussed it thoroughly.
Two keys will be generated: the public and the private key. The key that is downloaded to your computer is the private key. Make sure to move it to the same directory your PowerShell terminal opens to.
Launching an Amazon Linux 2 EC2 instance
To create an AMI that we will use to start our self hosted EC2 runner, we need to launch an EC2 instance in a public subnet.
Use these settings to launch your EC2 instance.
Name: GitHub Action Runner
AMI: Amazon Linux 2 AMI
Instance type: t2.micro
Key pair: Select your key pair
VPC: default
Security Group: SSH with your IP as a source
SSH into an EC2 instance for Windows and Mac users
Next, we will SSH into the EC2 instance we just created using PowerShell or Terminal.
Installing Docker and Git on the EC2 instance
We will install Docker and Git, and enable the Docker service, on our EC2 instance.
Run these commands in your PowerShell/Terminal.
sudo yum update -y && \
sudo yum install docker -y && \
sudo yum install git -y && \
sudo yum install libicu -y && \
sudo systemctl enable docker
Creating an Amazon Machine Image (AMI) and terminating the EC2 instance
We're going to use this EC2 instance to create an AMI so that our GitHub Actions job can use that AMI to start our self-hosted runner.
Create an image from the EC2 instance using the information below:
Image name and description: GitHub Action Runner
Storage size: 20
Tag: GitHub Action Runner
Lastly, let’s terminate our EC2 instance.
Creating a GitHub Actions Job to start a Self-Hosted Runner
Open your workflow file (deploy_pipeline.yml) for this project in Visual Studio Code and copy the reference file from my GitHub repository. Paste it under your last job # Create ECR Repository.
We will start our self-hosted runner in the private subnet, and for that subnet to be available, we first need to deploy our AWS infrastructure. That is why this job depends on the deploy_aws_infrastructure job.
Copy the value from line 150 and paste it here.
This step checks whether we already have an EC2 instance running tagged as ec2-github-runner. If so, it writes runner-running=true to our GitHub Actions environment file; otherwise, it writes runner-running=false.
steps:
  - name: Check for running EC2 runner
    run: |
      instances=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=ec2-github-runner" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].InstanceId' --output text)
      if [ -n "$instances" ]; then
        echo "runner-running=true" >> $GITHUB_ENV
      else
        echo "runner-running=false" >> $GITHUB_ENV
      fi
id — a way to tag this step so that we can reference it in other steps or jobs.
uses — this step uses the machulav/ec2-github-runner@v2 action. This is the action we use to launch the EC2 runner; we had to create the AMI before we could use it. Visit here for more information.
github-token — your personal access token from the secrets in your repository.
ec2-image-id — the ID of the AMI we created in our AWS account.
subnet-id and security-group-id — we export their IDs from the deploy_aws_infrastructure job.
Our deploy_pipeline.yml should look like this
This is all we need to do to create the job that will start the self-hosted EC2 runner. Save your work, then push your updates to your GitHub repository to trigger your pipeline.
Remember that we are storing our state file in S3, so when we run the Build AWS infrastructure job, it will check the state of our Terraform deployment in S3, realize that our state is already up to date, and move on to the next job.
We should now have a runner in our GitHub account and an EC2 instance running in our AWS account.
This is how you create a self-hosted EC2 runner.
Creating a GitHub Actions Job to Build a Docker Image
The next job in our pipeline will build the Docker image for our application and push the image to the Amazon ECR repository we created in the previous step.
Prerequisites:
- Set up a repository to store the application code.
- Add the application code to the repository.
- Create a Dockerfile that our job will use to build the Docker image for our application.
- Create the AppServiceProvider.php file that we'll update in our application code to allow our application to redirect HTTP traffic to HTTPS.
We already covered all these topics in my previous project, when we deployed this application in the AWS Management Console. You must complete that project first before starting this CI/CD project; it will help you gain a better understanding.
Creating a repository to store the application code
We will create a repository to store our application code then we will clone the repository on our computer.
For more details please visit here where I discussed it thoroughly.
Adding the application code to the GitHub repository
Next, we will add our application code to that repository and push the changes back to GitHub.
For more details please visit here where I discussed it thoroughly.
Creating the Dockerfile
We will create the Dockerfile that our build job will use to build the Docker image for our application.
Copy the reference file from my GitHub repository and paste it in your Dockerfile then save your work. For more details please visit here where I discussed it thoroughly.
Creating the AppServiceProvider.php file
We will create the AppServiceProvider.php file that our application needs to redirect HTTP traffic to HTTPS. Create a new file in your project folder named AppServiceProvider.php, then paste in the reference file from my GitHub repository. This contains the code that we need to add to our AppServiceProvider.php file. Save your work. We won't push our updates to our GitHub repository yet.
Creating a GitHub Actions job to build and push a Docker Image to Amazon ECR
Open your deploy_pipeline.yml workflow file. Before we create the job that will build and push our Docker image to Amazon ECR, we need to enter some environment variables; access my reference file from my GitHub repository for these. Next, copy the second reference file that I created in my GitHub repository.
This contains the job that we will use to build the Docker image for our application and push it to Amazon ECR repository.
This is all we need to do to create the build job that builds the Docker image for our application and pushes it to Amazon ECR. Save your work and push your updates to your GitHub repository to trigger the pipeline.
This is how you create a GitHub Actions job to build a Docker image and push it to an Amazon ECR repository.
Creating a GitHub Actions Job to export the environment variables into the S3 bucket
The next job that we will create in our pipeline will store all the build arguments we used to build the Docker image in a file; afterwards, the job will copy the file to the S3 bucket so that the ECS Fargate containers can reference the variables stored in it. Access my reference file from my GitHub repository and copy it to your workflow file.
Below is how we referenced the values, similar to what we did in the previous steps.
# Create environment file and export to S3
export_env_variables:
  name: Create environment file and export to S3
  needs:
    - configure_aws_credentials
    - deploy_aws_infrastructure
    - start_runner
    - build_and_push_image
  if: needs.deploy_aws_infrastructure.outputs.terraform_action != 'destroy'
  runs-on: ubuntu-latest
  steps:
    - name: Export environment variable values to file
      env:
        DOMAIN_NAME: ${{ needs.deploy_aws_infrastructure.outputs.domain_name }}
        RDS_ENDPOINT: ${{ needs.deploy_aws_infrastructure.outputs.rds_endpoint }}
        ENVIRONMENT_FILE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.environment_file_name }}
      run: |
        echo "PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }}" > ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "GITHUB_USERNAME=${{ env.GITHUB_USERNAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "REPOSITORY_NAME=${{ env.REPOSITORY_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "WEB_FILE_ZIP=${{ env.WEB_FILE_ZIP }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "WEB_FILE_UNZIP=${{ env.WEB_FILE_UNZIP }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "DOMAIN_NAME=${{ env.DOMAIN_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_ENDPOINT=${{ env.RDS_ENDPOINT }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_NAME=${{ secrets.RDS_DB_NAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_USERNAME=${{ secrets.RDS_DB_USERNAME }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}
        echo "RDS_DB_PASSWORD=${{ secrets.RDS_DB_PASSWORD }}" >> ${{ env.ENVIRONMENT_FILE_NAME }}

    - name: Upload environment file to S3
      env:
        ENVIRONMENT_FILE_NAME: ${{ needs.deploy_aws_infrastructure.outputs.environment_file_name }}
        ENV_FILE_BUCKET_NAME: ${{ needs.deploy_aws_infrastructure.outputs.env_file_bucket_name }}
      run: aws s3 cp ${{ env.ENVIRONMENT_FILE_NAME }} s3://${{ env.ENV_FILE_BUCKET_NAME }}/${{ env.ENVIRONMENT_FILE_NAME }}
Let's define the values below. After the first step stores the key-value pairs above in our environment file, the next step uploads that file to our S3 bucket.
Let’s save our work and push our update in to our GitHub repository to trigger our pipeline.
Creating the SQL folder and adding the SQL script
The next job that we will create in our pipeline will use Flyway to migrate the SQL data for our application into the RDS database. First, we need to add the SQL script we want to migrate into our RDS database to our project folder. Create a new folder named sql in your root project folder, then access my reference file and download the SQL script.
Creating a GitHub Actions Job to Migrate Data into the RDS database with Flyway
In the next step of our pipeline, we will use Flyway to transfer the SQL data for our application into the RDS database. This involves setting up Flyway on our self-hosted runner and using it to move the data into the RDS database.
You can find the latest Flyway version here.
Access my reference file from my GitHub repository and paste it into your project folder as shown.
This step will run the wget command to download Flyway onto our self-hosted runner.
wget -qO- https://repo1.maven.org/maven2/org/flywaydb/flyway-commandline/${{ env.FLYWAY_VERSION }}/flyway-commandline-${{ env.FLYWAY_VERSION }}-linux-x64.tar.gz | tar xvz && sudo ln -s `pwd`/flyway-${{ env.FLYWAY_VERSION }}/flyway /usr/local/bin
This is all we need to do to create the command that downloads Flyway onto our self-hosted runner, and the way we have written the command allows us to dynamically change the version of Flyway we want to download.
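For instance, only the FLYWAY_VERSION value needs to change to pull a different release, since it is interpolated into the download URL twice (the version below is just an example):

```shell
FLYWAY_VERSION="9.16.1"   # example version only; pick whichever release you need
# The same interpolation the pipeline step performs on the Maven Central URL.
url="https://repo1.maven.org/maven2/org/flywaydb/flyway-commandline/${FLYWAY_VERSION}/flyway-commandline-${FLYWAY_VERSION}-linux-x64.tar.gz"
echo "$url"
```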
For more information on basic Linux commands visit here.
Let’s save our work and push our update to our GitHub project repository to trigger our pipeline.
Terminating the Self-Hosted Runner in the AWS management console
The next job that we will create in our pipeline will be used to stop the self-hosted runner. We launched the self-hosted runner to build our Docker image and migrate the data for our application into the RDS database. Now that those tasks are complete, we will terminate the self-hosted runner immediately.
First, terminate the self-hosted runner that is currently running in the management console.
Creating a GitHub Actions Job to Stop the Self-Hosted Runner
We will create the job that we will use to stop the self-hosted EC2 runner. Access my reference file in my GitHub repository and copy it into your workflow file just below the last job we created.
Our # Stop the self-hosted EC2 runner job should look like this after updating the values.
This is all we need to do to create the job that will stop the self-hosted runner we are using to build our Docker image and migrate our data into the RDS database. Let's save our work and push our updates to our GitHub repository to trigger our pipeline.
Creating a GitHub Actions Job to Create a new ECS task definition revision
In the build-and-push-image job in our pipeline, we successfully built the Docker image for our application and pushed it to Amazon ECR. In the next job that we will create in our pipeline, we will update the task definition for the ECS service hosting our application with the new image we pushed to Amazon ECR.
Access my reference file in my GitHub repository. Open your workflow file deploy_pipeline.yml and paste it on line 364 just below the last job we created.
We only have one step in this job. In this step, we create a new task definition revision, and we've also created environment variables for it. This is how our step should look. Add ${{ }}/ on line 382.
steps:
  - name: Create new task definition revision
    env:
      ECS_FAMILY: ${{ needs.deploy_aws_infrastructure.outputs.task_definition_name }}
      ECS_IMAGE: ${{ secrets.ECR_REGISTRY }}/${{ needs.deploy_aws_infrastructure.outputs.image_name }}:${{ needs.deploy_aws_infrastructure.outputs.image_tag }}
The following names are specified in the deploy_aws_infrastructure job.
ECS_FAMILY — the name of the task definition we want to create a revision of.
ECS_IMAGE — the name of our container image.
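One common pattern for such a step (not necessarily the exact commands in the reference file) is to take the current task definition JSON, swap in the new image with jq, and register the result. A sketch against a canned task definition, with placeholder names and registry:

```shell
# Canned task definition; in the job it would come from
# `aws ecs describe-task-definition --task-definition "$ECS_FAMILY"`.
ECS_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/rentzone-app:latest"
task_def='{"family":"rentzone-task","containerDefinitions":[{"name":"app","image":"old-image"}]}'
# Replace the container image with the one just pushed to ECR.
new_task_def=$(echo "$task_def" | jq --arg img "$ECS_IMAGE" '.containerDefinitions[0].image = $img')
echo "$new_task_def" | jq -r '.containerDefinitions[0].image'
# The updated JSON would then be fed to `aws ecs register-task-definition`,
# which creates the new revision.
```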
This is all we need to do to create the job that we will use to create the new task definition revision for our ECS service. Let's save our work, then push our new updates to our GitHub repository to trigger our pipeline.
Creating a GitHub Actions Job to restart the ECS Fargate Service
We will create the final job in our pipeline, which will restart the ECS service and force it to use the latest task definition revision we created previously.
Access my reference file in my GitHub repository. Open your workflow file deploy_pipeline.yml and paste it on line 405 just below the last job we created.
This is all we need to do to create the job that will restart the ECS Fargate service. Let's save our work, then push our new updates to our GitHub repository to trigger our pipeline.
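Under the hood, restarting the service comes down to a single CLI call that forces a new deployment with the latest task definition revision. A sketch, printed as a dry run; the cluster and service names are placeholders, and in the real job they come from the deploy_aws_infrastructure outputs:

```shell
ECS_CLUSTER="rentzone-cluster"   # placeholder; from the infrastructure job outputs
ECS_SERVICE="rentzone-service"   # placeholder; from the infrastructure job outputs
cmd="aws ecs update-service --cluster $ECS_CLUSTER --service $ECS_SERVICE --force-new-deployment"
echo "$cmd"   # dry run: drop the echo (with credentials configured) to run it
```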
This means that our ECS task has been successfully deployed and we can now access our application through our domain name.
To access our application, open a new web browser and type your domain name in the address bar.
There you have it! We can now access our application.
Conclusion and Clean Up
This is how you create a full CI/CD project that walks you from beginning to end through deploying an application on AWS. This is an excellent project to present to hiring managers and employers because it covers many aspects of the skills you need as a Cloud and DevOps Engineer.
This project covers:
- How to deploy applications on AWS using core AWS services such as VPC, public and private subnets, NAT gateways, security groups, application load balancer, RDS, ECS, ECR, auto scaling group, Route 53, and more.
- Containerization, showing your skills in building a Docker image and pushing it to Amazon ECR.
- How to deploy applications on AWS with Infrastructure as Code (Terraform).
- How to deploy dynamic applications on AWS using a CI/CD pipeline and GitHub Actions.
To prevent unnecessary AWS costs from running your services, on line 11 under Deploy Pipeline, replace apply with destroy. Next, save your workflow and push the update to your GitHub repository to trigger the pipeline and destroy all the resources we've used for this project.
Congratulations!
Thank you for following along and stay tuned for my next project.