CI/CD on AWS using IaC, with a Dash of Security

Minn
11 min read · Dec 10, 2023


Scenario

An organization intends to host its three-tier web application on AWS. It also aims to implement CI/CD to ensure seamless execution of business processes and updates. Additionally, the architecture must be secure, and the infrastructure should be replicable across different environments.

CAUTION

This project simulates an environment, and its architecture may lack some best practices. Do not use it for production environments.

Architecture overview

[Image: architecture diagram]

Prerequisites

  1. AWS account (Create one)
  2. Valid domain
  3. Terraform local setup (tf setup)
  4. AWS CLI in us-east-1 with admin privileges (aws setup)
  5. Fundamental knowledge of IaC, Git, DNS, and AWS

Technologies

  • AWS
  • Terraform
  • Node.js
  • CloudFlare

Before we proceed, note that this article is technically focused and may not be entirely beginner-friendly. To streamline the content, I have chosen to exclude certain explanations, such as 'Why IaC?', 'What is CI/CD?', and so on.

1. DNS setup

CloudFlare and Route 53 Configuration
Visit the CloudFlare official website, create an account, and add your domain. Follow the provided instructions to replace your domain's name servers with the assigned CloudFlare values. I registered my domain with PorkBun, so I will add my assigned values as shown below.

[Image: name server setup]

After 5–15 minutes, your domain will show as active on CloudFlare if you set up the name servers correctly.

After the CloudFlare setup, go to the Route 53 console and create a public hosted zone with your domain name. For example, if your domain is helloworld.com, your public hosted zone must also be named helloworld.com. After creating the hosted zone, you will see four AWS name servers. Add these values to CloudFlare.

Under DNS > Records in the CloudFlare dashboard, click Add record.

  • Type: NS
  • Name: @
  • Nameserver: Fill in your Route 53 values
  • TTL: Auto

After this, your DNS will have four Route 53 name servers.

SSL Certificate Creation
Now, we will create an SSL certificate with CloudFlare and import it into AWS Certificate Manager. This certificate will be used by our load balancer's HTTPS listener.
Go to SSL/TLS > Origin Server in CloudFlare and create a certificate with the following specifications.

  • Private key type: RSA (2048)
  • Hostnames: *.yourdomain.com, yourdomain.com

Afterwards, go to the ACM console and import the certificate.
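
If you prefer to keep this step in code as well, the certificate can be imported with Terraform instead of through the console. Here is a minimal sketch, assuming you saved the CloudFlare certificate and private key locally; the file names and resource label are placeholders, not part of the project repository.

# Hypothetical: import the CloudFlare origin certificate into ACM.
# cert.pem and key.pem are the files downloaded from CloudFlare (assumed names).
resource "aws_acm_certificate" "cloudflare_origin" {
  certificate_body = file("${path.module}/cert.pem")
  private_key      = file("${path.module}/key.pem")
}

output "imported_certificate_arn" {
  value = aws_acm_certificate.cloudflare_origin.arn
}

Either way, note down the certificate ARN; we will reference it in the networking module later.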

2. Edit IaC configurations

Setup project directory

Clone my repository containing Terraform configurations.

git clone https://github.com/YU88John/codepipeline-proj1.git

My approach to this project’s file hierarchy is modular, and it looks like this.

codepipeline-proj1/
└── terraform/
    ├── main.tf
    ├── providers.tf
    └── modules/
        ├── compute/
        ├── database/
        ├── monitoring/
        └── networking/

Each module has its own main.tf, outputs.tf, and variables.tf. We deploy the modules by calling them from the root main.tf, which also injects the necessary variables.
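
For illustration, a call from the root main.tf to one of the modules might look like the sketch below; the input names are placeholders, so check each module's variables.tf for the real ones.

# Hypothetical sketch of module calls in the root main.tf.
# The actual inputs and outputs differ per module.
module "networking" {
  source   = "./modules/networking"
  vpc_cidr = "10.0.0.0/16"            # example value
}

module "compute" {
  source     = "./modules/compute"
  ami_id     = var.ami_id                            # the AMI we create in the next step
  subnet_ids = module.networking.private_subnet_ids  # assumed module output
}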

Before deploying this infrastructure, we need to make some configuration edits.

Create an AMI

We require our EC2 instances to have the CodeDeploy and CloudWatch agents pre-installed.

  • First, manually create a 't2.micro' EC2 instance with the 'Amazon Linux 2' image. Add the user data located inside /compute/code-deploy-agent.sh of the project directory.
  • Ensure your security group allows inbound traffic on port 22.
  • SSH into the instance via a local shell or Instance Connect, then check whether the CodeDeploy agent was installed successfully. It should show 'running as PID (…)'.

sudo service codedeploy-agent status

  • Next, install and start the CloudWatch agent with the commands below. In the /compute directory, there is another file called 'cloudwatch-config.json'. Paste the configuration inside that file into the 'config.json' we create below. This configuration file tells the CloudWatch agent which metrics to collect and deliver. Read more about CloudWatch configurations here.

sudo yum install -y amazon-cloudwatch-agent

sudo nano /opt/aws/amazon-cloudwatch-agent/bin/config.json

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s

sudo systemctl start amazon-cloudwatch-agent.service

sudo systemctl status amazon-cloudwatch-agent.service

If everything is working fine, create an AMI from this instance.

  • Actions > Image and templates > Create image
  • Make sure to disable reboot

Replace your AMI ID and certificate ARN

  • Copy the ID of your newly created AMI and replace the existing value in /terraform/main.tf
  • Paste your imported certificate ARN in modules/networking/main.tf (see the sketch below)
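
For orientation only, here is a hedged sketch of where these two values typically land; the actual resource and variable names in the repository may differ.

# /terraform/main.tf (hypothetical)
module "compute" {
  source = "./modules/compute"
  ami_id = "ami-0123456789abcdef0"   # replace with your AMI id
}

# modules/networking/main.tf (hypothetical HTTPS listener on the ALB)
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.lab_alb.arn                # assumed resource label
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "arn:aws:acm:us-east-1:123456789012:certificate/example"  # your imported ACM ARN

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.lab_tg.arn   # assumed resource label
  }
}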

Clone the directory for CI/CD

When creating the CodePipeline, you will need a source repository that triggers the pipeline and contains the app, 'buildspec.yml', and 'appspec.yml'. Clone the repository that contains the application and CI/CD configuration files.

git clone https://github.com/YU88John/nodeapp.git

This repository contains the Node app (app.js) and its dependency file (package.json), the buildspec.yml used by CodeBuild, and the appspec.yml and shell scripts used by CodeDeploy.

CodeBuild will build the artifacts and store them in an S3 bucket. These artifacts include appspec.yml, the Node app, and its dependency package. CodeDeploy will use those artifacts to deploy the application to the EC2 instances.

Here’s the overview.

nodeapp/
├── scripts/
│   ├── install-dependencies.sh
│   └── start-app.sh
├── app.js
├── appspec.yml
├── buildspec.yml
└── package.json

3. Deploy and configure the infrastructure

Make sure you are in the /terraform directory. Let’s deploy the infrastructure now.

This is the providers.tf for this project. Your local Terraform version should match the required_version, or adjust it accordingly.

terraform {
  required_version = "1.6.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

Now, let’s apply it.

terraform init

terraform validate

terraform plan

terraform apply

This may take 10–20 minutes. Wait until everything is applied successfully!
For now, the load balancer target groups will show 'Unhealthy' because we haven't deployed an app on our EC2 instances yet.

Configure RDS endpoint

In the Terraform outputs section of your terminal, you can find the RDS endpoint. Copy it and paste it into nodeapp/app.js. Our Node app will use these details to communicate with the backend, which in this project is a multi-AZ RDS instance.

Note: Hard-coding connection details like this is bad practice. In production environments, use a credential or secrets manager, or at least export the Terraform output directly into the app.
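
One hedged alternative, assuming you are willing to extend the Terraform configuration: publish the endpoint to SSM Parameter Store (or a secrets manager) and have the app read it at startup instead of hard-coding it. A minimal sketch:

# Hypothetical: store the RDS endpoint in SSM Parameter Store.
# Resource labels are placeholders; the instance role also needs ssm:GetParameter.
resource "aws_ssm_parameter" "db_endpoint" {
  name  = "/nodeapp/db_endpoint"
  type  = "String"
  value = aws_db_instance.lab_rds.endpoint   # assumed label of the RDS resource
}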

Afterward, create a new GitHub repository and push every file under the /nodeapp directory. We will use this new repository as the source for our Code Pipeline.

Create SNS topic

For the manual approval stage in the pipeline, we will need an SNS topic. Go to the SNS console and create a topic and an email subscription. Confirm the subscription email afterward.
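
If you prefer to keep the topic in code as well, it is a two-resource addition in Terraform; a minimal sketch, with placeholder topic name and email address:

# Hypothetical SNS topic and email subscription for pipeline approvals.
resource "aws_sns_topic" "pipeline_approvals" {
  name = "pipeline-approvals"
}

resource "aws_sns_topic_subscription" "approver_email" {
  topic_arn = aws_sns_topic.pipeline_approvals.arn
  protocol  = "email"
  endpoint  = "you@example.com"   # replace; the confirmation email is still required
}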

Point the Domain to Load Balancer

Go to your CloudFlare dashboard, under DNS > Records, click Add record.

  • Type: CNAME
  • Name: @
  • Target: Application load balancer’s DNS

The configuration should look like this.

[Image: CNAME record example]

Finally, since we want end-to-end traffic encryption, select 'Full' under SSL/TLS > Overview.

[Image: SSL 'Full' encryption setting]

4. Construct CI/CD Pipeline

Our Terraform configuration has already created the CodeDeploy application that we will use with CodePipeline. So, let's go and build the pipeline.

Go to the CodePipeline console and click Create pipeline.

Step 1

  • We will use 'V2' for the Pipeline type
  • Create a service role with the necessary permissions
  • Under Advanced settings, make sure the Artifact store is set to the default location

Step 2

  • Source provider: GitHub (Version 2)
  • Create a GitHub connection. This will redirect you to GitHub's OAuth authentication for your repository. Follow the straightforward instructions from AWS.

I would suggest installing the connector only on your source repository.

  • For the trigger, select ‘Push in a branch’
  • The branch name must be ‘main’ or your default branch
  • For Output artifact format, select CodePipeline default

Step 3
Create a CodeBuild project using the hyperlink button.

  • Environment image: Managed Image
  • Compute: EC2
  • Operating system: Amazon Linux
  • Runtime: Standard
  • Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
  • Image version: Always use the latest image for this runtime version
  • Create a service role with your desired name
  • Buildspec name: buildspec.yml
  • Enable CloudWatch logs

Step 4

  • Deploy provider: AWS CodeDeploy
    In the dropdown menus, the application and deployment group created by Terraform will be readily available. Make sure the region is 'US East (N. Virginia)'.

The deployment group is configured to drain instances out of the target group during deployment. Moreover, only half of the instances are deployed to at a time, ensuring there is no downtime in serving the website. This part of the Terraform configuration does that job:

resource "aws_codedeploy_deployment_group" "lab-codedeploy-deployment-group" {
app_name = aws_codedeploy_app.lab-codedeploy-app.name
deployment_config_name = "CodeDeployDefault.HalfAtATime"
deployment_group_name = "LabCodeDeployDeploymentGroup"
service_role_arn = aws_iam_role.lab_codedeploy_role.arn
autoscaling_groups = [aws_autoscaling_group.lab-asg.name]

deployment_style {
deployment_type = "IN_PLACE"
deployment_option = "WITH_TRAFFIC_CONTROL"
}

load_balancer_info {
target_group_info {
name = var.tg_name
}
}
}

Step 5
Review your configurations and create the pipeline once you are satisfied.

The pipeline will start executing now, and as soon as it succeeds, you can access the website via your domain name. Additionally, the HTTPS listener will work as expected because we already configured the port 443 listener with our CloudFlare certificate.

Add Pipeline Status Notification (optional)

You can configure the pipeline to send notifications for events such as execution failed/succeeded, stage succeeded, and so on. You can conveniently add this under the Notify section of the CodePipeline console.
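
If you later want this in code too, the AWS provider exposes a notification rule resource. A hedged sketch, assuming the pipeline and SNS topic are managed in your Terraform configuration (verify the exact event type IDs in the console):

# Hypothetical notification rule sending pipeline events to an SNS topic.
resource "aws_codestarnotifications_notification_rule" "pipeline_events" {
  name        = "lab-pipeline-notifications"
  resource    = aws_codepipeline.lab_pipeline.arn   # assumed pipeline resource label
  detail_type = "BASIC"

  event_type_ids = [
    "codepipeline-pipeline-pipeline-execution-succeeded",
    "codepipeline-pipeline-pipeline-execution-failed",
  ]

  target {
    address = aws_sns_topic.pipeline_approvals.arn  # assumed SNS topic label
  }
}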

Add manual approval stage

While it's commendable that we've established a CI/CD pipeline triggered by each update to the main branch, it's crucial to introduce a validation step before approving the build and deployment.

In a comprehensive operational setting, pipelines typically include Quality Assurance (QA) steps such as SonarQube, Talisman, and more. However, for this project, we’ll simplify it by implementing a manual approval through email.

Click Edit on your pipeline and add a stage after Source. In the new stage, add a Manual approval action. Paste your SNS topic ARN and leave the rest of the configuration at its default values. Save everything, and you will see the new stage on your pipeline console.
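
For reference, if the pipeline itself were defined in Terraform, the equivalent approval stage would look roughly like the sketch below. This is a fragment that belongs inside an aws_codepipeline resource, which this project creates through the console instead.

# Hypothetical manual-approval stage inside an aws_codepipeline resource.
stage {
  name = "Approval"

  action {
    name     = "ManualApproval"
    category = "Approval"
    owner    = "AWS"
    provider = "Manual"
    version  = "1"

    configuration = {
      NotificationArn = aws_sns_topic.pipeline_approvals.arn   # assumed SNS topic label
    }
  }
}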

Test run the pipeline

In your IDE, open the folder containing your Git repository (the trigger point of the pipeline). Make a small change to the response text in app.js.

Push the changes to your repository.

git add -A

git commit -m "SNS test run"

git push origin main

Now, if you go to CodePipeline, you will see that the pipeline is paused at the approval stage. At the same time, you will receive an approval notification at your subscribed email address. Open the link in the email, then click the approval stage to approve or reject the deployment.

After the pipeline executes successfully, refresh the browser tab and you will see the updated code.

5. Add security to the Web Application

So far, we have finished building a three-tier web application. We successfully deployed the Node application, served it through our domain name, and added CI/CD to our architecture.

Furthermore, it's important to add security before traffic can reach critical systems such as servers and databases. To prevent application-layer attacks such as cross-site scripting (XSS), we can use AWS WAF. Read more about WAF here.

Create an IP set

Before we attach any web ACLs, we have to create an allowed (or denied) IP set. Head over to the WAF console and go to IP sets.

Create an IP set that contains your public IP. You can check your public IP with the following command if you are using Linux.

curl ifconfig.me

Then, add it to the IP addresses list with '/32' appended. In CIDR notation, '/32' refers to a single IP address.
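
For completeness, the same IP set can be expressed in Terraform; a minimal sketch with placeholder name and address:

# Hypothetical WAFv2 IP set containing a single public IP.
resource "aws_wafv2_ip_set" "allowed_ips" {
  name               = "allowed-ips"
  scope              = "REGIONAL"           # required for attaching to an ALB
  ip_address_version = "IPV4"
  addresses          = ["203.0.113.10/32"]  # replace with your public IP
}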

Create a web ACL

Now that we have our IP set, it's time to create a web ACL that uses it. Head over to the Web ACLs tab of the WAF console.

  • Resource type: Regional resources
  • Region: US East (N. Virginia)

Under Add AWS resources, choose the load balancer created by Terraform.

After that, click Next and let's add a rule.

  • Add rules > Add my own rules and rule groups
  • Choose your previously created IP set and set the action to 'Allow'
  • Click Save

In the same step, select 'Block' as the default action for requests that don't match any rules. For the blocked traffic, you can set a custom response.

Accept default values for all the other steps and click on Create. This will take 1–3 minutes.
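
The console flow above maps to two Terraform resources: a web ACL with an IP-set rule, plus an association to the ALB. A hedged sketch, with placeholder resource labels:

# Hypothetical web ACL: allow the IP set, block everything else.
resource "aws_wafv2_web_acl" "lab_acl" {
  name  = "lab-web-acl"
  scope = "REGIONAL"

  default_action {
    block {}
  }

  rule {
    name     = "allow-approved-ips"
    priority = 1

    action {
      allow {}
    }

    statement {
      ip_set_reference_statement {
        arn = aws_wafv2_ip_set.allowed_ips.arn
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "allow-approved-ips"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "lab-web-acl"
    sampled_requests_enabled   = true
  }
}

# Attach the ACL to the load balancer created by Terraform.
resource "aws_wafv2_web_acl_association" "alb" {
  resource_arn = aws_lb.lab_alb.arn   # assumed ALB resource label
  web_acl_arn  = aws_wafv2_web_acl.lab_acl.arn
}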

Test the web ACL

Head back to the browser tab where the website is open. If you hit refresh, you will see that your traffic is denied.

Why is this? Let’s figure it out.

Remember that we configured our DNS with CloudFlare? As of now, traffic flows from your browser to CloudFlare, and from CloudFlare to the ALB.

As you can see, our IP isn't communicating directly with the ALB; CloudFlare is making the requests on our behalf. Consequently, access is denied because the WAF in front of the ALB has no knowledge of CloudFlare's IPs.

To confirm this, try pasting your ALB's DNS name into the browser, and you will see your website. This is because you are accessing the ALB directly, and the WAF has your IP in the allowed IP set.

Let's solve the issue now. Go to this website and copy the IP ranges used by CloudFlare. Go back to your WAF console, edit your IP set, and add those ranges. Click Save.

In a moment, you will be able to access your website via your domain name.

AWS WAF has many capabilities apart from allowing or blocking a set of IPs, which you can explore at your convenience. It also provides a traffic dashboard where you can view details about what is being blocked and what kinds of devices are accessing your website.

Moreover, our Terraform configuration has also created a custom dashboard that collects some metrics from our Auto Scaling group and RDS instance. From this dashboard, you can conveniently spot under- or over-utilized resources and adjust the scaling accordingly.

Finally, it's time to clean up the resources to prevent unwanted charges. Yes, the cloud is expensive when you forget :))

Destroy the infrastructure

terraform destroy

Also, clean up the manually created resources:

  • Web ACL
    - IP set
  • Code Pipeline
  • SNS topic
  • AMI
  • Route 53 hosted zone

In this project, we have learnt how to create a CI/CD pipeline on AWS, protect against application-layer attacks, and deploy the infrastructure in a repeatable way using Infrastructure as Code.

All the configurations and codes used in this project are publicly available at my repositories.
