How I Built an AWS App Without Spending a Penny — The Pipeline

Abhishek Chaudhuri
11 min read · Sep 22, 2023


AWS logo with dollar sign crossed out

This is part 5 of a 6-part series. See the previous parts, where we built both the frontend and the backend.

Let’s set up the pipeline for the frontend app, microservices, and CloudFormation templates. This will make maintaining the apps much easier. The general steps we want to follow for a pipeline are as follows:

  1. Clone the repository.
  2. Install all dependencies.
  3. Test the app.
  4. Build the app.
  5. Scan for vulnerabilities.
  6. Create an image.
  7. Deploy to dev/QA/prod.
  8. Monitor the health of the app.
  9. Update the status on GitHub.

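The CI half of these stages can be sketched locally as a fail-fast script, the same way a CI runner stops at the first failing step. (The stage commands below are placeholders, not this project's actual build commands.)

```shell
# Minimal fail-fast pipeline sketch: run each stage in order and stop at the
# first failure, like a CI job. Stage commands are stand-ins (e.g. npm ci).
set -e  # abort as soon as any stage exits non-zero

run_stage() {
  local name="$1"
  shift
  echo "== $name =="
  "$@"  # run the stage's command; a failure here aborts the whole script
}

run_stage "Install dependencies" true  # e.g. npm ci
run_stage "Test"                 true  # e.g. npm test
run_stage "Build"                true  # e.g. npm run build
echo "Pipeline succeeded"
```

GitHub Actions gives us the same semantics for free: each `run` step fails the job on a non-zero exit code, and later jobs can gate on earlier ones with `needs`.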
I used GitHub Actions (GA) to create the pipelines since it integrates well with our GitHub repo. Our frontend pipeline will follow this workflow:

Frontend pipeline workflow diagram
Workflow diagrams created using Mermaid

The build step is standard for React apps and matches the buildspec we saw earlier for CodeBuild. Due to the 100-minute monthly limit on free CodeBuild usage, we restrict CodeBuild jobs to the CD stage and have GA run similar steps at much lower cost. The tricky part is configuring the AWS CLI to work in the pipeline. AWS has published actions for various services, such as CodeBuild, CloudFormation, and SAM: https://github.com/aws-actions. We could follow our local setup and use long-term credentials, but we can do better than that. Instead, we can register GitHub as an OIDC provider and create a role that gives GA temporary access to AWS (1 hour by default). Then we give that role the minimum permissions required to run the entire pipeline. In our frontend template, we can add the following to create our OIDC provider:

  GitHubOIDC:
    Type: AWS::IAM::OIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - sts.amazonaws.com
      ThumbprintList:
        - 6938fd4d98bab03faadb97b34396831e3780aea1
        - 1c58a3a8518e8759bf075b76b750d4f2df264fcd
  GitHubRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !Ref GitHubOIDC
            Condition:
              # aud = audience, sub = subject
              StringEquals:
                token.actions.githubusercontent.com:aud: sts.amazonaws.com
              StringLike:
                # Replace with your username and repo name
                token.actions.githubusercontent.com:sub: repo:Abhiek187/aws-shop:*
      ManagedPolicyArns:
        # A list of policies required for every workflow (max 10)
        # The shortest managed policies will move to a consolidated customer policy
        - !Ref GitHubRemainingPolicy
        # CloudFormation (need read access to detect drift & write access to update resources)
        - "arn:aws:iam::aws:policy/CloudFrontFullAccess" # 31
        - "arn:aws:iam::aws:policy/IAMFullAccess" # 22
        - "arn:aws:iam::aws:policy/AmazonRoute53FullAccess" # 31
        # React
        - "arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess" # 133
        # Microservices
        - "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess" # 108
        # SAM
        - "arn:aws:iam::aws:policy/AWSLambda_FullAccess" # 52
        - "arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess" # 83
        - "arn:aws:iam::aws:policy/CloudWatchFullAccessV2" # 45
  GitHubRemainingPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: AWSCloudFormationFullAccess
            Effect: Allow
            Action:
              - "cloudformation:*"
            Resource: "*"
          - Sid: AmazonSNSFullAccess
            Effect: Allow
            Action:
              - "sns:*"
            Resource: "*"
          - Sid: AmazonS3FullAccess
            Effect: Allow
            Action:
              - "s3:*"
              - "s3-object-lambda:*"
            Resource: "*"
          - Sid: AmazonAPIGatewayAdministrator
            Effect: Allow
            Action:
              - "apigateway:*"
            Resource: "arn:aws:apigateway:*::/*"
          - Sid: AmazonSQSFullAccess
            Effect: Allow
            Action:
              - "sqs:*"
            Resource: "*"

Much of this template has thankfully been provided by the configure-aws-credentials README. We create an OIDC provider and copy the thumbprints belonging to GA's token service. Then we create a role that the provider can assume for this GitHub repo using STS (Security Token Service). Roles have two parts: a trust policy and one or more permission policies. The trust policy describes who is allowed to assume the role (in this case, GitHub). Make sure to supply your own username and repo name for the subject. The permission policies describe what access the role grants within AWS. We need to give this role full access to every service referenced by CloudFormation since the pipeline can modify any resource that's updated. The catch is that we can only attach 10 managed policies per role. So, I created a custom policy to hold all the remaining permissions. In the comments, I noted how many lines each managed policy contained (as of this writing). I referenced the ARNs of the longest policies directly in the GitHubRole resource and copied the contents of the shorter policies into a separate managed policy resource. (We can't reference the AWS managed policy ARNs directly inside a customer managed policy.)
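As a sanity check that the pipeline really assumed the role (rather than long-term credentials), the job can print its caller identity with `aws sts get-caller-identity`. The small helper below, which pulls the role name back out of an assumed-role ARN, is just an illustration of the ARN's shape, not code from the repo:

```shell
# In the pipeline (after configure-aws-credentials), the caller identity is a
# temporary assumed-role ARN, fetched with:
#   ARN=$(aws sts get-caller-identity --query Arn --output text)
# which looks like:
#   arn:aws:sts::123456789012:assumed-role/<role-name>/<session-name>

# Illustrative helper: extract the role name from an assumed-role ARN
role_from_arn() {
  local arn="$1"
  arn="${arn#*:assumed-role/}"  # drop everything through "assumed-role/"
  printf '%s\n' "${arn%%/*}"    # drop the session name after the next slash
}

role_from_arn "arn:aws:sts::123456789012:assumed-role/GitHubRole-ABC123/GitHubActions"
# prints: GitHubRole-ABC123
```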

We need to deploy these changes first and copy the role ARN. Then we can save it as an Actions variable in our repo and reference it in the workflow file (alongside the region name):

# This workflow will do a clean installation of node dependencies, cache/restore them, build the source code, and run tests across different versions of node
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-nodejs

name: Node.js CI/CD

on:
  push:
    branches: ["main"]
    paths:
      - "shop-app/**"
      - ".github/workflows/node.js.yml"
  pull_request:
    branches: ["main"]
    paths:
      - "shop-app/**"
      - ".github/workflows/node.js.yml"

env:
  CODEBUILD_PROJECT: AWSShopBuild # replace with your project name

# Required to use the AWS CLI
permissions:
  id-token: write
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    defaults:
      run:
        working-directory: ./shop-app

    strategy:
      matrix:
        node-version: [18.x, 20.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/

    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: "npm"
          cache-dependency-path: "./shop-app/package-lock.json"
      - run: npm ci
      - run: npm run lint --if-present
      - run: npm run build --if-present
      - run: npm test

  # If the tests were successful and we're pushing to main, deploy to S3 using CodeBuild
  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name == 'push'

    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.GH_OIDC_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}
      - name: Run CodeBuild
        uses: aws-actions/aws-codebuild-run-build@v1
        with:
          project-name: ${{ env.CODEBUILD_PROJECT }}

The permissions section is essential in each workflow; without it, the configure-aws-credentials step fails. We use the CodeBuild action to run the CodeBuild project on deployments. And we limit this workflow to run only when we make changes related to the front end (since this project is a monorepo).
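To avoid copying the role ARN by hand, it can be scripted. This is an assumption on my part, not a step from the article: it requires the template to expose the role ARN as a stack output (named `GitHubRoleArn` here, hypothetically) and the GitHub CLI to be installed and authenticated.

```shell
# Hypothetical helper: read the role ARN from the deployed stack's outputs
# (assumes the template defines an output named GitHubRoleArn)...
ROLE_ARN=$(aws cloudformation describe-stacks \
  --stack-name AWS-Shop-Frontend-Stack \
  --query "Stacks[0].Outputs[?OutputKey=='GitHubRoleArn'].OutputValue" \
  --output text)

# ...then store it (and the region) as Actions variables with the GitHub CLI
gh variable set GH_OIDC_ROLE --body "$ROLE_ARN"
gh variable set AWS_REGION --body "us-east-1"
```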

Our microservices will follow this workflow:

Backend pipeline workflow diagram

# This workflow will install Python dependencies, run tests, and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python CI/CD

on:
  push:
    branches: ["main"]
    paths:
      - "microservices/store/**"
      - "microservices/iam-old/**"
      - ".github/workflows/python-app.yml"
  pull_request:
    branches: ["main"]
    paths:
      - "microservices/store/**"
      - "microservices/iam-old/**"
      - ".github/workflows/python-app.yml"

# Required to use the AWS CLI
permissions:
  id-token: write
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    defaults:
      run:
        working-directory: ${{ matrix.directory }}

    strategy:
      matrix:
        directory: ["./microservices/store", "./microservices/iam-old"]
        python-version: [3.9, "3.10", "3.11"]
        # See the supported Python release schedule at https://devguide.python.org/versions/

    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.GH_OIDC_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Test with pytest
        run: |
          python -m pytest

  # If the tests were successful and we're pushing to main, deploy using SAM
  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name == 'push'

    defaults:
      run:
        working-directory: ${{ matrix.directory }}

    strategy:
      matrix:
        directory: ["./microservices/store", "./microservices/iam-old"]

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - uses: aws-actions/setup-sam@v2
        with:
          use-installer: true
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.GH_OIDC_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}
      #- run: sam validate # skipping due to OpenAPI DefinitionUri bug
      - run: sam build
      # Prevent prompts and failure when the stack is unchanged
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset

As with Node.js, Python code follows a standard CI process, with pytest as the testing framework. We do need to configure AWS credentials to run some integration tests, but if you're only running unit tests and have mocked all the services, you can exclude this step in CI. For CD, we use the provided SAM action to install SAM and run the build and deploy steps. When deploying, we add flags so that no user input is required and the job doesn't fail when there's nothing to deploy (an empty change set). It's good to run sam validate as well to make sure the SAM templates are valid, but due to a bug with the OpenAPI spec, I had to exclude this step.

And finally, our CloudFormation templates will follow this workflow:

CloudFormation pipeline workflow diagram

CloudFormation templates are interesting since they're not built the same way as other languages, but they're still code. So, we still want to follow the general pipeline steps from the beginning as best as we can. Cloning is still the same. The only dependency we need is the AWS CLI. We can exclude the build and image steps since those aren't applicable. We can test the templates in a couple of ways. First, AWS provides a validate-template API to make sure the templates are syntactically valid. Second, we can check that the stack hasn't drifted since our last deployment. Drift occurs when a resource defined in a template is modified outside CloudFormation. This makes the template out of sync with what's deployed in AWS, preventing it from being the source of truth. I wrote a shell script to check the drift status of a given stack:

# Check if the stack exists before proceeding
if ! aws cloudformation describe-stacks --stack-name "$STACK_NAME" &> /dev/null; then
    echo "Stack $STACK_NAME doesn't exist, skipping..." && exit 0
fi

# Start stack drift detection
DRIFT_ID=$(aws cloudformation detect-stack-drift --stack-name "$STACK_NAME" | jq -r ".StackDriftDetectionId")

while : ; do
    # Wait until the drift check is complete
    read DRIFT_STATUS DETECT_STATUS < <(echo $(aws cloudformation describe-stack-drift-detection-status --stack-drift-detection-id "$DRIFT_ID" | jq -r ".StackDriftStatus, .DetectionStatus"))
    echo "$DETECT_STATUS"
    [ "$DETECT_STATUS" == "DETECTION_IN_PROGRESS" ] || break
    sleep 1
done

if [ "$DETECT_STATUS" == "DETECTION_FAILED" ]; then
    aws cloudformation describe-stack-drift-detection-status --stack-drift-detection-id "$DRIFT_ID"
    echo "Failed to detect drift. See details above." && exit 1
elif [ "$DRIFT_STATUS" == "DRIFTED" ]; then
    aws cloudformation describe-stack-resource-drifts --stack-name "$STACK_NAME" --stack-resource-drift-status-filters DELETED MODIFIED --no-cli-pager
    echo "The CloudFormation stack has drifted. See details above." && exit 1
else
    echo "No drift detected."
fi

If this is the first time we’re creating the stack, we don’t need to check for drift, so we exit early. When we call detect-stack-drift, we need to wait until detection finishes (usually within a few seconds). If there’s drift, we show what’s drifted. Otherwise, the check succeeds, and we move on to the next step.
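If you want to exercise that wait loop without touching AWS, the status call can be stubbed out. Here is a throwaway harness of my own (not part of the repo's script) where a fake status command reports "in progress" twice before completing:

```shell
count_file=$(mktemp)
echo 0 > "$count_file"

# Stub for describe-stack-drift-detection-status: in progress twice, then done
fake_status() {
  local n
  n=$(cat "$count_file")
  echo $((n + 1)) > "$count_file"
  if [ "$n" -lt 2 ]; then
    echo "DETECTION_IN_PROGRESS"
  else
    echo "DETECTION_COMPLETE"
  fi
}

# Same shape as the script's wait loop, with the AWS call swapped for the stub
while : ; do
  DETECT_STATUS=$(fake_status)
  echo "$DETECT_STATUS"
  [ "$DETECT_STATUS" = "DETECTION_IN_PROGRESS" ] || break
  sleep 1
done
# prints DETECTION_IN_PROGRESS, DETECTION_IN_PROGRESS, DETECTION_COMPLETE
```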

Security scanning is important when deploying infrastructure. We want to follow AWS's well-architected framework when deploying resources. For example, S3 buckets should be private, and encryption should be used everywhere when possible. I used CloudFormation Guard, or cfn-guard, an open-source tool provided by AWS to scan CloudFormation templates. After installing the tool, we need to define a set of rules for cfn-guard to check against. Guard rules are written in a domain-specific language (DSL), but thankfully AWS provides several rule templates for us. I copied the well-architected reliability and security pillar rules from the Docker image found here into a folder that can be referenced by cfn-guard. (These can be used in conjunction with Config, but that is outside of our budget.) If we run the command now, we'll get a lot of errors from various resources. Many of the suggestions are worth following, but some, like S3 replication or running Lambda in a VPC, are too costly on our budget and would require more maintenance. We can ignore those rules by adding a Metadata block under each resource, like this:

  ReactAppBucket:
    Type: AWS::S3::Bucket
    Metadata:
      guard:
        SuppressedRules:
          # Reliability suppressions
          - S3_BUCKET_REPLICATION_ENABLED # stay within 435 MB
          - S3_BUCKET_DEFAULT_LOCK_ENABLED # objects can be overwritten regularly

I added all the Metadata blocks in the final source code, so you can browse those for reference if you run into any issues with deployments. Lastly, for deployments, we can use the aws-cloudformation-github-deploy action. For the arguments, we need to pass the stack name, template file, and parameter file (even if it's empty), prevent errors from empty change sets, and add CAPABILITY_NAMED_IAM (which is required to create or modify IAM resources). However, we need a way to pass the correct stack name, file, and parameters for each stack we want to deploy in the pipeline. To accomplish this, I created a JSON file containing these properties:

[
  {
    "file": "cloudformation.yaml",
    "stack": "AWS-Shop-Frontend-Stack",
    "params": "params.json"
  }
]
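The pipeline will consume this file as a compacted, single-line JSON string via `jq -c`. On a stand-in copy of the file (assuming jq is installed; the temp file just simulates cfn-info.json), the compaction looks like this:

```shell
# Recreate a cfn-info.json-shaped file and compact it with jq -c, yielding a
# one-line JSON array suitable for a GitHub Actions job matrix
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[
  {
    "file": "cloudformation.yaml",
    "stack": "AWS-Shop-Frontend-Stack",
    "params": "params.json"
  }
]
EOF

jq -c . < "$tmp"
# prints: [{"file":"cloudformation.yaml","stack":"AWS-Shop-Frontend-Stack","params":"params.json"}]
```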

Then I added a step before the build process to read this file and save it as an output, so the CI and CD jobs can parse the JSON. The final workflow file looks like the following:

name: CloudFormation CI/CD

on:
  push:
    branches: ["main"]
    paths:
      - "**.yaml"
      - ".github/workflows/cfn.yml"
  pull_request:
    branches: ["main"]
    paths:
      - "**.yaml"
      - ".github/workflows/cfn.yml"

# Required to use the AWS CLI
permissions:
  id-token: write
  contents: read

jobs:
  # Read all the CloudFormation files and stack names and create a matrix
  get-templates:
    runs-on: ubuntu-latest

    outputs:
      cfn-info: ${{ steps.read-json.outputs.info }}

    steps:
      - uses: actions/checkout@v4

      - id: read-json
        run: echo "info=$(jq -c . < .github/workflows/cfn-info.json)" >> "$GITHUB_OUTPUT"

  build:
    runs-on: ubuntu-latest
    needs: get-templates

    strategy:
      matrix:
        cfn-info: ${{ fromJson(needs.get-templates.outputs.cfn-info) }}

    steps:
      - name: Set Environment Variables
        run: |
          echo "STACK_NAME=${{ matrix.cfn-info.stack }}" >> "$GITHUB_ENV"
          echo "STACK_FILE=${{ matrix.cfn-info.file }}" >> "$GITHUB_ENV"

      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.GH_OIDC_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}

      - name: Validate CloudFormation Template
        run: aws cloudformation validate-template --template-body file://${{ env.STACK_FILE }}

      - name: Detect Stack Drift
        run: |
          chmod u+x detect-cfn-drift.sh
          ./detect-cfn-drift.sh

      - name: Install cfn-guard
        run: |
          curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/aws-cloudformation/cloudformation-guard/main/install-guard.sh | sh
          echo "~/.guard/bin" >> "$GITHUB_PATH"

      # Run checks based on AWS's well-architected security & reliability pillars
      - name: Check for Vulnerabilities using cfn-guard
        run: >-
          cfn-guard validate --show-summary fail --output-format single-line-summary
          --data ${{ env.STACK_FILE }} --rules cfn-guard-rules/

  # If the tests were successful and we're pushing to main, create/update the stack
  deploy:
    runs-on: ubuntu-latest
    needs:
      - get-templates
      - build
    if: github.event_name == 'push'

    strategy:
      matrix:
        cfn-info: ${{ fromJson(needs.get-templates.outputs.cfn-info) }}

    steps:
      - name: Set Environment Variables
        run: |
          echo "STACK_NAME=${{ matrix.cfn-info.stack }}" >> "$GITHUB_ENV"
          echo "STACK_FILE=${{ matrix.cfn-info.file }}" >> "$GITHUB_ENV"
          echo "PARAM_FILE=${{ matrix.cfn-info.params }}" >> "$GITHUB_ENV"

      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.GH_OIDC_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}

      - name: Deploy to AWS CloudFormation
        uses: aws-actions/aws-cloudformation-github-deploy@v1
        with:
          name: ${{ env.STACK_NAME }}
          template: ${{ env.STACK_FILE }}
          parameter-overrides: file://${{ github.workspace }}/${{ env.PARAM_FILE }}
          no-fail-on-empty-changeset: "1"
          capabilities: CAPABILITY_NAMED_IAM

We can also use Dependabot to automatically update our dependencies and reduce the risk of vulnerabilities. To do so, we create a .github/dependabot.yml file:

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "wednesday"

  - package-ecosystem: "npm"
    directory: "/shop-app"
    schedule:
      interval: "weekly"
      day: "wednesday"
    groups:
      aws:
        patterns:
          - "@aws-sdk/*"
      eslint:
        patterns:
          - "*eslint*"
      mui-emotion:
        patterns:
          - "@emotion/*"
          - "@mui/*"
      redux:
        patterns:
          - "*redux*"
      types:
        patterns:
          - "@types/*"
      vite:
        patterns:
          - "*vite*"

  - package-ecosystem: "pip"
    directory: "/microservices/iam-old"
    schedule:
      interval: "weekly"
      day: "wednesday"
    groups:
      boto:
        patterns:
          - "boto3"
          - "botocore"
          - "s3transfer"

  - package-ecosystem: "pip"
    directory: "/microservices/store"
    schedule:
      interval: "weekly"
      day: "wednesday"
    groups:
      boto:
        patterns:
          - "boto3"
          - "botocore"
          - "s3transfer"
      moto:
        patterns:
          - "moto"
          - "py-partiql-parser"

(Note: I picked Wednesday to space out all the Dependabot updates across all my GitHub projects.) The AWS SDKs are updated nearly every day, so your projects will continuously be maintained for as long as AWS is around.

Hooray, now we’re starting to become DevOps professionals!

In the final part, we will take a look at all the other services we can leverage for our project.

The full GitHub repo can be found here: https://github.com/Abhiek187/aws-shop
