Part 14 — HumanGov Application — Proof of Concept (POC) on AWS Elastic Container Service (ECS) Fronted by an Application Load Balancer (ALB), Storing Docker Images on Elastic Container Registry (ECR)

Cansu Tekin
10 min read · Mar 19, 2024


The HumanGov Application is a Human Resources Management Cloud SaaS application for the Department of Education across all 50 states in the US. Whenever a state hires a new employee, the new employee is registered through the application; their information is stored in AWS DynamoDB and their employment documents are stored in S3 buckets. Our primary responsibility as DevOps Engineers is to modernize and enhance the HumanGov application. We first focused on provisioning the infrastructure onto the AWS Cloud environment with Terraform (Part 10), and then we configured and deployed the application on those resources with Ansible (Part 3). In this section, we will containerize the HumanGov application using Docker to make it more efficient, portable, and scalable across different computing environments, and we will keep improving the application architecture with DevOps tools in the following sections.

PART 1: Containerize the HumanGov application using Docker, and push the image to AWS ECR (Elastic Container Registry)

PART 2: Provision the AWS S3 Bucket and DynamoDB table using Terraform, and the ECS (Elastic Container Service) cluster manually

PART 3: Deploy the application by creating a service on ECS

SOLUTION ARCHITECTURE

PART 1: Containerize the HumanGov application using Docker, and push the image to AWS ECR (Elastic Container Registry)

SOLUTION ARCHITECTURE

Step 1: Create an IAM Role for ECS tasks

When we create an IAM role for ECS tasks, we typically define the permissions required for those tasks to interact with other AWS services, such as accessing Amazon S3 buckets, writing logs to Amazon CloudWatch, or pulling container images from Amazon ECR (Elastic Container Registry).

Set Permissions:

# Grants full access to Amazon S3 resources: listing, uploading, downloading, and deleting objects within S3 buckets.
AmazonS3FullAccess
# Grants full access to Amazon DynamoDB: creating, reading, updating, and deleting tables and data within DynamoDB.
AmazonDynamoDBFullAccess
# Used by Amazon ECS tasks; grants the permissions ECS needs to execute tasks,
# including pulling container images from Amazon ECR and writing logs to Amazon CloudWatch.
AmazonECSTaskExecutionRolePolicy

By specifying “Elastic Container Service Task” as the trusted entity type, we’re indicating that the policy is intended to be attached to IAM roles used by ECS tasks.
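If you prefer the CLI to the console, a minimal sketch of the same role setup looks like this (the role name HumanGovECSExecutionRole matches the one referenced in the task definition later; the trust-policy file name is arbitrary):

# Trust policy allowing ECS tasks to assume the role
cat > ecs-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role with "Elastic Container Service Task" as the trusted entity
aws iam create-role \
  --role-name HumanGovECSExecutionRole \
  --assume-role-policy-document file://ecs-trust-policy.json

# Attach the three managed policies listed above
aws iam attach-role-policy --role-name HumanGovECSExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name HumanGovECSExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy --role-name HumanGovECSExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy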

Step 2: Disable the AWS-managed credentials on Cloud9 and set up new IAM credentials

Settings > AWS Settings

We already have a user from the previous project series named cloud9-user. Delete the previously created keys and create a new key for the cloud9-user with the Command Line Interface (CLI) use case.

Store the new credentials in an export.sh file and source it.

cd ~/environment
touch export.sh
# Edit export.sh and add your new credentials, e.g.:
#   export AWS_ACCESS_KEY_ID=<your-access-key-id>
#   export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
source export.sh
# validate
echo $AWS_ACCESS_KEY_ID

Step 3: Containerize the HumanGov application using Docker

cd human-gov-application/src/
# Create the Dockerfile
touch Dockerfile

Dockerfile: this builds a Docker image that serves the Flask application with Gunicorn:

# Use Python as a base image
# This provides the necessary Python environment to run the Flask application
FROM python:3.8-slim-buster

# Set working directory inside the container to /app
# This will be the directory where the Flask application code will be copied
# If the /app directory doesn't exist, Docker will create it. Then, subsequent commands like COPY, RUN, and CMD will be executed relative to this directory
WORKDIR /app

# Copy the requirements.txt file from the local directory to the /app directory inside the container
# Install the Python dependencies listed in requirements.txt using pip
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the entire local directory (containing the Flask application code) to the /app directory inside the container
COPY . /app

# Specify the default command to run when the container starts.
# Start Gunicorn serving the Flask application defined in the humangov module's app object.
CMD ["gunicorn", "--workers", "1", "--bind", "0.0.0.0:8000", "humangov:app"]

Step 4: Build the Docker image for the humangov-app Flask application and push it to an ECR (Elastic Container Registry) repository

The Elastic Container Registry (ECR) is a fully managed Docker container registry service provided by Amazon Web Services (AWS). It allows users to store, manage, and deploy Docker container images. ECR integrates with other AWS services such as Amazon ECS (Elastic Container Service) and AWS Fargate.

Create a new public ECR repository named humangov-app for the Flask application.
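If you would rather script the repository creation than use the console, the CLI equivalent is roughly as follows (note that public ECR repositories can only be created in us-east-1):

aws ecr-public create-repository \
  --repository-name humangov-app \
  --region us-east-1
# Repeat later for the NGINX image:
#   aws ecr-public create-repository --repository-name humangov-nginx --region us-east-1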

Go to View push commands:

1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/r9c2d3d1

2. Build your Docker image using the following command (you can skip this step if your image is already built):

docker build -t humangov-app .

3. After the build completes, tag your image so you can push the image to this repository:

docker tag humangov-app:latest public.ecr.aws/r9c2d3d1/humangov-app:latest

4. Run the following command to push this image to your newly created AWS repository:

docker push public.ecr.aws/r9c2d3d1/humangov-app:latest

Step 5: Containerize the NGINX application

cd ..
mkdir nginx
cd nginx
# Create the NGINX configuration files and the Dockerfile
touch nginx.conf
touch proxy_params
touch Dockerfile

nginx.conf file:

server {
    listen 80;
    server_name humangov www.humangov;

    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
}

This configures NGINX to listen for incoming HTTP requests on port 80 and forward them to the Flask application running on port 8000 (http://localhost:8000). The localhost address works because, in an ECS task using the awsvpc network mode (the mode Fargate uses), all containers in the task share the same network namespace, so the NGINX container reaches the Flask container via localhost.
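The contents of proxy_params are not shown here; a typical version (the standard proxy-header snippet shipped with NGINX on Debian/Ubuntu) would be:

# Populate proxy_params with the usual forwarding headers
cat > proxy_params <<'EOF'
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
EOF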

Create the Dockerfile for the NGINX image:

# Use NGINX alpine as a base image
FROM nginx:alpine

# Remove the default NGINX configuration file
RUN rm /etc/nginx/conf.d/default.conf

# Copy custom configuration file
COPY nginx.conf /etc/nginx/conf.d

# Copy proxy parameters
COPY proxy_params /etc/nginx/proxy_params

# Expose port 80
EXPOSE 80

# Start NGINX
CMD ["nginx", "-g", "daemon off;"]

Create a new public ECR repository named humangov-nginx.

Push commands for humangov-nginx:

1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/r9c2d3d1

2. Build your Docker image using the following command (you can skip this step if your image is already built):

docker build -t humangov-nginx .

3. After the build completes, tag your image so you can push the image to this repository:

docker tag humangov-nginx:latest public.ecr.aws/r9c2d3d1/humangov-nginx:latest

4. Run the following command to push this image to your AWS repository:

docker push public.ecr.aws/r9c2d3d1/humangov-nginx:latest

PART 2: Provision the AWS S3 Bucket and DynamoDB table using Terraform, and the ECS (Elastic Container Service) cluster manually

Please be aware that AWS Fargate does not currently offer a free tier, so this hands-on project will incur a small AWS charge. Remember to delete the cluster promptly once you have completed the hands-on to avoid unnecessary costs.

Step 1: Update the Terraform files before provisioning the AWS S3 Bucket and DynamoDB table

We already have a Terraform file with the desired infrastructure from the previous section. We do not need EC2 instance resources and their configuration at this point because we will use AWS Fargate. AWS Fargate is a serverless compute engine for containers that allows you to run containers without managing the underlying infrastructure.

Comment out the blocks not related to the S3 bucket and DynamoDB table in modules/aws_humangov_infrastructure/main.tf.

This is the updated modules/aws_humangov_infrastructure/main.tf file:

resource "aws_dynamodb_table" "state_dynamodb" {
name = "humangov-${var.state_name}-dynamodb"
billing_mode = "PAY_PER_REQUEST"
hash_key = "id"

attribute {
name = "id"
type = "S"
}

tags = {
Name = "humangov-${var.state_name}"
}
}

resource "random_string" "bucket_suffix" {
length = 4
special = false
upper = false
}

resource "aws_s3_bucket" "state_s3" {
bucket = "humangov-${var.state_name}-s3-${random_string.bucket_suffix.result}"

tags = {
Name = "humangov-${var.state_name}"
}
}

This is the updated modules/aws_humangov_infrastructure/outputs.tf file:

output "state_dynamodb_table" {
value = aws_dynamodb_table.state_dynamodb.name
}

output "state_s3_bucket" {
value = aws_s3_bucket.state_s3.bucket
}

This is the updated /humangov_infrastructure/terraform/outputs.tf file:

output "state_infrastructure_outputs" {
value = {
for state, infrastructure in module.aws_humangov_infrastructure :
state => {
dynamodb_table = infrastructure.state_dynamodb_table
s3_bucket = infrastructure.state_s3_bucket
}
}
}

Step 2: Provision the AWS DynamoDB and S3 bucket using Terraform

cd ~/environment/human-gov-infrastructure/terraform/
terraform show
terraform fmt
terraform validate
terraform plan
terraform apply

Take note of your outputs.

Outputs:

state_infrastructure_outputs = {
  "california" = {
    "dynamodb_table" = "humangov-california-dynamodb"
    "s3_bucket" = "humangov-california-s3-eq2h"
  }
}

Step 3: Create a new ECS cluster named humangov-ecs-cluster

An AWS ECS cluster is a logical grouping of container instances that you manage together. Within a cluster, you can deploy multiple services and tasks. ECS clusters can span multiple Availability Zones within a region, providing high availability and fault tolerance for your containerized applications.
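The cluster is created through the console in this walkthrough; the CLI equivalent would be roughly:

# Create the ECS cluster (Fargate tasks can run on it without registered instances)
aws ecs create-cluster --cluster-name humangov-ecs-cluster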

PART 3: Deploy the application by creating a service on ECS

Step 1: Create the Task Definition

An AWS ECS task definition is a blueprint for our application containers. It defines various parameters such as Docker image, CPU and memory requirements, networking configuration, and more. When you run a task using ECS, you specify which task definition to use, and ECS launches the containers according to the specifications defined in that task definition.

Go to your ECS dashboard -> Task Definition -> Create new task definition -> name it humangov-fullstack

Launch type: AWS Fargate
Operating system/Architecture: Linux/X86_64
CPU: 1 vCPU
Memory: 3 GB
Task role: HumanGovECSExecutionRole
Task execution role: HumanGovECSExecutionRole
humangov-app: public.ecr.aws/r9c2d3d1/humangov-app
humangov-nginx: public.ecr.aws/r9c2d3d1/humangov-nginx

Create a container for the humangov-app Flask Docker image we created before:

Add environment variables for the humangov-app container:

Add a container for the humangov-nginx Docker image we created before (a CLI sketch of the full task definition follows below):
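For reference, an equivalent task definition registered from the CLI might look like the sketch below. The environment variable names (AWS_BUCKET, AWS_DYNAMODB_TABLE, AWS_REGION, US_STATE) are assumptions that depend on what the Flask app actually reads, and <ACCOUNT_ID> is a placeholder; the bucket and table values come from the Terraform outputs above.

# NOTE: env var names below are assumptions; adjust to what the app expects
cat > taskdef.json <<'EOF'
{
  "family": "humangov-fullstack",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "3072",
  "taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/HumanGovECSExecutionRole",
  "executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/HumanGovECSExecutionRole",
  "containerDefinitions": [
    {
      "name": "humangov-app",
      "image": "public.ecr.aws/r9c2d3d1/humangov-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 8000, "protocol": "tcp" }],
      "environment": [
        { "name": "AWS_BUCKET", "value": "humangov-california-s3-eq2h" },
        { "name": "AWS_DYNAMODB_TABLE", "value": "humangov-california-dynamodb" },
        { "name": "AWS_REGION", "value": "us-east-1" },
        { "name": "US_STATE", "value": "california" }
      ]
    },
    {
      "name": "humangov-nginx",
      "image": "public.ecr.aws/r9c2d3d1/humangov-nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json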

Step 2: Create a service

A service in ECS allows us to define long-running tasks, ensuring that a specified number of instances of a task definition are running and distributing traffic across them as needed. When you create a service, you specify the task definition to use, the desired number of tasks to run, and other configurations. The service manages the deployment and scaling of tasks based on the parameters we’ve defined, making it easier to run and manage containerized applications in ECS.

Launch type: The EC2 launch type allows you to run tasks on a cluster of EC2 instances, while the Fargate launch type allows you to run tasks without managing the underlying infrastructure.

During service configuration, we set the desired number of tasks to 2. Go to each task and check its ENI ID: you can see that our application is running in two different Availability Zones.
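A CLI sketch of the same service creation (the service name, subnet, and security-group values are placeholders you would replace with your own; choosing subnets in different Availability Zones is what spreads the two tasks across AZs):

aws ecs create-service \
  --cluster humangov-ecs-cluster \
  --service-name humangov-service \
  --task-definition humangov-fullstack \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration \
    "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-cccc3333],assignPublicIp=ENABLED}"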

Step 3: Open one of the tasks' public DNS addresses in the browser to test the application

Step 4: Destroy infrastructure and other resources

terraform destroy

Note that terraform destroy removes only the Terraform-managed resources (the S3 bucket and DynamoDB table). The ECS cluster, task definition, and service were created manually through the console, so delete them there as well, along with the ECR repositories if you no longer need them, to avoid ongoing charges.

CONGRATULATIONS!!!

