ECS (Fargate) with ALB Deployment Using Terraform — Part 2

Uzairyuseph
Published in The Cloud Journal
6 min read · Oct 10, 2023

This article is a continuation of a larger demonstration of how to create a robust pattern: a highly secured ECS container running a Flask application that connects to a DynamoDB database.

In Part 1 of this series, we delved deeply into the architecture of this project, outlining the strategy to ensure high availability and security of our application.

Architecture diagram

Part 2 will focus on the Terraform implementation of the ECS container.

Part 3 will continue with the Terraform implementation of the supporting resources and the deployment of the application.

READY!!!

SET!!!

Step 3: Terraform

Terraform is an Infrastructure as Code (IaC) tool used to define and provision infrastructure. Each set of resources will be defined in its own file, and everything, including the Flask application, will be deployed with a few commands. The full code can be found in this GitHub repo.
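
Before defining the individual resources, the project needs a provider configuration. The exact file is in the repo; a minimal sketch (the file name and provider version here are assumed) would look like this:

provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# The AWS provider uses the region variable referenced throughout the project
provider "aws" {
  region = var.region
}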

Each set of AWS services is created in a separate Terraform file to logically group the components. The folder structure should look like the image below:

Main Folder structure
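
The files in this part reference var.region, var.container_port and local.account_id. Their definitions live in the supporting files of the repo; purely for orientation, a minimal sketch of what they might contain (the default values are assumptions, taken from the hard-coded region and health-check port used below) is:

variables.tf

variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "eu-west-1"
}

variable "container_port" {
  description = "Port the Flask container listens on (the health check below assumes 8081)"
  type        = number
  default     = 8081
}

# One common way to obtain the account ID without hard-coding it
data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}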

Step 3.1: ECS

In the ECS file we define the cluster, the ECS service and the task definition.

The cluster is a collection of computing resources required to run the workload. The ECS service manages and configures the tasks that need to run. It also registers the tasks with the target group (discussed later) so they can receive the distributed traffic.

The task definition resource defines the configuration of the individual tasks and also includes a health check.

ecs.tf

##########################################################################################
# This file describes the ECS resources: ECS cluster, ECS task definition, ECS service
##########################################################################################

# ECS cluster
resource "aws_ecs_cluster" "ecs_cluster" {
  name = "test-ecs-cluster"
}

# The task definition used in conjunction with the ECS service
resource "aws_ecs_task_definition" "task_definition" {
  family = "test-family"
  # The container definitions describe the configuration of the task
  container_definitions = jsonencode(
    [
      {
        "name" : "test-container",
        "image" : "${aws_ecr_repository.ecr.repository_url}:latest",
        "entryPoint" : [],
        "essential" : true,
        "networkMode" : "awsvpc",
        "portMappings" : [
          {
            "containerPort" : var.container_port,
            "hostPort" : var.container_port
          }
        ],
        "healthCheck" : {
          "command" : ["CMD-SHELL", "curl -f http://localhost:8081/ || exit 1"],
          "interval" : 30,
          "timeout" : 5,
          "startPeriod" : 10,
          "retries" : 3
        }
      }
    ]
  )
  # Fargate is used as opposed to EC2, so we do not need to manage EC2 instances. Fargate is serverless.
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecsTaskExecutionRole.arn
  task_role_arn            = aws_iam_role.ecsTaskRole.arn
}

# The ECS service. This resource allows you to manage the tasks.
resource "aws_ecs_service" "ecs_service" {
  name                = "test-ecs-service"
  cluster             = aws_ecs_cluster.ecs_cluster.arn
  task_definition     = aws_ecs_task_definition.task_definition.arn
  launch_type         = "FARGATE"
  scheduling_strategy = "REPLICA"
  desired_count       = 2 # the number of tasks you wish to run

  network_configuration {
    subnets          = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
    assign_public_ip = false
    security_groups  = [aws_security_group.ecs_sg.id, aws_security_group.alb_sg.id]
  }

  # This block registers the tasks with a target group of the load balancer.
  load_balancer {
    target_group_arn = aws_lb_target_group.target_group.arn
    container_name   = "test-container"
    container_port   = var.container_port
  }

  depends_on = [aws_lb_listener.listener]
}

Step 3.2: ECR

Amazon ECR (Elastic Container Registry) is a fully managed service for storing Docker images that can be used to launch containers in AWS.

Because ECR integrates with Docker, standard Docker commands can be used to interact with images; we will use them to push our image to ECR.

The ECR portion has two parts: defining the repo, and building and pushing the image.

ecr.tf

#################################################################################################
# This file describes the ECR resources: ECR repo, ECR policy, resources to build and push image
#################################################################################################

# Creation of the ECR repo
resource "aws_ecr_repository" "ecr" {
  name = "my-test-repo"
}

# The ECR lifecycle policy describes the management of images in the repo
resource "aws_ecr_lifecycle_policy" "ecr_policy" {
  repository = aws_ecr_repository.ecr.name
  policy     = local.ecr_policy
}

# This is the policy defining the rules for images in the repo
locals {
  ecr_policy = jsonencode({
    "rules" : [
      {
        "rulePriority" : 1,
        "description" : "Expire images older than 14 days",
        "selection" : {
          "tagStatus" : "any",
          "countType" : "sinceImagePushed",
          "countUnit" : "days",
          "countNumber" : 14
        },
        "action" : {
          "type" : "expire"
        }
      }
    ]
  })
}

The code above defines the ECR repo as well as a lifecycle policy. The policy expires any images older than 14 days. You can edit this policy to suit your needs or remove it completely.
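
For example, a rule that keeps only the five most recently pushed images, regardless of age, would look something like this (an illustrative alternative, not part of the repo):

locals {
  ecr_policy = jsonencode({
    "rules" : [
      {
        "rulePriority" : 1,
        "description" : "Keep only the last 5 images",
        "selection" : {
          "tagStatus" : "any",
          "countType" : "imageCountMoreThan",
          "countNumber" : 5
        },
        "action" : {
          "type" : "expire"
        }
      }
    ]
  })
}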

The next part is to push the image. While it is possible to push an image manually from the command line, I’ve decided to include these commands in my Terraform. That way, a single terraform apply creates all the required resources and I don’t need to run any other script.

This is achieved using the null_resource in Terraform.

ecr.tf

# The commands below are used to build and push a Docker image of the application in the app folder
locals {
  docker_login_command = "aws ecr get-login-password --region ${var.region} --profile personal | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com"
  docker_build_command = "docker build -t ${aws_ecr_repository.ecr.name} ./app"
  docker_tag_command   = "docker tag ${aws_ecr_repository.ecr.name}:latest ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com/${aws_ecr_repository.ecr.name}:latest"
  docker_push_command  = "docker push ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com/${aws_ecr_repository.ecr.name}:latest"
}

# This resource authenticates you to the ECR service
resource "null_resource" "docker_login" {
  provisioner "local-exec" {
    command = local.docker_login_command
  }
  triggers = {
    "run_at" = timestamp()
  }
  depends_on = [aws_ecr_repository.ecr]
}

# This resource builds the Docker image from the Dockerfile in the app folder
resource "null_resource" "docker_build" {
  provisioner "local-exec" {
    command = local.docker_build_command
  }
  triggers = {
    "run_at" = timestamp()
  }
  depends_on = [null_resource.docker_login]
}

# This resource tags the image
resource "null_resource" "docker_tag" {
  provisioner "local-exec" {
    command = local.docker_tag_command
  }
  triggers = {
    "run_at" = timestamp()
  }
  depends_on = [null_resource.docker_build]
}

# This resource pushes the Docker image to the ECR repo
resource "null_resource" "docker_push" {
  provisioner "local-exec" {
    command = local.docker_push_command
  }
  triggers = {
    "run_at" = timestamp()
  }
  depends_on = [null_resource.docker_tag]
}

These four commands push the Docker image to the repo. Each resource includes a depends_on, which ensures the commands run in a specific order. The triggers use timestamp(), so the resources re-run on every terraform apply, meaning the image is rebuilt and pushed to the repo each time.
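
If you would rather rebuild only when the application actually changes, one variation (not used in the repo, shown here as an assumption) is to key the triggers on a hash of the Dockerfile instead of the timestamp:

# Assumed variation: only re-run the build when the Dockerfile in the app folder changes
resource "null_resource" "docker_build" {
  provisioner "local-exec" {
    command = local.docker_build_command
  }
  triggers = {
    "dockerfile_hash" = filesha256("${path.module}/app/Dockerfile")
  }
  depends_on = [null_resource.docker_login]
}

In practice you would hash the application source files as well; only the Dockerfile is hashed here to keep the sketch short.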

Step 3.3: DynamoDB

DynamoDB is a fully managed NoSQL AWS database. It is schema-less and easy to use, which makes it the perfect option for a basic application to get all the services running.

dynamodb.tf

#################################################################################################
# This file describes the DynamoDB resources: DynamoDB table, DynamoDB endpoint
#################################################################################################

# DynamoDB table
resource "aws_dynamodb_table" "test_table" {
  name         = "leavedays"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "leave_id"

  attribute {
    name = "leave_id"
    type = "N"
  }
}

# DynamoDB VPC endpoint
resource "aws_vpc_endpoint" "dynamodb_Endpoint" {
  vpc_id            = aws_vpc.vpc.id
  service_name      = "com.amazonaws.eu-west-1.dynamodb"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : "*",
        "Action" : "*",
        "Resource" : "*"
      }
    ]
  })
}

This file describes the DynamoDB table as well as the VPC endpoint. In the definition of the endpoint, a route table is attached: a route must be present in the route table of the private subnets so that the ECS containers in those subnets can reach DynamoDB.

VPC endpoint route

The ‘Target’ in the above picture is the VPC endpoint ID and the ‘Destination’ is a prefix list. A prefix list is a group of IP address ranges that corresponds to a particular service in a particular region.
More information about prefix lists can be found here.
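
If you want to inspect that prefix list from Terraform, the aws_prefix_list data source can look it up by name (illustrative only, not part of the repo):

# Illustrative only: look up the DynamoDB prefix list for the region
data "aws_prefix_list" "dynamodb" {
  name = "com.amazonaws.${var.region}.dynamodb"
}

output "dynamodb_prefix_list_id" {
  value = data.aws_prefix_list.dynamodb.id
}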

And that’s where we’ll end Part 2. You can catch Part 3 here.
