Deploying Different Containers on Amazon’s ECS using Fargate and Terraform: Part 2


In part one, we created the VPC and security groups. Now we are left with creating the ALB, the target groups, and the ECS cluster. Let's create the ALB first. The project structure looks like this:

project
  alb
    *main.tf
    *variables.tf
    *outputs.tf
  fargate
    polices
    *main.tf
    *variables.tf
    *outputs.tf

First, declare the variables we need for creating the ALB and ECS in the root variables.tf file, outside the VPC module. (app_port, app_port_two, and env are also used below; declare them here too if you did not already do so in part one.)

variable "container_name" {}
variable "desired_count" {}
variable "fargate_cpu" {}
variable "fargate_memory" {}
variable "image_url" {}
variable "container_name_two" {}
variable "desired_count_two" {}
variable "image_url_two" {}
variable "fargate_cpu_two" {}
variable "fargate_memory_two" {}

Define those variables in project/example.auto.tfvars. Note that Fargate only accepts specific CPU/memory combinations; 256 CPU units with 512 MiB and 1024 CPU units with 2048 MiB are both valid pairs.

container_name = "test_fargate_nginx"
container_name_two = "test_fargate_blockchain"
desired_count = 3
desired_count_two = 3
fargate_cpu = 256
fargate_memory = 512
image_url = "nginx:latest"
image_url_two = "bradfordhamilton/crystal_blockchain:latest"
fargate_cpu_two = 1024
fargate_memory_two = 2048

We are now ready to write the ALB module.

In project/alb/variables.tf, let's declare the variables this module is going to take:

variable "public_subnets"{}
variable "env"{}
variable "vpc_id"{}
variable "alb_sg_id"{}
variable "app_port"{}
variable "app_port_two"{}

We now have everything we need to build the ALB infrastructure, so let's create the resources one by one in project/alb/main.tf.

resource "aws_lb" "web_lb" {
  name               = "${var.env}-ecs-lb"
  load_balancer_type = "application"
  internal           = false
  subnets            = var.public_subnets # attach the ALB to all public subnets
  security_groups    = [var.alb_sg_id]
}

Because we are using an ALB, we set load_balancer_type to "application". We place the ALB in the public subnets and set internal = false so it is internet-facing; our containers can then serve the application to the internet through this ALB.

We have to reach container A and container B through the ALB. How? By means of target groups. A target group tells a load balancer where to direct traffic: to EC2 instances, fixed IP addresses, or AWS Lambda functions, among others. Since we are deploying two different containers, we need two target groups. Let's create them:

resource "aws_lb_target_group" "web_lb_target" {
name = "${var.env}--ecs-lb-target"
port = 80
protocol = "HTTP"
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = "3"
interval = "30"
protocol = "HTTP"
matcher = "200"
timeout = "3"
path = "/"
unhealthy_threshold = "2"
}
}

We're going to be listening for HTTP requests on port 80; however, it goes without saying that if you're using this in production, you should listen for HTTPS on port 443. You can use AWS Certificate Manager (ACM) to provision and manage certificates. How can we check the status of our containers? With health checks: our Application Load Balancer periodically sends requests to its registered targets to test their status. Wondering where the relation between the ALB and the target groups comes in? We'll get there shortly.
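For reference, here is a minimal sketch of what such an HTTPS listener could look like, assuming you add a hypothetical acm_certificate_arn variable to this module and issue the certificate in ACM yourself:

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.web_lb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.acm_certificate_arn # hypothetical variable, not part of this article

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_lb_target.arn
  }
}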
Let's create the second target group:

resource "aws_lb_target_group" "web_lb_two_target" {
name = "${var.env}--ecs-lb-two-target"
port = 80
protocol = "HTTP"
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = "3"
interval = "30"
protocol = "HTTP"
matcher = "200"
timeout = "3"
path = "/"
unhealthy_threshold = "2"
}
}

Now let's route the traffic from the ALB to our target groups. When deploying multiple different containers on ECS, we can use either path-based routing or separate listeners; in this article we go with separate listeners. Let's create the ALB listeners.

resource "aws_lb_listener" "my-test-alb-listner" {
load_balancer_arn = aws_lb.web_lb.arn
port = var.app_port
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.web_lb_target.arn
}
}
resource "aws_lb_listener" "my-test-alb-two-listner" {
load_balancer_arn = aws_lb.web_lb.arn
port = var.app_port_two
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.web_lb_two_target.arn
}
}

Requests arriving on app_port are forwarded to the first target group (web_lb_target), and requests on app_port_two are forwarded to the second (web_lb_two_target). This is the relation between the ALB and the target groups we were looking for. If you would rather use the path-based alternative mentioned above, a sketch follows.
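With path-based routing you would keep a single listener and add listener rules. A minimal sketch, where the rule name and the /blockchain/* path pattern are illustrative choices of mine, not from the article:

resource "aws_lb_listener_rule" "blockchain_path" {
  listener_arn = aws_lb_listener.my-test-alb-listner.arn
  priority     = 100

  # Requests matching the path pattern go to the second target group;
  # everything else falls through to the listener's default action.
  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_lb_two_target.arn
  }

  condition {
    path_pattern {
      values = ["/blockchain/*"]
    }
  }
}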

We will need these ARNs in our Fargate module, so let's expose them in project/alb/outputs.tf.

output "target_group_arn" {
value = aws_lb_target_group.web_lb_target.arn
}
output "alb_listener" {
value = aws_lb_listener.my-test-alb-listner
}
output "target_group_arn_two" {
value = aws_lb_target_group.web_lb_two_target.arn
}
output "alb_listener_two" {
value = aws_lb_listener.my-test-alb-two-listner
}
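One addition you will probably want, though the article does not include it: an output for the ALB's DNS name, so you can reach the services after the apply. You would also need to re-export it from the fargate module and the root module to see it on the CLI.

output "alb_dns_name" {
  value = aws_lb.web_lb.dns_name
}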

Now only one module is left: fargate. Let's create it.

In project/fargate/variables.tf, let's declare the variables this module is going to take:

variable "vpc_id" {}
variable "app_port" {}
variable "app_port_two" {}
variable "web_subnets" {}
variable "ecs_sg_id" {}
variable "public_subnets" {}
variable "alb_sg_id" {}
variable "fargate_cpu" {}
variable "fargate_memory" {}
variable "image_url" {}
variable "image_url_two" {}
variable "container_name" {}
variable "container_name_two"{}
variable "desired_count" {}
variable "desired_count_two" {}
variable "fargate_cpu_two" {}
variable "fargate_memory_two" {}
variable "env"{}

Let's write our container resources in project/fargate/main.tf. First, we call the ALB module from this fargate module, because we need its ARNs.

module "alb" {
source = "../alb"
app_port = var.app_port
public_subnets = var.public_subnets
vpc_id = var.vpc_id
env = var.env
alb_sg_id = var.alb_sg_id
app_port_two = var.app_port_two
}

Let's create an ECS cluster:

resource "aws_ecs_cluster" "ecs_cluster" {
name = "demo-cluster"
}

Next, we define our task definitions. Before that, we have to specify the container details in a JSON file: project/fargate/polices/container_one.json.

[
  {
    "name": "${container_name}",
    "image": "${image_url}",
    "cpu": ${fargate_cpu},
    "memory": ${fargate_memory},
    "portMappings": [
      {
        "containerPort": ${container_port},
        "hostPort": ${host_port}
      }
    ]
  }
]

How do we get those values into the JSON file? By using a data source, which we define in project/fargate/main.tf:

data "template_file" "container_def_data" {
template = file("${path.module}/polices/fargate-container-def.json")
vars = {
container_name = var.container_name
image_url = var.image_url
container_port = var.app_port
host_port = var.app_port
fargate_cpu = var.fargate_cpu
fargate_memory = var.fargate_memory
}
}
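As an aside: on Terraform 0.12 and later you can skip the template provider entirely and render the file with the built-in templatefile() function. A hedged equivalent:

# Inside the aws_ecs_task_definition resource, instead of
# data.template_file.container_def_data.rendered:
container_definitions = templatefile("${path.module}/polices/container_one.json", {
  container_name = var.container_name
  image_url      = var.image_url
  container_port = var.app_port
  host_port      = var.app_port
  fargate_cpu    = var.fargate_cpu
  fargate_memory = var.fargate_memory
})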

Do the same for the second container; a sketch of that data source follows below. Then we create task definitions for both containers. To pull the Docker images, the tasks need a task execution role, whose assume-role policy we write in the polices directory.
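A minimal sketch of that second data source (the task definition below references it as container_def_data_two), assuming a polices/container_two.json file with the same structure as container_one.json; the file name is my choice, not from the article:

data "template_file" "container_def_data_two" {
  template = file("${path.module}/polices/container_two.json") # assumed file name

  vars = {
    container_name = var.container_name_two
    image_url      = var.image_url_two
    container_port = var.app_port_two
    host_port      = var.app_port_two
    fargate_cpu    = var.fargate_cpu_two
    fargate_memory = var.fargate_memory_two
  }
}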

resource "aws_iam_role" "ecs_role" {
name = "ecs_role"
assume_role_policy = file("${path.module}/polices/assume-role-policy.json")
}
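The article does not show polices/assume-role-policy.json; for ECS tasks it would be the standard trust policy that lets the ECS tasks service assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}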

By referencing this IAM role, we can use it as the task execution role in our task definitions.
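The services further down declare a dependency on aws_iam_role_policy.ecs_role_policy, which the article never shows. A minimal sketch of what it might look like, granting the permissions a Fargate task execution role typically needs (pulling images from ECR and writing container logs):

resource "aws_iam_role_policy" "ecs_role_policy" {
  name = "ecs_role_policy"
  role = aws_iam_role.ecs_role.id

  # Sketch only: scope Resource down further in production.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "*"
      }
    ]
  })
}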

resource "aws_ecs_task_definition" "ecs_task_def" {
family = "app_nginx"
execution_role_arn = aws_iam_role.ecs_role.arn
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.fargate_cpu
memory = var.fargate_memory
container_definitions = data.template_file.container_def_data.rendered
}
resource "aws_ecs_task_definition" "ecs_task_def_two" {
family = "app_tomacat"
execution_role_arn = aws_iam_role.ecs_role.arn
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.fargate_cpu_two
memory = var.fargate_memory_two
container_definitions = data.template_file.container_def_data_two.rendered
}

Do you want to run multiple instances of the same container? That is what services are for. Since we are running two different containers, we create two services. Another reason for creating services is to target the containers through the ALB. Let's create them:

resource "aws_ecs_service" "main" {
name = "app-service"
cluster = aws_ecs_cluster.ecs_cluster.id
task_definition = aws_ecs_task_definition.ecs_task_def.arn
desired_count = var.desired_count
launch_type = "FARGATE"
network_configuration {
security_groups = [var.ecs_sg_id]
subnets = var.web_subnets
}
load_balancer {
target_group_arn = module.alb.target_group_arn
container_name = var.container_name
container_port = var.app_port
}
depends_on = [module.alb.alb_listener, aws_iam_role_policy.ecs_role_policy]
}
resource "aws_ecs_service" "main_two" {
name = "app-service_two"
cluster = aws_ecs_cluster.ecs_cluster.id
task_definition = aws_ecs_task_definition.ecs_task_def_two.arn
desired_count = var.desired_count_two
launch_type = "FARGATE"
network_configuration {
security_groups = [var.ecs_sg_id]
subnets = var.web_subnets
}
load_balancer {
target_group_arn = module.alb.target_group_arn_two
container_name = var.container_name_two
container_port = var.app_port_two
}
depends_on = [module.alb.alb_listener_two, aws_iam_role_policy.ecs_role_policy]
}

For security, we place these containers in the private subnets (web_subnets) and, through the security group (ecs_sg_id), allow only traffic coming from the ALB. Each service is attached to its target group, and each depends on its respective ALB listener.

All that's left to do is include our Fargate module in project/main.tf, like so:

module "fargate" {
source = "./fargate"
app_port = var.app_port
container_name = var.container_name
desired_count = var.desired_count
fargate_cpu = var.fargate_cpu
fargate_memory = var.fargate_memory
image_url = var.image_url
alb_sg_id = module.vpc.alb_sg_id
ecs_sg_id = module.vpc.ecs_sg_id
web_subnets = module.vpc.web_subnets
public_subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
env = var.env
app_port_two = var.app_port_two
container_name_two = var.container_name_two
desired_count_two = var.desired_count_two
fargate_cpu_two = var.fargate_cpu_two
fargate_memory_two = var.fargate_memory_two
image_url_two = var.image_url_two
}

Now, initialize Terraform first:

terraform init

Validate the configuration with:

terraform validate

You can now create a plan with:

terraform plan -out=tfdev_plan -var env=dev -var-file="example.auto.tfvars"

Then apply it:

terraform apply tfdev_plan
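Once the apply finishes, you can check that both services respond through the single ALB. Assuming you exposed the alb_dns_name output sketched earlier, something like this (the placeholders are yours to fill in):

curl http://<alb-dns-name>:<app_port>
curl http://<alb-dns-name>:<app_port_two>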

Did you notice that we are using a single load balancer for multiple services? Before the ALB came into the picture, we needed one classic ELB per service: ten services meant ten ELBs, which really hurts the bill. An ALB distributes incoming application traffic across multiple targets, so we took advantage of that and created a single ALB for two services with multiple target groups. The load-balancer cost drops by roughly 80 to 90%. That is what we did in this article.

Cool, we covered the beauty of the ALB, Terraform, and ECS together. Still wondering about the beauty of Fargate?

Just ask yourself:

Did you install a Docker engine or provision any servers? No.

All we did was focus on our application. That is one of the beauties of Fargate, and there is a lot more to explore.
