Deploying Different Containers on Amazon’s ECS using Fargate and Terraform: Part 1

Sunil V · Vitwit · Apr 3, 2020 · 7 min read

Before we jump into the tutorial, I want to give a brief overview. Fargate is a launch type within ECS for deploying containers without managing servers. In this series we will deploy different types of containers using Fargate. To achieve this with strong security and scalability, we have to follow these steps:

  1. Setup VPC
  2. Define Security Groups
  3. Store images in ECR or Docker Hub
  4. Setup ALB and Target Groups
  5. Setup ECS Cluster

Let’s Get Started

Let’s start writing the Terraform code for those steps. We are going to use Terraform modules because they make it much easier to write and manage your infrastructure code, especially when your infrastructure is huge. For each step, we create a module. In this article, we create only the VPC module. The project structure looks like this:

project
├── vpc
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── main.tf
├── variables.tf
├── outputs.tf
└── example.auto.tfvars

We will make the VPC highly available. For that, we use two availability zones. In each AZ, we create two subnets (one public and one private). For better security, our containers run in the private subnets. Then how do these containers serve the application to the world? Don’t think too much, buddies. We will use load balancers: they are placed in the public subnets, so they can talk to our containers and serve the application to the outside world, which also gives us fault tolerance. We also cover Terraform workspaces to deploy our project into multiple environments.

Ok, let’s do something. First, let’s declare variables. Why do we need variables? When we create resources through the AWS management console, we fill in parameters; variables serve the same purpose here. Declare these variables outside the VPC module, in the project-level variables.tf file.

variable "profile"{
type = string
default = "your_aws_profile"
}
variable "region"{
type = string
default = "us-east-1"
}
variable "env"{
description = "env: dev or prod"
}
variable "vpc_name" {
type = map
description = "Name for vpc"
}
variable "vpc_cidr"{
type = map
}
variable "public_cidrs"{
type = map
}
variable "az"{
type=map
}
variable "web_cidrs"{
type = map
}
variable "app_port"{
type = string
default = 80
}
variable "app_port_two"{
type = string
default = 80
}

Are you wondering why we used map for some of the variables? Because our project has to be deployed to different environments, and a map lets us keep one value per environment. Where do we define the values? We can define them with default as we did earlier, or in a separate tfvars file (example.auto.tfvars). Let’s do that.

vpc_cidr = {
  dev  = "10.0.0.0/16"
  prod = "10.1.0.0/16"
}
vpc_name = {
  dev  = "dev-vpc"
  prod = "prod-vpc"
}
public_cidrs = {
  dev  = ["10.0.0.0/24", "10.0.1.0/24"]
  prod = ["10.1.0.0/24", "10.1.1.0/24"]
}
web_cidrs = {
  dev  = ["10.0.10.0/24", "10.0.11.0/24"]
  prod = ["10.1.10.0/24", "10.1.11.0/24"]
}
az = {
  dev = {
    "0" = "us-east-1a"
    "1" = "us-east-1b"
  }
  prod = {
    "0" = "us-east-1c"
    "1" = "us-east-1d"
  }
}

Next, let’s configure the AWS provider for Terraform in project/main.tf:

provider "aws" {
region = var.region
profile = var.profile
}

We are now ready to write the “VPC MODULE”

In project/vpc/variables.tf, let's specify the variables which the module is going to take:

variable "vpc_name" {}
variable "vpc_cidr" {}
variable "env"{}
variable "public_cidrs"{}
variable "web_cidrs"{}
variable "app_cidrs"{}
variable "az"{}
variable "app_port"{}
variable "app_port_two"{}

We now have everything we need to create our VPC infrastructure. Let's create the resources one by one in project/vpc/main.tf.

locals {
  cidr_block   = lookup(var.vpc_cidr, var.env)
  public_cidrs = lookup(var.public_cidrs, var.env)
  az           = lookup(var.az, var.env)
  web_cidrs    = lookup(var.web_cidrs, var.env)
  vpc_name     = lookup(var.vpc_name, var.env)
}

Why do we need locals? Because we support multiple environments, and it would be tedious to write the lookup expression every time we need a variable’s value for the current environment. How do we get the values for an environment? With the lookup function, which retrieves a value from a map by its key (here dev or prod).
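
For example, with the tfvars values above, here is a quick sketch of what lookup evaluates to (illustrative values only):

# lookup(map, key) returns the value stored under the given key.
lookup(var.vpc_cidr, "dev")   # => "10.0.0.0/16"
lookup(var.vpc_cidr, "prod")  # => "10.1.0.0/16"
lookup(var.vpc_name, var.env) # => "dev-vpc" when env = "dev"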

resource "aws_vpc" "tf_vpc" {
cidr_block = local.cidr_block
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = local.vpc_name
}
}

The aws_vpc resource creates a VPC with the specified CIDR block. We enabled DNS hostnames and DNS support so that we don’t have to deal with raw IP addresses once the infrastructure is deployed. tf_vpc is the resource’s local name; it can only be referenced within this VPC module. To access the local values, we use the local. prefix.

A VPC is a virtual private network in AWS. Then how does communication happen between the VPC and the outside world? That job is done by an internet gateway. Let’s create one.

resource "aws_internet_gateway" "tf_internet_gateway" {
vpc_id = aws_vpc.tf_vpc.id
tags = {
Name = "${var.env}_tf_igw"
}
}

What we did is create the internet gateway with aws_internet_gateway and attach it to the VPC using the VPC id. Tags are helpful for finding resources later in the AWS console.

Ok, our private network is ready. Maintaining and securing one large network is difficult, so we use subnets. Subnets are nothing but divisions of the large network into smaller networks. Let’s create subnets in each availability zone.

resource "aws_subnet" "tf_public_subnet" {
vpc_id = aws_vpc.tf_vpc.id
count = length(local.public_cidrs)
cidr_block = local.public_cidrs[count.index]
availability_zone = lookup(local.az,count.index)
map_public_ip_on_launch= true
tags = {
Name = "${var.env}_tf_public${count.index+1}"
}
}

Don’t overthink the code. Just think about how you would create the resource in the AWS management console: first you specify the CIDR block and the AZ, give it a name (tag), and enable public IPs if you want your instances in the subnet to get them. That’s it. Want 10 subnets? You would repeat that process 10 times; count does that repetition for us.

Are you thinking these are public subnets just because we enabled public IPs? Nope. A subnet is public when its traffic is routed through the internet gateway. So let’s define a route table and associate the subnets with it.

resource "aws_route_table" "tf_public_rt" {
vpc_id = aws_vpc.tf_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.tf_internet_gateway.id
}
tags = {
Name = "${var.env}_tf_public"
}
}
resource "aws_route_table_association" "tf_public_assoc" {
count = length(aws_subnet.tf_public_subnet)
subnet_id = aws_subnet.tf_public_subnet.*.id[count.index]
route_table_id = aws_route_table.tf_public_rt.id
}

What we did in the route table association code: we count the number of subnets to know how many associations we need, then assign each subnet id. The splat expression aws_subnet.tf_public_subnet.*.id produces the list of all subnet IDs, and [count.index] picks the one for the current iteration.
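
If the splat syntax is new to you, here is a quick illustrative sketch of the equivalent forms (not part of the module itself):

# The splat expression expands to the list of all subnet IDs, e.g.
# ["subnet-0abc...", "subnet-0def..."], so these two lines are equivalent:
subnet_id = aws_subnet.tf_public_subnet.*.id[count.index]
subnet_id = aws_subnet.tf_public_subnet[count.index].id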

The last thing left in the VPC is the private subnets. A subnet is private when its outbound traffic is routed through a NAT gateway. Let’s create them.

resource "aws_subnet" "tf_web_subnet" {
count = length(local.web_cidrs)
vpc_id = aws_vpc.tf_vpc.id
cidr_block = local.web_cidrs[count.index]
availability_zone = lookup(local.az,count.index % 2)
tags = {
Name = "${var.env}_tf_private_subnet${count.index+1}"
}
}

Now we have to associate these subnets with a route table. Before that, we have to create the NAT gateways. Remember, NAT gateways must be placed in the public subnets.

resource "aws_nat_gateway" "tf_nat_gateway" {
count = length(local.public_cidrs)
allocation_id = aws_eip.tf_eip.*.id[count.index]
subnet_id = aws_subnet.tf_public_subnet.*.id[count.index]
tags = {
Name = "${var.env}_tf_ngw_${count.index+1}"
}
}
resource "aws_eip" "tf_eip" {
count = 2
vpc = true
}
resource "aws_route_table" "tf_web_rt" {
vpc_id = aws_vpc.tf_vpc.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.tf_nat_gateway.*.id[0]
}
tags = {
Name = "${var.env}_tf_web_rt"
}
}
resource "aws_route_table_association" "tf_web_rt_assoc" {
count = length(aws_subnet.tf_web_subnet)
subnet_id = aws_subnet.tf_web_subnet.*.id[count.index]
route_table_id = aws_route_table.tf_web_rt.id
}

If we placed a NAT gateway in only one AZ and that AZ failed, the private subnets in the other AZ could not reach the internet to download required software, and the entire application would go down. That is why we create two NAT gateways in different AZs and give each private subnet a route table pointing to the NAT gateway in its own AZ. Now our VPC is fully highly available.

Now that we have our network created, let’s set up some security groups to make sure our app is properly protected.

# ALB Security Group
resource "aws_security_group" "alb_sg" {
  name   = "${var.env}_alb_sg"
  vpc_id = aws_vpc.tf_vpc.id
  ingress {
    from_port   = var.app_port
    to_port     = var.app_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = var.app_port_two
    to_port     = var.app_port_two
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_security_group" "ecs_sg" {
name = "${var.env}ecs_entry_sg"
description = "Used for access to the containers"
vpc_id = aws_vpc.tf_vpc.id
ingress {
from_port = var.app_port
to_port = var.app_port
protocol = "tcp"
cidr_blocks = [aws_security_group.alb_sg.id]
}
ingress {
from_port = var.app_port_two
to_port = var.app_port_two
protocol = "tcp"
security_groups = [aws_security_group.alb_sg.id]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}

We allow only traffic that comes through the ALB to reach our containers, which is why the ECS security group’s ingress rules reference the ALB security group instead of an open CIDR block.
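
The project tree above also lists an outputs.tf inside the vpc module. This article doesn’t walk through it, but here is a minimal sketch of what project/vpc/outputs.tf might expose for the ALB and ECS modules in part two (the output names are illustrative):

output "vpc_id" {
  value = aws_vpc.tf_vpc.id
}
output "public_subnet_ids" {
  value = aws_subnet.tf_public_subnet.*.id
}
output "private_subnet_ids" {
  value = aws_subnet.tf_web_subnet.*.id
}
output "alb_sg_id" {
  value = aws_security_group.alb_sg.id
}
output "ecs_sg_id" {
  value = aws_security_group.ecs_sg.id
}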

All we need to do now is include our VPC module in project/main.tf like so:

module "vpc" {
source = "./vpc"
vpc_name = var.vpc_name
env = var.env
az=var.az
vpc_cidr = var.vpc_cidr
public_cidrs=var.public_cidrs
web_cidrs = var.web_cidrs
app_port = var.app_port
app_port_two = var.app_port_two
}

Now, you have to run init first:

terraform init

Validate the configuration with

terraform validate

Create a workspace with

terraform workspace new dev

You can now make a plan with

terraform plan -out=tfdev_plan -var env=dev -var-file="example.auto.tfvars"

Then apply

terraform apply tfdev_plan
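
To deploy the prod environment, the same flow works with a second workspace (a sketch, assuming the prod values in example.auto.tfvars above):

terraform workspace new prod
terraform plan -out=tfprod_plan -var env=prod -var-file="example.auto.tfvars"
terraform apply tfprod_plan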

We created the VPC and security groups with just a few commands. Awesome, right? That is the beauty of Terraform. We will catch up in part two, where we will set up the ALB, the ECS cluster, and the containers.
