Designing and Implementing an ECS Cluster on AWS for a Map Server — Parts 3 and 4

Idan Shifres · Published in CBRE Build · Jun 30, 2019

Part 3: Implementation — Terraform modules
Part 4: Wrapping up — Continuous Deployment

Part 3: Implementation — Terraform modules

We use Terraform to deploy all our infrastructure. Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

In adding new Terraform modules we wanted to consider:

  1. Using Terraform modules we’d already created
  2. Dependencies between modules (e.g. the load balancer depends on SSL Certificate creation which depends on the Route53 DNS name).
  3. Setting up variables and constants so that the new modules can be used for other similar applications. Examples of variables: region, instance type, min/max number of instances, container image, etc.
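
As a rough illustration (variable names and defaults here are assumptions, not our exact module interface), the variable declarations for such a module might look like this:

    # variables.tf of the TileServer module -- illustrative names, not the exact interface
    variable "region" {
      description = "AWS region to deploy into"
      default     = "us-east-1"
    }

    variable "instance_type" {
      description = "EC2 instance type for the ECS container instances"
      default     = "t3.medium"
    }

    variable "min_size" {
      description = "Minimum number of EC2 instances in the cluster"
      default     = 2
    }

    variable "max_size" {
      description = "Maximum number of EC2 instances in the cluster"
      default     = 4
    }

    variable "container_image" {
      description = "Docker image the TileServer containers run"
    }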

The three new modules we made:

  1. TileServer module: the TileServer service itself; it sets up the variables and calls the two other modules it needs
  2. ALB module: sets up the Application Load Balancer. From previous infrastructure we already had an ELB (Elastic Load Balancer) module, but no ALB module. ALBs support more advanced routing, so traffic coming in on port 80 can be forwarded to the containers.
  3. ECS module: sets up the Elastic Container Service. This module will need to use components created in the ALB module.

TileServer module

In order to create the TileServer module, we had to gather the necessary information as input:

A snippet of the TileServer Terraform module and tfvars file:
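
The snippet itself was shown as an image in the original post; as a hedged approximation, the values passed in through tfvars look something like this (all values are placeholders):

    # terraform.tfvars -- placeholder values for illustration only
    region          = "us-east-1"
    instance_type   = "t3.medium"
    min_size        = 2
    max_size        = 4
    container_image = "<account-id>.dkr.ecr.us-east-1.amazonaws.com/tileserver:latest"
    dns_name        = "tileserver.example.com"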

A Terraform code snippet showing the TileServer module calling the ALB and ECS modules:
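
Again, the actual snippet was an image; a hypothetical sketch of how the TileServer module wires the two sub-modules together (module sources, paths, and input names are assumptions) could look like this:

    # main.tf of the TileServer module -- hypothetical wiring of the two sub-modules
    module "alb" {
      source   = "../modules/alb"              # path is illustrative
      dns_name = "${var.dns_name}"
    }

    module "ecs" {
      source               = "../modules/ecs"  # path is illustrative
      instance_type        = "${var.instance_type}"
      min_size             = "${var.min_size}"
      max_size             = "${var.max_size}"
      container_image      = "${var.container_image}"

      # outputs of the ALB module become inputs of the ECS module
      alb_sg_id            = "${module.alb.alb_sg_id}"
      app_target_group_arn = "${module.alb.app_target_group_arn}"
    }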

ALB module

In this module we create all the load balancer components:

  • SSL certificate: issued through ACM (AWS Certificate Manager)
  • Security groups: act as virtual firewalls that control the traffic for one or more instances. You can add rules to each security group that allow traffic to or from its associated instances.
  • Route 53 record: a DNS record pointing to the load balancer’s DNS name
  • Target groups: containers register with target groups and expose the application on dynamically assigned high ports. This is what allows multiple containers to run on the same instance.
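
As a condensed, hypothetical sketch of these components (resource names and values are illustrative; listeners, certificate validation, health checks, and egress rules are omitted):

    # Illustrative ALB module resources -- trimmed for readability
    resource "aws_acm_certificate" "cert" {
      domain_name       = "${var.dns_name}"
      validation_method = "DNS"
    }

    resource "aws_security_group" "alb" {
      name   = "tileserver-alb-sg"
      vpc_id = "${var.vpc_id}"

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    resource "aws_lb" "tileserver" {
      name            = "tileserver-alb"
      security_groups = ["${aws_security_group.alb.id}"]
      subnets         = ["${var.subnet_a_id}", "${var.subnet_b_id}"]  # illustrative
    }

    resource "aws_route53_zone" "this" {
      name = "${var.dns_zone_name}"            # e.g. a team-owned domain, illustrative
    }

    resource "aws_route53_record" "tileserver" {
      zone_id = "${aws_route53_zone.this.zone_id}"
      name    = "${var.dns_name}"
      type    = "CNAME"
      ttl     = 300
      records = ["${aws_lb.tileserver.dns_name}"]
    }

    # Containers are registered here on dynamically assigned host ports,
    # which is what lets several containers share a single EC2 instance.
    resource "aws_lb_target_group" "app" {
      name     = "tileserver-tg"
      port     = 80
      protocol = "HTTP"
      vpc_id   = "${var.vpc_id}"
    }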

List of registered targets in the TileServer ALB target group. Note that there are 2 unique instance IDs, each representing an EC2 instance with 2 containers exposing the application on 2 different dynamically assigned ports.

As mentioned, some of these components will be used by the ECS module and thus need to be exported as outputs. These outputs are then referenced by using ${module.module_name.output_name}.

Outputs:

  • alb_sg_id: The Load Balancer’s Security group. This security group will have access to the ECS cluster EC2 Instances.
  • app_target_group_arn: The Load Balancer’s Target Group where the containers register.
  • route53_zone_id: Route 53 (AWS DNS Service) Zone ID created by the ALB module
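
A sketch of the matching outputs.tf, using the illustrative resource names from the ALB sketch above:

    # outputs.tf of the ALB module -- illustrative
    output "alb_sg_id" {
      value = "${aws_security_group.alb.id}"
    }

    output "app_target_group_arn" {
      value = "${aws_lb_target_group.app.arn}"
    }

    output "route53_zone_id" {
      value = "${aws_route53_zone.this.zone_id}"
    }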

ECS module

The ECS module is in charge of all the ECS components, a long list of pieces that intertwine and sometimes depend on each other, including:

  • EFS components: Creating the Elastic FileSystem itself and its security groups.
  • EC2 User Data: The “initialization” script that runs when the EC2 Instance starts. This script includes mounting the EFS on the EC2 instance to be used later by the container which runs the application.
  • ECS Task Definition: Defines and configures how the Docker container should run. The definition includes CPU and memory allocation, the container image, data volumes (very important to our infrastructure, as the TileServer containers read tile map files and configuration hosted on this data volume), and more (see the trimmed sketch after this list).
  • ECS Service: Defines the desired number of containers (tasks) and their configuration; the service scheduler is also responsible for maintaining the desired count if containers fail or are stopped.
  • ECS Service Autoscaling: Creates the container autoscaling rules that determine the desired number of containers running for our application and the rules for scaling that number in and out. We decided on a desired count of 4 containers, which can scale out to 8 containers in case of high CPU or memory usage for the service.
  • IAM Roles and Policies: All the ECS and EC2 instance roles and policies the cluster needs to run itself efficiently.
  • EC2 Autoscaling Groups and Launch Configuration: Defines the autoscaling group for the EC2 instances (separate from the ECS Service Autoscaling) to determine the number of EC2 instances that should host the application. We decided on a desired number of 2 EC2 Instances (hosting 2 containers each) and up to 4 EC2 instances (hosting 4 containers each) based on CPU and memory autoscaling policies.
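
To make these pieces more concrete, here is a heavily trimmed, hypothetical sketch of the EFS mount, task definition, and service (names, paths, and sizing are placeholders; the autoscaling group, service autoscaling, IAM, and security group wiring are omitted):

    # Illustrative ECS module resources -- heavily simplified
    resource "aws_ecs_cluster" "tileserver" {
      name = "tileserver"
    }

    # EFS file system holding the map tile files and TileServer configuration
    resource "aws_efs_file_system" "tiles" {}

    # User data mounts the EFS volume on each EC2 instance and joins it to the cluster
    resource "aws_launch_configuration" "ecs" {
      image_id      = "${var.ecs_ami_id}"      # ECS-optimized AMI, illustrative
      instance_type = "${var.instance_type}"

      user_data = <<-EOF
        #!/bin/bash
        echo ECS_CLUSTER=${aws_ecs_cluster.tileserver.name} >> /etc/ecs/ecs.config
        yum install -y nfs-utils
        mkdir -p /mnt/efs
        mount -t nfs4 ${aws_efs_file_system.tiles.dns_name}:/ /mnt/efs
      EOF

      lifecycle {
        create_before_destroy = true
      }
    }

    # Task definition: container image, sizing, and the tile data volume
    resource "aws_ecs_task_definition" "tileserver" {
      family = "tileserver"

      container_definitions = <<-EOF
        [
          {
            "name": "tileserver",
            "image": "${var.container_image}",
            "cpu": 512,
            "memory": 1024,
            "essential": true,
            "portMappings": [{ "containerPort": 80, "hostPort": 0 }],
            "mountPoints": [{ "sourceVolume": "tiles", "containerPath": "/data" }]
          }
        ]
      EOF

      volume {
        name      = "tiles"
        host_path = "/mnt/efs"                 # the EFS mount created in user data
      }
    }

    # Service: keep 4 tasks running and register them with the ALB target group
    resource "aws_ecs_service" "tileserver" {
      name            = "tileserver"
      cluster         = "${aws_ecs_cluster.tileserver.id}"
      task_definition = "${aws_ecs_task_definition.tileserver.arn}"
      desired_count   = 4
      iam_role        = "${var.ecs_service_role_arn}"   # IAM details omitted

      load_balancer {
        target_group_arn = "${var.app_target_group_arn}"
        container_name   = "tileserver"
        container_port   = 80
      }
    }

The hostPort of 0 in the port mappings is what triggers the dynamically assigned host ports visible in the target group screenshot earlier.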

TileServer service view in the ECS Cluster. The “Tasks” bar shows all the service containers and their status.

Part 4: Wrapping up — Continuous Deployment

After all this is set up, we now have a scalable, robust containerized cluster. The fully automated deployment takes approximately 3½ minutes to create the entire infrastructure!

Our last step is dealing with ongoing updates from our engineers. These can be new configuration (e.g. new map tile files that are ready to be used) or styles (e.g. colors, fonts, icons). When engineers make these changes, we want the TileServer to be updated automatically to reflect them.

Updating the TileServer consists of a few steps:

  1. Making and committing changes to the git repo
  2. Verifying the configuration changes, e.g. checking that the JSON is valid
  3. Checking that the necessary files referenced in the configuration exist, e.g. map tiles should be stored remotely on AWS EFS and style files should be in the git repo
  4. Pushing the new configuration and styles to the containers
  5. Reloading the application process inside the containers in order to reload the application on each EC2 instance in the ECS cluster

We scripted steps 2–5 and put them into Rundeck, a job scheduler and runbook automation system, so developers can update the TileServer with a single click.

Designing and implementing an ECS cluster was a very challenging task; at the same time, it was an opportunity for us to build a service from the ground up using the best DevOps practices and methodologies. The experience demonstrated how important it is to plan ahead and to design and implement a modular deployment that ensures a stable and scalable service. Most importantly, this effort allowed us to create a self-service deployment to host our own TileServer.

Idan Shifres is a Sr. DevOps Engineer at CBRE Build and a DevOps evangelist. Between finding the best DevOps delivery practices and developing Terraform modules, you can probably find him at meetups or bars looking for the next IPA to add to his list.
