Designing and Implementing an ECS Cluster on AWS for a Map Server — Parts 1 & 2

Idan Shifres · CBRE Build · Jun 21, 2019 · 5 min read

Our story begins like most stories in the SRE/Devops culture, with a simple request: “Two of our applications need a cost-effective solution for interactive maps.” Our first thought (of course) was to use Google Maps or Mapbox. These are popular, easy-to-use, well-supported, and well-documented services.

As we investigated though, we realized that we had some slightly different needs; this led to a number of design choices that we wanted to share.

Part 1: Beginnings — why we needed a map server
Part 2: Design — components of an ECS cluster

Part 1: Beginnings — Why we needed a map server

In the last year and a half, we’ve launched several products that included interactive maps. For example, MarketDash is a storytelling tool for presenting a data-driven narrative about a market; the natural setting for this data is a map.

Visualizing data for 10 submarkets in L.A.

Satellite map of Dallas and its major roadways

When we started developing these products, we quickly realized that there are multiple components that make interactive maps work:

Components of interactive maps: peach-colored squares represent data, green squares represent applications, and the dotted line separates client-side components from server-side ones.

  1. Map tilesets: A collection of raster or vector data broken up into a uniform grid of square tiles at different resolutions.
  2. Map style: A JSON document that defines the visual appearance of a map, e.g. what data to draw, the order to draw it in, and how to style the data when drawing it (see the sketch after this list). The available properties depend on part 4 (the interactive map renderer).
  3. Tile Server: A map server that receives a request specifying latitude, longitude, and zoom level, and returns the corresponding map tiles in the correct format.
  4. Interactive Map Renderer: A front-end library for displaying the map tiles and styles.
  5. Custom data: Any additional data. For example: polygons that correspond to the shape of specific markets and have certain labels and colors.
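To make the split between style and tiles concrete, here is a minimal sketch of a style document conforming to the Mapbox GL Style Spec, written as a TypeScript object. The endpoint URL, source name, and layer are ours for illustration, not from the original setup:

```ts
// A bare-bones style document (component 2): one vector source, one layer.
// The tile URL is hypothetical; a real deployment would point at your tile server.
const style = {
  version: 8,
  sources: {
    openmaptiles: {
      type: 'vector',
      tiles: ['https://tiles.example.com/data/v3/{z}/{x}/{y}.pbf'],
    },
  },
  layers: [
    {
      id: 'water',
      type: 'fill',
      source: 'openmaptiles',
      'source-layer': 'water',            // a layer inside the vector tiles
      paint: { 'fill-color': '#a0c8f0' }, // how to draw it
    },
  ],
};
```

The {z}/{x}/{y} placeholders are filled in by the renderer when it asks the tile server (component 3) for the tile covering a given zoom level and location.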

We looked at multiple services and began using Mapbox for 1–3 and Mapbox GL JS for 4. Mapbox GL JS is an interactive map renderer: it takes map styles conforming to the Mapbox Style Specification, applies them to vector tiles that follow the Mapbox Vector Tile Specification, and renders the result using WebGL.
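Wiring the renderer to a self-hosted style takes only a few lines; a minimal sketch, assuming a hypothetical style URL served from our own infrastructure:

```ts
import mapboxgl from 'mapbox-gl';

// Render an interactive map from a self-hosted style document.
const map = new mapboxgl.Map({
  container: 'map', // id of the <div> hosting the map
  style: 'https://tiles.example.com/styles/basic/style.json', // hypothetical URL
  center: [-96.797, 32.777], // Dallas (lng, lat)
  zoom: 9,
});
```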

While Mapbox was impressive and exceptionally easy to use, it was missing a key feature: offline maps for web applications. At the time, offline maps were in beta for mobile apps; for web applications, they required self-hosting the tilesets and the tile server.

As a result, we ended up replacing 1–3 with open-source or custom components:

  1. Map tilesets: OpenMapTiles, open-source vector tilesets
  2. Tile server: TileServer GL, an open-source map server
  3. Our own map styles: custom map styles according to the Mapbox GL Style Spec

Cue our Devops team: how do we host our map tilesets, TileServer, and map styles in a robust, scalable way, with automated deployments and updates?

Part 2: Design — components of an ECS cluster

Upon hearing that we needed to host our own map components, I said, “No problem! Let’s do it the same way we always do: EC2 instances and Auto Scaling Groups!”

Unfortunately, it wasn’t that simple. The TileServer’s documentation has instructions for installing from npm, but after attempting to create an AMI from the npm installation, I realized it would be difficult to install the virtual display drivers the TileServer uses and get them working reliably with the Node.js module.

An alternative is to install the TileServer as a Docker container. This saves us the time and effort of creating the AMI itself, but containerized services had not previously been part of our infrastructure.

How do you design the deployment of a containerized service on Amazon? Where do you even begin?

Question 1: What are the main AWS components?

  1. EFS (Elastic File System): EFS is Amazon’s managed Network File System and easily mounts onto Amazon’s EC2 instances. We need this to store our map tilesets.
  2. ALB (Application Load Balancer): Balances requests for the application between the containers.
  3. ECS (Elastic Container Service): The container cluster, service, and task definitions that handle container deployment onto the EC2 instances. (All three components appear in the sketch below.)
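The article doesn’t include code, so purely as a hedged illustration, here is roughly how those three main components could be declared with the AWS CDK in TypeScript. All names here are ours, and CDK itself is an assumption (the team’s own tooling, per the bio below, leans on Terraform):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import {
  aws_ec2 as ec2,
  aws_ecs as ecs,
  aws_efs as efs,
  aws_elasticloadbalancingv2 as elbv2,
} from 'aws-cdk-lib';

// Hypothetical stack wiring the three main components together.
export class MapServerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });

    // 1. EFS: shared network file system holding the map tilesets.
    const tilesets = new efs.FileSystem(this, 'Tilesets', { vpc });

    // 2. ALB: spreads incoming tile requests across the containers.
    const alb = new elbv2.ApplicationLoadBalancer(this, 'Alb', {
      vpc,
      internetFacing: true,
    });

    // 3. ECS: the cluster the tile-server containers run in.
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // (tilesets, alb, and cluster are used in the sketches that follow.)
  }
}
```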

Question 2: How should the containers be launched?

Our options were:

  1. AWS Fargate: Run containers without managing servers or clusters; with Fargate there is no need to provision, configure, or scale a fleet of virtual machines.
  2. Self-managed EC2 infrastructure: Managing our own EC2 instances and their scaling machinery: Auto Scaling Groups, Launch Configurations, scaling policies, etc.

Although AWS Fargate removes the need to create and manage your own EC2 instances, we decided to go with option 2, self-managed EC2 instances: at the time of writing, it is impossible to automatically create an EFS mount point and shared Docker volumes on Fargate-managed infrastructure.
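Continuing the hypothetical CDK sketch above, an EC2-backed setup might look like the following. The instance type, Docker image, port, and mount paths are illustrative assumptions, not values from the article:

```ts
// EC2-backed capacity: addCapacity creates an Auto Scaling Group under the hood.
const asg = cluster.addCapacity('MapServerCapacity', {
  instanceType: new ec2.InstanceType('t3.medium'),
  minCapacity: 2,
  maxCapacity: 6,
});

// Mount EFS on every instance at boot so all containers share the tilesets.
asg.addUserData(
  'yum install -y amazon-efs-utils',
  `mkdir -p /mnt/efs && mount -t efs ${tilesets.fileSystemId}:/ /mnt/efs`,
);
tilesets.connections.allowDefaultPortFrom(asg); // allow NFS from the instances

// Task definition: the TileServer GL container, with the EFS mount exposed
// to the container as a host volume.
const taskDef = new ecs.Ec2TaskDefinition(this, 'TaskDef');
taskDef.addVolume({ name: 'tiles', host: { sourcePath: '/mnt/efs' } });

const container = taskDef.addContainer('tileserver', {
  image: ecs.ContainerImage.fromRegistry('maptiler/tileserver-gl'),
  memoryLimitMiB: 512,
});
container.addPortMappings({ containerPort: 8080 }); // TileServer GL's default port
container.addMountPoints({
  sourceVolume: 'tiles',
  containerPath: '/data', // where TileServer GL expects its tilesets
  readOnly: true,
});

const service = new ecs.Ec2Service(this, 'Service', {
  cluster,
  taskDefinition: taskDef,
  desiredCount: 2,
});

// Hook the service up behind the ALB.
const listener = alb.addListener('Http', { port: 80 });
listener.addTargets('TileServer', { port: 80, targets: [service] });
```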

Question 3: How should we scale?

It’s always good practice to “expect the best but prepare for the worst”; that’s why it’s important to make sure the cluster components are scalable and can recover automatically when something goes wrong.

For those reasons, and to make the application infrastructure more resilient to varying request volume and load, we created two autoscaling layers (sketched in code after the list):

  1. Container Autoscaling: the ECS cluster scales the number of containers based on the containers’ CPU and memory usage.
  2. EC2 Autoscaling: scales the number of EC2 instances supporting the ECS containers based on the instances’ CPU and memory usage.
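In the same hypothetical CDK terms, the two layers might be wired up like this (the targets and capacity limits are illustrative, not the article’s actual values):

```ts
// Layer 1: container autoscaling — ECS scales the number of running tasks.
const taskScaling = service.autoScaleTaskCount({ minCapacity: 2, maxCapacity: 10 });
taskScaling.scaleOnCpuUtilization('TaskCpu', { targetUtilizationPercent: 70 });
taskScaling.scaleOnMemoryUtilization('TaskMemory', { targetUtilizationPercent: 75 });

// Layer 2: EC2 autoscaling — the ASG adds or removes instances so there is
// always room to place the containers from layer 1.
asg.scaleOnCpuUtilization('InstanceCpu', { targetUtilizationPercent: 60 });
// Note: EC2 doesn't publish memory metrics by default, so memory-based
// instance scaling would need a custom CloudWatch metric.
```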

This diagram demonstrates the ECS cluster components and their relationships:

The red-colored components are the main components: ALB, ECS, and EFS. The rest are necessary sub-components, which we also automated to keep the ECS cluster functioning.

Click here to read parts 3 & 4 of this story!

Idan Shifres is a Sr. Devops Engineer at CBRE Build and a Devops evangelist. Between finding the best Devops delivery practices and developing Terraform modules, you can probably find him at meetups or bars, looking for the next IPA to add to his list.
