Nginx Meets Amazon ECS: Hosting Multiple Back-End Services Using a Single Load Balancer

Abdul Rahman
The Startup

--

Scalability is as much a key concept in software development as it is in business. We need to ensure that our systems can grow or shrink as our needs change. One major benefit of a scalable architecture is that it makes our system more cost-efficient. In my last tutorial I talked about using AWS’s Elastic Container Service (ECS) coupled with an Application Load Balancer (ALB) and an auto-scaling group of EC2 instances to scale your web application up and down with user traffic, CPU and memory needs, and consequently save a significant amount of money. This tutorial takes it a notch further, showing you how to add multiple back-end apps to your ALB-ECS setup.

Note: This tutorial doesn’t cover web apps that serve static files such as HTML templates. Serving static files using the method I describe here is a bit trickier. This method is intended for back-end apps that don’t require a GUI.

WHY BOTHER?

AWS Application Load Balancers are not dirt cheap. Even though, on the face of it, they may look inexpensive, AWS has a confusing way of billing them. The base cost is, as of today, only “$0.0225 per Application Load Balancer-hour (or partial hour)” in the US East (Ohio) region. But there is an additional charge based on “LCUs” (Load Balancer Capacity Units), which is calculated from 4 factors: new connections, active connections, processed bytes and rule evaluations. According to Example 1 in AWS’s cost breakdown, an Application Load Balancer for a basic app may cost around $22 a month, which works out to around $264 annually. Add to that all the other costs of hosting your app, like your domain, your EC2 instances and your database, and you can see how this adds up over time.

The point I am trying to make is that it may not be a good idea to run a separate load balancer for each of your back-end apps. Say you run two applications with a separate ALB for each of them. That would cost you around $528 annually, going by the rate I mentioned earlier. What if you could save half that amount by using a single Application Load Balancer?

This can be achieved with a feature of AWS ALBs called “content-based routing” or “path-based routing”. It allows the ALB to route traffic to multiple apps based on the URL path. For example, if you run two back-end apps, app1 and app2, you could use the same ALB and reach them at “my-alb.amazon.com/app1” and “my-alb.amazon.com/app2”. Notice the “/app1” and “/app2” bits: they are different paths under the same domain name.

WHAT IS THE ROLE OF NGINX HERE?

Even though path-based routing is a very helpful feature, those paths themselves may not be very useful inside your apps. In the earlier example, for instance, your apps may not support the “/app1” and “/app2” prefixes internally, unless you have custom-built them to cope with this, i.e. added “/app1” at the beginning of each of your, say, Flask routes or Django URL paths, like “/app1/login”, “/app1/home” and “/app1/users”. The problem is that AWS ALBs don’t support URL rewrites, at least as of now. This means you cannot strip the “/app1” bit on the AWS side before the request hits your app. So the same URL path is carried all the way from the end-user’s browser to your load balancer and then to your web app containers inside your EC2 instances.

Enter Nginx (drumroll, please!). Nginx is a free, open-source web server. It can do many of the things AWS is doing for us, like load balancing and handling HTTP requests. But one of its most common uses is as a reverse proxy. In our case, this simply means Nginx can act as a middleman between AWS and our web app container. Add to that the fact that Nginx is really good at URL rewrites and we have exactly what we need: a middleman that strips the unwanted bits from our URL paths and passes the requests on to our web app.

Prerequisites for this tutorial:

Besides a free-tier AWS account, you will need the AWS CLI and Docker installed on your machine. You can install Docker from here and learn how to download and set up the AWS CLI here. It is assumed that you already have two Dockerised web apps in hand. If not, you can learn how to create one from my earlier tutorial or download them from my git repository here. You’ll find the Docker files for “app1” (Dockerfile) and Nginx (Dockerfile_nginx) in the repo. You can reuse the Dockerfile to build the Docker image for “app2”; I have left comments on this in the Dockerfile.

WHAT DOES THE SETUP INCLUDE?

This setup includes the following:

  • An auto-scaling group of EC2 instances
  • An ECS cluster
  • 2 ECS task definitions
  • 2 ECR repositories containing docker images for app1 and app2
  • 2 ECS Services — one for each app

In this tutorial we’ll use Nginx as a reverse proxy that rewrites the URL and then reroutes traffic to our web app containers. To do this, we’ll run an Nginx Docker container alongside each of our web app containers and link the two containers together using the “bridge” network mode in ECS.

BUILDING THE SETUP

1-Create ECR repositories and upload the Docker images

To create an ECR repo go to the ECS console and select “Repositories” from the left-hand pane. Click on “Create repository”. Give the repository a name. Enable the “Scan on push” option and click on the “Create repository” button.
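If you prefer the command line, the same repository can be created with the AWS CLI (the repository name here is just an example; repeat for app2 and the Nginx image):

$ aws ecr create-repository --repository-name app1 --image-scanning-configuration scanOnPush=true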

Now click on the “View push commands” button and a popup will give you instructions on how to push your docker images into the repository.
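The exact commands in the popup include your own account ID, region and repository name, but they are along these lines (shown here for app1, with placeholders for anything account-specific):

$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
$ docker build -t app1 .
$ docker tag app1:latest <account-id>.dkr.ecr.<region>.amazonaws.com/app1:latest
$ docker push <account-id>.dkr.ecr.<region>.amazonaws.com/app1:latest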

Once you have finished pushing your image and can see it listed inside your ECR repository, note down the image URI. We’ll need this in a later step.

Repeat this process until you have all three Docker images (app1, app2 and nginx) uploaded to ECR.

2-Run the CloudFormation template

Since I have already explained how to create an ECS setup via the AWS console in my last tutorial, this time we’ll use AWS CloudFormation to build it for us, using a CloudFormation template. CloudFormation is arguably the best utility that AWS offers developers. It is an IaC (Infrastructure as Code) service: you write templates in YAML or JSON specifying all the AWS resources you want built and their configurations, and CloudFormation runs the template and automatically builds those resources for you. Let’s see how to do this.

Download the CloudFormation template that we will use from here (it is too lengthy to share in the article). It is written in YAML. The “Parameters” section at the top lists the variables that are reused one or more times throughout the template. You should edit the Parameters and replace the “<>” placeholders with information specific to your AWS account.
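As a purely illustrative example (the parameter names in the actual template may differ), the edited Parameters section could look something like this:

Parameters:
  # parameter names below are illustrative only
  VpcId:
    Type: AWS::EC2::VPC::Id
    Default: <your-vpc-id>
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Default: <your-comma-separated-subnet-ids>
  App1Image:
    Type: String
    Default: <app1-image-uri-from-ecr>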

Use the following commands to find your VPC ID and subnets:

$ aws ec2 describe-vpcs
$ aws ec2 describe-subnets
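If the output of those commands is too verbose, you can narrow it down with the CLI’s --query and --filters options, for example:

$ aws ec2 describe-vpcs --query "Vpcs[].VpcId"
$ aws ec2 describe-subnets --filters "Name=vpc-id,Values=<your-vpc-id>" --query "Subnets[].SubnetId"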

From the AWS console, click on the “Services” drop-down and select “CloudFormation”. This will take you to the CloudFormation console. From there, click on the three-dashes (hamburger) icon in the top-left corner and select “Designer”. Once in the Designer, click on the “Template” tab at the bottom and select the “YAML” format in the top-right corner of the text area. Copy and paste the CloudFormation template into the text area.

Now, click on the “Create stack” button at the top of the console. The button looks like a cloud with an up arrow inside it. On the next screen click “Next”, give your stack a name, then click “Next” and “Next” again. Once you reach the “Review” stage, tick the acknowledgement tick-box and click on “Create stack”. You’ll then see CloudFormation building all the resources specified in the template, one by one.
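If you would rather skip the Designer, the same template can be deployed from the command line. The stack name and file name below are placeholders, and the --capabilities flag is the CLI equivalent of the acknowledgement tick-box (the template creates IAM roles):

$ aws cloudformation create-stack \
    --stack-name my-ecs-stack \
    --template-body file://template.yaml \
    --capabilities CAPABILITY_NAMED_IAM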

WHAT DOES THE TEMPLATE BUILD?

The template builds:

  1. an ECS cluster
  2. an Application Load Balancer and an ALB listener
  3. 2 ECS Task Definitions and 2 ECS Services built from them
  4. an auto-scaling group of 4 EC2 instances
  5. IAM roles for the Task Definitions and the EC2 instances
  6. Target Groups for the ALB listener, to which the instances will be added
  7. a security group for the ALB and the instances
  8. 2 listener rules for the ALB listener

HOW THE SETUP WORKS

The way the setup is configured is not hard to understand. The ALB listener listens for internet traffic directed at the ALB; this is where the traffic first hits. Based on the listener rules, the ALB forwards the traffic to the app1 target group if the URL path begins with “/app1”, and likewise for app2. If neither path is used, the ALB returns a custom message. The app1 and app2 target groups have 2 EC2 instances each, and each instance runs its respective app.
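As a rough sketch of what such a rule looks like in CloudFormation (the resource names ALBListener and App1TargetGroup are illustrative and may not match the ones used in the template), the app1 listener rule is along these lines:

App1ListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref ALBListener          # the listener created by the template
    Priority: 1
    Conditions:
      - Field: path-pattern
        Values:
          - /app1*                         # match any URL path starting with /app1
    Actions:
      - Type: forward
        TargetGroupArn: !Ref App1TargetGroup   # send matching traffic to the app1 target group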

The 2 ECS services run the apps (or “ECS tasks”) inside the EC2 instances based on the configurations provided in the Task Definitions. The Task Definitions are configured as follows:

Each ECS task runs 2 Docker containers, the Nginx container and the app container, and one such task runs in each of the 4 instances. The Nginx container is mapped to port 80 of the EC2 instance, which is where the traffic from the ALB arrives, but the app container is not. As mentioned earlier, the Nginx container mediates the communication between the instance port and the app container.
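A minimal sketch of the relevant part of such a Task Definition in CloudFormation looks like the following; the names, memory values and image placeholders are illustrative and the actual template may differ:

App1TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: app1-task
    NetworkMode: bridge                 # lets the two containers talk to each other over a link
    ContainerDefinitions:
      - Name: nginx
        Image: <nginx-image-uri>
        Memory: 128
        PortMappings:
          - HostPort: 80                # the instance port that the ALB target group points at
            ContainerPort: 80
        Links:
          - app1                        # makes "app1" resolvable as a hostname inside the Nginx container
      - Name: app1
        Image: <app1-image-uri>
        Memory: 256                     # no host port mapping: only Nginx talks to this container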

When the traffic reaches the Nginx container, Nginx rewrites the URL, removing the “/app1” or “/app2” bit, because we have instructed it to do so in the “nginx.conf” file. Nginx then passes the request on to the app container, receives a response and passes that back to port 80 of the instance. The response then travels all the way back to the end-user. Pretty cool, right?
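A minimal sketch of the kind of rewrite the nginx.conf file performs for app1 (the actual file may differ; the upstream name “app1” assumes the container link shown above, and port 5000 is just an assumption for whatever port your app actually listens on):

server {
    listen 80;

    location /app1 {
        # strip the "/app1" prefix so the app only ever sees paths like "/login" or "/home"
        rewrite ^/app1/?(.*)$ /$1 break;
        # "app1" resolves to the linked app container; 5000 is an assumed app port
        proxy_pass http://app1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}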

TEST IF THE SETUP WORKS!

To see if your setup works, go to your Application Load Balancer and copy its DNS name. Paste it into a web browser. You should be able to access your apps at “/app1” and “/app2”!
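You can also check it from the terminal with curl, replacing the placeholder with your ALB’s DNS name:

$ curl http://<your-alb-dns-name>/app1
$ curl http://<your-alb-dns-name>/app2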

To troubleshoot, or just to see what’s happening inside your instances, you can connect to an instance using Session Manager from the EC2 console and look at the logs of the app and Nginx Docker containers. You can also view the ECS agent logs by typing the following:

$ sudo cat /var/log/ecs/ecs-agent.log
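Once connected to an instance, the container logs themselves can be inspected with the usual Docker commands, for example:

$ sudo docker ps
$ sudo docker logs <container-id>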
