Building a chat app and deploying it using AWS Fargate

Nathan Peck
Feb 1, 2018 · 9 min read

This article walks through the process of building a chat application, containerizing it, and deploying it using AWS Fargate. The result of following along with this guide will be a working URL hosting a public, realtime chat web app. But all this will be accomplished without needing to have a single EC2 instance on your AWS account!

If you want to follow along with the article and build and deploy this application yourself you need to make sure that you have the following things:

  • Node.js (The runtime language of the chat app we are building)
  • Docker (The tool we will use for packaging the app up for deployment)
  • An AWS account, and the AWS CLI (We will deploy the application on AWS)

Once you have these resources ready you can get started.

Starting from open source

To power the chat application in this demo we are going to use Socket.IO, one of the most popular realtime communication frameworks for the web. It operates via a Node.js server side component, and provides client side libraries for every major runtime language, including browser JavaScript.

To run a chat application using Socket.IO we are going to need a server side application which runs a Socket.IO server. But it also needs to host some static web content: HTML and JavaScript that can be fetched by a web browser.

When a user loads the URL of this application, their browser will download the HTML and run JavaScript code locally on their computer. This is called a “web application”. The web application will connect to the server side application to send messages in the chat room, and receive messages from other users.

So now that you understand the pieces of this architecture, you need to actually build and deploy it. Fortunately there is already an open source chat room demo in the Socket.IO examples, and this is a great starting point. All you’ve got to do is open the command line and use the following commands to clone a repository that has the sample code onto your machine:

git clone
git checkout 1-starting-point

Now that you have the sample code, let’s test it out on your local machine:

npm install
npm start

You will see a message indicating that the server has started and is listening on port 3000. This means that the application is now running on your local machine. If you open your browser and enter http://localhost:3000 in your URL bar you will see the web application:


You can open two browser tabs and chat with yourself on your local machine, but that’s not very interesting. You probably want to be able to send your friends a link to the application and chat with them, but right now it is only accessible on your own local machine.

To solve this problem let’s package the application up in a docker container and run it on AWS.

Building a Docker container

The first step to packaging your application into a Docker container is adding a Dockerfile to the project. This file is a series of instructions that tell Docker how to fetch the application and any of its dependencies, do any build steps that are necessary, and finally run the application.

Let’s go through the Dockerfile line by line to explain what each line does, and why it is written the way it is.

The Dockerfile is organized into two build stages. The first uses a full Node.js environment that includes NPM. It adds the package definition file, fetches and installs the dependencies, and copies the application code in. The second stage uses a stripped down Node environment that does not include NPM. It just copies the built app from the previous stage, essentially throwing away all the unneeded extras that existed in the first stage. This way the image is clean and minimal, containing only the bare minimum of what is needed to run the application.
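The two-stage structure described above can be sketched roughly like this (the base image tags and the entry point filename are assumptions of mine; the actual Dockerfile in the repo may differ in its details):

```dockerfile
# Stage 1: full Node.js environment that includes NPM, used only to
# install dependencies and assemble the application
FROM node:8 AS build
WORKDIR /srv
ADD package.json .
RUN npm install
ADD . .

# Stage 2: stripped down Node environment without NPM; copy in the
# already-built app so the final image stays minimal
FROM node:8-slim
WORKDIR /srv
COPY --from=build /srv .
EXPOSE 3000
CMD ["node", "index.js"]
```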

You can execute this build by using the following command:

docker build -t chat .

This tells Docker to use the Dockerfile in the current directory to build an image, and to tag the image with the name chat.

You can then run the image as a container on your local machine:

docker run -d --name chat -p 3000:3000 chat

This tells Docker to run the image tagged chat, with the container name chat. The -d flag tells Docker to run the application “detached” in the background. The -p 3000:3000 flag tells Docker to forward any traffic going to port 3000 on your local machine into the container, where the application code is listening.

If you run docker ps it will list the running containers on the machine, and once again you can visit http://localhost:3000 in the browser and see the application running. It looks the same on the surface, but rather than running directly on your host machine, the application is running inside a docker container.


You now have a container that runs on your local machine. The next step is to get this container uploaded off your local machine, into your AWS account, and launch it there.

Pushing your image to a private container registry

The way you get your container image into your AWS account is by creating a container registry and pushing your container image to it. Just like a git repository is a place where you can push your code and it keeps track of each revision to the code, a container registry is a place where you can push your built images, and it keeps track of each revision of your application as a whole.

The AWS service for storing your container images is Elastic Container Registry (ECR). You can create a new private registry for your application by navigating to ECR in the AWS dashboard and clicking the “Create repository” button:


After you click the button and enter a name for the repository you are given a series of commands you can use to push to the repository. The first step is to log in to your private registry. The get-login command prints out a docker login command, and wrapping it in $( ) executes it:

$(aws ecr get-login --no-include-email --region us-east-1)

Then you need to retag the local image you built in the previous step so it can be uploaded to the new repository, substituting your own account ID into the repository URI shown in the console:

docker tag chat:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/chat:latest

And then push the image:

docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/chat:latest

Once the image has uploaded you can see it show up in the AWS console when you view the details of the chat repository:


The docker image is now uploaded to a private registry in your own AWS account. The next step is to run it!

Running the application from your docker registry

To launch the docker image on your account under AWS Fargate let’s use some CloudFormation templates from AWS. CloudFormation is an AWS tool which allows you to describe the resources you want to launch on your account as metadata. Then CloudFormation reads this file and automatically creates, modifies, or deletes resources on your behalf. This approach is called “infrastructure as code” and it allows you to quickly and automatically configure your AWS resources without making mistakes by mistyping something or skipping a step.
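To give a feel for what such a template looks like, here is a heavily simplified fragment in the same spirit. This is not the actual recipes file: the resource names and values are illustrative, and it assumes an ImageUrl parameter is defined elsewhere in the template.

```yaml
# Illustrative fragment only: a Fargate task definition and service,
# similar in spirit to what the recipes templates create for you
Resources:
  ChatTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: 256
      Memory: 512
      ContainerDefinitions:
        - Name: chat
          Image: !Ref ImageUrl        # the image pushed to ECR
          PortMappings:
            - ContainerPort: 3000     # port the Node.js app listens on

  ChatService:
    Type: AWS::ECS::Service
    Properties:
      LaunchType: FARGATE
      DesiredCount: 1
      TaskDefinition: !Ref ChatTaskDefinition
```

The real templates also wire in networking, IAM roles, and the load balancer, which is why using the provided recipes is much easier than writing this by hand.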

I’ve added the templates that are needed to deploy the application in a branch of the git repo:

git checkout 3-deployment

The templates you need to use are located in the recipes/ folder inside the project repo. You can use the AWS CLI to quickly deploy a Fargate cluster using the template at recipes/public-vpc.yml:

aws cloudformation deploy --stack-name=production --template-file=recipes/public-vpc.yml --capabilities=CAPABILITY_IAM

This command may take a few minutes while it sets up a dedicated VPC for the application, a load balancer, and all the resources you need to launch a docker container as a service in AWS Fargate.

Once it completes you can launch the service template, also in the recipes/ folder, to get your container running in the cluster. This time let’s use the console so it’s easier to enter all the parameters.

You can navigate to the CloudFormation console and click the “Create Stack” button, then choose a file to upload. You want to upload the service template from the recipes/ folder of the project.

After upload you are greeted by a screen where you can customize the parameters:


There are a number of parameters here but the only ones we need to worry about right now are:

Stack Name

Let’s name the CloudFormation stack itself. You can call it chat.

Image URL

This is the image you uploaded earlier, the same value from the docker push command, something like <account-id>.dkr.ecr.us-east-1.amazonaws.com/chat:latest.

Desired Count

This is how many copies of the container to run. The default in this template is greater than one, but we have not yet configured this app to be horizontally scalable, so you need to change this value to 1 (we will extend the application to be horizontally scalable in a follow-up article).

Service Name

This is a name for the service itself. Once again you can call it chat.

Container Port

This is the port number that the application inside the container needs to receive traffic on. You need to change it to 3000, since this is the default port that the Node.js app receives traffic on.

Once you enter these values you can click “Next” a few times to review the stack and finally “Create” to launch the stack. It will show up in the console with a status of CREATE_IN_PROGRESS:


Once the status changes to CREATE_COMPLETE you are ready to check out the running application. If you click on the stack and select the “Outputs” tab you will see an output with the application’s external URL:


This is the public facing URL of the application. Click the link to load the web application in your browser:


Once again there is a chat application running in your browser, but this time instead of running locally on your own machine, it is now running inside a docker container in AWS Fargate, and it has a public facing address on the internet that you can give to your friends so they can chat with you from their own computer.

And best of all if you navigate to the EC2 Dashboard on your AWS account you will see this:


That’s right! There are zero running instances. The containerized application is being run by AWS Fargate, so there are no EC2 instances that you need to manage or worry about.


This article showed how to take a Node.js application, build a docker container for it, upload the container to a registry hosted by AWS, and then run the container using AWS Fargate. This is only the beginning though. In the next installment of this series we will modify the application to be horizontally scalable, and configure autoscaling for our containerized deployment. This will allow the app to automatically scale up as demand increases, with no admin intervention required, even if thousands of users hit our application.

Part Two:

Nathan Peck

Written by

Developer Advocate for Container Services at Amazon Web Services

Containers on AWS

News and tutorials on how to run container deployments on AWS using Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), AWS Fargate, and Amazon Elastic Container Registry (ECR).

