Learn How to Containerize a Multi-Container Web Application and Deploy It with Docker-Compose

Carolina Delwing Rosa
8 min read · Jul 26, 2023


In today’s tutorial, I will show you how to containerize a web application and deploy it on AWS using Docker-Compose. Yes, folks, we’re going to talk about containers. Yay!

You’ll learn how to build Dockerfiles, Docker images, Docker-compose files, and how to use them to deploy applications.

It’s impossible to explain Docker and Containers in only one article because containers are a whole world of information. Therefore, I’ll do my best to keep it simple and short, but I would strongly recommend you read more about it.

What is a container?

A container can be defined as an isolated unit that packages an application and its resources (software, binaries, and configurations). You can see it as a running process on your machine that is isolated from all the other processes. Containers are portable, which means you can run them on any OS, on local or virtual machines, and they use the kernel of the host OS.

Oh, so are containers and virtual machines the same? No, they are not! This is a classic question: while virtual machines virtualize the hardware and run a full guest OS, containers are lighter because they share the host OS kernel.

Therefore, if you need to run NGINX, for example, you can deploy an NGINX container in any OS that supports containerization (Linux, Mac, Windows,…), because the container already has all dependencies and libraries required to run the application.

What is Docker?

It’s basically three things: a containerization technology (container runtime), an open-source project that improves the technology, and the name of the company that supports this technology. You can read more about it here.

Some other important concepts you need to know:

Container Image: is a template that contains everything required to run an application, such as dependencies, configurations, binaries, environmental variables, metadata, etc. A container, therefore, is a running instance of an image.

Dockerfile: is a text-based file that contains all necessary instructions to build a Docker image layer by layer: desired base image, volumes, software to be installed, commands to be executed, etc.

Docker Hub: is a Docker Image repository. In other words, it is a website where you can push (share/upload) and pull (download) container images.

Docker-Compose: is a tool used to run multi-container applications. The configuration is done through a YAML file, which is used to create and start the services required for your application to work.

Docker Persistent Data Storage: containers and the data inside them are, by default, ephemeral. If the container dies, the data is lost. However, we can keep this data using persistent data storage (volumes and bind mounts).

Volumes: are external directories completely managed by Docker, being the preferred mechanism for persistent data storage. Every time you create a new volume, a new directory with the same name is created in “/var/lib/docker/volumes”.

Bind Mounts: are external directories that rely on the host machine OS and are not managed by Docker. You need to ensure that there is an available directory in the host, and then you mount this directory to the container.
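To make the difference concrete, here is a minimal sketch of both approaches (the volume and directory names below are just illustrative):

# Named volume: Docker creates and manages it under /var/lib/docker/volumes
docker volume create db-data
docker run -d -v db-data:/data/db mongo:latest

# Bind mount: you create and manage the host directory yourself
mkdir -p /home/ubuntu/mongo-data
docker run -d -v /home/ubuntu/mongo-data:/data/db mongo:latest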

Some famous Docker commands:

docker pull — pulls Docker images from a registry such as Docker Hub.

docker run — runs a container based on the desired image.

docker ps — lists running containers (add the -a flag to include stopped ones).

docker build — builds an image from a Dockerfile.
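As a quick illustration, here is how these commands look in practice (using the official NGINX image as an example):

# Download the official NGINX image from Docker Hub
docker pull nginx:latest

# Run it in the background, mapping port 80 of the host to port 80 of the container
docker run -d -p 80:80 --name my-nginx nginx:latest

# List the running containers
docker ps

# Build an image from the Dockerfile in the current directory, tagging it as my-app
docker build -t my-app .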

Prerequisites of this project:

  • Access to a Linux terminal
  • AWS Account
  • Basic Docker and AWS knowledge

1. Create an EC2 instance and clone the GitHub repository

Since we are going to deploy this application on AWS, let’s create an EC2 instance!

Log in to your AWS account and launch an EC2 instance with an Ubuntu 20.04 AMI, t2.micro size, a key pair, and the option to auto-assign a public IPv4 address enabled. In addition, set up the security group to allow, from anywhere, SSH connections (port 22), HTTP/HTTPS access (ports 80 and 443), and custom TCP (port 3000). If you have any questions regarding these steps, check my previous articles or leave me a message.
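If you prefer the command line, launching an equivalent instance with the AWS CLI looks roughly like this (the AMI, key pair, and security group IDs below are placeholders you must replace with your own):

aws ec2 run-instances \
  --image-id <UBUNTU_20_04_AMI_ID> \
  --instance-type t2.micro \
  --key-name <YOUR_KEY_PAIR> \
  --security-group-ids <YOUR_SECURITY_GROUP_ID> \
  --associate-public-ip-address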

Once the EC2 is running, go to your terminal and SSH into it:

ssh -i <YOUR_KEY_PAIR> ubuntu@<YOUR_EC2_PUBLIC_IP>

Now, install Git, Docker, and Docker-Compose on your EC2 instance. On Ubuntu 20.04, a minimal way to do it looks like the sketch below (the official Docker documentation covers other installation methods):
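# Install Git, Docker, and Docker-Compose from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y git docker.io docker-compose

# Start Docker and enable it on boot
sudo systemctl enable --now docker

# Verify the installations
git --version && docker --version && docker-compose --version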

Then, clone the GitHub repository, which contains all the required files for the application to run: application code, database file, dependencies file, entry point script, Dockerfiles, and Docker-compose file. Since the focus of this article is not the application per se, check my previous article here if you are curious about it. It's basically a NodeJS API connected to a MongoDB database.

git clone https://github.com/caroldelwing/WCD-DevOps.git
cd WCD-DevOps/project_34

2. Building the Dockerfiles

Now, let’s take a look at the Dockerfiles used in this project. We have one Dockerfile for the application, and another Dockerfile for the database, which means that we are going to have two different running containers, and we’re going to manage the deployment using a Docker-compose file.

Note: remember that this tutorial was done for learning purposes only.

MongoDB Dockerfile

#Base image
FROM mongo:latest

#Set the working dir
WORKDIR /data

#Copy the CSV file and entrypoint script from host to container
COPY nhl-stats-2022.csv ./
COPY entrypoint.sh ./

#Make the entrypoint script executable
RUN chmod +x entrypoint.sh

#Execute the entrypoint script
CMD ["./entrypoint.sh"]

FROM — indicates which base image to use, and it must be the first instruction of your Dockerfile. In this case, we want to deploy a MongoDB database, so it makes sense to use the “mongo” base image, which already has MongoDB installed by default. You can search for other images on DockerHub.

WORKDIR — specifies the working directory inside the container.

COPY — it copies files from the host (the EC2 instance) into the image. Here, we are copying the CSV file for the database and the entry point script.

RUN — it executes commands in a new layer of your image during the building process. Here, we are making the entry point script executable.

CMD — it defines the command that is executed once the container is up and running. Here, we are finally executing the entry point script, which is responsible for starting MongoDB, setting the host as “db” (this information will be important later), and importing the CSV file into a new collection inside the container. If you are curious about the script, just check it out here.
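For reference, a simplified sketch of what such an entrypoint script typically looks like (the real script lives in the repository; the database and collection names below mirror the ones used in this project):

#!/bin/bash
# Start mongod in the background, listening on all interfaces
mongod --bind_ip_all --fork --logpath /var/log/mongod.log

# Import the CSV file into the WCD_project2 database; the host "db" is the
# name this service will have on the Docker-Compose network
mongoimport --host db --db WCD_project2 --collection nhl_stats_2022 \
  --type csv --headerline --file /data/nhl-stats-2022.csv

# Keep the container running in the foreground
tail -f /var/log/mongod.log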

NodeJS Application Dockerfile

#Base image and multi-stage build
FROM node:14-alpine as build
#Set the working dir
WORKDIR /app
#Copy package.json to the working dir
COPY package.json ./
#Install the dependencies
RUN npm install
#Copy the rest of the application code and packages
COPY . .
#Starts the next stage of the dockerfile
FROM build as prod
#Expose the app port
EXPOSE 3000
#Start the app
CMD ["npm", "start"]

Finally, for the application Dockerfile, we’ll use a multi-stage strategy to build our image, with two “FROM” blocks, with the goal of reducing the image size.

First of all, we’ll use node:14-alpine as the base image, which already has NodeJS and npm installed. Then, we’ll copy the dependencies file and install it. After that, we’ll copy the remaining files, which include the application code, we’ll start a new FROM block, expose port 3000 (because the application is listening on port 3000), and finally, we’ll start the application once the container is up. Easy, right?
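If you want to shrink the final image further, a common variation copies only what the app needs at runtime into the second stage, for example (an illustrative alternative, not the project’s actual Dockerfile):

#Build stage: install all dependencies and copy the source code
FROM node:14-alpine as build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

#Production stage: start from a clean base and copy only the app files
FROM node:14-alpine as prod
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]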

3. Building the Docker-compose file

Once your Dockerfiles are ready, let’s jump to the most important part of this tutorial: building the docker-compose file. With this YAML file, we can define the environment of our application: we define the services, number of containers, volumes, environmental variables, network, etc. Let’s take a look at our docker-compose file:

version: '3.7'
services:
  db:
    build:
      context: .
      dockerfile: ./Dockerfile.mongodb
    restart: always
    volumes:
      - db-data:/data/db
    ports:
      - 27017:27017
  app:
    build:
      context: .
      dockerfile: ./Dockerfile.app
    restart: always
    depends_on:
      - db
    environment:
      - DB_HOST=db
      - DB_PORT=27017
      - DB_NAME=WCD_project2
    ports:
      - 3000:3000
volumes:
  db-data:

version — the version of the Compose file format you are using.

services — the block where the services of the application are defined. In this case, we have two services: the first one is db (remember when we defined the host as “db” when importing the CSV file into MongoDB? It was the name of the service!), and the second one is app.

build — it tells Docker to build the image from the specified Dockerfile, which is located in the same directory as the Docker-compose file. Here, we have two build blocks: the first one for the db service and the second one for the app service.

volumes — it creates a volume outside the container and mounts it to the path /data/db inside the container, avoiding database data loss if the container dies.

ports — it maps a container port to a host port so the service can be accessed from the host machine. For the app service, as the application is listening on port 3000 of the container, we map port 3000 in the container to port 3000 of the host (the EC2 instance), making the application accessible on port 3000 of the host.

depends_on — it states that the app service depends on the db service. In other words, Docker Compose will start the db service before starting the app service.

environment — it defines the variables that the app service will use to connect to the MongoDB database.
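Once the stack is up, you can quickly confirm that these variables actually reach the app container (run it from the project directory):

sudo docker-compose exec app env | grep DB_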

4. Time to action :)

Once you have everything ready, let’s have some fun by executing one short and powerful command in your terminal:

sudo docker-compose up

If everything went well, you should see the images being built, followed by the logs of both containers (among many other lines).
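A couple of handy checks at this point (run from the project directory, in a second terminal): docker-compose ps shows the state of both services, and docker-compose logs follows their output.

sudo docker-compose ps
sudo docker-compose logs -f app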

Let’s test it! Copy the public IP of your EC2 instance, paste it into your web browser pointing to port 3000 (http://<YOUR_EC2_PUBLIC_IP>:3000), and edit the route according to the desired output.

Available routes:

  • / - returns all documents in the nhl_stats_2022 collection.
  • /players/top/:number - returns top players. For example, /players/top/10 will return the top 10 players leading in points scored.
  • /players/team/:teamname - returns all players of a team. For example, /players/team/TOR will return all players of the Toronto Maple Leafs.
  • /teams - returns a list of the teams.
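You can also hit the routes from the terminal with curl (replace the placeholder with your instance’s public IP):

curl http://<YOUR_EC2_PUBLIC_IP>:3000/teams
curl http://<YOUR_EC2_PUBLIC_IP>:3000/players/top/10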

If you access the /teams route, you should see the list of teams displayed in your browser.

We did it! We successfully containerized a multi-container web application and deployed it on AWS using Docker-Compose.

Test the other routes and tell me if it worked for you as well. Hope you guys enjoyed this project, and see you on the next one! :)


Carolina Delwing Rosa

Engineer & tech enthusiast. Passionate writer. Documenting my journey into DevOps & Cloud. Reach me here: https://www.linkedin.com/in/carolinadelwingrosa/