Deploy a Static Web App on AWS with Docker, Amazon ECR, and Amazon ECS

Eugene Miguel
30 min read · Jun 19, 2023


Mastering container deployment with ease and efficiency.

What is Container Deployment?

Container deployment is a method for quickly building and releasing complex applications. Docker container deployment is a popular technology that gives developers the ability to construct application environments with speed at scale.

Photo by AVI Networks on Google

Why use Container Deployment?

Container deployments are well-suited to a variety of modern software and infrastructure strategies, including the microservices approach. They can speed up application development and reduce the load on IT operations teams, because containers are abstracted away from the environments they run in.

Container deployments can replace many of the tasks previously handled by IT operations. When a tool like Docker deploys multiple containers, it places applications in virtual containers that run on the same operating system. This provides a benefit not offered by virtual machines. Using a virtual machine requires running an entire guest operating system to deploy a single application.

This is costly and slow if deploying many applications. If you deploy a Docker container, each container has everything needed to run the app and can be easily spun up or down for testing. This is how container deployment saves resources like storage, memory and processing power and speeds up the CI/CD pipeline.

Introduction

Hi! Welcome back to another project. We will embark on a journey to master container deployment. I will teach you how to use Docker, Docker Hub, Amazon ECR, and Amazon ECS to deploy this website on AWS.

The AWS services that we will use:

  • A VPC with public and private subnets
  • NAT gateways
  • Security groups
  • An Application Load Balancer
  • Amazon Elastic Container Registry (ECR)
  • Amazon Elastic Container Service (ECS)
  • AWS Certificate Manager
  • Route 53

We will also use DevOps tools such as GitHub, Docker, and AWS to create, manage, and deploy containerized applications with ease and efficiency.

I want to emphasize that completing my first 5 projects is crucial. It will help you piece together and understand the concept of this project.

Requirements

  1. Install Git and Visual Studio Code on your computer. Register for a GitHub account. Create a key pair and add the public SSH key to GitHub.
  2. Install the AWS CLI and create an IAM user and named profile.
  3. Create an AWS account.
  4. Download and install Docker.
  5. Create an Amazon ECR repository.
  6. Create an ECS cluster.

Reference Architecture — we will use this architecture and the steps below to complete this project.

Objectives

a. Install Git and Visual Studio Code on your computer. Register for a GitHub account. Create a key pair and add the public SSH key to GitHub.

  1. Create a GitHub repository to store the Dockerfile.
  2. Clone the GitHub repository to your computer.
  3. Sign up for a Docker Hub account.
  4. Download and install Docker on your computer.
  5. Create the Dockerfile.
  6. Build the container image.
  7. Start the container.
  8. Create a repository in your Docker Hub account.
  9. Push the image to your Docker Hub repository.
  10. Install the AWS CLI. Create an IAM user and named profile.
  11. Create an Amazon ECR repository to store your image.
  12. Push the image to your ECR repository.
  13. Create a VPC with public and private subnets in 2 availability zones.
  14. Place resources such as the NAT gateways, bastion host, and application load balancer in the public subnets.
  15. Create security groups.
  16. Create an application load balancer.
  17. Create an ECS cluster.
  18. Create a task definition.
  19. Create a service.
  20. Use Route 53 to register our domain name and create a record set.
  21. Use AWS Certificate Manager to secure web communication to our website.

Let’s do this!

Photo by Elisabeth Wales on Unsplash

2. Sign up for a Free GitHub Account

We will create a GitHub account to store all of this project's code files. For details on how to set up a GitHub account for this project, visit my previous tutorial here where I explained it thoroughly.

3. Create Key Pairs

We will create a key pair on our computer so we can use it to clone our private GitHub repository. For details on how to create key pairs for this project, visit my previous tutorial here where I explained it thoroughly.
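If you just want the gist, here is a minimal sketch of the key-generation command (the email address is only a label for the key and a placeholder):

ssh-keygen -t ed25519 -C "you@example.com"

Press Enter to accept the default file location; this creates a private key and a public key (the .pub file) in your .ssh folder.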

4. Add the Public SSH Key to GitHub

We will upload our key pair to GitHub. Previously, we created a key pair; the next thing we need to do is upload the public key of that key pair to GitHub. Afterwards, we'll be able to clone our GitHub repository.

For details on how to add the public SSH key to GitHub, visit my previous tutorial here where I explained it thoroughly.

5. Install Git

We will clone our repository to our local computer. Cloning means creating an identical copy of your GitHub repository on your local computer. When you clone a GitHub repository, all the files in the repository are downloaded to your computer so that you can work with them easily.
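As a quick example, assuming your repository is named docker-projects (the name used later in this tutorial) and your SSH key is set up, the clone command looks like this (substitute your own GitHub username):

git clone git@github.com:<your-username>/docker-projects.git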

For details on how to install Git, visit my previous tutorial here where I explained it thoroughly.

6. Install Visual Studio Code

Visual Studio Code is a text editor, and we will install it on our computer. For details on how to install it, visit my previous tutorial here where I explained it thoroughly.

7. Create a GitHub Repository to Store the Dockerfile

Reference Architecture

1. We will create a GitHub repository to store the Dockerfile.

For details on how to create a GitHub repository, visit my previous tutorial here where I explained it thoroughly.

When you have successfully created your docker-projects repository, it should look like this.

8. Clone the GitHub Repository to Your Computer

Reference Architecture

2. Clone the GitHub repository on your computer.

For details on how to clone the repository on your computer, visit my previous tutorial here where I explained it thoroughly.

When you have successfully cloned your docker-projects repository, it should look like this.

9. Sign up for a Docker Hub Account

Reference Architecture

3. Sign up for a Docker Hub account.

When we build our container images, we will store them in Docker Hub. To sign up for a Docker Hub account, go to hub.docker.com and register.

Complete the information needed to create a Docker ID and sign up.

Choose the Personal plan and click Continue with Free. You may need to verify your email address before proceeding.

We have successfully signed up for a free Docker account and verified our email address.

10. Enable Virtualization on Your Computer

Before you can install Docker on your Windows computer, virtualization must be enabled. It is often enabled by default; to check whether it is enabled on your computer, watch this video by Triple-A Tech Solution.

If you've verified Virtualization Enabled In Firmware: Yes, virtualization is enabled on your computer and you can proceed to the next step. Otherwise, finish watching the video and complete the steps for enabling virtualization in Windows 10; this requires rebooting your computer.
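By the way, a quick way to run that check from Windows PowerShell is:

systeminfo | Select-String "Virtualization"

If the output includes Virtualization Enabled In Firmware: Yes, you are good to go.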

Let’s keep moving!

Photo by Nandhu Kumar on Unsplash

11. Download and Install Docker on Your Computer

Reference Architecture

4. Download and install docker on your computer.

Go here to download and install Docker on your computer. Click Docker Desktop for Windows. Run and install the executable file.

Reboot your computer to complete the installation.

The first time you open Docker, you need to review and accept the service agreement. If you see the WSL 2 Installation is Incomplete window, click the link underneath, then click the link below Step 4 to download the latest Linux kernel package.

Run the executable file and the Setup Wizard.

After updating the package, proceed to Step 5 and follow the instructions.

The operation completed successfully and that is all that we need to do.

Restart Docker and sign in. Lastly, verify that Docker is installed on your computer: open Windows PowerShell and run docker -v. If you get an output like this, it means Docker is successfully installed on your computer.

This is all we need to do to install docker on our computer.

12. Create Dockerfile

Reference Architecture

5. Create Dockerfile

We are going to create the Dockerfile that we will use to build the container image for the Jupiter website. A Dockerfile is a text document that contains all the commands we would otherwise run on the command line to assemble a container image.

First, open your Visual Studio Code and open your project folder. This is the docker project repository we cloned on our computer.

Create a folder in this repository. For details on how to create a folder in your repository on Visual Studio Code, visit my previous tutorial here where I explained it thoroughly.

Give the folder a name. We will use this to store the Dockerfile for the Jupiter website.

Inside this folder, we are going to create our docker file.

For details on how to create a file in your folder, visit my previous tutorial here where I explained it thoroughly.

Ensure that you name the file Dockerfile, typed exactly the same way I did.

We have created the Dockerfile. All we have to do here is to write all the commands that we will use to build our container image for the Jupiter website.

A few things I want to explain before we proceed. In my Jupiter website project, the commands that we used to install the website on our EC2 instance can be found in my Docker-Project GitHub repository. These are the same commands that we will use to create the Dockerfile that builds the container image for the Jupiter website. This is also where you can find the Dockerfile that we will use to build the container image for the Jupiter website.

Let me explain these commands.
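Since the screenshots don't reproduce here, below is a sketch of the complete Dockerfile. The GitHub URL and folder names are placeholders for your own web files; the commands mirror the ones we ran on the EC2 instance.

FROM amazonlinux:latest

# Update all packages and install Apache, wget, and unzip
RUN yum update -y && yum install -y httpd wget unzip

# Change to the Apache document root
WORKDIR /var/www/html

# Download the web files from GitHub (placeholder URL), unzip them,
# copy them into the html directory, and remove the leftovers
RUN wget https://github.com/<your-username>/jupiter/archive/refs/heads/main.zip && \
    unzip main.zip && \
    cp -r jupiter-main/. /var/www/html/ && \
    rm -rf main.zip jupiter-main

# Expose port 80 to receive web traffic
EXPOSE 80

# Start Apache in the foreground when the container starts
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]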

FROM amazonlinux:latest

To create this Dockerfile that will build the container image for the Jupiter website, the first thing we did was start with a base image. To specify a base image, type FROM (all caps) at the beginning of your Dockerfile, followed by the base image that you want to use: amazonlinux:latest. This is similar to when we launched an EC2 instance in the management console and used Amazon Linux 2.

In this Dockerfile, we are using an Amazon Linux base image. The base image comes from Docker Hub, so whatever base image you are trying to use, you will find it on Docker Hub.

Let's go to Docker Hub so I can show you. Here, you will find container images for just about every software you can think of. Below are some of the popular images that people have downloaded.

The base image we are using in our Dockerfile to build the Jupiter website is Amazon Linux. Let's type it in the search bar.

Select the search result

Blue — name of the image

Yellow — the docker command to pull the image

Red — different tags for Amazon Linux that you can reference in your Dockerfile.

Let's go back to our Visual Studio Code. In our Dockerfile, FROM amazonlinux:latest means we are using amazonlinux as our base image, and the tag that we are referencing is latest.

Once we have specified the base image, let's go to the next command.

Type RUN, then list all the commands that we want to run.

Blue — updates all the packages on our container.

Yellow — installs the Apache server

These are the same commands we ran on our EC2 instance.

Red — one limitation of a container is that the packages you need may not be installed on it, so we have to install them ourselves. In our Dockerfile, we are using wget to download the web files from GitHub, and afterwards we will unzip them. That is why we are running this command.

After installing all the dependencies, the next thing we did was change to the html directory.

Next, we used wget to download the web files from GitHub.

To explain the rest of the commands:

Blue — unzips the folder that we downloaded

Yellow — copies all the web files into the HTML directory

Red — removes the zip folder that we downloaded from GitHub plus the folder we unzipped.

Green — the port we want to expose; we are exposing port 80. You can expose any port you want.

Orange — the entry point sets the default application that will start when the container starts.

It takes the place of the commands that we ran on the EC2 instance to enable and start the Apache server:

systemctl enable httpd

systemctl start httpd

This is how you create a Dockerfile. Our takeaway: the same commands we ran on the EC2 instance to host our Jupiter website are the same commands we used to create the Dockerfile.

The only difference is we have to enter the commands in our Dockerfile in a format that Docker understands.

We have finished creating our Dockerfile. Go ahead and save your file, then push the updates to your GitHub repository.

For details on how to save the file that you are working on in Visual Studio Code, push the updates to your GitHub repository, and verify that the files are there, visit my previous tutorial here where I explained it thoroughly.

13. Build the Container Image

Reference Architecture

Previously, we created the Dockerfile. We will use this to build a container image for the Jupiter website.

Open your project folder in Visual Studio Code. Right-click the jupiter-website folder (your own folder name may be different) and select Open in Integrated Terminal.

This opens the Windows PowerShell terminal, which is equivalent to opening the PowerShell terminal on your computer. Ensure that you are in the jupiter-website directory.

Running ls shows that my Dockerfile is in this directory. You can run clear to clear your screen.

Next, to use Docker to build the container image for the Jupiter website run docker build -t jupiter .

Blue — the -t flag stands for tag

Yellow — the image name

This is the command that we will use to build the container image for the Jupiter website. When we type the command like this, Docker will tag the image as jupiter:latest, but if you want to specify a version tag for your Jupiter image, you would type it this way: docker build -t jupiter:1.0 .
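In other words, both of these are valid; run them from the directory that contains the Dockerfile:

docker build -t jupiter .        # tags the image as jupiter:latest
docker build -t jupiter:1.0 .    # pins an explicit version tag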

Docker is using Dockerfile to build our container image for the Jupiter website. Give it time to finish running all the commands.

Docker has successfully used the Dockerfile we created previously. Looking at the commands, all the steps translate to the same commands we ran on the EC2 instance to build the website.

Run docker image ls to see the image we just built.

There you have it! You can see the container image we just built including the image name, tag, image ID, time of creation, and size.

This is how we use Dockerfile to build the container image for the Jupiter website. Up next, we will use this image to start our container.

14. Start the Container

Reference Architecture

In the last step, we used the Dockerfile to build the container image for the Jupiter website. In this step, we will use the image we built to start the container.

Open your project folder in Visual Studio Code. Open the PowerShell terminal to the jupiter-website directory. Let’s rerun docker image ls to view the image that we created earlier.

To start the container, run docker run -dp 80:80 jupiter

-d stands for detach; it runs the container in the background.

-p stands for publish; it maps port 80 on your computer to port 80 in the container (the port we exposed in our Dockerfile).

jupiter is the name of our image.

We have successfully started the container.

To see our Jupiter website, let's open a browser and type http://localhost:80

There you go! The Jupiter website is running in a Docker container.

Let’s go back to Visual Studio Code. Since we have used the Dockerfile to create our container image and verified that the container image is working, let’s stop the container.

Run docker ps to see the containers we have running locally.

Blue — container ID

Run docker stop <CONTAINER ID> to stop the Jupiter container from running.

We successfully stopped the container from running.
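To recap, the full local test cycle looks like this:

docker run -dp 80:80 jupiter    # start the container in the background
docker ps                       # list running containers and note the container ID
docker stop <container-id>      # stop the container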

15. Create a Repository in Docker Hub

Reference Architecture

8. Create a repository in your Docker Hub account.

We're going to create a repository in our Docker Hub account to store the container image we created earlier. Go to your Docker Hub account, go to Repositories, then Create repository.

Give your repository a name and description. You need to select Public because we are using the free plan. Click Create.

We have successfully created the Jupiter repository in Docker Hub. We can use this (blue) command to push our image into this repository.

16. Push the Image to the Docker Hub Repository

Reference Architecture

9. Push the Image to your Docker Hub repository.

We will push the container image that we created for the Jupiter website to the repository we created in Docker Hub.

Open your project folder in Visual Studio Code. Open the PowerShell terminal to the jupiter-website folder and log in to Docker Hub by running docker login -u astra01 (the -u stands for username; put your own Docker Hub username there).

We have successfully logged in.

Let’s list the images we currently have. Run docker image ls

This is the container image for the Jupiter website

Next, we will use the docker tag command to give this Jupiter image a new name. Run docker tag jupiter astra01/jupiter

jupiter is the image name.

astra01/jupiter is your username followed by the name of the image.

We have successfully given the Docker image a new name.

If you rerun docker image ls

Blue — the new name we gave to our container image

Let’s push this image to the Docker Hub repository. Run docker push astra01/jupiter

It's now pushing the image to the repository in Docker Hub.

When it's finished, let's go to our Docker Hub account to verify that the image is there. Go to Repositories, then select the Jupiter repository.

There you go: in the Jupiter repository you will see the image we pushed to it. The image's tag is latest.

This is how you push an image that you created locally on your computer to your Docker Hub repository.
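To recap, assuming your Docker Hub username is astra01 (substitute your own), the full push sequence is:

docker login -u astra01
docker tag jupiter astra01/jupiter
docker push astra01/jupiter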

17. Install the AWS Command Line (CLI) on a Windows Computer

Reference Architecture

10. Prerequisite — install the AWS CLI, create an IAM user, and create a named profile.

For details on how to install the AWS Command Line (CLI) on a Windows computer, visit my previous tutorial here where I explained it thoroughly.

18. Create IAM User

The next thing that we will do is push the container image we created to the Elastic Container Registry (ECR) in our AWS account.

ECR is similar to Docker Hub: it is a service that allows you to store your container images in AWS. To do this, we will create an IAM user with programmatic access, then use the AWS CLI, which authenticates with AWS using the access key and secret access key, to push the container image to ECR. Afterwards, we will use this image to run Fargate containers.

For details on how to create an IAM user with programmatic access, visit my previous tutorial here where I explained it thoroughly.

19. Create a Profile

We will configure the user’s access key ID and secret access key on our computer. Configuring the user’s credentials on our computer will allow us to authenticate with our AWS account programmatically.

Open the command prompt and run aws configure, then enter your access key ID and secret access key. The default region will be us-east-1; just press Enter for the default output format.
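The prompt sequence looks like this (the values shown are placeholders):

aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: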

To locate the user’s credentials on your computer, open file explorer. Go to (C:) drive > Users > Admin (may be different on your computer) > .aws. Check the path (blue).

Anytime you want to update your user's credentials, you can either run aws configure in the command prompt and enter the access key ID and secret access key, or go straight to the path (blue) where your user's credentials are stored and change and save them there.

This is how you configure an IAM user’s credentials on your computer to authenticate with your AWS account.

20. Create an Amazon ECR Repository to Store Your Image

Reference Architecture

11. Create an Amazon ECR Repository to store your image.

Before we can push the container image we created to AWS, we have to create a repository in ECR just like we did with Docker Hub. To do this, we will use the AWS Command Line.

Let me show you how to find a CLI command on Google to create a repository in ECR. Open your browser and search for aws cli create ecr repository. This brings you to the CLI documentation for creating a repository in ECR. Scroll all the way down to Examples, copy the command, and paste it into your notepad.

You can type the command this way

or this way

Now, update the name of your repository (blue) and region (yellow). us-east-1 is the region that we are using for this project.

This is the CLI command that we will use to create a repository in ECR. Run this command in the command prompt.
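A minimal version of the command, assuming the repository is named jupiter and the region is us-east-1, looks like this:

aws ecr create-repository --repository-name jupiter --region us-east-1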

We have successfully used AWS CLI to create a repository in ECR to store our container image.

Before closing the command prompt, copy this (blue) output and paste it into your notepad. Please save this file because we will need some of this information in the next steps.

Next, let’s go to our AWS management console and verify that our repository is in the ECR.

In the Elastic Container Registry, under Repositories (blue), you will see the Jupiter repository (yellow) that we created using the AWS CLI.

This is how you create a repository in ECR.

We can do this!

Photo by Fab Lentz on Unsplash

21. Push the Image to your ECR Repository

Reference Architecture

12. Push the Image to your ECR repository.

Now that we have created our repository in the ECR the next thing we will do is push our container image to the ECR repository.

Open your project folder in Visual Studio Code and open your PowerShell terminal to the jupiter-website folder. Run docker image ls to look at the image we have in this repository.

These are the images we currently have. The first image (blue) is the one we renamed and pushed to Docker Hub, while the second one below is the image we originally created.

To push our image to our ECR repository, we need to tag the image. Run docker tag followed by your image name and the URI of the repository we created.

The URI is in the output we saved from the CLI command and pasted into our notepad. If you weren't able to save it, you can retrieve it (blue) from the ECR console in your AWS account.
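Assuming the repository is named jupiter in us-east-1, the command looks like this (substitute your own AWS account ID):

docker tag jupiter <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/jupiter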

After tagging the image, let’s check the Docker images again. Run docker image ls

Under your output, you will see the image that we tagged here (blue)

Before we can push the image to ECR, similar to what we did with Docker Hub, we first have to sign in to ECR. This is the command that we need to run.

aws ecr get-login-password | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com

Before running this command, update your AWS account ID (blue) and region (yellow). You can copy your account ID here (blue).

We have successfully logged into ECR.

Now we can push our container image to the repository we created in the ECR. To do this, run docker push <uri of your repository>
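Continuing the example above, that would be:

docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/jupiter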

It is now pushing the container image to the ECR repository. Give it some time to complete.

We’ve successfully pushed the container image to the ECR repository.

In our AWS account, we verified that the container image is in the ECR repository and the tag for that image is latest.

This is how we push the container image we created locally on our computer to the Amazon Elastic Container Registry (ECR). We will use this image to create Elastic Container Service (ECS) containers that will deploy our Jupiter website.

22. Build a Three-Tier AWS Network VPC from Scratch

Reference Architecture
  1. VPC with public and private subnets in 2 availability zones
  2. An internet gateway is used to allow communication between instances in VPC and the internet
  3. We are using 2 availability zones for high availability and fault tolerance
  4. Resources such as NAT gateway, bastion host, and application load balancer use public subnets.
  5. We will put the web servers and database servers in the private subnets to protect them
  6. The public route table is associated with the public subnets and routes traffic to the internet through the internet gateway
  7. The main route table is associated with the private subnets

For details on how to build a three-tier AWS network VPC from scratch, visit my previous project here where I explained it thoroughly.

23. Create NAT Gateways

Reference Architecture
  1. The NAT Gateway allows the instances in the private app subnets and private data subnets to access the internet
  2. The Private Route Table is associated with the private subnets and routes traffic to the internet through the NAT gateway.

For details on how to create NAT Gateways, visit my previous project here where I explained it thoroughly.

24. Create Security Groups

Reference Architecture

We will create the security groups we need for this project. The following are the security groups that we will create.

ALB Security Group

  • Opens ports 80 and 443 with source 0.0.0.0/0
  • This is the security group that we will add to the application load balancer

Container Security Group

  • Opens ports 80 and 443 with the ALB Security Group as the source

For details on how to create Security Groups, visit my previous project here where I explained it thoroughly.
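If you prefer the command line, here is a sketch of the equivalent AWS CLI calls; the VPC ID and security group IDs are placeholders:

# ALB security group: allow HTTP and HTTPS from anywhere
aws ec2 create-security-group --group-name ALB-SG --description "ALB security group" --vpc-id <vpc-id>
aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0

# Container security group: allow HTTP and HTTPS only from the ALB security group
aws ec2 create-security-group --group-name Container-SG --description "Container security group" --vpc-id <vpc-id>
aws ec2 authorize-security-group-ingress --group-id <container-sg-id> --protocol tcp --port 80 --source-group <alb-sg-id>
aws ec2 authorize-security-group-ingress --group-id <container-sg-id> --protocol tcp --port 443 --source-group <alb-sg-id>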

25. Create Application Load Balancer

Reference Architecture

16. Application Load Balancer is used to distribute web traffic to the containers

We will create an ALB that we will use to route traffic to the Fargate containers in the private app subnet. The following steps are similar to my previous project here.

We will delete this Target Group after creating our ALB. When we create the ECS service, we will create the Target Group that the ALB routes traffic to, so for now let's create a placeholder Target Group and delete it later on.

Go to Target Group in your AWS account. Create the Target Group using these settings:

Target type: Instances

Target group name: Dev-TG

Protocol: HTTP

Port: 80

VPC: Dev-VPC

We don't have any targets to register for now, so click Create target group. Here's what we have so far.

Go to Load Balancers. We will create an Application Load Balancer, so click Create load balancer and enter these settings. For any settings not mentioned, leave them at their defaults.

Load balancer name: Dev-ALB

VPC: Dev-VPC

Mappings

us-east-1a

  • Public Subnet AZ1

us-east-1b

  • Public Subnet AZ2

Security Groups: ALB Security Group

Listeners and routing

Protocol: HTTP

Port: 80

Default action: Dev-TG

We have successfully created our ALB and the state is active.

We will use this ALB to route traffic to the containers that we will create in the private app subnet. We will create a target group that the ALB will route traffic to when we create the ECS service that will create the container. For now, we don’t need the target group we created for this ALB, so let’s delete it.

To delete the target group, first we need to delete the listener we added the target group to.

Next, delete the target group.

We have successfully deleted the target group. For now, the only thing you should have is the Application Load Balancer without a listener.

Once we create the ECS service, we will create a new listener and target group. This is how we create the Application Load Balancer that we will use to route traffic to the containers in the private app subnets.

26. Create an ECS Cluster

Reference Architecture

17. Create an Elastic Container Cluster

Now that we've pushed our container image to the ECR repository, we are ready to run our ECS Fargate containers. Let's create an ECS cluster. In your AWS account, go to the Amazon Elastic Container Service console and click Create cluster.

Give your cluster a name. Select our Dev-VPC, and we will run our tasks in Private App Subnet AZ1 and Private App Subnet AZ2. Leave the rest of the options at their defaults, then click Create.

We have successfully created the ECS cluster.
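For reference, the CLI equivalent (assuming the cluster name jupiter-cluster used later in this project) is:

aws ecs create-cluster --cluster-name jupiter-cluster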

At the moment, our cluster is empty. Up next, we will create our Task Definition, so please do not close the ECS console yet as we will still need it.

27. Create a Task Definition

Reference Architecture

18. Create a Task Definition

Now that we have created the ECS cluster, the next thing that we need to do is create the Task Definition. Go to Task definitions and click Create new task definition.

Provide a Task definition family name and Container name. Provide your Image URI (you can get this from your ECR repository).

The Container port is 80, Protocol is TCP, and we will leave the rest of the options to default. Click Next.

Ensure that AWS Fargate is selected. The Operating system/Architecture is Linux/X86_64. CPU is .25 vCPU while Memory is .5 GB. We will leave the rest to their default settings and hit Next.

Let’s Review and create

We have successfully created the Task Definition, and it has all the details inside.
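For reference, here is a minimal sketch of the equivalent task definition in JSON; the family name, container name, account ID, and execution role are assumptions, so adjust them to your own setup:

{
  "family": "jupiter-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<aws_account_id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "jupiter",
      "image": "<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/jupiter:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}

You could then register it with aws ecs register-task-definition --cli-input-json file://task-definition.json.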

28. Create Service

Reference Architecture

19. Create a Service

We will deploy a service to our ECS cluster. An ECS service is how we start our containers. To create the ECS service, go to the ECS console, then Clusters. Select the jupiter-cluster we created previously.

Go to Services tab and click Create. It has already selected our cluster.

Under Task definition and Family, select jupiter-service. Provide a Service name. The Desired tasks is 2, meaning we want two containers running. For the options that were not mentioned, leave them at their default settings.

Under Load balancing and Load balancer type, select Application Load Balancer. Choose Use an existing load balancer, then select our Dev-ALB under Load balancer. For the options that were not mentioned, leave them at their default settings.

If you recall, when we created the ALB we deleted the listener and the target group. This is where our Service will create a new listener and target group.

Provide a name for the Target group. For the options that were not mentioned let’s leave them to their default settings.

Next is Networking. The Dev-VPC is already pre-selected. Ensure Private App Subnet AZ1 and Private App Subnet AZ2 are selected under Subnets. We will use the Container SG for the Security group, and don't forget to disable the Public IP: the container is going to be in the private subnet, and we don't want it to have a public IP address.

Click Create.
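For reference, the CLI equivalent would look something like this; the subnet IDs, security group ID, and target group ARN are placeholders, and bash-style line continuations are shown:

aws ecs create-service \
  --cluster jupiter-cluster \
  --service-name jupiter-service \
  --task-definition jupiter-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<private-subnet-az1>,<private-subnet-az2>],securityGroups=[<container-sg-id>],assignPublicIp=DISABLED}" \
  --load-balancers targetGroupArn=<target-group-arn>,containerName=jupiter,containerPort=80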

It is now deploying the service for the Jupiter cluster. When this completes, we will use the DNS name of our application load balancer to access our website.

The service has been successfully deployed.

You can see the service and 2 tasks running.

You can see the 2 tasks running under the Tasks tab.

Now that our service is running successfully, we should be able to access our website using the DNS name of the ALB.

Copy the DNS name of your load balancer. Paste it in the address bar and hit Enter.

There you have it! We can now access our website, and it's running in an ECS Fargate container.

This is how you host a website in an ECS Fargate container. In the next step, we will register a domain name so that we can access our website using that domain name instead of the DNS name of the Application Load Balancer.

29. Register a New Domain Name in Route 53

We will register a new domain name in Route 53. This will allow our end users to access our website using that domain name instead of the DNS name of our ALB.

For details on how to register a new domain name in Route 53, visit my tutorial here where I explained it thoroughly.

30. Create a Record Set in Route 53

Reference Architecture

20. We are using Route 53 to register our domain name and create a record set.

We will create a record set in Route 53 and point our domain name to our application load balancer.

For details on how to create a record set in Route 53, visit my tutorial here where I explained it thoroughly.

We are almost there!

Photo by Juan Goyache on Unsplash

31. Register for an SSL Certificate in AWS Certificate Manager.

We will register for a free SSL certificate from AWS Certificate Manager. We will use this to encrypt all communication between the web browser and our web servers.

For details on how to register for an SSL certificate in AWS certificate manager, visit my tutorial here where I explained it thoroughly.

32. Create an HTTPS (SSL) Listener for an Application Load Balancer

We will use the SSL certificate to secure all communication to our website. To do this, go to the Load Balancers page, click the Listeners tab, then Add listener.

Change the protocol to HTTPS. The default action is Forward.

Select your target group.

Under Default SSL/TLS certificate beside From ACM select your SSL certificate. Click Add.

There you go: you can see that our HTTPS listener (blue) is forwarding traffic to our target group and using the certificate from Certificate Manager.

One more thing, for our HTTP listener (yellow) we need to change the default action to redirect traffic to the HTTPS listener.

Select HTTP:80 > Actions > Edit listener.

Remove the Forward to Default actions. Select Redirect.

The Protocol is HTTPS and the Port is 443. There you have it: under our HTTP listener, we are redirecting traffic to HTTPS.
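If you prefer the CLI, here is a sketch of the two listener operations; all ARNs are placeholders:

# Add an HTTPS listener that forwards to the target group using the ACM certificate
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Change the HTTP listener's default action to redirect to HTTPS
aws elbv2 modify-listener --listener-arn <http-listener-arn> \
  --default-actions "Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}"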

Let's verify that our website is secure. Type https:// followed by your domain name.

There you go, communication with our website is now secure. You can see the lock icon. This is how you use an SSL certificate to secure the communication to your website.

33. Clean Up

We have built all the resources in our reference architecture and completed this project. Go ahead and delete all these resources, including the CloudWatch log groups, so we don't incur further costs.

Photo by Museums Victoria on Unsplash

Thank you for following along, and congratulations on completing this project!

This is how you deploy a static web app on AWS with Docker, Amazon ECR, and Amazon ECS. Let me know if you have any questions, and I look forward to seeing you in my next project.

Build real-world projects with me here! Show your employers that you are the right person for the job and stand out from the crowd!

Connect with me on LinkedIn


Eugene Miguel

Cloud DevOps Engineer • AWS Certified Solutions Architect