AWS Unveiled: A Step-by-Step Guide to Containerizing and Deploying an Application

Filipe Pacheco
7 min read · Feb 3, 2024


Hello Medium readers, today I’ll delve into the evolution of my previous post, exploring the realm of best practices in the DevOps space. In the ongoing series about the HumanGov application, compliance with business rules necessitates deploying the same application in isolation for each of the 50 states in the US.

Naturally, there are both better and less optimal approaches to achieving this goal. Let’s discuss a more recent and highly effective method. My task centred around modernizing, containerizing, and deploying the HumanGov application onto the AWS Cloud environment.

A little context …

The concept in today’s project aligns with what I discussed in the previous two posts (referred to as one and two), hence I’ll keep certain details concealed. In those two posts, both the application and infrastructure depended on the same AWS service, namely EC2 instances. To put it differently, the application ran on virtual machines (VMs). The image below illustrates the architecture implemented in the preceding two posts.

Infrastructure and Application Architecture on EC2.

This approach offers several advantages, making it much easier to deploy the application compared to the container-based approach I’ll describe today. I previously discussed containers here. However, it’s crucial to emphasize that maintaining an application running on EC2 involves significantly more effort in terms of maintenance, patching, and AWS configuration to ensure continuous service, irrespective of any issues.

This is a primary reason why many web applications naturally transition from virtual machines to containers, though I believe the main driver is financial. When you compare the cost of maintaining a single VM on AWS EC2 with running the same application on ECS (Elastic Container Service), the difference becomes apparent. This financial aspect, I believe, strongly advocates for such migrations.

Below, you’ll find the updated architecture, encompassing both infrastructure and application. Here, three new services are utilized: Application Load Balancer, ECR (Elastic Container Registry), and ECS. I’ll delve into these in more detail in the following sections.

Infrastructure and Application Architecture on Containers.

Task of the day

After this contextual introduction, let’s dive into the task. As mentioned earlier, the same application must be deployed, isolated by state, across all 50 US states. Each instance of the application communicates with two AWS services for storing information. The first is DynamoDB, used for storing unstructured data with each user’s personal information. The second is an S3 bucket, designated to receive the PDF of each user’s personal document.
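As a sketch of how such an application might organize its data in those two services, the snippet below builds a DynamoDB item in the low-level attribute-value format the AWS API expects, and composes an S3 object key that keeps each state’s documents under its own prefix. All names here (state, user ID, file name) are hypothetical examples, not taken from the actual project.

```python
# Sketch of the per-state storage layout such an application might use.
# All names (state code, user id, file name) are hypothetical examples.

def dynamodb_item(state, user_id, name):
    """Build a DynamoDB item in the low-level attribute-value format
    the AWS API expects (each value wrapped in a type key, e.g. "S")."""
    return {
        "id": {"S": user_id},
        "state": {"S": state},
        "name": {"S": name},
    }

def s3_document_key(state, user_id, filename):
    """Compose an S3 object key that isolates each state's PDFs
    under its own prefix."""
    return f"{state}/{user_id}/{filename}"

item = dynamodb_item("california", "u-001", "Jane Doe")
key = s3_document_key("california", "u-001", "id-card.pdf")
```

In a real deployment these structures would be passed to the DynamoDB `PutItem` and S3 `PutObject` APIs, for example via boto3.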

Services used

Most of the services utilized have already been covered in the last two posts, so I won’t repeat myself. However, it’s essential to delve into the details of three new services and technologies:

  • Docker: This technology facilitates the deployment of applications by isolating different components, which helps in isolating problems and avoiding compatibility issues during deployment.
  • AWS ECS (Elastic Container Service): This is a proprietary container management service from AWS. In ECS, you can use your Docker image, which is then managed by AWS, ensuring high availability across different Availability Zones (AZ).
  • AWS ECR (Elastic Container Registry): This service serves as a repository for Container Images. If you are deploying an application with containers in AWS, it’s recommended to use ECR to save time on authentication and downloading images from outside AWS.
Services used in this implementation.

Implementation

In the last two posts, I covered how to launch the S3 bucket and DynamoDB table using Terraform, so I won’t revisit those today. However, there is room to delve deeper into the other services.

Elastic Container Registry

The initial step I took was to create two repositories to receive the Docker images built by the developer. As depicted in the image below, one repository is designated for the application itself, the Flask app, and the other for the front end, NGINX. Adhering to best practices in the DevOps playbook, it’s always beneficial to decouple your application, meaning separating the front end from the back end and storage.

ECR Repository view.

As I outlined in this post about Docker, the image above shows an option called “View push commands.” There, AWS provides all the commands you need to authenticate, build your image, tag it, and push it to your repository, making your life easier, believe me.
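The tag you apply before pushing follows a fixed ECR URI format. The helper below composes it; the account ID and repository names are hypothetical placeholders, not the project’s actual values.

```python
# Sketch of how a fully qualified ECR image URI is composed.
# Account ID, region and repository names are hypothetical placeholders.

def ecr_image_uri(account_id, region, repository, tag="latest"):
    """Return the image URI that `docker tag` / `docker push` (and later
    the ECS task definition) expect for a private ECR repository."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

flask_image = ecr_image_uri("123456789012", "us-east-1", "humangov-flask-app")
nginx_image = ecr_image_uri("123456789012", "us-east-1", "humangov-nginx")
```

The “View push commands” panel generates exactly this URI for you, which is why it saves so much time.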

Elastic Container Service

The next step was to create the cluster to run the containerized applications, and it’s quite straightforward to do so. AWS offers three launch options: Fargate, which I opted for; EC2 instances; and external instances.

  • AWS Fargate: A fully managed, serverless option, so you don’t need to worry about defining the instances that run your application; AWS takes care of it all. DISCLAIMER: This service is not eligible for the Free Tier.
  • AWS EC2 instances: This option provides more customization and choice because you can precisely select the type of instance you need to use. However, it comes with more configuration complexity and is a bit more challenging to deploy.
  • External instances: You have the flexibility to run tasks on instances outside AWS, such as on-premises servers or instances in other cloud providers like Azure, GCP, or OCI.
ECS Cluster creation.
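For reference, the Fargate choice above boils down to a small request payload when creating the cluster programmatically. This is a minimal sketch of what an ECS `CreateCluster` call would take; the cluster name is a hypothetical example.

```python
# Minimal sketch of an ECS CreateCluster request body when opting for
# Fargate. The cluster name is a hypothetical example.

create_cluster_request = {
    "clusterName": "humangov-cluster",
    # With Fargate capacity providers, AWS picks and manages the
    # underlying instances for you.
    "capacityProviders": ["FARGATE", "FARGATE_SPOT"],
    "defaultCapacityProviderStrategy": [
        {"capacityProvider": "FARGATE", "weight": 1}
    ],
}
```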

Elastic Container Service Task Definition

After creating the ECS Cluster, the next step is to set up the task definition, which essentially describes the container itself. In the image below, you can observe the configurations I used. I’d like to draw attention to the Task Role: it is crucial to navigate to the IAM section and create a role granting ECS permission to connect to other AWS services. The policies I attached to the role are as follows, and their functions are self-explanatory:

  • AmazonS3FullAccess
  • AmazonDynamoDBFullAccess
  • AmazonECSTaskExecutionRolePolicy
ECS Task creation initial configuration.
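For the role to work, it also needs a trust policy allowing ECS tasks to assume it; the managed policies listed above are then attached on top. The sketch below shows the standard trust policy document for the `ecs-tasks.amazonaws.com` service principal.

```python
import json

# Sketch of the IAM trust policy that lets ECS tasks assume the role.
# The managed policies listed above are attached to the role separately.

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # ECS tasks are the only principal allowed to assume this role.
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# IAM expects the policy document as a JSON string.
trust_policy_json = json.dumps(trust_policy)
```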

The next stage in the task definition is setting up the container itself. As depicted in the image below, I selected the URI of the image I pushed to ECR. At this point, it’s crucial to map the port on which your application listens inside the container.

ECS Task container configuration.

In the final part of the task configuration, I passed some special settings to serve as environment variables for the application running inside the container, as illustrated in the image below.

ECS Task Environment variables' configuration.
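Put together, the image URI, port mapping, and environment variables from the steps above form the `containerDefinitions` fragment of the task definition. The sketch below illustrates its shape; the image URI, port, table name, and bucket name are hypothetical placeholders, not the project’s actual values.

```python
# Sketch of the containerDefinitions fragment of an ECS task definition,
# combining image URI, port mapping and environment variables.
# All names and values are hypothetical placeholders.

container_definition = {
    "name": "humangov-flask-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/humangov-flask-app:latest",
    "portMappings": [
        # containerPort is where the app listens inside the container.
        {"containerPort": 8000, "protocol": "tcp"}
    ],
    "environment": [
        {"name": "AWS_REGION", "value": "us-east-1"},
        {"name": "DYNAMODB_TABLE", "value": "humangov-california-dynamodb"},
        {"name": "S3_BUCKET", "value": "humangov-california-s3"},
    ],
}
```

Deploying one state per task definition, with the table and bucket names injected as environment variables like this, is what keeps each state’s instance isolated.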

The last two configurations of the task are related to networking. Ensure that your Security Group has an inbound rule allowing traffic on the port your application expects to receive requests on. The next step is configuring the Application Load Balancer: since the application is supposed to be highly available, something must distribute incoming requests across the replicas. The configuration I used is shown in the image below.

ECS Task Load Balancer configuration.
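The Security Group rule mentioned above has a simple shape in the EC2 API. This is a sketch of a single ingress rule; the port and CIDR range are hypothetical examples, and in practice you would usually restrict the source to the load balancer’s own Security Group rather than the open internet.

```python
# Sketch of a Security Group inbound (ingress) rule allowing the
# Application Load Balancer to reach the container port.
# Port and CIDR are hypothetical examples.

ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 8000,  # must match the container port the app listens on
    "ToPort": 8000,
    "IpRanges": [
        {"CidrIp": "0.0.0.0/0", "Description": "inbound app traffic"}
    ],
}
```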

Elastic Container Service — Service Definition

The final stage involves the configuration of the service. In this context, the service represents the last configuration you undertake to deploy the application. Here, you can choose the deployment strategy and launch type, referencing the cluster you created earlier.

ECS Service Environment configuration.

In the subsequent service configuration, you can specify the revision of your task definition as well as the number of replicas of your container to keep alive. This represents the level of redundancy for the containers, as depicted in the image below.

ECS Service Deployment configuration.
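The key fields of that service configuration map directly onto an ECS `CreateService` request. The sketch below shows the shape; the cluster, service, and task definition names, as well as the replica count and deployment percentages, are hypothetical examples.

```python
# Sketch of the key fields of an ECS CreateService request: which task
# definition revision to run and how many replicas to keep alive.
# All names and counts are hypothetical examples.

create_service_request = {
    "cluster": "humangov-cluster",
    "serviceName": "humangov-service",
    "taskDefinition": "humangov-task:1",  # family:revision
    "desiredCount": 2,                    # replicas ECS keeps running
    "launchType": "FARGATE",
    "deploymentConfiguration": {
        # keep at least half the replicas healthy during a rolling deploy,
        # allowing up to double the desired count while new tasks start
        "minimumHealthyPercent": 50,
        "maximumPercent": 200,
    },
}
```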

Once all of this is set up, you can click Create, and the service will be deployed. Now it’s just a matter of accessing the application and experimenting.

Conclusion

In this project, I explored the intricacies of cloud architecture, focusing on modernizing and containerizing the HumanGov application on AWS, from IAM role setup to Docker containerization and Terraform’s Infrastructure as Code for provisioning AWS resources. The deployment of AWS S3, DynamoDB, and Elastic Container Service (ECS) yielded a fully functional, scalable, and resilient HumanGov application. This experience deepened my understanding of cloud deployment, emphasizing the significance of IAM roles, Docker, AWS ECR, Terraform, and ECS service deployment.

I hope you like it ;) .


Filipe Pacheco

Senior Data Scientist | AI, ML & LLM Developer | MLOps | Databricks & AWS Practitioner