Breaking the Monolith using Docker, .NET Core, Nginx, Amazon ECS and AWS Fargate

Nuwan Wijewardane
7 min read · Jun 14, 2019


Transforming monolithic software applications into a microservices-based architecture is in high demand nowadays. Many organisations have already started this epic journey and are gradually migrating their applications. Monolithic applications become too large to deal with at some point, so it is worth rethinking the architecture even though there are some noticeable challenges involved.

In this article I will walk through the steps to host a .NET Core microservices application behind an Nginx reverse proxy, using Docker, on Amazon ECS with AWS Fargate. There are several steps involved, but nothing complicated, and the detailed AWS documentation makes them easy to follow.

What we need:

01. Visual Studio Code
02. Docker installation
03. AWS CLI

Step 1 — Create a .NET Core Web API from the command line and publish it

mkdir customer
cd customer
dotnet new webapi
dotnet restore
dotnet build
dotnet publish -c Release

Step 2 — Add Docker support to the customer API

Add a Dockerfile under the customer directory.

Add the following to your Dockerfile (refer to the Docker docs for more info):

FROM mcr.microsoft.com/dotnet/core/aspnet:2.1-stretch-slim AS base
WORKDIR /customer
COPY bin/Release/netcoreapp2.1/publish .
ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "customer.dll"]

Step 3 — Create a reverse proxy using nginx and add docker support

A reverse proxy is a server that sits between internal applications and external clients, forwarding client requests to the appropriate server. While many applications can serve traffic on their own, NGINX adds advanced load-balancing, security, and acceleration features that most of them lack.

cd ..
mkdir reverseproxy

To add Docker support to the reverse proxy, put the following in its Dockerfile:

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

Configure the reverse proxy so that it listens on port 70 and routes the traffic to the internal port 5000. Create an nginx.conf file under reverseproxy:

worker_processes 4;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream app_servers {
        server customer:5000;
    }

    server {
        listen 70;

        location / {
            proxy_pass         http://app_servers;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
        }
    }
}

Step 4 — Create a docker-compose file

Docker Compose is a tool for defining and running multi-container apps. Since we have a multi-container app (the customer container and the reverseproxy container), Docker Compose lets us start all the services with a single command.

version: '2'

services:
  customer:
    build:
      context: ./customer
      dockerfile: Dockerfile
    expose:
      - "5000"

  reverseproxy:
    build:
      context: ./reverseproxy
      dockerfile: Dockerfile
    ports:
      - "70:70"
    links:
      - customer

Step 5 — Build the images and run them locally

docker-compose build
docker-compose up

Now all set :) The application can be accessed through the reverse proxy at http://localhost:70/api/values

Step 6 — Push the containers to Amazon ECR

Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS).

The commands below require the AWS CLI, so make sure it is already installed. First run aws configure to sign in to AWS with your access keys:

aws configure
aws ecr get-login --no-include-email --region us-east-2

Once you receive the docker login command, paste it into the console to log in to Docker.

Once the Docker login is successful, all is set to push the images. Now tag and push (X****X is your AWS account number).

You can list all the images with docker images before tagging.

Step 7 — Create the relevant repositories in Amazon ECR

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.
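Assuming the AWS CLI is configured, the repositories can also be created from the command line instead of the console. The sketch below only prints the commands so you can review them before running; the repository names match the images built by docker-compose, and the region is the one used throughout this article.

```shell
# Sketch: print the ECR repository-creation commands for review before running.
REGION="us-east-2"
CMDS=""
for REPO in customer reverseproxy; do
  CMD="aws ecr create-repository --repository-name ${REPO} --region ${REGION}"
  echo "$CMD"
  CMDS="${CMDS}${CMD} "
done
```

Remove the echo indirection (run the commands directly) once the names and region look right for your account.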

Step 8 — Tag and Push

a) Tag the customer image using the command below:

docker tag servicehub_customer:latest X****X.dkr.ecr.us-east-2.amazonaws.com/customer:latest

b) Push the customer image:

docker push X****X.dkr.ecr.us-east-2.amazonaws.com/customer:latest

c) Tag the reverseproxy image using the command below:

docker tag servicehub_reverseproxy:latest X****X.dkr.ecr.us-east-2.amazonaws.com/reverseproxy:latest

d) Push the reverseproxy image:

docker push X****X.dkr.ecr.us-east-2.amazonaws.com/reverseproxy:latest
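The four tag/push commands follow one pattern (the ECR registry host is always <account-id>.dkr.ecr.<region>.amazonaws.com), so they can be generated from a couple of variables. This sketch prints the commands rather than running them; the account ID is a placeholder for your own 12-digit account number.

```shell
# Sketch: derive the ECR registry host and print the tag/push commands.
# ACCOUNT_ID is a placeholder -- substitute your 12-digit AWS account number.
ACCOUNT_ID="123456789012"
REGION="us-east-2"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

for IMAGE in customer reverseproxy; do
  echo "docker tag servicehub_${IMAGE}:latest ${REGISTRY}/${IMAGE}:latest"
  echo "docker push ${REGISTRY}/${IMAGE}:latest"
done
```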

Well, most of the hard work is now done. Next we will log in to the AWS console and make sure our containers are grouped, load balanced, and linked properly.

Step 9 — Create an Amazon ECS cluster and select the "Networking only" template

An Amazon ECS cluster is a regional grouping of one or more container instances on which you can run task requests. Each account receives a default cluster the first time you use the Amazon ECS service. Clusters may contain more than one Amazon EC2 instance type.

Once the cluster has been created successfully, it will also create a CloudFormation stack.

A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks.

Step 10 — Choose an Application Load Balancer

Application Load Balancers provide advanced routing and visibility features targeted at application architectures, including microservices and containers.

Step 11 — Create a new task definition

Task definitions specify the container information for your application, such as how many containers are part of your task, what resources they will use, how they are linked together, and which host ports they will use.

Make sure not to use any special characters for the container names.

A task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the following:

— The Docker image to use with each container in your task
— How much CPU and memory to use
— The launch type to use
— The Docker networking mode
— The logging configuration to use for your tasks
— Whether the task should continue to run if the container finishes or fails
— The command the container should run when it is started
— Any data volumes that should be used with the containers in the task
— The IAM role that your tasks should use
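A minimal Fargate-style task definition covering several of those parameters might look like the sketch below. All names, ARNs, and sizes are illustrative placeholders, not values from this deployment:

```json
{
  "family": "servicehub",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "customer",
      "image": "123456789012.dkr.ecr.us-east-2.amazonaws.com/customer:latest",
      "portMappings": [{ "containerPort": 5000, "protocol": "tcp" }],
      "essential": true
    },
    {
      "name": "reverseproxy",
      "image": "123456789012.dkr.ecr.us-east-2.amazonaws.com/reverseproxy:latest",
      "portMappings": [{ "containerPort": 70, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Note that with the awsvpc network mode, containers in the same task share a network namespace, so the Nginx upstream would reach the API at localhost:5000 rather than through the customer hostname used with Compose links locally.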

Amazon ECS allows you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. This is called a service. If any of your tasks should fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired count of tasks in the service depending on the scheduling strategy used.
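Assuming the cluster, task definition, and load balancer from the previous steps, the service can also be created from the CLI. This sketch only prints the command for review; the cluster, service, and task-definition names are placeholders.

```shell
# Sketch: print an `aws ecs create-service` command for review.
# Cluster, service, and task-definition names are placeholders.
CLUSTER="servicehub-cluster"
SERVICE="servicehub-service"
TASK_DEF="servicehub"
CMD="aws ecs create-service --cluster ${CLUSTER} --service-name ${SERVICE} --task-definition ${TASK_DEF} --desired-count 2 --launch-type FARGATE"
echo "$CMD"
```

A real invocation also needs a --network-configuration with your subnets and security groups, and a --load-balancers mapping to the Application Load Balancer created earlier.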

Yes, we did it!

Containers are up and running with the reverse proxy.

You can configure CloudWatch now.

Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers.
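To ship container logs to CloudWatch, the awslogs log driver can be added to each container definition in the task definition. The log group name below is a placeholder:

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/servicehub",
    "awslogs-region": "us-east-2",
    "awslogs-stream-prefix": "ecs"
  }
}
```

The log group must exist (or be created) before tasks start writing to it.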
