Microservices and AWS App Mesh

Monolith to Microservices

This is part 2 of the Microservices with ECS series. In part 1, we covered how to implement a microservice architecture using Amazon ECS with its service discovery feature and a rolling update deployment strategy. In this tutorial, we are going to see how we can use AWS App Mesh to block/allow service-to-service communication and do a canary or blue/green deployment without any impact to the services. Please read Part 1 first for better continuity.

Quick Intro to AWS App Mesh

AWS App Mesh is a service mesh that makes it easy to monitor and control services. App Mesh standardizes how your services communicate, giving you end-to-end visibility and consistent network traffic controls for every service in an application, and helping to ensure high availability.

AWS App Mesh has four main components.

Service Mesh → A service mesh is a logical boundary for network traffic between the services that reside within it.

Virtual Services → A virtual service is an abstraction of your actual service, provided by either a virtual node or a virtual router with routes.

Virtual Nodes → A virtual node is a logical pointer to your actual discoverable service. A virtual service must be attached to either a virtual node or a virtual router.

Virtual Routers and Routes → Virtual routers handle traffic for one or more virtual services within your mesh. A route is associated with a virtual router; it is used to match requests for the virtual router and to distribute traffic to its associated virtual nodes.

What you will learn from this blog

By the end of this tutorial, you will know how to deploy and manage your microservices in AWS ECS, model and manage them with AWS App Mesh, and do a canary or blue/green rollout safely on your microservice infrastructure.

Technologies used

  1. GitHub.
  2. AWS ECS (with service discovery).
  3. AWS Fargate.
  4. AWS Auto Scaling.
  5. AWS ALB.
  6. Cloud Map and Route 53.
  7. AWS App Mesh.

Sample Application Architecture

My sample application named bookingapp has 4 microservices.

bookingapp-home → Home page of the website

bookingapp-movie → Movie booking page

bookingapp-moviev2 → Movie booking page release version 2.0

bookingapp-redis → Backend Redis server.

Initial Architecture
Canary Deployment
AWS App Mesh with Canary

The application is similar to the one described in Part 1. I just added a few lines of extra code to get the response from the bookingapp-movie microservice. Refer to my GitHub repo for bookingapp-home for the Python code I used.

From Part 1, I have my ECS cluster ready with 3 tasks: bookingapp-home, bookingapp-movie and bookingapp-redis. All 3 tasks have service discovery configured and resolve to their endpoints properly. Let's assume that our application is working fine and we want to roll out new code changes only to the bookingapp-movie microservice. We could roll out the changes using the rolling update strategy, but if we hit any issue in the new code, all the traffic would be impacted. To roll out new changes safely, we can use the canary model: route 75% of the traffic to the old bookingapp-movie service and 25% to the new bookingapp-moviev2 service. If we don't observe any issues, send 50% to the new bookingapp-moviev2 service, and eventually send all traffic to the new service. With this method, by changing a simple weight parameter we can safely roll out new code changes without any impact.
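To make the split concrete, this is roughly the shape of the App Mesh route spec we will end up with (a preview sketch; the virtual node names are created later in this post, and the 75/25 weights are the starting split described above):

```python
# Preview of the weighted HTTP route we will create in App Mesh later in this post.
# The virtual node names below are the ones defined in the App Mesh section.
canary_route_spec = {
    "httpRoute": {
        "match": {"prefix": "/movie"},
        "action": {
            "weightedTargets": [
                {"virtualNode": "movie-virtual-node", "weight": 75},
                {"virtualNode": "moviev2-virtual-node", "weight": 25},
            ]
        },
    }
}
```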

Create new service in AWS ECS

I have cloned my GitHub repo bookingapp-moviev2, created a new Docker image, and pushed it to Docker Hub.

I am going to create a new task called bookingapp-moviev2 using the new Docker image, bring up a service moviev2, and add it to the ALB.

Add the container bookingapp-moviev2:latest and create the task definition.
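If you prefer scripting this step over the console, the equivalent call with boto3 looks roughly like this (a sketch; the Docker Hub image name, CPU/memory sizes, execution role ARN, and the container port of 5000 are placeholders based on my setup):

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition for the new moviev2 container.
# Image name, sizes, and role ARN are illustrative placeholders.
ecs.register_task_definition(
    family="bookingapp-moviev2",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "bookingapp-moviev2",
            "image": "mydockerhubuser/bookingapp-moviev2:latest",
            "portMappings": [{"containerPort": 5000, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```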

Now create a service moviev2 for the task.

Add the new service to the ALB and enable service discovery as moviev2.internal-bookingapp.com.
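For reference, creating the same service with boto3 would look something like this (a sketch; the cluster name, subnet, security group, target group ARN, and Cloud Map registry ARN are placeholders for my actual values):

```python
import boto3

ecs = boto3.client("ecs")

# Create the moviev2 Fargate service, attach it to the ALB target group,
# and register it with Cloud Map for service discovery.
ecs.create_service(
    cluster="bookingapp-cluster",
    serviceName="moviev2",
    taskDefinition="bookingapp-moviev2",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
            "assignPublicIp": "ENABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/bookingapp-moviev2-tg/...",
            "containerName": "bookingapp-moviev2",
            "containerPort": 5000,
        }
    ],
    serviceRegistries=[
        {"registryArn": "arn:aws:servicediscovery:...:service/srv-moviev2"}
    ],
)
```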

I have auto scaling configured for this service as well.

Finally, review and save the service. You will now have 4 services in place.

home → from bookingapp-home task

movie → from bookingapp-movie task

moviev2 → from bookingapp-moviev2 task (running modified code of bookingapp-movie)

redis → from bookingapp-redis task.

ECS services

The sample application has an ALB in front of it. The ALB listens on port 80 and routes to the backend target groups based on the URL path (a sketch of one such listener rule follows the list below).

/home → bookingapp-home-tg → refers home service → bookingapp-home task.

/movie → bookingapp-movie-tg → refers movie service → bookingapp-movie task.

/moviev2 → bookingapp-moviev2-tg → refers moviev2 service → bookingapp-moviev2 task.

/redis → bookingapp-redis-tg → refers redis service → bookingapp-redis task.
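The path-based rules themselves can be created with boto3's elbv2 client. Here is a sketch of the rule for the new /moviev2 path (the listener ARN, target group ARN, and priority value are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward /moviev2* requests on the ALB's port 80 listener to the moviev2 target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/bookingapp-alb/.../...",
    Priority=40,
    Conditions=[{"Field": "path-pattern", "Values": ["/moviev2*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/bookingapp-moviev2-tg/...",
        }
    ],
)
```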

In the canary deployment architecture diagram shown above, you can see that the home service contacts the movie service (endpoint movie.internal-bookingapp.com). I made some code changes and created a new service for movie called moviev2 (endpoint moviev2.internal-bookingapp.com). The moviev2 service is now in place, but requests are not going there yet. Let's see how we can replace the movie service with moviev2 using the canary deployment model with the help of AWS App Mesh.

AWS App Mesh

The good part about App Mesh is that you don't have to change anything in your application code to use it. Let's create the necessary resources in AWS App Mesh.

Create a mesh for our application — bookingapp.

Create mesh
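The same step with boto3 is a one-liner (a sketch, assuming default mesh settings):

```python
import boto3

appmesh = boto3.client("appmesh")

# Create the service mesh that will contain all bookingapp resources.
appmesh.create_mesh(meshName="bookingapp")
```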

Create a virtual node for each of our services. Start with the home service: set the listener to port 5000, since that is the port we exposed from the container. Leave the backends empty for now; we will update them later once we create the virtual services.

Now repeat the same for the other services, bookingapp-movie and bookingapp-moviev2.
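With boto3, the virtual nodes can be created in a small loop (a sketch; I assume all three containers listen on port 5000 and are registered in the same Cloud Map namespace, and the virtual node names match the ones used in the screenshots):

```python
import boto3

appmesh = boto3.client("appmesh")

# One virtual node per discoverable service, pointing at its Cloud Map entry.
# Backends are left empty for now and added after the virtual services exist.
for service in ["home", "movie", "moviev2"]:
    appmesh.create_virtual_node(
        meshName="bookingapp",
        virtualNodeName=f"{service}-virtual-node",
        spec={
            "listeners": [{"portMapping": {"port": 5000, "protocol": "http"}}],
            "serviceDiscovery": {
                "awsCloudMap": {
                    "namespaceName": "internal-bookingapp.com",
                    "serviceName": service,
                }
            },
        },
    )
```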

Create virtual services for all of our services. Make sure the service name is the same as the one you created in ECS service discovery.

Create the same for the other services as well. We now have a total of 4 virtual services.
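As a sketch, each virtual service initially points at its virtual node as the provider, and its name must match the service discovery DNS name (I only show three here; redis would follow the same pattern):

```python
import boto3

appmesh = boto3.client("appmesh")

# Virtual service names must match the Cloud Map service discovery DNS names.
for service in ["home", "movie", "moviev2"]:
    appmesh.create_virtual_service(
        meshName="bookingapp",
        virtualServiceName=f"{service}.internal-bookingapp.com",
        spec={
            "provider": {
                "virtualNode": {"virtualNodeName": f"{service}-virtual-node"}
            }
        },
    )
```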

After we create the virtual services, we need to add a backend to home-virtual-node, because the home service has to contact the movie service.

Virtual Nodes → home-virtual-node → Edit
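The equivalent boto3 call re-submits the node spec with the backend added (a sketch; note that the full spec, including the listener and service discovery settings, has to be supplied again on update):

```python
import boto3

appmesh = boto3.client("appmesh")

# Allow the home virtual node to reach the movie virtual service.
appmesh.update_virtual_node(
    meshName="bookingapp",
    virtualNodeName="home-virtual-node",
    spec={
        "listeners": [{"portMapping": {"port": 5000, "protocol": "http"}}],
        "serviceDiscovery": {
            "awsCloudMap": {
                "namespaceName": "internal-bookingapp.com",
                "serviceName": "home",
            }
        },
        "backends": [
            {"virtualService": {"virtualServiceName": "movie.internal-bookingapp.com"}}
        ],
    },
)
```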

Create a virtual router only for the movie service. As mentioned above, the virtual router will route traffic based on the routes we define.

In the route section, specify the route type as http, the targets as the virtual nodes movie-virtual-node and moviev2-virtual-node with whatever weights you wish, and the match as /movie, since that is the path we use to access the service in the container. Create the route.
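In boto3 form, that is a virtual router plus one weighted HTTP route (a sketch; I use an even 50/50 split here to match the test further below, and the route name is my own placeholder):

```python
import boto3

appmesh = boto3.client("appmesh")

# Router that owns the weighted route for the movie traffic.
appmesh.create_virtual_router(
    meshName="bookingapp",
    virtualRouterName="movie-virtual-router",
    spec={"listeners": [{"portMapping": {"port": 5000, "protocol": "http"}}]},
)

# Split /movie traffic evenly between the old and new virtual nodes.
appmesh.create_route(
    meshName="bookingapp",
    virtualRouterName="movie-virtual-router",
    routeName="movie-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/movie"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "movie-virtual-node", "weight": 50},
                    {"virtualNode": "moviev2-virtual-node", "weight": 50},
                ]
            },
        }
    },
)
```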

Now add the virtual router as the provider of the virtual service movie.internal-bookingapp.com.
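Or, as a boto3 sketch, switch the provider of the virtual service from the virtual node to the router:

```python
import boto3

appmesh = boto3.client("appmesh")

# Point the movie virtual service at the router instead of a single virtual node.
appmesh.update_virtual_service(
    meshName="bookingapp",
    virtualServiceName="movie.internal-bookingapp.com",
    spec={
        "provider": {
            "virtualRouter": {"virtualRouterName": "movie-virtual-router"}
        }
    },
)
```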

Let's pause and understand the flow here. When traffic is sent to movie.internal-bookingapp.com, it reaches the Envoy proxy. The virtual service movie.internal-bookingapp.com has a provider called movie-virtual-router, so the traffic is routed there. The virtual router's route has two weighted targets at 50% each, so requests are split evenly between them. One target is the virtual node movie-virtual-node, which maps to the AWS Cloud Map service movie; Cloud Map resolves it to an IP and the request is forwarded. This is how the overall traffic flow happens with AWS App Mesh.

Now update the task definitions to use App Mesh. In the ECS cluster, go to the task definition of bookingapp-home and create a new revision.

Enable App Mesh and provide all the necessary details.

Click Apply and the proxy configuration will be auto-populated.

After you apply, you will see the Envoy container added to the containers section.

Click Create to create the new task definition revision. Repeat the same for the other task definitions, bookingapp-movie and bookingapp-moviev2.
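For completeness, here is roughly what the new revision looks like if registered through boto3 instead of the console, with the Envoy sidecar and the APPMESH proxy configuration the console auto-populates (a sketch; the Envoy image tag, account-specific ARNs, and image names are placeholders, and the proxy port/UID values are the defaults the console filled in for me):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="bookingapp-home",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "bookingapp-home",
            "image": "mydockerhubuser/bookingapp-home:latest",
            "portMappings": [{"containerPort": 5000, "protocol": "tcp"}],
            "essential": True,
        },
        {
            # Envoy sidecar added by App Mesh integration; it proxies the task's traffic.
            "name": "envoy",
            "image": "public.ecr.aws/appmesh/aws-appmesh-envoy:<version>",
            "essential": True,
            "user": "1337",
            "environment": [
                {
                    "name": "APPMESH_VIRTUAL_NODE_NAME",
                    "value": "mesh/bookingapp/virtualNode/home-virtual-node",
                }
            ],
        },
    ],
    proxyConfiguration={
        "type": "APPMESH",
        "containerName": "envoy",
        "properties": [
            {"name": "IgnoredUID", "value": "1337"},
            {"name": "ProxyIngressPort", "value": "15000"},
            {"name": "ProxyEgressPort", "value": "15001"},
            {"name": "AppPorts", "value": "5000"},
            {"name": "EgressIgnoredIPs", "value": "169.254.170.2,169.254.169.254"},
        ],
    },
)
```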

Now, update the services to use the latest task definition. In the ECS cluster, go to the Services tab, select the movie service, and update it.

Make sure you select the Force new deployment check box and deploy the service. Repeat the same for the moviev2 and home services as well. Wait for Fargate to pull the latest container images and bring the tasks up. Once the tasks are up, make sure they are added to their respective ALB target groups.
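The same update can be scripted with boto3 (a sketch; the cluster name is a placeholder for mine):

```python
import boto3

ecs = boto3.client("ecs")

# Roll each service onto the latest task definition revision (with the Envoy proxy)
# and force new Fargate tasks to be started.
for service in ["home", "movie", "moviev2"]:
    ecs.update_service(
        cluster="bookingapp-cluster",
        service=service,
        taskDefinition=f"bookingapp-{service}",  # latest ACTIVE revision of the family
        forceNewDeployment=True,
    )
```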

A simple curl request to the ALB /home path shows the load equally distributed between both services (movie.internal-bookingapp.com and moviev2.internal-bookingapp.com).
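To check the distribution, I hit the ALB in a small loop. A hedged sketch (the ALB DNS name is a placeholder, and it assumes the moviev2 response body contains a distinguishing marker such as the string "2.0", which is specific to my sample code):

```python
import requests

# Placeholder ALB DNS name; replace with your own.
ALB_URL = "http://bookingapp-alb-1234567890.us-east-1.elb.amazonaws.com/home"

counts = {"movie": 0, "moviev2": 0}
for _ in range(100):
    body = requests.get(ALB_URL, timeout=5).text
    # Assumes the v2 page identifies itself in its response body.
    counts["moviev2" if "2.0" in body else "movie"] += 1

print(counts)  # roughly 50/50 with equal route weights
```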

Now I have confirmed that my moviev2 service works fine with 50% of the traffic. We can increase its share from 50% to 80% and observe the traffic distribution.

Traffic is now mostly routed to the moviev2 service, with roughly 10–20% still routed to the movie service based on the weights. Now we can simply assign 100% weight to the moviev2 service, and eventually stop the old Fargate tasks and delete the old ALB target group.
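Shifting more (or eventually all) traffic to moviev2 is just another route update with new weights. A sketch, using the same placeholder route name as before:

```python
import boto3

appmesh = boto3.client("appmesh")

# Send all /movie traffic to the new virtual node.
# For an intermediate step (e.g. 80/20), keep both targets and adjust the weights.
appmesh.update_route(
    meshName="bookingapp",
    virtualRouterName="movie-virtual-router",
    routeName="movie-route",
    spec={
        "httpRoute": {
            "match": {"prefix": "/movie"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "moviev2-virtual-node", "weight": 100},
                ]
            },
        }
    },
)
```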

Closing Notes

Using AWS App Mesh, we can easily integrate our existing services without any code changes to our application stack, and we can deploy code changes on the fly just by adjusting a simple weight parameter in the App Mesh route rules. It is also very easy to revert a deployment to the old code by switching the weight parameter back to 100% for the old service.
