Deploying Microservices with AWS Fargate

At the last re:Invent (2017), AWS launched a new service called Fargate, which can simply be described as an abstraction for deploying applications through containers.

AWS already had a service to manage containers, Amazon Elastic Container Service (ECS), but ECS is more connected to the infrastructure layer than to the application layer.

The main idea behind Fargate is that you only have to think about the computational footprint of your application (memory and CPU), and it will manage everything else for you automatically.

My case and why Fargate

In B2W, we have many environments where we run our Microservices. One of these environments is AWS, where we deploy our applications using Elastic Beanstalk. Another environment is managed by us, and to simplify our operation, B2W's DevOps team built an abstraction that uses Mesos and other Mesos frameworks. The concept of Mesos is to provide a unified view of your infrastructure and its compute resources, so you can organize all of those resources into clusters.

For task scheduling, like deploying a new version of a Microservice, we use a Mesos framework called Marathon. The role of Marathon in this process is to deploy the new service, check if it is healthy, and register it in a load balancer (in this case Traefik) through an integration that uses Marathon as a backend configuration engine. A scheduled task for Marathon is a container exposing a /health endpoint where Marathon can check the application's heartbeat.
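To make this concrete, a minimal Marathon app definition of the kind described above could look like the sketch below. It is hypothetical: the id, image, and resource values are illustrative, not our actual configuration.

```shell
# Write a minimal Marathon app definition: a Docker container with an
# HTTP health check on /health (illustrative values only).
cat > my-service.json <<'EOF'
{
  "id": "/my-service",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/my-service:1.0.0",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 8080, "hostPort": 0 } ]
    }
  },
  "instances": 2,
  "cpus": 0.25,
  "mem": 512,
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "intervalSeconds": 10, "maxConsecutiveFailures": 3 }
  ]
}
EOF
```

Marathon polls the /health path declared here, and Traefik picks up the healthy tasks through its Marathon backend.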

Deploying and managing an application in ECS with EC2 vs. Fargate

But in AWS we don't use containers, and this leads to some problems:

  1. Our CD process varies: in one environment (AWS) our deploy artifact is a fat JAR, while in the other (Mesos) it is a Docker container;
  2. Our troubleshooting process is also different. In AWS, we have a virtual machine (EC2) just running a Java application. In Mesos we have a different scenario, with many containers (applications) on the same VM. In practice this is quite different, because your application shares some OS resources, like file descriptors, with others;
  3. Deploying containers is better. The startup time is much faster, and you have the guarantee that if the container runs on your machine, it will run in any environment.

For the reasons above, using containers at AWS seemed like an obvious path for us. We started by trying ECS, but ECS sits a step below Marathon; it is more like Mesos than Marathon. As we are responsible for applications, not for machine-level resources, Marathon is closer to our reality, where each team is responsible for its (Micro)services.

All the steps needed to benefit from Fargate are described below. I will create a Microservice, build a Docker image of this service, push it to ECR, and deploy it in AWS using Fargate.


Microservice creation

The first thing is to have a Microservice. In my case, this will be a Spring Boot application with the Actuator module, because we need a /health endpoint in this service.

Fortunately, Spring has a kind of "application generator" at start.spring.io, so go there, generate an application with the Spring Web and Spring Actuator modules, and download it!

Now, run the Maven package command to generate your fat JAR:

mvn package -DskipTests

The output of this command will be a file called your-application.jar. In my case, it is fargate-0.0.1-SNAPSHOT.jar.

Dockerizing the Microservice

The second step is to dockerize my Spring Boot application, and for this I'll need a Dockerfile.

I assume that you are familiar with some Docker concepts, like images. This is mandatory for using ECS, Fargate, and any other service that uses Docker containers.

Creating the Dockerfile

The Dockerfile is the file that I'll use to build the container image. It is like a cake recipe: a sequence of steps that Docker's engine uses to generate images.

My image should be very simple, just a Java application, so the Dockerfile looks like this:

# Pin the base image instead of relying on :latest
FROM openjdk:8-jre
LABEL maintainer="marcus.cavalcanti@gmail.com"
WORKDIR /opt/springbootapp/
# COPY is preferred over ADD for plain local files
COPY fargate-0.0.1-SNAPSHOT.jar /opt/springbootapp/
EXPOSE 8080
CMD ["java", "-jar", "fargate-0.0.1-SNAPSHOT.jar"]

Generating the image

Now I can build the image from the Dockerfile. I'll also tag this image. Using a tag is very important, because the tag will be the Microservice version deployed with Fargate.

To build the image, run the command below:

docker build -t fargate-java:1.0.0 .

You can see that I named my image (locally) "fargate-java" and used the tag "1.0.0", the current version of my Microservice.

To check if the image was created successfully, run the command below:

docker image list | grep fargate

To check if the image is working properly, run the container, publishing its port to the host:

docker run -p 8080:8080 fargate-java:1.0.0

Then curl localhost:8080/health from your machine. If this command returns a 200 status code, it is working.
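This check can be scripted. Below is a hypothetical helper (the function and container names are mine, and it assumes the fargate-java:1.0.0 image built earlier): it publishes the container's port 8080 on the host, polls /health until Spring Boot is up, and cleans up afterwards.

```shell
# Returns 0 when an Actuator health body reports UP
is_up() { echo "$1" | grep -q '"status":"UP"'; }

smoke_test() {
  # Start the container detached, mapping port 8080 to the host
  docker run -d --name fargate-smoke -p 8080:8080 fargate-java:1.0.0 >/dev/null || return 1
  result=1
  # Spring Boot can take a few seconds to start, so poll up to ~60s
  for i in $(seq 1 30); do
    if is_up "$(curl -fs http://localhost:8080/health)"; then
      echo "healthy"
      result=0
      break
    fi
    sleep 2
  done
  docker rm -f fargate-smoke >/dev/null
  return $result
}
```

Run smoke_test after building the image; a "healthy" line means the Actuator endpoint answered with status UP.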

ECR

To work with Fargate properly, I must use Amazon Elastic Container Registry (ECR), which is a private Docker Registry managed by AWS.

The container's image that I'll deploy using Fargate must already be in ECR, so let's follow the steps below to accomplish this.

I'll assume that you already have the aws-cli configured on your machine. If you don't, use this link: https://docs.aws.amazon.com/cli/latest/reference/configure/

Logging in to ECR

To log in to ECR using your AWS credentials, run the command below:

aws ecr get-login

This command will generate an output: a docker login command. Run that output as a command (with recent Docker versions, add the --no-include-email flag to get-login first) and voilà!

Tagging the image

Now, I'll tag the image that I created previously. I already have this image on my local machine, but it is not yet in ECR, so let's run this command:

docker tag <IMAGE ID LOCAL> <repositoryURI ECR>:<VERSION>

If you don't know what the IMAGE ID is, run the command below and take the value of the third column, a hash value:

docker image list | grep fargate

In case you don't know your repository URI in ECR, run the command below, choose your repository, and take the value of the "repositoryUri" field:

aws ecr describe-repositories

Your command should look like this:

docker tag 3b893681f505 562821017172.dkr.ecr.us-east-1.amazonaws.com/marcus:1.0.0

Pushing the image

Now my image is tagged with the ECR repository URI; the final step is to push it to AWS:

docker push <repositoryURI ECR>:<VERSION>
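Putting the pieces together, the tag-and-push sequence can be scripted. The account ID, region, and repository name below are this article's examples — substitute your own. The docker calls are guarded so the URI composition can be checked even on a machine without Docker or the image.

```shell
# Example values from this article — replace with your own
ACCOUNT_ID=562821017172
REGION=us-east-1
REPO=marcus
VERSION=1.0.0

# Compose the full ECR image reference from its parts
ECR_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${VERSION}"
echo "$ECR_IMAGE"

# Tag and push only if Docker and the local image are actually present
if command -v docker >/dev/null 2>&1 \
   && docker image inspect "fargate-java:${VERSION}" >/dev/null 2>&1; then
  docker tag "fargate-java:${VERSION}" "$ECR_IMAGE"
  docker push "$ECR_IMAGE"
fi
```

Keeping the version in a single variable guarantees that the local tag and the ECR tag never drift apart.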

Deploying with Fargate

Finally, with all prerequisites satisfied, it's time to make my Microservice visible to the world.

Fargate is an abstraction over ECS: instead of you managing instances and clusters, it does this for you. In other words, Fargate is just a launch type of ECS.

ECS launch types: Fargate and EC2.

Container definition (standard)

The first step is to define my container. In this case, as I created a Microservice from scratch, I must define a custom container. Click on Configure and follow the steps below:

  1. Container name: fargate-application;
  2. Image: the image path in ECR, including version/tag (e.g.: 562821017172.dkr.ecr.us-east-1.amazonaws.com/marcus:1.0.0);
  3. Memory limits: define a soft limit (128) to "tell" ECS how much memory should be reserved for your container's execution, and a hard limit (512) to kill your container when your application reaches it;
  4. Port mappings: my service runs on port 8080, so I mapped that port here.

Container definition (advanced)

At this point, it is only necessary to define the fields listed below:

  1. CPU Units: how many CPU units does your application demand? In other words, how much CPU your application will share with other tasks on the same host machine. If you use Fargate, you can leave this field blank.
  2. Entry point: sh, -c
  3. Command: /bin/sh -c "java -jar fargate-0.0.1-SNAPSHOT.jar"
  4. Working directory: /opt/springbootapp (same as used in Dockerfile)

You can leave all the other fields in this section blank. Fargate will handle this for you.

Technically, I don't need this, because I created my Docker image with all the steps required to run my Microservice.

But if you need some specific tuning, want to mount a volume in your container, or need to configure networking, this is the place. You can also change all of that information later through Task Definitions.

With tasks, I can configure how my containers will run: network configuration, memory, CPU, port bindings, container orchestration, and so on.

As one of the purposes of using Fargate is to worry only about application aspects, my first task will be very simple:

  1. Task definition name: first-task-fargate;
  2. Network mode: awsvpc;
  3. Task Execution Role: I created one previously with full ECS access;
  4. Compatibilities: FARGATE;
  5. Task memory: 512 MB (you're billed for this);
  6. Task CPU: 0.25 vCPU (you're billed for this).
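For reference, the console steps above can also be expressed as JSON for the aws ecs register-task-definition command. This is a hedged sketch: the container name and image URI match this article's examples, while the execution role ARN is a placeholder for the role mentioned in step 3, and "cpu": "256" is the API spelling of 0.25 vCPU.

```shell
# Write a Fargate task definition equivalent to the console steps above
cat > task-definition.json <<'EOF'
{
  "family": "first-task-fargate",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::562821017172:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "fargate-application",
      "image": "562821017172.dkr.ecr.us-east-1.amazonaws.com/marcus:1.0.0",
      "memoryReservation": 128,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }]
    }
  ]
}
EOF
# Register it (requires AWS credentials):
# aws ecs register-task-definition --cli-input-json file://task-definition.json
```

Versioning this file alongside the application makes the task definition reproducible outside the console.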

Defining a service and creating an ELB

After configuring my custom container and my first task, I want AWS to automatically create an ELB (an ALB, specifically) for me. For this, I have to turn my container/task into a service.

  1. Service name: automatically generated;
  2. Number of desired tasks: this is an IMPORTANT piece of information; it defines how many containers, or application instances, you'll have. It is always a good idea to choose more than one, so choose 2;
  3. Security group: automatically create new;
  4. Load balancer type: Application Load Balancer;
  5. Load balancer listener port: the port your application listens on inside the container;
  6. Load balancer listener protocol: HTTP.

Defining ECS Cluster

An ECS cluster is a logical grouping to separate resources, like memory and CPU.

A cluster can have many instances; a given container (task) runs on a single instance of the cluster, while a service can run across many instances of the cluster.

Diagram of ECS objects and how they relate

In my case, I'll create a new cluster, so follow these steps:

  1. Cluster name: my-cluster;
  2. VPC ID: automatically create new;
  3. Subnets: automatically create new.

Reviewing and creating

The last step in the AWS console is to review all the previous steps.

If everything seems OK, click on the create button and wait for AWS to create all the components and integrations.

In case of success, you'll see a confirmation screen.

Calling the Microservice

Now my Microservice is deployed and ready to receive HTTP calls. To discover which address to use to call this service, I went to the ELB console and copied the DNS name.

Remember that my service is running on port 8080 (the default port of a Spring Boot application); if you need to call it through the ELB on port 80, you can do that using target groups.

Call the microservice with a simple cURL command like this:

curl http://ecs-first-run-alb-1100347479.us-east-1.elb.amazonaws.com:8080/health

This command should return this:

{"status":"UP"}

Understanding the integration between AWS components

When you use a cloud managed service like AWS Fargate, you're using many building blocks (components) which are integrated with one another.

When I deployed this simple Microservice, under the hood AWS created a stack in CloudFormation. With this stack, all the necessary resources were created as a single unit.

With CloudFormation, AWS created a target group and an Elastic Load Balancer to expose our application as an HTTP resource.

The VPC and subnets are another example: we asked Fargate to create them automatically, and this too was accomplished through the CloudFormation stack.

We're also using CloudWatch to collect the logs of all operations of each AWS component. In the same way, we're using EC2 to manage ECS clusters and run tasks (containers).

Finally, we're using IAM roles to grant access to our services and ECR to store our Docker image(s).


As you can see, it is very simple and easy to deploy a Microservice using Fargate. For anyone already working with containers, Fargate looks like a good option to replace Elastic Beanstalk, or even plain EC2.

If your case is more complex and demands other scenarios, like the features provided by a Service Mesh solution and/or Kubernetes, you should still start working with containers. Containers are now the industry standard for deploying and running applications.
