Our staging deployment in ECS

Subhash Chandran
Bamboo Engineering
Jul 27, 2017 · 5 min read


We at bamboo use Go for developing our microservices, and we use Docker in all stages of the SDLC, including deployment to staging and production. This blog post covers in detail a recent deployment of our staging environment using AWS ECS: four microservices running on two t2.micro instances behind one ALB.

Staging Environment Requirements

Staging requirements for our scenario:

  1. We wanted to deploy all four of our microservices.
  2. HA is not needed. We do not mind downtime in staging.
  3. We are extremely cost conscious when it comes to hosting.
  4. The staging environment is used by developers to build their frontend/app connectivity to the APIs. There is no high-throughput or low-latency requirement.

Setting up Staging

I typically follow the DID technique when it comes to capacity planning:

D: Design for 20x capacity.
I: Implement for 3x capacity.
D: Deploy for 1.5x capacity.

The challenge in a microservices architecture is that capacity planning becomes more complex. When I designed my first microservices architecture, I set up an HA cluster for each and every microservice we developed. Most of the cluster servers were scandalously under-utilized, and the cost simply was not justified. Later, when I started exploring ways to utilize spare capacity in already-configured clusters, I discovered Docker-based hosting. Since we are AWS customers, ECS became the natural choice for Docker deployment.

Code Organization

As said earlier, we use Docker for development, integration testing, and deployment. But that does not mean we have separate Docker files for each use case. I will describe our code organization so that you understand how one codebase is used across all use cases, including deployment.

Each Go microservice is a separate code repository. At the root of the repository we have two Dockerfiles (example available in this gist):

  1. Dockerfile
  2. Dockerfile.migrate

Dockerfile creates the microservice image. Dockerfile.migrate creates the image for running DB migrations. Yes, we create one-time-run containers for running DB migrations during deployment. We also have a docker-compose.yml file at the project root directory that defines all the dependent services the microservice needs. Configuration files are stored in two locations: conf/ and docker/conf/. When running the code on a developer's laptop without Docker, the configuration under conf/ is used (typically a localhost DB URL); when running under docker-compose up, the configuration under docker/conf/, with hostnames matching those in docker-compose.yml, is used. So our project code looks like this:
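A sketch of the layout described above (the file names come from this post; the service name and main.go are illustrative placeholders):

```
myservice/
├── Dockerfile            # builds the microservice image
├── Dockerfile.migrate    # builds the one-time DB-migration image
├── docker-compose.yml    # dependent services for local development
├── dkr-run.sh            # wraps docker-compose up
├── dkr-push.sh           # pushes built images to ECR
├── conf/                 # config for running directly on a laptop (localhost DB URL)
├── docker/
│   ├── conf/             # config with hostnames from docker-compose.yml
│   └── entrypoint.sh     # fetches stg/prd config, then runs the Go binary
└── main.go
```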

You will also notice two scripts at the root, dkr-run.sh and dkr-push.sh. The former does a docker-compose up, and the latter pushes the built images to ECR.
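A hypothetical sketch of what dkr-push.sh could look like (the real script is in the gist linked above; REGION, ACCOUNT_ID, and the repository names are placeholders, and the `aws ecr get-login` flow is the aws-cli v1 style current when this was written):

```shell
#!/bin/sh
set -e

REPO="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Authenticate the local Docker daemon against ECR
$(aws ecr get-login --no-include-email --region "$REGION")

# Build both images from the two Dockerfiles at the project root
docker build -t "$REPO/myservice:latest" .
docker build -t "$REPO/myservice-migrate:latest" -f Dockerfile.migrate .

# Push to the two per-microservice ECR repositories
docker push "$REPO/myservice:latest"
docker push "$REPO/myservice-migrate:latest"
```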

When running the containers in the staging and production environments, we set a special environment variable, MYENV, to stg or prd. The ENTRYPOINT in our Dockerfiles is a script, docker/entrypoint.sh, that recognizes this environment variable and copies sensitive configuration for that environment from our vault before running the Go binary. More about this design in this excellent blog post from AWS. Also, do NOT make the newbie mistake of keeping sensitive information like DB passwords in environment variables: they tend to get leaked to other services like DataDog.
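A minimal sketch of the entrypoint idea, assuming MYENV is set in the Task Definition (fetch-config stands in for whatever vault client is actually in use; the binary path is illustrative):

```shell
#!/bin/sh
# docker/entrypoint.sh (hypothetical sketch)
set -e

case "$MYENV" in
  stg|prd)
    # Pull the sensitive config for this environment from the vault
    # before the Go binary starts. fetch-config is a placeholder.
    fetch-config "$MYENV" /app/conf/
    ;;
  *)
    # No MYENV: fall back to the docker/conf/ files baked into the image
    echo "MYENV not set; using baked-in docker/conf/ defaults"
    ;;
esac

# Replace the shell with the Go binary so it receives signals directly
exec /app/myservice "$@"
```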

In summary, this code organization helps us to run the code:

  1. Directly, without Docker, in the developer’s configured laptop.
  2. In the developer’s laptop using docker-compose up.
  3. Running integration tests in our CI.
  4. Running in staging and production environment with protected configuration files.

Deployment Using ECS

I expect readers to understand basic ECS concepts before proceeding. This is not intended to be an ECS tutorial, but a note on how we at bamboo use it.

We followed this approach:

  1. For each microservice, we create two ECR repositories: one for the microservice itself, and the other for the migration container.
  2. Mirroring the ECR configuration, we create two Task Definitions.
  3. Each microservice Task Definition is configured as a corresponding Service. All our microservices share a single ALB. When configuring the Service, we attach the ALB to the container via a Target Group created right within the ECS console (screenshot in the next section). For staging, we run just one instance of each microservice.
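The same setup can also be scripted; a hypothetical per-microservice sketch with the AWS CLI (cluster, service, and repository names are illustrative, and the Target Group ARN placeholder is left unfilled):

```shell
# Two ECR repositories per microservice
aws ecr create-repository --repository-name myservice
aws ecr create-repository --repository-name myservice-migrate

# Two matching Task Definitions
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs register-task-definition --cli-input-json file://taskdef-migrate.json

# One Service per microservice, sharing the single ALB via a
# path-routed Target Group; desired-count 1 for staging
aws ecs create-service --cluster staging --service-name myservice \
  --task-definition myservice --desired-count 1 \
  --load-balancers targetGroupArn=<target-group-arn>,containerName=myservice,containerPort=8080
```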

Screenshots

We create two ECR repositories and two Task Definitions per microservice: one for running the microservice itself, and the other for running any DB migrations that may be needed.

A screenshot showing bamboo Services. Note that we do not configure the *-migrate Task Definitions as Services:

How the microservices are deployed to the EC2 instances:

In the above screenshot, we see two tasks (containers) placed in each EC2 instance. In ECS, task placement can be customized using Task Placement Strategies.

This is an example configuration of attaching an existing ALB to a Service by creating a new Target Group for the path-pattern:

One thing to note about ALB path-based routing: the ALB does NOT do any path rewrites. So in the above case, your microservice code must serve route paths starting with /x/*.

When running DB migrations, we directly run the one-time run containers from the Tasks section:

The DB migration task will be executed on one of the cluster instances. In our case, each microservice container is configured with a 256 MB hard memory limit. A t2.micro instance has 1 GB of RAM, and we were able to run three microservices on one t2.micro instance. So when we reach six microservices, we will not be able to run DB migrations without adding more EC2 instances.
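The one-time migration run can also be kicked off from the CLI instead of the console's Tasks section; a hypothetical sketch (cluster and Task Definition names are illustrative):

```shell
# Run the migration container once; ECS places it on whichever
# cluster instance has the free 256 MB of memory the task requires
aws ecs run-task --cluster staging \
  --task-definition myservice-migrate \
  --count 1
```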

We stream Docker container logs to CloudWatch Logs using awslogs driver. This is how we get to see the logs:
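The awslogs wiring lives in each container definition; a fragment along these lines (log group name, region, and prefix are illustrative):

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/myservice",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "stg"
  }
}
```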

Conclusion

This setup has given us huge flexibility in terms of cost. We are paying for one ALB and two t2.micros to host four microservices in our staging environment; two years back, I couldn't have imagined that! As the number of microservices increases, we plan to add more t2's. For performance testing, we plan to use a separate cluster brought up with bigger, more production-like instances.
