Migrating from bare metal machines to AWS ECS

Sebastian Struß
Published in applike
5 min read · Sep 19, 2019

1. Our way to Docker and orchestration

Bare-metal machines are a good fit if you need raw performance for one or a few specific tasks. But as you drive the business forward, your backend will sooner rather than later need to become more dynamic, and doing that on bare metal is tedious, not very agile, and doesn't scale well.

From bare metal into the clouds.

For this reason we considered using Docker for development and AWS (Amazon Web Services) ECS (Elastic Container Service) for the production system.

Kubernetes came up as well, but it wasn't available as a managed AWS service back then, and since we wanted an infrastructure that is easy to maintain, we decided against it.

After a few weeks of testing and tweaking we found out that:

  • we can scale much easier when having every service containerized,
  • we don’t seem to lose any measurable performance,
  • we can fit more services on one machine, while still having them separated due to the nature of containers,
  • the setup is easy and quickly done, when onboarding new colleagues,
  • tinkering with new versions of software becomes a lot easier, so upgrades can be prepared and tested more easily.

2. Our PHP application's infrastructure (running on bare metal) hadn't changed in years

We are running PHP 7.1 via FPM (the FastCGI Process Manager) behind an nginx web server.

Our application uses AWS RDS (MySQL), Elasticsearch, Redis, Logstash, and RabbitMQ.

Prior to Docker, our development environment ran all of these services side by side on a single machine (a Vagrant VM).

This is obviously not good for Docker, as it doesn't follow any of the principles Docker gives its users, most importantly one service per container.

But worry not, we are making it the Docker way: each service goes into its own respective container.
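That per-service layout can be sketched as a docker-compose file. This is a rough, illustrative sketch: image tags, service names, and credentials are assumptions, not our exact setup.

```yaml
# Illustrative sketch of the one-service-per-container layout;
# versions and credentials are placeholders for local development.
version: "3"
services:
  nginx:
    image: nginx:1.17
    ports:
      - "80:80"
    depends_on:
      - php-fpm
  php-fpm:
    build: .             # the PHP 7.1 application image
    environment:
      DATABASE_HOST: mysql
      DATABASE_PASSWORD: local-dev-only
  mysql:
    image: mysql:5.7     # stands in for AWS RDS locally
    environment:
      MYSQL_ROOT_PASSWORD: local-dev-only
  redis:
    image: redis:5
  elasticsearch:
    image: elasticsearch:6.8.2
  rabbitmq:
    image: rabbitmq:3-management
  logstash:
    image: logstash:6.8.2
```

Each service can now be upgraded, restarted, or swapped out independently, which is exactly what made tinkering with new versions so much easier.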

3. Prepare the application for docker

The first obvious step was to separate the services, following the principles of Docker to get the best out of it.

After we did so, we thought about how to build container images that don't contain credentials but are still runnable. Since we use the Symfony framework, our configuration parameters lived in the parameters.yml file.

The second change was to use Symfony's DotEnv component so the application reads its configuration from the container's environment variables, getting the credentials out of parameters.yml.

This keeps the credentials out of the image's filesystem and makes the containers reusable.

The changes looked like this:
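Sketched with Symfony's env() parameter syntax (the parameter names here are illustrative, not our exact configuration):

```yaml
# Before: credentials baked into parameters.yml, and thus into the image
parameters:
    database_host: db.internal.example
    database_password: s3cr3t

# After: parameters.yml only references environment variables;
# the values come from the container environment in production,
# or from a local .env file loaded by the DotEnv component in development
parameters:
    database_host: '%env(DATABASE_HOST)%'
    database_password: '%env(DATABASE_PASSWORD)%'
```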

That was all it took to run the application the same way via Docker locally as in AWS ECS.

The resulting container structure: one container each for nginx, PHP-FPM, and the supporting services, with the application configured entirely through environment variables.

4. We made it for our developers! How do we deploy it to production?

We decided to use AWS ECS instead of Kubernetes (EKS didn’t exist back then) as it is not as complicated to configure and still has lots of advantages over bare metal machines.

Compared to bare-metal machines, we can now easily scale our application in case of a surge of requests.

Rollbacks have become easy and fast; we don’t need them often, but if we do it won’t be affecting too many users.

Credentials management integrates nicely with ECS: you can add "secrets" to your task definition, which are then available to the container as environment variables.
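A task definition fragment using that feature might look like the following. The account ID, region, image, and parameter names are made up for illustration; valueFrom points at an SSM Parameter Store entry:

```json
{
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/api:latest",
      "secrets": [
        {
          "name": "DATABASE_PASSWORD",
          "valueFrom": "arn:aws:ssm:eu-central-1:123456789012:parameter/api/database-password"
        }
      ]
    }
  ]
}
```

The ECS agent resolves the parameter at task start and injects it as the DATABASE_PASSWORD environment variable, so the credential never appears in the image or the task definition itself.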

These and many more features make it a great upgrade for us!

4.1 preparing the cluster

When you want to build an AWS ECS cluster, the first thing you need is servers, or rather VMs.

In AWS, most instance types are not bare-metal machines but virtualized via HVM or paravirtualization.

So launch some instances and have them join the cluster by adjusting /etc/ecs/ecs.config.
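The only strictly required setting there is the cluster name; on boot, the ECS agent registers the instance with that cluster. The cluster name below is illustrative:

```
# /etc/ecs/ecs.config
ECS_CLUSTER=production
```

In practice this line is usually written by the instance's user-data script at launch, so freshly started instances join the cluster automatically.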

4.2 adjust the deployment

Before using AWS ECS, we used Jenkins to deploy our changes to any of our environments.

While doing the switch to ECS, we also moved from Jenkins to GitLab, so we used its CI/CD features as well.

For deploying to ECS, however, we needed the ecs-deploy script, which lets us update services, scheduled tasks, and task definitions.
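A GitLab CI deploy job using ecs-deploy could look roughly like this. Cluster, service, and variable names are assumptions; -c, -n, -i, and -t are ecs-deploy's cluster, service, image, and timeout flags:

```yaml
# .gitlab-ci.yml (sketch; names and variables are illustrative)
deploy:
  stage: deploy
  script:
    - docker build -t $ECR_REPO:$CI_COMMIT_SHA .
    - docker push $ECR_REPO:$CI_COMMIT_SHA
    # Create a new task definition revision with the new image
    # and update the service to use it, waiting up to 300s
    - ecs-deploy -c production -n api -i $ECR_REPO:$CI_COMMIT_SHA -t 300
```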

4.3 fighting pitfalls

The first, easiest and most noticeable pitfall was on our sandbox, where we had a desired count of only one for the api service. We had set both the minimum healthy percent and the maximum percent to 100%.

This ended in ECS not being able to deploy the service: it couldn't stop the old task (that would drop below the 100% minimum) and couldn't start a new one either (that would exceed the 100% maximum).
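With a desired count of one, either loosening the minimum (allowing brief downtime, acceptable on a sandbox) or loosening the maximum (starting the new task before stopping the old one) resolves the deadlock. A service fragment illustrating the second option (values are illustrative):

```json
{
  "desiredCount": 1,
  "deploymentConfiguration": {
    "minimumHealthyPercent": 100,
    "maximumPercent": 200
  }
}
```

At 200%, ECS may temporarily run two copies of the task, so the new one can become healthy before the old one is stopped.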

Another pitfall was placing too many worker services on nodes that also host the api. This resulted in slow response times whenever a request was routed to an api task running on an ECS instance busy with many workers.
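One way to keep workers and the api apart is to tag instances with a custom attribute (settable via ECS_INSTANCE_ATTRIBUTES in /etc/ecs/ecs.config) and pin the api service to matching instances with a placement constraint. The attribute name here is a made-up example, not something from our setup:

```json
{
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": "attribute:workload-type == api"
    }
  ]
}
```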

5. Conclusion & next steps

We are now able to scale quickly and easily.

Upgrades or updates have become much easier and can be done faster than before.

ECS makes us more profitable, since we can use our computational resources more efficiently than before.

Our developers enjoy working with Docker much more than with Vagrant.

What’s missing:

  • Moving PHP parts to Go to gain more speed and use our resources even more efficiently
  • Breaking our infrastructure into microservices
  • More developers (you?)

You made it to the end!

Thank you for your interest and if you want to never stop learning cool new technology, then think about applying to one of our open positions at AppLike or directly to jobs@applike.info.
