From 0 to Fargate — Migrating your Ruby on Rails App to Docker
This is the second article in a series about migrating your production application stack to Docker and AWS Fargate. If you haven’t read the first post in the series, check it out first; it provides an overview of what’s covered here.
Scenario
You have an existing Ruby on Rails application with the following:
- Running on an AWS EC2 instance alongside all your other apps.
- Application configuration managed with .yml files that must be updated on the EC2 instance during each deployment.
- No CI/CD deployment pipeline.
- Downtime during deployments.
Setting up an application this way is common and accepted early in development, but once the application and engineering team begin to scale, problems with this approach arise:
- Lots of room for manual error that can cause deployment failures and further downtime.
- Lost developer time fighting with deployments.
- A problem with one application or service can take down the whole EC2 instance, and with it the entire stack.
- Developers must manually keep the development environment up to date with their code changes.
- Downtime can mean lost revenue, which prompts deployments to occur at strange times (like 2 A.M.).
All of these problems stem from the monolithic AWS EC2 approach to running and deploying a production application. This article covers how to prepare your existing Ruby on Rails application to run on Docker; a later article will cover running it on AWS Fargate.
Application Configurations
Let’s assume that your existing Ruby on Rails app uses the Figaro gem to manage application configuration with an application.yml file. To run the application in Docker, you need to be able to set configuration values without rebuilding the Docker image. At the same time, you don’t want to make a non-backward-compatible change that would break your existing infrastructure and code by removing the Figaro gem and changing how configuration is loaded. To solve this problem, you can dynamically create the files required by Figaro inside the Docker container before the application starts.
AWS SSM Parameter Store
To start, you will need to place all of your existing application configuration values into the AWS SSM Parameter Store. You can bulk-load data into the Parameter Store with the AWS CLI as follows:
aws ssm put-parameter --name "/<ENV>/<APPLICATION>/<KEY>" --value "<MY-KEY>" --type "SecureString"
The scheme that we decided to use for the path of the keys is shown above. For core parameters, like database configurations, we use the following scheme:
aws ssm put-parameter --name "/<ENV>/core/<KEY>" --value "<MY-CORE-KEY>" --type "SecureString"
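To load many keys at once, a small wrapper script around the CLI helps. The following is a sketch only: the script name, the `params.env` input format, and the `ENV_NAME`/`APP_NAME` defaults are assumptions, not part of the original setup. A `DRY_RUN` switch is included so you can preview the calls before touching AWS.

```shell
#!/usr/bin/env bash
# Hypothetical bulk loader for SSM Parameter Store.
# Reads KEY=VALUE lines and issues one put-parameter call per line.
# DRY_RUN=1 prints the command instead of calling the AWS CLI.
set -eu

ENV_NAME="${ENV_NAME:-development}"
APP_NAME="${APP_NAME:-myapp}"

put_param() {
  local key="$1" value="$2"
  local name="/${ENV_NAME}/${APP_NAME}/${key}"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "aws ssm put-parameter --name ${name} --type SecureString"
  else
    aws ssm put-parameter --name "${name}" --value "${value}" \
      --type "SecureString" --overwrite
  fi
}

# Demo: load two placeholder keys without touching AWS.
printf 'SECRET_KEY_BASE=changeme\nSMTP_HOST=smtp.example.com\n' > /tmp/params.env
DRY_RUN=1
while IFS='=' read -r key value; do
  put_param "$key" "$value"
done < /tmp/params.env
```

Remove `DRY_RUN=1` (and provide AWS credentials) to perform the real uploads; `--overwrite` lets you re-run the script to update existing values.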
The get-env-vars Script
Once the keys are in the parameter store, they need to be retrieved and used in the application. The following is based on this bash script with inspiration from this article.
The script that we used has a few changes. Certain keys and values are application-specific and stay the same whether the application is running in local development or in the development environment. Other keys do change (e.g. database configurations, APIs, URLs, etc.); those are included at the top of the script, with the local configuration set as the default. If the environment variable RUN_TYPE is set to release, the parameters are pulled from the Parameter Store for deployment; otherwise, the local parameters at the top are used.
This script requires certain environment variables to be passed in, which we will cover shortly. When executed, it generates the config/database.yml and config/application.yml files the application needs, populated with the correct Parameter Store values.
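As a rough sketch of how such a get-env-vars script can work (variable names, file contents, and the `/<ENV>/core/...` key paths are assumptions based on the scheme above, not the exact script):

```shell
#!/usr/bin/env bash
# Sketch of a get-env-vars entrypoint helper (names are illustrative).
# Writes config/application.yml and config/database.yml before Rails boots.
#   RUN_TYPE=release -> pull values from SSM Parameter Store
#   anything else    -> use the local defaults below
set -eu

ENV_NAME="${ENV_NAME:-development}"

# Local defaults for keys that differ per environment.
DATABASE_HOST="db"
DATABASE_PORT="3306"

if [ "${RUN_TYPE:-local}" = "release" ]; then
  DATABASE_HOST=$(aws ssm get-parameter --with-decryption \
    --name "/${ENV_NAME}/core/DATABASE_HOST" \
    --query 'Parameter.Value' --output text)
  DATABASE_PORT=$(aws ssm get-parameter --with-decryption \
    --name "/${ENV_NAME}/core/DATABASE_PORT" \
    --query 'Parameter.Value' --output text)
fi

mkdir -p config
cat > config/application.yml <<EOF
DATABASE_HOST: "${DATABASE_HOST}"
DATABASE_PORT: "${DATABASE_PORT}"
EOF

cat > config/database.yml <<EOF
development:
  adapter: mysql2
  host: ${DATABASE_HOST}
  port: ${DATABASE_PORT}
EOF

# Hand off to the container's CMD so the app starts with the files in place.
exec "$@"
```

The final `exec "$@"` is what lets the script double as a Docker ENTRYPOINT: whatever command the container was asked to run executes after the config files exist.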
The Dockerfile
We use the following Dockerfile to build the Ruby on Rails application. The script we wrote above is set as the ENTRYPOINT of the Dockerfile, so the environment variables are always loaded before the application starts.
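A minimal sketch of such a Dockerfile might look like the following (the Ruby version, file names, and entrypoint path are assumptions; the entrypoint script must end with `exec "$@"` so the CMD still runs):

```dockerfile
# Sketch only — base image and paths are illustrative.
FROM ruby:2.6

WORKDIR /app

# Install gems first so this layer is cached across code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

# Generate config/*.yml (from SSM or local defaults) before the app starts.
ENTRYPOINT ["./get-env-vars.sh"]
CMD ["app_bin/web"]
```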
Note: Don’t forget to add a .dockerignore file to exclude the files you don’t need (.git, .env, etc.) from your Docker build.
Container Startup Scripts
You will want a separately running container for each task you want the Rails app to perform. For example, we have api-web, api-clock, and api-worker. api-clock runs cron jobs through Rake, and api-worker runs Shoryuken workers to process an SQS queue. These scripts are placed in an app_bin folder at the root of the project, but they could be placed wherever you see fit. The scripts are shown below.
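As a sketch of what those three scripts can contain, the snippet below generates typical one-line wrappers (the exact commands, the Rake task name, and the Shoryuken config path are assumptions; `-R` tells Shoryuken to load the Rails app):

```shell
#!/usr/bin/env bash
# Sketch: create the three per-role startup scripts under app_bin/.
# The commands below are typical choices, not the article's exact scripts.
set -eu
mkdir -p app_bin

# api-web: serve HTTP traffic.
cat > app_bin/web <<'EOF'
#!/bin/sh
exec bundle exec rails server -b 0.0.0.0 -p 3000
EOF

# api-worker: Shoryuken workers draining the SQS queue.
cat > app_bin/worker <<'EOF'
#!/bin/sh
exec bundle exec shoryuken -R -C config/shoryuken.yml
EOF

# api-clock: scheduled jobs run through Rake (task name is illustrative).
cat > app_bin/clock <<'EOF'
#!/bin/sh
exec bundle exec rake scheduler:run_clock
EOF

chmod +x app_bin/web app_bin/worker app_bin/clock
```

Each script uses `exec` so the process replaces the shell and receives container signals directly, which keeps graceful shutdown working.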
Ideally, the Redis instance would run in its own container, but we ran into some problems that prevented us from doing so.
The docker.env file
This env file is used by the compose stack below to supply the environment variables needed to start the stack locally. Here is an example file with the parameters you’ll need:
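A hedged example of what docker.env can contain (every value is a placeholder, and the variable names follow the assumptions made in the scripts above; comments go on their own lines because env files don’t support inline comments):

```
# docker.env — example only; replace every value with your own.
RUN_TYPE=local
RAILS_ENV=development
ENV_NAME=development
APP_NAME=myapp
# Needed only when RUN_TYPE=release (pulls config from SSM):
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=changeme
AWS_SECRET_ACCESS_KEY=changeme
```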
Docker Compose
To run your newly dockerized application, you will need to set up docker-compose to orchestrate all your containers. The following compose stack runs the application and a database container locally. It should ONLY be used for local development, as it is not designed for production. Deploying the dockerized stack to AWS Fargate will be covered in a subsequent article.
In docker-compose, network resolution between containers is done by service name. So, to reach the db container from the api-web container, you would perform the lookup using db:3306.
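As a minimal local-only sketch of that compose file (image versions, passwords, and service names are assumptions):

```yaml
# docker-compose.yml — local development only.
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: myapp_development
  api-web:
    build: .
    command: ["app_bin/web"]
    env_file: docker.env
    environment:
      # Other containers are reachable by service name, e.g. db:3306.
      DATABASE_HOST: db
      DATABASE_PORT: "3306"
    ports:
      - "3000:3000"
    depends_on:
      - db
  api-worker:
    build: .
    command: ["app_bin/worker"]
    env_file: docker.env
    depends_on:
      - db
```

Each Rails role (web, worker, clock) reuses the same image and differs only in its `command`, which is what makes the per-role startup scripts convenient.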
Conclusion
You should now have a functional docker-compose stack. To start the stack, execute docker-compose build && docker-compose up. After following this guide, keep an eye out for the next guide in my series on deploying your production-ready Docker application to AWS Fargate with UFO: ECS Deploy Tool.