Deploying Rails to AWS Elastic Beanstalk Using GitLab CI
PREVIOUS:
— Basic Docker Image For Rails
— Building and Testing Rails With GitLab CI
What the Deployment Will Look Like
Configure Puma
The Puma config is usually located at config/puma.rb:
# config/puma.rb
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
app_dir = File.expand_path('..', __dir__)
bind "unix:///#{app_dir}/tmp/sockets/puma.sock"
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
environment ENV.fetch("RAILS_ENV") { "development" }
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
For the most part this file is the default generated file with the exception of:
app_dir = File.expand_path('..', __dir__)
bind "unix:///#{app_dir}/tmp/sockets/puma.sock"
We are binding Puma to a socket located in our application directory at tmp/sockets/puma.sock. This is important down the road, when NGINX needs to proxy requests to that same socket.
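One gotcha worth checking up front: Puma will not create the directory the socket lives in, so tmp/sockets must exist before boot. A quick sanity check, run from the app root (freshly generated Rails apps usually ship tmp/ with .keep files, but it's easy to lose via .dockerignore):

```shell
# Puma fails to bind if the socket's parent directory is missing;
# make sure it exists before starting the server.
mkdir -p tmp/sockets
ls -d tmp/sockets  # → tmp/sockets
```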
Configure NGINX
# config/nginx/conf.d/default.conf
upstream app {
server unix:///app/tmp/sockets/puma.sock fail_timeout=0;
}
log_format elastic_beanstalk '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent"';
server {
listen 80;
server_name localhost;
set $default_remote_addr $remote_addr;
if ($http_x_forwarded_for) {
set $default_remote_addr $http_x_forwarded_for;
}
access_log /var/log/nginx/access.log elastic_beanstalk;
root /app/public;
try_files $uri/index.html $uri @app;
location ~* ^/assets/ {
try_files $uri @app;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
location @app {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
This is a basic NGINX config for a Rails application.
The Dockerrun.aws.json
This file is needed for the Multi-Container Docker Platform on Elastic Beanstalk.
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/config/nginx/conf.d"
}
},
{
"name": "app-sockets",
"host": {
"sourcePath": "/app/sockets/"
}
}
],
"containerDefinitions": [
{
"name": "rails-app",
"image": "<APP_IMAGE>",
"essential": true,
"memory": 1024,
"mountPoints": [
{
"sourceVolume": "app-sockets",
"containerPath": "/app/tmp/sockets"
}
]
},
{
"name": "nginx-proxy",
"image": "nginx",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"mountPoints": [
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
},
{
"sourceVolume": "app-sockets",
"containerPath": "/app/tmp/sockets"
}
]
}
]
}
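Before packaging, it's worth verifying that the file is well-formed JSON, since a stray comma is an easy way to break a deploy. A minimal sketch using Python's json.tool (the one-line file here is a stand-in; in practice you'd run the check against the full Dockerrun.aws.json above):

```shell
# Stand-in file so the check is self-contained.
echo '{"AWSEBDockerrunVersion": 2}' > Dockerrun.aws.json
# json.tool exits non-zero on a parse error.
python3 -m json.tool Dockerrun.aws.json > /dev/null && echo "valid JSON"
```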
We will zip this file up along with the NGINX config and upload it to Elastic Beanstalk. We'll call this compressed package the source package.
The Source Package
When Elastic Beanstalk deploys, this source package gets extracted on each EC2 instance into /var/app/current. So if you have a package with the following structure:
# Zipped Source Package
Dockerrun.aws.json
config/
nginx/
conf.d/
default.conf
It will be extracted to:
# EC2 Instance
/var/app/current/
Dockerrun.aws.json
config/
nginx/
conf.d/
default.conf
Back to Dockerrun.aws.json
We’ve defined two Docker volumes in our config. The first contains our NGINX config and the second will hold our socket.
Looking at the container definitions, you’ll see we mount the socket volume at tmp/sockets in the Rails container, where Puma creates its socket, and mount the same volume into the NGINX container. This allows the NGINX container to access the puma.sock file created within our Rails container.
The nginx-proxy-conf volume gets mounted into the NGINX container at /etc/nginx/conf.d, the default config location.
Notice that when defining the rails-app container, you need to replace the image name <APP_IMAGE> with the location of your Docker image. For this article, we’ll assume this image is public.
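If your image is instead tagged per commit (as the CI pipeline later in this article does), one option, not part of the article's setup, is to substitute the placeholder at release time. A sketch with illustrative file contents and image name:

```shell
# Hypothetical: inject the image the pipeline just pushed into the
# <APP_IMAGE> placeholder before zipping. Contents are illustrative.
cat > Dockerrun.aws.json <<'EOF'
{"containerDefinitions": [{"name": "rails-app", "image": "<APP_IMAGE>"}]}
EOF
BUILD_IMAGE="registry.gitlab.com/myorg/myapp:1a2b3c4d"
# '|' as the sed delimiter avoids clashing with '/' in the image path.
sed -i "s|<APP_IMAGE>|${BUILD_IMAGE}|" Dockerrun.aws.json
cat Dockerrun.aws.json
```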
Package Up
Now that we’ve configured Elastic Beanstalk, NGINX, and Puma, we need to zip it all up.
zip -r deploy.zip Dockerrun.aws.json config/nginx/
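If `zip` isn't available (as on some minimal CI images), Python's zipfile module can build an equivalent package. A self-contained sketch with stub files standing in for the real ones:

```shell
# Stub files matching the source package layout; contents are
# placeholders for your real Dockerrun.aws.json and NGINX config.
mkdir -p config/nginx/conf.d
echo '{"AWSEBDockerrunVersion": 2}' > Dockerrun.aws.json
echo 'server { listen 80; }' > config/nginx/conf.d/default.conf
# Roughly equivalent to: zip -r deploy.zip Dockerrun.aws.json config/nginx/
python3 -m zipfile -c deploy.zip Dockerrun.aws.json config/
python3 -m zipfile -l deploy.zip
```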
Create an Elastic Beanstalk Application
Instead of deploying the sample app, you can upload the deploy.zip we created when packaging our source package.
Configure database.yml
Change the production section of your database.yml to look like the following:
production:
<<: *default
database: <%= ENV['RDS_DB_NAME'] %>
host: <%= ENV['RDS_HOSTNAME'] %>
port: <%= ENV['RDS_PORT'] %>
username: <%= ENV['RDS_USERNAME'] %>
password: <%= ENV['RDS_PASSWORD'] %>
When you create a database through Elastic Beanstalk, AWS exposes the environment variables needed to connect.
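To make the mapping concrete, here are the variable names with illustrative values and the connection they resolve to (the values are made up; Elastic Beanstalk injects the real ones when an RDS instance is attached to the environment):

```shell
# Illustrative values only — Elastic Beanstalk sets the real ones.
export RDS_DB_NAME=myapp_production
export RDS_HOSTNAME=mydb.abcdefghij.us-east-1.rds.amazonaws.com
export RDS_PORT=5432
export RDS_USERNAME=deploy
export RDS_PASSWORD=secret
# database.yml assembles the same connection via ERB; expressed as a URL:
echo "postgres://${RDS_USERNAME}@${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}"
```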
Configure GitLab CI
# .gitlab-ci.yml
image: ruby:2.6.3-stretch
stages:
- build
- test
- release
- deploy
variables:
BUILD_IMAGE: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA}
EB_VERSION_LABEL: ${CI_PROJECT_NAME}-${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHA}
EB_APP_NAME: "<Insert Elastic Beanstalk Application Name>"
EB_APP_ENV: "<Insert Elastic Beanstalk Environment Name>"
EB_PKG_BUCKET: "<Insert S3 Bucket>"
EB_PKG_KEY: ${CI_PROJECT_NAME}-${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHA}.zip
.aws-cli:
before_script:
- pip3 --version
- python3 --version
- echo "Installing AWS CLI"
- pip3 install awscli --upgrade
- aws --version
build-container:
stage: build
image: docker:latest
services:
- docker:dind
script:
- echo "Building Docker Image $BUILD_IMAGE"
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
- docker build -t $BUILD_IMAGE .
- docker push $BUILD_IMAGE
rspec:
stage: test
services:
- postgres:latest
variables:
POSTGRES_HOST: postgres
POSTGRES_USERNAME: postgres
POSTGRES_PASSWORD: postgres
RAILS_ENV: test
before_script:
- gem install bundler
- bundle install
script:
- bundle exec rspec --color --tty
rubocop:
stage: test
image: ${BUILD_IMAGE}
before_script:
- gem install bundler
- bundle install
script:
- bundle exec rubocop
create-application-version:
image: python:3-alpine
stage: release
extends: .aws-cli
script:
- apk update
- apk add zip
- zip -r deploy.zip Dockerrun.aws.json config/nginx/
- echo "Creating Application Version '${EB_VERSION_LABEL}' for '${EB_APP_NAME}'"
- aws s3 cp deploy.zip "s3://$EB_PKG_BUCKET/$EB_PKG_KEY"
- aws elasticbeanstalk create-application-version --application-name "${EB_APP_NAME}" --version-label $EB_VERSION_LABEL --description "$CI_COMMIT_TITLE" --source-bundle S3Bucket="$EB_PKG_BUCKET",S3Key="$EB_PKG_KEY"
only:
- master
update-application-version:
image: python:3-alpine
stage: deploy
extends: .aws-cli
environment:
name: production
script:
- echo "Updating '${EB_APP_NAME}/${EB_APP_ENV}' to $EB_VERSION_LABEL"
- aws elasticbeanstalk update-environment --application-name "${EB_APP_NAME}" --environment-name $EB_APP_ENV --version-label $EB_VERSION_LABEL
We’ve added two new stages to our CI: release and deploy. In the release stage we upload our source package and create an application version. In the deploy stage we tell our environment to update to the new version.
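To see how the version label and S3 key line up, here is how the variables resolve for an example pipeline run (the CI_* values are illustrative; GitLab sets the real ones automatically):

```shell
# Illustrative values for GitLab's predefined variables:
CI_PROJECT_NAME=myapp
CI_COMMIT_REF_SLUG=master
CI_COMMIT_SHA=1a2b3c4d
# EB_VERSION_LABEL and EB_PKG_KEY from .gitlab-ci.yml then resolve to:
EB_VERSION_LABEL="${CI_PROJECT_NAME}-${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHA}"
EB_PKG_KEY="${EB_VERSION_LABEL}.zip"
echo "$EB_VERSION_LABEL"  # → myapp-master-1a2b3c4d
echo "$EB_PKG_KEY"        # → myapp-master-1a2b3c4d.zip
```

Because the label and the S3 key share the same prefix, any application version in Elastic Beanstalk can be traced straight back to the commit and bundle that produced it.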
Our pipeline now runs four stages: build, test, release, and deploy.
Gotchas
When deploying you need to ensure you’ve got all the proper env variables set in Elastic Beanstalk as well as GitLab CI.
Make sure the user deploying has the proper permissions to make changes in Elastic Beanstalk.
This setup doesn’t run migrations; any pending database migrations will need to be triggered manually (for example, by SSH-ing into an instance and running bundle exec rails db:migrate inside the rails-app container).