Elixir deployments on AWS

If you spend any time around the Elixir community, it won’t be long before you come across mention of the state of deployments. As things stand at the start of 2017, they are a little on the rough side:

My biggest issue with Elixir is that it’s hard to set up a proper CI/Deployment pipeline. It CAN be done and I’ve done it but nothing “just works” like it does with Node. Try dockerizing a phoenix app or setting up a heroku instance to see what I mean. All of these things work but it’s an obscure language so one needs to be more advanced to understand and address problems in that area.

There’s been some great work around this in the community, with projects such as Edeliver, Exrm, or Exrm’s more recent replacement Distillery, but for developers used to running a simple git push heroku master there are still many areas which can be painful to overcome.

TL;DR: Here’s a recipe for deploying Elixir. Even if the recipe isn’t relevant to you, please fill in our quick survey, as we want to better understand where people struggle with Elixir deployment and how we can best extend this recipe in the future.

Your existing deployment processes might not work

While it’s possible to deploy Elixir apps to Heroku, or using a tool such as Docker, each of these brings limitations that rule out some of the VM’s features, features which may have been key reasons for choosing Elixir in the first place. Hot code swaps for zero-downtime deployments, or the networking of nodes on separate machines for an HA deployment, may be ruled out from the start:

Hot code reload

Erlang and Elixir allow us to swap out old code for new at run time while persisting state. For many systems this adds unnecessary complexity, but it’s valuable if you want to deploy new code while keeping your application running, maintaining state and allowing zero-downtime deployments.

  • Not an option if using Docker; deployments will typically swap the currently running container for one running updated code. Your users might not experience any downtime, but the codebase will be swapped out in its entirety, and state will not be carried over to the new container.
  • Not an option on Heroku; Heroku will restart dynos as part of deployment, again losing state.
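With a Distillery release running on a plain VM, by contrast, hot upgrades are supported out of the box. As a hedged sketch (the release name and version here are illustrative, and assume Distillery 1.x):

```shell
# On the build machine: build an upgrade release rather than a full one
MIX_ENV=prod mix release --upgrade

# On the server, after shipping the new release tarball across:
# swaps code in the running VM, keeping process state, no restart
bin/my_app upgrade "0.2.0"
```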

Distributed clustering

If running an app across multiple servers, you may need to make your nodes aware of each other (e.g. if using Phoenix channels, you have no guarantee that WebSocket and HTTP(S) requests will hit the same server).

  • Tools such as Kubernetes may help with this if using Docker.
  • Not an option on Heroku; dynos are firewalled off from one another, ruling this one out.
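On EC2 instances, nodes can reach each other directly. As a minimal sketch (the node names, address, and cookie here are all illustrative), Distillery’s generated rel/vm.args is where you give each node a name and a shared cookie so they can form a cluster:

```
## rel/vm.args: name this node and set the cookie shared across the cluster
-name my_app@10.0.0.1
-setcookie my_secret_cookie
```

With that in place, nodes can be joined at runtime with Node.connect(:"my_app@10.0.0.2"), or automatically via a library such as libcluster.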

Process-based persistence

Erlang offers us tools such as Mnesia and ETS for in-memory data storage, so there may be no need to hit the database, or to introduce a dependency such as Redis, if you can resist the temptation.

  • With Docker, this is possible, but no persistence through deployments.
  • Not an option on Heroku; even without restarts for deployments, Heroku dynos will restart daily.

The above won’t be issues for every application (for instance, at time of writing hex.pm runs on Heroku, and the latest Elixir Users’ Survey indicates that plenty are using these tools), but if you’re looking to take advantage of everything the ecosystem has to offer, then these tools might not make the grade.

Here at Mint we’ve been looking into various strategies for resolving these issues, and have settled on a solution using Distillery for builds, with various services under the AWS umbrella for hosting and executing deployments.

Deployments to AWS using Distillery

AWS gives much greater control over setup than Heroku (the main tradeoff being that you have to manage more of this setup yourself), and so allows us to set up hot code swaps, clustering with auto-scaling, and anything else the ecosystem has to offer.

We’ve been working towards a generalised solution for deploying Elixir apps in this fashion, which I’ll run through below. We’ve not tackled any of the more advanced features referenced above in this outline, but we’re keen to get feedback on which, if any, are most important to members of the community.

Launching web stack on AWS

We’ve used CloudFormation to automate the setup of a fairly typical web stack on AWS. You can get the JSON for this here, but the headline features are:

  • 2 EC2 instances (size configurable pre-launch)
  • Application load balancer
  • Postgres RDS instance
  • CodeDeploy for deployment to instances
  • S3 bucket setup for storage of encrypted secrets
  • Various networking bits & pieces (http through port 3333 to load balancer, ssh from configurable IP to instances using public/private key)
  • Currently this template requires you to launch the stack in the us-east-1 region.
An easy-to-understand diagram

You can launch this stack for yourself with the handy link below. (Going through this flow will create resources under your account; if you don’t want to keep paying for them, you’ll want to shut them down at the end, and we’ve included steps for doing so at the end of this post.)

You’ll need to sign in as a user with permissions to create and manage stacks. Here’s a sample IAM policy granting suitable permissions (the action list below is illustrative; in practice the user needs access to each of the services the stack creates):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "rds:*",
        "s3:*",
        "codedeploy:*",
        "sns:*",
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}
We require you to set some of the values used in your stack pre-launch. These include the database name, username, and password; an SSH key and IP address for whitelisting SSH access to instances; and an email address for SNS notifications about the stack.

Once you’ve set your params, click through the rest of the flow, and wait for
AWS to create all of your resources. We’ve set the stack’s Outputs to show the values you’ll need as you continue through the steps below:

Outputs can be seen in the details pane for your created stack

Preparing secrets

Once our stack is ready, we need to upload our secrets to the created S3 bucket. There are many ways we could handle confidential values, but making use of S3 for this seemed a simple way to get going.

In the template we’ve set the bucket policy to reject uploads unless they are encrypted at rest and in transit, and originate from another resource in the stack’s VPC.

As we’ve whitelisted ourselves for SSH access to our EC2 instances, we can get secrets into the bucket (and later retrieve them) by making calls from those instances. Take the hostname for one of your instances from the stack outputs, and connect as the console user with the key you set when initialising the stack. Once in, create a file at /tmp/creds.txt:

$ ssh console@your-ec2-hostname-here
console@ip-address:$ vim /tmp/creds.txt

The contents of the file should look something like the following, with the database details coming from your new RDS instance (the user, password and database name come from the params you set earlier, whereas the host can be seen in the stack’s outputs.)
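As a hedged illustration (every value below is a placeholder, and the exact variable names should match whatever your production config reads from the environment), the file holds one KEY=value pair per line:

```
# Generate a value for SECRET_KEY_BASE with: mix phoenix.gen.secret
SECRET_KEY_BASE=some-long-random-string
DATABASE_URL=ecto://myuser:mypassword@your-db-host.us-east-1.rds.amazonaws.com:5432/my_db
POOL_SIZE=10
PORT=3333
```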


Once you have this, you should be able to upload the file into your new S3 bucket (again, the name is in the stack outputs) using the AWS cli:

console@ip-address:$ cd /tmp
console@ip-address:$ aws s3 cp creds.txt s3://name-of-bucket --region us-east-1 --sse

Configuring app for build with Distillery

Next we need to prepare our app for deployment. We’ve setup an example Phoenix application you can try this out with here, or you can try the steps below with your own application. Our example is a standard Phoenix install, with changes only to get the deployment working (you can view the ~250 line diff here).

1. Add/configure distillery

First add distillery to your app’s dependencies:

defp deps do
-   {:cowboy, "~> 1.0"}]
+   {:cowboy, "~> 1.0"},
+   {:distillery, "~> 1.1"}]

Fetch your updated dependencies, and initialise distillery with mix do deps.get, release.init, then update your production config for running
a release:

config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "example.com", port: 80],
- cache_static_manifest: "priv/static/manifest.json"
+ cache_static_manifest: "priv/static/manifest.json",
+ server: true,
+ root: ".",
+ version: Mix.Project.config[:version]

You can read more about these configuration options in the online Distillery documentation.
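For reference, mix release.init generates a rel/config.exs describing each release environment; the production section looks something like the following (the cookie shown is a placeholder you should replace with your own):

```
environment :prod do
  set include_erts: true
  set include_src: false
  set cookie: :my_secret_cookie
end
```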

2. Update production config to retrieve secrets from environment

Our secrets are safely stored in a text file on S3, which we will pull down and set in the environment before compilation. We need to update the production config to retrieve these values instead of using the prod.secret.exs file:

-# Finally import the config/prod.secret.exs
-# which should be versioned separately.
-import_config "prod.secret.exs"
+config :my_app, MyApp.Endpoint,
+  secret_key_base: System.get_env("SECRET_KEY_BASE")
+# Configure your database
+config :my_app, MyApp.Repo,
+  adapter: Ecto.Adapters.Postgres,
+  url: System.get_env("DATABASE_URL"),
+  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
+  ssl: true

3. Add scripts to start, stop, compile and verify the app

These shell scripts are mostly just wrappers around the Distillery CLI. The compilation script will need some edits, updating the name (shown in your stack outputs) and region of the bucket where your app secrets will be stored:

### Update these values for your own S3 bucket ###
### Don't update below this line ###
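The key step in the compile script is pulling creds.txt down from S3 and loading it into the environment before the release is built. The core of that trick (file contents and path here are illustrative; in the real script the file arrives via aws s3 cp rather than a heredoc) is sourcing the file with automatic exporting turned on:

```shell
# Simulate the secrets file that `aws s3 cp` would pull down from the bucket
cat > /tmp/creds.txt <<'EOF'
SECRET_KEY_BASE=some-long-random-string
DATABASE_URL=ecto://user:password@dbhost:5432/my_app_prod
EOF

# `set -a` exports every variable assigned while sourcing, so a
# subsequent `MIX_ENV=prod mix release` can read them from the environment
set -a
source /tmp/creds.txt
set +a

echo "$DATABASE_URL"
```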

4. Configure CodeDeploy

Finally we’ll need to add an appspec.yml in the project root. This configures CodeDeploy, which we will use to call each of our scripts at the right point in a deployment:

version: 0.0
os: linux
hooks:
  ApplicationStop:
    - location: scripts/stop.sh
      runas: owner
  AfterInstall:
    - location: scripts/compile.sh
      runas: root
  ApplicationStart:
    - location: scripts/start.sh
      runas: owner
  ValidateService:
    - location: scripts/verify.sh
      runas: owner


With everything in place, we are ready to deploy our application. Heading to the AWS console, under CodeDeploy we should see the application and deployment group created in our stack. We can now deploy to the instances by creating a new deployment with our app’s GitHub repo as its source:

Click through to your CodeDeploy application
Select your deployment group, then select Deploy new revision
Enter your source details, then deploy!

Assuming this is successful, you can view your running application by hitting the URL for your load balancer (shown in your stack outputs) in your browser.

Clean up

If you want to get rid of these resources, rather than keep paying for them, you’ll need to perform these steps:

  • Delete any files from the created S3 bucket. You’ll need to do this from a resource in the VPC, such as an instance. aws s3 rm s3://name-of-bucket/creds.txt --region us-east-1 should do it.
  • Disable termination protection for each of the EC2 instances.
  • Disable termination protection for your load balancer.
  • Once you’ve done this you should be able to delete the stack from the CloudFormation index page.

Next steps

While this is clearly a V1, we think it’s a straightforward, repeatable method for deploying an Elixir application to a solid stack. We’ve not yet tackled any of the more advanced issues referenced earlier in the post, but we have a clean base on which to build.

Next we plan to look at clustering (with auto-scaling?), hot code swaps, or simplified deploy triggers, but we’re keen to hear from the community which of these features (if any!) are most important when planning a deployment.

We’ve put together a short (3 questions) survey around this. Are you deploying Elixir? If so, we’d love to hear from you, please help us out by filling it in!