Incrementally moving to Elixir

Moving from a Rails monolith to Elixir services with queues

Ville
4 min read · Dec 22, 2017

Like many teams working in a legacy Ruby system while learning Elixir, we struggled to figure out how to start moving our code over. Here I want to share a technique we used a year ago to start the process.

Background

At Fitzdares (a bookmaker) we have an ageing monolith of a system, written in Rails over seven or so years by various teams and contractors. It's starting to show its age and, as is the case with many monoliths, its responsibilities have grown to include everything imaginable.

We decided we would focus our efforts on improving the code quality and performance of the system's core responsibilities, which for us are bet capture and settlement.

At the same time we would break out parts of the system that could and should live outside of the monolith.

Finding a candidate context

One such part was the task of processing one of our busier price data feeds.

The data feed is a push feed: very noisy, requiring simple filtering to determine whether the data is of interest to us, and spiky in traffic, regularly going from ~30 updates per minute to 2,000+ for a few minutes and then back down.
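To give a feel for the kind of filtering involved, here's a minimal Ruby sketch. The field name and market ids are made up; the real checks are equally simple membership-style tests.

```ruby
# Illustrative sketch of the "simple filtering": most updates are for
# markets we don't carry, so a cheap membership check discards them.
# The ids and the "market_id" field name are hypothetical.
TRACKED_MARKET_IDS = [482, 731, 909].freeze

def interesting?(update)
  TRACKED_MARKET_IDS.include?(update["market_id"])
end
```

In the real system this check runs on every single update, so keeping it this cheap matters during the traffic spikes.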

The old solution used the daemons gem to spin up the full Rails environment, subscribe to the feed, do the filtering, construct an ActiveRecord object and save it to the database.

It also ran co-located on a beast of a server with a stack of other daemons and tasks.

Finally, we had quite a few stability issues with the processing and with resource contention on the server.

Scoping out the work

Having found a great candidate in need of attention, and one with a set of properties that made it a great use case for Elixir, we were ecstatic!

We started picking the old code apart to figure out the size of the task.

What we found was stacks of ActiveRecord callbacks, automatic updates to related models, ActiveRecord Observers and more 💔

The size of the task kept growing with each part we opened up, to the point where it was fast becoming a major undertaking that none of us felt comfortable estimating accurately.

Incremental migration

Most of the code up to the point of calling create on the record was of reasonable size and complexity.

We decided we would write a new Elixir service to subscribe to the data feeds and to process and filter the prices. Persisting to the database would be handed over to the old code.

Essentially, we pulled all the code before the call to create out of the old codebase, and wrote the new Elixir service to produce a JSON object carrying the same values the ActiveRecord object would have had.
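The handover contract is easiest to see as data. A minimal Ruby sketch, with the model and field names made up (our real records carry far more attributes):

```ruby
require "json"

# The new Elixir service emits a JSON object whose keys mirror the
# attributes the old Ruby code would have set on the ActiveRecord
# object just before calling create. All names here are illustrative.
payload = {
  "market_id" => 482,
  "selection" => "Home Win",
  "price"     => 2.35,
  "source"    => "push_feed"
}

json = JSON.generate(payload)

# On the monolith side, the eventual handover is then a one-liner,
# e.g. Price.create!(JSON.parse(json)) for a hypothetical Price model.
attrs = JSON.parse(json)
```

Because the payload matches the model's attributes one-to-one, neither side needs to know anything about the other beyond this shape.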

Gluing the old and the new together

This left us with a choice of how to do the handover from our new service to the old.

The first option was to create a new HTTP API endpoint in the monolith, or to use something like gRPC, but both suffered from the same problem: the core could still be brought to its knees when spikes in activity happened.

We could have throttled sending the updates from the new service, but that didn't really feel like the responsibility of the new service either.

As we already make extensive use of Sidekiq for all of our background processing, we decided to use that instead.

Bringing it all together

Our new Elixir service receives updates and spins up a new process for each one; the majority of these processes we let die away as part of the filtering.

For the ones that survive we apply the secret sauce and then, using Andrea Leopardi's lovely Redix Redis client, create a job in a new Sidekiq queue with the processed update as the payload.
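This works because Sidekiq stores jobs as plain JSON hashes in Redis lists, so any language with a Redis client can enqueue work for it. Here's a hedged Ruby sketch of the wire format; the worker and queue names are made up, and Sidekiq's own job-format documentation is the authority on the exact fields:

```ruby
require "json"
require "securerandom"

processed_update = { "market_id" => 482, "price" => 2.35 }  # illustrative

# A Sidekiq job is a JSON hash pushed onto the Redis list
# "queue:<name>", with the queue name registered in the "queues" set.
# From Elixir with Redix, the equivalent is roughly:
#   Redix.command!(conn, ["SADD", "queues", "price_updates"])
#   Redix.command!(conn, ["LPUSH", "queue:price_updates", job_json])
job = {
  "class"       => "PriceUpdateWorker",  # worker class in the monolith
  "queue"       => "price_updates",
  "args"        => [processed_update],
  "jid"         => SecureRandom.hex(12), # 24-char hex job id
  "retry"       => true,
  "created_at"  => Time.now.to_f,
  "enqueued_at" => Time.now.to_f
}
job_json = JSON.generate(job)
```

From Sidekiq's point of view this job is indistinguishable from one enqueued by Ruby code, which is what lets the monolith stay completely unaware that the producer is now written in Elixir.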

Back in the land of the monolith, we created a new Sidekiq worker to pick jobs off this new queue and simply take the payload, map it to an ActiveRecord model and save it, firing all the callbacks, observers and everything else.
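The worker itself stays deliberately dumb. A sketch, assuming a hypothetical `PriceUpdateWorker` and `Price` model (the `defined?` guards just let the snippet load outside a Rails app):

```ruby
class PriceUpdateWorker
  # In the real app this includes Sidekiq::Worker and points at the
  # queue the Elixir service pushes to; guarded so the sketch loads
  # standalone without the sidekiq gem.
  include Sidekiq::Worker if defined?(Sidekiq::Worker)
  sidekiq_options queue: "price_updates" if defined?(Sidekiq::Worker)

  def perform(payload)
    # Map the payload straight onto the model and save it, so every
    # existing callback and observer fires exactly as it did before.
    Price.create!(payload)
  end
end
```

Keeping all the "magic" on this side of the queue is deliberate: it means the Elixir service ships without having to replicate any of the callback behaviour on day one.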

This allowed us to use concurrency to filter out the noise from the feed, protect the core from the spiky traffic, and ship the new service in a timely, predictable manner leveraging existing infrastructure and setup.

Moving forward

The best part of this approach is that the story doesn't end there! It allowed us to put something in place quickly and resolve some of the early issues.

Now it was trivial for us to pick off each callback, observer action and other bit of magic that happened as part of saving the record, and move them across to the new service one by one as individual efforts, rather than needing to do them all in one big project.

For me it felt like establishing a solid base camp close to the foot of the mountain and mapping out the rest of the journey through the camps to conquer the mountain rather than attempting the climb in one big push.

We’ve had great success doing this and I hope it’s given you ideas on how you might start moving code across in smaller chunks.

Please click the 👏 icon below if you liked this post and follow me here or on Twitter @efexen to hear about new articles. If you haven’t already can I recommend checking out some of my other posts and let me know what you think 👍


Ville

Freelance Developer • Founder of @CodeHalf • Rubyist • Elixir Alchemist • Wannabe Data Scientist • Dad