Dirty Old Code — Part 2

Pierre Bails
Feb 4, 2019 · 3 min read

This is the second chapter in a three part series about how we moved our infrastructure and code from monolithic Ruby on Rails to a serverless environment.
If you missed part 1, read it first.

Clouds are drifting across the moon
— Ewan MacColl (Dirty Old Town)

As explained in part 1, we chose to migrate our infrastructure step by step: no further evolution of the legacy code, all new features developed on the new infrastructure, and the old code rewritten over time on the new side. That is how it works in theory; as mentioned at the end of the first post, before we began we needed to decide:

  • Which cloud provider to choose?
  • How to share data?
  • How to communicate between two totally different systems?
Communicating between a legacy and new infrastructure during migration: Just like Catdog.

Choosing the right cloud provider

There is now a multitude of cloud providers, and we selected AWS to host our services. AWS was an easy choice: we already had in-house expertise with it at Precogs, its community is the largest and best for support and, whisper it, we received a ton of free credits!

Sharing data between platforms

Our first actual move was to migrate our Ruby on Rails (RoR) application onto an AWS EC2 instance, with a managed database (RDS) plus S3 file storage. The main advantages of S3 are low cost, availability, ease of management, security and scalability. Using this service was a small revolution for us at the time: it was our first move to an automatically scaling architecture.
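The sharing pattern boils down to both sides agreeing on a bucket and a key scheme, then reading and writing JSON objects under those keys. Here is a minimal sketch of that convention; the in-memory hash stands in for the real S3 bucket, and all names (`SharedStore`, `orders/...`) are illustrative:

```ruby
require "json"

# Stand-in for the shared S3 bucket. In production these would be
# Aws::S3::Client put_object / get_object calls against one bucket
# reachable from both the RoR app and the Lambda functions.
class SharedStore
  def initialize
    @bucket = {}
  end

  # RoR side: serialise a record as JSON under an agreed key scheme.
  def put_order(order_id, payload)
    @bucket["orders/#{order_id}.json"] = JSON.generate(payload)
  end

  # Lambda side: read the same object back, no direct DB coupling.
  def get_order(order_id)
    raw = @bucket["orders/#{order_id}.json"]
    raw && JSON.parse(raw)
  end
end

store = SharedStore.new
store.put_order(42, { "status" => "processed", "total" => 99.9 })
p store.get_order(42)
```

The important part is not the storage call itself but the shared key scheme: as long as both systems honour it, either side can be rewritten without the other noticing.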

The schema below illustrates how RoR (running on servers) and Lambda microservices are sharing data.


Communication is the key

Next, how to synchronise old legacy code with a bunch of new serverless microservices by making only small changes? The main goal here is to trigger the right functions at the right time in both systems.

To achieve that, on one side RoR publishes to an SNS topic that triggers a Lambda function; on the other side, RoR watches for updates in the database and executes actions when changes appear.
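On the Lambda side, an SNS-triggered function receives each notification wrapped in a `Records` array, with the original payload under `Sns` → `Message`. A minimal Ruby handler for that event shape might look like this; the processing body is a placeholder for whatever work used to run inside RoR, and the field names are illustrative:

```ruby
require "json"

# Entry point for an SNS-triggered Lambda (Ruby runtime).
# SNS wraps each notification in event["Records"], with the
# publisher's payload as a JSON string under Sns -> Message.
def handler(event:, context:)
  event["Records"].map do |record|
    message = JSON.parse(record["Sns"]["Message"])
    # Placeholder for the real processing moved out of RoR.
    { "order_id" => message["order_id"], "status" => "processed" }
  end
end

# Sample invocation with a hand-built SNS event:
sample_event = {
  "Records" => [
    { "Sns" => { "Message" => JSON.generate({ "order_id" => 7 }) } }
  ]
}
p handler(event: sample_event, context: nil)
```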

Bonus: we implemented an SQS queue to push processed data back into RoR. The main advantage is that it is easy to decouple the pushing and processing actions across the two systems, simply by adding a new producer or consumer to the queue.
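The decoupling the queue buys can be illustrated with Ruby's thread-safe `Queue` standing in for SQS (in the real SDK these would be `send_message` and `receive_message` calls): the producer never knows who consumes, so either side can be swapped out independently. The message shape here is purely illustrative:

```ruby
# Thread-safe Queue stands in for the SQS queue between systems.
queue = Queue.new

# Producer (new serverless side): push processed data and move on,
# with no knowledge of what consumes it.
producer = Thread.new do
  3.times { |i| queue << { id: i, status: "processed" } }
end

# Consumer (RoR side): pull messages and apply them to the app,
# unaware of who produced them.
results = []
consumer = Thread.new do
  3.times { results << queue.pop }
end

[producer, consumer].each(&:join)
p results.map { |m| m[:id] }
```

Replacing the producer with a new microservice, or pointing a second consumer at the queue, requires no change on the other side, which is exactly what made the migration incremental.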


After a few modifications to your legacy code, such as switching database parameters and triggering notifications instead of running the process directly, your infrastructure is ready to welcome microservices, and your dirty old code is ready to be replaced!

Stay tuned for part 3, when I’ll share an example of one of the processes we moved from Ruby to Lambda.


Tech articles from the precogs team
