Migrating Two Chunks of Our Monolith, and 100 Billion Database Rows, to Microservices

Josh Rai
Tech @ Quizlet
Oct 26, 2021

Quizlet is not unique in experiencing growing pains within the confines of a monolithic application architecture. But there’s no simple or universal solution to that problem; the reality of scaling tends to be messy. In this recent talk (at Denver Startup Week 2021), Dan Sulfaro and I recount how and why our team extracted two key parts of our application into their own microservices, migrated a hundred billion rows of data, and kept serving thousands of requests per second along the way.

Presentation by Dan Sulfaro and Josh Rai for Quizlet at Denver Startup Week, October 7, 2021

Here’s a peek at some slides from the presentation:

One part of the problem we faced: scalability of data
Another part of the problem: scalability of development
Three “inconvenient truths” that we encountered repeatedly in our migration to microservices: (1) Code, the business and the team structure don’t always evolve in perfect stride; (2) Prior art is good to know, but ultimately you need to know your problem well and think for yourself; and (3) At sufficient scale, when downtime is not an option, migration can be a project unto itself, distinct from the final system you’re building.
Looking back at two separate migration efforts, we distilled the process we took into eight steps.
Here we’re describing the refactoring work to help us migrate code in our monolith to start using our new microservice, which also entailed switching to an improved data model.
Switching the order in which we were dual-writing data to our old and new datastores sounded simple — but wasn’t.
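As an illustration of the refactoring step, one common way to let monolith code start using a new service is to introduce a seam: call sites depend on a small interface with two interchangeable implementations, one backed by the old database and one by the new microservice. This is only a minimal sketch; the names (`StudySetStore`, `fetch_terms`, etc.) are hypothetical and not taken from our codebase.

```python
from abc import ABC, abstractmethod

class StudySetStore(ABC):
    """Hypothetical seam in the monolith: call sites depend on this
    interface instead of querying the database directly."""

    @abstractmethod
    def get_terms(self, set_id: int) -> list:
        ...

class LegacyDbStore(StudySetStore):
    """Original path: read rows straight from the monolith's database."""

    def __init__(self, db):
        self.db = db

    def get_terms(self, set_id):
        return self.db.query("SELECT * FROM terms WHERE set_id = %s", set_id)

class MicroserviceStore(StudySetStore):
    """New path: fetch from the extracted service, which owns the
    improved data model."""

    def __init__(self, client):
        self.client = client

    def get_terms(self, set_id):
        return self.client.fetch_terms(set_id)
```

With a seam like this, switching traffic to the microservice becomes a configuration or feature-flag decision rather than another round of edits to every call site.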
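The dual-write phase mentioned above can be sketched roughly like this: every write goes to both datastores, and a flag controls which one is written first. Flipping that flag is the "order switch" — the primary write must succeed for the request to succeed, while the secondary write is best-effort and reconciled later. This is a simplified sketch, not our actual implementation; `DualWriter`, `put`, and `new_is_primary` are hypothetical names.

```python
class DualWriter:
    """Hypothetical sketch of a dual-write layer during a datastore
    migration. `new_is_primary` controls write order: the primary write
    fails the request on error; the secondary write must not."""

    def __init__(self, old_store, new_store, new_is_primary=False):
        self.old = old_store
        self.new = new_store
        self.new_is_primary = new_is_primary

    def write(self, key, value):
        primary, secondary = (
            (self.new, self.old) if self.new_is_primary
            else (self.old, self.new)
        )
        primary.put(key, value)       # failure here propagates to the caller
        try:
            secondary.put(key, value) # best-effort; repaired by backfill
        except Exception:
            pass  # in practice: log and enqueue for later reconciliation
```

The subtlety hinted at in the slide is exactly in that flip: once the new store is primary, its failures become user-visible, and the old store silently drifts unless the repair path is solid.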
