Microservices have been a popular topic recently. People argue over whether you should start with microservices or evolve there from a single ‘monolithic’ app. Maybe in a well-designed app microservices are simply a deployment option?
It all seems rather academic when you have upward of 100,000 lines of (non-test) Ruby code glowering back at you. (And that’s before you start counting the gems!) You know you want to split it up, but how do you move on from there?
What follows is a case study from Yammer, in which we gradually moved our core messaging data store out of an ActiveRecord model into a microservice responsible solely for managing access to messages in Yammer. I’ll present some useful gems, point out some pitfalls that we encountered and how we tackled them, and hopefully show some techniques that may save you some time if you have to do the same. If nothing else it should give you the courage that you’re not stuck with your current architecture. If it’s important enough, you can escape!
Yammer has had a ‘service oriented architecture’ for many years. Today I can count nearly 200 services listed in our deployment tool. But back in the beginning there was one Ruby on Rails app, and even as most significant new functionality was created in new services, that Rails app has continued to grow.
Is that a problem? Not necessarily; some organisations make the monolith work, but here at Yammer it’s something we’re keen to move away from. It’s too special when compared with the other services we run.
- Deploys take too long, and deploy to too many servers.
- Too much function is ‘at risk’ with a single deploy — the surface area for testing is too great. Other services have a very tightly defined responsibility, which greatly limits the risk of a deploy.
- Coupling so much function leads to inefficiencies everywhere. Just loading all the gems (more than 200) makes unit tests slow. Why is my deploy held up for asset packaging when I only changed JSON API endpoints?
One piece we were particularly keen to extract from the monolith was the storage of message and thread objects. Yammer is a messaging platform. Yes, it’s a social network — how you find interesting conversations and contribute to the movement of information in your company is what makes Yammer messaging special — but it’s messages moving information around that make Yammer so valuable to our customers. With messages fiercely guarded by a highly complex ActiveRecord model backed by memcache and Postgres our hands were tied as we attempted to improve the scale, reliability and performance of our messaging systems. What we wanted was a service that provided a single source of truth for messages, that hid the complexities of sharding those across multiple data stores, that could be called directly from many services.
Our Rails app would still need free access to create, read, and modify messages, but we wanted a model that was backed by this new service. The Rails app would just be one of many equal consumers of the service. So how do we go from a model subclassing ActiveRecord to one that isn’t?
Hey! ActiveRecord! Let go of that!
Step 1: Acknowledge
ActiveRecord is an amazing tool and you’re probably using a lot more of its features than you realise. You’re going to have to write a lot of code now that you can’t lean on ActiveRecord. (We were really surprised how much time we ended up spending on this!)
Step 2: Reduce
Use less ActiveRecord. Anything we used we would have to re-implement, so we cut down on how much ActiveRecord we used.
- No Arel. ActiveRecord relations (built on top of Arel) expose the huge power of a relational database, but that’s not an interface we could provide in an HTTP web service with a future that doesn’t include a relational database. So we moved all model loading to explicit find_by_xxxx methods. For each of those finder methods we would create a matching HTTP endpoint in the messages service.
- No scopes. They’re just ActiveRecord Relations in disguise.
- Cut code that relies on transactions for rollback. The only way to roll back an HTTP request is to issue a request that reverses what you have done.
- Reduce use of callbacks. You can of course add callbacks to your model without using ActiveRecord. Just mix in ActiveModel::Callbacks. But many of the callbacks provided by ActiveRecord only really make sense when you’re thinking of a model that maps directly to a database table. For example before_commit no longer makes any sense when save and commit are done together with a single HTTP POST or PUT.
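The first point above can be sketched as follows. The class, endpoint path, and backend stub here are illustrative, not Yammer's actual code; the point is the shape — one named finder per query, each backed by exactly one HTTP endpoint:

```ruby
# A stand-in for the HTTP client; in production this would call the
# messages service and parse its JSON response.
class FakeBackend
  def get(_path)
    # Pretend the service returned two messages for this thread.
    [{ "id" => 1, "body" => "hello" }, { "id" => 2, "body" => "world" }]
  end
end

class Message
  attr_reader :attributes

  def initialize(attributes)
    @attributes = attributes
  end

  def self.backend
    @backend ||= FakeBackend.new
  end

  # Before: Message.where(thread_id: id).order(:id) -- an Arel relation
  # only a relational database can satisfy.
  # After: one explicit finder, matched by a single HTTP endpoint
  # (e.g. GET /threads/:thread_id/messages).
  def self.find_by_thread_id(thread_id)
    backend.get("/threads/#{thread_id}/messages").map { |attrs| new(attrs) }
  end
end
```

Each such finder gives the service a small, well-defined query surface, instead of the open-ended query interface a relation exposes.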
Step 3: Recreate
Start recreating the ActiveRecord features you do want to use.
Note that you don’t have to implement everything at once. Remember “favour composition over inheritance”? Do that refactoring! Our initial ActiveRecord-free Message model delegated a lot of function to an internal instance of MessageActiveRecord. Over time we gradually cut out all that delegation.
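That transitional shape looked roughly like this — a sketch, with a plain stub standing in for the real ActiveRecord class:

```ruby
require 'forwardable'

# A stub standing in for the old ActiveRecord-backed model.
class MessageActiveRecord
  def save
    true
  end

  def destroy
    true
  end
end

class Message
  extend Forwardable

  def initialize
    @record = MessageActiveRecord.new
  end

  # Methods not yet re-implemented are delegated to the internal
  # record; over time each delegator is deleted and replaced with
  # service-backed code.
  def_delegators :@record, :save, :destroy
end
```

Because the delegation is explicit, the remaining ActiveRecord dependency is visible as a shrinking list of `def_delegators`, rather than hidden behind inheritance.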
Attributes. ActiveRecord does a lot of work to map database columns to attributes on your model. You’ll need to replace that.
- Virtus is a popular option but doesn’t have change tracking (which we wanted).
- attr_accessor + ActiveModel::Dirty. The basic building blocks, but ActiveModel::Dirty leaves you with a fair bit of plumbing to do, and you don’t get any checking that what’s assigned to your attributes will make a sensible JSON payload to send to your service.
- When all else fails, make your own gem. We made ModelAttribute. So you don’t have to! It also handles efficient serialisation and deserialisation to/from JSON. Yammer loads a lot of messages, so performance here really mattered to us.
So our model started to look like this:
```ruby
class Message
  extend ModelAttribute

  attribute :id,           :integer
  attribute :body,         :string
  attribute :message_type, :integer
  attribute :references,   :json
  attribute :created_at,   :time

  # ...

  def save
    # ModelAttribute provides #changes and #changes_for_json
    return if changes.empty?
    # ...
  end
end
```
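To illustrate what that change tracking buys you, here is a minimal hand-rolled equivalent. This is not the gem's implementation — ModelAttribute also does type coercion and JSON serialisation — just the core idea:

```ruby
# Each attribute records [old, new] in a changes hash on assignment,
# mirroring the shape ActiveModel::Dirty (and ModelAttribute) use.
class TrackedModel
  def self.attribute(name)
    define_method(name) { @attributes[name] }
    define_method("#{name}=") do |value|
      @changes[name] = [@attributes[name], value] unless @attributes[name] == value
      @attributes[name] = value
    end
  end

  def initialize
    @attributes = {}
    @changes = {}
  end

  attr_reader :changes
end

class Draft < TrackedModel
  attribute :body
end
```

With the changes hash in hand, `save` can send only the changed attributes to the service, and skip the request entirely when nothing changed.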
Loading records from the database. We started by just delegating to ActiveRecord. But our new model was supposed to be loading JSON from a web service, so we rewrote the loading to request JSON, fetched over a direct database connection. The model-loading logic was then the same for both backends; only the source of that JSON differed.
Getting JSON directly from the database is really very easy using Postgres’ JSON functions:
```ruby
# The column list in the inner SELECT is abridged here; the real query
# names the message columns explicitly, converting timestamps to epoch
# milliseconds.
sql = "SELECT row_to_json(t) FROM (
         SELECT id, body, message_type,
                round(extract(epoch from created_at) * 1000) as created_at,
                round(extract(epoch from updated_at) * 1000) as updated_at
         FROM messages
         WHERE id = #{id}
       ) t;".gsub(/\s+/, ' ')
json = connection.select_values(sql)
```
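Whichever backend supplies it, the JSON can then be turned into a model the same way. A simplified stand-in (the class and attribute handling are illustrative; in our real model ModelAttribute does this work):

```ruby
require 'json'
require 'time'

# One construction path for JSON from either source: Postgres's
# row_to_json, or the messages service over HTTP.
class MessageRow
  attr_reader :id, :body, :created_at

  def self.from_json(json)
    new(JSON.parse(json))
  end

  def initialize(attrs)
    @id   = attrs["id"]
    @body = attrs["body"]
    # Timestamps arrive as epoch milliseconds, matching the SQL above.
    @created_at = Time.at(attrs["created_at"] / 1000.0) if attrs["created_at"]
  end
end
```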
Callbacks. ActiveModel::Callbacks for when you just can’t live without them. (Usually because we still wanted to use a library that relied on a callback.)
```ruby
class Message
  extend ActiveModel::Callbacks

  define_model_callbacks :save, :destroy, :commit

  def save
    return if changes.empty?
    run_callbacks :commit do
      run_callbacks :save do
        # persist the changes via the service
      end
      # To match ActiveRecord behaviour, after_save callbacks expect
      # to see a populated changes hash; after_commit callbacks don't.
    end
  end

  # Similarly for destroy
end
```
Caching. Maybe this is not a problem for you, but it was a big problem for us. Yammer’s Ruby monolith uses a fork of the RecordCache gem to cache database reads (similar to the more recent IdentityCache gem from Shopify). Without the cache Postgres just can’t keep up, even running on a monster of a DB server, kitted out with FusionIO cards. But RecordCache is deeply entwined with ActiveRecord, so we had to re-implement that. (Sadly this gem is based on a Yammer special caching gem, so we haven’t open-sourced it as it’s not going to be much use to anyone.)
(Side note — as we move function from our Ruby monolith into services we also want to move responsibility for caching out of the monolith. So we store cache entries not in the Ruby-specific Marshal format, but using a combination of JSON, MessagePack and Zlib compression, so that it can be read equally well from Java services.)
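A sketch of such a language-neutral cache codec, using only the standard library (our real format also layers in MessagePack, omitted here to keep the example dependency-free; the method names are illustrative):

```ruby
require 'json'
require 'zlib'

# JSON for structure, Zlib for size. Ruby's Marshal would be simpler
# to write but unreadable from a Java service.
def encode_cache_entry(attrs)
  Zlib::Deflate.deflate(JSON.generate(attrs))
end

def decode_cache_entry(bytes)
  JSON.parse(Zlib::Inflate.inflate(bytes))
end
```

Any consumer with a JSON parser and a Zlib implementation — which is to say, essentially every language — can now read the same cache entries.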
The rest. new_record? persisted? destroyed? primary_key to_param update_attribute… We wrote up a little module called ActiveRecordMimic that provided re-implementations of the little helper methods we still wanted, without cluttering our model with methods that have nothing to do with its domain responsibilities.
```ruby
module ActiveRecordMimic
  def new_record?
    id.nil? # illustrative; see the gist below for the real module
  end

  def persisted?
    !(new_record? || destroyed?)
  end

  # This allows us to use this object as a hash key, and for uniq'ing arrays
  def hash
    id.hash
  end
end
# See more in https://gist.github.com/dwaller/5474304cfea354a9701d
```
Step 4: Transition
Nearly there! We have a model now that doesn’t rely on ActiveRecord, but does still access Postgres directly from within a Rails monolith. Gradually we transitioned traffic to go via the new service instead, allowing us to tune performance, server provisioning, circuit breakers, network settings, etc. in a production environment.
(There are myriad ways of doing a gradual transition. We switched code paths based on user IDs (our own to start with!) before moving on to checking the last digit of the ID, allowing us to roll out 10% at a time. And we always had a kill-switch, so we could revert to the old code path at a moment’s notice.)
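That rollout check can be sketched as follows; the constants and method name are illustrative, not our actual implementation:

```ruby
# Pilot users get the new path first; then whole last-digit buckets,
# 10% of users per digit. The kill-switch trumps everything.
PILOT_USER_IDS = [1, 2, 3]
ROLLOUT_DIGITS = [0, 1]     # two digits enabled => ~20% of users
KILL_SWITCH    = false      # flip to true to revert instantly

def use_messages_service?(user_id)
  return false if KILL_SWITCH
  return true if PILOT_USER_IDS.include?(user_id)
  ROLLOUT_DIGITS.include?(user_id % 10)
end
```

Because the predicate is evaluated per request, widening the rollout (or killing it) needs only a config change, not a redeploy.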
So long, and thanks for all the records
It’s easy to underestimate how much ActiveRecord gives you. It may encourage you towards designs that you regret later, but it does so much heavy lifting. It’s only when you try to quit that it becomes clear how reliant on it you are!
This is not a blog post on application architecture. There are already a lot of those! The end result still follows the Active Record pattern, just loading the data from a service instead of the database. It’s not the end state for Yammer either, just a step along the way to a messaging pipeline that is better decomposed into services. With message data exposed directly through a simple, extremely fast HTTP service we have already started to use that data in other services, pushing towards a faster, more resilient messaging infrastructure for our users.
It took a lot of effort, but it was possible, and a valuable move. I hope that this case study gives you some of the tools and confidence to break away from ActiveRecord when the time is right for you!
David Waller is a senior engineer in the Yammer London office.