A simple distributed Mutex implementation for Ruby on Rails
At Seedrs we’re currently using the delayed_job gem to handle some asynchronous tasks. One of those tasks supplies content for the careers page via the Breezy API. The event that triggers this task occurs infrequently, but we needed to ensure that we never end up with stale data due to concurrency problems while receiving and storing it.
Since we’re dealing with concurrency between different processes, and the careers data isn’t stored in the database, we started looking for implementations of distributed locks.
There are many distributed lock implementations for Ruby, using a range of different backends such as Memcached, Redis or Apache ZooKeeper. Given the simple nature of the problem, we decided to focus only on Memcached-based solutions in order to keep our current architecture.
We’ve looked for implementations that satisfy the following requirements:
- Non-blocking: we can’t simply block a worker until it acquires the lock, because that could compromise the execution of other tasks under heavy load;
- Backend agnostic: we use FileStore instead of Memcached in our development environment, and we want to run the same code in all environments.
Although we found some implementations with a non-blocking mode, all of them talked directly to a specific Memcached client. In the best case this would have forced us to write an adapter for FileStore.
That’s when we decided to create the cache_guard gem.
With the non-blocking and backend-agnostic requirements in mind, we implemented a simple mutex that raises an error instead of blocking and uses the ActiveSupport::Cache::Store interface instead of talking directly to a Memcached client.
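The core idea can be sketched as follows. This is a simplified illustration rather than the gem’s actual source, and the class and method names here (`MemoryStore`, `AcquireError`, `guard`) are illustrative. The key trick is the cache store’s `unless_exist` write, which only succeeds when the key is absent, giving us the atomic test-and-set a lock needs. To keep the example self-contained, a tiny hash-backed store stands in for `Rails.cache`:

```ruby
# Minimal in-memory stand-in for ActiveSupport::Cache::Store, so the
# sketch runs outside Rails. In production this would be Rails.cache
# (Memcached in production, FileStore in development).
class MemoryStore
  def initialize
    @data = {}
  end

  # Mirrors Cache::Store#write with unless_exist: the write fails
  # (returns false) if the key is already present.
  def write(key, value, unless_exist: false)
    return false if unless_exist && @data.key?(key)
    @data[key] = value
    true
  end

  def delete(key)
    @data.delete(key)
  end
end

# Hypothetical sketch of a non-blocking, cache-backed mutex.
class CacheGuard
  AcquireError = Class.new(StandardError)

  def initialize(store, key)
    @store = store
    @key = key
  end

  # Non-blocking: raises instead of waiting for the lock to be freed.
  # The lock is always released, even if the block raises.
  def guard
    acquired = @store.write(@key, true, unless_exist: true)
    raise AcquireError, "lock #{@key} is already taken" unless acquired
    begin
      yield
    ensure
      @store.delete(@key)
    end
  end
end
```

Because every `ActiveSupport::Cache::Store` backend supports `write(..., unless_exist: true)`, this approach works unchanged against Memcached, FileStore, or any other store — which is exactly the backend-agnostic property we were after. A real implementation would also want an `expires_in` on the lock key so a crashed worker can’t hold the lock forever.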
This way we just need to place the logic that updates the careers data inside the guard method, as in the following example:
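The original post’s code snippet is not reproduced here; a hedged reconstruction might look like the following. The job class, lock key, and Breezy client calls are illustrative stand-ins, and the exact cache_guard API may differ:

```ruby
# Hypothetical delayed_job task guarding the careers update.
# CacheGuard, Breezy::Client and CareersStore are illustrative names,
# not necessarily the real interfaces.
class UpdateCareersJob
  def perform
    CacheGuard.new(Rails.cache, "careers:update").guard do
      positions = Breezy::Client.new.positions # fetch from the Breezy API
      CareersStore.write(positions)            # store outside the database
    end
  end
end
```

If the lock is taken, `guard` raises and the job fails fast instead of blocking the worker; delayed_job then reschedules it.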
Note that we’re relying on delayed_job’s retry mechanism to rescue the error and try to acquire the lock again.