A better mutex for Node.js

=> A high-performance locking library for high-concurrency access; up to 40 times faster than existing libraries that don’t even do what they promise.

Several mutex libraries in Node.js have failed me big time in the most basic way possible: they simply cannot handle concurrent requests. This makes no sense, because locks/mutexes exist for the very purpose of handling concurrent access to a resource. All of the existing libraries I have found use polling to attempt to acquire the lock; some even expect the user to implement the polling mechanism themselves. Polling leads to extreme performance degradation.

I didn’t really know what to do in the face of this disappointing and even alarming news, so I decided to implement something of my own to see if I could do better; of course something evented, as opposed to polling, was the way to go. I decided that using TCP was better than using filesystem locking, at the very least so you don’t litter the filesystem, and so live-mutex was born.
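To make the evented-versus-polling distinction concrete, here is a minimal in-process sketch of the evented approach (this is an illustration, not live-mutex’s actual networked implementation): waiters register a callback and are woken in FIFO order when the lock frees up, so nobody ever polls.

```javascript
// Minimal evented (non-polling) mutex sketch: waiters queue up and are
// handed the lock in FIFO order when the current holder releases it.
class EventedMutex {
  constructor() {
    this.locked = false;
    this.waiters = []; // FIFO queue of pending resolve callbacks
  }
  lock() {
    return new Promise(resolve => {
      const release = () => {
        const next = this.waiters.shift();
        if (next) next();          // wake the next waiter immediately
        else this.locked = false;  // nobody waiting; lock is free
      };
      if (!this.locked) {
        this.locked = true;
        resolve(release);
      } else {
        this.waiters.push(() => resolve(release));
      }
    });
  }
}

// Usage: 100 concurrent tasks increment a counter inside the critical section.
async function main() {
  const m = new EventedMutex();
  let count = 0;
  await Promise.all(Array.from({length: 100}, async () => {
    const unlock = await m.lock();
    const seen = count;
    await new Promise(r => setImmediate(r)); // yield inside the critical section
    count = seen + 1; // safe: no other task entered its critical section
    unlock();
  }));
  console.log(count); // 100 with the lock; lost updates without it
}
main();
```

The point is that releasing the lock wakes the next waiter on the very next tick, with zero retry loops and zero tunable `wait`/`retries`/`stale` knobs.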

There are at least 3 reasons to use live-mutex:

1. You can’t use Redis or a similar in-memory database, for whatever reason.

2. You want a higher-performance, polling-free implementation for high-concurrency access.

3. You want a locking mechanism that is not only multi-process but multi-machine. (Redis can provide this also, but it won’t satisfy #1 or #2.)

Even if you can use Redis, I will demonstrate that live-mutex has higher performance than the libraries I have tried. Feel free to prove me wrong — I’d love to discover a better locking library for Redis than the ones I have tried so far.

If you don’t know the strict definition of a mutex, it is short for “mutual exclusion”:

In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions; it is the requirement that one thread of execution never enter its critical section at the same time that another concurrent thread of execution enters its own critical section.
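Node.js has no threads racing over shared memory, but the same hazard appears whenever a read-modify-write spans an `await` or a callback: another “thread of execution” can run its own critical section in between. A contrived sketch of the race a mutex prevents:

```javascript
// A lost-update race in plain async JavaScript: two tasks read a balance,
// yield to the event loop, then write back, so one update is silently lost.
let balance = 100;

async function withdraw(amount) {
  const seen = balance;                    // read
  await new Promise(r => setImmediate(r)); // yield: the other task runs here
  balance = seen - amount;                 // write back a stale value
}

Promise.all([withdraw(30), withdraw(30)]).then(() => {
  console.log(balance); // prints 70, not 40: one withdrawal was lost
});
```

With a mutex around the read-modify-write, the two withdrawals would serialize and the final balance would be 40.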

So here I am writing a library, and I needed a locking mechanism, a “mutex”, during the npm postinstall phase. Normally I would use Redis for this, but this is a library, not an application, and I can’t expect the users of my library to have Redis installed and running. I tried some existing mutex libraries, notably lockfile and warlock, and surprisingly they did not work as expected at all. We would expect locking libraries to be able to handle concurrent requests; that is the whole point of a locking mechanism. If requests were not concurrent, we wouldn’t need a mutex at all.

Here we can see lockfile in action, making 100 parallel requests to obtain a lock:

const path = require('path');
const async = require('async');
const lf = require('lockfile');

const a = Array.apply(null, {length: 100});
const file = path.resolve(process.env.HOME + '/speed-test.lock');
const start = Date.now();

async.each(a, function (val, cb) {

  lf.lock(file, {wait: 3000, retries: 5, stale: 50}, (err) => {
    err ? cb(err) : lf.unlock(file, cb);
  });

}, function (err) {

  if (err) {
    throw err;
  }

  console.log(' => Time required => ', Date.now() - start);
  process.exit(0);

});

If you run that script, it should take upwards of 3000 milliseconds to complete only 100 lock/unlock cycles! If you can tune it to perform better by modifying the wait, retries and stale options, be my guest. So lockfile clocks in at about 3000 milliseconds per 100 cycles, which is 30 milliseconds per cycle. Not as horrific as warlock will turn out to be, but not too great, and it took some tuning to get it to work at all; lockfile seems to work only statistically, and may still fail at some point.

Next up we have warlock (you need to have Redis installed to use this library; you can check that Redis is installed by issuing $ redis-cli at the command line).

Here we can see warlock in action, making 100 parallel requests to obtain a lock:
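The embedded gist is not reproduced here, but the warlock version mirrors the lockfile test above; a sketch of what it looks like, assuming node-redis-warlock’s API (`warlock.optimistic(key, ttl, maxAttempts, wait, cb)`) and a local Redis server:

```javascript
const async = require('async');
const Warlock = require('node-redis-warlock');
const redis = require('redis').createClient();
const warlock = Warlock(redis);

const a = Array.apply(null, {length: 100});
const start = Date.now();

async.each(a, function (val, cb) {
  // ttl = 3000ms, up to 5 attempts, waiting 50ms between attempts
  warlock.optimistic('speed-test', 3000, 5, 50, function (err, unlock) {
    if (err) return cb(err);
    unlock();
    cb();
  });
}, function (err) {
  if (err) throw err;
  console.log(' => Time required => ', Date.now() - start);
  process.exit(0);
});
```

Note that `warlock.optimistic` is itself a polling call: the `maxAttempts` and `wait` parameters are retry knobs, which is exactly the problem described above.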

If you run this warlock example, it will take FOREVER! And warlock.optimistic is not even the default behavior: if you use warlock.lock, the standard call, it won’t work at all and will error out immediately.

Most locking libraries in Node.js land use Redis or something similar, and since I couldn’t use Redis at all, and lockfile was clearly not working reliably or performantly, I wrote live-mutex.

The live-mutex API is very simple and standard and just looks like this:

const client = new Client(opts, function () {
  client.lock('<key>', function (err, unlock) {
    // ...critical section goes here...
    unlock(function (err) {
      // the lock has now been released
    });
  });
});
Here is Live-Mutex processing 100 parallel lock requests:
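Again the gist is not reproduced here, but combining the lockfile test harness above with the live-mutex API gives roughly the following sketch (assuming live-mutex also exports a Broker that accepts the same callback convention as Client; a broker must be listening before clients connect):

```javascript
const async = require('async');
const {Broker, Client} = require('live-mutex'); // assumed exports

const a = Array.apply(null, {length: 100});

new Broker({port: 6970}, function () {
  const client = new Client({port: 6970}, function () {
    const start = Date.now();
    async.each(a, function (val, cb) {
      client.lock('speed-test', function (err, unlock) {
        err ? cb(err) : unlock(cb);
      });
    }, function (err) {
      if (err) throw err;
      console.log(' => Time required => ', Date.now() - start);
      process.exit(0);
    });
  });
});
```

Notice there are no wait/retry/stale knobs to tune: acquisition is evented, so the broker simply grants the lock to the next waiter the moment it is released.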

Where lockfile and warlock took upwards of 3000 milliseconds to process 100 concurrent lock/unlock cycles, live-mutex took only 80 milliseconds! As you increase the number of concurrent cycles (for example, 300 instead of 100), performance actually improves, processing 300 cycles in less than 150 milliseconds. That’s 2 lock/unlock cycles per millisecond. Pretty damn good for a networked mutex.

That makes live-mutex more than 30 times faster than both libraries for concurrent access. Furthermore, live-mutex requires much less fine-tuning than either lockfile or warlock, and is far more likely to work right out of the box for your use case. And the performance and usability of the live-mutex library will only get better from here, because I have quite literally just officially released it with this publication.

However, note that if you change the above examples from async.each to async.eachSeries, you will see that lockfile and warlock both outperform live-mutex 10-fold. Why is that? Why do lockfile and warlock outperform live-mutex for serial requests, but perform so poorly compared to live-mutex for parallel requests? The former is most likely because they run on C/Lua in Redis and C on the filesystem, whereas live-mutex uses Node.js for both broker and client (although uWebSockets, the websocket implementation live-mutex uses, is written in C). The latter is because their implementations handle contention badly. The fact that they are faster for serial requests is practically meaningless: if you are doing serial access to a resource, you don’t even need a mutex!
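The async.each versus async.eachSeries distinction is just parallel versus serial scheduling, and you can see the difference with plain promises, no locking involved (a small illustrative benchmark, not from the original article):

```javascript
// Parallel vs serial scheduling of the same 10 x 50ms tasks:
// parallel finishes in roughly 50ms, serial in roughly 500ms.
// A mutex only matters in the parallel case, where tasks actually contend.
const task = () => new Promise(r => setTimeout(r, 50));

async function timeIt(run) {
  const start = Date.now();
  await run();
  return Date.now() - start;
}

(async () => {
  const parallel = await timeIt(() => Promise.all(Array.from({length: 10}, task)));
  const serial = await timeIt(async () => {
    for (let i = 0; i < 10; i++) await task();
  });
  console.log('parallel ms:', parallel); // roughly 50
  console.log('serial   ms:', serial);   // roughly 500
})();
```

In the serial case, each lock request finds the lock free and acquires it on the first try, so polling never kicks in and the raw cost per call dominates; only under contention does the polling penalty show up.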

So, if you actually need a mutex (as opposed to having an artificial need for one), have lots of concurrent requests, and want the best-performing library, then please try live-mutex and see how it works for you. I guarantee it has much higher performance than lockfile and warlock, and I challenge you to provide any evidence that another locking library can beat live-mutex for a given number of concurrent lock requests.

If you’re interested in checking out the library, it’s here:

If you clone the project, you can run the speed tests mentioned in this article, which are located in:

./test/speed