Scalable Concurrency — Meet Non-Blocking I/O

How we can implement non-blocking I/O to improve performance in apps

Gernot Gradwohl
Nov 11

Why Is Non-Blocking I/O More Scalable?

In nearly all modern web apps, we have a lot of I/O. We talk to the database and ask for records or insert/update them. More often than not, we access some files from the hard disk, which again is an I/O operation.

We also talk to various third-party web services, for example for OAuth integration. Many web apps run as microservices these days, where they have to talk to other parts of the same application through HTTP requests.

If you write your web app in Ruby, Python, or many other languages, all of these I/O-related tasks are blocking by default, meaning the process waits until it receives the response and only then continues with the execution of the program.

Node.js [1], on the other hand, uses non-blocking I/O by default. Therefore, the process can continue working elsewhere and execute a callback or resolve a promise when the request finishes.

This allows a single process to fully utilize one CPU core. But is a non-blocking programming model possible in other programming languages too?

Yes, it is! In this blog post, we will discuss how to write a native event loop in Ruby utilizing (nearly) non-blocking I/O and then see how to improve this design.


Native Implementation

First, let’s take a look at a working native implementation:
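Here is a minimal sketch of such a loop, assuming a simple echo server; the port, buffer size, and echo behaviour are illustrative choices, not requirements:

```ruby
require 'socket'

# A naive event loop built directly on IO.select.
# The echo-server behaviour, port, and buffer size are only illustrative.
server  = TCPServer.new(8080)
clients = []

loop do
  # IO.select ships with Ruby's standard library and is cross-platform.
  # It blocks for at most 10 ms and returns the IO objects that are ready,
  # or nil if the timeout expired without any activity.
  readable, _writable, _errored = IO.select([server] + clients, [], [], 0.01)
  next if readable.nil?

  readable.each do |io|
    if io == server
      # A new connection: start watching it as well.
      clients << server.accept
    else
      begin
        data = io.read_nonblock(1024)
        # The "business logic" (echoing the data back) is called
        # directly from inside the event loop.
        io.write(data)
      rescue EOFError
        clients.delete(io)
        io.close
      end
    end
  end
end
```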

Before talking about how to improve this design, let’s briefly discuss the IO.select method, as it is the very heart of our event loop.


IO.select

As mentioned in the comments, this method is cross-platform and can be utilized wherever you run your program.

The first argument it takes is an array of I/O objects — file descriptors, sockets, and the like — that the program wants to read from.

The second argument is again an array of I/O objects, but this time for connections the program wants to write to.

The third argument is an array of I/O objects to watch for exceptional conditions (errors).

Finally, the last argument is the timeout. This is the maximum amount of time that the method blocks. Therefore, in our example above, one tick takes at most 10 ms of waiting plus whatever time the processing of the data takes.
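Put together, a call and its return values look roughly like this (the socket variables are only placeholders):

```ruby
# Wait up to 10 ms for any of these IO objects to become ready.
readable, writable, errored = IO.select(
  [socket_a, socket_b],  # we want to read from these
  [socket_c],            # we want to write to this one
  [socket_a, socket_c],  # watch these for exceptional conditions
  0.01                   # timeout in seconds
)

# Each returned array contains the IO objects that are actually ready;
# if the timeout expires without any activity, the whole result is nil.
```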


Design Discussion of Naive Event Loop

When we take a look at this code, the disadvantages are quite clear. The complexity introduced by the concurrency is tangled up with the business logic, and separating the two is difficult.

The event loop knows about our business logic, as it calls the handler method directly. We could improve this with the help of a register that handles all read/write events.

The register could utilize a simple hash with two keys, read and write, and save the callbacks there. In Ruby, the callbacks could be blocks, procs, or lambdas. Again, a simple implementation could look like this:
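A sketch of what this could look like (the Register class and its method names are my own shorthand, not a fixed API):

```ruby
# A register holding callbacks in a simple hash with two keys: read and write.
class Register
  def initialize
    @callbacks = { read: [], write: [] }
  end

  def register(interest, &callback)
    @callbacks[interest] << callback
  end

  def notify(interest, io)
    # Every registered callback is notified for every event of this kind,
    # which is one of the drawbacks discussed below.
    @callbacks[interest].each { |callback| callback.call(io) }
  end
end

register = Register.new

# Business logic only talks to the register, not to the event loop.
register.register(:read) { |io| puts "readable: #{io.inspect}" }

# The event loop only talks to the register, not to the business logic.
# Note that the set of watched descriptors is still fixed up front.
watched = [$stdin]

loop do
  readable, _writable, _errored = IO.select(watched, [], [], 0.01)
  readable&.each { |io| register.notify(:read, io) }
end
```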

Now we have decoupled our business logic from our concurrency logic. But this would still lead to a kind of callback hell.

JavaScript used to have this problem a lot, but it largely fixed it with promises and, more recently, with the async/await feature. This way, you can write sequential-looking code that runs concurrently.

Still, this design has other disadvantages left. It uses one fixed set of descriptors to watch, and we have no way to change that set at run time. Also, every single callback gets notified for every single read event, although we probably don’t want this.

How can we improve on that? Meet the reactor pattern.


Reactor Pattern

The reactor pattern is the basis of most event loops. It completely separates the application logic from the switching implementation and, therefore, makes the code more maintainable and more reusable.

It consists of two main parts, an event demultiplexer and a dispatcher, and works with two more: resources and request handlers.

A reactor uses a single-threaded event loop, registers resources with the event demultiplexer, and dispatches to the registered callbacks when an event fires.

As seen in our examples, this way there is no need for blocking I/O, and the process can therefore utilize a CPU core to the maximum.
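To make this concrete, here is a minimal sketch of a reactor in Ruby, with IO.select acting as the demultiplexer and a dispatcher that only calls the handler registered for the ready resource (all class and method names are illustrative):

```ruby
require 'socket'

# Dispatcher: resources and their request handlers can be registered and
# deregistered at run time; IO.select acts as the event demultiplexer.
class Reactor
  def initialize
    @handlers = { read: {}, write: {} }
  end

  def register(interest, io, &handler)
    @handlers[interest][io] = handler
  end

  def deregister(io)
    @handlers.each_value { |handlers| handlers.delete(io) }
  end

  def run
    loop do
      readable, writable, _errored = IO.select(@handlers[:read].keys,
                                               @handlers[:write].keys, [], 0.01)
      # Dispatch only to the handler that belongs to the ready resource.
      readable&.each { |io| @handlers[:read][io].call(io) }
      writable&.each { |io| @handlers[:write][io].call(io) }
    end
  end
end

# Usage: an echo server where the handlers contain the application logic.
reactor = Reactor.new
server  = TCPServer.new(8080)

reactor.register(:read, server) do |srv|
  client = srv.accept
  reactor.register(:read, client) do |c|
    c.write(c.read_nonblock(1024))
  rescue EOFError
    reactor.deregister(c)
    c.close
  end
end

reactor.run
```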


Implementations

Well-known implementations in Ruby are EventMachine, Celluloid, and async. Python also has at least one very good implementation, namely Twisted. PHP has ReactPHP, and I am pretty sure nearly all other languages have good implementations too.
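To give a flavor of what these libraries feel like in use, here is a tiny example with the async gem; this is only a sketch, and the exact API can differ between gem versions:

```ruby
require 'async'

# Both tasks run on a single thread; whenever one of them waits,
# the reactor switches to the other.
Async do |task|
  task.async do |subtask|
    3.times { |i| puts "task A: #{i}"; subtask.sleep(0.1) }
  end

  task.async do |subtask|
    3.times { |i| puts "task B: #{i}"; subtask.sleep(0.1) }
  end
end
```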


Disadvantages

As with everything else, a reactor has some disadvantages too, and you have to be aware of them to decide whether this pattern makes sense for your use case or not.

The main disadvantage is that one greedy callback that takes a long time to finish blocks all the other callbacks.

In essence, a reactor is a form of cooperative concurrency. As mentioned above, a reactor is single-threaded, and if one callback fully utilizes the CPU, everything else has to wait.

Another limitation is that the reactor pattern is hard to debug, since the logical flow of the code is not the order in which your program actually runs. This brings additional headaches for developers as well.


From Here Onward

Is the reactor pattern the best thing we have for concurrent I/O?

Actually, no, there are still ways to improve on this. As mentioned above, a traditional reactor dispatches events from the demultiplexer synchronously and has to wait for the callbacks to finish. We could make this part asynchronous as well with the proactor pattern.

If you still need more performance — throw hardware at it! At some point, this is the best option you have. And if you need to do this then a microservice architecture comes in handy as you can scale small parts of your application independently.

[1] Node.js is just an example because this is the most commonly used platform that uses non-blocking I/O as default.

