NodeJS Event loop and libuv
Expanding on the previous blog about non-blocking I/O using libuv (https://medium.com/@jain.sm/non-blocking-i-o-using-libuv-1790b8fdbeff), in this blog we try to understand how the event loop actually works in an implementation like nodejs.
First of all, nodejs is basically the combination of the V8 engine (which JIT-compiles the JS code into machine code) and libuv, which provides the event loop and the non-blocking I/O capabilities to nodejs.
The event loop’s responsibility is to keep draining the event queues in a phased manner. There are four types of event queues processed by the libuv event loop thread.
1. Timers Queue — Timers are set using setTimeout and setInterval. Timers that have expired relative to the current time are processed by the event loop. Internally this is implemented as a min-heap, so the nearest deadline is always at the top.
2. IO Events Queue — This queue holds the completed I/O events. Callbacks to be run on completion of an I/O operation, for example, are stored here and processed by the event loop thread.
3. Immediates Queue — All callbacks registered via setImmediate are processed here.
4. Close Handlers Queue — Any close handlers are processed here.
Over and above this, Node adds its own queues to be processed by the libuv thread. This might make one wonder how these queues are made available to libuv. We will cover this aspect of how the interfacing between nodejs and C++ is accomplished in both directions. The nodejs queues are
1. NextTick Queue — These are the callbacks added using the process.nextTick method
2. MicroTasks Queue — These callbacks include, as an example, the native (ECMA) promises.
So now we have an event loop (Node uses the default loop of libuv) and these queues. Node has a specific order in which it processes these queues, which we can term phases.
In each iteration it starts by
1. Processing the timers in the first phase — all expired timers are determined efficiently, as the timers are kept in a min-heap.
2. Next it processes the IO events queue to run all callbacks for I/O completion events. In this phase libuv takes the fds which are ready for data processing (read/write on sockets as an example) and invokes the callbacks which are registered against those. libuv interfaces with the underlying OS mechanisms like epoll, kqueue, select or IOCP.
3. In the next phase it runs any callbacks registered using setImmediate.
4. In the last phase it runs all the close handler callbacks.
Now between all the above phases, it drains the nodejs-specific queues (the nextTick and MicroTasks queues). Between these two queues, the nextTick queue has a higher priority than microtasks. So the nextTicks are processed earlier than native promises. Bluebird promises till v3.5 used setImmediate as a means to schedule them.
This is how the layout looks:

Lets try and understand this with an example
Promise.resolve().then(() => console.log('promise1 callback'));
Promise.resolve().then(() => {
  console.log('promise2 callback');
  process.nextTick(() => console.log('next tick inside promise2'));
});
setImmediate(() => console.log('immediate1 callback'));
setImmediate(() => console.log('immediate2 callback'));
process.nextTick(() => console.log('next tick1 callback'));
process.nextTick(() => console.log('next tick2 callback'));
setTimeout(() => console.log('set timeout'), 0);
setImmediate(() => console.log('immediate3 callback'));
In the above example we have 2 promise, 3 setImmediate, 3 nextTick and 1 timeout callback.
So when the loop starts, it first drains the nextTick and microtasks (native promises) queues. So it prints
next tick1 callback
next tick2 callback
promise1 callback
promise2 callback
Once this is done, the loop again checks whether any more nextTicks have been queued in the meantime.
So it executes them and prints
next tick inside promise2
Now the loop moves to the first phase, which is the execution of expired timers, and thereby the following gets printed
set timeout
Once this is done, the next phase is for I/O. There are no I/O-related callbacks, so the loop moves to the next phase, which is processing the immediate callbacks.
So it prints
immediate1 callback
immediate2 callback
immediate3 callback
This is in general how the different phases of the libuv event loop work in combination with the two queues of nodejs.
This mechanism is what makes nodejs scale immensely for handling highly concurrent I/O workloads. It uses libuv as the I/O multiplexer, which allows it to handle thousands of TCP connections on a single thread. The thread-per-connection model is not a scalable model for network-I/O-intensive workloads.
If you are looking for more in-depth coverage of virtualization and container internals, please check out this book by the blog author
https://leanpub.com/linuxcontainersandvirt
https://www.amazon.com/dp/1080299424?ref_=pe_3052080_397514860
Disclaimer: The views expressed above are personal and not of the company I work for.