Concurrent JavaScript — Introduction

So, I have been working on a multi-threaded JavaScript runtime, Nexus.js. You can think of it as the mad cousin of Node.js.

First of all: there is no event loop.

You heard that right: there is no event loop. Everything is scheduled on a thread pool, which picks "tasks" from a priority queue and executes them in parallel, on all CPU cores, simultaneously.

There’s no `process.nextTick()` either. There is, however, `Nexus.Scheduler.schedule()` and that’s the entry point into the multi-threaded world.
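To illustrate the idea, here is a minimal single-threaded sketch in plain JavaScript — not the Nexus API, and `PriorityScheduler` is a hypothetical name — of a scheduler that picks tasks from a priority queue, the way the Nexus thread pool does (except that in Nexus, many pooled threads would drain the queue concurrently):

```javascript
// Conceptual sketch of a priority-queue task scheduler.
// In Nexus, each thread in the pool would pull from a queue
// like this one concurrently; here we drain it on one thread.
class PriorityScheduler {
  constructor() { this.tasks = []; }
  schedule(fn, priority = 0) {
    this.tasks.push({ fn, priority });
    // Keep the queue ordered: highest priority first.
    this.tasks.sort((a, b) => b.priority - a.priority);
  }
  drain() {
    const results = [];
    while (this.tasks.length) results.push(this.tasks.shift().fn());
    return results;
  }
}
```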

Internally, the Scheduler uses native C++ coroutines and all kinds of trickery to make asynchronous I/O possible.

Now we come to the next side effect: ES6 Promises that run on all cores.

(If you’re observant, you’ll notice that it didn’t print “bozo”; that’s because it’s an old screenshot. You’ll also notice that the execution order changes every time the application is started, which highlights the non-deterministic nature of multi-threading.)

A multi-threaded scheduler means that a Nexus.js application can process and chain the next promise as soon as the previous resolves or rejects on any available CPU core.

Think of this example: your server receives 8 concurrent requests. In Node.js’ case, the 8 events will be queued in the event loop, which will then execute your JavaScript handlers one by one, let’s say one per metaphorical “tick”.

In the Nexus.js case — assuming that you have 4 cores and 8 threads running — they will be enqueued in the priority queue, whereupon all 8 threads will each pick one of the JavaScript handlers and execute all of them in parallel; that’s 8 requests in one metaphorical “tick”.
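The contrast can be sketched with ordinary promises. In plain Node.js the continuations below still share a single thread; the point of Nexus is that each `.then()` callback may instead run on whichever pooled thread is free:

```javascript
// Model 8 concurrent "requests" as promises. The chaining code is
// identical in both runtimes; only the execution model differs.
const handlers = Array.from({ length: 8 }, (_, i) =>
  Promise.resolve(i).then(id => `handled request ${id}`));

// Promise.all preserves order, so results arrive as an array of 8.
Promise.all(handlers).then(results => {
  console.log(results.length); // 8
});
```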

(Note that sockets are still on the to-do list, I’m still working on the I/O API)

Speaking of I/O, the Nexus I/O API is structured a little differently from Node.

There are a few concepts here; if you’re familiar with boost::iostreams, you’ll feel right at home. (Warning: this part is heavily WIP and may change at any given time.)

First, you have devices, which are the basic building blocks of any I/O graph. There are Sinks and there are Sources: a Sink can be something like an output file, and a Source can be a socket, for example.

Then there are Filters, which work to transform an input buffer into a different output buffer. (All I/O is performed via ArrayBuffer objects by the way)

Lastly, there are Streams, which take a Device, and a series of Filters, and tie them all together into something useful.
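A rough sketch of how these three concepts compose, using hypothetical plain-JavaScript stand-ins (`ArraySourceDevice`, `UpperCaseFilter`, `FilterStream` — not the real `Nexus.IO` classes):

```javascript
// A Source device: yields a buffer of data.
class ArraySourceDevice {
  constructor(data) { this.data = data; }
  async read() { return this.data; }
}

// A Filter: transforms an input buffer into a different output buffer.
class UpperCaseFilter {
  transform(input) { return input.toUpperCase(); }
}

// A Stream: takes a device and a series of filters and ties them together.
class FilterStream {
  constructor(device) {
    this.device = device;
    this.filters = [];
  }
  pushFilter(filter) { this.filters.push(filter); }
  async read() {
    let buffer = await this.device.read();
    for (const f of this.filters) buffer = f.transform(buffer);
    return buffer;
  }
}
```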

This is all done via Promises, of course, although there are readSync and writeSync functions that work synchronously (which I’m contemplating removing at the moment).

So, let’s take the code from the screenshot above as an example:

const device = new Nexus.IO.FileSourceDevice('../../tests/utf16.txt');
const stream = new Nexus.IO.ReadableStream(device);
stream.pushFilter(new Nexus.IO.EncodingConversionFilter("UTF-16LE", "UTF-8"));
stream.pushFilter(new Nexus.IO.UTF8StringFilter());
stream.read().then(v => console.log("buffer: " + v));

The code does the following:

  1. Create a file input device.
  2. Create a ReadableStream with the device.
  3. Append an encoding conversion filter to convert `UTF-16LE` buffers into `UTF-8`.
  4. Append a special filter (UTF8StringFilter) which converts ArrayBuffers into strings.
  5. Perform a read operation and output the result (now a string) to the console.

As you can see, this is all Promise-based, so no more callbacks in the API.

And speaking of the API, someone will probably ask this: why not separate everything into modules like Node? Why have the entire API initialised for every global object, regardless of whether or not it’s used?

The answer is simplicity. And fear not: the API is not initialised unless you access it. As a matter of fact, the `Nexus` API object itself is not initialised until you try to access it. Thank you, JavaScriptCore, for the amazing interface.

But won’t that affect performance, you ask? No: everything is initialised from static C++ structures and cached afterwards, so there’s no chance of initialising the same object twice.
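The caching behaviour can be sketched in plain JavaScript with a lazy property: the builder runs on first access, and the result replaces the getter so it is never constructed twice (`lazyProperty` is a hypothetical helper, not part of the Nexus API):

```javascript
// Install a property that is built on first access and cached thereafter.
function lazyProperty(obj, name, build) {
  Object.defineProperty(obj, name, {
    configurable: true,
    get() {
      const value = build();
      // Replace the getter with the built value: subsequent accesses
      // read the cached value and never call build() again.
      Object.defineProperty(obj, name, { value });
      return value;
    }
  });
}
```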

This concludes the first article on concurrent JavaScript, stay tuned! More will follow as the project progresses!

If you’re interested in the code for the project, you can browse it here.

Part II is here! We compare speeds with Node.js!