You can read the previous part here:
A mad cousin of Node.js would do crazy things, so let’s explore more of what it does and how it does it!
I just added an EventEmitter, but unlike Node's emitter, this one really is crazy!
First of all: once an event is triggered, all of its handlers execute in parallel, across all cores, simultaneously, and a promise is returned. The returned promise is guaranteed to be fulfilled once every handler registered for the triggered event has finished executing.
You can tell that the execution order above is nondeterministic by looking at the console output from the event handlers: they all execute at the same time!
So, let’s pass some arguments and see what happens.
Here we chain two invocations with different parameters; the enqueued handlers will run in two bursts. One group will run first, then the other will kick in. Beware that if a handler modifies a non-scalar argument (an object, an array, etc.), it is modified for all handlers. The language falls short here. I wish there were a way to declare const arguments, but sadly there's no such thing. You'll have to use immutable structures or Object.freeze(), and be mindful of such pitfalls in your code. Unless you can actually figure out a way to use this to make something clever!
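The shared-argument pitfall is easy to demonstrate in isolation. In this sketch, plain callbacks stand in for event handlers (the handler names are made up); both receive the same object reference, so a mutation in one leaks into the other, and Object.freeze() is one defensive option:

```javascript
// Both handlers receive the SAME object reference.
const payload = { count: 0 };

const handlerA = (data) => { data.count += 1; };
const handlerB = (data) => { console.log('B sees count =', data.count); };

handlerA(payload);
handlerB(payload); // prints "B sees count = 1" — A's mutation leaked through

// Freezing the payload neutralizes handlerA's write: it becomes a
// silent no-op in sloppy mode, or throws a TypeError in strict mode,
// so we guard with try/catch to cover both.
const frozen = Object.freeze({ count: 0 });
try {
  handlerA(frozen);
} catch (e) {
  // TypeError under strict mode
}
console.log('frozen count is still', frozen.count); // still 0
```

Note that Object.freeze() is shallow: nested objects inside a frozen payload remain mutable unless you freeze them too.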
Now to the next feature:
The result of a call to .emit() is a promise, as we've already established, but a side effect is that when the handlers return values, the promise collects them all into an array (unordered, given the parallel execution), in the spirit of Promise.all(), and you can use them however you like!
As a matter of fact, this is implemented internally using the native Promise.all().
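To make the Promise.all() connection concrete, here is a minimal sketch of what such an .emit() would do internally. Plain async functions stand in for registered handlers, and all the names and return values are invented for illustration:

```javascript
// Stand-ins for three registered handlers; in the mad runtime these
// would run on separate cores.
const tickHandlers = [
  async () => 'parsed-config',
  async () => 'warmed-cache',
  async () => 'db-connected',
];

// The core of the result-collection step: one promise per handler,
// combined with the native Promise.all().
const emitResult = Promise.all(tickHandlers.map((h) => h()));

emitResult.then((results) => {
  // One entry per handler. Since handlers run in parallel, treat this
  // as a bag of values rather than relying on any particular order.
  console.log('collected:', results);
});
```

A natural use of the collected array is aggregation, e.g. checking that every startup handler reported success before proceeding.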
This allows you to do crazy things, and I’m sure you can think of more clever uses than my naïve example!
What I didn't talk about is what this means for I/O. Now that I've implemented this parallel mechanism, I/O will be piped through it instead of the old approach. Be prepared for a very fast benchmark the next time we talk about I/O operations!