Asynchronous Programming in JS

Part 1. Event Loop & Other Animals

This is the first part of a series of articles on asynchronous programming in JavaScript, and it deals with the mechanisms and constructs that make it possible. Later parts will cover various paradigms, common pitfalls and best practices, and walk through code examples.

This article, as well as the ones to follow, is mostly aimed at beginner to intermediate level developers seeking some order amidst the async craziness.

Part 2: Paradigms and Constructs is also available. Enjoy.

Why?

JavaScript has been around for a long time. The last half-decade saw a truly incredible emergence of JavaScript frameworks, libraries, techniques and paradigms, each establishing or destroying trends and preferences. It also saw its steering body, TC39, reconvene and put forth a vision and a roadmap for the language.

Asynchronous features of the language, one of the more important parts of that roadmap, go through a renaissance of sorts, so it seems like a great time to align on what these features are, how to properly use them and what to expect in the future.

Having said that, let’s start with a bang…

JavaScript is NOT Asynchronous

This bears repeating. JavaScript is NOT asynchronous.

That’s quite a statement to make in an article on asynchronous programming in JavaScript. Is it correct, though?

Most (all?) JavaScript engines, which actually compile and execute JavaScript code, are single-threaded and have no concept of asynchrony. Moreover, JavaScript the language didn't have a native construct related to asynchrony until Promises were included in ES6.

Until ES6 there were no built-in async constructs, and the de facto solution, async callbacks, proved to be insufficient. In the second part of the series, we discuss the issues with the callback solution and the way Promises, Generators and async/await resolve them.

Clearly, you can write JavaScript code that adheres to the asynchronous paradigm, as anyone who has ever issued an AJAX request can attest. How does it work, then?

The Moving Parts

There is a combination of various moving parts that allow it to happen.

JavaScript Engine

There are several JavaScript engines around, with V8 (of the Chromium and internal combustion fame) being the most popular and the one that powers both the Chrome browser and Node.

The engine is the component that actually compiles and executes the JavaScript code. In the case of V8 (Editor note: < 6, before TurboFan and such), it does so by:

  1. compiling JavaScript into native code using the unoptimized full compiler, as quickly as possible
  2. instrumenting the compiled code with profiling components
  3. running an optimizing compiler, which, based on the information gathered by the instrumentation above, optimizes or de-optimizes the native code throughout the execution
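
The profiling step is why code with consistent object "shapes" tends to run faster: the optimizing compiler specializes functions for the shapes it has observed, and a new shape at a hot call site can trigger a de-optimization. A hedged sketch (the performance effect is V8-internal and not observable from the code itself; this is just an illustration of the pattern):

```javascript
// The optimizing compiler can specialize this function for the
// object shapes it sees at the call sites below.
function magnitude(point) {
  return Math.sqrt(point.x * point.x + point.y * point.y);
}

// Consistent shape: every object is created as { x, y },
// so the call site stays monomorphic and easy to optimize.
const consistent = [];
for (let i = 0; i < 1000; i++) {
  consistent.push(magnitude({ x: i, y: i }));
}

// Different shape ({ y, x, z }, different property order and an extra key):
// still perfectly correct, but it may force the engine onto slower paths.
const mixed = magnitude({ y: 4, x: 3, z: 0 }); // mixed === 5
```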

There are several fundamental concepts that will eventually play a part in how asynchrony is implemented, and this is as good a time as any to introduce them (or rather, refresh them in memory).

Execution Context

In ECMA-262 specification, Execution Context is an abstract concept that defines “execution environment” for a piece of code. The quotes are not accidental, as the standard doesn’t define any specifics about the implementation, so “environment” and “execution” are approximations.

As often is the case with abstract terms, it is much easier to illustrate than to formally define. Consider this amazing piece of code:

const FACTOR_OF_THREE = 3;

function multiply(a, b) {
  return a * b;
}

function triple(a) {
  return multiply(a, FACTOR_OF_THREE);
}

triple(10);

Using the (still loosely defined) Execution Context, the code above can be pictured as a stack of contexts: the global context at the bottom, the triple(10) context above it and the multiply(a, FACTOR_OF_THREE) context at the top. This, of course, looks awfully similar to a Call Stack and is appropriately called the Execution Context Stack. Now that we intuitively understand what kind of structure the Execution Context Stack is, let's briefly discuss some of the details.

Execution Context is a collection of information about the scope of a piece of code. There are 3 types of Execution Contexts in JavaScript:

  1. Global Execution Context, the default context, created once per program
  2. Function Execution Context, created every time a function is called
  3. Eval Execution Context, created for code executed inside an eval call

While they differ in the way the JavaScript engine enters them, they all behave similarly. There are two stages in the processing of an Execution Context:

  1. Creation stage:
     a. the Scope Chain is created
     b. actual parameters, function declarations and variable declarations (in that order) are created
     c. the context (this) is bound
  2. Execution stage: values are assigned to the parameters and declarations created in 1b.
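
This two-stage split explains JavaScript's hoisting behavior: declarations are registered during the creation stage, before any line of the body actually runs. A small sketch:

```javascript
// Function declarations are created in full during the creation stage,
// so calling addTwo before its definition in the source works:
const result = addTwo(40); // result === 42

function addTwo(n) {
  return n + 2;
}

// var declarations are also created in the creation stage, but their
// values are only assigned during the execution stage, hence 'undefined':
console.log(typeof hoistedVar); // 'undefined' at this point
var hoistedVar = 'now assigned';
```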

So, each Execution Context can be schematically represented by this JavaScript object:

const ExecutionContext = {
  this: { ... },
  vo: { ... },
  scope: { ... }
};

For an extremely deep and fascinating discussion on Execution Context and much more, see here, here and here.

With the general understanding of what Execution Context is, let’s return to our main discussion on how it is processed to allow asynchronous execution of code.

Run To Completion

JavaScript code is of a “run to completion” kind — there is no way to interrupt a piece of code being executed by the engine unless that code yields the control by itself. The function execution context is entered and then executed, with new functions being added to the top of the stack as they are being called by the initial context. Only when the stack is empty can any other code (which wasn’t a part of the original “chain”) begin its execution.
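
To see run to completion in action, consider this sketch: even a timer with a 0ms delay has to wait until all the synchronous code on the Stack finishes.

```javascript
const order = [];

// Queued immediately, with a 0ms delay...
setTimeout(() => order.push('timeout callback'), 0);

// ...but this synchronous work runs to completion first,
// because nothing can interrupt code already on the Stack.
for (let i = 0; i < 1e7; i++) {} // simulate long-running work
order.push('synchronous code');

// Only once the Stack is empty does the timeout callback run,
// so the final order is ['synchronous code', 'timeout callback'].
```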

Such a feature allows, both on Node and in the browser, keeping the interface "snappy" and serving a lot of calls, at the price of risking that one of the call handlers occupies the engine and starves the others.

While there are measures in place to cap such behavior (like limits on the number of recursive calls or the number of callbacks allowed for execution), the only real remedy is to refrain from writing code that takes a long time to execute.

Or is it? In the browser there is a standard called Web Workers that allows you to outsource computation-heavy operations (or anything, really) to a thread off the main one.

const worker = new Worker('worker.js');

worker.onmessage = (event) => {
  ...
};

Node has its own ported versions of Web Workers, most of which are relatively stale and rely on things like Fiber or Node's own paradigm of child/fork processes to handle the offloading of complex or long-running tasks. A standard implementation keeps being delayed and won't make it into ES7 (or probably ever).

The main line of reasoning of not including it in Node claims it to be an incorrect approach altogether and that the solution is to actually break the long-running tasks into smaller executable chunks. Whether that is always possible is a topic for a separate discussion.
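
That chunking approach can be sketched as follows (a minimal illustration; processInChunks is a hypothetical helper, not a real API):

```javascript
// Process a large array in small chunks, yielding back to the Event Loop
// between chunks so that other queued callbacks get a chance to run.
function processInChunks(items, handleItem, chunkSize, onDone) {
  let index = 0;

  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(nextChunk, 0); // re-queue the rest, freeing the Stack
    } else if (onDone) {
      onDone();
    }
  }

  nextChunk();
}

const doubled = [];
processInChunks([1, 2, 3, 4, 5], (n) => doubled.push(n * 2), 2);
// Only the first chunk ([2, 4]) is processed synchronously;
// the remaining chunks arrive via the Event Queue.
```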

Runtime

The JavaScript engine, with all its power, doesn't exist in a vacuum; rather, it relies on a runtime environment to provide it with services that allow it to access the DOM, respond to user events, send AJAX requests, read a file or delay the execution of a function.

The runtime is an envelope around the engine, providing it with libraries and mechanisms. Some of them, like the DOM API, are relatively straightforward and used much like you'd use any library: by importing/including them to add features to the language. Others, like AJAX or the file system API, also require additional structures.

Runtime can absolutely be multi-threaded, as is evident by the way the browser handles many concurrent tasks like rendering, handling network requests and responses, fulfilling API calls from the engine and many others.

Event Loop and friends

Event Loop & Event Queue

Finally, the Event Loop is the heart of the matter: the connection between the single-threaded code executed by the engine and the multi-threaded runtime providing the various APIs.

The job of the Event Loop is to introduce a concept of time into execution of the code — scheduling of execution (…that sounded a little morbid) and handling of “waiting”.

The non-blocking nature of JavaScript is a crucial characteristic and one of the main keys to its success (along with an extremely low barrier to entry). In the browser we can't block the user while waiting for an AJAX call to return. On Node we can't block request handling (for example) while waiting for the file system API to respond. There is a need to allow timed execution of code, without having to wait for it.

Event Loop and Event Queue (sometimes called Message Queue) allow just that.

Event Queue is the waiting room for the code to be executed “when there is an opportunity” and Event Loop is the one that moves that code to the stack.
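
As a simplification (an assumption for illustration, not how any engine actually implements it), the mechanics can be sketched as a toy model:

```javascript
// A toy model of the Event Queue and Event Loop; real runtimes are far
// more involved, but the FIFO discipline is the same.
const eventQueue = [];
const log = [];

// What the runtime does when, say, an HTTP response arrives:
function enqueue(callback) {
  eventQueue.push(callback);
}

// What the Event Loop does once the Stack is empty:
function drainQueue() {
  while (eventQueue.length > 0) {
    const callback = eventQueue.shift(); // oldest message first
    callback(); // runs to completion before the next one is taken
  }
}

enqueue(() => log.push('first response handler'));
enqueue(() => log.push('second response handler'));
drainQueue();
// log is now ['first response handler', 'second response handler']
```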

To make this concrete, let's trace the processing of the following code:

function handler() {
  ...
}

function retrieve() {
  fetch('...').then(handler);
}

Let’s go over it, step-by-step:

  1. the JavaScript code is downloaded by the browser, parsed and fed to the Engine
  2. the global execution context enters the Stack (and so defines retrieve and handler on its Variable Object)
  3. the retrieve execution context is added to the Stack
  4. the fetch AJAX API is called and the retrieve execution context returns and is removed from the Stack
  5. an HTTP request is issued
  6. sometime in the future the HTTP response is received and handler is placed on the Event Queue
  7. the Event Loop receives a notification (or checks) that the Stack is empty
  8. the Event Loop retrieves handler and places it on the Stack

If, at any point, either the Event Queue or the Stack had other items to process, handler would have remained on the Event Queue until both were cleared. That is, of course, because the engine is single-threaded, so all Execution Contexts already on the Stack must finish before any new code may be added.

Summary

In this part of the series we discussed, however briefly, what makes async execution of JavaScript by a single-threaded engine possible.

We also laid some groundwork towards understanding why some of the paradigms we are going to discuss in the next part are lacking in expressive power and features. After all, they weren't really a part of the language, but rather enabled by the environment.

If some of the concepts seem detached from “reality” or a little disjointed — that’s on purpose. We’ll try to complete the picture as we go through the articles that follow.


Originally published at blog.naturalint.com on February 15, 2017.
