Adding Promise Support to Core

Two Sundays ago, I opened a pull request to the nodejs/node repository introducing support for Promises. There’s been a lot of discussion since that point, with over 400 comments in the main thread alone. Several discussions have split into other issues. This post will summarize the state of the discussion as it stands. I will address the question of why I believe they should be supported, describe the current proposed API & timeline, and note the current items being worked on.

Why support Promises?

Given that many discussions around Promises turn adversarial, and that solutions currently exist in the ecosystem, why support them in core?

While it’s true that the topic can be contentious, I believe that this is largely due to the framing of the conversation. It is often set up as a zero-sum game: for Promise users to make gains, callback users have to lose ground, or vice versa. However, I believe that we can frame this discussion in a more productive fashion.

If we accept that both callbacks and Promises are both valid ways to build asynchronous programs, the discussion becomes about better supporting a set of users, instead of about one pattern invalidating another. Callbacks are, and will continue to be, a sturdy, reliable pattern on top of which to build your application — as will Promises. This discussion is about making life easier for a set of our users without incurring a cost for another set of users. It is not about which pattern is best.

The status quo front-loads a lot of decisions onto authors of Promise-based packages. That is, if I and another package author wish to interoperate using Promises, that agreement alone is insufficient for us to proceed.

In order to interoperate, we have to agree on the shim, implementation, and specific version of Promises in order to allow Promises to safely be passed between packages. Otherwise, a tricky (and manual) process of wrapping and unwrapping Promises at the edges of the package APIs is necessary. For most users, neither of these approaches is ideal — they just want to use native Promises, not to specify all of the details. Users that wish to use different implementations still can, but the requirement of marshaling and unmarshaling at package boundaries becomes “opt-in” for the majority of users.

Compounding this situation, async/await syntax uses Promises as the underlying mechanism for representing asynchrony. Users that wish to use this feature, at present, will have to pick a shim, implementation, and version in order to use this syntax with Node APIs. This introduces friction for newcomers to Node, especially those coming from the frontend where these patterns work out of the box.

Users expect Node to provide a JavaScript runtime comparable to browser environments. Node has a history of trying to align with these platforms — from small things, like making the timer API look and feel like the setTimeout offered by the browser, to bigger things, like providing users our own TypedArrays in-tree before our vendored copy of V8 offered them. Users have asked for a Promise-based API many times in the past, and are likely to continue doing so into the future; I believe the proximate cause of this is that the lack of Promise support creates a dissonance between Node and other JS environments.

Finally, exposing a Promise API starts to bridge a divide in the community. By including a core Promise API alongside the callback API, at a platform level we reinforce the notion that both patterns are valid. It does not matter to Node which pattern suits you best. This makes participating in the Node ecosystem and in Node core more welcoming for Promise users.

The Approach

The callback API will not change to suit the Promise API; it will remain the same. The current approach installs a promisified version of each method alongside the original method. We have the option of exposing core’s promisification method via require('util'), but currently it is internal to core.

In presenting the Promise API as a regular transformation on top of the callback layer, we ensure that the maintenance cost of the Promise layer is a small constant added to the maintenance cost of the callback layer. We also ensure that callback users are not paying a performance tax to support promise users.

This approach doesn’t preclude the later inclusion of a submodule or ES2015 module-based approach to surfacing a Promise API; however, that is out of scope for this PR. EDIT: That discussion will be held after this PR is merged.

In the vein of closed avenues of inquiry, returning a Promise from callback APIs when no callback is passed is not workable. Some callback APIs already have significant return values separate from their callback value. Additionally, a few crypto APIs are synchronous when called without a callback — changing this would introduce backwards incompatibility.

The API is experimental and, if approved, will land behind a flag that is off by default; the flag will likely be --enable-promises. Work may continue on the API once it has landed. It would be at least one major version until the flag is lifted, and if the API proves untenable to support it may be removed before that point. At least two major versions later, it would become officially supported, meaning that it will not enter LTS support until another year after that, as I understand it. The timeline on this feature is long and there are go/no-go checks before it becomes supported.

The earliest a flagged version could land is the week of the 22nd, February 2016. If landed, the first go/no-go check for unflagging should happen in October 2016, if I have my major version timings right.

The proposed API looks like this:

const fs = require('fs')
fs.readFilePromise('some/file').then(data => {
  // ...
})

// shortcut for top-level promise API
const {readFile} = require('fs').promised
readFile('some/file').then(data => {
  // ...
})

Only “single occurrence” async APIs provide Promise variants at the moment. In practice, this means that the following core modules will include Promise variants:

  • cluster
  • child_process
  • crypto
  • dgram
  • dns
  • fs
  • zlib

Some methods of net.Socket, readline, and repl also provide Promise variants. Streams do not currently provide Promise variants.

The Node Core Technical Committee is planning on meeting to determine how to move forward on this PR sometime during the week of February 22nd.

Technical Issues & Proposed Solutions

AsyncWrap, Domains and Microtask Queue

Native Promises currently interact incorrectly with Domains and AsyncWrap. This is largely due to a lack of visibility into and control over V8’s “microtask queue.” The microtask queue is a language-mandated queue of functions to be run after the exhaustion of a JavaScript stack. In the case of Promises, settling a Promise with pending handlers OR the addition of handlers to a settled Promise is mediated through the microtask queue. AsyncWrap currently relies on Node’s ability to exert full control over the top-level entry points from event loop to JavaScript in order to provide hooks for users to track new sources of asynchrony. Domains also lean on this ability. The microtask queue is opaque to Node — Node cannot see when functions are added to the queue, or when a given queued function is executed. Given this, it’s impossible for Node to fire the appropriate AsyncWrap hooks, or to associate the appropriate domain with a Promise handler.

I am exploring making V8’s Microtask Queue pluggable, in the style of V8's ArrayBuffer::Allocator API. This should allow us to add the necessary hooks for AsyncWrap and domains, while affording us the opportunity to consolidate the process.nextTick and microtask queues into a single queue. I have a patch that implements this, and once I’ve had a few Node core members take a look at it I’ll submit it to the V8 team. If accepted upstream, we’d float this patch against V8 in order to unblock AsyncWrap and Promise unflagging.

Promises Unsuitable for Post Mortem Debugging Tools

The @nodejs/post-mortem working group has expressed concerns about Promise use interfering with post-mortem debugging tooling. Specifically, the reliance on throw as the mechanism for error propagation means that in many cases, processes that crash on unhandled rejection will have already unwound the stack. Unwinding the stack is highly undesirable for post-mortem purposes, both because the stack contains stack frame parameter information (which Error objects lack), and because distancing the exceptional state from the serialization of program state means that valuable heap information may have changed before the program was serialized.
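The crux of the concern can be seen in miniature below: by the time the unhandledRejection event fires, the stack that produced the error has already unwound, and only the string trace captured at Error construction survives. This sketch shows the current observable behavior, not any proposed mitigation.

```javascript
process.on('unhandledRejection', (err) => {
  // The throwing stack frames are long gone by the time we get here;
  // a core dump taken now would not contain them. All that remains is
  // the string trace captured when the Error was constructed.
  console.log(err.message) // 'boom'
})

function failsDeepInTheStack () {
  // The rejection propagates through the Promise machinery with
  // throw-like semantics, unwinding frames as it goes.
  return Promise.reject(new Error('boom'))
}

failsDeepInTheStack()
```

Contrast this with an uncaught synchronous throw under --abort-on-uncaught-exception, where the process aborts with the throwing frame still on the stack.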

Three mitigations are currently being investigated:

  1. Skip intermediate non-user-installed handlers on Promise rejection. This allows the rejection to propagate to the unhandledRejection handler without unwinding the stack. Change the default unhandledRejection behavior so that, in the absence of a user-installed process.on('unhandledRejection') listener, the process crashes, and investigate making synchronous rejections signal unhandledRejection immediately, versus the current “on next tick” behavior. Combined, these steps make unhandled rejections work “as expected” with current post-mortem tooling, and align unhandled rejection behavior with uncaught exception behavior. This also makes explicit an extant unwritten rule: package authors should avoid leaning on unhandled rejection behavior, since they cannot presently rely on consumers of their package not crashing the program on unhandled rejection. See this comment for more.
  2. As a slight alternative to mitigation one, continue to wait until the next tick before firing unhandledRejection, allowing synchronous rejections to be handled, unless the --abort-on-unhandled-rejection flag is set. This causes the flagged and unflagged behavior to diverge, but preserves the current synchronous rejection behavior for unflagged programs.
  3. Explore adding a V8 Context-associated hook for VM-originated programmer errors, like ReferenceError, TypeError, etc. This would allow post-mortem users to short-circuit Promise (& try/catch) machinery and abort on programmer error.

I believe that we should have a clear answer for the Post Mortem WG’s concerns before unflagging the feature, but not necessarily before landing the PR.

Rejecting Operational Errors Hides Programmer Errors

Some users wish to limit the use of try/catch to truly exceptional, unexpected situations. Node currently caters to this need by only throwing exceptions from asynchronous methods if the provided arguments are invalid for the API in question. These users would prefer to continue to reserve exceptions for unexpected behavior as they start using async/await. Most existing shims, and indeed the original PR, treat operational errors, like EEXIST, ENOENT, and EMFILE, as rejections, which map to thrown exceptions in async/await programs. As an example:

async function reader (filename) {
  try {
    const data = await fs.readFilePromise(filename)
  } catch (err) {
    // could be a programmer error ("filename was undefined") or
    // it could be an operational error ("filename does not exist").
  }
}
For many users the above example represents acceptable use and is analogous to the fs.readFileSync API. Others wish to keep programmer errors separate from operational errors.

There are three potential mitigations:

  1. Via @littledan: When implementing the Promise API, use destructuring to return [err, value]. EDIT: It looks like there’s strong objections to this approach, and it’s unlikely that we’ll be going this route.
  2. Via @zkat: Allow a recovery object to be passed to Promise-wrapped functions in order to specify behavior in case of operational errors. EDIT: Notably, this does not modify Promises/A+ behavior. This is purely a pattern employed by the Node API to allow users the option of swapping an operational error for a resolution value before the Promise is settled. Users that don’t wish to use this pattern may omit the recovery object entirely, and the Promise API will work as expected, rejecting operational errors.
  3. If implemented, the first mitigation approach from the post-mortem section would separate programmer errors from operational errors by crashing programs that lack a process.on('unhandledRejection') handler on synchronous rejection.

Examples follow:

// Mitigation 1 with async/await
// - err may be a programmer error _or_ an operational error
const [err, value] = await fs.readFilePromise(path)

// Mitigation 1 with raw promises
fs.readFilePromise(path).then(([err, value]) => {
})

// Mitigation 2 with async/await
// - throws exception on bad "path" param
const value = await fs.readFilePromise(path, {
  ENOENT (err) {
    return null
  }
})

// Mitigation 2 with raw promises
fs.readFilePromise(path, {
  ENOENT (err) {
    return null
  }
}).then(value => {})

// Mitigation 2 with no recovery object
try {
  const value = await fs.readFilePromise(path)
} catch (err) {
}

// Mitigation 3
// - throws operational errors (ENOENT)
// - when passed a bad path, crashes the program (regardless
//   of try/catch)
const data = await fs.readFilePromise(path)

Where to Participate

The main PR is the best place to participate. I’ve added a FAQ at the top with links to specific responses to make navigating the thread a bit easier. If you don’t have the time to read the entire thread, please continue to share your concerns and I will attempt to address and direct them as they come in. If you’re nervous about wading into the thread, please feel free to contact me directly at the email associated with my GitHub account, or via Twitter @isntitvacant.

Thanks for your time!