Building a Better Promise

Future-Facing Functional Tasks: what they get right where ES6 Promises got wrong

Drew Tipson
Oct 20, 2016

I used to love jQuery’s Deferreds. Not just as a nicer callback interface, but as a fundamental structure for building programs around. Then I found out that there was something even better out there: the Promises/A+ spec (which jQuery now supports). So when Promises rolled out with ES6, I was overjoyed to have my favorite plaything right smack in the language itself.

But then I stumbled into functional programming and, well, everything fell apart. I don’t expect you to agree with me that Promises are deficient in a lot of ways, but I do think it’s worth knowing that there are alternatives out there, and how they work.

So today, I’m going to walk you through the basics of creating a simplified functional “Task,” loosely modeled after the Task in the folktale library (which I see is preparing for a major new release, yay!). While you’d probably want to use a battle-tested, edge-case wrasslin’ real Task/Future library in practice, I’ve found that thinking through the basics of how Tasks (and Promises) work gives me a much sounder intuition about what’s really going on here.

As we’ve done in the past, we’ll start by creating a sort of type container constructor, using a little trick from the daggy typed-constructor library to allow us to create “typed” containers for values without being forced to rely on the new keyword. Here’s our basic constructor for a Task:
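The original embedded gist isn’t reproduced here, but the idea might be sketched like this (a minimal stand-in for the daggy-style trick, not daggy itself):

```javascript
// A sketch of the daggy-style trick: a "typed" constructor that works
// with or without `new`, storing its one argument as a named prop.
function Task(computation) {
  if (!(this instanceof Task)) return new Task(computation);
  this.fork = computation;
}
```

With this, `Task(fn)` and `new Task(fn)` both produce a real `Task` instance.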

If you ignore the boilerplate of the little constructor trick, this is actually extremely simple: take an argument, store it as a named prop. In fact, sweetened up with ES6 syntax, all we’re really doing here is this:
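A minimal ES6 equivalent might look like this (a sketch of the same idea, stripped of the constructor trick):

```javascript
// ES6 sugar for the same thing: take a computation, store it as `.fork`
class Task {
  constructor(computation) {
    this.fork = computation;
  }
}
```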

So, now we can just call Task with some “forkable computation” as its only argument to create a new Task with that computation stored as a named method called .fork

Notice that by our arbitrary choice of name we haven’t really nailed down anything about what a “computation” is here: this structure is super abstract (so abstract that it could be made into any number of other types). But you can probably infer from the name that we intend computation to be some sort of function.

So, to make it more concrete: let’s now define a way to get a simple value up into the Task type. Here’s where we’ll start to see the shape of the sort of thing we’re building up.
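A sketch of that definition, using a plain factory and the same `(l, r)` naming the text uses (the prototype copy mentioned below is skipped here for brevity):

```javascript
// minimal factory, plus .of as a "static method"
const Task = computation => ({ fork: computation });

// .of lifts any plain value into a Task: a pre-baked computation that
// ignores the first callback (l) and calls the second (r) with the value
Task.of = value => Task((l, r) => r(value));
```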

We’ve defined a function called .of as a “static method” and then stuck it on the prototype as well (for convenience). But what is it? We know that a type’s .of method is a way to get any simple value up inside a type: often referred to as the “default minimal context.” We already said that the type is going to hold a function: so how does a Type that holds a function represent a value? Well, by just creating a particular, pre-baked “computation”:

(l, r) => r(value)

That function, when called, simply runs some t.b.d. function r with that original value, unaltered. This Task.of should seem familiar if you’ve ever looked into the IO type and its .of method:
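For comparison, here’s a minimal IO next to Task (a sketch, assuming an IO whose stored thunk is exposed as `runIO`):

```javascript
// IO: holds a nullary function; .of wraps the value in a thunk
const IO = run => ({ runIO: run });
IO.of = value => IO(() => value);

// Task: holds a binary function; .of calls the second callback with the value
const Task = computation => ({ fork: computation });
Task.of = value => Task((l, r) => r(value));
```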

I’m saying “value” here, but it’s worth remembering that the value could be anything, including another function. We don’t really need to worry about that: Task/IO are just about managing these values: not inquiring into their nature.

The definition of IO.of was basically just a generic way to create a nullary function that, when later called, will just return the original value again. Our simple Task.of is very similar, with two very important differences:

  1. The main “computation” function that we’ll be using to create Tasks is a binary one (a function that takes two arguments, l and r). In fact, it’s even trickier than that: both of those arguments are themselves going to be functions!
  2. Instead of just returning the value like in IO, we’ll return the result of calling the as-yet-unspecified second callback function (r) on the value

In the case of .of, the first of those arguments is basically ignored, and the second one is queued-up to be immediately called with the original value (why the second? Later…).

Let’s try this out: if we write Task.of(5) we get back a Task type with the 5 buried somewhere inside. The only method available from here is the one we assigned the computation function to: .fork. That means we can do this:
Task.of(5).fork(e=>e, x=>console.log(x));

Which will just log (and return) our original 5.

Simple result, but there’s some pretty tricky stuff going on behind the scenes, so let’s spell it all out again.

  1. Task.of is a function that takes an argument, and then defines and stores away a “computation” function with 2 arguments inside a Task type
  2. that function, when run, will apply the original value (5) to whatever the second argument (which will be a function) happens to be.
  3. It names this stored function “fork,” thus making .fork a method of our new Task[5] type.
  4. When you actually call .fork, you pass it 2 functions. And thus, by the logic above, a 5 is then immediately applied to the second, x=>console.log(x) one, causing a side-effect by logging out 5.

Seems pretty convoluted, doesn’t it! But every time you call Promise.resolve(x) you’re following a very similar pattern, at least in terms of the code you write.

That is, if you mentally model Promise.resolve(5) as “creating a Promise of a five” then our Task.of(5) is doing much the same thing in the same way: it’s creating a Type that in some way emits a 5, and thus “holds” a 5. And, just like with Promises, to get access to that encapsulated 5, you’ll later need to use a special method that takes a function which consumes the inner value and does something with it.

However, our Task type is actually a little more reserved than even that: in fact, it’s downright lazy. Promise.resolve(5) actually generates a “Promise” with a 5 in it right away: it (immediately) becomes a sort of stateful container in a particular final “resolved” state that then contains an actual 5. But Task.of(5) doesn’t actually do anything: it’s a purely conceptual “5,” queued up via a closure to appear only when requested.

Wait, wait, how is that any different? Isn’t .fork just an alias for .then, with the arguments just reversed (success callback/error callback vs. error callback/success callback)? They both receive “callback” functions which are called with an inner argument when it’s available, no? Yes… and no. Remember, you saw how .fork was defined: it’s literally just storing an operation that’s waiting for another function before it can run, generate the value in some way, pass it into our callback function, and finally return something.

But that operation itself doesn’t return a Task, or a Promise, or anything special. In the case of Task.of(x).fork(failFn,successFn) it could even just return a value directly. So running .fork, unlike .then, is not going to return a new/transformed Promise/Task.

Don’t worry if you think this means that you can’t chain together multiple Task operations to work with and transform the values “in” the Task: you can! It just means that you can’t chain Task executions.

In fact, .fork will just synchronously return whatever the “computation” function returns. As we’ll see, it might not even have to return anything. Let’s .fork our simple Task.of(5) case again and just use the identity function for the second function (passing the result through untransformed):
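Sketched out:

```javascript
const Task = computation => ({ fork: computation });
Task.of = value => Task((l, r) => r(value));

// identity as the success handler: the value passes through untransformed
const five = Task.of(5).fork(e => e, x => x);
// five is 5, synchronously
```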

Wait… what’s that, it synchronously returns a value?! Well, yes. For Task.of, the logic we set up was just to call the second function with a 5 AND return the result,… a 5. It didn’t have to return the result directly of course (and we’re about to see cases where it literally cannot), but that’s what made sense for now.

Now, if when you first taught yourself about Promises, you spent weeks training yourself out of the habit of thinking that an asynchronous type could ever return a synchronous value, this might be a little maddening. But as it turns out “asynchronous” was probably always too specific. What we’re really modeling here are continuations: operations with dependencies that may or may not be blocked and waiting on some other operations to complete. If you squint away all the boilerplate, you can probably see that Tasks are just a trick that allows us to use our old friend function composition even in cases where the necessary values are not immediately available. Sometimes such operations are blocked and need to wait before continuing on. Sometimes they aren’t. Structurally, the basic Task type can handle either case quite naturally.

The laziness is actually a lot easier to see if we compare Promises to Tasks using setTimeout. Now we’re dealing with a simulated case in which values simply aren’t immediately available:
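A sketch of the comparison (flags stand in for log statements so the eager-vs-lazy timing is easy to see):

```javascript
const Task = computation => ({ fork: computation });

let promiseStarted = false;
let taskStarted = false;

// the Promise executor runs immediately: the timeout is scheduled right now
const pFive = new Promise(resolve => {
  promiseStarted = true;
  setTimeout(() => resolve(5), 1000);
});

// the Task computation does NOT run: nothing happens until .fork is called
const tFive = Task((l, r) => {
  taskStarted = true;
  setTimeout(() => r(5), 1000);
});

// at this point: promiseStarted === true, but taskStarted === false
// tFive.fork(e => e, x => console.log(x)); // only this would start the timer
```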

Here, the constructor code in the Promise is run immediately, eventually triggering our log callback right inside the constructor (that is, causing a side effect) even if we never ever do anything further with pFive. The Task code, however, doesn’t log anything: as we now know, it won’t run until it’s explicitly forked.

What we’ve gained with this is that “describing the computation necessary to retrieve some value” and “running that computation” are now very cleanly separated. Promises are unavoidably both at once: both the contract promising an eventual value and the actual execution of the process that will realize it. That’s what often forces you to hide Promise-based operations inside an extra outer layer of functional wrappers: so that you can describe what they do without accidentally doing them (i.e. set up dominoes without knocking them over).

Here’s a more real-world example:
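The embedded example isn’t shown here; a sketch of the same shape, with a hypothetical `fakeFetchUser` standing in for a real network API:

```javascript
const Task = computation => ({
  fork: computation,
  map(f) {
    // transform the eventual value without running anything yet
    return Task((l, r) => computation(l, x => r(f(x))));
  }
});

// hypothetical stand-in for a real async request
let requestsMade = 0;
const fakeFetchUser = (id, onDone) => {
  requestsMade += 1;
  onDone({ id, name: 'Ada' });
};

const userTask = Task((reject, resolve) => fakeFetchUser(1, resolve));
const nameTask = userTask.map(user => user.name);

// merely *describing* the work did nothing: requestsMade is still 0
// nameTask.fork(console.error, console.log); // only this fires the "request"
```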

We haven’t covered .map yet, but think of it as .then in the case where we’re not returning a new Task, just transforming the eventual value in some way.

Our Task type is just a description of the contract and nothing more. It’s not stateful: you can call tFive.fork as many times as you want, and each time it will execute the same operation: running the same instructions, having the same effects. Even though it’ll coordinate activating the correct callback asynchronously if needed, it, unlike Promises, has no inner “state” that changes from “pending” to “resolved/rejected” over time (this “statefulness” is one reason native Promises are and will always be slower than functional alternatives).

But let’s look even more carefully: the functions that define these types are, in the end, just simple functions, right? But what are they returning in this setTimeout case? They can’t return a 5, because the “5” we’re imagining here isn’t “available” synchronously: it’s only available after a few seconds.

But there IS something useful that we can return immediately: the synchronous result of setTimeout, which is a unique id that you can later use to cancel the operation!

And here we run into the next major difference between Promises and Tasks: in the case of a Promise, that returned timeout id fell into a black hole: it never came back out anywhere. What you return from the Promise constructor function is basically irrelevant. I have no idea what happens to it, if anything.

In the Task case though, the return value isn’t lost at all: it is literally the synchronous result of calling fork (if you want it to be)! Which means that when we decide to execute our operation, we very naturally have a clean way to return, via closure, any sort of API surface we might need, including (and most commonly) a means to cancel the original operation.

This makes sense, right? When you call fork, you’re in the context of the call-site, so whatever called fork gets back whatever control you exposed over the async operation it called.

So here’s an example of how Tasks are inherently easy and natural to cancel and control, while with Promises… well, it’s almost impossible to do in a generic way without resorting to tricks like interweaving some outer variables right into the constructor.
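A sketch of that cancelation pattern (assuming, as above, that whatever the computation returns becomes fork’s return value):

```javascript
const Task = computation => ({ fork: computation });

const tFive = Task((reject, resolve) => {
  const id = setTimeout(() => resolve(5), 1000);
  // whatever we return here comes straight back out of .fork:
  return () => clearTimeout(id);
});

const cancel = tFive.fork(console.error, console.log);
cancel(); // the timeout is cleared: 5 is never logged
```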

You might have been reading a lot of controversy over fetch, cancelable Promises, cancelTokens, etc. It’s a mess. At the moment, the spec basically entails exactly the “tricksy” approach I noted above: creating a special side-effecting token function ahead of time and then piping it into the Promise constructor (or Promise-returning api method, like fetch, or all its asynchronous callbacks, or… etc.).

Well, this mess exists precisely because the Promises/A+ solution inextricably mashes together the (pure) description of an operation with its actual (usually impure/side-effecting) execution, leaving no sensible place to return any separate control over what happens to the execution.

A Promise of x means that x is already on its way, and that the Promise itself is a stateful object representing an eventual future. In this conceptual model, canceling the promise isn’t just a matter of a tricky, awkward api (though it is tricky and awkward): it’s conceptually a BAD thing! It’s like introducing time-travel to a formerly predictable timeline. Because potentially multiple side-effects can depend on the result of a single promise, cancelation can throw a series of predictable, loosely-linked outcomes into disarray: different parts of an application might consider a resource and its effects to be no longer relevant at different times for different reasons.

The value of Task is that it allows you to separate compositional logic from side-effects entirely. There’s no need to deal with the confusions of time travel until time is actually allowed to run forwards in the first place! This is precisely why Task’s method is called “fork” (like forking a process). While javascript is single threaded, we can sort of imagine any code running “in the future” as a separate thread, and thus any code running inside a fork callback as existing in that future thread.

Let’s add some pure functional interfaces to Task to see what that all means:
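A sketch of .map and .chain (ignoring cancelation for the moment):

```javascript
const Task = computation => ({
  fork: computation,
  // Functor: apply f to the eventual success value
  map(f) {
    return Task((reject, resolve) => computation(reject, x => resolve(f(x))));
  },
  // Monad: f returns a whole new Task that picks up where this one left off
  chain(f) {
    return Task((reject, resolve) =>
      computation(reject, x => f(x).fork(reject, resolve)));
  }
});
Task.of = value => Task((reject, resolve) => resolve(value));
```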

Task wouldn’t be a lot of fun if it wasn’t a Functor or a Monad: there wouldn’t be any pure way to transform results. With .map and .chain, though, we can do either: apply a function to the inner value (Functor) or return another, new Task that takes over from where the last one left off (Monad). This means we can easily define a Task but then also define a new one that extends the computations in the old one.

Adding a standard interface for cancelation (i.e. a fork interface that returns a function that cancels the effect) isn’t going to make the implementation of .chain as pretty, but it is still pretty straightforward: we just need to shift the cancelation behavior over to the logic from the new Task whenever one replaces another.
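A sketch of a cancelation-aware .chain, keeping a mutable reference to whichever canceler is currently live:

```javascript
const Task = computation => ({
  fork: computation,
  chain(f) {
    return Task((reject, resolve) => {
      // a bit of localized statefulness: track the currently-active canceler
      let cancel;
      let innerStarted = false;
      const outerCancel = computation(reject, x => {
        innerStarted = true;
        // the new Task takes over: its canceler becomes the live one
        cancel = f(x).fork(reject, resolve);
      });
      // if the first computation resolved synchronously, the inner canceler
      // is already in place; otherwise start with the outer one
      if (!innerStarted) cancel = outerCancel;
      return () => cancel && cancel();
    });
  }
});
Task.of = value => Task((reject, resolve) => resolve(value));
```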

Ew, a bit of statefulness, but at least it’s captured and localized

Obviously, one can be a lot more disciplined about this, and indeed, if you decide to start using functional Tasks/Futures instead of promises, you’ll find that most of the well-established libraries do define these things in a stricter, more comprehensive way. But we’re just learning about the guts of the approach at the moment, what’s possible: not creating a real utility library ourselves.

Anyhow, with these standard methods in place, we now have our endlessly extendable functional toolkit to describe operations over potentially “future” values without inadvertently creating them at the same time.

“Error” Handling

We’ve held off discussing a pretty big element of Task: the fact that it can represent a particular expected value OR something else. Note that we didn’t really say “error.” We’ll get to that in a second. With Promises, handling errors works like this:
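For instance (a sketch; the typo is deliberate):

```javascript
Promise.resolve(5)
  .then(x => x.toUppercase())   // a typo — a bug in our own code
  .then(x => console.log('got', x))
  .catch(err => console.log('rejected:', err.message));
// the bug doesn't crash the program: it's silently absorbed
// and the chain switches over to the rejection branch
```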

Why do we have to be careful?

Because Promises absorb runtime errors. That is, for every supposedly compositional operation you perform, if you’ve made a mistake in your code, then Promises will automatically catch the error and switch branches on you, turning a resolution into a rejection. This property is often actually celebrated: Promises model “try/catch” for asynchronous code!

That might sound appealing, but it’s worth asking why that’s considered a good thing: do we normally wrap nearly every major line of synchronous code in a try/catch block? No: usually we expect our code to outright break when it’s not written well: so that we can fix it! It’s only in very specific, carefully selected situations that it makes sense to use try/catch (where we might not have control over some input and have no other way to detect and recover from that circumstance). Otherwise, we normally expect bugs to crash our programs, forcing us to fix them once and for all. Promises don’t give us a choice: we’re opted into try/catch automatically.

Worse, because of the mess with cancelations, part of the proposed solution is not to fix Promises, but rather to complicate try/catch itself to introduce a 3rd state: a sort of “was canceled” state. Yowsers.

Tasks, on the other hand, simply don’t include such logic in the first place: not hard-coded into the Type at least.

What they do offer instead is something like the Either type: the ability for our “computations” to choose to run either the leftsideHandler with some (usually) unhappy-path value, or the rightsideHandler with some other (usually expected/happy-path) value. Remember how, with our definition of Task.of, we ignored the first function that got passed to .fork? Well, Task.of has a twin:
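In folktale that twin is called Task.rejected; a sketch:

```javascript
const Task = computation => ({ fork: computation });
Task.of       = value => Task((l, r) => r(value)); // value → second callback
Task.rejected = value => Task((l, r) => l(value)); // value → first callback
```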

This just allows us to create a Task that pipes a value into the first callback in .fork

With Task, when we start using the “leftside” branch as a place for errors, the error handler comes first primarily to make sure you remember to have an error branch. In most cases you’re not going to need to create a rejected-branch Task upfront: instead, you’re going to create a Task that can fail gracefully. Here’s an example of wrapping a Promise-returning operation in a Task, wiring the resolution or rejection right into our Task’s rejection/resolution:
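A sketch of such a wrapper (taking a *thunk* that returns a Promise, so the Task stays lazy; `fetchUser` is an assumed, hypothetical API):

```javascript
const Task = computation => ({ fork: computation });

// nothing runs until .fork wires resolution/rejection to our callbacks
const promiseToTask = promiseThunk =>
  Task((reject, resolve) => promiseThunk().then(resolve, reject));

// hypothetical usage:
// const userTask = promiseToTask(() => fetchUser(1));
// userTask.fork(handleError, handleUser);
```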

The point of this is really to handle expected failures: known cases where we’re not going to get the type of value our program needs, and we thus want to just skip all the operations that depend on that value and jump right to that errorHandler that we’ll always include in .fork(errorHandler, successHandler)

Now we could actually have designed a Task that only has a success branch and doesn’t try to model the possibility of failure at all (alternatively, we could do the above but then also express failures by always wrapping the return value in a type like Either… though then you’d always be working with nested types). And if all we ever used Task for was capturing setTimeout, that might make sense, because setTimeout itself doesn’t fail. But it’s so extremely common that async operations involve things (like network operations) that could fail that it just makes sense to have it baked right into the type. (fwiw, a pure Continuation type is a real thing, and in fact has some rather amazing properties).

Now, if you wanted to introduce some try/catch logic in your Task’s computation function, you could of course, and you could even hook that into your Error/Success callbacks if you wish.

Furthermore, if you want to catch errors that happen as the result of running the side-effects in the fork handlers, you could (some popular FL libraries offer an option in their fork methods to catch errors, much like Promises).

But none of this is mandatory. And it’s specifically isolated to areas of the program that actually cause side-effects (and thus could cause completely unexpected errors), rather than forcibly opting in even pure operations like .map and .chain where try/catch really isn’t all that appropriate.

It may sound counter-intuitive, but most of the time we actually DON’T want to automatically catch errors. Most of the time we want to write pure operations that are either written correctly or not (making sure all possible input and return types all match up). And with pure functions we instead would want to use union types like Either or Task that represent different possible paths a program can take and eventually .fold or .fork back down into an effect. Doing that properly can in most cases make runtime errors impossible in the first place.

Sold on the idea? Check out Fluture, which provides a Task-y type to work with and an easy way of converting Promise-returning apis to Futures. Try using them instead of Promises. See how that feels.
