Cancelation vs. Fast-Forwarding in Async Javascript

In my last article about pure, cancelable Tasks/Futures and their advantages over Promises, I talked at length about cancelation done right: requesting an operation that will generate a future value, but then also being handed back a functional option to cancel both it and any of its chained side-effects.

What was so powerful about this “thunked” pattern for cancelation is that it returned the cancelation control directly back to the call site at exactly the time the request was actually executed. That is: the precise part of a program that makes a request then has synchronous and direct access to cancelation without any weird dance of scopes or extra store values to keep track of. It can pass the cancelation function back to whatever function called it, stick it into a larger cancelation queue, whatever.

It also abstracts away the mechanism of cancelation. Different asynchronous operations do cancelation in wildly different ways: from setTimeout’s simple token/clearTimeout system to XHR.abort and so on. The basic cancelation pattern means we can cleanly abstract that messiness out into a common API: all we do is just return a function that:

  1. takes no arguments
  2. returns nothing of importance,
  3. has exactly one known side-effect (or, more precisely: one known cancelation that stops a specific side-effect from having any further effects from then on), and…
  4. does nothing harmful if it’s ever mistakenly called long after the operation has already completed/failed/timed out on its own, etc.
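As a sketch of what that contract might look like in practice (the `delay` name and shape here are mine, not from any library), here’s setTimeout wrapped to return exactly such a thunk:

```javascript
// A hypothetical delay() that returns a cancelation thunk
// satisfying the four rules above.
const delay = (fn, ms) => {
  const token = setTimeout(fn, ms);
  // The returned thunk takes no arguments, returns nothing of
  // importance, and is harmless to call late: clearTimeout on a
  // stale token is already a no-op.
  return () => clearTimeout(token);
};

// The call site gets cancelation back synchronously:
const cancel = delay(() => console.log("fired!"), 200);
cancel(); // "fired!" never logs
```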

That’s cool and all, but why would we ever need such fine-grained control over cancelation?

Well, for one thing: because we need fine-grained control over things like debouncing. We often debounce operations so that expensive side-effects will only happen on the “most recent” of several possible requests for the operation. We can even abstract this behavior out into a functional wrapper so that we can make any function we want both debounced and cancelable. [if this is all just review for you, I’m going somewhere with this, I promise!]
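Such a wrapper might look something like this minimal sketch (`debounced` and its exact shape are my own assumption here, not a standard API):

```javascript
// A hypothetical debounced() wrapper: every call resets the shared
// timer, and every call hands back a thunk that cancels whatever
// invocation is currently pending.
const debounced = (fn, ms) => {
  let token = null;
  return (...args) => {
    clearTimeout(token); // supersede any pending call
    token = setTimeout(() => fn(...args), ms);
    return () => clearTimeout(token);
  };
};

// Only the last call within 200ms would actually fire...
const search = debounced(term => console.log("GET /search?q=" + term), 200);
search("David");
const cancel = search("David Bowie"); // supersedes the first call
cancel(); // ...unless the user navigates away first: no request at all
```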

Now think about a user typing into a search field in a web app that’s hooked up to a network request. If the user types in “David Bowie” we don’t really want to fire off a network request for search results on every single keystroke. Instead, we can wait 200 milliseconds after each keystroke and fire off a request only if there hasn’t been another one in that time. If there is another, then we’ll wait another 200 milliseconds, and so on.

Great! But because we can also cancel the debouncing entirely, we can also prevent pointless network requests in the case of, say, a user typing but then navigating to a new page. So we have total control from every angle.

On top of that, the additional ability to cancel network requests is a great addition to this pattern as well: if the user types a letter… 200ms pass… we send out a network request… but then they suddenly type again before the result returns: network cancelation will allow us to kill the old request before we fire off the new one. This pattern minimizes our bandwidth usage, prevents weird out-of-order effects, and so on.

So, that’s all important, cool, powerful stuff. But here’s the wrinkle for today: it’s not logically the only sort of control you can have over an asynchronous operation!

In fact, there’s something of an exact opposite that I want to call out and talk about today: fast-forwarding. Instead of handing the call-site the ability to cancel an effect, you could, alternatively, demand that it “turn in its work”… early.

Here’s how that could work in the case of a simple setTimeout-backed delay:
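A minimal sketch (names are mine): the call site gets back a thunk that stops waiting and fires the effect immediately, with a guard so the effect can never run twice:

```javascript
// A fast-forwardable delay: instead of a cancelation thunk, the
// call site gets back a thunk that "turns in the work" right now.
const delayFF = (fn, ms) => {
  let done = false;
  const run = () => { if (!done) { done = true; fn(); } };
  const token = setTimeout(run, ms);
  return () => {
    clearTimeout(token); // stop the clock...
    run();               // ...and fire the effect immediately
  };
};

const ff = delayFF(() => console.log("done!"), 1000);
ff(); // logs "done!" immediately, a full second early
```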

So, that’s sort of neat. But why would you ever want to cancel or fast-forward things anyhow?

Let’s return to debouncing to see why. Simply put: that 200ms delay can be eons in terms of an application’s state. Other non-debounced actions might happen in that period: happening chronologically after the original debounced event “happened,” yet in effect updating the application’s state before the debounced event has its turn. The result would be an inaccurate “history” of application state. To solve this problem we wouldn’t want to cancel the debounced effect, we’d really just want to be able to cancel the debouncing. That is, we’d want to compress our application’s sense of time back into a singularity, have the effect happen immediately, and then let things run forwards from the new effect onwards. That is: make the debounced effect fire immediately such that a new event can then also fire immediately, ensuring the right chronological order.

Now, it’s a bit hard to explain the sorts of cases where this can be a problem precisely because they tend to involve an application of a certain degree of complexity. But they are, I assure you, real and, in fact, can cause extremely confusing and hard to debug behavior that you may well have run into without even realizing it.

The coders most familiar with this problem are probably not your standard front-end developers but, in fact, video game programmers. Those folks deal with this problem all the time: multiple users emitting application updates at various points, but with some actions debounced, some not. I’m willing to bet that anyone who has tried to create high-performance applications that allow more than one user or process to create events at any time (collaborative document editing, say) has run into it too. Most web developers don’t run into it precisely because they only have one very sluggish user emitting events at any given time, and rarely fast enough for debounced events to screw things up. Or, if they do, because most applications don’t yet actually have a “single source of truth” keeping track of all this stuff.

But increasingly, web applications are full of extremely expensive operations that require careful management, including techniques like debouncing. And, increasingly, they do have a single-source-of-truth acting as the stateful historian tracking all changes. Given that, anyone who starts to play around with precise event timing is liable to run into these sorts of ordering problems.

I don’t want to be purely hypothetical here though: one case I’ve personally run into is debounced typing vs non-debounced row deletion/reordering. Here’s the gist of it: a user types into a WYSIWYG with a debounced store update that’s just one in a whole row (list) of them, then quickly deletes that row. The result is that the row will first vanish… but then immediately reappear (because the deletion event will arrive before that final update to the row does, and that late update has the effect of recreating the row)! Or, alternatively, they do the same thing but add a row above it: now the out-of-order update thinks it’s updating the wrong list index. The result is… chaos.

Chaos isn’t done with us yet though, because debouncing is only the simplest case where we might want to give a request’s call-site the power to “stop waiting” for some result.

How about a network request for some bit of data that we already have some “good enough” fallback for? If it goes on too long, we might want to just give up and return that fallback instead. Now, of course, we could always write that fallback effect as its own operation and dispatch it alongside canceling the old one. But this has the downside of, well, being a whole ‘nother operation we’d have to keep track of. A potentially fast-forwardable request, on the other hand, would allow the fallback dataset to just flow through the exact same, already established, operation pipeline (no matter how complex it might be). And, more importantly: with that weird, conditional detail abstracted away from the core operation code that generates the effects!

Another example might be an expensive set of chained asynchronous steps to process some large bit of data, but with each step producing some result that’s acceptable/approximate enough to be dispatched as the final effect if we decide that time has run out and we need to run with what we have available (think image processing or creating fractal topologies in multiple passes: at any point we can give up and abandon further work, and still have an acceptable type/result).

It’s not that there aren’t other ways to handle those cases: it’s just that the alternatives tend to be elaborate, imperative, and often confusing to read and reason through after the fact. Fast-forwardable operations, on the other hand, would in theory make that control as simple as cancelation was: no extra scope-crossing tokens to create and pass around, no special syntax or extra state to keep track of, no alternative operations to write and run. Plus, it’d be easy to put a bunch of “fast-forwardables” into an “oh shit, fire all the missiles now, we can’t wait any longer” queue. You’d just start the operation, and then, from the exact point it was triggered, you’d get back a thunk’d means to give up and demand whatever results are available, so that the application could move on with its lifecycle without waiting for everything in the queue to complete.

Of course, the obvious problem now is: mixing cancelation with fast-forwarding gets extremely complex very quickly. In the examples above, instead of writing higher-order functions that annotated regular functions to make them “debounced,” I could have just written out examples of functions that create cancelable debounces OR fast-forwardable debounces instead. It would have been simpler and a lot easier to get people to read!

But why didn’t I do that? Because our goal as ethical programmers should be to be serious about the complex abstractions we advocate that people use. And it would have taken us longer to run smack into the actual problem.

What is the real problem? You might have noticed that those cancelable/fast-forwardable operations actually mixed together two different things. That is, the fast-forwardable implementation actually mixed cancelation together with fast-forwarding.

If you create a function using the latter logic and then call it, it’d return a fast-forwardable control. But if you called it again before the delay had run out, it’d cancel the previous request and instead create a new one: one that, sure, returned a fast-forwardable function… but one that still had cancelation-based logic working under the hood!

That’s not a given, though. If you were to call that function a second time within the delay period, instead of canceling the previous request, we could have just fast-forwarded it instead (causing the effect immediately before kicking off a new one). That we didn’t do that was a choice. It might have been a choice that made sense in many cases, including whatever you assumed that vague implementation required, but it was a choice nonetheless. And we need to be wary of creating examples that fit “most cases” exactly because we haven’t thought through the alternatives (if we had, maybe we’d be writing different examples!).

The reality is that we’re dealing with a matrix of possible options when it comes time to debounce side-effecting actions. And that’s a very messy thing to express in a simple way. Should a debounced function, called a second time within a certain period, cancel… or fast-forward? There’s no universal answer. We’d have to choose a particular implementation and run with it. Likewise, should that function return a simple cancelation function or a simple fast-forwarding function? Remember: what we originally celebrated was the ability to return just a simple function that 1) took no arguments, 2) had no other side effects, etc. etc.

But if that’s so, then we have to grapple with the reality that a single anonymous function could do almost literally anything, especially in untyped javascript. How would we know whether it’d cancel or fast-forward? From the perspective of a larger application, we simply can’t know: it’s just a function, after all, and so all we know is that it’s the one control that was returned synchronously, and our only choice is whether to call it or not. And yet: now it’s ambiguous whether calling that function will cancel an effect or cause it to fire immediately! The resulting effects are radically different!

Anyhow, that’s what’s troubling me. Cancelation vs fast-forwarding turns out to be a messy problem. It’s easy enough to ignore if all you care about is cancelation. But my point is that that’s not all we have to worry about. We just haven’t been worrying about it (in javascript-land, mostly because most Promise-based approaches haven’t even gotten cancelation working well in the first place!).

There are, of course, some ad-hoc solutions to this problem. Instead of returning a nullary function from Task.forks or debounced function executions, we could always return an object with both .cancel and .ff methods to call. Now we have both options! The nice thing about this is that not all forks/functions would have to return working interfaces for both: just the ones that made sense. The beauty of these sorts of control functions is that doing nothing at all (i.e. noop functions, now easily written in ES6 as _=>{} ) is perfectly acceptable. But this approach is dramatically ad hoc, especially because now programs would have to rely on knowing the exact names of the operations to call on the returned object. Before, we just had a simple, carefully restricted function to call. An object interface is a complex, extremely demanding signature.
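To make that concrete, here’s a sketch of that object interface over a simple delay (all the names here are my assumptions):

```javascript
const noop = _ => {};

// Return *both* controls, with a done-guard shared between them.
const delayControls = (fn, ms) => {
  let done = false;
  const run = () => { if (!done) { done = true; fn(); } };
  const token = setTimeout(run, ms);
  return {
    cancel: () => { done = true; clearTimeout(token); },
    ff: () => { clearTimeout(token); run(); },
  };
};

// An operation that can only ever be canceled just fills the gap
// with a noop, e.g.: { cancel: () => xhr.abort(), ff: noop }
```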

Another option, though, might be recoverable cancelations. By that I mean nullary cancelation functions that do return something: yet another function that would fast-forward the now already-canceled effect. Then all you’d have to do to cancel an effect’s timeout and also force its execution is just to call it twice:
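Here’s one possible sketch of that shape (the names and details are mine):

```javascript
// A "recoverable" cancelation: the nullary cancelation thunk
// *returns* a second thunk that fast-forwards the canceled effect.
const delayRecoverable = (fn, ms) => {
  let called = false;
  const run = () => { if (!called) { called = true; fn(); } };
  const token = setTimeout(run, ms);
  return () => {
    clearTimeout(token); // first call: cancel the timeout...
    return run;          // ...and hand back a fast-forward
  };
};
```

Canceling alone is one call; canceling and then forcing the effect is literally just calling the result of that call.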

That’s actually not that crazy! Fast-forwarding ultimately involves canceling the timeout too, after all, and the act of canceling a timeout could always just return something that would forcibly cause the original operation to run immediately. You’d have to be careful to keep the operation from ever getting called twice (hence the “called” guard), but it’s a plausible solution at least.

And yet… it still rubs me the wrong way, because in addition to the extra restriction (cancelation functions MUST return another function, or else risk breakage), calling something twice to do one thing just feels gross. I mean, just look at it:
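A minimal inline sketch (assumed shape, with the guard elided for brevity) makes the awkwardness visible:

```javascript
// A minimal recoverable delay, inlined just to show the call shape:
// canceling returns the effect itself as a fast-forward thunk.
const delayR = (fn, ms) => {
  const token = setTimeout(fn, ms);
  return () => { clearTimeout(token); return fn; };
};

const cancel = delayR(() => console.log("fire!"), 1000);
cancel()(); // cancel it… then immediately un-cancel it?! Yuck.
```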



Of course, like I said, we’re trying to deal with complex, real-world interface designs. So the real complication comes not just from single interfaces, but from their implementations in higher-order operations. I haven’t mentioned Tasks or even Promises yet, right? But ultimately we want to get there: create a coherent type-signature for cancelation and fast-forwarding. And man: that’s tricky. Because one thing that Tasks can do is combine with each other: either via .ap (apply two parallel effects to an operation) or .chain (apply two asynchronous effects sequentially). To get all that working, we’d have to make Tasks much more complex: forcibly wrapping their constructor returns to ensure that they’re functions, then forcibly wrapping the returns from those returned functions to ensure that they’re also functions.

There’s another wrinkle there too, because Tasks need to carry through some logic for cancelation that works across operations like .chain, which involve two different Tasks, each of which might have their own cancelation/fast-forwarding logic. But back when we just assumed that cancelation was the only thing to worry about, we could have implemented that passthrough as a sort of stateful switch. Consider:


In that case, we can write our merged cancelation interface to run the cancelation function from the first Task OR the cancelation function from the second, depending on what point the computation had reached when cancelation was requested.

But for fast-forwarding, that won’t do: we’d want to call ALL the fast-forwarding functions at once, such that all the chained operations are jumped ahead from that point on. So the logic involved actually has to collect ALL the nullary fast-forwarding functions in any computational chain and then execute them all at once if the function returned from fork is ever called: fast-forwarding whatever in that chain can be fast-forwarded.
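One way to sketch that (again with a simplified stand-in Task, shapes mine): rather than literally collecting a list, firing the current fast-forward completes the first Task, whose callback synchronously swaps in the next Task’s fast-forward, which we then fire as well:

```javascript
// Stand-in Task whose fork returns a *guarded* fast-forward thunk.
const delayTaskFF = (value, ms) => ({
  fork: onDone => {
    let done = false;
    const run = () => { if (!done) { done = true; onDone(value); } };
    const token = setTimeout(run, ms);
    return () => { clearTimeout(token); run(); };
  },
});

// Fast-forwarding a chain must flush the whole pipeline: the first
// ff() completes A, which synchronously forks B and swaps B's thunk
// in; the second ff() then completes B. The done-guards make any
// redundant call harmless if A had already finished on its own.
const chainFF = (taskA, fnToTaskB) => ({
  fork: onDone => {
    let ff = taskA.fork(a => { ff = fnToTaskB(a).fork(onDone); });
    return () => { ff(); ff(); };
  },
});
```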

Anyhow, I don’t have a great solution to offer to all this yet. I’m just going to keep thinking about it! I just thought it was an interesting wrinkle in the larger cancelation debate: one that makes it even larger still!




Primarily Javascript, potentially personal, possibly pointless. I welcome and am fascinated by your many marvelous opinions.

Drew Tipson
