Transparent APIs: Motivation Behind Redite, and Asynchronous Proxies

When I first made Redite, I had one goal in mind: a fully asynchronous, basic Redis library with an API similar to accessing regular objects. For the most part, I believe I managed to accomplish that. However, due to limitations in how Proxies currently work in JavaScript (the return values of various traps get coerced or discarded), I had to turn those kinds of operations into functions instead of standard syntax like delete db.foo or db.foo = 'bar', which really annoys me.

The original v1.0 of Redite required the user to attach the word .get or ._promise (the latter provided for compatibility with Rebridge) to the end of any chain in order to get the value they want, and then either branch a Promise off of it or use the async/await syntax, like so:

const db = new Redite();
const result = await db.foo.get;

This may seem fine to some people, with the verb stating the action at the end, but to me it seemed quite intrusive and didn’t line up with my vision for the project, because:

  1. It reserves a key that the end user could potentially wish to use.
  2. It’s not as close to regular object syntax as I’d like.

So I did some research to figure out the most natural API I could get for it, and this is what I managed to achieve with v3:

const db = new Redite();
const result = await db.foo;

Which can be compared to accessing a regular object like so:

const obj = {foo: 'bar'};
const result = obj.foo;

I achieved this by manually adding a case for the then property, making it return a function that returns a Promise, which in turn practically gets hidden by the await keyword.
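The trick can be sketched like so (a minimal illustration, not Redite’s actual implementation; makeAwaitable and fetchValue are hypothetical names standing in for the real chaining machinery):

```javascript
// Sketch: special-casing the `then` property so a Proxy can be awaited.
// `fetchValue` stands in for whatever asynchronous lookup the chain performs.
function makeAwaitable(fetchValue) {
    return new Proxy({}, {
        get(target, key) {
            if (key === 'then') {
                // `await` treats any object with a callable `then` as a
                // thenable, so forwarding to a real Promise's `then` makes
                // the proxy itself awaitable.
                const promise = fetchValue();
                return promise.then.bind(promise);
            }
            // A real implementation would extend the key chain here.
            return undefined;
        }
    });
}

const db = makeAwaitable(() => Promise.resolve('bar'));

(async () => {
    const result = await db; // `await` unwraps the thenable
    console.log(result); // "bar"
})();
```

Because await only cares about finding a callable then, the proxy never has to actually be a Promise; it just has to quack like one at the right moment.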

I personally call these “transparent APIs”, although I’m not sure that’s exactly the right term for them; I don’t really know what else to call them. Essentially, I view it as creating APIs in such a way that they utilise core language features to make it seem like they’re doing less work than they are: all the heavy lifting is transparent to the end user, so they only see the end result.

This design pattern can especially be seen in use in Python, where the language provides various different “dunder” methods (class methods which have a leading and trailing double underscore, like __init__) which are then treated specially by Python in situations like iteration or adding two values together. (There’s a good talk on using dunder methods like this, given by Raymond Hettinger: https://youtu.be/wf-BqAjZb8M?t=1480)

Of course, the main problem with this sort of system is: how do you do asynchronous tasks with them? Well, in Python’s case, asynchrony didn’t really exist before 3.4. This wasn’t a problem, as libraries would do basically everything synchronously. With the introduction of asyncio in 3.4 and native async/await syntax in 3.5, however, people wanted ways to do asynchronous iteration, context managers, and the like, which resulted in the creation of the __aenter__, __aexit__, __aiter__, and __anext__ dunder methods, giving language-level support for asynchronous iterators and context managers. Beyond these specially crafted methods, though, there’s not really much other support for asynchronous operations.

In JavaScript, the closest equivalent of Python’s dunder methods is the Proxy object introduced in ES6, at least when it comes to overriding common operations like getters and setters, with Symbols used instead for things like iterators. Asynchronous tasks are a lot easier in this case, as you can just return a Promise from the get trap and you’re off to the races. But what about the other traps: set, deleteProperty, and so on? Well, that’s the problem, at least in Redite’s case.
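As a sketch of the easy half (the Redis round-trip is stubbed out with a Map; none of this is Redite’s actual code), returning a Promise from the get trap is enough for normal await syntax to work:

```javascript
// Sketch: the `get` trap's return value reaches the caller untouched, so
// returning a Promise from it gives you an asynchronous getter for free.
// The Map stands in for a real Redis connection.
const store = new Map([['foo', 'bar']]);

const db = new Proxy({}, {
    get(target, key) {
        // Pretend this is a round-trip to Redis.
        return Promise.resolve(store.get(key));
    }
});

(async () => {
    const result = await db.foo;
    console.log(result); // "bar"
})();
```

This works because the language places no constraints on what get returns; as the next section shows, the mutating traps are not so lucky.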

The return values of the set, deleteProperty, and other traps don’t take the value you want to return into consideration: the language either coerces it (to a boolean indicating success) or drops it completely. This behaviour isn’t exactly a problem if you’re just doing simple operations in them, say an implementation of a simple cache, but it’s a very big problem if you want a transparent API with asynchronous operations.
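To make the problem concrete, here’s a minimal sketch (the Redis write is faked with a Promise) of how a set trap’s return value gets swallowed:

```javascript
// Sketch: a `set` trap's return value is coerced to a boolean success flag,
// so a Promise returned from it is silently discarded.
const db = new Proxy({}, {
    set(target, key, value) {
        const pending = Promise.resolve(value); // pretend this writes to Redis
        return pending; // coerced to `true`; the caller never sees the Promise
    }
});

// On top of that, an assignment expression always evaluates to the assigned
// value, never to what the trap returned, so there is nothing to await:
const result = (db.foo = 'bar');
console.log(result); // "bar", a plain string, not a Promise
```

So even if the trap could signal its Promise, the assignment syntax itself throws the information away; the only observable effect of the return value is success (truthy) versus a TypeError in strict mode (falsy).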

In my opinion, this is a bad decision for people who wish to create interesting applications of metaprogramming while still keeping the benefits of patterns like asynchronous actions. And sure, await db.foo = 'bar' might seem weirder syntax-wise than await db.foo.set('bar'), but looking at it from a human’s perspective, I’d much prefer it.

So hey ECMA, can we get async proxies yet?

What are your thoughts on this style of API?