TL;DR: APIs are hard to use. We’re making it way easier.
- Game programmers know a trick we should all be using: an inspector.
- membrane.io is an inspector for the Web that makes interacting with APIs radically simpler and enables a whole new way to build customized tools for teams or individuals.
Every online service has two faces. One is the GUI: the website, meant to be intuitive and easy to use. The other is the API: the machine-readable version. One is human-friendly, the other robot-friendly. Two interfaces into the same data.
Most people find GUIs intuitive and APIs not so much. Programmers, though, happen to find APIs intuitive. We understand that GUIs are just a layer over a CRUD API, and we’ve developed an intuition around that. As a programmer, I frequently see opportunities where switching from GUI to API would be advantageous. The GUI is limited to what some UX designer thinks you’d like to do. The API is freer: a do-whatever-you-like kind of thing.
Here’s an example. I can control my connected lights through their app. I can even do a bit of programming through the app’s GUI, for example, to turn off the lights after a couple of minutes. The fact that I already know how to set timers and flip booleans but can’t use that knowledge here is, to me, not ideal. There are things that a GUI simply won’t let me do, so naturally I want to use the API.
To use the API, however, there’s no other option but to read its documentation, figure out which client library to use, figure out how to handle authentication, get an API key, find a place to host and run the code, etc. Who has time for that? Even HTTP is an implementation detail that I shouldn’t need to know about. All I want to write is:
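(A purely hypothetical snippet; `lights` and `minutes` are illustrative names, not a real API.)

```
lights.livingRoom.turnOff({ after: minutes(2) })
```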
There is a better way, and game programmers have sort of been doing it for ages.
As a game developer, I’m used to working on virtual worlds where every single thing is programmable: cars, doors, people, and even the weather are “connected” through APIs we can manipulate to create gameplay. The real world also has a ton of programmable things, and I’m not just talking about connected lights and appliances, but things that are really important to our everyday lives. Everything is electronic these days. For some reason, though, programming the virtual worlds of video games is way easier than programming the real world we live in. Let me explain.
Game developers typically use an engine/editor that allows them to click an object in the virtual world and inspect its properties. The inspector shows values and lets devs tweak them with immediate feedback. Just by looking at these properties, programmers can understand what the object is capable of: in other words, what the object’s API is. This inspector is a unified interface for every object in the game, regardless of type: it knows how to display NPCs, spawn points, walls, weapons, particle systems, etc.
Just by clicking on an object a programmer gets immediate understanding of the API that the object provides. What would an inspector look like for the real world?
What if on GitHub I could right-click an issue, select an option called “Inspect”, and immediately get a live API view of the issue itself (live values) alongside a text editor with everything wired up to control that specific issue, plus a green button labeled “Run” that you can click to have the code go live? A scripting environment for the real world. Squeak, but for things we actually use. The same inspector would work across all your services and give you programmatic access to Gmail, Slack, Jira, Spotify, Google Sheets, your car, phone, thermostat, lights, and anything else with an API. You’d be able to mix and match as needed. To me this sounds like a step towards a better Programmable Web, and a hell of a lot easier than having to understand the specifics of each individual API. Data is widely available, but how can we remove all the overhead involved in accessing it?
If we programmers had an “Inspect” button for everything on the web, perhaps we would be building ad-hoc features as needed instead of requesting them. Typically, a feature request for a product will only be considered by the developer if it’s aligned with their business goals; a big enough market must exist for it. And even then, if you’re lucky enough 🤞, the feature you really needed could take months to be released. Computers should help us regardless of whether millions of people need something or just one.
I only watch 3 channels; why can’t I get a remote with 3 buttons?
Let me show you how I’m building this.
I will now explore three parts that I believe are necessary for a Universal Inspect Button, and how Membrane implements them.
The first part is the “Universal Object Model” (for lack of a better name). If the DOM is a programmable wrapper for HTML documents, the Universal Object Model is a programmable wrapper for web APIs. Game engines can have inspectors because objects exist within a uniform, introspectable type system (e.g. C#), so we need an analogous concept for the Web. Of course, there is an infinite number of web APIs, so the system must be pluggable, with “drivers” that know how to map an API into the Universal Object Model.
In Membrane this concept is represented by a user’s Graph: a single data structure containing all the nodes exposed by drivers (API connectors), which the user can use to interact with her services. Nodes in the graph typically represent resources available through the API. The Graph is personal, as drivers are bound to user-specific accounts. In Membrane, users bring their own API keys.
The second part is the ability to reference any node in the Graph. A sort of URL on steroids. URLs can point to arbitrary resources, but they are limited in that they cannot point to data inside of, or referenced by, a resource. For example, try to craft a URL that points to the “star count” (an integer) of React’s repository on GitHub. You can’t. GitHub exposes an endpoint to retrieve a repository resource as JSON:
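(This is GitHub’s actual REST endpoint for the React repository; the star count lives in the `stargazers_count` field of the JSON response.)

```
GET https://api.github.com/repos/facebook/react
```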
It’s now up to you to drill down into the response and get the stargazers field. Again, what if you merely wanted to point to it? No can do. It’s like it exists in a different, unreachable-by-URL dimension. But why? There’s no reason for data to be sliced in these arbitrary ways other than convenience for the implementers. There’s value in being able to point to more granular pieces of data, though. For example, I could create a generic application that tracks a number over time and notifies me if it spikes. I could then use this generic application to keep track of stars on GitHub, my blood sugar level, or the price of Dogecoin just by pointing it at one or the other. The concept of a C pointer, applied to the Web.
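To make that idea concrete, here is a minimal sketch of such a generic tracker. A plain function stands in for the “pointer”; all names are hypothetical, not Membrane’s actual API:

```typescript
// Sketch of the "generic number tracker" idea: the tracker only needs
// something it can point at that yields a number. A plain function
// stands in for a Ref here (hypothetical; not Membrane's real API).
type NumberSource = () => number;

class SpikeTracker {
  private history: number[] = [];

  constructor(private source: NumberSource, private threshold = 2) {}

  // Sample the source once; report a spike when the new value exceeds
  // `threshold` times the average of all previous samples.
  sample(): boolean {
    const value = this.source();
    const spiked =
      this.history.length > 0 &&
      value >
        (this.threshold * this.history.reduce((a, b) => a + b, 0)) /
          this.history.length;
    this.history.push(value);
    return spiked;
  }
}

// The same tracker works for GitHub stars, blood sugar, or Dogecoin;
// only the source it points at changes. Canned data keeps this runnable.
const readings = [100, 110, 105, 500];
let i = 0;
const starCount: NumberSource = () => readings[i++];

const tracker = new SpikeTracker(starCount);
const results = readings.map(() => tracker.sample());
console.log(results); // only the jump to 500 registers as a spike
```

The tracker never knows what the number means; swapping in a different source is the whole trick.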
In Membrane, you use “Refs”, which are analogous to URLs but actually designed to work with programmable interfaces: Refs are typed, and arguments are explicit (let’s be honest, we’ve been abusing the original intent of URLs for a while). Just like a URL, a Ref encodes the steps needed to reach a particular node in the graph. For example, to point to React’s repository on GitHub you would use:
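(Illustrative syntax; the exact driver grammar may differ.)

```
github:users.one(name:"facebook").repos.one(name:"react")
```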
This Ref is semantically equivalent to the URL above; they both point to the same conceptual data: the React repository. For now, try to ignore the verbosity of a Ref; there are good reasons for it, I swear. To point to the stargazers node you’d simply append
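(Again with illustrative syntax, appending the field name yields:)

```
github:users.one(name:"facebook").repos.one(name:"react").stargazers
```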
Conversely, you can point to a single user:
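(Same illustrative syntax:)

```
github:users.one(name:"facebook")
```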
or simply to the entirety of the GitHub graph as exposed by its driver:
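(Illustratively, the bare driver root:)

```
github:
```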
This works for any other service for which there is a Membrane driver; I’m just using GitHub as an example. By the way, drivers are open source and community-driven.
Now that we can point to arbitrary pieces of data in any web service (provided that someone has written a driver for it; Membrane drivers are easy to write, I promise), we need the third and final part: the ability to turn the things we see through a GUI (for example, GitHub’s website) into a Ref so that they can be inspected in the Graph. This is the equivalent of clicking on an object in the virtual world of a video game.
In Membrane we use a simple trick to make this work. Each driver recognizes the set of URLs it knows how to interpret. For example, the GitHub driver knows what GitHub URLs look like and what they refer to, so it can turn a regular GitHub URL like:
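(For example, the React repository’s page:)

```
https://github.com/facebook/react
```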
Into the Ref:
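(Hypothetical Ref syntax, as before:)

```
github:users.one(name:"facebook").repos.one(name:"react")
```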
Which in Membrane is a programmable, typed construct.
(Notice how the URL is a regular GitHub URL, not a GitHub API URL. The driver understands both, but the former is what allows us to go from GUI to API, since it describes what we see in the browser.)
Let’s recap. We now have:
- A graph through which we can access arbitrary data, typically from APIs
- A way to reference (and access) any node in the graph
- A way to turn URLs into graph references
So if I’m browsing the web, I now have a way to turn the thing I’m looking at (an email, a Jira ticket, a GitHub issue, etc.) into a programmable version of itself. A standard way to go from GUI to API that works across all services. The Universal Inspect Button.
Finally, the Graph wouldn't be interesting if we couldn't write code against it to create tools, personal or otherwise, but that's a topic for a follow-up. Sign up at https://membrane.io to receive updates and get early access.
Membrane brings a radically different way to interact with APIs in hopes of facilitating the creation of tools and squeezing out more functionality from our computers. Here’s a list of topics I'll be writing about in the coming days:
- How we abstract away the concept of pagination and the quirks of each individual implementation. Simply use `reduce` to iterate over anything 🔥
- Capability-based system so programs are restricted to a subset of the Graph 🔥
- Build personal or team dashboards by composing functionality from multiple services and rendering custom node views 🔥
- Open-sourcing Membrane's core so users can self-host it. Privacy and security are very important to us 🔥
- … and much more that I’m excited to share with you.