I/O bound

JULY 13TH, 2016 — POST 191

Daniel Holliday
EXECUTE

--

I’m fighting a losing battle. Every morning I bring a 60% mechanical keyboard to work in my bag — one without legends (letters, numbers, symbols) — and plug it into work’s computer. Initially, I got the “oh that’s cool” from my coworkers — the custom keyset colourway of classic two-tone grey, a pink cluster on the right, green keys for the modifiers either side of the spacebar, and a triumphantly red escape key is striking in a sea of shiny, slimy black rubber domes. The sound of Cherry MX Clears, the switches that sit underneath every key, is obnoxious, not helped by the aluminium plate and case, which can give the board a resonant ‘ping’ when keys are released and their springs fire them back upright. But even if coworkers look at it with idle fascination, their reaction borders on speechless disgust when I tell them how much it cost. For them, at that price, the $30 Logitech combo the company provides is perfectly fine.

The battle I’m losing is trying to convince them otherwise. And it’s fought on a battleground to which troops are once again flocking. The battleground is I/O, inputs and outputs, the conduit by which humans are able to relate to computers. But the fact that this battleground is largely invisible is testament to its importance, its naturalness, its paradigmatic force to shape not only technology but everything. It’s impossible to overstate just how fundamental I/O is.

One can get an insight into its importance from rumours of a headphone-port-less iPhone, the possibility of severing a sensory umbilical being enough to whip up the tech journalism cavalry. More recently, The Atlantic reported on the continued search for the next computer interface, solidifying an implication that a cursory glance at the history of consumer electronics might suggest.

When the Apple Watch was announced with its new input method, the Digital Crown, Nilay Patel of The Verge chalked up Apple’s success to an aggressive exploration of input. The first consumer-ready mouse was imperative for GUI OSes like that of the Apple Lisa. The clickwheel on the iPod was the best use of space to allow efficient scrolling of huge music collections. And multitouch on the iPhone… well, shit, look around. Sure, the Digital Crown didn’t fulfil this promise; it wasn’t even a footnote at the recent watchOS 3 announcement at WWDC. But if we fold The Atlantic piece onto this thinking, it becomes almost impossible to conceive of a Next Big Thing™ happening without a shift in the mode of interaction.

This problem is becoming increasingly hard to solve, however, as we come to nestle up against what are hard boundaries for humans and computers. And these boundaries can be defined in terms of I/O. Humans’ essential boundary is our output capacity: we can only make so many moves in so many hours because our bodies are made of meat. I type at around 95wpm, for example. If I really applied myself, I might be able to get up to 120wpm by paying attention to my technique, using the right ‘Shift’, and double-thumbing the spacebar. If I moved away from the QWERTY layout and embraced Dvorak or Colemak, layouts designed for more efficient typing, I might be able to crack 150wpm (provided I learn to adapt before succumbing to RSI or arthritis).

If I really wanted to take this too far, I could get into stenography, the technique used by those taking notes in court or parliament, in which instead of typing individual letters, phonetic units are “chorded” and computer software does the rest. I might be able to get to 200wpm, and that’s a big might. But I know for sure my brain is still going to output words at a speed no keyboard system in the world is able to accommodate. My mechanical keyboard — with its programmability letting me keep things like arrow keys on the home row and bind strings like my email address to a single key press, and blank keys forcing me to keep my eyes off the board — allows me to get far above a stock-standard rubber dome in terms of output (or so I tell my coworkers).
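A board like this typically runs an open-source firmware along the lines of QMK, and binding a whole string to one key takes only a few lines of C. Something like the following sketch, assuming a QMK-style setup (the keycode name and email address are placeholders, not my actual config):

    #include QMK_KEYBOARD_H

    // A custom keycode for the email macro (name is illustrative).
    enum custom_keycodes {
        KC_EMAIL = SAFE_RANGE,
    };

    // Intercept the custom keycode and type the whole string in one go.
    bool process_record_user(uint16_t keycode, keyrecord_t *record) {
        if (keycode == KC_EMAIL && record->event.pressed) {
            SEND_STRING("someone@example.com"); // placeholder address
            return false;                       // handled here, stop further processing
        }
        return true; // every other key behaves as normal
    }

Arrow keys on the home row work on the same principle: a momentary layer key held under a thumb turns a cluster of letter keys into arrows for as long as it’s held, so the hands never leave home position.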

Compared to our paltry capacity for output, our input capacity is seemingly unbridled. Provided the data comes through one of our few ports (this is important, I’ll come back to it), we can literally drink it in. Our eyes and ears are phenomenal chasms which data from the world fills, our brains (almost) experts at its interpretation to assist our navigation of the world. On the input side, we’re probably close to as good as computers are at outputting.

Computers, spidered into networks of electricity and electro-magnetic radiation, are similarly unbridled in their output. And that’s while they’re still based in binary; with quantum computing, this can only increase inordinately. They are specifically designed for output, for simply being better than “manual” work. But computers’ essential boundary is their inputs. Even if a computer had every port imaginable, the ways computers can receive information are pathetically narrow compared to ours. Computers need everything made blindingly explicit, the inputs painfully precise. That’s why we use keyboards. We hit the ‘A’ key, the computer gives us an ‘A’ onscreen — it gets it. Even with developments in machine learning, object recognition, and context-sensitive artificial intelligence, computers are pathetically pedantic.

As The Atlantic piece points out, we need something — we need some way of closing this gap. Both of us, humans and computers, are so much better than this and we know it. So even though we can invent a bunch of different ways to get information into a computer, upping its input capacity, we hit a wall when we try to expand human output capacities. We don’t really get hardware upgrades on our bodies. We have fingers, hands, voices, all of which are cripplingly inefficient. Before you tell me that voice is the next interface paradigm, have a go at reading this piece aloud. Then imagine if that’s how I had to compose it.

Computer interfaces have always been a hack, some way to repurpose our meaty hardware. And this hacking is exactly what makes certain technologies like VR and even the humble headphone so transformative. They hijack our meaty means of getting stuff into our brains by presenting it as if it were natural, simulating a sensory “stage” (interestingly, both taste and smell have largely been put into the too-hard basket). Hacking our means of output is just harder.

The Atlantic piece concludes with what we all accept will probably be the next paradigm: something tantamount to a neural lace, the idea recently popularised by Elon Musk. A brain-computer interface could literally unbind both entities: our shitty output no longer fed into their shitty input. This is as wild to think about technologically as it is culturally, paradigmatically. What this could enable is perhaps inconceivable just yet.

Or maybe I am conceiving of it and just having a hard fucking time getting it through my keyboard.
