Visions of the year 2000

Leaps and Bounds

A journey through the past, present, and future of human-computer interaction

Ryan McLeod
Published in Far Flung · Jun 13, 2013

En L’an 2000 (translated as ‘In the Year 2000’) was a series of French postcards illustrated by Jean-Marc Côté around 1900. Heavily inspired by Jules Verne and a fascination with personal air travel, they depicted a future filled with ornithopters, automatons, video calling, and Matrix-style learning (you should really take a minute to check out the full set). While the future Côté depicted is built of lamps, cogs, and gears, many of the ideas he envisioned have been realized in one way or another today. Sadly, these illustrations went largely unseen until 1986, when Isaac Asimov found a set and published them in his book Futuredays.

Tony Stark’s computer interface (JARVIS) in The Avengers

It’s easy to laugh at some of these passé projections while looking at a more modern one, like Tony Stark’s JARVIS, with longing awe. We badly want to be clairvoyant, but when we draw from the well of cultural context and current tech, we’re only able to conceive of far-off futures built with anachronistic wings and levers, or glass and heads-up displays. Just like Côté’s depictions, our predictions of the future are built from time-bound stuff where only our intent is timeless. Any sort of real prescience is reserved for precogs or those who choose to be cautiously abstract. We don’t have to look like fools, though; we can still predict the future if we’re careful.

“O day and night, but this is wondrous strange!
And therefore as a stranger give it welcome.”

—Shakespeare, Hamlet

Parallels

If you’ve read Edwin Abbott’s beloved novel Flatland or had a friend try to explain dimensions beyond the third, you’ve likely felt a unique feeling of eye-opening unease…

Flatland centers on two-dimensional creatures called Flatlanders, who live on a 2D plane. The Flatlanders laugh at the Linelanders and Pointlanders who reside below them in the first and zeroth dimensions. Simultaneously, they are in awe of the mysterious third dimension, of which they can see only two-dimensional shadows. We laugh at the Flatlanders who don’t understand a cube spinning through their world, but are humbled when we realize our perception is similarly limited.

Trying to imagine what living in a dimension beyond our own would be like is a huge mental feat. The only decent way for us to comprehend the land beyond our own is to learn from the parallels below us, the way the Flatlanders did. But how is this useful in understanding the future of human-computer interaction?

From Flatland to Mouseland:

Exploring the Dimensions of Computing

one

The first dimension of computing was essentially the console: a stream of characters that could be read, and appended to one character at a time, via a keyboard we poke at with our fingers. We computed like this for years, reading and writing cryptic codes in glowing monochrome green.
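To make the idea concrete, here’s a toy sketch of my own (not any historical system) of computing as a single append-only character stream, where input and output share one dimension:

```python
# A toy model of "one-dimensional" computing: the whole session is a
# single stream of characters we can only read from and append to.
stream = ""

while True:
    line = input("> ")        # characters arrive one keystroke at a time
    if line == "quit":
        break
    stream += line + "\n"     # input joins the same 1D stream...
    print(stream, end="")     # ...that all output is read back from
```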

two

Add the mouse to the equation and suddenly, by pairing our hand with a digital tool, we could recognize a whole new dimension of input, giving way to the graphical user interface (GUI) and the windowing we use today. Sparked by Apple’s adoption of Xerox PARC’s WIMP interface, things arguably changed for the better. Although many saw windows and the desktop metaphor as gimmicky at first, that mentality obviously changed, and 2D GUIs still dominate today.

Second-dimensional computing has evolved many times over the years. Mice have grown from wooden blocks with pizza-cutter wheels into wireless devices with laser optics. Touch screens on phones gave us the first real 2D gestures, which eventually found their way into the trackpads of our computers. When done well, these smooth gestures, like swipes, drags, and pinches, not only replace the multiple taps and mouse-drag carryovers from a point-and-click era, but actually improve interaction.

“‘Pinch to zoom’ is now probably the single most broadly understood multitouch gesture; it feels like something we’ve been doing all our lives…” —FiftyThree team
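Part of why it feels so natural may be how simple the arithmetic underneath it is. A minimal sketch (not any particular platform’s implementation): the zoom factor is just the ratio of the current finger spread to the spread when the gesture began.

```python
import math

def touch_distance(t1, t2):
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(t1[0] - t2[0], t1[1] - t2[1])

def pinch_zoom_scale(start_touches, current_touches):
    """Zoom factor = current finger spread / spread at gesture start.
    >1 means zoom in (fingers moving apart), <1 means zoom out."""
    return touch_distance(*current_touches) / touch_distance(*start_touches)

# Example: fingers start 100 px apart and spread to 150 px -> 1.5x zoom in.
start = ((100, 200), (200, 200))
now = ((75, 200), (225, 200))
print(pinch_zoom_scale(start, now))  # 1.5
```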

three

Predicting the third dimension is a gamble. Its lineage is just beginning and it’s hard to know what it will grow up to be. Akin to the first wooden mouse, the first consumer devices that can recognize 3D gesture input are just beginning to appear. While the Kinect has been out for a while, it has obvious limitations for more serious applications. How will new gesture input devices continue to develop, and how will human-computer interaction evolve with them?

Baby Steps

What’s currently out there, and what’s coming?

Leap Motion being used to play Block 54

Building blocks

In May of 2012 the Leap Motion was announced. Its first demo video, demonstrating sub-millimeter detection of finger and hand movements, was arousing to say the least. Something about lifting our hands from the trackpad and watching our computers react in real time to our hand gesticulations evokes something visceral and exciting; on a deeper level we recognize a tectonic shift beginning.

Thalmic Labs’ Myo demonstrating muscle-impulse readings used to recognize complex gestures.

Next to be announced was the Oculus Rift, which allows us to explore 3D worlds via full virtual reality. Then came the Myo, planning to change the gesture input game by reading the electrical impulses in our muscles, sometimes before our fingers can even move. The Kinect One was just announced, and a fully augmented reality headset called the Meta just got funded on Kickstarter.

Something big is happening…

Bringing the Dimensions Together

One at a Time

I’ve been lucky enough to have a Leap Motion to play with. While I can easily speak to the experience being novel and fun, there is still something huge missing. The ability to swipe off items in to-do lists, scroll through webpages, and switch spaces with tiny flicks is progress, but it essentially maps 3D gestures onto 2D ones we already had, effectively bringing the trackpad into the air. This leads one to believe that using something like a map would involve casting awkward finger spells in the air to zoom in on our home addresses. Instead, the Leap and Google Earth teams have broken through that unoriginal plane by allowing us to fly. With one quick swoop of the hand, we can pan around, zoom, turn, and tilt the view. Being able to zip around with your hand is refreshingly novel; however, even this is just a small step.

Leap Motion being used with Google Earth (video)
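To make that difference concrete, here’s a minimal sketch of the two interaction styles. This is not the Leap SDK or Google Earth’s actual code; the `Palm` type and the `camera` object (with its `pan`/`zoom`/`tilt`/`turn` methods) are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Palm:
    """One frame of hand-tracking data (hypothetical, SDK-agnostic)."""
    x: float      # palm position, mm
    y: float
    z: float
    pitch: float  # palm orientation, radians
    roll: float

# Style 1, "trackpad in the air": collapse 3D motion back into the
# discrete 2D events we already had.
def as_swipe(prev: Palm, cur: Palm, threshold_mm: float = 80.0) -> Optional[str]:
    dx = cur.x - prev.x
    if abs(dx) > threshold_mm:  # threshold is an assumed tuning value
        return "swipe_right" if dx > 0 else "swipe_left"
    return None

# Style 2, continuous flight: every axis of the palm steers the camera
# on every frame, the way the Google Earth demo feels.
def fly(camera, prev: Palm, cur: Palm) -> None:
    camera.pan(cur.x - prev.x, cur.y - prev.y)  # lateral motion pans
    camera.zoom(prev.z - cur.z)                 # pushing forward zooms in
    camera.tilt(cur.pitch)                      # tilting the palm tilts the view
    camera.turn(cur.roll)                       # rolling the palm turns it
```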

Without the console, the keyboard was just a slab of buttons. Without the GUI and WIMP, the mouse was just a device for inputting coordinates. What will make the 3D input device more than just a device? We’re finally beginning to bring real 3D input to computing, but it’ll be a while before it’s fully at home.

Scratching the Surface

We forget the daily compromises we make when we use traditional 2D interfaces. It is easy to see how adding an extra dimension of input could help when we imagine stacks of browser tabs fanned out with a quick twist of the wrist and selected with a deft point and flick, chat heads brought up on our fingertips, and our hands turned over to air-type or dictate.

These possibilities barely scratch the surface, though. 3D computing’s full potential can only be realized once we stop building on patterns and interfaces that arose from the mouse and trackpad.

Getting in Deep

What happens when we take 3D input devices and pair them with 3D output via something like augmented reality or virtual reality?

Video games like first-person shooters and applications like Google Earth may not have to change much to adapt, but how will the experience of things like messaging or web browsing be transformed? My vision is naïve like Côté’s, and the real future a century out must be less obvious than the one I see. In the next dimension of computing, do windows or tabs still make sense for segregating contexts? Do we even still point, click, or tap to direct our choices?

Simulated experience using the Meta

Currently these 3D input devices are being affected by the 2D culture they’re born into: air taps are a simple substitute for the way we click; gestures, the way we use shortcuts. They’re limited culturally by what we’re used to and physically by 2D screens. Once these devices begin to have more influence on current interfaces, we might not continue to do just one thing at a time. We might not be sending discrete, segmented commands via the mouse, keyboard, or gestures; we might be interacting on a continuum, pulling apart clouds of graphed search data with one hand, while we mark a song as a favorite, turn up the volume, and add it to a playlist with the other. Context, no longer contained by chromed windows, could be effortlessly changed with a redirection of gaze, utterance of a command, or trigger of a thought.

A Feedback Loop

We’re severely limited by the second dimension of computing. We’re pressed against a wall, but as 3D output catches up to what input devices can provide and the two continue to co-evolve, we will eventually break through with explosive innovation and fall headlong into the third dimension with a thousand new ways to immersively compute.


Long-range scouting in the year 2000.
The first satellite (Sputnik I) wasn’t launched until 1957.

Other technologies will continue to enter the scene, causing the third dimension of computing to swell beyond our devices and into our world. What new leaps and bounds will we see from the amalgamation of current and coming tech, such as brain interfaces, gaze detection, hyper-accurate voice recognition, and natural language processing? What new technology are we missing from our palette that prevents us from painting a postcard of the future?

En L’an 2100

Human-computer interaction can only continue to improve as it closes the gap between a dimension far removed and the one we live and breathe in. Looking to the parallels of Flatland and Côté’s futuristic visions, it’s wild to imagine how radically different and connected the world will become. What new, unimaginable tech will emerge, from breakthroughs in detecting gestures using only Wi-Fi, to 3D holograms, even to tangential fields like optical cloaking, that will alter the course that seems so clear to us now?

Onward and forward – towards the future, the inconceivable, and the far flung.

Written in part with Matthew Newsom for Dr. Kurfess’s Human-Computer Interaction class at California Polytechnic State University, San Luis Obispo.

Thank you Hannah Suzanna and Tag Ashby for your edits and inputs, and for making me aware of my opinionated grammar and weird constructions.
