Photo credit: Thalmic Labs

Getting Physical

Or, “Why 3D Gesture Control Is An Area Of Focus On Innovation We Might Not Yet Know That We Want But That We Absolutely Need”

Alonso Holmes
3 min read · Aug 18, 2013

--

I read an unfortunately shortsighted post this afternoon entitled “3D Gesture Control Is An Area Of Focus On Innovation We Likely Don’t Need Or Want”. The author focused mostly on the shortcomings of various gesture control products, and wrapped up with this gem:

Even if executed well, I’m not sure any solution is going to be anything other than a niche curiosity – we’ll probably see input take other, unexpected courses of evolution instead. The MYO and others could still prove me wrong (and I hope it does), but if you’ve got a farm to bet, I wouldn’t bet it on a gesture control revolution.

This misses the point completely. Will there be a “gesture control revolution” in which the mouse, keyboard, and touchscreen are replaced completely by hand-waving? Probably not. Are these gesture control devices kind of disappointing as replacements for your mouse? Absolutely. Does this matter? Not at all.

Companies like Thalmic Labs and Leap Motion aren’t trying to kill the keyboard; they’re trying to expand our conception of what it means to interact with a machine. The fact that their products overlap with existing input devices is what lets them bring something to market that is somewhat ahead of its time. Ahead of its time not because of a future revolution in computer input devices, but because of a steady change in the way we interact with the physical and digital world around us.

The nature of the computer is changing. Processors and wireless chips are tiny, cheap, and power-efficient. This allows us to add intelligence to the physical world in areas where it wasn’t previously possible or practical. Some people call this the Internet of Things, but it’s less of a singular revolution and more of a natural blossoming of applications driven by higher capabilities at lower cost.

In the context of the so-called Internet of Things, computers don’t have keyboards, and they don’t have screens. If you log in at all, you might do so by tapping your wallet or your phone — instead of a visible interface, they have a physical one.

Naturally, a whole new “vocabulary” of interactions is cropping up here. Just as manipulating a map works the same way across virtually all touchscreen devices (pinch to zoom, drag to move), controlling a quadcopter via gestures is generally done by holding the hand parallel to the ground and tilting it in the direction you want to fly. This makes good physical sense — it’s intuitive.
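To make that concrete, here is a minimal sketch of the “tilt to fly” mapping, assuming a gesture device that reports hand roll and pitch as angles in degrees. The function names, angle limits, and dead zone below are illustrative choices, not taken from any particular SDK.

```python
# A minimal sketch of mapping hand tilt to a quadcopter velocity command.
# Assumes a gesture device that reports hand orientation as roll and pitch
# angles in degrees; names and numeric limits here are illustrative only.

def tilt_to_velocity(roll_deg, pitch_deg, max_speed=1.0, dead_zone=5.0):
    """Holding the hand flat (inside the dead zone) hovers; tilting it
    forward or sideways moves the craft in that direction, scaled by
    how far the hand is tilted."""
    def scale(angle):
        if abs(angle) < dead_zone:           # ignore small, unintentional tilts
            return 0.0
        angle = max(-45.0, min(45.0, angle))  # clamp to +/-45 degrees
        return (angle / 45.0) * max_speed     # normalize to [-1, 1] * max_speed

    forward = scale(pitch_deg)   # tilt forward/back -> fly forward/back
    sideways = scale(roll_deg)   # tilt left/right   -> fly left/right
    return forward, sideways


# Example: hand tilted 20 degrees forward, level side to side
print(tilt_to_velocity(roll_deg=0.0, pitch_deg=20.0))  # roughly (0.44, 0.0)
```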

Gestures are a huge part of how we will interact with devices in the physical world in the years to come, and it’s an extremely good thing that there are already several companies in this space. Never mind that most of the consumer-facing applications aren’t especially earth-shattering — we’ll get there.

Gesture control devices aren’t valuable in and of themselves. They’re building blocks — essential components of a new type of interaction. I’m more than happy to shell out $150 for a device that interprets the electrical activity of the muscles in my arm in a meaningful way — not because I want a more interesting way to scroll through a webpage or check my email, but because I want to build something exciting.
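To illustrate what “building blocks” might mean in practice, here is a small hypothetical sketch: the armband is treated as a source of named gesture events, and application code simply maps those events to actions in the physical world. The GestureArmband class and the gesture names are invented for illustration; they are not from the MYO SDK or any other real API.

```python
# Hypothetical sketch: a gesture device as a building block.
# The device emits named gestures; the application wires them to actions.

class GestureArmband:
    """Stand-in for a gesture device SDK that emits named gesture events."""

    def __init__(self):
        self._handlers = {}

    def on(self, gesture, handler):
        self._handlers[gesture] = handler

    def emit(self, gesture):
        handler = self._handlers.get(gesture)
        if handler:
            handler()


def toggle_lamp():
    print("lamp toggled")


def next_slide():
    print("advancing to the next slide")


armband = GestureArmband()
armband.on("fist", toggle_lamp)       # clench a fist to toggle a lamp
armband.on("wave_right", next_slide)  # wave right to advance a slide

# In a real application these events would come from the device driver;
# here we just simulate them.
armband.emit("fist")
armband.emit("wave_right")
```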


Alonso Holmes

My mission is to delight people with software. Sometimes available for hire. alonso.io