Thumbs up? 

Gestures, microgestures and how they’ll control the next generation of devices.

eamonncarey
Apr 30, 2014

Devil horns have been a staple of metal gigs around the world since the 70s. Blame Dio. You may also want to blame him if you end up hospitalised after making the gesture in Italy. There, showing the horns to a man (particularly in the south of the country) is a non-verbal way of passing on the news that his wife is not the virtuous sort…

Consider the ubiquitous thumbs up. Contrary to popular belief, that wasn’t the gesture a defeated gladiator wanted to see in Roman times. A quirk of fate and art means it now signals something far more positive — a way for us to say ‘well done’. If you find yourself in Iran, Afghanistan or other countries in that part of the world, saying those words may be the best course of action. Let’s just say that dispensing a grinning thumbs up to a local won’t end well in that neck of the woods. Keep your thumb to yourself.

Gestures are a universal language. We use them all the time. Some are easily understood — pointing, smiling, shrugging your shoulders and more. Some are far more nuanced.

https://www.youtube.com/watch?v=Uj56IPJOqWE

Every day, hundreds of millions of us use gestural interactions on the x-axis and y-axis of our smartphones and tablets to get what we want. Already people are starting to think about depth, layers and what the z-axis means from a design perspective. Microsoft recently released details of a prototype keyboard that will use infrared sensors to track gestures and movement. The Verge had a really interesting report from CES earlier in the year talking about the individual as the interface.
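
On today’s touchscreens, that x-axis and y-axis gesture handling is surprisingly simple under the hood. Here’s a minimal sketch of swipe detection using the browser’s standard touch events; the threshold value and function names are my own illustration, not any particular vendor’s API:

```typescript
// Minimal sketch: classifying a swipe on the x-axis or y-axis from
// raw touch events. The 50px threshold is an illustrative assumption.
type Swipe = "left" | "right" | "up" | "down";

function watchSwipes(el: HTMLElement, onSwipe: (s: Swipe) => void): void {
  const MIN_DISTANCE = 50; // pixels of travel before we call it a swipe
  let startX = 0;
  let startY = 0;

  el.addEventListener("touchstart", (e: TouchEvent) => {
    startX = e.touches[0].clientX;
    startY = e.touches[0].clientY;
  });

  el.addEventListener("touchend", (e: TouchEvent) => {
    const dx = e.changedTouches[0].clientX - startX;
    const dy = e.changedTouches[0].clientY - startY;
    if (Math.max(Math.abs(dx), Math.abs(dy)) < MIN_DISTANCE) return;
    // Whichever axis saw more travel wins.
    if (Math.abs(dx) > Math.abs(dy)) {
      onSwipe(dx > 0 ? "right" : "left");
    } else {
      onSwipe(dy > 0 ? "down" : "up");
    }
  });
}
```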

For me, what’s fascinating is how gestures become paramount as we move away from touching glass, plastic, aluminium or something else in order to control the device. We’re already seeing some of that with the Wii, Kinect, Leap Motion and other devices.

There is an entire species of devices in development which will see gestural operating systems assume a far greater day-to-day role for users. Google Glass is the thin end of this wedge. Right now, it’s interesting, but it’s still a box on the side of a pair of glasses. When that type of experience is available on contact lenses, or when it’s a virtual heads-up display that’s beamed directly onto your retinas, then the interface must surely be one that uses your body and the electrical impulses it generates as the control mechanism.

I can’t be alone in thinking that a largely voice-based operating system is not going to work. Wandering the streets saying ‘Ok Glass’ every time you want to do something is not going to cut it. You’re never going to dictate your WhatsApp, Secret or Tinder messages in public unless you’re an attention seeker, an idiot or you lacked attention as a child.

Instead, we need to look at how gestures, and increasingly microgestures, will be used for these devices as well as our day-to-day ones. If I had to place a bet on the next hardware startup to get acquired, I’d be considering a fairly hefty stake on the guys at Thalmic Labs. Seeing the product in action for the first time was up there with the moment I first took a helicopter out for a spin using the Oculus. It’s transformative.

https://www.youtube.com/watch?v=oWu9TFJjHaM

Reading the developer forums on the Myo site gives you some inkling of what’s to come. Future generations of this and other similar tools will allow us to use far more nuanced, subtle microgestures to open apps, navigate between cards, input data and much more. This is the point where wearables become genuinely useful for a lot of people rather than interesting to a small subset.
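
To make that concrete, here’s a hypothetical sketch of how recognised microgestures might be routed to actions like opening apps or flicking between cards. The gesture names and bindings below are my own illustration, not the actual Myo SDK:

```typescript
// Hypothetical sketch: routing recognised microgestures to actions.
// Gesture names and bindings are illustrative assumptions, not the
// real Myo SDK (which exposes its own pose events).
type Gesture = "pinch" | "double-tap" | "wave-left" | "wave-right" | "fist";

type Action = () => void;

const bindings = new Map<Gesture, Action>([
  ["double-tap", () => console.log("open app launcher")],
  ["wave-left", () => console.log("previous card")],
  ["wave-right", () => console.log("next card")],
  ["fist", () => console.log("select / confirm")],
]);

function handleGesture(g: Gesture): void {
  const action = bindings.get(g);
  if (action) {
    action();
  } else {
    console.log(`no binding for ${g}`); // unmapped gestures are ignored
  }
}

// A device driver would call this when its classifier recognises
// the movement, e.g.:
handleGesture("wave-right");
```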

The difficulty that I’ve had with the idea of smartwatches and other wearables so far is that they tend to just do one or two things — they track your steps, maybe tell the time and deliver messages and other card-based content to your wrist. There’s obviously more that can, could and is being done, but I still don’t see any real mass-market utility in the current crop of wearables.

If, however, you were to add in the ability to control your computer, a presentation, your Google Glasses, contact lens HUDs or anything else, then suddenly they go from being semi-interesting gadgets to something that’s fundamentally useful. MEVU’s demo of their Alive OS features a basic wearable that could quite easily evolve into exactly that.

https://www.youtube.com/watch?v=DK6pUfTe_6Y

While the technology is interesting, I think MEVU made the mistake of picking one of the few use cases that consistently freaks people out when it comes to new devices. Payments are tricky. Gestural payments would be nifty, but there are dozens of simpler executions that will need to assuage people’s fears before they pay-with-a-point or pay-with-a-wave. With that said, they’re definitely onto something.

https://www.youtube.com/watch?v=hJ4z0GR4Vu8

While I will happily invest in any company that produces an app that allows me to mimic that gesture and broadcast the sound through my phone’s speakers, I don’t think that will be the limit of the technology. The challenge and opportunity that lies ahead will be in defining a gestural language — a framework which maybe avoids devil horns, thumbs up and other contentious gestures, and instead focuses on how best to make use of the utility that will be on offer from Thalmic, MEVU and myriad others over the coming months and years.

For this to work properly, we need to think about it being a universal language rather than something unique to each device. That’s the case with most gestural controls on phones and tablets at the moment, which is grounds for cautious optimism.
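
In code terms, that universal language might look like a small shared vocabulary that every device translates its own raw events into, so apps only ever bind against the shared set. A sketch, under my own assumptions, with all names illustrative:

```typescript
// Sketch of a universal gestural vocabulary: each device adapter
// translates its own raw events into one small semantic set, so an
// app written against that set works on every device. All names
// here are illustrative assumptions.
type SemanticGesture = "select" | "back" | "next" | "previous" | "dismiss";

interface GestureAdapter {
  device: string;
  // Maps a device-specific raw event to the shared vocabulary,
  // or null if the event has no universal meaning.
  translate(rawEvent: string): SemanticGesture | null;
}

const touchscreenAdapter: GestureAdapter = {
  device: "touchscreen",
  translate: (raw) =>
    ({ "tap": "select", "swipe-left": "next", "swipe-right": "previous" } as
      Record<string, SemanticGesture>)[raw] ?? null,
};

const armbandAdapter: GestureAdapter = {
  device: "emg-armband",
  translate: (raw) =>
    ({ "fist": "select", "wave-out": "next", "wave-in": "previous" } as
      Record<string, SemanticGesture>)[raw] ?? null,
};

// The same app logic works regardless of which device produced
// the gesture.
for (const adapter of [touchscreenAdapter, armbandAdapter]) {
  console.log(adapter.device, "->", adapter.translate("fist"));
}
```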

https://twitter.com/eamonncarey/status/458973098570964992

Whatever else this is — and you might think it’s bonkers, nonsense, pie-in-the-sky or something amazing, exciting and cool — to me, it’s the single biggest and most exciting opportunity to come up since Oculus and others made VR reality rather than virtuality. It’s a fundamental shift in the way we’ll engage with devices, the Internet and one another. Let’s just make sure we make the universal ‘OK Glass’ gesture one that’s not going to get us killed.

Also, if you do want to build the Bill and Ted gesture app…
