Computing unleashed @CES2015

Thalmic Labs
Jan 14, 2015

CES 2015 has come to a close, and the opening keynote from Intel CEO Brian Krzanich stands out as the highlight of the event.

The address touched on three familiar themes: computing unleashed, intelligence everywhere, and the wearable revolution. The eight-minute recap above is a must-watch, essentially a reader’s digest of the state of tech in the 21st century.

Krzanich showed off a computer the size of a button, a swarm of autonomous drones able to navigate an obstacle course, a wrist-worn selfie bot, a virtual keyboard, a fast and powerful 3D printer made in partnership with HP, and (believe it or not) a lot more. It might as well have been titled “welcome to the future.”

According to Krzanich, 2015 represents a “new technological wave,” something we haven’t seen since the personal computer reshaped our world 20 years ago.

He’s talking about wearables.

Computers have been getting smaller for decades — first filling rooms, then desks, now pockets — but have finally reached sizes so small they can be integrated into our lives seamlessly. When your main CPU is a button on your coat, the dream of ubiquitous, wearable computing seems within reach. In a few years you won’t need to remember to bring a device with you when leaving the house. It’ll be sewn onto your jacket, with a Bluetooth LE signal talking to all of your connected devices.
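As a loose illustration of that last point, here's a minimal sketch of scanning for nearby Bluetooth LE devices, written with the open-source bleak library for Python. The library is an assumption for the example, not anything Intel showed; your jacket-button computer would just appear as one more advertisement in the scan.

```python
# Minimal sketch: discovering nearby Bluetooth LE devices with the
# open-source "bleak" library. A wearable would show up as one more
# advertising peripheral in this list.
import asyncio
from bleak import BleakScanner

async def find_wearables():
    # Listen for BLE advertisements for a few seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        # Each result carries the peripheral's address and (optional) name.
        print(f"{d.address}  {d.name or '<unnamed>'}")

asyncio.run(find_wearables())
```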

Intel’s offerings are impressive, but just the tip of the iceberg. Take drone technology: curiosity-driven researchers at the Flying Machine Arena at ETH Zürich, Switzerland, have built a drone that can balance a pole on its back, toss it into the air, and have it caught by another drone. It’s a remarkable feat of real-time control mathematics.

The autonomous drones shown by Intel in the keynote above — AscTec Fireflies — give us a new way to think about controlling these craft.

If my drone can navigate obstacles and make its way safely from point A to point B alone, then it doesn’t need me controlling it every second. Commands could be as simple as “follow me,” “go where I’m pointing right now,” “land as soon as you can,” or even “pick that up.”

Gestural controls are ripe for this kind of interaction, and the Myo armband in particular offers a truly mobile way to give commands to your robo-pet. This year’s CES has thrown into sharp relief how many new companies are flooding into the drone market with new cameras, sensors, and toys. The Myo armband is ready to control them.
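To make the idea concrete, here's a hedged sketch of such a mapping. The pose names (fist, wave_in, fingers_spread) come from the Myo armband's actual gesture vocabulary, but the Drone class and its methods are hypothetical stand-ins for whatever autopilot API a given craft exposes.

```python
# Hypothetical sketch: mapping Myo armband poses to high-level drone
# commands. The pose names match Myo's gesture vocabulary; the Drone
# class and its methods are invented stand-ins for a real autopilot API.

class Drone:
    def follow_me(self):        print("following operator")
    def go_to_pointed(self):    print("flying to pointed location")
    def land_now(self):         print("landing at nearest safe spot")

POSE_COMMANDS = {
    "fist":           Drone.follow_me,      # clench: follow me
    "fingers_spread": Drone.go_to_pointed,  # point and spread: go there
    "wave_in":        Drone.land_now,       # wave in: land as soon as you can
}

def on_pose(drone: Drone, pose: str) -> None:
    # Dispatch a recognized pose to its high-level command, if any.
    command = POSE_COMMANDS.get(pose)
    if command:
        command(drone)

on_pose(Drone(), "fist")  # -> following operator
```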

Of course, these devices ship with their own controls, and RealSense — Intel’s camera-based gesture control interface, with the mind-boggling ability to perceive depth — is part of what Krzanich wanted to highlight.

“We’re going from a two-dimensional world to a three-dimensional world,” he said, moments before unveiling RealSense. “This additional dimension will change how we experience computing.”
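To get a feel for what that extra dimension looks like in code, here's a minimal sketch that reads a single depth value using pyrealsense2, the Python bindings Intel later published for RealSense cameras. The bindings postdate this keynote, so treat this as illustrative rather than what was demoed on stage.

```python
# Minimal sketch: reading depth from a RealSense camera with the
# pyrealsense2 bindings (published after this keynote).
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # the default configuration includes a depth stream
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance, in meters, to whatever sits at the center pixel.
    x, y = depth.get_width() // 2, depth.get_height() // 2
    print(f"center of frame is {depth.get_distance(x, y):.2f} m away")
finally:
    pipeline.stop()
```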

He’s absolutely right, but it sounds like a Zen kōan: what does “three-dimensional computing” mean?

Well, there’s the literal view. Look at the mouse on your desk. It can move your cursor left and right or up and down, so you can click any point on a flat screen. Two axes, two dimensions, 2D computing.

Now look at your hand. It can move anywhere in 3D space. You could grab something and pull it towards you to make it bigger, toss it to the side to discard it, and it wouldn’t matter if you were interacting with a flat screen or reaching into a 3D space. This is 3D computing.
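One way to see the difference is in the shape of the input events themselves. The sketch below is hypothetical (no particular SDK): a mouse hands an application two coordinates, a hand in space hands it three, and the extra axis can carry meaning of its own, like pulling an object toward you to scale it up.

```python
# Hypothetical sketch: the same "move" event in 2D vs. 3D input.
# Not any particular SDK, just the shape of the data.
from dataclasses import dataclass

@dataclass
class MouseMove:   # 2D: a point on a flat screen
    x: float
    y: float

@dataclass
class HandMove:    # 3D: a point in the space in front of you
    x: float
    y: float
    z: float       # depth: distance from the screen or sensor

def scale_from_pull(start: HandMove, now: HandMove) -> float:
    """Pull an object toward you (z shrinks) to make it bigger."""
    return max(0.1, start.z / max(now.z, 0.01))

grab = HandMove(0.0, 0.0, 0.60)  # grabbed at 60 cm from the sensor
pull = HandMove(0.0, 0.0, 0.30)  # pulled in to 30 cm
print(f"scale factor: {scale_from_pull(grab, pull):.1f}x")  # 2.0x
```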

But Krzanich’s key themes were “computing unleashed,” “intelligence everywhere,” and “the wearable revolution.” Is this the wearable revolution? Is this computing unleashed?

So long as you’re using a camera, it can’t be.

We’ve talked about the limitations of camera-based gesture control before, but the main problem is that you need to be in front of a sensor. You can’t have a mobile solution that requires you to stand in front of a fixed camera.

Something you wear that doesn’t let you move freely is a tether, not clothing. Wearables are supposed to make life easier.

Take the second key theme, “intelligence everywhere.” It’s an idea we’ve discussed before: connected objects in public spaces that people can approach and control touch-free. They have the capacity to vastly improve our daily lives, but camera-based sensors take the “everywhere” right out of it. Gestural controls can only work in every environment if the device reads your motion and muscle activity directly. In other words, it needs to be worn on your body, picking up its information at the source: the muscles that carry your brain’s commands.
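As a toy illustration of that idea, here's a hypothetical sketch of a classifier that needs nothing but the signals a worn device carries with it: wrist rotation from an inertial sensor and forearm muscle tension from EMG. The thresholds and readings are invented for the example.

```python
# Hypothetical sketch of why a worn sensor works anywhere: it needs no
# external camera, only the motion (IMU) and muscle (EMG) signals the
# device carries with you. Thresholds and readings here are invented.

def detect_gesture(gyro_z_dps: float, emg_rms: float) -> str:
    """Classify one sample of wrist rotation plus forearm muscle tension."""
    CLENCH_EMG = 0.6    # normalized EMG level suggesting a fist clench
    FLICK_GYRO = 150.0  # deg/sec of wrist rotation suggesting a flick

    if emg_rms > CLENCH_EMG and abs(gyro_z_dps) < FLICK_GYRO:
        return "fist"         # tense muscles, still wrist
    if abs(gyro_z_dps) >= FLICK_GYRO:
        return "wrist_flick"  # fast rotation, any tension
    return "rest"

print(detect_gesture(gyro_z_dps=12.0, emg_rms=0.8))   # fist
print(detect_gesture(gyro_z_dps=210.0, emg_rms=0.2))  # wrist_flick
```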

It’s becoming clear we’re on the cusp of a revolution in technology. Just as Krzanich observed, we’re unleashing computing and bringing digital technologies into new environments.

How we’ll control it all remains to be seen.
