What defines interface design, and where is the future of interfaces headed?

Tanya Smirnova
Published in Tilda’s friends
Jul 16, 2015 · 10 min read


Remember how much easier it became to work with files when you got the ability to drag and drop them from one folder to another on your desktop? And how fascinating was the release of the first mouse?

It’s all because of the illusion that you are acting with your hand in real space, that you’re dragging real objects.

In 2012 we gained a scientific explanation. Swedish scientist Henrik Ehrsson discovered special neurons in our brain that are dedicated to processing information about the space around our hands. For instance, they allow us to react to threats near our hands faster than ordinary visual neurons would.

So we can say that when our gestures work hand in hand with our eyesight, we are more efficient.

Screenshot from the video about Project Soli

Keep that in mind and add to it one of the newest and most interesting technologies. In 2015, Ivan Poupyrev from the Google ATAP Project Soli team showed how new radar-sensing technology can be used to precisely control computer interfaces without any tactile connection between the user and the computer.

I find this technology deeply fascinating — and we’ve only scratched the surface.

That’s why I decided to ask several experts about Soli.

I connected with all of them in July this year via social networks and asked them to answer my questions below.

What do you think will change in UI/UX design with this technology?

Claudio:

UX/UI is about giving users context, content and purpose. This technology is about the purpose. UX/UI has traditionally been limited to certain input sources like touch or voice; gestures are still something that hasn’t been nailed down just yet. Leap Motion has tried, and so has Oculus with their new controllers… but the cost is still pretty high. ATAP’s project could finally be the cheap and accessible technology that gets integrated into everyday devices. That’s a first.

In terms of changing our interaction, this technology will bring a new dimension to it. From the current touch, we could evolve to air hovers, gestures, etc., so the possibilities would increase enormously. Off the top of my head, I imagine simulating the conducting of a symphony orchestra with it 😃 …or choosing a tone in a color picker with two hands.

HOWEVER, for this technology to work it still needs visual or audio feedback on the user’s action: a sound or an image that communicates you are doing it right. That’s where old buttons still do a pretty good job, as you can move your eyes away from the screen and still get the ‘clicked’ response.
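
To make Claudio’s point concrete, here is a minimal sketch of how a gesture-driven UI might pair each recognized gesture with an immediate audio and visual acknowledgement, the equivalent of a button’s ‘clicked’ response. The gesture names, sound files, and animation names are invented for illustration; this is not Soli’s actual API.

```python
# Minimal sketch: pair each recognized gesture with immediate feedback,
# so the user knows a gesture registered even without looking at the screen.
# Gesture names and feedback assets are hypothetical.

from dataclasses import dataclass

@dataclass
class Feedback:
    sound: str   # short audio cue, e.g. a click sample
    visual: str  # on-screen confirmation, e.g. a highlight animation

FEEDBACK_FOR_GESTURE = {
    "dial_turn":    Feedback(sound="tick.wav",  visual="rotate_knob"),
    "button_press": Feedback(sound="click.wav", visual="flash_button"),
    "swipe_left":   Feedback(sound="swish.wav", visual="slide_panel"),
}

def play_sound(sample: str) -> None:
    print(f"[audio] {sample}")

def show_animation(name: str) -> None:
    print(f"[visual] {name}")

def on_gesture_recognized(gesture: str) -> None:
    """Acknowledge the gesture before acting on it."""
    feedback = FEEDBACK_FOR_GESTURE.get(gesture)
    if feedback is None:
        return  # unknown gesture: stay silent rather than mislead
    play_sound(feedback.sound)
    show_animation(feedback.visual)

if __name__ == "__main__":
    on_gesture_recognized("dial_turn")
```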

Tiago had the same doubts. He said:

In terms of UX/UI, this feedback question is probably one of the key questions that I assume need answering as a designer. “Minority Report” does interesting work here: it suggests that you would need special gloves to interact with the computer via gestures. Gloves on (and possibly facing a given screen) means your gestures are being acknowledged by the machine; otherwise they’re not. The right constraints mean that scratching your nose won’t make the computer delete an important file.

Photo from Depositphotos.com/@everythingposs

Gia Djahaia:

I’m very skeptical about gestural interfaces in general. Sometimes people sacrifice usability for trends or whatever. On the one hand, Soli is capable of recognizing micro-gestures (which wasn’t possible before); on the other, it can work through surfaces and it’s really tiny. We can imagine a smartwatch with the physical crown replaced by Soli, as shown in the video, but without trying it, it’s hard to say how comfortable it would be in comparison. It can obviously embody more functions than a physical crown, because we are able to define a gestural grammar to trigger additional functions (not only turning or pressing moves). However, this generates additional problems: inconsistent grammars, unobvious rules, etc. It’s easy to distribute multiple functions across multiple inputs, but it’s quite challenging to do the opposite.
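
Gia’s last point can be sketched in a few lines of code (the gesture names, contexts, and actions below are made up, not Soli’s actual vocabulary): mapping one gesture to one function is trivial, while routing many functions through a few gestures forces you to add modes and context, which is exactly where inconsistent grammars and unobvious rules creep in.

```python
# Hypothetical gestural grammar for a crown-less smartwatch.

# One gesture per function: easy to define, easy to learn.
SIMPLE_GRAMMAR = {
    "rub_fingers": "scroll",
    "tap_fingers": "select",
    "flick_wrist": "go_back",
}

# Many functions on few gestures: requires modes/context,
# and the rules become harder for the user to predict.
MODAL_GRAMMAR = {
    ("watch_face", "rub_fingers"): "change_watch_face",
    ("music_app",  "rub_fingers"): "change_volume",
    ("alarm_app",  "rub_fingers"): "set_snooze_duration",
}

def resolve(context: str, gesture: str) -> str:
    """Resolve a gesture to an action, falling back to the simple grammar."""
    return MODAL_GRAMMAR.get((context, gesture)) or SIMPLE_GRAMMAR.get(gesture, "ignore")

print(resolve("music_app", "rub_fingers"))  # -> change_volume
print(resolve("mail_app",  "rub_fingers"))  # -> scroll (a fallback rule the user must remember)
```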

Alex Bystrov:

No-touch interfaces were pure science fiction until recently. Players like Oculus and MS Kinect are just scratching the surface of what will come in the decades ahead.

The only thing I can say for sure now is that these interfaces will be way more human-friendly than the ones from Iron Man. But don’t expect anything mass-market-ready too soon. We still don’t have a decent Apple TV interface, and that isn’t rocket science compared to ATAP’s vision of the future.

Yury Vetrov:

Designers are getting more and more tools for controlling a UI that go beyond the GUI paradigm. We already have gyroscopes, accelerometers, and other sensors; recently we’ve gotten 3D cameras for motion capture, like Kinect and Leap Motion.

Soli is not the first concept trying to expand our toolkit: we have seen attempts to control a UI with ultrasound, cell signal, muscle tension, gaze, and thought. Right before the Soli announcement, another team presented a similar idea in the Aria smartwatch clip. We can’t predict whether they will all succeed, but I bet on Google pushing its idea really hard.

IMAGE: ARIA

Who (devices, industries, markets, etc.) do you think it will benefit the most?

Tiago:

Innovative technology, I find (and this is just me blue-sky thinking), often comes from the military, the gaming industry, or directly from other consumer market providers when they find a niche interesting enough to build themselves a reputation or make a handsome chunk of money.

Gaming. There are some good controllers for home consoles out there, of varying quality of course, but precision no longer feels like the main selling point. I would go for “portability” and “accessibility”.

But imagine not being restrained by a controller at all. There are “split controllers” with physically separate left and right halves, so you don’t have to keep your arms constantly locked at the same angle, as the controller is actually two unattached pieces.

And what if an illness or old age has crippled your ability to use a controller? Controllers are physical devices: once manufactured, their ability to change and adapt to new circumstances is quite limited. Software, on the other hand, can keep on learning, perpetually calibrating itself to better interpret and serve its user.

Just train the software to read your own unique array of gestures and inputs, and you’re one step closer to interacting with your game console in a more comfortable fashion.
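
A rough sketch of what that “perpetual calibration” could mean in practice: keep a per-user template for each gesture and nudge it toward every new confirmed sample. The feature vectors and gesture names below are invented for illustration; real radar-based recognition would involve far richer signal processing.

```python
import math
from typing import Dict, List, Optional

class PersonalGestureModel:
    """Toy per-user calibration: each gesture keeps a running-average
    template, updated whenever the user confirms a recognition."""

    def __init__(self, learning_rate: float = 0.1):
        self.templates: Dict[str, List[float]] = {}
        self.learning_rate = learning_rate

    def update(self, gesture: str, sample: List[float]) -> None:
        template = self.templates.get(gesture)
        if template is None:
            self.templates[gesture] = list(sample)
            return
        # Nudge the stored template toward the new sample.
        for i, value in enumerate(sample):
            template[i] += self.learning_rate * (value - template[i])

    def classify(self, sample: List[float]) -> Optional[str]:
        """Return the closest known gesture by Euclidean distance."""
        best, best_dist = None, float("inf")
        for gesture, template in self.templates.items():
            dist = math.dist(template, sample)
            if dist < best_dist:
                best, best_dist = gesture, dist
        return best

model = PersonalGestureModel()
model.update("select", [0.9, 0.1])   # hypothetical feature vectors
model.update("scroll", [0.2, 0.8])
print(model.classify([0.85, 0.15]))  # -> select
```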

Consumer market. My first guess here would again be “accessibility”. Think of people with difficulty typing, holding a pen, or moving a mouse. Think of people who just don’t want to sit at a desk when writing, or who have to hold their phone with eight fingers locked around the device while their two thumbs do all the work. Think of people who just want to be comfortable (or are plain lazy). Think of people who can’t see and depend on tactile, haptic, or audio feedback to interact with a digital system. Heck, think of Stephen Hawking.

Claudio:

The world!!! Cheap production, mass adoption. I think the key question is: how can this technology improve our interaction with machines, and how different would that be from what we have at the moment? This technology could bring a new interactive dimension to devices. Used well, it could build up to a more natural relationship with them. How could this benefit us? I think the more human our relationship with devices is, the more we can expand our skills beyond our senses. This is definitely a step forward.

Yury:

We’ll definitely see it in the next Android Wear generation, as well as in other smartwatches and smartbands, whether they have a screen or not. The Disney MagicBand case study has made a lot of buzz recently, so we will see other companies following this idea soon, probably using something like Soli. Disney also released an amazing offline multiplayer Playmation toy last month, where Soli could add even more fun to the gameplay. By the way, Ivan Poupyrev worked in Disney’s R&D lab before joining Motorola and Google.

Professional equipment can benefit from the technology, because specialists are always ready to learn niche devices and tools, especially if they provide shortcuts for frequent actions. Lastly, we’ll see marketing campaigns using it to create breathtaking and award-winning interactions.

And of course, everybody sees the interface challenge in this technology:

Tiago:

I’m a big fan of this kind of stuff, so I would definitely like to play with it.
I do find it a bit off-putting, though, that there were barely any compelling use cases shown (other than the smart radio and tablet drawing app) that hinted at a genuine user problem they were trying to solve. Other than the novelty (which wears out quickly), what is the selling point for not touching something that is within arm’s reach? What is the biggie that they’re trying to solve? And how close do you have to be to the sensor to use it, facing what direction(s), etc.?
(…) The other big question in terms of UX/UI is: where is this kind of input actually valuable, to the point where a mouse, trackpad, touchscreen, or joystick won’t do? The iPhone and iPad have already suggested a possible answer: anywhere you don’t want to be restrained by the physical limitations and constraints of those input devices. That’s why Apple decided not to have a physical keyboard on the iPhone: it would constrain the designers’ creative potential too much. Good for them, it worked, and it created a new language of interaction, fingers touching a screen.

I have yet to see exactly how reliable and precise this Soli technology is, but considering how hungry people seem to be for exciting new technology… yeah, it holds quite a bit of potential.

Alex:

This technology is quite compelling. But technical aspects aside, for mass-market adoption we will need another interface paradigm shift to make use of this one. Today’s interfaces are still not there. For instance, this type of controller can obviously work with a wide range of home automation devices, but that field is still a wild west of experiments with no common interface or communication language. I’d give it several years at the very least before saying anything for sure.

Anatoly:

I guess the most challenging thing here is the sensor’s precision. If they are able to build a solid product that actually works and recognizes users’ gestures with near-ideal accuracy, it will have every chance of becoming popular.

Max:

Usability issues.
- On/off. You need dead-fast on/off functionality so it doesn’t interfere with your natural behavior.
- Bad hands. How do they handle hands that aren’t clean (dirty, oily, etc.), when that makes your gestures register incorrectly?
- User management. Multi-user issues: who takes over control, permissioning, and many more.
IoT.
- Device management. Which device responds to a gesture, and how do you manage switching between them? (See the sketch after this list.)
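
Here is a minimal sketch of the routing and on/off questions Max raises (device names and gesture names are invented): one explicit wake gesture arms the system, a pointing-style gesture selects the target device, and everything else is ignored while the system is not armed.

```python
class GestureRouter:
    """Toy dispatcher: ignore everything until a wake gesture arms the
    system, then forward gestures to whichever device was last selected."""

    WAKE, SLEEP, SELECT = "double_snap", "flat_palm", "point"

    def __init__(self, devices):
        self.devices = devices  # e.g. {"lamp": ..., "speaker": ...}
        self.armed = False
        self.target = None

    def handle(self, gesture, pointed_at=None):
        if gesture == self.WAKE:
            self.armed = True
            return "armed"
        if gesture == self.SLEEP or not self.armed:
            self.armed = False
            return "ignored"  # scratching your nose does nothing
        if gesture == self.SELECT and pointed_at in self.devices:
            self.target = pointed_at
            return f"target: {pointed_at}"
        if self.target:
            return f"{self.target} <- {gesture}"
        return "no target selected"

router = GestureRouter({"lamp": object(), "speaker": object()})
print(router.handle("rub_fingers"))                  # ignored (not armed)
print(router.handle("double_snap"))                  # armed
print(router.handle("point", pointed_at="speaker"))  # target: speaker
print(router.handle("rub_fingers"))                  # speaker <- rub_fingers
```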

Yury:

It’s not often that we see a new technology make a true revolution in interaction design, so it’s safer to talk about a new tool in the designer’s toolkit. Soli opens new frontiers in data input, although it doesn’t improve information presentation. It depends on a wearable and is limited by what we see on its screen. The user has to know this GUI well, because there is no affordance, which is one of the key requirements of good interaction design. Luke Wroblewski nailed it when he said that smartwatches give us a false sense of interaction simplicity. Yes, they have fewer actions and less information on a screen. However, they do it by “offloading” part of the UI into your memory, leading to a steep learning curve and a need to remember it all.

Maybe this problem will be less significant if vendors define a common gesture language, similar to what happened with the GUI on the web and on mobile. On the other hand, there is a greater risk of false interactions once we overload our body language.

We should always look at a technology from every side, understanding its possibilities and drawbacks. That way, we won’t be disappointed in real usage and will use it to its full power.

In the end, it’s worth saying that no matter how many challenges we are facing, the possibilities are endless. As Donald Hoffman said in his TED talk: “The idea is that evolution has given us an interface that hides reality and guides adaptive behavior. Space and time, as you perceive them right now, are your desktop. Physical objects are simply icons in that desktop.”

So maybe this technology will give us a new perception of a new reality, who knows.
