Future Interactions

Kinneir Dufort
Focus by Kinneir Dufort
6 min read · Apr 9, 2018

Over the last decade, tapping and swiping glass touchscreens has become second nature to many of us, but a growing array of new interface technologies is vying to replace our touchscreen tapping.

Like the sound of your own voice?

Voice interfaces are one technology looking to disrupt our reliance on the touchscreen, with Amazon, Google, Apple and Microsoft all pushing their voice assistant platforms as a means to control devices or access information, all without ever having to touch, or even look at, the technology you're interacting with. Consumer feedback suggests that these interactions range from blissfully convenient to awkwardly frustrating, but it's clear that the voice interface isn't going away. Voice interactions won't replace all of our interactions with technology, but they are finding their place: recent research by Adobe found that sales of voice assistants were up 103% year on year [1].

The development time and costs associated with voice interface technology are also falling rapidly as large platform companies such as Amazon and Microsoft try to entice developers to create 'skills' for their platforms. This has created a wealth of opportunity for companies of all sizes to voice-enable existing products or to create entirely new product categories which simply couldn't have existed before. Two areas where this is particularly relevant are where large colour touchscreen interfaces are not practical, and where hands- or eyes-free operation is a real user benefit, such as in the car.
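To give a flavour of how lightweight this development can be, here is a minimal sketch of a handler for an Alexa custom skill, written as a bare AWS Lambda function in Python. The intent name GetStatusIntent and the spoken replies are our own placeholders; only the JSON request/response envelope follows Amazon's documented format for custom skills.

```python
# Minimal Alexa custom-skill handler as a bare AWS Lambda function.
# "GetStatusIntent" and the reply text are hypothetical placeholders.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me for the device status.",
                              end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "GetStatusIntent":
            # A real skill would query the product's cloud service here.
            return build_response("Everything is running normally.")
    return build_response("Sorry, I didn't catch that.")

if __name__ == "__main__":
    # Local smoke test with a hand-built request.
    fake_event = {"request": {"type": "IntentRequest",
                              "intent": {"name": "GetStatusIntent"}}}
    print(lambda_handler(fake_event, None))
```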

Where voice solutions need to be more tailored to a specific application, a growing array of hardware and software tools makes it possible to develop entirely bespoke solutions.

Voice technology specialists such as Bristol-based XMOS are developing next-generation signal processing tools to enable a whole new range of smart, connected products with enhanced levels of accuracy and voice capture functionality.

A key challenge for designers weighing up voice interaction as part of a product's interface is understanding which scenarios work well and which will feel awkward or out of place to users. Rapidly prototyping these interactions helps to identify what is likely to work well and where alternative interfaces may work better.
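As an illustration of how cheap such a prototype can be, the sketch below mocks up a voice interaction on a laptop using the open-source SpeechRecognition library (pip install SpeechRecognition PyAudio). The command phrases and canned replies are placeholders for whatever interaction is being trialled; the point is to hear how the exchange feels, not to ship it.

```python
# Throwaway prototype for testing how a voice interaction "feels".
# Requires: pip install SpeechRecognition PyAudio (and a microphone).
import speech_recognition as sr

# Placeholder commands and replies for the interaction under test.
CANNED_REPLIES = {
    "what's the weather": "Cloudy with a chance of rain.",
    "turn on the lights": "OK, lights on.",
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

try:
    heard = recognizer.recognize_google(audio).lower()
    print("Heard:", heard)
    print("Reply:", CANNED_REPLIES.get(heard, "Sorry, I can't do that yet."))
except sr.UnknownValueError:
    print("Could not understand audio.")
```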

Microsoft’s HoloLens

Heads Up

Voice interfaces are great for triggering an action or retrieving basic information such as the weather or the system status, but they aren't well suited to conveying large amounts of information or keeping a user continuously updated. For these use cases, the display is still king.

Google Glass may have given head-worn wearables something of a bad reputation, but this type of heads-up display is seeing renewed interest with the increasing capabilities of VR and AR systems. For example, Intel's Vaunt uses retinal projection to display information within the user's field of view from a device no larger than ordinary glasses. In a factory setting, Microsoft's HoloLens has been used to demonstrate augmented reality for workers, overlaying useful information on their work environment right when they need it.

This type of technology has the potential to bring users ever closer to the information they need, seamlessly and without the need to stop and pick up another device.

Get out of the way

Whilst some devices look to persuade us to interact with them in new ways, other technologies aim to reduce our need to interact with them at all. Andy Rubin, the co-founder of Android, believes that the next big platform will be Ambient: a platform which draws on AI and assistant technologies, fed with data from a huge array of sensors, to create the smart environments of tomorrow. A smart home, for example, could collect information from its sensors and connect with smart lights, door locks, TVs and speakers to predict when you will return home, then put the heating on, unlock your front door, turn on the lights and start music playing without you ever flicking a switch or launching an app.
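To make the "arriving home" scenario concrete, here is a minimal sketch over MQTT using the paho-mqtt 1.x client API (pip install paho-mqtt). The broker address, topics and payloads are all hypothetical; a real system would sit behind a smart-home hub and use a learned arrival-time model rather than a single hard-coded geofence event.

```python
# Hypothetical "arriving home" automation over MQTT (paho-mqtt 1.x API).
# Topics, payloads and the broker address are placeholders.
import paho.mqtt.client as mqtt

ARRIVAL_TOPIC = "home/presence/phone"  # published by a phone geofence app

def on_connect(client, userdata, flags, rc):
    client.subscribe(ARRIVAL_TOPIC)

def on_message(client, userdata, msg):
    if msg.payload.decode() == "approaching":
        # Fan out commands to the devices: no switch flicked, no app opened.
        client.publish("home/heating/set", "on")
        client.publish("home/frontdoor/lock", "unlock")
        client.publish("home/lights/hall", "on")
        client.publish("home/speakers/play", "evening_playlist")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.local", 1883)
client.loop_forever()
```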

The internet of things continues to enable a wide range of connected devices which can monitor themselves and only prompt user interaction when necessary, from connected bins which alert maintenance staff only when they are full, to machines which monitor their own wear and inform engineers only when there is a potential problem. This leaves more and more in the hands of sensors and algorithms, and reduces the time we spend interacting with systems which don't need our attention.
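The pattern behind all of these devices is the same: measure continuously, speak up only when a threshold is crossed. A toy version for the connected bin might look like the sketch below, where read_fill_level() and notify_staff() are stand-ins for a real sensor driver and messaging service.

```python
# Toy "only speak up when needed" loop for a connected bin.
# read_fill_level() and notify_staff() are hypothetical stand-ins.
import random
import time

FULL_THRESHOLD = 0.9  # notify at 90% full
POLL_SECONDS = 60

def read_fill_level():
    """Stand-in for an ultrasonic fill sensor (0.0 = empty, 1.0 = full)."""
    return random.random()

def notify_staff(level):
    """Stand-in for an SMS/email/push notification."""
    print(f"Bin at {level:.0%} - schedule a collection.")

already_alerted = False
while True:
    level = read_fill_level()
    if level >= FULL_THRESHOLD and not already_alerted:
        notify_staff(level)
        already_alerted = True  # don't nag again until the bin is emptied
    elif level < FULL_THRESHOLD:
        already_alerted = False
    time.sleep(POLL_SECONDS)
```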

Project Soli by Google

Get Physical

We can’t talk about new forms of interaction without discussing gestural interfaces, although gesture isn’t really a ‘new’ form of interaction at all. People have been exploring how to use gestural information to control software since the 1970s, with early pioneers experimenting with musical and artistic performances. Throughout the 80s and 90s, early interest in virtual reality systems spurred development in this area, and in 2002 the movie Minority Report gave us a compelling vision of what gestural interfaces of the future might look like. Since the launch of the Microsoft Kinect in 2010 and its subsequent hacking by enthusiasts, technologists, artists and designers have been exploring how gestural interfaces might come to influence our daily lives. Yet in the years since, gestural technology has failed to stick. Compared to the rapid uptake of voice-controlled systems once consumer-grade technology became available, it’s been a complete flop.

So, what’s the problem? Anyone who has trialled a Minority Report-style interface will immediately grasp the issue: gestural interfaces are tiring. The use cases to which the technology has been deployed mean we’re often using larger parts of our body to control what we’d normally do with fine motor control, and without the positive haptic feedback of having a mouse or keyboard in hand. It’s difficult and frustrating, and the visual design patterns that we’re used to don’t lend themselves well to this kind of interaction.

Recent breakthroughs in haptic feedback and hand-tracking might be set to change the fortunes of gestural interaction, with companies like Bristol-based Ultrahaptics combining both technologies into one tidy package.

The haptics provide the user with much-needed location awareness and confirmation feedback for otherwise difficult interactions. In the worlds of VR and AR, where interest is surging, hand-based interactions are desirable for their natural feel: they don’t break the illusion, and they remove the need for additional controllers, saving on hardware costs. Leap Motion has refocused its hand-tracking efforts on VR applications, and similar technology is already built into some laptop and desktop computers. Breakthroughs like Google’s yet-to-be-released Project Soli look like they could address the accuracy issues in gesture recognition, allowing tiny hand movements to trigger interactions.
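At its core, the recognition problem is turning noisy motion into a discrete intent. The toy sketch below classifies a horizontal swipe from a short window of tracked hand positions; real systems such as Leap Motion or Soli work from far richer skeletal or radar data, but the shape of the task is the same.

```python
# Toy swipe classifier over tracked hand positions. Thresholds and the
# normalised-coordinate input are illustrative assumptions, not any
# vendor's API.

def classify_swipe(x_positions, min_travel=0.15, max_wobble=0.05):
    """x_positions: normalised hand x-coordinates (0..1) over ~0.5 s.

    Returns 'left', 'right', or None when the motion is too small or
    too wobbly to count as a deliberate gesture.
    """
    travel = x_positions[-1] - x_positions[0]
    if abs(travel) < min_travel:
        return None  # too small: probably idle jitter
    direction = 1 if travel > 0 else -1
    # Reject back-and-forth motion: every step should roughly agree.
    steps = [b - a for a, b in zip(x_positions, x_positions[1:])]
    if any(direction * s < -max_wobble for s in steps):
        return None
    return "right" if direction > 0 else "left"

print(classify_swipe([0.2, 0.3, 0.45, 0.6]))    # -> 'right'
print(classify_swipe([0.5, 0.52, 0.49, 0.51]))  # -> None (jitter)
```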

The future could be looking brighter for gestural interfaces, but the issue of finding appropriate use cases outside of VR and AR still remains.

Summary

Many devices and technologies have been billed as the next big thing, ready to replace the smartphone as the centre of our digital lives. Every year, however, we interact with more and more technology, sometimes in very obvious ways and sometimes far more subtly. It’s not a question of what will replace the smartphone, but of what we will use alongside it. From the screen on your wrist to the display built into your glasses, from the voice assistant to the internet-of-things device that anticipates your needs, product designers have never had a broader canvas of technologies with which to design the interactions of tomorrow.
