New Interfaces and Emerging Questions

Lately, there’s been a lot of excitement around new forms of human-computer interfaces. The way users interact with their computing devices is becoming more varied as we shift beyond traditional point-and-click GUIs. Textual conversations, voice commands and VR/AR experiences are new user interfaces that raise interesting questions about how their ecosystems will continue to develop and impact the markets they penetrate.

For example, a fairly obvious question for those building these new interfaces is: how will we track, analyze, and guide user behavior in these new environments?

This is a challenge being tackled by companies building products across all types of interfaces. For instance, Dashbot provides bot developers with analytics tools to help increase user engagement, acquisition and monetization. They use NLP-based technology to derive insights about user sentiment and relate that sentiment to outcomes within a given conversational context.
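To make the idea concrete, here is a minimal sketch of what relating conversational sentiment to outcomes can look like. This is not Dashbot's implementation; the word lists, the scoring rule and the `converted` outcome field are all illustrative assumptions, and a real system would use far richer NLP models.

```python
# Toy lexicon-based sentiment scoring over chat transcripts, bucketed
# against a conversation outcome (here, whether the user "converted").
# Purely illustrative -- not how any specific vendor does it.

POSITIVE = {"great", "thanks", "love", "perfect", "awesome"}
NEGATIVE = {"broken", "refund", "angry", "terrible", "cancel"}

def sentiment_score(messages):
    """Naive sentiment: (+1 per positive word, -1 per negative word) / total words."""
    words = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    if not words:
        return 0.0
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / len(words)

def outcome_by_sentiment(conversations):
    """Group conversion rate by sentiment bucket (negative/neutral/positive)."""
    buckets = {"negative": [], "neutral": [], "positive": []}
    for convo in conversations:
        s = sentiment_score(convo["messages"])
        key = "positive" if s > 0 else "negative" if s < 0 else "neutral"
        buckets[key].append(convo["converted"])
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}

convos = [
    {"messages": ["This is great, thanks!"], "converted": True},
    {"messages": ["My order is broken, I want a refund"], "converted": False},
]
print(outcome_by_sentiment(convos))
```

Even this crude version shows the shape of the analytics question: once each conversation has a sentiment signal and an outcome, you can start asking which tones of conversation actually drive engagement or monetization.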

In Virtual Reality, Eyefluence provides eye-interaction technology for HMD manufacturers to improve how users interact with their devices. The company enables devices to accurately capture eye movement and then use that information as a means of enabling eye-interaction with VR experiences.
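As a rough illustration of how captured eye movement becomes an interaction, here is a sketch of the simplest gaze-interaction baseline: dwell-based selection, where a gaze held inside a target long enough counts as a "click." The threshold, target model and sample format are all assumptions for illustration, and Eyefluence's own approach is reportedly more sophisticated than simple dwell.

```python
# Sketch of dwell-based gaze selection: a fixation that stays inside a
# target's bounds for DWELL_THRESHOLD seconds is treated as a selection.
# Illustrative baseline only -- not any vendor's actual technique.

DWELL_THRESHOLD = 0.6  # seconds of sustained gaze required to select

class GazeTarget:
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

    def contains(self, gx, gy):
        return (self.x <= gx <= self.x + self.w
                and self.y <= gy <= self.y + self.h)

def select_by_dwell(samples, targets):
    """samples: list of (timestamp, x, y) gaze points, in time order.
    Returns the name of the first target dwelled on, or None."""
    dwell_start = {}  # target name -> time the gaze first entered it
    for t, gx, gy in samples:
        for target in targets:
            if target.contains(gx, gy):
                start = dwell_start.setdefault(target.name, t)
                if t - start >= DWELL_THRESHOLD:
                    return target.name
            else:
                dwell_start.pop(target.name, None)  # gaze left; reset timer
    return None

button = GazeTarget("buy", 100, 100, 50, 50)
gaze = [(0.0, 110, 110), (0.3, 120, 115), (0.7, 115, 120)]
print(select_by_dwell(gaze, [button]))
```

The design challenge this baseline exposes is exactly why eye-interaction is hard: dwell timers feel slow and trigger accidental selections (the "Midas touch" problem), which is why companies in this space work on richer eye-movement signals instead.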

Demo of The Eyefluence Product at AWE 2016

Another interesting question to discuss is which use cases these new interfaces will be best suited to handle.

It’s becoming clear that once the hype settles (e.g., “Bots are the future!” “VR will transform the world!” etc.), very targeted use cases will be the focus for most companies building businesses in these spaces. Even further, it’s likely different use cases will align with different interfaces.

For instance, bots seem pretty well-suited to handle short-form customer interactions, particularly those related to service issues. However, are bots the best interface for users to make high-intent purchases? Conversational commerce is certainly getting a lot of attention, but at the end of the day, if someone is looking to buy a new refrigerator, are they going to do it through a dialog box with a cute title (maybe Fridge.io, FreezrBot or StainlessSteelr)? Or will they want to see, touch and feel a refrigerator before they make the purchase decision?

Similarly for VR, which consumer use cases will ultimately prevail? Gaming, live events and entertainment seem to be the obvious answers here, but new use cases are being explored every day (e.g., health and wellness, productivity, tourism, etc.) that begin to spice things up a bit.

Taking it one step further, how will all of these interfaces interact with one another?

Once use cases align with interfaces, the traditional consumer journey across all kinds of markets will look fundamentally different than it does today. We already saw this when mobile computing was introduced to the mass market. Marketers across all industries scrambled to understand which devices consumers were using and how they used them. I can remember my time as a consultant, sitting in countless executive discussions and listening to leaders scramble to figure out what their “mobile strategy” was going to be (as if this were truly distinct from their general customer experience strategy).

Perhaps in the near future, the majority of us will discover a new product through our mobile phones (Instagram blogs anyone?), throw on our VR headset to see what it would look like in our apartment and then scream out “Alexa! Buy me this new coffee table!” Can you imagine what corporate executive meetings are going to look like when this happens? (“What’s our Alexa strategy?!”)

The democratization of computing devices has created the need for better ways to interface with them. This inevitably leads to questions about the development of the ecosystems around these technologies, in addition to the markets they access. I certainly do not know the answers, but they will become clearer over the next 2–3 years.