The Rise of The Voice Assistant
Voice interfaces have been adopted faster than nearly any other technology in history. To get a little perspective on the voice boom, consider the rise of Amazon.
The Echo Dot was the top-selling product on Amazon.com at Christmas for three years in a row (2016–2018), while the Alexa app topped the Christmas free app charts in the iOS App Store and Google Play Store in both 2017 and 2018.
Research by Gartner and the US Census Bureau forecasts a rise in US household smart speaker ownership, from 7% of households in 2016 to a staggering 75% by 2020.
Use of voice assistants is set to triple over the next few years, with TechCrunch reporting an estimated 8 billion assistants in use by 2023. One thing is for sure: voice assistants are here to stay.
A Crowd of Assistants
If you live in a connected home these days, chances are it’s powered by either Amazon or Google. That journey very likely started with a simple choice: the smart speaker you happened to buy first. Most people then stick with that brand throughout their home, for ease of connection and loyalty to the ecosystem.
Let’s imagine you live in an Alexa-powered connected home. You turn up at work, which happens to run Cortana because of the office’s Microsoft infrastructure. You then head down to the Mercedes dealership to buy a new C-Class, which in turn runs its own bespoke voice assistant, Hey Mercedes.
Pretty quickly it becomes clear that even though you have one assistant at home, you’ll be interacting with a crowd of assistants in your day-to-day life as you move through different environments. This is one of the key challenges that we face at German Autolabs: how do we get Chris to communicate with all these different voice assistants?
The Magic Word: Arbitration
Voice technology is already at the point where wake words like Ok Google and Alexa… are technically unnecessary. A new breed of contextually aware assistants will understand when you are addressing them, and which tasks they should engage with.
Here’s how you might interact with the in-car voice assistant of the future:
Navigate me to Soho and find me a parking space. Oh and it’s our anniversary so I need a table for two tonight at a nice Italian restaurant — and don’t forget to send Lucy some flowers to her office.
Automotive OEMs don’t want to enter the business of building a restaurant booking system or a wholesale flower-delivery service, just as Deliveroo or Interflora aren’t exactly falling over themselves to build cars. This is where arbitration comes in. The central Natural Language Processing (NLP) unit recognizes which intents to handle internally, and which to hand off to other assistants and platforms: internal maps for the navigation, Foursquare for the table, Alexa to pay the local florist for the flowers, and Google Assistant for the calendar invite to the spouse.
No wake words, no platform exclusivity — only functionality.
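To make the idea concrete, here is a minimal sketch of what such an arbitration layer might look like: a central router inspects each recognized intent and decides whether to handle it in-house or hand it off to a partner platform. All names here (PLATFORM_ROUTES, arbitrate, the intent schema) are illustrative assumptions, not an actual German Autolabs or partner API.

```python
# Hypothetical routing table: which platform should handle each intent type.
# "internal_maps" is handled in-house; everything else is a hand-off.
PLATFORM_ROUTES = {
    "navigate": "internal_maps",
    "book_table": "foursquare",
    "order_flowers": "alexa",
    "calendar_invite": "google_assistant",
}

def arbitrate(intents):
    """Split recognized intents into internally handled ones and hand-offs."""
    internal, external = [], []
    for intent in intents:
        platform = PLATFORM_ROUTES.get(intent["type"])
        if platform == "internal_maps":
            internal.append(intent)
        elif platform is not None:
            external.append((platform, intent))
    return internal, external

# The anniversary request from above, decomposed into intents by the NLP unit:
request = [
    {"type": "navigate", "destination": "Soho"},
    {"type": "book_table", "party_size": 2, "cuisine": "Italian"},
    {"type": "order_flowers", "recipient": "Lucy"},
]

internal, external = arbitrate(request)
```

In this sketch, navigation stays with the in-car system while the table booking and flower order are routed out, which is the essence of arbitration: one front-end conversation, many back-end platforms.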
What will the journeys of tomorrow look like? One day we might be passengers in Level 5 self-driving cars, fully autonomous and requiring zero driver input. This is the point where cognitive arbitration will come into its own:
Drive me to a boutique hotel in Paris and book me a room. I’ll watch a couple of films on the way, something Oscar-winning, and something funny. Oh, and order a new pair of my favourite running shoes to the hotel; I’ll go for a jog when we arrive.
As technology advances, the number of user touchpoints will decrease, while the number of arbitration points between the crowd of assistants will multiply. At German Autolabs, our products strive to make it easier for this crowd of assistant platforms to talk to each other, enabling brands to deliver a next-generation journey and a better customer experience along the way.
Don’t forget to sign up for the German Autolabs newsletter for more insights on in-car voice assistants and everything automotive. Thanks for reading.