The No.1 UX problem of AI voice assistants
Whether you are using an Amazon Echo or a Google Home, the No.1 problem a user faces is: what should I say to it? If I say something and it doesn’t understand me, that makes me feel stupid.
Here is how a user starts using a voice assistant today:
- Open the user guide app or website;
- Search the guide for the right command;
- Say the command words to the AI as instructed;
- Get feedback;
- Repeat, to keep learning more commands.
It’s horrible! People are already used to learning how to use an app by simply using it. And “Don’t make me think” is still one of the best criteria for a good user experience, but no voice assistant meets it properly today.
So far, AI is not a human being; talking to an AI assistant is like talking to an uneducated child. The key thing here is the boundary: which sentences does the AI understand, and which does it not?
Here is the solution: make the AI a properly proactive talker that speaks up and teaches the user in specific situations.
Examples:
1.
Challenge: How do we let a driver know that the in-car AI assistant has a feature called
“Use your voice to set up multiple destinations”?
It is just one of the many features the AI has; we cannot simply show the driver the full feature list, as that would be clumsy.
The solution: if we detect that the user arrives at one destination and then sets up a different destination, the AI can say:
“Next time, if you want to go to the office and then the train station, you can say, ‘Go to the office first and then go to the train station,’ and I will set up both destinations and plan a suitable route for you.”
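The trigger described above can be sketched as a simple rule over the trip event log. This is a minimal illustration under assumed names: the event dictionaries, the hint text, and the `hint_shown` flag are all hypothetical, not any real in-car API.

```python
# Hypothetical hint text for the multi-destination feature.
MULTI_STOP_HINT = (
    "Next time, if you want to go to the office and then the train station, "
    "you can say: 'Go to the office first and then go to the train station.'"
)

def maybe_hint(events, hint_shown=False):
    """Return the hint if the driver arrived somewhere and then
    immediately set a different destination; otherwise None."""
    if hint_shown:
        return None  # teach each feature at most once, never nag
    for prev, curr in zip(events, events[1:]):
        if (prev["type"] == "arrived"
                and curr["type"] == "set_destination"
                and curr["place"] != prev["place"]):
            return MULTI_STOP_HINT
    return None

# Example trip: drive to the office, arrive, then set a new destination.
events = [
    {"type": "set_destination", "place": "office"},
    {"type": "arrived", "place": "office"},
    {"type": "set_destination", "place": "train station"},
]
```

Calling `maybe_hint(events)` on this log returns the hint exactly once; passing `hint_shown=True` suppresses it, which matters because a tip that repeats every trip would be as annoying as the problem it solves.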
2.
Challenge: How do we let a user know that the AI has shortcut commands?
If the user is listening to music and touches the screen to switch to the next song, the AI assistant can chime in and say:
“Next time you can say ‘next song’ or ‘last song’ to switch music.”
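The same pattern generalizes: whenever a touch action has a voice equivalent, surface the shortcut once. A minimal sketch, where the action names and the hint table are assumptions for illustration:

```python
# Hypothetical map from touch actions to their voice-shortcut hints.
VOICE_SHORTCUTS = {
    "touch_next_song": "Next time you can say 'next song' to switch music.",
    "touch_last_song": "Next time you can say 'last song' to switch music.",
}

taught = set()  # shortcuts the user has already been told about

def shortcut_hint(action):
    """Return a one-time voice-shortcut hint for a touch action,
    or None if the action has no shortcut or was already taught."""
    hint = VOICE_SHORTCUTS.get(action)
    if hint and action not in taught:
        taught.add(action)
        return hint
    return None
```

The first time the user touches "next song" the assistant offers the tip; after that, `shortcut_hint` stays silent for that action.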
All in all, even though AI is not that smart yet, it can still teach users how to interact with it in the right moments, inside the experience itself.
