Day 2 in the Desert — Talking with Robots / Hey, Ford’s doing pretty good!
I’m here in Phoenix, Arizona for the User Experience Professionals Association (UXPA) conference, where for three days I’ll be highlighting my experiences and top learnings in design as it relates to UI, UX, and future technologies like voice and AI.
Yesterday at UXPA brought a whirlwind of talks on topics ranging from voice interfaces to designing technology for an aging generation. In this post, I’ll highlight some learnings from the voice interface talks, as well as a quick epiphany about where Ford stands on design and design thinking.
Talking with Robots: Are we there yet?
TL;DR
- While voice interfaces like Alexa do some things well, they can also frustrate users, with failure rates of 30–50% observed in one field study
- Recommended Reading: Talking with machines? Voice interfaces and conversation design
Let me start with the positive. In one of yesterday’s talks, the speaker highlighted something I hadn’t thought about when it comes to the smart speakers we use, seen here:
When you think about it, this goes a bit deeper than Alexa simply keeping you off your phone. For example, imagine this scenario: you’re at home winding down at 9 PM and want to quickly check your morning schedule for the next day. With a smart speaker, you can get the answer easily and efficiently just by asking, “Hey Alexa, what time is my first meeting tomorrow?” In a few seconds and with one question, you have your answer and can go about getting ready for bed.
What happens otherwise? You open your phone or laptop to check Outlook or Google Calendar and get sucked into replying to emails for the next hour. Your desire to check just one thing, your first meeting in the morning, leads to a whole series of other work-related tasks, and before you know it, it’s 10 PM. With a smart speaker, however, you can focus on just that one mini-task, because Alexa or Google Assistant keeps you from falling into the wormhole in the first place.
Now, as I mentioned before, it’s certainly not all positive when it comes to voice interfaces. In one of the afternoon sessions, Professor Stuart Reeves walked through some results from his team’s study, in which they deployed five Alexa devices to five households in England.
I won’t go through all of their findings, but a few things stood out. First, they observed a 30–50% failure rate for Alexa requests. In practice, this looks like users having to repeat a request numerous times, or Alexa failing to pick out the requester’s voice amid the murmur of a background dinner conversation. As Reeves pointed out, voice interfaces like Alexa are predicated on one-at-a-time interactions (although this is starting to change): a single user sitting at home with little background noise, asking Alexa something. As we listened to several clips from the study, however, it became clear that the situations Alexa actually finds itself in are much more complicated. Kids are laughing, several people are talking over each other, dishes and cutlery clang in the background, or participants are battling over who gets to control Alexa. The simple world where a user sits at home silently reading a book and asks Alexa, “What is the weather tomorrow?” isn’t nearly as common as originally assumed.
Another interesting point that Reeves highlighted is the idea of progressivity: that as people, we seek to keep a conversation moving forward. This shows up in both good and bad examples of the way Alexa handles errors. In the image above, you can see that when Alexa isn’t sure what the user is asking, it responds, “You want to hear a station for b b intro right?”. In this case, the error message keeps the user progressing and leaves room for a correction. More common, however, are scenarios where Alexa stops the conversation in its tracks. See the example below, where a family wanted to play a quiz game.
In this case, Alexa doesn’t help the users progress at all, instead effectively ending the conversation after each request. The family then has to reuse the trigger word (“Alexa”) to make a new request each time. (Reeves mentioned that, after four failed attempts, the family finally did get Alexa to play the family quiz game.)
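For anyone who builds voice skills, the progressivity idea maps onto something very concrete: whether an error handler ends the session (forcing the wake word again) or keeps the microphone open with a reprompt so the user can repair the conversation. Here’s a minimal sketch of that distinction using the ASK SDK for Node.js, written in TypeScript; the handler and the spoken wording are my own hypothetical example, not code from Reeves’s study:

```typescript
import * as Alexa from 'ask-sdk-core';

// Sketch of a "progressive" fallback handler: when Alexa doesn't
// understand, offer a best guess and keep the microphone open via
// reprompt(), instead of ending the session outright.
const FallbackIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest' &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent'
    );
  },
  handle(handlerInput) {
    // A dead-end version (the kind the family ran into) would be:
    //   return handlerInput.responseBuilder
    //     .speak("Sorry, I don't know that.")
    //     .withShouldEndSession(true)
    //     .getResponse();
    // The progressive version guesses, invites a correction,
    // and keeps listening for the answer.
    return handlerInput.responseBuilder
      .speak("I'm not sure which game you meant. Did you want the family quiz?")
      .reprompt('You can say: play the family quiz.')
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(FallbackIntentHandler)
  .lambda();
```

The key detail is the reprompt(): it keeps the session open so the user can answer without saying “Alexa” again, which is exactly the kind of progress-preserving repair Reeves described.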
So where does this leave us? Alexa, Google Home, and other smart speakers are great at simple tasks in uncomplicated scenarios (sitting quietly at home, asking for a specific song or tomorrow’s weather), but the technology isn’t quite ready to be embedded as a true conversationalist in our hectic, complicated lives.
Ford and Design / Design Thinking: Hey, we’re doing pretty good!
Earlier in the day I sat in on a talk about the role of Design Thinking in strategy making. While the talk didn’t cover much new ground (various design thinking frameworks and examples of design thinking in use), what stood out to me more was the question-and-answer section at the end. A few of the questions and remarks from the audience:
- “This is great and all to go from A to A+, but right now we are just trying to go from 0 to say a C or a B…what can we do?”
- “At my organization we have 30 UX designers, but we are just looked at as wireframe monkeys…”
- “A lot of times, we’ve gone through the whole design process, conducted user research, and come up with solutions based on this, but management has already decided what they are going to do.”
As I sat and listened to these remarks, I had a quick epiphany, realizing how far along FordLabs (and Ford) is compared to the situations behind some of these statements. Sure, we still have plenty of room to grow, but as a designer at FordLabs, I feel very much empowered to use the design process to figure out problems, and the product owners we work with are open to this new way of working. It was a cool moment that made me stop and think: hey, we’re doing alright here!
That’s all for me for now, off to Day 3 in the Desert!