On some days, during my one-hour commute from home to work, I manage not to speak a single word. That might not seem remarkable, except that my commute is on a train in one of the most densely populated cities in the US.
Still, not speaking at the start of my day doesn’t mean I haven’t been involved, in some way, with the people around me. During my commute I hear all kinds of conversations and voices, everything from verbal disputes about seats to mundane chit-chat. That’s because the most human medium for interaction is voice.
If we think of voice as a UI, it’s different from other UIs because the way we use our voice is, by nature, rooted in basic human interaction. Much like how I’m able to hear conversations on the train, when we interact with voice UIs (VUIs) we’re not only engaging with a device that can interpret our words; we also involve the people around us, willingly or not, who can hear what we’re saying and interpret its tone and subtleties.
“Listening is a lifetime practice that depends on accumulated experiences with sound. Humans have developed consensual agreements on the interpretation of these sounds. Languages are such agreements.”
When it comes to using VUIs in public, I still get side-eyed using Siri with other people around, especially on a crowded train. This is likely because VUIs have, almost overnight, made very private interactions, like adding things to your calendar, scheduling reminders, and generally interacting with your iPhone, public and in the open. That might not seem like a big deal; after all, strangers like me can already overhear the phone conversation you’re having with a friend on the train. But talking to a device just feels different.
VUIs are often called the most “efficient” UI, a designation used to defend their legitimacy, so the technology’s advancement tends to be tied to quantifying that efficiency. These attempts at tying a VUI’s viability to the time it saves seem wrong to me. If we validate VUIs only by the number of phone taps or screen clicks they save users, the technology may never reach its fullest potential or find ubiquity in our lives.
If we aren’t thinking about how our use of VUIs affects the space around us and the people who share it with us, we aren’t thinking hard enough.
Our voices, first and foremost, are used to sustain interpersonal relationships with our families, friends, and coworkers. What we share in these moments is much more than words; it includes emotion and nuanced expression.
With that in mind, VUIs should be designed in a way that preserves and enhances human elements of voice instead of suppressing them for the sake of efficient navigation structures.
Why should families care?
Children rely on their parents and caregivers as guides to how they view the world around them. As adults, we may take our seemingly small and mundane interactions for granted, but children depend on them to model how to navigate the real world with real people in real conversation. For these lessons to take hold, they need to be delivered face-to-face, specifically by parents or caregivers, through the meaningful use of language and expression.
Take story time, for example, as described in this Times article by Pamela Paul and Maria Russo:
“When you read with toddlers, they take it all in: vocabulary and language structure, numbers and math concepts, colors, shapes, animals, opposites, manners and all kinds of useful information about how the world works. What’s more, when you read out loud, your toddler connects books with the familiar, beloved sound of your voice — and the physical closeness that reading together brings.”
As VUIs proliferate, we need to take care that they don’t replace these interactions. A VUI product should never stand in for human, voice-to-voice interaction; it can, however, add to it.
It’s crucial to keep this relationship in mind while figuring out how to design products and experiences for parents and children. As designers, we need to focus on building a strong basis for genuine moments of active bonding, rather than creating something that puts parents in the back seat during their child’s development, a consequence of many of the VUI childcare products currently on the market.
If a device encourages behaviors in a parent that deny a child the parental nurture crucial to their development, it has failed as a childcare product.
Designing a Voice UI for parents and their children
Since my first day at Moment, I was taught that the value of design is found not just in users or experiences, but in how design can redefine and shape future relationships.
This summer, we’re being challenged to create a vision for the future and see how VUIs could be designed to empower the relationship parents have with their children.
Through research and the synthesis of our insights, we’ve narrowed our focus to how we could design a VUI to help parents connect their children to their native culture and language.
With the guidance and brain power of the designers at Moment, we challenged ourselves to look past the mental models that current products have built around VUIs. While we may have an idea of what voice assistants like Alexa or Siri could become, we wanted to think beyond the kinds of technology available today.
How might voice UIs be more aware of our health and emotions?
How might voice UIs make sound and space a more tactile and immersive experience?
How might voice UIs learn from our individual speech?
We ask ourselves these kinds of questions to help us rethink how our voices might one day be a powerful force in raising a more culturally rich and diverse generation of children in years to come.
Stay with us. We’re super excited to share what we come up with!
Each summer Moment creates a research project about an exciting topic we see on the horizon. In the past, our teams have researched self-driving cars, way-finding with Google Glass, consumer-facing media consumption, symptom and medication management for cancer patients, and in 2016, Virtual Reality in the classroom with Peer.
For Moment’s 2017 summer project, our group of designers, interns, leaders, and experts will explore the intersection of Voice User Interfaces (VUI) and parents.
Thomas Carroll, Blue Cuevas, Chantal Jahchan, and Jason Kim are interns at Moment in New York. Thomas is pursuing an MDes in Experience Design at VCU Brandcenter, Blue is pursuing an MDes in Design Strategy at IIT Institute of Design, Chantal is an incoming senior at Washington University in St. Louis pursuing a BFA in Communication Design, and Jason is a recent graduate of Rensselaer Polytechnic Institute with a BS in Interdisciplinary Design. They’re currently exploring the intersection of voice user interfaces and first-time parents. You can follow the team’s progress this summer on Momentary Exploration.