Design experiences with Google Home

Part 2 of my design learnings from building a Google Home agent hack

Recently I joined a fashion tech company and participated in my first hack week. We built an ‘in-home personal stylist’ agent for Google Home, with the vision of advising customers on what to wear based on the weather, their location, events (music gigs, travel), and what is currently trending. You can read about the process and hack week learnings here.

This was fascinating from a design experience perspective, but as a product manager I felt an even greater responsibility to design with meaning, make ethical decisions, and educate our users. I’ve collected some reflections below from experimenting with voice and Google Home.

Design with meaning

I wanted to build something meaningful in a fashion context. During user research with the Google Home stylist, users divulged how much time they spend deciding on outfits, sometimes changing five times before leaving the house, and I began to deeply understand that fashion is about so much more than clothing. For many, it’s a form of self-expression.

The Jobs-To-Be-Done (JTBD) framework allowed us to boil down the ‘jobs’ and assume a need for honest advice and empowerment when making fashion decisions, both functionally and emotionally.

The experience I envision and want to test is an honest, no-BS stylist that gives our customers bias-free advice in the safe, playful environment of their own home. Customers share intimate concerns, flaunts, or discomfort around body shape in return for honest, empowering feedback and specific style advice and recommendations. With the goal of building a trusted relationship, the stylist ensures the customer leaves the house feeling confident and ready to storm the streets with their style.

The next phase of user research for the stylist conversations will attempt to validate or disprove this assumed use case in a home environment. Will customers really divulge such personal information to a machine?

User location and context

People have challenged the potential of voice, but the contextual shift from mobile voice assistants to a connected device that responds to you anywhere within the home creates a new opportunity.

A user’s home environment is a safe space for people to be playful and completely themselves. Our lives are built on conversations and stories, and this is how we build relationships with those closest to us. Engaging users at home through conversation can therefore be a very powerful opportunity, but we need to be sensitive about how we build on the medium.

As with all user research, one of the most critical considerations is the user’s context. The persona we design needs to be cross-platform, meaning we interact with the same ‘character’ through voice at home, then via Messenger text on the train as we swipe through new-season recommendations the ‘character’ has sent us. The conversation should be seamless as the user switches from chatting with Google Home to messaging the same bot on mobile as they walk out the door (see the sketch below).
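To make ‘seamless’ concrete, here is a minimal sketch of channel-agnostic conversation state in TypeScript. It is an illustration under assumptions, not how our hack was built: the session store, intent names, and slot structure are all hypothetical.

```typescript
// Hypothetical sketch: one conversation 'character' shared across channels.
// Keyed by user, not by device, so the conversation follows the person.

type Channel = "google-home" | "messenger";

interface StylistSession {
  userId: string;
  lastChannel: Channel;
  lastIntent: string;            // e.g. "outfit-recommendation"
  slots: Record<string, string>; // context gathered so far: weather, occasion, ...
  updatedAt: number;
}

const sessions = new Map<string, StylistSession>();

function handleMessage(
  userId: string,
  channel: Channel,
  intent: string,
  slots: Record<string, string>
): StylistSession {
  const existing = sessions.get(userId);
  const merged: StylistSession = {
    userId,
    lastChannel: channel,
    lastIntent: intent,
    // Carry over slots gathered on the previous channel, so the user
    // never has to repeat themselves after switching devices.
    slots: { ...(existing?.slots ?? {}), ...slots },
    updatedAt: Date.now(),
  };
  sessions.set(userId, merged);
  return merged;
}

// At home by voice: the stylist learns the weather and the occasion.
handleMessage("amber", "google-home", "outfit-recommendation",
              { weather: "rainy", occasion: "gig" });

// On the train via Messenger: the same session resumes, so the bot can
// send swipeable recommendations without re-asking about the weather.
const resumed = handleMessage("amber", "messenger", "browse-recommendations", {});
console.log(resumed.slots); // { weather: "rainy", occasion: "gig" }
```

Keying the session on the user rather than the device is the design choice that keeps the ‘character’ consistent: whichever channel the next message arrives on, it picks up the same context.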

Long term, I would like developers to be given contextual data that helps them understand where the user is in the home. Sonos sets up its speakers with context about where each speaker is situated within the room and how the sound (output) can travel. For Google Home, will sound engineers be able to map where a user’s voice (input) is coming from and use this context to infer assumptions? For example: the user calls out to the Google Home device in the lounge, ‘What’s the weather today?’ The device gets context that the user’s voice is coming from the bedroom and assumes, based on preset preferences, that the user also wants an outfit suggestion to match the weather. The sketch below plays out this idea.
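To my knowledge Google Home exposes no such room-detection capability today, so this TypeScript sketch is purely speculative: the room context, the preference presets, and the function names are all assumptions made for illustration.

```typescript
// Speculative sketch of room-aware behaviour. Everything here
// (RoomContext, roomPreferences, answerWeather) is hypothetical.

type Room = "bedroom" | "lounge" | "kitchen";

interface RoomContext {
  userId: string;
  room: Room; // inferred from where the voice (input) is coming from
}

// Preset preferences: which extra behaviours to trigger per room.
const roomPreferences: Record<Room, string[]> = {
  bedroom: ["outfit-suggestion"], // getting dressed: offer style advice
  lounge: [],
  kitchen: [],
};

function answerWeather(ctx: RoomContext, forecast: string): string {
  let response = `Today's forecast: ${forecast}.`;
  // If the voice came from the bedroom, assume (per the presets)
  // that the user is getting dressed and offer an outfit suggestion.
  if (roomPreferences[ctx.room].includes("outfit-suggestion")) {
    response += " Want an outfit suggestion to match?";
  }
  return response;
}

console.log(answerWeather({ userId: "amber", room: "bedroom" }, "rainy, 14°C"));
```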

From my personal experience, the first few minutes of using Google Home are painful, and even more painful to watch in others: a lot of craned necks and robot-talking users, especially in the unnatural testing environment of our lab. But based on our research, the increase in comfort and casual conversation with the device is huge once it is in a home environment. One user exclaimed while sitting in their lounge, ‘I forget I’m chatting to a machine.’ Our next round of JTBD interviews will therefore select users who have already engaged comfortably with the device in their home environment for at least two weeks prior to testing. We will test how playful and experimental users are in their natural home environment once they are already comfortable with the device.

Design and build with morals

This technology was fascinating, and learning about the human factor and the potential of the data was incredible from a product perspective. However, I grew increasingly concerned with ensuring we build technology ethically, treat each other with respect, and stay privacy-conscious. If users allow companies to collect their data, we must give them real value in return.

As technology shapes how people live and behave, it is our responsibility working in tech to ensure we build products in a moral manner. Building an agent for Google Home demonstrated the need to engage other industries, not only for inspiration, but also for expertise, ethical advice, and direction.

To build the best team for a Google Home product, I would now engage a psychologist, a script-writer or storyteller, and a data ethics expert and/or philosopher to inject knowledge, challenge decisions, and support the learning and development of the core team (UX, developer, and product) in making fair and human decisions along the way.

How will we ensure smart assistants are designed in a bias-free, gender-neutral way in the future? Previous voice assistants have used women’s voices and names, such as Siri, Cortana, and Alexa, because people are said to prefer female voices. This is a topic in itself, but is the preference for female voices due to nature or nurture, and can we change what we build today so this isn’t the case for future generations? Can we build a cute, gender-neutral voice and grow to love that instead?

The unconscious bias and associations between a woman’s voice and the ‘assistant’ role were the deciding factor for us in building on Google Home over Amazon Echo.

Google Home is not perfect, but it has put some consideration into designing against unconscious bias, which deserves credit. The core device assistant does have a female voice, but agents’ voices can be customised (with two female and two male voices to choose from).

The conversational interaction between user and device on Google Home also allows for a more natural and respectful exchange: ‘Hey Google (object), (call an agent)’, which starts a separate conversation. Alexa interactions, however, create an authoritative relationship, with the user commanding, ‘Alexa (name), (request a skill)’, e.g. ‘turn on the music please’.

Manners or not, the derogatory remarks we observed from users towards both Alexa and Google Home devices when they did not perform could be quite harsh. This was particularly disturbing when directed at a female voice, or at any gendered assistant.

We must design with compassion, make ethical decisions, and ensure future generations have complete awareness of potential unconscious bias. We are shaped by our previous experiences. Can we create new norms and expectations for new generations?

What about our data in the Google ecosystem and our own companies?

As a product manager I use data to provide more meaningful, useful results to our users, and I love getting creative with this aspect of the design experience. This is already a sensitive subject that must be treated with care, and it quickly becomes greyer when data is used in such a personal context as our own homes. We must be transparent about data usage and collection, and make privacy agreements clear to our users.

Some leading ethical companies are making data commitments to their users as part of their vision and business philosophy. Vai Kai, a toy company I am deeply fond of, who live and breathe their philosophy in everything they do, have committed not to store personal data on their toys and to make their privacy statements comprehensible to six-year-olds. TechCrunch covered some other approaches companies are taking here.

Consultancies and agencies like Simply Secure are also appearing, specialising in advising companies on fair data practices. I hope more of this surfaces as the topic gains wider focus.

Be playful, but be moral

The customer delight with our use case was unbelievable. In our hack presentation, over 100 people tested it unprompted in a noisy lab environment, and the conversation flow worked successfully ~95% of the time, with people naturally wanting to ask the stylist, ‘What should I wear today?’

The intelligence is impressive, and this new technology creates a magic that makes us feel like we’re in a sci-fi movie. The experience needs to be felt: watching a demo is one thing, but interacting with the assistant makes you really understand the opportunity.

Get creative with voice, dig deep into the design details, learn about what drives us, be playful, but be moral. If you can’t make ethical judgments, hire people who can.

Let’s keep it real.