How Human Interaction is Shaping the Future of Technology

Hector Ouilhet
Published in Google Design
Jan 7, 2020

As a species, we humans have been talking to each other for a long time. It’s only very recently that we’ve started to talk to machines. In fact, if you think of human speech as having been around for 1 day, then talking computers have been around for just 1 millisecond. This recent development might sound like a flash in the pan, but it actually attests to the endurance of conversation as humanity’s preferred means of communication. 🔥

As a human-centric designer, my goal has always been to help people interact with new technologies by making them more intuitive and approachable. When I took on the role of head of design for Google Search & Assistant, I had an opportunity to realize my life’s goal and teach this nascent technology to interact with us in ways that are similar to how we interact with each other. The time I spent working on the Google Assistant has only strengthened my conviction that conversation, the most enduring form of human interaction, should be the design paradigm for future digital products and services. 💬

Awkward exchanges with technology

A conversation is essentially just a series of back-and-forth exchanges: one person speaks while the other listens, and vice versa. We process the input from this exchange (be it new information, changes in mood, or even body language) by contextualizing it within the framework of our existing knowledge about the topic and the speaker. In doing so, we continuously refine our understanding of both the topic and the other speaker. This doesn’t just allow us to move the conversation forward; it helps us anticipate what the other person might say. And we do all of this, for the most part, spontaneously. 👯‍♂️

Now contrast this type of exchange with the recent exchanges you’ve had with technological interfaces. Not so dynamic and fluid, right? This is because most interfaces behave in largely predetermined ways; machines just don’t learn about and adapt to us in the same way we do with them. But please, don’t blame the machines. We humans have become so used to accommodating technology — that is, learning to speak its language and working within its constraints — that we’ve stopped wondering how the world might look if our technology could carry on a conversation. 🤓

What would your printer say if it could talk?

Imagine, for a second, you relied on a standard office printer to print out weekly reports. Nowadays, even your run-of-the-mill printer has countless fancy features. And yet, come Friday, you shuffle over to the printer and push, by sheer force of habit, the same buttons in the same sequence to get the same result. Naturally, your report is never formatted exactly how you want (perhaps the margins are too narrow or the images are too dark), so you just accept the result because it would be too time-consuming to figure out how to get the thing to do exactly what you want it to. Nobody likes doing this type of tedious work, which is why it usually got passed off to me back when I was an intern: “Hey Hector, can you print this out for me, same as last week but just skip the images?” I couldn’t delegate it to anybody else, because the only thing lower in the hierarchy than me was, well, the printer. 🖨

But imagine how enjoyable my internship would’ve been if I could’ve talked to my printer. Nowadays, some smart printers can already handle basic voice commands, eliminating some of the inefficient button pushing. But what if these smart printers could take it a step further and carry on a context-sensitive conversation that accounts for changing needs? One might, for instance, make helpful suggestions like, “It looks like you’re just printing a quick draft, so should I make it black and white to save some ink?” That would make it not just a useful tool, but an assistive agent. 💥

Learning to let go

A few years ago, my family and I were sitting in our living room and getting ready to watch the World Cup. I was fiddling around with my remote control when my 3-year-old daughter, Ana Julia, turned to the Google Home and said, “Play the Mexico game in the living room!” The TV suddenly turned on and she just started watching. I was beaming with pride. I was so proud of my teams, both the one on the football field and the one at Google who made the experience possible. But most of all, I was proud of my little girl who, in the most matter-of-fact way, asked this piece of technology to do something the same way she would’ve asked her father to. ⚽️

But then, at the halftime break, she asked the Google Assistant to do something she knew I’d never do: “Play ‘Let it Go’ three times!” I was tempted to tell her it wasn’t going to work, because I knew how difficult it was to fulfill a multi-intent command. But to my amazement (and perhaps momentary annoyance), our Google Home actually played the song, three times in a row. 🥶

While the chorus blasted in the background (“let it go, let it go, let it gooooo!”), I couldn’t help but reflect on letting go. My team and I didn’t explicitly design or hardcode this feature; instead, we designed higher-order models that could parse different things people might ask for and fulfill them in one flow. In other words, we taught the Google Assistant the basic principles of conversation and trusted it to generalize for new and unexpected use cases. Obviously my daughter doesn’t understand how the technology works, but what she does understand is that she can use her voice to get technology to do things for her. 🙎🏽‍♀️

And we shouldn’t take this for granted. Our whole lives we’ve been using the most unnatural interfaces, from push-button telephones to computer mice, to operate our technology. These machine-centric interaction models are now so deeply ingrained in us that, like me clutching my remote control, it’s easy to forget that it doesn’t need to be this way. Technology is ready to start listening. We just need to have the courage to speak up. 🗣

Just be yourself

When I moved from Mexico to the US, I had difficulties expressing myself in English. And because personal expression is so important to me, I had a lot of frustrating experiences where I longed to use the Spanish words I felt more comfortable with. But I also knew that I wanted to make a life here; I knew that this language was to be my new medium of communication. So I was patient with myself and, thankfully, so too were my new friends and colleagues. 🇲🇽🇺🇸

We’ll all have to be patient with this new conversational technology. We still have a lot of work to do to improve natural language understanding and speech technology. In the meantime, you’re bound to have a few frustrating interactions where, like me in my first years here in the US, you long for your accustomed medium. But deep down, you know the world is changing and many of these interaction models are becoming obsolete. 🌎

We should just be grateful that we don’t need to learn a new language to interact with this new technology. We just need to get used to speaking it with somebody — or better yet, something — that is still learning to talk. So be patient, be open-minded, and just be yourself. Only then will the future of technology finally take a human shape.

This might be the end of the article, but hopefully the beginning of a longer conversation. I’m eager to expand my thinking in this space and would love to hear your perspectives. 🎆

For more about this conversation, here’s my latest talk at CX SF 2019 🕺🏽 and my podcast episode in Forrester’s What It Means series (The Ambiguity-Laden March Toward People-Centric Design). 👋🏽

Illustrations by Helen Slavutsky
