How design is being transformed by things we can’t see

Owen Williams
9 min read · Nov 4, 2016
Designing the invisible, Sandy van Helden

Design is becoming more complex every day, and over the last decade it has finally gotten a seat at the table: it’s important enough to have a say in every aspect of business. Companies across the world are tasking executives with being design-driven, but how do you actually get there?

‘Design thinking’ can be traced back to Stanford University in the 1970s, where a paper focused on using creativity in engineering education. The paper argues that design should be part of the curriculum because it helps students conceptualize ideas better in a vastly shorter amount of time.

Another paper from Stanford argues similarly, saying that such thinking “encourages a flexible and interdisciplinary way of working which abandons inappropriate mental barriers and stereotypes.”

The university, wanting to help engineering students think differently, forced incoming students to spend two weeks doing freehand sketching, regardless of their interest in drawing.

The professors found that “many students find themselves in the pleasant situation of having learned to do something they were convinced they never could do” and that “this often makes them question other behaviors which they assumed they had no talent for or otherwise assumed were unavailable to them.”

It’s clear that design thinking is pervasive throughout most businesses in 2016, but what happens when you’re designing for the future? Design is always present, but what about when it isn’t visible?

Assistants are just the beginning

With this in mind, the role of the designer is changing drastically. We’re now designing everything from chatbot interfaces to voice personalities for products like Amazon’s Echo, which leaves design in something of a conundrum: what does it really mean to ‘design’ something you can’t see or touch?

Digital assistant, Sandy van Helden

A great example of this, of course, is Apple’s voice assistant Siri. When it debuted in October 2011, Siri was something out of a sci-fi movie: you could push a button and have a magic assistant in your phone do seemingly everything for you.

When you’re designing something that’s intended to be used by voice, there’s very little to work with. Sure, there are fundamental elements: a microphone button and sound waves on screen to show it hears you — but the rest is up to your imagination.

A new breed of design is pervasive in this world, one where everything isn’t quite as it seems.

Using the interface to imply options

Siri’s on-screen interface might be how you think about it, but how does it make you feel? Does Siri make you feel like you can ask it anything, or like it’s too tricky to use? Does it make you feel like the answers are there if you just get the question right?

Apple Siri

According to the original team behind Siri, after Apple acquired the company and integrated the service into iOS, it had “hidden Siri’s capabilities with a design that over-promises on what Siri can deliver,” choosing instead to mitigate user disappointment by offering a short list of phrases that actually worked.

In iOS 10, Siri’s layout largely leaves it to your imagination, offering a simple microphone and a help button. But the real work came down to making Siri sound human rather than robotic; otherwise, you’d treat it like one.

In a deep-dive on how assistant technology got its voice, The Verge wrote that “early robotic voices sounded robotic because they were totally robotic,” which shaped the way people tried to interact with them. Siri, however, got human by… starting out as one:

“After the script is recorded with a live voice actor, a tedious process that can take months, the really hard work begins. Words and sentences are analyzed, catalogued, and tagged in a big database, a complicated job involving a team of dedicated linguists, as well as proprietary linguistic software.

When that’s complete, Nuance’s text-to-speech engine can look for just the right bits of recorded sound, and combine those with other bits of recorded sound on the fly, creating words and phrases that the actor may have never actually uttered, but that sound a lot like the actor talking, because technically it is the actor’s voice.”
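The technique the quote describes is known as unit-selection (or concatenative) synthesis: recordings are chopped into tagged units, and the engine stitches the best-matching units back together at runtime. Here’s a toy sketch of the selection step; the whole-word units, database entries and file names are all illustrative simplifications, since real engines like Nuance’s work with sub-word units and score how well neighboring units join:

```typescript
// Toy unit-selection synthesis: pick recorded units that cover the target
// phrase, then concatenate them. Real engines use sub-word units and
// minimize a "join cost" between neighbors; this sketch uses whole words.

interface RecordedUnit {
  text: string;       // the word as tagged by the linguists
  clipUrl: string;    // a pointer to the actor's recorded audio
  durationMs: number;
}

// An invented database built from the voice actor's recording sessions.
const unitDatabase = new Map<string, RecordedUnit[]>([
  ["hello", [{ text: "hello", clipUrl: "units/hello_01.wav", durationMs: 420 }]],
  ["world", [{ text: "world", clipUrl: "units/world_03.wav", durationMs: 380 }]],
]);

function selectUnits(phrase: string): RecordedUnit[] {
  return phrase
    .toLowerCase()
    .split(/\s+/)
    .map((word) => {
      const candidates = unitDatabase.get(word);
      if (!candidates) throw new Error(`No recording covers "${word}"`);
      // A real engine would score every candidate; we take the first.
      return candidates[0];
    });
}

// The actor may never have said "hello world" as a phrase, but every
// sound in the output is still the actor's voice.
console.log(selectUnits("Hello world").map((u) => u.clipUrl));
```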

Siri was absolutely groundbreaking when it was released. Amid all the recent criticism, it’s easy to forget that we have a tiny brain in our pocket that we can talk to and ask questions about the universe around us.

Designing the invisible

If you think Siri was a challenge to design for, take a look at Amazon Echo, the e-commerce giant’s first foray into a digital assistant that lives entirely inside a speaker on your countertop.

Amazon Echo

Echo, unlike Siri, doesn’t exist on your phone. It has no fancy interface to get used to, no screen, and just one button: it’s the antithesis of how you’d normally familiarize users with a product.

For Amazon, however, Echo has been a boon. It turns out that thousands of people love the idea of having a speaker you can talk to that just stays in one place — and it even plays music.

Echo doesn’t need to look pretty, have a ton of bells and whistles or even do much: it just needs to hear exactly what you said, word for word, and understand it perfectly.

Giving a robot personality?

Amazon hasn’t said much publicly about how it designed Echo, but when it was first released it was fairly utilitarian: you’d say “Alexa,” then your query; the blue light on top would spin; then you’d get the answer. It was robotic, but it worked well enough.
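That original loop is simple enough to sketch. To be clear, none of the helpers below are real Amazon APIs; they’re hypothetical stand-ins for on-device audio capture, the LED ring and the speaker, just to make the interaction model concrete:

```typescript
// A hypothetical sketch of Echo's original interaction loop. These
// declarations stand in for device capabilities Amazon hasn't published.
declare function listenForWakeWord(word: string): Promise<void>;
declare function captureQuery(): Promise<string>; // records until silence
declare function setLight(spinning: boolean): void; // the blue LED ring
declare function speak(answer: string): Promise<void>;

async function interactionLoop(): Promise<void> {
  while (true) {
    await listenForWakeWord("Alexa"); // always-on, runs locally
    setLight(true); // acknowledge the user immediately

    const query = await captureQuery();

    // The heavy lifting happens in the cloud, not on the device.
    // This endpoint is invented for illustration.
    const res = await fetch("https://assistant.example.com/answer", {
      method: "POST",
      body: JSON.stringify({ query }),
    });
    const { answer } = await res.json();

    setLight(false);
    await speak(answer);
  }
}
```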

Digital assistant, Sandy van Helden

Recently, sensing that people really wanted an assistant like Siri, Amazon has started infusing some personality, according to The Wall Street Journal. That means an entire team of people loading in witty answers, funny jokes and other elements to fill the gaps.

“It’s so funny because I think ‘Oh wow, I am talking to a machine,’ but it doesn’t feel that way,” says Ms. Martin-Wood, who lives near Birmingham, Ala. “It is a personality. There’s just no getting around it, it does not feel artificial in the least.”

Google, on the other hand, has hired writers from companies like The Onion and Pixar to help train its own voice assistant, fittingly called Google Assistant. The company’s main goal is to build a relationship with the user, and it’ll use skilled writers, as well as learning your tone of voice, to help.

Another key part of the voice assistant experience is latency: how fast the assistant responds once you finish speaking. If Alexa, Siri or Google is slow to reply to your query, you’ll probably think it’s dumb.

Alongside the rest of the experience, making the voice assistant reply within the right amount of time is paramount to how you feel about it as a user.

This is a hard challenge, particularly because much of the heavy lifting in all voice assistants is performed in the cloud, where powerful computers can crunch the data required.

Google Dots in motion

That means the connection may be slow or limited, and you’ve got to account for how long it might take to get the answer back from the server. Amazon mitigates this with a cute LED animation, and Google uses bouncing dots on screen, but Apple’s technique makes you aware that Siri at least got what you said, despite not having the answer yet.

As you’re asking Siri your query, it’ll instantly drop the words it recognizes, in order, onto the screen as it starts to understand them.

Recognizing your actual words can happen much faster than answering them, since working out the answer requires heavier crunching of your intent.

Because you can see what Siri is thinking, it feels faster. It’s really a magician’s trick: Siri has improved in speed, but it’s not that fast yet, so showing its progress word by word provides the illusion of speed.
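You can reproduce the same trick on the web with the browser’s SpeechRecognition API (still shipped prefixed as webkitSpeechRecognition in Chrome): request interim results and paint each partial hypothesis the moment it arrives. A minimal sketch, assuming a #transcript element on the page and a hypothetical fetchAnswer helper for the slow part:

```typescript
// Show words as they're recognized, before the final answer exists:
// the same illusion-of-speed trick described above.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.interimResults = true; // deliver hypotheses before they're final
recognition.continuous = false;    // stop after one utterance

const transcriptEl = document.getElementById("transcript")!;

recognition.onresult = (event: any) => {
  let text = "";
  for (let i = 0; i < event.results.length; i++) {
    text += event.results[i][0].transcript;
  }
  // The words land on screen in order, as they're understood.
  transcriptEl.textContent = text;

  const last = event.results[event.results.length - 1];
  if (last.isFinal) {
    fetchAnswer(text); // the slow part; the user already feels heard
  }
};

recognition.start();

// Hypothetical helper: ship the final transcript off to be answered.
async function fetchAnswer(query: string): Promise<void> {
  /* ...round-trip to a server, then render the answer... */
}
```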

Motion, context and more

These are only a few examples of the new types of interfaces companies are faced with designing. How do you design tone of voice into a messaging app? Without audio, is there a way to imply the user’s intended tone right inside a message bubble?

Apple iMessage animations

In iOS 10, Apple uses a combination of animation, text size and other effects to convey this in iMessage. It doesn’t assume you want to use the effects, but offers them as an additional way to express something a flat message couldn’t.

Motion or interaction design is becoming just as important as graphic design. Motion can help tell a story better than a plain interface ever could — if you need to educate your users about the actions they’re performing, or where something lives, motion can be the key to reinforcing a feeling of context.

Even better, it can help hide the fact that your app is a little slow.
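A minimal web version of that trick: the moment a request starts, loop a shimmer animation on a placeholder so the wait reads as “working” rather than “broken,” then fade the real content in. The URL and element IDs below are invented; element.animate() is the standard Web Animations API:

```typescript
// Cover network latency with motion: animate a skeleton placeholder
// while the data loads, then swap in the real content. The endpoint
// and element IDs are placeholders for illustration.
async function loadWithShimmer(url: string): Promise<void> {
  const skeleton = document.getElementById("skeleton")!;
  const content = document.getElementById("content")!;

  // A gentle pulse says "working on it": perceived speed, not real speed.
  const shimmer = skeleton.animate(
    [{ opacity: 0.4 }, { opacity: 1 }, { opacity: 0.4 }],
    { duration: 1200, iterations: Infinity }
  );

  const data = await (await fetch(url)).json();

  shimmer.cancel();
  skeleton.hidden = true;
  content.textContent = data.title; // whatever the response contains

  // Fade the content in so the swap feels continuous, not jarring.
  content.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 200 });
}

loadWithShimmer("/api/feed");
```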

“Without animation, there’s no conversation.”

For years, the way an app animated interactions or transitions was generally an afterthought, but now tools like Framer, Atomic, Adobe XD and Origami help designers plan out how a design will feel before a single line of code is written.

There’s one more element that can drastically influence how your users feel: time. Adrian Zumbrunnen, a designer at Google, wrote that “time transcends animation or any other topic” and can work for as well as against you.

Here’s an example. Say you text someone “Hey, it was nice to meet you” — what kind of response do you expect? According to Zumbrunnen, no matter your preference, “one thing is for sure: response time will affect your interpretation” of the words.

“Without going into too much detail, anticipation and delay greatly impact our appreciation and the way we communicate.”
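Conversational products apply this deliberately: a bot that replies in ten milliseconds feels canned, and one that stalls feels broken, so many show a typing indicator and pace the reply roughly to its length. Here’s a sketch of that pacing logic; the constants are illustrative guesses and the UI helpers are hypothetical, not values or APIs from any real product:

```typescript
// Pace a bot's reply so the timing feels conversational. The constants
// below are illustrative guesses, not numbers from any real product.
const MS_PER_CHARACTER = 30; // a rough human "typing speed"
const MIN_DELAY_MS = 600;    // never reply instantly: it feels canned
const MAX_DELAY_MS = 3000;   // never stall so long it feels broken

function replyDelay(reply: string): number {
  const typingTime = reply.length * MS_PER_CHARACTER;
  return Math.min(MAX_DELAY_MS, Math.max(MIN_DELAY_MS, typingTime));
}

// Hypothetical UI helpers, standing in for whatever the app actually uses.
declare function showTypingIndicator(): void;
declare function hideTypingIndicator(): void;
declare function appendMessageBubble(text: string): void;

async function sendBotReply(reply: string): Promise<void> {
  showTypingIndicator(); // anticipation...
  await new Promise((resolve) => setTimeout(resolve, replyDelay(reply)));
  hideTypingIndicator();
  appendMessageBubble(reply); // ...then the payoff
}
```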

Designers have spent the last few years searching for the ‘next big design trend’ to follow flat interfaces, but what’s incredible is that it’s already here, and it basically looks like thin air: it’s everything in between the parts you can actually see.

A bold new future

Where does this leave the designer? In the old world, it was actually pretty straightforward to design an interface, since it was usually entirely visual.

In 2016, there are endless permutations to consider: how does this make the user feel? Can we help them express themselves better? Does an animation make them react a certain way, or explain the interface?

We’re quickly moving toward a future where touch interactions give way to natural ones. Soon, you’ll likely be talking to your devices more than tapping on them, and that’s a brave new user interface paradigm.

Design, rather than just being about the visual, is now about much more. Designing the future is hard, because the concept of a conversation comes with many expectations, and if you miss any of them it can feel like something went wrong.

This future still has no rules, so it’s up to us to define them. You now have the power to change how people feel interacting with your product on almost every level. It’s time to start thinking outside the rectangle in your hand, or the message bubble on your screen.

This article is a part of the “Do you speak human?” lab — enabled by SPACE10 to explore conversational interfaces and AI.

Make sure you dive into the entire publication.

