What does AI look like, by Sandy van Helden

The success of conversational interfaces relies on designers


Here’s a fun fact:

Today, one in five Google search queries on Android devices is made by voice.

If you ask us, that number is only going to rise.

We can speak around 140 words per minute, but in the same amount of time we type only about 40. That is probably the clearest reason why conversational interfaces in the future could be voice-based only, rather than relying on typing and reading as we do today.

But if something doesn’t have a face, if we can’t look at it as a concept or a product—how do we then talk about it?

In order to talk about something, we also need to create something to look at. Give it a name, give it a face.

Or do we?

Conversational interfaces are well on their way to changing our perception of what a user interface is today.

We are used to controlling our devices by pushing buttons. But in the very near future, interactions with devices and services will most likely be controlled by our shining voices, not our clumsy fingers.

But if artificial intelligence and conversational interfaces are to be successful, we need design to solve a great challenge:

If a technology isn’t tangible, but only audible, how do we visualize it?

What does artificial intelligence look like and how does it behave in different situations? And probably even more importantly: Does it even have to be visualized?

It’s time for designers to come into play.

Today, if you google ‘artificial intelligence’ you’ll be met by this:

Futuristic visualizations of what we once imagined AI could look like. A bit old-fashioned, right? They don’t really say anything about where we are heading today, or tomorrow.

Images from popular culture are often used to illustrate artificial intelligence. Two examples are screenshots from the films Ex Machina and Her:

Ex Machina (2014)
Her (2013)

If we dive a bit deeper into the pond of artificial intelligence and look at conversational interfaces, the first visual example we stumble upon is often a visualization of services running on Facebook Messenger:

Thanks to artificial intelligence connecting the dots, you are now able to request rides directly in Facebook Messenger conversations.

Simple screenshots from Messenger are strong storytellers because more than a billion people use the service and recognize its universe. The same goes for Siri, Apple’s virtual assistant:

We could come up with a few more examples, but you get the idea: We are not used to visualizing artificial intelligence because it is still rather new. And this is where it gets interesting, because what does it even look like?

Should it even look like something, or just remain individual conceptions in our minds?

For designers, the opportunity to give a face and a voice to artificial intelligence is probably the toughest and most interesting design challenge of the last 50 years.

The internet lacks so much visual material related to conversational interfaces that this was one of the better images to show here. Not impressive, huh? (source)

In the blog post Design is a conversation, Intercom’s Director of Product Design, Emmet Connolly, argues that

“…the history of personal computing is best described as the continual removal of layers of abstraction between machines and people (…) the next step is for machines to extend and adapt themselves to how we naturally communicate.”

If conversational interfaces are going to be dominated by voice, all physical interactions could become obsolete, replaced by nothing more than our voice and the machine’s ‘ear’ performing actions.

Amazon Echo.

In 2015, Amazon launched Echo and got all of us to befriend their new voice-based assistant Alexa. Amazon Echo is capable of voice interaction, music playback, making to-do lists and much more.

Recently, Google launched Home, a voice-based unit that is clearly aimed at taking on Amazon Echo.

Google Home.

Aside from the obvious design of the physical product, the technology behind both Amazon Echo and Google Home has no real visual interface.

There’s not much to look at, nothing to poke around inside of, nothing to scroll through, and no clear boundaries on what it can do.

That is why we need to design around conversational interfaces, and consider all aspects in which design can play a role.

The architecture of a sentence: In a voice-based future, tone of voice becomes an important discipline to master for designers, copywriters and engineers. (source)

As a part of Do you speak human?, we are now setting out to explore how design can add value to conversational interfaces that might or might not be based solely on voice.

What if, ultimately, visual identities as we know them will be replaced by audible identities?

We don’t know the answer yet. But we know how we are going to get it.


We are always curious to hear from designers, copywriters, engineers and everyone in between who have something to contribute. Send us an email or hit us up on Twitter.

It’s time to start the conversation. Are you in?

Help us! Please tap or click “♥︎” to help spread the word about the exploration to others.

Do you speak human? is enabled by SPACE10.
