The unexpected pleasures of interaction: sending messages needs to be a sensual experience
- Thomas Gayno of Spotify draws his roadmap for the future of UX/UI design
- “The future lies in pleasurable interactions — typing on glass isn’t enjoyable, so how could it be better?”
- How long until the GUI becomes history — and what will replace it?
Thomas Gayno works on the cutting edge of UI and UX on a product that has over 100M users — Spotify. As Product Lead, he is tasked with changing the way users interact with the world’s leading music streaming service.
Previously, at Google, his teams helped build products with an international impact, from the controversial (Google Glass) to the whimsical (Androidify) — and they all involved novel user interaction.
He’s spent a decade looking to change how we interact with devices and each other, re-imagining how we use voice, touch and contextual computing to connect people in simpler, more expressive ways.
At TOA, he described how human interaction with devices will change, and what new opportunities are on the horizon — so here are the essential takeaways, and a must-listen stream of his whole talk.
Screens are vanishing and the Graphical User Interface is dying
The point-and-click GUI has been dragged along into a world where it’s not needed as much. Today, we’re looking for quicker, simpler, more human ways to interact: we need a more democratised operating system.
Thomas put it bluntly: “we are entering a world where interaction is on small screens or devices with no screens.” So what should designers bear in mind to be part of this exciting future?
Thomas recommends looking carefully at how behavioural, contextual and spoken possibilities could be applied in novel ways, and always making interaction pleasurable. Speaking is more natural than pointing a mouse; gestures make more sense than navigating a menu.
Make touch enjoyable, not a necessity. How could it be done differently? How could it be fun?
The future lies in pleasurable interactions, says Thomas. Apps like Tapstack and Snapchat turn editing, doodling on and sharing photos into genuinely enjoyable interactions with a piece of glass.
And while this sounds very obvious, compare those to one of the most common interactions with your phone: typing. Typing on a piece of glass is not enjoyable at all, so how could it be better?
Utilise existing natural actions for quick adoption of new ideas
Thomas believes designers should look to normal behaviour and use actions that feel “right.” Pinch-to-zoom feels natural. Swiping between pages on a tablet feels obvious. So when looking forward and inventing new interactions with technology, consider existing actions that we, as animals, all find intuitive.
While Google Glass didn’t quite work as a consumer prospect for a few reasons — not least, it turned out having a camera pointed at you was not a user experience the outside world always wanted — there were plenty of innovative ways to interact with the product.
The wink-to-take-photo feature was a very natural user experience: it’s obvious, it quickly becomes instinctive, and it’s hands-free. And in the future, as augmented reality makes our technology more aware of its surroundings, we’ll be able to use simple gestures to perform complex tasks; researchers have already created headsets that let users communicate with thought alone.
Alexa is getting the plaudits now, but the era of voice input has been here for a while
We have been talking to our tech for a long time, argues Thomas — but one fascinating area for opportunity appears around human-to-human communication in particular. Teenagers and older generations are using voice as a way to avoid typing — whether via voice-to-text or simply sending short voice messages.
Phones are enormously powerful, and we should be taking advantage of this for voice interfaces. 20% of all mobile searches are made by voice, and the emergence of Google Home and Amazon Echo means that the home might be the next frontier of voice search. What would you shout out and search for?
Contextual UI will seamlessly change our relationship with devices
Thomas paints a picture of a near-tomorrow that is a tweaked-today. It’s a world that’s recognisable, but where devices understand a bit better how humans live life, and adjust themselves accordingly.
Imagine a typical workday, starting with a commute. Your phone senses your speed and location, and figures out that you are driving your car towards your office. It automatically chooses to read important work-related messages out loud for you, and knows it should listen for hands-free voice input.
And when you arrive and dive straight into a meeting, it looks at your calendar and knows to silence itself. Maybe it’ll learn your lunch routine and suggest new places to eat 15 minutes before you get hungry.
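Under the hood, moments like these boil down to simple context-to-behaviour rules. Here is a minimal sketch in Python of that idea; every sensor signal, threshold and mode name below is invented purely for illustration, since real phones surface this data through platform-specific APIs such as activity recognition, geofencing and calendar access.

```python
# A toy illustration of contextual UI: map sensed context to behaviour.
# All signals and mode names here are hypothetical, not a real phone API.

from dataclasses import dataclass
from datetime import time


@dataclass
class Context:
    speed_kmh: float          # e.g. from GPS / activity recognition
    heading_to_work: bool     # location trend vs. a saved "office" place
    in_calendar_event: bool   # from the user's calendar
    local_time: time


def choose_ui_mode(ctx: Context) -> str:
    """Pick a UI behaviour from the current context."""
    if ctx.speed_kmh > 20 and ctx.heading_to_work:
        # Likely driving to the office: go hands-free.
        return "read-messages-aloud, listen-for-voice"
    if ctx.in_calendar_event:
        # In a meeting: stay out of the way.
        return "silence-notifications"
    if time(11, 45) <= ctx.local_time <= time(12, 0):
        # A learned lunch routine: surface suggestions just beforehand.
        return "suggest-lunch-places"
    return "default"


# Example: the morning commute at 8:40.
ctx = Context(speed_kmh=55.0, heading_to_work=True,
              in_calendar_event=False, local_time=time(8, 40))
print(choose_ui_mode(ctx))  # -> "read-messages-aloud, listen-for-voice"
```

In practice the interesting work is in inferring those context signals reliably (and learning routines over time) rather than in the rules themselves, but the shape of the collaboration is the same: sense, infer, adapt.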
These are all “user interface moments” which are more nuanced than tapping and swiping: they’re examples of how UX and UI are becoming part of a holistic, smart and instinctive technology collaboration that augments and shapes our lives.
This conversation has been edited for clarity and length. Previously on TOA.life, we explored Thomas’s vision for tomorrow’s post-screen world.
If you enjoyed this article, please consider hitting the ♥︎ button below to help share it with other people who’d be interested.
Get TOA.life in your inbox — and read more from TOA’s network of thought-leaders:
Sign up for the TOA.life newsletter
Conducting the orchestra: the secret behind Airbnb’s groundbreaking user experience, from their Director of Experience Design, Katie Dill.
Bye bye, bank manager: trust the blockchain, remove the middleman: Shermin Voshmgir of BlockchainHub explains the ‘trustless trust’ myth