Design for our Senses, not our Devices

Silka Miesnieks
Adobe Remix — Vasjen Katro/Baugasm

I recently gave two talks about Sensory Design, at the Business of Design Conference and the New Context Conference. I'd like to combine those messages here to share the potential positive impact Sensory Design can have on humanity, if it is built well and acts well.

Let’s talk techie…

Today we can extract and use data from Machine Senses like our AR cameras, our voice-controlled speakers, bio-data and, in the future, haptics. Combining this 'sensory' data with the power of Artificial Intelligence (AI) is so powerful that it has produced a new computer platform that functions beyond the screen. We call it Spatial Computing.

Robin Hunicke, an influential designer and a positive force in games, said at the recent Leap Con,

Spatial Computing is the juiciest challenge that we’ve had to date.

I have to completely agree with her. But before I dig into 'juicy design challenges', let's get on the same page about Spatial Computing, Machine Senses, and Artificial Intelligence versus Machine Learning.

Spatial Computing is the next computing platform, like Mobile Computing and Desktop Computing before it. Spatial Computing operates in the spaces around us by using Machine Senses as input. Previously, touch-screens and mice were our sole forms of input. Machine Senses try to mimic our human senses and behavior so that computing is intuitive for every human to use, regardless of skill, ability, or cultural background. Traditional computing platforms have required us to think like a computer; now we expect Spatial Computing to think like a human (but it can't ever be human… an important difference!).

Machine Senses are everywhere. They are the AR cameras in our phones that mimic the sense of sight with Computer Vision. Mics and speakers, paired with voice recognition, simulate the sense of hearing and the ability to speak. Our AR mobile devices and AR glasses are outfitted with sensors to imitate the sense of proprioception, our understanding of the space around us. Machine Learning with cameras can augment our ability to understand body language and the objects around us. Voice recognition and mobile AR have reached a point of mass adoption. So when AR glasses, haptics, and ML models for sensory data have also reached this tipping point, the 'input problem' will be solved and Spatial Computing will become the next computing platform.

Artificial Intelligence (AI) versus Machine Learning (ML)
Artificially Intelligent machines would think on their own and find new ways of doing things. That is the goal, but no machine is intelligent on its own yet. Machine Learning (ML) and its significantly smarter younger sibling, Deep Learning, provide a way for machines to interpret massive amounts of data in new and amazing ways. Machines today can learn, but not 'completely' understand.

When it comes to Spatial Computing, ML can act a bit like the human nervous system. It takes data from sensors (cameras/eyes, mics/ears, movement/IMU) and collects, interprets, and responds through a complex ML 'nervous system'. If you bump your toe, the whole body jumps back. If you can't read Dutch, your camera can translate it for you; if you can't hear well, your speaker could amplify one voice among many or translate speech to text; if your car goes through a pothole, the local council could know about it immediately; an industrial designer could know whether their toy was being used or left in the toybox, leading to better toys and reduced landfill.
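To make the nervous-system analogy concrete, here is a minimal sketch of that collect, interpret, respond loop in Python. Everything in it is hypothetical: the sensor name, the threshold, and the pothole heuristic are placeholders for the trained ML models a real system would use.

# A minimal sense -> interpret -> respond loop. All sensor names and
# thresholds are hypothetical placeholders, not a real product's pipeline.

def collect(sensors):
    """Gather one reading from each available 'machine sense'."""
    return {name: read() for name, read in sensors.items()}

def interpret(readings):
    """Turn raw readings into events. A real system would run trained ML
    models here; this uses a hard-coded threshold purely for illustration."""
    events = []
    if readings.get("imu_vertical_accel", 0.0) > 2.5:  # a sudden jolt
        events.append("pothole_detected")
    return events

def respond(events):
    """React to interpreted events, like a reflex."""
    for event in events:
        if event == "pothole_detected":
            print("Reporting road damage to the local council...")

# One tick of the loop, fed by a fake IMU reading.
sensors = {"imu_vertical_accel": lambda: 3.1}
respond(interpret(collect(sensors)))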

ML and historical data remember and understand the past. We are already seeing our sentences being finished for us in Gmail based on our historical writing style; one day my kids might get to experience my life when they are my age; maybe we could 'see' a predicted future of our inventions based on historical events.

Josh Lovejoy encourages us to use ML for good, not bad, by "respecting each person as the author of their own story." (See 'Approaches to AI Ethics' in this article for more inspiration.)

Tech talk over, let’s talk about all things design.


I’m Silka Miesnieks, Head of Emerging Design at Adobe. I work with several teams across Adobe to bring emerging technologies into products and services to solve real human and societal challenges.

From a very young age, my mother thought I would have one of three careers: a psychologist because of my love of people, an artist because of my passion for making, or a jailbird because my curious nature was always getting me into trouble.

People, creativity, and curiosity have instead driven me to explore what has not yet been discovered in the world of virtual and physical design. I have always worked in the undefined spaces of emerging technologies.

Nine years ago I said yes to leaving our comfortable home in Australia and coming to America with my husband and two young kids to set up a startup in Silicon Valley. We wanted to build the operating system for the world using Augmented Reality (AR). Yes, this was no small vision. And even then, I saw a future where information was not locked in rectangles but flowed freely in and through the world around us. I led product design while my co-founder, Matt Miesnieks, was CEO and CTO.

Why care

Tea Uglow is a creative director at Google, and her perspective on Spatial Computing has deeply influenced me and my teams by helping us think about a better future.

So, with your permission, I'd like to take you on an imaginary journey that Tea showed us. Close your eyes for just a minute. Imagine your happy place; we all have one, even if it's a fantasy. For me, this place is on the beach in Australia with my friends around me, the sun shining, the feel of the salt water on my toes, and the sound of a barbecue sizzling. This place makes me feel happy because it's natural, it's simple, and I'm connected to friends. When I sit in front of my computer or spend too much time looking at my mobile screen, I don't feel very happy. I don't feel connected. But after a day in my happy place, I start to miss the information and other connections I get through my phone. But I don't miss my phone. My phone doesn't make me happy. So, as a designer, I am interested in how we access information in a way that feels natural, is simple, and can make us happy.

And fortunately, many tech companies feel the same way. They have invested a lot of time and money in artificial intelligence systems and sensing technologies, because, I believe, we all want better solutions and a happier future.

So who are we building this future for?

For people like this: Generation Z. "GenZ will comprise 32 percent of the global population of 7.7 billion in 2019." Today they are aged 8 to 23 years old. A Gartner report says that they have more devices than previous generations. In America they have Amazon Alexas in their homes and AI chips in their phones, and in 10 years they might have AR glasses in their pockets.

The kids pictured above are 9 and 10 years old, and like typical GenZs their identity is not based on race or gender but on meaningful identities that shift as they do. They express their personality fluidly and continuously. So when asked, "Do you think you'll marry a girl or a boy?" they didn't think it was a strange question. One said "a girl" and the other said, "I'm working it out." Their answers were not awkward or uncomfortable because they are not binary thinkers.

Fluid-Identities

At Adobe we're seeing brands shift from creating self-creation-type experiences for YouTube or Instagram to experiences that allow for fluid identities, using augmented reality face masks in Snapchat and Facebook Messenger.

This is the kind of shift we're expecting with Spatial Computing. We're moving from a place where information was held on screens to a world where creative expression can flow freely into the world around us with AR powered by AI. Future thinkers will need to be able to navigate through the chaos while building connections, which is why creativity is a core skill for future generations.

Creativity

At Adobe we need to make creative expression simpler, more natural, and less tied to screens. We are democratizing spatial design tools. Real-time animation, for example, is a core skill needed in spatial computing, but today the difficulty of animation leaves it to professionals with access to specialized tools.

So my team built a tool that lets you transfer the movement of a bird flying or a friend dancing just by capturing the motion through your phone camera and instantly applying it to a 3D object or 2D design. It was so exciting to see the wonder on people's faces as they used the magic of sensing technologies.
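Our tool itself isn't shown here, but as a rough illustration of the idea, an open-source pose estimator such as MediaPipe can pull joint positions out of ordinary phone video, and the captured keyframes could then drive a 2D design or 3D rig. The video filename and the retargeting step are assumptions made for this sketch.

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("friend_dancing.mp4")  # placeholder clip from a phone
keyframes = []

with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Keep the normalized (x, y) of every joint for this frame.
            keyframes.append([(lm.x, lm.y) for lm in results.pose_landmarks.landmark])

cap.release()
# `keyframes` could now be retargeted onto a character rig or 2D artwork,
# e.g. by mapping wrist and ankle landmarks to a puppet's control points.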

Members of GenZ want to create collaboratively in real-time. They also expect to create with anything, anywhere, just by looking at it or playing with it.

Learning

Today, many kids can explore the world around them from their classrooms using mobile AR, or ask Google to solve their math homework… yep, my kids do that. By the time my guys reach the workforce, they'll have AR-enabled glasses or projections that could attach to objects so that they can use both hands to learn the guitar… a bit like a "wonderful mechanical YouTube," as Tea Uglow calls it.

Sensing technologies like augmented reality, speech recognition on mobile, IoT, and smartwatches bring a lot of design challenges. At Adobe we have more questions than answers right now, and I think that's true of the industry as a whole. So I thought a good place to start would be to share some things we've already learned, and then the things we don't know.


Four Shifts in Design

Let’s talk about some fundamental shifts in design we know today.

1. 3D is Legit

3D is 'legit,' as a GenZ might say. They expect to be able to create using their voice, gestures, cameras, and the environment around them as input. Our traditional inputs (keyboards, mice, and 2D screens) have made software applications more complicated to navigate, which makes it harder to tap into our creative superpowers.

For example, I’m dyslexic, so transferring my thoughts onto paper is incredibly frustrating. My creative flow is lost, and I become speechless. I wrote this piece using voice-to-text technology. It’s not perfect, but it helps to get my words down and my voice out there.
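The piece doesn't name the dictation tool I used, but to show how low the barrier has become, here is a hedged sketch of a basic voice-to-text flow using the open-source SpeechRecognition package in Python.

import speech_recognition as sr  # pip install SpeechRecognition (mic input also needs PyAudio)

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
    print("Speak now...")
    audio = recognizer.listen(source)

try:
    # Uses a free web speech API, so it needs an internet connection.
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, I couldn't make that out.")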

Sensing technologies powered by AI have the potential to help people with many different abilities.

We wanted to explore this idea further. So, who better to explore ideas with than artists? Here, Can Büyükberber, an artist we invited to our Adobe AR Residency program, shares his thoughts on spatial computing and his work.


2. Uncontrolled User Interface

Secondly, we need to understand that design elements placed in the world cannot be controlled. UI designs have to adapt to the lighting conditions, the dimensions, and the context of the surrounding environment. Additionally, we don't control the camera the viewer uses, nor can we prescribe a viewpoint.
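One small way this plays out in practice: the UI can sample the camera feed and adapt its treatment to the ambient light. The thresholds and theme names below are invented for illustration; they are not values from any shipping product.

import cv2

def pick_ui_theme(frame, dark_threshold=80, bright_threshold=170):
    """Choose a UI treatment from the frame's average luminance.
    Thresholds are illustrative, not tuned production values."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    luminance = gray.mean()
    if luminance < dark_threshold:
        return "glow"      # dim room: emissive, high-contrast elements
    if luminance > bright_threshold:
        return "solid"     # direct sunlight: opaque panels, bold text
    return "standard"

cap = cv2.VideoCapture(0)  # whatever camera the viewer happens to be holding
ok, frame = cap.read()
cap.release()
if ok:
    print("Adapting UI to the environment:", pick_ui_theme(frame))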

Stefano Corazza, Head of AR and the fearless product leader of Adobe's first AR design tool, Project Aero, said: "AR is forcing creators to give some thought to the viewer's sense of agency (or self-directed choices), and this fosters more empathy towards the viewer."

Cameras are Uncontrollable. When we showcased Aero at Apple WWDC 2018 a few weeks ago, I instantly understood what he meant. Giving the viewer control over the camera gives her a role to play. She becomes part-creator. I saw her assume the role of cinematographer the moment she moved the AR-powered camera through a layered 2D artwork placed virtually on stage.

Physical movement is Uncontrollable. Here are Zach Lieberman and Molmol Kuo from our Adobe AR Residency program exploring new ways typography can interact with the viewer to create new narratives. We can see body movement, depth of field, and proximity being used as narrative elements.


3. Physical by Nature

Thirdly, digital designs placed in the world are expected to act physically. This is our new user interface standard for spatial design. We expect a virtual mug to smash just like a physical one. We can break these rules, as long as the user doesn't think the app is broken too.
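As a toy example of 'physical by nature', here is a sketch of a dropped virtual object that falls under gravity and shatters if it hits the detected floor fast enough. The constants are arbitrary; a real AR engine would use its physics system and the planes it has detected.

# Toy physics for a virtual object dropped onto a detected floor plane.
GRAVITY = -9.8       # m/s^2
FLOOR_Y = 0.0        # height of the detected surface, in metres
TIME_STEP = 1 / 60   # one frame at 60 fps

def simulate_drop(height, shatter_speed=3.0):
    y, velocity = height, 0.0
    while y > FLOOR_Y:
        velocity += GRAVITY * TIME_STEP
        y = max(FLOOR_Y, y + velocity * TIME_STEP)
    # Impact speed decides whether the object smashes like its physical twin.
    return "smash" if abs(velocity) > shatter_speed else "thud"

print(simulate_drop(height=1.2))  # a mug knocked off a virtual table -> "smash"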

Just as screen designs are triggered by a mouse click or a screen tap, designs in the world are triggered by our senses.

Take voice: voice is physical. Four weeks ago we took one small step towards spatial design by adding voice to our screen prototyping tool, XD. Khoi Vinh demonstrated the voice feature on stage at Adobe MAX and went on to explain the vision he sees for the future of voice.

Faces are Physical. Zach Lieberman and Molmol Kuo propose using AR facial tracking as another input, as an instrument that can be played. Blinking eyes could trigger animations and mouth movements could generate music.
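In that spirit, a blink trigger can be surprisingly simple once a face tracker hands you eye landmarks. The landmark layout and threshold below are hypothetical; any tracker that returns eye-corner and eyelid points could feed this.

def eye_aspect_ratio(eye):
    """eye = [outer_corner, top_lid, bottom_lid, inner_corner] as (x, y) pairs."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = eye
    vertical = abs(y1 - y2)            # eyelid opening
    horizontal = abs(x3 - x0) or 1e-6  # eye width, guarded against zero
    return vertical / horizontal

def on_blink(eye, threshold=0.15):
    """Fire the animation when the eye is nearly closed."""
    if eye_aspect_ratio(eye) < threshold:
        print("Blink detected: trigger the animation")

# An almost-closed eye, in normalized image coordinates.
on_blink([(0.30, 0.500), (0.35, 0.505), (0.35, 0.495), (0.40, 0.500)])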


4. Multi-Sensory is Robust

Lastly, and most importantly, tools need to be multi-sensory. As XD becomes more multi-sensory, the rest of our tools will need to evolve along with it. By allowing our tools to take in and combine different senses, we will enable products to become more robust and better able to understand user intent.
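A rough sketch of what 'combining senses' can mean in code: several weak signals, each from a different machine sense, fused into one confidence score for the same user intent. The modalities, weights, and scores are invented purely for illustration.

def fuse_intent(signals, weights):
    """Weighted average of per-sense confidences for the same intent."""
    total = sum(weights.values())
    return sum(signals[sense] * w for sense, w in weights.items()) / total

signals = {
    "voice": 0.4,    # the recognizer half-heard "put it there"
    "gesture": 0.7,  # hand tracking saw a pointing gesture
    "gaze": 0.8,     # eye tracking shows the user looking at the table
}
weights = {"voice": 1.0, "gesture": 1.5, "gaze": 1.0}

confidence = fuse_intent(signals, weights)
if confidence > 0.6:
    print(f"Place the object on the table (confidence {confidence:.2f})")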

Last month I went to see Massive Attack in concert, an event that engaged all of my senses. It brought me to tears, and the 90-minute experience gave me a deeper understanding of Massive Attack's message than I had been able to glean from more than 20 years of listening to their albums. I believe this was because all my senses were engaged, allowing me to understand and feel the message in new and concrete ways.

So we know good design in the world needs to be 3D, Uncontrolled, Physical, and Multi-Sensory.

What we don’t yet know is how our machine and human senses can work together to build this happy future.


Sensory Design, an extended Design Language

So we decided to bravely jump into the unknown and try to figure out a new language for designing with the senses. We've gathered a group of designers, cognitive scientists, entrepreneurs, and engineers to work out a new language we can all speak. We call it the Sensory Design Language.

First, we looked at different design languages that have worked in the past. Material Design is an excellent example of a language that works well for Android UI. We want to expand that kind of language to incorporate all our human senses and machine senses.

Sensory Design Language

Screen design traditionally relies on our mind's cognitive ability to perform a task. Sensory Design also relies on our body's cognitive abilities to perform a task.

But this is not a new concept. We already know that we have excellent spatial memory thanks to our sense of proprioception, our understanding of the space around us. I bet you could be blindfolded and still walk through your house and open the fridge. We've already seen that virtual reality, by drawing on proprioception, is an effective training tool used by many enterprises today.

Psychologists have shown that smiling when you feel sad makes you feel happier. This connection between the brain and the senses is how we understand our world, how we perceive the world. So if we're designing for the senses as well as our mind's cognitive abilities, we are hacking our perception of reality. You could even say Sensory Design is the design of my and your perceived realities. This scares the hell out of me, and I hope it does you, too.

It’s a fantastic opportunity to make the world a better place, but one that comes with great responsibility. So we’ve written three principles to hold ourselves accountable.

Sensory Design Principles

We need to be inspired by good, ethical human-to-human behavior. Designing with artificial intelligence, done right, requires us to understand human behavior in depth.

So the first two people to join our Sensory Design team were cognitive scientists. We’ve had to go back to basics and learn about the universal first principles of human behavior. We also need to understand the differences between constructed societies, cultures and individuals. Lucky for us, there has been a hell of a lot of work done in this field already. We’ve just had to sift through hundreds of research papers to come up with some key starting points.

And secondly, respecting people means respecting their physical and digital privacy, giving them control over the tools we build, and putting their well-being ahead of a pat on the back for ourselves.

Lastly, just as the open-source movement has democratized software development, open design endeavors to democratize good design.

Framework

Next, we drew up a framework to see opportunities and connections.

We broke up our human and machine senses so that we could put them back together in new ways to solve real-world problems. What are some of the problems that Sensory Design can solve that no other medium can? One example is using computer vision and AR to understand sign language, translate it to text, and then back again to sign language. Computer vision can understand facial expressions, and when combined with hand gestures and biometric data, a machine can get some idea of how you're feeling. Artificial intelligence is very good at seeing patterns in massive amounts of sensory data. Organizations are already using this data to help plan cities and tackle climate issues. My hope is that it will one day allow us to understand each other better.

How can a combination of senses and intelligence help us be more empathetic across different cultures and different ways of communicating? Can we give people new abilities, similarly to how voice-to-text has let me express myself more easily despite my dyslexia? We have so many questions and so many opportunities.

One thing we’re studying right now is the creation of closeness. In some ways, technology has connected us and in other ways, it has created isolation. We don’t know why this is. We don’t know how it happened. And we don’t know how to fix it.

We’re looking afresh at how technology can support closeness.

We're starting a study to understand how current forms of communication have brought us together while also creating isolation. Does the limited field of view in video calls reduce our sense of reciprocity? Or is it the lack of eye-to-eye contact in voice conferencing, and does that even matter anymore? Is it that we're not in the same space, sharing a sense of presence with each other? Is that why we're here today? Is it the side conversations that we have after the video calls? Is it all the nonverbal communication, the side glances and hand movements, that gets missed in group video calls? We'll be sharing what we find out in the spirit of open design.


We are the people who are actually building the foundations for this new era of Spatial Computing. It's us: the designers, engineers, cognitive scientists, entrepreneurs, and CEOs. We need to share what we're learning, set high standards, and challenge ourselves to build a good design foundation that acts well and is a little more empathic by nature.


Thank you to Stefanie Hutka, Laura Herman, and Lisa Jamhoury for your work on the Sensory Design Language, and to Matt Miesnieks, Alysha Naples, Avi Bar-Zeev, Tea Uglow, and many others for your insights.

Credit: Vasjen Katro's amazing video work incorporates Adobe's machine learning framework, Adobe Sensei, into his physical forms of creativity. See his creative exploration.

