We’ve Mastered the Screen, What’s Next?

By Blake Hudelson, Interaction Designer — Method San Francisco

I feel the designer’s role has changed in recent years from one of creating beautiful forms or clear identification for brands, to one where the designer visualizes and awakens the hidden possibilities of an industry.
– Kenya Hara, Muji Design Director

I grew up in the 80s and 90s watching personal computers become the first technological platform to disrupt a society within a person’s lifespan. Like most people at that time, I had no idea what a big deal this was.

Slow internet or no internet at all, I was still hooked.

Once computers took root in our daily lives, mobile phones became the second era of computing. These new pocket-sized supercomputers let us watch puppy videos anytime and anywhere — what could be better!

Image: Melvin Galapon

Now we’re at another juncture, where voice, gesture, and haptic-based technologies are opening up new ways to interface with the world. This third paradigm is especially interesting for me because I spend many of my waking hours using and designing for screen-based interactions.

I’m not much of a gambler, but I would be willing to bet all my chips that this third platform revolution will go well beyond the glossy screens that we’ve grown so accustomed to.

Embracing Our Bodies

Brian Eno, the father of ambient music, is surprisingly dismissive of many of the digital instruments used to make electronic music. Because most of these instruments are operated only with our fingers, Eno considers them an obstacle to creating new music.

According to Eno, “The trouble with computers is there is not enough Africa in them.”

Translation? Interfacing with just sliders and knobs is like dancing with only your fingers; the goal should be to engage your whole body.

Looking, Sensing, Feeling

As humans evolve — and as the way we work and play changes rapidly — we need technologies that are increasingly fluid and flexible. Connected home devices, smart clothing, augmented reality glasses, and fitness trackers are leaving the touchscreen behind in favor of better battery life and more intuitive interactions.

Tiny accelerometers, cameras, and microphones are liberating us from the tyranny of the keyboard, helping us be more expressive, more human. Swipe your sleeve to turn on your music, wave your hand to dim the lights, or speak out loud to search the internet.


Last week my beloved Beats earbuds broke. Rather than replace them with the same pair, I decided to try the new Apple AirPods. I assumed they were just another set of wireless earbuds with better battery life; I had no idea how much they would change my daily behavior.

What the AirPods have done is seamlessly put an AI-powered voice assistant in my ears. Anytime I want to know anything about the world, I tap twice on the AirPods and simply ask Siri. My phone never leaves my bag. Magic.

Voice control is still in its infancy, but I can’t help but think it is the future of how we interact with the devices around us. Most importantly, voice control is hands-free and eyes-free, which has major implications for activities like driving. Many companies are in a frantic race to equip every ‘dumb’ device with AI. It likely won’t be long until you can ask your toaster a question and expect it to respond.


When I first saw Minority Report over a decade ago, I was ecstatic about the future of the user interface. Tom Cruise gracefully waved his arms to manipulate intricate visualizations, dancing with data. But as sexy as that looks, it’s not our future. Holding your arms out in front of you for more than a minute is tiring and unsustainable. Future interfaces are far more likely to use our hands close to the body, like sign language.

Project Soli is one effort that is making it easier to control our devices with small gestures. It uses radar sensors to accurately track micro motions, and it can be built into small everyday devices like a watch, enabling people to use minute hand gestures to control the digital world around them.

If you think about it, this makes complete sense. When every object in our lives has a built-in computer — connected lightbulbs, smart sprinklers, sensing sneakers — they’re not going to have screens to control them. Emerging technologies like Project Soli show that the air in front of you can be your user interface.


Have you ever been watching a movie and the imagery is so captivating that you want to just reach out and touch it? The billions of screens that pervade our lives are convenient and powerful, but rich experiential layers remain hidden behind the cold pieces of glass that cover them.

What if we could someday view an image of a cheetah and actually feel its fur when touching our screen? Disney Research has been working on ways to render 3D geometric features on 2D surfaces, which could open up new ways to share experiences. By modulating a small voltage across the screen to simulate frictional forces under a sliding finger, their flat displays can take on a new three-dimensionality.

Think about how this could change learning: students could get out of their static textbooks and actually touch the objects they’re studying. In reading comprehension studies, the lab found, unsurprisingly, that children’s comprehension and memory improved when haptic feedback was provided.

Wearables are also helping to bridge the gap between our digital and physical worlds. A wearable vest called the Sensory Substitution Vest was recently developed to translate sound waves into vibrations that people can feel with their entire body.

This has significant implications for people who are hearing impaired. Researchers discovered that in only a few months, a deaf person’s brain could reconfigure itself to “hear” the vest’s vibrations as sound, in effect giving deaf people a new way to hear.

Image: Mind the Horizon

Connected or Broken

With supercomputers in our pockets, AI assistants in our ears, and answers to any question just a summons of our dear friend Alexa away, it can feel like the room for technological innovation is narrowing.

But actually our technological frontier is just beginning. We’re moving into an era where all objects, not just computers, will be expected to be interactive.

According to technologist Kevin Kelly, “Soon if a thing does not interact, it will be considered broken.” It’s true: ask any three-year-old today why your newspaper doesn’t pinch and zoom, and she will likely say it’s broken. The newest generation of kids expects everything to be connected and interactive because they’ve been handed touchscreens since birth. But soon, screens may no longer be the primary window into our digital world.

Thanks for reading! If you’d like to continue the conversation, leave me a comment or ping me on Twitter @blakehudelson.