3 Future UX Trends Inspired By Apple’s Vision Pro

Jim Ekanem
8 min read · Aug 3, 2023


Introduction

Apple’s announcement of the Vision Pro headset has sparked anticipation for a new era of spatial computing. The headset introduces new possibilities for interacting with digital environments, opening up a world of multi-modal experiences and immersive design. As spatial computing gains momentum, digital designers face the challenge of sensing and shaping the future of digital design. In this article, I explore three UX design trends that could follow the Vision Pro’s success and the impact they might have on the future of digital experiences.

Trend 1: Attention Ownership

The present and future of technology interaction present a key challenge: the race for attention. Children already face developmental problems attributed to smartphone usage. From a neuroscience perspective, young, developing brains learn by engaging all senses in real-world interaction. iPads and smartphones cannot facilitate this process unless you switch the screens off and let kids throw them around as if they were just another piece of worthless metal. As adults, we have already playfully learned about the world, but our attention ownership is at stake. Soon, mixed reality headsets like Apple’s Vision Pro will exceed their two-hour battery life and manage to limit eye fatigue. At that point, the race for attention will reach new heights. Designers have the power to shape the impact technology has on our attention ownership.

Regardless of the responsibilities of designers and other stakeholders, from the user’s perspective, as a human, it is wise to stay prepared in order to sustain a positive level of mental health. Living in our accelerated and interconnected part of the world, I believe that every knowledge worker must master two key skills to maintain overall life satisfaction and productivity at work. On one hand, we are required to learn and unlearn rapidly to keep up with the mass development of new knowledge and AI tools. On the other hand, it is important to navigate the age of indulgence, which confronts us with constant stimuli and dopaminergic surges. For great ideas and social relationships to flourish, we must engage in deep work and ongoing focus bouts, and thus have agency over our attention. However, we can only exert so much control over our lives because we depend on the development of humane technology. You may remember this term from the Netflix film The Social Dilemma, which was co-produced by former Google design ethicist Tristan Harris. On their podcast, Your Undivided Attention, he and his co-host Aza Raskin present three rules for humane tech:

RULE 1: When we invent a new technology, we uncover a new class of responsibility. We didn’t need the right to be forgotten until computers could remember us forever, and we didn’t need the right to privacy in our laws until cameras were mass-produced. As we move into an age where technology could destroy the world so much faster than our responsibilities could catch up, it’s no longer okay to say it’s someone else’s job to define what responsibility means.

RULE 2: If that new technology confers power, it will start a race. Humane technologists are aware of the arms races their creations could set off before those creations run away from them — and they notice and think about the ways their new work could confer power.

RULE 3: If we don’t coordinate, the race will end in tragedy. No one company or actor can solve these systemic problems alone. When it comes to AI, developers wrongly believe it would be impossible to sit down with cohorts at different companies to work on hammering out how to move at the pace of getting this right — for all our sakes.

These rules emphasize the importance of acknowledging and addressing the responsibilities that arise with the creation of new technologies. They caution us as technologists to be mindful of the power our innovations carry. As mixed reality devices evolve, it is thus important to cooperate with one another to avoid detrimental consequences and ensure ethical, responsible development in an increasingly interconnected world.

Trend 2: Multi-modal interaction

With the announcement of Apple’s Vision Pro, we are witnessing the emergence of new possibilities for user interaction. As our digital experiences become more immersive, the limitations of traditional input methods will show. Multi-modal interactions combine input modes such as gestures, voice commands, and eye tracking. Design innovations in this field, such as Gaze & Pinch, can unlock even more intuitive and seamless user experiences. Just imagine enhanced accessibility and more efficient interactions while catering to individual user preferences at the same time. Multi-modal interactions hold large potential to shape the future of UX design. Read my full article Fitts’s Law Meets Apple’s Vision Pro, where I break down the most recent research on multi-modal interaction techniques.

Trend 3: Real-World Spatial Embeddings

The context of use for UI designs will change. Imagine interfaces that compete with real environments in mixed reality and that users can customize in 3D space. It is mainly Apple’s responsibility to keep updating its mixed reality design guidelines as we use the devices more and expose their design weaknesses. Therefore, I reviewed Apple’s spatial design guidelines and related them to future design challenges.

Familiar

Familiarity remains key in spatial design. The visual language of spatial interfaces should harmonize and stay consistent with the app’s existing design elements on other interfaces. Consider Jakob’s Law of UX, which states:

Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know [less friction].

How can designers reduce friction in users’ interactions?

One of the primary ways designers can remove friction is by leveraging common design patterns and conventions in strategic areas such as page structure, workflows, navigation, and placement of expected elements such as search. When we do this, we ensure people can immediately be productive instead of first having to learn how a website or app works.

It will be interesting to observe upcoming shifts in interaction patterns across traditional app design, owing to the impact of spatial interface interaction. New technologies will always come with some friction. At this point, it is impossible to predict how, for instance, a user’s e-commerce checkout journey could be transformed entirely if the mixed reality Gaze & Pinch interaction offered a more seamless experience than current user flows. Jakob’s Law of UX can only stay valid as long as there is no paradigm shift within parts of the user experience.

Human-centered

Human-centered design takes center stage, encouraging intuitive and natural interaction patterns that seamlessly integrate digital experiences into the user’s surroundings. Reflecting on my blog article about the Need For 11 Usability Heuristics, I wonder: How do we design for accessibility and inclusion in mixed-reality environments? A 2020 Forbes article addresses this topic in relation to the metaverse, and I find several arguments worth revisiting, either to reinforce their point or to critique them.

To accommodate deaf visitors, locations in the metaverse will need to feature some type of captioning system but how will these be displayed in an immersive 3D environment? Will they follow the user wherever they turn their head and might this make some people nauseous?

I don’t think that text pinned to the screen will make anyone nauseous. However, I do think that technology should support users with hearing difficulties. A mixed-reality headset should be able to caption real-life noises and voices by combining external microphones with AI language models. The challenge for designers will be to separate and signify the difference between sounds from the device’s apps and sounds from the real world. What comes to mind immediately when reflecting on this challenge is how we implemented similar functionality in noise-canceling headphones: they let the user regulate the intensity of external noise suppression while also regulating the volume of the application sound. In the noise-canceling example, the locus of control lies with users, as they decide the degree to which the function supports their needs. It would be interesting to test the mixed-reality accessibility scenario with varying degrees of control for the user.
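To make the noise-canceling analogy concrete, here is a minimal Python sketch of user-controlled audio mixing, where both the app volume and an ambient "transparency" level are dials in the user's hands. The function name and the frame representation are assumptions for illustration only, not a real spatial-audio API.

```python
def mix_audio(app_frame, ambient_frame, app_volume, transparency):
    """Blend app audio with captured ambient sound.

    The locus of control stays with the user: `app_volume` and
    `transparency` are both user-set levels in [0, 1], analogous to
    noise-canceling headphones that let you dial in how much of the
    outside world comes through. Frames are lists of float samples;
    this is a simplified sketch, not a real spatial-audio pipeline.
    """
    return [app_volume * a + transparency * x
            for a, x in zip(app_frame, ambient_frame)]

# Full transparency with app audio halved: the real world dominates.
mixed = mix_audio([0.2, 0.4], [0.8, 0.6], app_volume=0.5, transparency=1.0)
```

Testing different transparency defaults on users with and without hearing difficulties would be one way to explore the varying degrees of control mentioned above.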

What about users with motor and dexterity impairments? To freely navigate the metaverse, users will be expected to master an assortment of pinches, swipes and hand gestures.

This will be an issue with Apple’s Gaze & Pinch gesture. Research has shown that the gesture performs worse than alternative interactions for distant on-screen targets. This is why I anticipate that Apple will include an option for users to switch to Gaze & Finger or Gaze & Handray. All three interaction modalities, along with the research supporting these insights, are explained in my article Fitts’s Law Meets Apple’s Vision Pro.
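The distance effect can be illustrated with Fitts’s Law itself, which predicts movement time from target distance and width. Below is a minimal Python sketch using the common Shannon formulation; the constants `a` and `b` are illustrative placeholders, since real values are fitted empirically per device and interaction technique.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.15):
    """Predict pointing time (seconds) with the Shannon formulation of
    Fitts's Law: MT = a + b * log2(D/W + 1).

    `a` and `b` here are made-up placeholder constants; in practice
    they are fitted per device and interaction technique.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A near target vs. a distant target of the same width: the distant
# one has a higher index of difficulty, so a longer predicted time.
near = fitts_movement_time(distance=10, width=2)
far = fitts_movement_time(distance=80, width=2)
```

The model predicts longer selection times as targets move farther away at a fixed width, which is consistent with distant targets being harder to hit regardless of modality.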

Dimensional & Immersive

Spatial design principles also emphasize dimension, prompting developers to create immersive experiences with depth, scale, and interactive windows, catering to users’ eyes, hands, and voices. Furthermore, control management in windows is thoughtfully addressed, with Apple suggesting that window options and commands be designed outside the window itself, ensuring a clutter-free and user-centric environment. Adapting interfaces to various screens is equally significant. Apple introduces the concept of points as a standard unit of measurement in design, allowing interface elements to scale and adapt based on user distance, promoting a flexible and cohesive spatial experience.
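The geometry behind distance-based scaling can be sketched in a few lines of Python: scaling an element linearly with viewing distance keeps its angular size, and thus its apparent size, constant. The function names and the reference distance are assumptions for illustration; this is the underlying geometry, not Apple’s actual implementation.

```python
import math

def dynamic_scale(current_distance_m, reference_distance_m=1.0):
    """Scale factor that keeps a window's angular size constant as the
    viewer moves: physical size grows linearly with distance, so the
    window subtends the same visual angle. An illustration of the
    geometry behind distance-based scaling, not Apple's real code."""
    return current_distance_m / reference_distance_m

def angular_size_deg(physical_width_m, distance_m):
    """Visual angle (degrees) subtended by a flat element seen head-on."""
    return math.degrees(2 * math.atan(physical_width_m / (2 * distance_m)))

# A 0.5 m-wide window placed at 1 m, rescaled for a viewer at 2 m,
# subtends the same visual angle as before.
scaled_width = 0.5 * dynamic_scale(2.0)
```

In this framing, a distance-independent unit like Apple’s points simply labels a fixed angular size rather than a fixed physical one.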

Authentic

This principle in Apple’s spatial design guidelines focuses on creating natural and intuitive interfaces in spatial computing. The main challenge is creating an authentic app experience regardless of the real-world environment that the user is in. To be fair, designers will have to cope with less dynamic contexts than with mobile computing. Thus, it is Apple's responsibility to listen to feedback and once again shape an overarching OS interface that supports any app design in a multitude of real-world environments.

Summary

As we approach the age of spatial computing, anticipating the trends of multi-modal interactions, attention ownership, and human-centered design becomes essential for ensuring seamless and ethical user experiences. By proactively discussing these principles and staying responsive to technological advancements, designers can shape authentic, immersive digital environments and reflect on the impact of their own work. As the field of spatial computing evolves, collaboration and responsible design practices will pave the way for a future where users seamlessly interact with technology in ways we can only imagine. Let’s embrace the potential of spatial computing with caution to positively shape a humane, interconnected digital world.

References & Further Readings

Tristan Harris’ and Aza Raskin’s podcast Your Undivided Attention: Link

O'Reilly's summary of Jakob’s Law of UX: Link

Forbes’ article on accessibility and inclusion in the metaverse: Link

My article Fitts’s Law Meets Apple’s Vision Pro: Link

My article Why We Need 11 Usability Heuristics: Link
