Contextual Awareness in Spatial Computing UX and UI

Costas Michalia
9 min read · Jul 4, 2023

--

As I continue to explore Apple’s latest OS, visionOS, I am consistently excited by the potential that the device and the discipline of spatial design offer. There is, however, an undeniable feeling that what might be missing from Apple’s toolkit is context, or more specifically, contextual awareness.

Contextual awareness is a pivotal component of UX design, accentuating the importance of the user’s context of use. This ensures the UX adapts to the user’s social, emotional, and physical environment.

(Source: Wikipedia)

Apple covers the basics here

Image: Apple | visionOS
Images: Apple | Windows | Volumes | Spaces

Windows: You can create one or more windows in your visionOS app. They’re built with SwiftUI and contain traditional views and controls, and you can add depth to your experience by adding 3D content.

Volumes: Add depth to your app with a 3D volume. Volumes are SwiftUI scenes that can showcase 3D content using RealityKit or Unity, creating experiences that are viewable from any angle in the Shared Space or an app’s Full Space.

Spaces: By default, apps launch into the Shared Space, where they exist side by side — much like multiple apps on a Mac desktop. Apps can use windows and volumes to show content, and the user can reposition these elements wherever they like. For a more immersive experience, an app can open a dedicated Full Space where only that app’s content will appear. Inside a Full Space, an app can use windows and volumes, create unbounded 3D content, open a portal to a different world, or even fully immerse people in an environment.

(Source: Apple)

In this instance, context could denote several concepts: work, rest, or play. Each might be influenced by factors such as location, time of day, or activity, and each of these in turn could be affected by a combination of elements. Apple incorporates a basic contextual function, named ‘Focus’, across macOS, iOS, and iPadOS. To an extent, this feature enables the user to specify time, location, and function to configure their device to their preferences, thereby aiding the user’s focus or, in the case of sleep, minimising disturbances.

Image: Apple | Vision Pro

With visionOS and Vision Pro, the concept of context can be taken a step further. Bearing this in mind, I have explored the notion of context and how it might be applied to various tasks. The first step is to integrate ‘Context’ into Apple’s ecosystem: Windows, Volumes, and Spaces now incorporate context.

Image: Apple | Fiora with the addition of the Context Icon. Showing 4 dimensions implying space & time

Context

Can be incorporated into Windows, Volumes, and Spaces. It can also be connected to an iPhone or iPad, a HomePod or an iBeacon, and even an Apple AirTag, to enhance Vision Pro and visionOS’s understanding of the current situation. Context transforms an immersive experience into something personal, intuitive, and productive.

Context mapping requires a deep understanding of user behaviour and task analysis to implement. How might context materialise? As the user enters a space — for instance, in the image below, the user has transitioned into a space equipped with a HomePod or an iBeacon — these devices notify Vision Pro of the user’s current location, and the headset then suggests apps or content, or awaits instruction, based on the most probable forthcoming event.
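The beacon hand-off described above could be modelled very simply: each room advertises a beacon identity, the headset picks the strongest signal it can see, and suggestions are looked up per room. A sketch under assumed names and identifiers (a real implementation would use Core Location beacon ranging; every value below is illustrative, not a real identifier):

```python
# Map beacon identifiers (e.g. iBeacon major/minor pairs) to rooms,
# and rooms to the apps most likely wanted there.
BEACON_ROOMS = {"beacon-01": "home office", "beacon-02": "lounge"}
ROOM_SUGGESTIONS = {
    "home office": ["Mail", "Calendar", "Notes"],
    "lounge": ["Music", "Apple TV", "Safari"],
}

def suggest_apps(sightings: dict[str, int]) -> list[str]:
    """Pick the room of the strongest sighted beacon (RSSI in dBm,
    closer to 0 is stronger) and return that room's suggested apps."""
    if not sightings:
        return []
    nearest = max(sightings, key=sightings.get)
    room = BEACON_ROOMS.get(nearest)
    return ROOM_SUGGESTIONS.get(room, [])

# The lounge beacon (-45 dBm) is stronger than the office one (-70 dBm).
print(suggest_apps({"beacon-01": -70, "beacon-02": -45}))
```

In practice the ranking would also weigh recency and dwell time, but the lookup structure stays the same.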

Images: Background Vitsoe | Apple products Apple

The image below extends the capabilities of Apple’s ‘Focus’ tool by merging the concepts of spatial computing and spaces to provide context. In this scenario, the user has entered their home office and the context function has suggested three key ‘Focus Spaces’ — Work, Research, and Personal. Based on previous activity within this space, these are the most frequently activated Focus Spaces.
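Ranking suggestions by past activity is, at its simplest, a frequency count over an activation log. A sketch, assuming a per-room history of Focus Space activations is recorded (the history below is invented for illustration):

```python
from collections import Counter

def top_focus_spaces(activations: list[str], n: int = 3) -> list[str]:
    """Return the n most frequently activated Focus Spaces,
    most frequent first."""
    return [name for name, _ in Counter(activations).most_common(n)]

history = ["Work", "Research", "Work", "Personal", "Work",
           "Research", "Gaming", "Personal", "Work"]
print(top_focus_spaces(history))  # ['Work', 'Research', 'Personal']
```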

Navigating via eye-tracking, the user is able to select the ‘Work’ Focus Space. The resultant effect is the minimisation of the Apple TV, Music, and Mindfulness Apps, and the promotion of work-related apps.

Images: Background Vitsoe

The user can now expand the ‘Work — Focus Space’ and is presented with several options, all of which have been previously engaged with. Upon selecting the relevant Focus Space, the interface shifts, filling the user’s view with the key windows, apps, and associated elements that pertain to that specific context. Moreover, the previously used apps are present, saving the user the time and effort of switching views or navigating menus.

Images: Background Vitsoe

A critique I hold of several other headsets currently available (please note my current go-to headset is the Oculus Pro) centres on how slow some tasks can be to accomplish. Typing, navigation, and even scrolling can prove tedious. A well-known argument against the usability of VR/AR headsets focuses on their overall interface complexity and cognitive overload, which create a steep learning curve for users. Research in the AR space supports this criticism, with studies highlighting challenges in user experience design, information display, and effective interaction techniques, all of which add to user frustration.

The concept of integrating context into Apple’s already accomplished UI holds the potential to greatly enhance the efficiency of work conducted within the headset.

One fascinating feature of Apple’s Vision Pro is its capacity to accommodate a screen as expansive as your field of view. Several studies have been conducted to determine the optimum setup, varying between multiple monitors and single, larger monitors. The shift towards spatial computing and the elimination of boundaries introduces another intriguing challenge, as well as a benefit.

Images: Matthew S. Smith | Georgijevic

Multiscreen and large format screens

The effectiveness of using larger and multiple screens can vary depending on the nature of the tasks being performed, the user’s experience, and the specific context. There have been various studies conducted in this area, and while there is no consensus on the ‘perfect’ setup, several key insights have emerged.

  1. Increased Productivity with Multiple Monitors: A study by the University of Utah found that productivity can increase by up to 25% when using multiple monitors, as they allow for easier multitasking and information retrieval. However, the study also highlighted that the benefits can depend on the nature of the work. Tasks that require users to switch frequently between applications or to compare information side-by-side may benefit more.
  2. Potential for Increased Distraction: Conversely, research has suggested that more screen space can lead to increased distraction, especially with the proliferation of multitasking. In an environment where several windows or apps are open simultaneously, users may find it harder to focus on a single task (Czerwinski, et al., 2004). However, this may also depend on individual user preferences and their ability to manage their attention.
  3. Size Matters: Larger monitors can help reduce the need for scrolling and can make text and images easier to see. However, if a screen is too large, it may require users to move their head and eyes more frequently, leading to potential discomfort or fatigue.
  4. Optimal Arrangement: Some research suggests that a two-monitor setup, with one main monitor in front and a secondary one to the side, may provide the best balance for many users (Colvin, et al., 2004).
  5. Single Large Display vs. Multiple Smaller Displays: Studies have also explored whether a single large display or multiple smaller displays are more effective. Some studies found that for tasks that require a broad overview (like data analysis), a single large display may be more beneficial. Conversely, for tasks that involve switching between applications, multiple smaller displays may be more effective.

It’s important to note that the ‘perfect’ setup may vary depending on individual preferences, the nature of the tasks being performed, and the specific context of use.

No borders or chairs

Taking a moment to contemplate the actual usage of the headset, I find it hard to envision myself seated while using the Vision Pro, considering that I seldom, if ever, sit while employing my Oculus Pro. As such, standing might seem the more natural position, thereby rendering the ability to turn, move around, and situate virtual windows in contextual spaces as an intuitively logical progression.

Images: Background Vitsoe

Where I see contextual spatial computing truly starting to excel is within the realm of collaboration. Restricting oneself to a window within Zoom or Teams and having to share screens, taking turns to express opinions or thoughts, is time-consuming, awkward and tiring.

As illustrated in the image above, the team is engaged across several documents and platforms. I envisage a more tangible ‘Horizon Workrooms’ type space. However, within visionOS, everyone shares the same space, simultaneously working on a single document or collaborating across multiple documents independently.

Imagine sharing the same space, but within your own environment — think of it as akin to sharing a window rather than your entire screen. You can choose to invite someone into your space and establish a boundary within that area. In essence, you’re sharing the context and Focus Space but not the entire environment.

Possessing the capacity to control one’s own space, retreating to a corner to take a call or send an email whilst remaining present, will be a compelling feature. It’s likely to address some of the pain points many users encountered within Horizon Workrooms.

Image: Background Vitsoe

As the day draws to a close, the user walks through their home, eventually entering their lounge. The headset, recognising that it’s 6:00pm and the end of the workday, presents the user with news updates. The user casually dismisses these, takes a seat, and glances at their record collection. In response, the headset shifts to music mode, foregrounding the three most frequently used apps in this particular Focus Space: Music, Apple TV, and Safari.

This scenario showcases the potential future of spatial computing — a world in which walls, indeed any surface, become screens and facilitate interactions. Current research and predictions suggest that this is more than mere speculation. Developments in technologies such as eye tracking, heads-up displays, augmented reality, holography, and advanced projection systems are leading us towards an era of ubiquitous computing, where the environment itself becomes the interface.

This vision aligns closely with the concept of contextual spatial computing, which can be encapsulated as ‘Contextually aware Focus Spaces’ (CaF). By adding layers of contextual information to our spatial interactions, we enrich the user experience and add a level of personalisation and adaptability that isn’t currently possible with standard screens. This concept is at the core of the Vision Pro and visionOS experience, providing a holistic, immersive, and personalised interaction with digital content.

However, the ultimate goal must be to transition beyond the headset and the operating system, to a point where our environment itself is the computing device. This future is beautifully captured in the concept of Star Trek’s Holodeck — a fully immersive, interactive environment that responds to the user’s needs and actions in real-time. While we are not there yet, advances in spatial computing, holography, and related technologies are taking us ever closer to this vision.

Please note: the UI of Vision Pro and visionOS in the visuals is based on the Spatial UI Kit by GTCSYS Design, sourced from the Figma Community. Several of the UI components were adapted or created specifically for this article.

--

Costas Michalia

Strategy & Innovation Director @Fiora. Spend most of my time thinking about thinking. www.fiora.agency www.fioraconsultancy.com