Shared Experiences in Mixed Reality

Understanding the opportunity (and complexity) of designing for shared scenarios

In the 1960s, sociologist Erving Goffman described communication as a system of intentional and unintentional expressions. Speaking, eye contact, and smiling are intentional expressions we make when we interact with others. Non-verbal cues in our body language make up the unintentional expressions. Both are especially important when we collaborate: Body language helps us pace what we say, gauge interest, and ensure understanding.

In the absence of visual cues, we rely on voice: the inflection of words, the pace of speech, the pauses for response. This is most obvious on the phone, a tool that augments our voice in a tremendously powerful way. Voice conversations can be had anywhere, over nearly any distance, instantaneously.

Video takes this further. Visuals enhance both intentional and unintentional expressions of the speaker and listener: Gauging reactions, showing our surroundings, presenting things. Yet despite this, videoconferencing is not always the killer app for business it was designed to be…

In even the best videoconferencing scenarios (when the projector works and the connection is excellent) something is missing. The tech is limited by the medium itself — 2D video struggles to mimic the presence and immersion of being in a conversation. Video lacks the ability to fully communicate intentional and unintentional expressions. And that limitation is one of the key reasons why travel remains a crucial part of business today.

Enter: mixed reality.

The opportunity (and complexity) of mixed reality

Just as the telephone is built on voice, and video is built on images, mixed reality is built on presence and immersion. Immersing participants in a digital layer within their real world (or a new, virtual one) while leveraging the presence of others sharing in the experience.

Shared experiences in mixed reality can help circumvent technical literacy. Much as voice-only UIs attempt today, we can design more natural shared interactions, building on the intuitive understanding we already have of interacting in groups of people. Representing ourselves and others through digital avatars brings new opportunities to remove abstraction from the experience. Instead of finding a button in the interface to point at something (as you would in a 2D videoconferencing scenario), you simply point to it with your hand or motion controller.

Fundamental qualities like this provide unique benefits across the spectrum of mixed reality, allowing developers to make the most of a device’s individual strengths:

  • A shared immersive (virtual reality) experience might take advantage of embodying an avatar in a 3D world: from leveraging known real world behaviors (e.g., placing notes on virtual objects as you would with objects in real life) to creating fantastic new perspectives (e.g., walking around a blood cell).
  • A shared holographic (augmented reality) experience might layer data over a real world environment (e.g., visualizing data at a construction site) while leveraging our real bodies (e.g., discussing holograms by walking around them).

To utilize these strengths, developers must navigate the complexity that comes with them. In an ideal use case, we have two people with holographic devices collaborating in the same room, or two people with immersive devices collaborating remotely. But mixing these devices (while still leveraging their strengths) becomes considerably more complex. A mixture of devices, real presenters, virtual audiences, real tools in virtual worlds, virtual objects in real spaces — the design can be daunting!

Six questions to define shared scenarios

Before you begin designing for shared experiences, it’s important to define the target scenarios. These scenarios help clarify what you’re designing and establish a common vocabulary to help compare and contrast features required in your experience. Understanding the core problem, and the different avenues for solutions, is key to uncovering opportunities inherent in this new medium.

Through internal prototypes and explorations from our HoloLens partner agencies, we created six questions to help you define shared scenarios. These questions form a framework, not intended to be exhaustive, to help distill the important attributes of your scenarios.

1. How are they sharing?

There are many ways to share, but we’ve found that most fall into three categories: presentation, collaboration, or guidance. Generally, the complexity of an experience increases with the user’s level of agency: a presentation might be led by a single virtual user; a collaboration might involve multiple users working together; a teacher might provide guidance to virtual students working with virtual materials.

2. What is the group size?

Complexity increases exponentially as you go from small to large groups. One-to-one sharing experiences can provide a strong baseline and ideally your proofs of concept are created at this level. Be aware that sharing with large groups (beyond 6 people) can lead to difficulties from both technical (data and networking) and social (the impact of being in a room with several avatars) perspectives.

3. Where is everyone?

The strength of holographic experiences comes into play when a shared experience takes place in the same location. We call that co-located. Conversely, when the group is distributed and at least one participant is not in the same physical space (as is often the case with immersive experiences) we call that a remote experience. Your scenario might also feature both co-located and remote participants (e.g., two groups in different conference rooms).

4. When are they sharing?

We typically think of synchronous experiences when sharing comes to mind: Everyone participating together. But include a single virtual element that was added by someone else previously and you have an asynchronous scenario. Imagine a note, or voice memo, left in a virtual environment from a previous session. How do you handle 100 virtual memos left on your design? What if they’re from dozens of people with different levels of privacy and access?
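The persistence and access questions above can be modeled with a simple data structure. The Python sketch below is purely illustrative; the `Annotation` type and `visible_annotations` helper are hypothetical, not part of any mixed reality SDK. It filters leftover memos by per-user visibility and caps how many are surfaced at once so a session with hundreds of memos stays readable.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A note or voice memo left in the shared environment in an earlier session."""
    author: str
    content: str
    visibility: set  # user IDs allowed to see this annotation; empty set = public

def visible_annotations(annotations, viewer, limit=25):
    """Keep only annotations the viewer may see, then cap the count shown.

    Filtering by access rights and limiting the display are two separate
    concerns an asynchronous scenario has to handle.
    """
    allowed = [a for a in annotations
               if not a.visibility or viewer in a.visibility]
    return allowed[-limit:]  # keep only the most recent `limit` entries
```

For example, a public memo is returned for every viewer, while a memo whose visibility set names only its author is hidden from everyone else.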

5. How similar are their physical environments?

Outside of co-located experiences, two real-life environments are unlikely to be identical unless they have been designed that way; you’re more likely to have similar environments. For example, conference rooms are similar: they typically have a centrally located table surrounded by chairs. Living rooms, on the other hand, are usually dissimilar and can include any number of pieces of furniture in an array of layouts. If your experience depends on the physical environment (e.g., holograms needing to be placed on a flat surface), what is the fallback?
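A fallback strategy can be made explicit in code. This Python sketch is a hypothetical illustration (the surface representation and the `place_hologram` helper are assumptions, not a real API): it tries to anchor a hologram on a sufficiently large flat surface and falls back to a floating placement when the environment offers none.

```python
def place_hologram(flat_surfaces, required_size_m):
    """Choose a placement for a hologram that prefers flat surfaces.

    flat_surfaces: list of dicts describing detected surfaces, e.g.
        {"id": "table-1", "width": 1.2, "depth": 0.8} (sizes in meters).
    required_size_m: the footprint the hologram needs on a surface.
    """
    for surface in flat_surfaces:
        if (surface["width"] >= required_size_m
                and surface["depth"] >= required_size_m):
            return {"mode": "surface", "anchor": surface["id"]}
    # Fallback: no suitable surface was found, so float the hologram
    # at a comfortable fixed distance in front of the user instead.
    return {"mode": "floating", "distance_m": 1.5}
```

The design point is that the fallback is a deliberate branch of the experience, not an error state.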

6. What devices are they using?

Today, you’re most likely to see shared experiences between two immersive devices (which might differ slightly in controllers and relative capability, but not greatly) or between two holographic devices, given the solutions being targeted at these devices. But 2D devices (a mobile or desktop participant or observer) will also be a necessary consideration, allowing for situations that mix 2D and 3D devices. Understanding the types of devices your participants will be using is important, not only because mixed devices create different fidelity and data constraints, but also because users have distinct expectations of each platform.
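One way to keep the six questions actionable is to capture each scenario as a small record and flag the attributes the article calls out as complexity drivers. The Python sketch below is illustrative only: the `SharedScenario` type, its field values, and the `risk_flags` heuristics are assumptions distilled from the framework (large groups beyond six people, mixed 2D/3D devices, asynchronous persistence), not an established schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedScenario:
    """One answer per question in the framework (field values are illustrative)."""
    sharing: str        # "presentation" | "collaboration" | "guidance"
    group_size: int     # number of simultaneous participants
    location: str       # "co-located" | "remote" | "mixed"
    timing: str         # "synchronous" | "asynchronous"
    environments: str   # "identical" | "similar" | "dissimilar"
    devices: str        # "holographic" | "immersive" | "mixed-2d-3d"

    def risk_flags(self):
        """Surface the attributes known to add technical or social complexity."""
        flags = []
        if self.group_size > 6:
            flags.append("large-group")
        if self.devices == "mixed-2d-3d":
            flags.append("mixed-devices")
        if self.timing == "asynchronous":
            flags.append("persistence")
        return flags
```

For instance, describing a construction-site review as `SharedScenario("collaboration", 8, "mixed", "synchronous", "dissimilar", "mixed-2d-3d")` immediately flags the large group and the device mix as areas to prototype early.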

Learning from our partners

The design team has been applying this framework to better understand opportunities inherent to this new medium. From existing apps like Skype to entirely new shared experiences targeted at both immersive and holographic devices, there is enormous potential left to explore. You can learn more about how to develop shared experiences in your own mixed reality apps on the Windows Dev Center.

Special thanks to Amy Scarfone from the Windows Mixed Reality design team and Amy Hillman from Object Theory who developed these concepts for a recent workshop at Microsoft’s Mixed Reality Academy.