Your UX Design issues… in virtual 3D… in the real world!

Alexia Buclet
Minsar
Nov 20, 2018 · 7 min read

With the UX Design of Human-Machine Interfaces comes a number of questions: where should we place elements on the screen so the user can interact with them optimally? How much information should we display? Where? When?

When designing Virtual Reality interfaces, many more issues arise because of the third dimension. I'm sure you wonder what it's like when virtual elements are integrated into the physical world with Mixed Reality… Well, let's take a look!

Different issues

2D

As designers, we're used to working on products with well-defined usage environments. We're guided by standard guidelines (position, contrast, size…) and interactions (mouse, touch…). Even when devices differ (smartphones, tablets, or computers), we have enough perspective to deal with it, thanks to responsive design for example.

We're confronted with definite constraints that frame our work. There's a precisely defined workspace: a desktop or mobile screen, a frame inside a piece of software… This workspace is often a rectangle, whose limits are easy to perceive for both the designer and the user.

Take the PowerPoint workspace for example: all zones are persistent and clearly defined, even if there are some options to customize them.

Powerpoint zoning

We’ll get back to PowerPoint later on as a concrete example to compare with 3D.

3D

The interface no longer fits a screen. It is displayed around the user, and it's our job to make it as relevant as possible depending on the context.

In the Real world
Huge scoop: the physical world is in 3D. More seriously, you'll understand why I'm stating such obvious facts. Our world is ruled by the laws of physics, like gravity and matter. Here on Earth, you can't teleport somewhere (for now), walk through a wall or walk on air. We are used to those rules, and we don't pay attention to them (most of the time).

In Virtual Reality
All those magical interactions become possible in VR: we're no longer bound by the laws of physics. You can go through anything, fly, be invisible, move things you can't reach with your hand… Everything is possible; the only limits are imagination and device performance. You can create unlimited types of interactions.

Nevertheless, the user can't do anything that wasn't anticipated by the VR experience team: the rules are defined by the code behind the experience. The user must be guided to know what they can do in this new reality.

In eXtended Reality

Here at Opuscope, we are working on an XR (Virtual, Augmented & Mixed Reality) software solution to create 3D experiences, immersively. While in the headset, you can import or create content to organize a scene, make its elements interactive, and share your experience with your colleagues and audience. You can create and visit those experiences in an open space or a specific one (for example, the Hall of Mirrors in the Palace of Versailles). This means that we face both 3D and physical-world issues.

Import and adjust your content, collaborate on your experience, then share it with the world!

We have to code another reality, compatible with the physical world; this means thinking about every possible case your user can encounter, to provide the best possible experience.

As designers, here are the main parameters we need to take into consideration:

  • Depth, since there are 3 dimensions.
  • The FOV (field of view), much smaller than in the real world (with current headsets).
  • The user’s position and behavior.
  • The physical world configuration (walls, furniture, people moving around, light…).

See my other article for more details.

There are a lot of possible conditions of use: many constraints, yet you can't control the user's environment… the physical world's limitations without the advantages of VR.

Take the Microsoft HoloLens as an example: the cursor is linked to head movements (not the hand, as with a computer mouse). The user interface can move according to the user's head position and movements. It can't be designed like UI for 2D screens, where elements stay put most of the time; this includes AR phone UI, since buttons can be displayed on the screen edges (see this nice article about "world space" vs "screen space": https://medium.com/@nathangitter/what-i-learned-making-five-arkit-prototypes-7a30c0cd3956).
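To make this concrete, here is a minimal sketch of head-linked UI placement: a panel anchored at a fixed distance along the head's forward direction, so it follows the user's gaze instead of sticking to a screen edge. This is an illustrative Python sketch under my own simplifications (yaw-only rotation, hypothetical function name), not the HoloLens API; real code would live in an engine like Unity.

```python
import math

def head_locked_anchor(head_pos, head_yaw_deg, distance=1.5):
    """Place a UI panel `distance` meters along the head's forward
    direction (yaw only, on the horizontal plane, for simplicity)."""
    yaw = math.radians(head_yaw_deg)
    # At yaw 0 the user looks down +Z; at yaw 90 they look down +X.
    forward = (math.sin(yaw), 0.0, math.cos(yaw))
    return tuple(p + distance * f for p, f in zip(head_pos, forward))
```

Recomputing this anchor every frame gives a fully head-locked panel; in practice you'd smooth or lag it so the UI doesn't jitter with every small head movement.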

A concrete example: the import feature

When you import something in a software, there are usually 3 steps:

  1. Choose the element (not the subject of this article).
  2. Download the element (most of the time a loading sign is displayed, and the element's properties aren't always known at this step).
  3. Position the element in the workspace (slide, scene, etc.).
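Those three steps can be sketched as a tiny, engine-agnostic flow; the callables below are hypothetical stand-ins, not any real software's API:

```python
def import_element(choose, download, place, on_progress=print):
    """Generic three-step import flow: choose, download, position.
    `choose`, `download` and `place` are hypothetical callbacks."""
    element = choose()                  # 1. the user picks an element
    on_progress("downloading…")         # 2. show a loading sign; the element's
    data = download(element)            #    properties may still be unknown here
    on_progress("done")
    return place(data)                  # 3. position it in the workspace
```

Step 2 is where most of the design questions hide: the software must behave sensibly while it knows almost nothing about the element it is fetching.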

2D

As promised earlier, let's take a closer look at the import feature of PowerPoint.

When you drag and drop a picture into PowerPoint, as an HMI designer you know there are 2 possible cases:

  • the picture fits the slide
  • the picture doesn’t

What choice did the HMI design team make?

When you import a picture, its original size is used; if it's bigger than the slide, it's automatically scaled down to fit, because you know your user won't want a picture half visible on a slide… at least most users won't. For the rest of them, advanced options are available to manage the picture size, or they can zoom out and scale the picture themselves.

Conversely, if the user changes the slide size, a popup appears asking them to choose whether or not to adjust the content to the new slide size.

Powerpoint popup
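That default behavior fits in a few lines: scale the picture down, never up, preserving its aspect ratio. This is an illustrative Python sketch of the rule described above, not PowerPoint's actual code.

```python
def fit_to_slide(img_w, img_h, slide_w, slide_h):
    """Scale a picture down (never up) so it fits inside the slide,
    preserving its aspect ratio."""
    scale = min(slide_w / img_w, slide_h / img_h, 1.0)  # 1.0 caps upscaling
    return img_w * scale, img_h * scale
```

Two comparisons and a cap: that's all the "fit" decision takes in 2D, which is exactly why the 3D version below is so striking by contrast.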

It's an obvious example, but now you'll see the additional problems we face in 3D for the same feature.

3D

To be relevant in 3D, we must take several possibilities into account to define an imported element's behavior:

  • Does the imported element fit the scene?
    Or is it too big and may go through a wall for example?
  • Does the imported element fit the user’s FOV?
    Or is it too huge to be identifiable at a glance by the user, or may surround them when the import is completed?
  • Where is the user when the element is done downloading?
    If they’re far, how can they know the import is a success? If it’s not, they should be able to easily fix the problem from where they are without being interrupted in their current task if they don’t want to…
    Will the element collide with the user when the import is done?
  • Is the imported element big enough to be selected?
    Or is it likely to stay in the scene forever even if the user wants to get rid of it?
  • Are there virtual elements around the import spot the element may collide with?
  • What would happen if the user wants to move the element during the loading? Or change its size, rotation, or other parameters?
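As a thought experiment, some of these questions could be expressed as a pre-import validation pass. Everything here is illustrative: the thresholds and the ~34° horizontal FOV are placeholder assumptions, not real device specs and not how Minsar actually handles it.

```python
import math

def import_issues(elem_size, room_size, user_distance,
                  min_selectable=0.02, fov_half_angle_deg=17.0):
    """Return the issues an imported element would raise.
    Sizes are (width, height, depth) tuples in meters."""
    issues = []
    # Does the element fit the scene, or would it clip through a wall?
    if any(e > r for e, r in zip(elem_size, room_size)):
        issues.append("too big for the scene")
    # Does it fit the user's FOV at their current distance?
    visible_width = 2 * user_distance * math.tan(math.radians(fov_half_angle_deg))
    if elem_size[0] > visible_width:
        issues.append("wider than the FOV at this distance")
    # Is it big enough to be selected at all?
    if max(elem_size) < min_selectable:
        issues.append("too small to select")
    return issues
```

Note that a check can only flag a problem; deciding what to do about it (auto-scale, warn, move the spawn point…) is the actual design work.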

Our job is to ask ourselves these questions and find the best possible answers. That way, the user won't be bothered by these issues and will get an intuitive experience for this seemingly simple feature. However, we should beware of crossing the line into making too many choices for the user: they must always be, and feel, in control.

I won't reveal how we tackled these challenges; you'll be able to find out when using Minsar!
In any case, next time you try a 3D experience, pay attention to every detail and try to guess which issues the design team addressed and which ones they didn't. This will help you think about all the possible scenarios and challenges your users could face, and keep you from forgetting some. You'll be a better 3D UX Designer for it.

Creating new worlds: the challenge of Designers

Adding a new dimension into our products comes with a set of new challenges and issues, especially for Mixed Reality which merges virtual elements into the physical world.

Solving them should be the concern of designers and the development team only, never the end user.

This challenge comes from our ambition to develop a solution compatible with the entire XR spectrum, including Augmented, Mixed and Virtual Reality. It will allow us to cover a large number of use cases in the field, leading to exciting prospects for the future.

Our goal with Minsar is to provide a great user experience and an easy-to-use interface. To reach this goal, we're focused on solving beforehand any issues that could arise, so the user can enjoy their experience smoothly while achieving their own goals.

Please 👏 👏 👏 if you enjoyed your reading and feel free to share your thoughts and your 3D Design issues to enrich this reflection!

French UX Designer & Cognitive Psychologist since 2010, I worked at Ubisoft, Adobe, Aldebaran Robotics and Opuscope (AR/VR). Currently freelance in impact tech!