UX PoC in AR/VR #3: Place & Move Assets

Alexia Buclet
9 min read · Jan 24, 2023


The 3rd episode of a series sharing Opuscope's work on building the best possible User eXperience (UX) in Augmented Reality (AR) and Virtual Reality (VR), thanks to Proofs of Concept (PoCs). Maybe they will help other professionals on their AR/VR journey.

To get more context about how we made PoCs, you can check out the first episode:

This episode focuses on the different PoCs we made on how creators could intuitively place and move their assets.

Challenges

Placement may seem familiar and easy, but in AR/VR, it’s not. 🫣

It’s even harder when the placement isn’t automatic but performed by the user. We need to provide them with relevant features and guide them in this new environment.

The challenge is to find the best balance between immersing users in a world like the one they know, with the same physics rules, and offering them convenient new powers! 🪄

Many apps place elements on a horizontal plane only. Placement can be automatic once the plane is detected, or the user taps on the plane to make the element spawn at that spot.

Since we were working on an app to create AR/VR experiences, we faced a lot of requirements:

  • the user must be free to place their assets anywhere (not only on the floor).
  • the asset to place may be of any size or proportion (since it’s imported by the user).
  • the placement must take both the virtual and physical (AR) environment into account.
  • the user must be guided through the most common cases.
  • the user must see the asset while they place it (as opposed to tapping a spawn point) to get a WYSIWYG (What You See Is What You Get) experience.

Aaaaaand here we go with all these constraints! 💪

Gaze placement

We have used gaze placement since the beginning of Minsar. It was the easiest to implement on HoloLens and was compatible with all platforms.

Once the user chooses an asset to import in Minsar, it loads in front of them and follows their gaze. The user taps to drop it. This way, they can easily drop the element wherever they want, then fine-tune its position manually.
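In practice, gaze placement boils down to keeping the asset at a fixed distance along the user's gaze ray every frame. Here is a minimal Python sketch of the idea (Minsar's actual code isn't public; the names and distance value are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def scaled(self, k: float) -> "Vec3":
        return Vec3(self.x * k, self.y * k, self.z * k)

PLACEMENT_DISTANCE = 1.5  # meters in front of the user (illustrative value)

def gaze_position(head_pos: Vec3, gaze_dir: Vec3) -> Vec3:
    """Called every frame while the asset is unplaced; `gaze_dir` is the
    user's normalized forward vector. A tap stops the follow and leaves
    the asset where it is."""
    return head_pos + gaze_dir.scaled(PLACEMENT_DISTANCE)
```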

Define a max and min size

Even if creators could import anything, we had to define boundaries to help them in their import process.

A really tiny asset, let's say a 0.5 cm cube, would have been really hard to see and interact with. An element this size is meaningless in the experiences we offered the user to create. Therefore, we decided to scale up any asset below a certain size on 2 dimensions (one dimension could stay very tiny; the element would simply be flat).

The same goes for huge assets: at import time, it's hard to comprehend a real-scale virtual Eiffel Tower (more than 300 meters in height). We decided to set a maximum size so the creator could easily place the asset at first, then scale it up if they wanted to.

We made those choices especially because not all 3D models are modeled at human scale: they are made on computers, where a scale error can happen fast.
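As a rough Python sketch of such a clamping rule (the thresholds are invented for illustration, not Minsar's actual values):

```python
MIN_SIZE = 0.10  # meters: below this on 2 dimensions, the asset is hard to interact with
MAX_SIZE = 2.00  # meters: above this, the asset is hard to comprehend at import

def import_scale_factor(dims: tuple[float, float, float]) -> float:
    """Uniform scale applied at import so the asset stays usable."""
    second_largest = sorted(dims)[1]
    factor = 1.0
    if second_largest < MIN_SIZE:
        # Enlarge until at least 2 dimensions are visible; a single tiny
        # dimension is fine (the element is simply flat).
        factor = MIN_SIZE / second_largest
    if max(dims) * factor > MAX_SIZE:
        # Then cap the overall size; the creator can scale it up later.
        factor = MAX_SIZE / max(dims)
    return factor
```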

Handle the size

Depending on its size, an imported asset shouldn't be placed at the same spot: the creator should directly see it in full.

We set 3 positions:

  • Big assets: far enough to be fully in the Field of View (FoV).
  • Medium-sized assets: at a comfortable default position, not too imposing, yet close enough to interact with easily.
  • Small assets: close to the user so they don’t miss them.

Once the asset was imported, the creator could still adjust its position.
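A sketch of that size-to-position rule in Python (the cutoffs and distances are illustrative; only the big-asset case is derived from the FoV):

```python
import math

def spawn_distance(largest_dim: float, fov_deg: float = 90.0) -> float:
    """How far in front of the user an imported asset should spawn."""
    if largest_dim < 0.2:
        return 0.6   # small: close enough that the user can't miss it
    if largest_dim < 1.0:
        return 1.5   # medium: comfortable default, easy to interact with
    # big: far enough that the whole asset fits in the field of view,
    # with a 20% margin
    return (largest_dim / 2) / math.tan(math.radians(fov_deg / 2)) * 1.2
```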

Move

In Minsar, we offered several ways to move an element. Note that some ways were available only on some platforms.

We mainly used the PoC to define the inputs, movement speed, velocity, smoothness, and sensitivity, to offer precision while avoiding too many gestures when, for instance, moving something far away.

Drag & Drop

The obvious and regular way to move an element was to drag and drop it. The creator only had to target it, hold it, move it, and drop it at the spot they wanted. Intuitive and compatible with all platforms.
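The article doesn't detail how those tuned speed and smoothness values were applied, but a common way to get both precision and damping in a drag is frame-rate-independent exponential smoothing; a hedged sketch:

```python
import math

def smooth_follow(current: float, target: float, smoothing: float, dt: float) -> float:
    """Move `current` toward the drag target, one axis per call.

    `smoothing` is the tuned responsiveness (higher = snappier); the
    exponential form keeps the feel identical at any frame rate.
    """
    alpha = 1.0 - math.exp(-smoothing * dt)
    return current + (target - current) * alpha
```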

Touch screens

Even though drag & drop was compatible with touchscreen devices, they required some exceptions.

This device type only offers a 2D experience, compared to immersive headsets, so we needed 2 dedicated gestures to move the element in depth and in height. We chose a one-finger drag for depth, as it was the most common gesture, and a two-finger drag for height.

Targeting an element on a small screen may not be easy. That's why we decided the user could move a selected element by making gestures anywhere on the screen, instead of only by touching the element itself, as required when it wasn't selected.
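Both touch rules can be summed up in a small dispatch, sketched here in Python (speeds and signs are illustrative):

```python
def apply_touch_drag(position: tuple[float, float, float],
                     finger_count: int,
                     delta_y: float,
                     depth_speed: float = 0.002,
                     height_speed: float = 0.002) -> tuple[float, float, float]:
    """Map a vertical screen drag on a selected element to a 3D move.

    One finger  -> depth (toward/away from the user), the most common gesture.
    Two fingers -> height.
    The gesture can start anywhere on the screen once the element is selected.
    """
    x, y, z = position
    if finger_count == 1:
        z += delta_y * depth_speed
    elif finger_count == 2:
        y += delta_y * height_speed
    return (x, y, z)
```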

Another trick to help the creator easily place their element where they wanted was "hold to move", an in-house expression for the following feature: the creator could hold the element, then move their phone, and the element would move with it in space as if attached to the phone.
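"Hold to move" amounts to freezing the element's offset relative to the phone at grab time, then re-applying it as the phone moves. A simplified sketch (a real implementation would use full 6-DoF poses, rotation included):

```python
def hold_to_move(asset_pos_at_grab: tuple[float, float, float],
                 cam_pos_at_grab: tuple[float, float, float],
                 cam_pos_now: tuple[float, float, float]) -> tuple[float, float, float]:
    """Keep a held element rigidly attached to the moving phone."""
    offset = tuple(a - c for a, c in zip(asset_pos_at_grab, cam_pos_at_grab))
    return tuple(c + o for c, o in zip(cam_pos_now, offset))
```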

Grab (VR)

The controllers in VR offer further possibilities. One of them was to put the controller inside the virtual asset and grab it with a dedicated button. The element would then follow the controller's movements as if the creator were holding a physical object in real life. Nothing new, it's quite common in VR.
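The grab follows the same attach pattern, driven by the controller pose and a dedicated button; engines usually just re-parent the object to the controller. A hypothetical event-driven sketch:

```python
class Grabbable:
    """While grabbed, the element follows the controller as if rigidly held."""

    def __init__(self, position: tuple[float, float, float]):
        self.position = position
        self._offset = None  # set while the element is held

    def on_grab_button_down(self, controller_pos):
        # A real implementation would first check that the controller is
        # inside the asset's bounds (omitted for brevity).
        self._offset = tuple(p - c for p, c in zip(self.position, controller_pos))

    def on_controller_moved(self, controller_pos):
        if self._offset is not None:
            self.position = tuple(c + o for c, o in zip(controller_pos, self._offset))

    def on_grab_button_up(self):
        self._offset = None
```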

Controllers’ buttons

To offer another way to move elements, headsets with one or 2 controllers gave lazy creators the opportunity to use only the buttons, without moving their arm or hand.

This was especially challenging on the Magic Leap One: its single controller had only a touchpad and one button. We had to combine move with scale and rotate using very few inputs!

We offered this feature only for selected elements. The creator could use the touchpad's top or bottom edge to move them forward and backward, with 2 dedicated gestures:

  • A simple tap for a precise, step-by-step move.
  • A long press, with more or less pressure, for continuous movement.

Inputs on the Magic Leap One's controller (Magic Leap — 2020)

We iterated a lot on the PoC to find the most intuitive balance between speed and distance.
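The tap/long-press split might look like this in a Python sketch (the step, speed, and pressure mapping are invented; they are exactly the kind of values the PoC existed to tune):

```python
TAP_STEP = 0.02   # meters per tap: a small, precise nudge (illustrative)
MAX_SPEED = 1.0   # meters per second at full press strength (illustrative)

def touchpad_depth_move(is_long_press: bool, press_strength: float, dt: float) -> float:
    """Forward/backward offset from the touchpad's top or bottom edge.

    A simple tap nudges by a fixed step; a long press moves the element
    continuously, faster with a stronger press (`press_strength` in [0, 1]).
    """
    if not is_long_press:
        return TAP_STEP
    return MAX_SPEED * press_strength * dt
```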

Interaction with the environment

Elements are moved in an environment that is at least virtual, and at most both virtual and physical (AR). They need to react in a way that makes placement intuitive and logical. That's why we developed interaction rules: some are based on physics 🧑‍🔬, and some are magical, to help the user do what they want 🧙.

Collision

A moved element may collide with another one or with the environment. This default behavior mirrors the physical world.

The PoC wasn’t mandatory for this feature, but we used it to define nice haptic feedback and sound to go with it.

Magnetism

It’s common to place an element against a wall for example, or next to another one. We started from the principle that if something was going very close to another one, the user surely want them to stick together. That’s why we developed a magnetism feature.

Thanks to the PoC made on HoloLens, we defined some behaviors for an element being moved.

We found the right distance threshold to make the element stick to another one when very close to it.

In the same spirit of helping the user adjust 2 elements together, we added resistance when trying to pull them apart. The user could then slide an element along another one without losing contact.
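Two different thresholds (snap and release) are one way to produce exactly that stick-then-resist feel; a hedged Python sketch with invented distances:

```python
SNAP_DISTANCE = 0.03     # meters: stick when closer than this (illustrative)
RELEASE_DISTANCE = 0.10  # meters: only detach after being pulled this far

def magnetized_gap(raw_gap: float, currently_stuck: bool) -> tuple[float, bool]:
    """Return the gap to display and the new 'stuck' state.

    The asymmetry (hysteresis) means elements snap together easily but
    resist being pulled apart, so the user can slide one along the other
    without losing contact.
    """
    if currently_stuck:
        if raw_gap > RELEASE_DISTANCE:
            return raw_gap, False  # pulled hard enough: detach
        return 0.0, True           # keep the surfaces in contact
    if raw_gap < SNAP_DISTANCE:
        return 0.0, True           # close enough: snap together
    return raw_gap, False
```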

The Magnetism PoC on Microsoft HoloLens (Opuscope — 2018)

As you can see in the video, the colliding face adapted in real time to the shape of the element it collided with. We also worked on smoothing the behavior to provide the creator with a nice experience.
You can also see the “Go through” behavior (see next part).

The first video shows the best-case scenario, but not all surfaces are clean, especially in the physical world with its spatial mapping. The PoC helped us find values to smooth the movement of an element colliding with an irregular surface.

The Magnetism PoC on Microsoft HoloLens with an irregular surface (Opuscope — 2018)

Go through

Since elements collided with each other, as in real life, we wanted the user to be able to do magical things: make elements go through others or through the environment, to help move them wherever they wanted. 🧙

It wasn’t easy to do! We iterated on different PoCs through the years. We wanted to find the most intuitive way to do it.

We started by making the element go through an obstacle when the user pushed it hard enough. It's the most intuitive way, but the right threshold was hard to find. A nice balance is required so the user can either rest something on top of something else or push it through (see the first video in "Magnetism").

Then we improved it to make it clearer that the element was stuck, and where it would be without the collision. The PoC helped us find a relevant style for the ghost of the moving element.
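Conceptually, the blocked element and its ghost come down to one decision per frame; a minimal sketch (the breakthrough depth is an invented stand-in for the threshold we tuned):

```python
BREAKTHROUGH_DEPTH = 0.15  # meters the user must push "into" the obstacle (illustrative)

def resolve_blocked_move(desired_pos, surface_pos, penetration_depth: float):
    """Return (element position, ghost position or None) during a collision.

    While blocked, the element rests on the obstacle's surface and a ghost
    previews where it would be without the collision; pushing deep enough
    past the surface lets it break through.
    """
    if penetration_depth > BREAKTHROUGH_DEPTH:
        return desired_pos, None     # broke through: follow the hand again
    return surface_pos, desired_pos  # stuck: show the ghost at the target
```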

Position’s sign during a collision on Oculus Quest (Opuscope — 2021)

With the Quest’s controller, we were able to offer another feature: disabling the collision during a move by holding a button. This wasn’t possible on the HoloLens.

Semi-occlusion

This idea came from the fact that repositioning an experience wasn't always perfect on AR devices (HoloLens, Magic Leap, and iPhone). When we loaded an experience, some parts could end up behind a wall without us knowing it.

That's why we decided to show, in creation mode, elements hidden behind a part of the physical environment. There were a lot of technical constraints, and the PoC helped us iterate mainly on the style.

We applied this feature to virtual environments in VR as well.

The indicator of a cube behind a virtual wall on the Oculus Quest (Opuscope — 2021)

Being able to see the element is nice: the user knew their asset was behind a wall, but how could they access it to get it back? Go to the next room? That isn't always possible, nor really convenient… To push the magic further, the user could also interact with the element through the wall (physical or virtual).
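Deciding when to show the see-through style reduces to an occlusion test between the camera and the element. A sketch, where `raycast_first_hit` is a hypothetical stand-in for the engine's raycast against both the physical spatial mapping and virtual geometry:

```python
def occlusion_state(camera_pos, element_pos, raycast_first_hit) -> str:
    """Pick a render style for an element that may be hidden by a wall.

    `raycast_first_hit(origin, target)` returns the first surface hit
    between the two points, or None. An occluded element is drawn as a
    stylized silhouette instead of being hidden, and stays interactive
    through the wall.
    """
    hit = raycast_first_hit(camera_pos, element_pos)
    return "normal" if hit is None else "silhouette"
```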

Handle move and collision in gaze placement

Remember, when an asset was created or imported, it was displayed in gaze placement at first. There could be other elements in the scene that the created asset might collide with.

We added a smooth behavior letting the created element both collide with other elements and pass through them.

To prevent the user from being stuck, we also allowed them to move other elements at the same time, and applied the same behavior to those moved elements too.

The PoC was very helpful in defining a behavior that feels natural and matches users' unconscious expectations.

Different behaviors in gaze placement on Oculus Quest’s PoC (Opuscope — 2021)


Alexia Buclet

French UX Designer & Cognitive Psychologist since 2010, I have worked at Ubisoft, Adobe, Aldebaran Robotics, and Opuscope (AR/VR). Currently freelance in impact tech!