Adding support for VR inputs with WebXR and Three.JS

Alexis Menard
10 min read · Mar 1, 2019


This article is part of a series about creating responsive VR experiences:

In part two of the series, we learned how to render immersive experiences regardless of the type of device the user has. An important step toward a more complete experience is giving the user the ability to interact with the content. In this article we’re going to cover the basics of providing an interactive experience with WebXR.

Throughout this article we’re going to refer to a simple demo located here. It uses WebXR and Three.JS and is the basis of the code snippets in this article. The full source is located here.

A FEW WARNING WORDS

The WebXR API is still being refined (the First Public Working Draft has just been published), so I will try my best to update this series to reflect the changes and keep these articles evergreen. New features will come to the WebXR Device API specification, and I will update this article accordingly if they simplify the code.

ADD INPUT SUPPORT IN AN IMMERSIVE-VR SESSION

Before we get into the details of how to support input functionality in our experience, let’s go through the types of input a user typically uses to interact with the content of an immersive experience. In most cases, the user wants to aim at something and select it.

Pointer-based input

Users rely on this type of input when their VR system has a controller, whatever number of degrees of freedom (DoF) it supports. In VR you typically render a representation of the controller in your scene, so the user can see where it is and where it is pointing. To convey the pointing direction, you usually draw a laser coming straight out of the controller representation, and possibly some kind of cursor on an object the user can interact with, helping them understand that something will happen if they select that object.

Pointer-based input (credit Google)

Gaze-based input

Typically this system is used when the device doesn’t have a dedicated controller (for example, Cardboard). The user looks at something to aim, and the selection is based on the head pose rather than a controller pose. You typically want to draw a cursor to help the user see where they are aiming. Drawing a laser is not recommended here, because it would come right out of the user’s head, which can cause discomfort. If the device has a button, it can be used as the selection mechanism (e.g. Cardboard); otherwise, you can create a reticle with some kind of loading animation (e.g. a radial progress) to give the user feedback that keeping their gaze on the object will trigger a selection.

Gaze-based cursor (credit Jonathan Ravasz)

There are various ways to show that an object can be selected when aiming at it. For example, you can change some properties of the object (such as color or size), or change the cursor’s appearance to highlight that something could happen. In the end, it’s really up to you to decide what works best for your experience.

Touch-based input

I’m not going to expand much here because it isn’t really specific to immersive sessions, but it’s important to mention. Touch-based interactions are typically used for inline or immersive-ar experiences on a phone or tablet. They are a bit simpler to handle since you don’t need any visual representation: the user touches the screen and you determine which object in your 3D scene was selected.

Fortunately, the WebXR Device API helps you support these various interactions. Your input state needs to be updated whenever WebXR provides new frame data, right before you render your scene. Typically, the _render function, just before you render the views, is a great place to update your input state. It’s going to look like this:
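Here is a rough sketch of that render loop. The structure (names like this.session, this.frameOfRef and _renderEye) follows how I organize the demo, and getDevicePose reflects the draft API at the time of writing, so treat this as illustrative rather than definitive:

    _render(timestamp, xrFrame) {
      // Keep the loop going by asking for the next XR frame right away.
      this.session.requestAnimationFrame(this._render.bind(this));

      // In the current draft, the head pose is queried from the frame.
      const pose = xrFrame.getDevicePose(this.frameOfRef);
      if (!pose) return;

      // Update the input state (controllers, gaze, cursor) before drawing anything.
      this._updateInput(xrFrame);

      // Then render each view (one per eye in immersive-vr).
      for (const view of xrFrame.views) {
        this._renderEye(pose, view);
      }
    }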

Now let’s focus on the _updateInput method.

The WebXR Device API lets you iterate over the input sources this way:
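In the draft of the API at the time of writing, the session exposes them through getInputSources() (this may change in later drafts):

    // Returns every input source currently connected to the XR session.
    const inputSources = this.session.getInputSources();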

You’ll get a list of input sources because some VR systems have more than one. You can iterate over them and request their pose data:
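A minimal sketch, assuming the getInputPose method of the current draft and the this.frameOfRef frame of reference from my demo:

    for (const inputSource of inputSources) {
      // Ask the current frame for this input source's pose, expressed in our
      // frame of reference. It can be null if the controller isn't tracked yet.
      const inputPose = xrFrame.getInputPose(inputSource, this.frameOfRef);
      if (!inputPose) continue;

      // ...update the controller model, laser and cursor from inputPose here...
    }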

You have to iterate over the input sources on every frame and update your scene accordingly, not just because a controller may have changed position, but also because, for example, one of the two controllers may have run out of battery and you want to handle that gracefully.

If the input pose has the gripMatrix property set, then the user is using a pointer-based device, so you should draw a virtual representation of the controller inside the experience. If you have a 3D model of the controller, an easy way to do that with Three.JS is:
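For example, assuming you have a glTF model of the controller and the GLTFLoader from the Three.JS examples (the file name here is just a placeholder):

    const loader = new THREE.GLTFLoader();
    loader.load('controller-model.gltf', (gltf) => {
      this.controllerMesh = gltf.scene;
      // We will drive the matrix directly from WebXR data on every frame,
      // so Three.JS must not recompute it from position/rotation/scale.
      this.controllerMesh.matrixAutoUpdate = false;
      this.scene.add(this.controllerMesh);
    });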

Then you can just copy the gripMatrix into the matrix property of your controller’s Object3D:
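Something along these lines (gripMatrix is a column-major Float32Array, which fromArray understands):

    if (inputPose.gripMatrix && this.controllerMesh) {
      // Place the controller model exactly where the physical controller is.
      this.controllerMesh.matrix.fromArray(inputPose.gripMatrix);
      this.controllerMesh.updateMatrixWorld(true);
    }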

The next step is to check the targetRay property of the input pose, which will help you draw a laser pointer, if appropriate, and place a cursor in your scene if you desire.

inputSource.targetRayMode will tell you whether the user is using a tracked controller or a gaze-based setup. Typically, if it’s set to tracked-pointer, you can draw a laser, as in this very simple example:
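A sketch of what that can look like: the laser is just a line going down the -Z axis of the target ray space, placed in the world with the ray’s transformMatrix (the length used here is a placeholder, the real one comes from the hit test below):

    if (inputSource.targetRayMode === 'tracked-pointer') {
      const laserLength = 10; // placeholder; recomputed from the hit test below

      // A line from the ray origin going "forward" (-Z) in the ray's local space.
      const geometry = new THREE.BufferGeometry().setFromPoints([
        new THREE.Vector3(0, 0, 0),
        new THREE.Vector3(0, 0, -laserLength),
      ]);
      const material = new THREE.LineBasicMaterial({ color: 0x00ff00 });

      this.laser = new THREE.Line(geometry, material);
      // The transformMatrix of the target ray places the laser in the world.
      this.laser.matrixAutoUpdate = false;
      this.laser.matrix.fromArray(inputPose.targetRay.transformMatrix);
      this.scene.add(this.laser);
    }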

You probably want to store that laser object to avoid re-creating it every frame, and just update its transform with the new transformMatrix value. The computation of the laser’s length is covered below.

On a gaze-based experience (or even with a tracked pointer), it is desirable to show a cursor helping the user to see where they are aiming. Let’s look at how you can draw a simple cursor using the WebXR Device API:
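A small flat circle is enough for a basic cursor (a ring or a sprite works too); this is only a sketch of how I create it once and reuse it:

    // Create the cursor once and keep reusing it; _updateInput only moves it.
    const cursorGeometry = new THREE.CircleGeometry(0.05, 32);
    const cursorMaterial = new THREE.MeshBasicMaterial({ color: 0xffffff });
    this.cursor = new THREE.Mesh(cursorGeometry, cursorMaterial);
    this.cursor.visible = false; // only shown when the ray hits something
    this.scene.add(this.cursor);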

This code creates the cursor, but you still need to place it in the scene at the right position, typically where the laser intersects an object (even if the laser itself isn’t visible, as with Cardboard). This gives the user a visual clue about where they are aiming. To place the cursor, you need to hit test by sending a ray through the scene and seeing where it intersects. Three.JS has a Raycaster class to help you with that, although some setup is required. The WebXR Device API gives you a targetRay object as part of the input pose, which you can use to set up that ray:
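Assuming the XRRay shape of the current draft (an origin point and a direction vector), the setup looks roughly like this:

    const targetRay = inputPose.targetRay;

    const origin = new THREE.Vector3(
      targetRay.origin.x, targetRay.origin.y, targetRay.origin.z);
    const direction = new THREE.Vector3(
      targetRay.direction.x, targetRay.direction.y, targetRay.direction.z).normalize();

    // Reuse a single Raycaster instead of building a new one every frame.
    this.raycaster = this.raycaster || new THREE.Raycaster();
    this.raycaster.set(origin, direction);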

Then you can ask Three.JS to compute the intersections and iterate over the result:
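For example (in a real app you would raycast against a list of interactive objects rather than the whole scene, otherwise the laser and cursor themselves can show up in the results):

    // Results are sorted by distance, so the first entry is the closest hit.
    const intersections = this.raycaster.intersectObjects(this.scene.children, true);
    const intersection = intersections.length > 0 ? intersections[0] : null;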

Now you can place the cursor at the right place using some information provided by Three.JS and compute the laser length:
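A sketch of that placement, continuing from the intersection found above:

    // Default laser length when the ray doesn't hit anything.
    let laserLength = 10;

    if (intersection) {
      // Put the cursor where the ray hits the object.
      this.cursor.visible = true;
      this.cursor.position.copy(intersection.point);
      // Stop the laser at the hit point instead of letting it go through the object.
      laserLength = intersection.distance;
    } else {
      this.cursor.visible = false;
    }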

One thing that I personally like is when the cursor has the same rotation as the object it is covering (see the gaze animation above with Cardboard). This is easy to achieve:
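For a simple object, copying its rotation is enough (a sketch):

    if (intersection) {
      // Align the cursor with the object it covers so it appears to lie on it.
      this.cursor.rotation.copy(intersection.object.rotation);
    }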

Please note that matching the rotation can be a bit more complex for a more advanced mesh, where you may want to work with the intersected Face instead.

ADDING LOCOMOTION (TELEPORTATION) IN IMMERSIVE-VR

In most cases, the user is bounded either by their VR system (with 6DoF, the play area) or because the system doesn’t track movement in space at all (3DoF). However, you may want to let the user move inside the virtual world, so they can visit different rooms, for example. In VR this is usually achieved with teleportation: the user aims at a position on the floor or in the scene, presses a button on the controller, and is transported to that new position.

To handle teleportation, we must go back to the _updateInput method and make some modifications. First we need to check whether the object intersected by the ray cast is the floor, so we can draw a different visual cue to inform the user that they can teleport:
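Assuming the floor is a mesh stored in this.floor (a name from my demo), the check can be as simple as:

    // Is the user aiming at the floor?
    const pointingAtFloor = intersection && intersection.object === this.floor;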

Here is a simple example to draw the teleporter:
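A sketch, where the teleporter is just a larger ring laid flat on the floor (in real code you would cache both geometries instead of recreating them every frame):

    if (pointingAtFloor) {
      // Swap the cursor geometry for a ring that reads as a "landing spot".
      this.cursor.geometry = new THREE.RingGeometry(0.25, 0.3, 32);
      this.cursor.rotation.set(-Math.PI / 2, 0, 0); // lay it flat on the floor
    } else {
      // Back to the regular small circle cursor.
      this.cursor.geometry = new THREE.CircleGeometry(0.05, 32);
    }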

In this code, I’m reusing the cursor object and assigning it a new geometry.

We now have the teleporter showing whenever the user points at the floor. Next, we need to handle the click and actually teleport the user to the new position. WebXR helps you handle this with the select event:
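Registering for it is a one-liner (the _onSelect name is just how I call the handler in this sketch):

    // 'select' fires on the primary action: trigger press, Cardboard button,
    // screen tap, and so on.
    this.session.addEventListener('select', this._onSelect.bind(this));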

Then we need to write the event handler:
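The event carries the frame and the input source that triggered it, so the handler can stay small; this is a sketch based on the draft API:

    _onSelect(event) {
      // Get the pose of the input source that fired the event, at the time it fired.
      const inputPose = event.frame.getInputPose(event.inputSource, this.frameOfRef);
      if (!inputPose || !inputPose.targetRay) return;

      this._adjustMatrixWithTeleportation(inputPose);
    }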

_adjustMatrixWithTeleportation needs to extract the position from WebXR matrices and add the offset as follows:
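Here is one possible sketch of that helper. It assumes this._teleportOffset is a THREE.Vector3 holding the offset accumulated from previous teleports and this.floor is the floor mesh; the actual code in the demo may differ in the details:

    _adjustMatrixWithTeleportation(inputPose) {
      const targetRay = inputPose.targetRay;

      // Extract the pointer position and direction from the WebXR data.
      const origin = new THREE.Vector3(
        targetRay.origin.x, targetRay.origin.y, targetRay.origin.z);
      const direction = new THREE.Vector3(
        targetRay.direction.x, targetRay.direction.y, targetRay.direction.z).normalize();

      // The pose is expressed in the original tracking space, so add the offset
      // from previous teleports to get the ray into world space.
      origin.add(this._teleportOffset);

      // Intersect with the floor; if we hit it, that's where the user goes.
      const raycaster = new THREE.Raycaster(origin, direction);
      const hits = raycaster.intersectObject(this.floor);
      if (hits.length > 0) {
        const destination = hits[0].point;
        // Store the horizontal offset for the render loop. (A more careful version
        // would subtract the user's current position in tracking space so they land
        // exactly on the spot they aimed at.)
        this._teleportOffset.x = destination.x;
        this._teleportOffset.z = destination.z;
      }
    }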

Finally, we use the Three.JS raycaster to intersect with the floor; if it hits, we teleport the user and store the offset for later use inside the render loop.

Now we need to adapt the render loop to take the offset into account whenever we render the scene. For each eye, we need to adjust the matrices from WebXR by adding the offset we set in the select handler. Let’s add that code in the _renderEye method, right before we render:
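A sketch of _renderEye with that adjustment; getViewMatrix reflects the draft API at the time of writing, and the camera setup is simplified:

    _renderEye(pose, view) {
      // Shift the view matrix coming from WebXR by the accumulated teleport offset.
      const viewMatrix = this._translateViewMatrix(
        pose.getViewMatrix(view), this._teleportOffset);

      this.camera.matrixAutoUpdate = false;
      this.camera.matrixWorldInverse.copy(viewMatrix);
      this.camera.matrixWorld.getInverse(this.camera.matrixWorldInverse);
      this.camera.projectionMatrix.fromArray(view.projectionMatrix);

      // (Set the WebGL viewport for this eye here, then render as usual.)
      this.renderer.render(this.scene, this.camera);
    }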

_translateViewMatrix is a helper function that adjusts the matrices coming from WebXR while preserving the information they originally carried.
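A minimal sketch of such a helper: moving the viewer by +offset in world space is the same as translating the world by -offset before applying the original view matrix, so the original data is preserved:

    _translateViewMatrix(viewMatrixArray, offset) {
      // Start from the matrix WebXR gave us so we keep its original content.
      const view = new THREE.Matrix4().fromArray(viewMatrixArray);
      const translation = new THREE.Matrix4().makeTranslation(
        -offset.x, -offset.y, -offset.z);
      // view' = view * T(-offset)
      return view.multiply(translation);
    }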

With the updated matrices, we can continue to render the scene as usual. Please note that this code may disappear in the future, once support for the originOffset property of the reference space lands in browsers. (Thanks to Brandon Jones, Nell Waliczek, and Alex Turner for listening to my ramblings and creating the feature to help with simplicity.)

IMPROVING THE INLINE/MAGIC WINDOW EXPERIENCE

The WebXR Device API does a great job of providing inline experiences by leveraging the device’s sensors. This is an ideal way to give users a preview, regardless of their device, OS, or whether they own a VR head mounted display in the first place. While WebXR provides a solid 3DoF experience here, you may want to add more ways for the user to move inside the 3D world. You can add touch interaction inside the scene, or something a bit more fun like this virtual joystick I’ve added in my demo:

Users can use their thumb to navigate the scene which makes it a little more fun to use. You could apply this to experiences like a virtual tour of a museum, or exploring your hotel room or rental.

Here is the HTML + CSS code:
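Something along these lines (ids, sizes and colors are placeholders for this sketch):

    <!-- A simple round pad fixed in the bottom corner of the page -->
    <div id="joystick">
      <div id="joystick-knob"></div>
    </div>

    <style>
      #joystick {
        position: fixed;
        bottom: 20px;
        right: 20px;
        width: 120px;
        height: 120px;
        border-radius: 50%;
        background: rgba(255, 255, 255, 0.2);
        touch-action: none; /* we handle the pointer/touch events ourselves */
      }
      #joystick-knob {
        position: absolute;
        left: 40px;
        top: 40px;
        width: 40px;
        height: 40px;
        border-radius: 50%;
        background: rgba(255, 255, 255, 0.6);
      }
    </style>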

Then you can listen for pointer events or touch events to handle the moves:
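A sketch using pointer events (which cover both mouse and touch); it turns the thumb position into a delta in the range [-1, 1] on each axis:

    const joystick = document.getElementById('joystick');
    const joystickDelta = { x: 0, y: 0 };

    joystick.addEventListener('pointerdown', (event) => {
      // Keep receiving move events even if the finger slides off the pad.
      joystick.setPointerCapture(event.pointerId);
    });

    joystick.addEventListener('pointermove', (event) => {
      if (!joystick.hasPointerCapture(event.pointerId)) return;
      const rect = joystick.getBoundingClientRect();
      // Normalize the thumb position relative to the center of the pad.
      joystickDelta.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
      joystickDelta.y = ((event.clientY - rect.top) / rect.height) * 2 - 1;
    });

    joystick.addEventListener('pointerup', (event) => {
      joystick.releasePointerCapture(event.pointerId);
      joystickDelta.x = 0;
      joystickDelta.y = 0;
    });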

Typically, you want to store the user’s position in a member variable and update it as the user uses the navigation pad. However, it’s not as simple as updating the position based only on the touchpad movement, because you also have to take the user’s rotation into account. Pushing forward on the touchpad doesn’t simply map to increasing x/y coordinates, because the user may be looking backward. To calculate the new position, you need to get the rotation information from WebXR.

Before you render the view, run this code with the viewMatrix coming from WebXR:
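A sketch of that step. It assumes viewMatrix is the Float32Array WebXR provides for the current view, joystickDelta comes from the pad above, and this._userPosition is the member variable holding the user’s position:

    const view = new THREE.Matrix4().fromArray(viewMatrix);

    // The inverse of the view matrix is the viewer's pose in world space
    // (use .copy(view).invert() on newer Three.JS releases).
    const pose = new THREE.Matrix4().getInverse(view);
    const orientation = new THREE.Quaternion().setFromRotationMatrix(pose);

    // x is left/right on the pad, y is forward/backward; -z is "forward" in
    // Three.JS, so pushing up on the pad (negative y) moves the user forward.
    const move = new THREE.Vector3(joystickDelta.x, 0, joystickDelta.y);
    move.applyQuaternion(orientation);
    move.y = 0;                 // keep the user on the ground plane
    move.multiplyScalar(0.05);  // arbitrary speed factor

    this._userPosition.add(move);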

Then you can render your scene with Three.JS.

Note on this particular use case:

It seems a bit cumbersome to handle the offset and make sure that it plays nicely with WebXR. I reported this issue to the WG and we discussed it here. A proposal to update the spec was made here and it has landed. The code above will become simpler, and I’ll make sure to update this article when I get a chance to play with the new feature.

BROWSER SUPPORT OF WEBXR

  • Google* Chrome currently has WebXR in an Origin Trial, which means that developers can experiment with it on their own domains. You can request to join the trial here.
  • Microsoft* Edge with EdgeHTML as its engine supports WebVR (the previous iteration of WebXR), and thanks to the WebXR polyfill, WebXR content can be rendered on top of WebVR. Future Edge versions based on Chromium* will likely enable WebXR.
  • Samsung* Internet Browser and Oculus* Browser support WebVR as well, and both have active representatives in the Immersive Web WG. We can expect them to enable WebXR when they rebase their browsers on top of a newer Chromium.
  • Mozilla* Firefox currently ships WebVR support and is actively working inside the Working Group as well as on a WebXR implementation. The meta bug is located here.
  • Apple* Safari does not currently support WebXR, and Apple has not publicly commented on potential support. However, Igalia* has contributed WebVR support to WebKit and intends to move the implementation to WebXR.

CONCLUSION

Creating responsive immersive experiences is a little more work than creating an experience that works on a single type of VR system. However, the benefit of reaching more users is worth the extra effort. You want to avoid situations where the user is frustrated because the experience doesn’t fully utilize their system, and you can’t assume they have a high-end VR setup. Providing a minimal keyboard/mouse/inline experience is a great way to engage users and may convince them to buy a VR system to “upgrade” their experience.

Entering VR is still a high-friction step from a user’s standpoint, so it’s important to make sure your experience performs well on their system, regardless of what they use, and to show them how their HMD enhances the experience. Teasing them with an inline experience is a great step in that direction.

If you have suggestions, please let me know! Keeping these articles updated benefits the web developer community as we go through the evolution of providing our users with a robust immersive experience.

RESOURCES

ADDITIONAL READING

This article is part of a series about creating responsive VR experiences:

About the author

Alexis Menard is a software engineer at Intel. His main focus is on the ever evolving Web Platform and the immersive web, which includes work on W3C standards as well as on Blink/Chromium.


Alexis Menard

I work on the Immersive Web at Intel. I also work on Blink and Chromium. I previously created and worked on https://crosswalk-project.org.