Making a Business Oriented VR Interface

Fear and Loathing in Unity’s Architecture

Paul Harwood
Apr 14, 2020 · 12 min read

You would think that building a relatively simple VR Interface in Unity would be an “out of the box” sort of job.

Sadly, with a confused and confusing architecture, you are going to have to do much more work than you expected.

Background

Although I have no games development background, I recently started on a project to develop a business-oriented and data-oriented Virtual Reality application in Unity. More details about the project should be coming soon.

As an experienced architect and developer coming new to the VR and Unity world, I want to shed some light on one way to create a solution and hopefully help someone else avoid the traps I have explored.

One note. I have noticed that many articles about Unity tend to start with the obvious about how to install it and then, at the critical point, just say something equivalent to “then reverse polarity in the transduction mode of the matter curve” — without any explanation of what this means or, indeed, what to do with the matter curve if you don’t want the polarity reversed! I intend to assume the obvious and discourse more on the transduction mode. I hope I am successful.

There may be better solutions than the one we came up with. I am far from an expert and I would love to hear about them; but the architecture is convoluted and the documentation is pretty much “ineffable”, so this path is the best that I could find.

At the end, I give my views on what I wish this experience had been like and how the architecture should be managed.

Objectives

First, let’s lay down some objectives. We want to be able to explore a data space in VR and, in that space, to use menus and interact with data entities using a typical ‘laser pointer’ tool. That is all.

We also want this to have a professional feel, so we don’t need any attempts at avatars or even hands (controller avatars are good enough), and we want good functionality in the menus, which should look like menus.

We also want this to be maintainable, so we want to avoid technical debt, and of course we want to avoid reinventing everything, to reduce the support load.

Let’s also say, for the time being, that we want to work on Oculus only, without closing any doors to the future.

Starting Out

At first, everything seemed good.

Core Unity provides a GameObject structure that can quite easily represent a spatial entity model (I hope to talk more about this in another article) along with the ability to create an application core using C# and C++.

Looking at the Interface and Interaction model, you would think:

  • Unity has rolled out its new Scriptable Render Pipeline technology (URP and HDRP, for normal quality and high definition respectively) and has hinted that the legacy pipeline will eventually be sunset. It is a no-brainer to use URP to avoid technical debt.
  • Unity has rolled out a new unified Input System and has stated that the old Input Manager will be deprecated. So in that goes.
  • Oculus has a good integration package for Unity, including all of the required prefabs. Oculus has more incentive than anyone else to keep the integration supported and up to date, and they should have more access to, and knowledge of, the low-level configurations. It seems to make sense to use this, particularly for the “XR Rig”, which is the representation of the user in VR space (i.e. the eyes and hands in their relative positions).
  • For the interactions, Unity has an XR Interaction Toolkit that is supposed to provide everything you need to interact with entities in VR, including a laser pointer. This has also just been released, so less technical debt.
  • Finally, Unity has a new UI system providing a good set of UI entities for creating menus both in screen space and in world space.

All good, yes?

Ah, but …!

It turns out that, basically, you cannot!

The first part was to migrate to URP (no problems) and to convert the project from the deprecated “Built-in XR” to the XR Plugin system. It is a little surprising that the former, although deprecated, still seems to be the default. Configuring the Oculus plugin involved some fiddling with versions, but presumably the version problems will be fixed in the next releases.

But :

  • The Oculus Utilities for Unity do indeed provide all that is needed to build a good OVR (i.e. Oculus VR) Rig and plugins for Unity, and the documentation is quite good. However, OUU is not URP-ready and only provides integration with the legacy Input Manager, not with the new Input System.
  • The Input System itself, although it does claim to support VR devices, only provides the most basic support. We could only actually get it working through the legacy VR support emulating the mouse!
  • The XR Interaction Toolkit only works with its own XR Rig and its own XR controller prefabs, which are nowhere near as functional as the OUU Rig, and there seems to be no easy way to wrap the OUU devices with entities from the XRIT. It does provide Interactors, like the laser pointer that we require, but it seems from the discussions that the focus is on grabbing items, which makes sense for games but is not what we want in this environment. It is also true that the XRIT is not URP-ready, and the pointer is based on the Line Renderer, which means that the ray has no 3D “body”; I personally find this, and the rendering of the line when it should be occluded, disconcerting. The pointer also lerps very, VERY slowly — for no apparent reason! It is pretty much useless.
  • The UI system does provide the tools required and DOES integrate with the Input System (as well as the Input Manager) and with the XRIT. However, since neither of those is integrated with OUU, that is of limited use. The documentation, not unreasonably, focuses on how to build UIs and not on integrating into VR, and although you get the feeling that building one’s own integration is possible using the scripting API, there are no hints about where to start.

In general, I have to say that the Unity components in particular are architected from inside their own box! It is a real shame that there is no guidance at all about how to add your own InputModule to the UI system or your own Interactors to the XRIT!

At this point, we also started to look for other frameworks to create VR interfaces. The Virtual Reality Toolkit (VRTK) seemed a good place to look, especially since it is open source. There are other frameworks available for purchase, but we are wary of code lock-in with closed source. However, VRTK has a big legacy problem, well described in this article, so we really want to start with V4.

This version is very much in beta! But it has a very nice pointer Interactor and a well-developed Input Mapping and Event system for VR controllers.

It does, at least, come out of the box with documented support for OUU.

However, it provides no interaction with the UI system! Or, less surprisingly, with the XRIT!

My architect's heart is saying that VRTK should be writing their Interactors to be compatible with XRIT (and Unity should be sponsoring them to do so). That way they would get UI Integration for free as well.

Squaring the Circle

So. How did we solve this particular puzzle and meet our objectives?

1 We started with the OVR XR Rig and Plugin from OUU, as the most functional. This gives you working head and controller tracking and controller avatars. You do have to go through and change the shaders and materials used in the prefabs to URP ones; we found that the URP Simple Lit shader works well with the default material.
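
For illustration, that kind of conversion can also be scripted. The minimal editor sketch below is an assumption about workflow rather than the exact process we followed: it simply switches every material under the selected GameObject to the URP Simple Lit shader (the shader name assumes the URP package is installed).

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// Sketch only: switch every material on the selected objects (e.g. the OVR rig
// prefab instance) to the URP Simple Lit shader. Run from the menu item while
// the objects are selected in the Hierarchy.
public static class ConvertSelectionToUrpSimpleLit
{
    [MenuItem("Tools/Convert Selection To URP Simple Lit")]
    public static void Convert()
    {
        Shader simpleLit = Shader.Find("Universal Render Pipeline/Simple Lit");
        if (simpleLit == null)
        {
            Debug.LogError("URP Simple Lit shader not found. Is the URP package installed?");
            return;
        }

        foreach (GameObject go in Selection.gameObjects)
        {
            foreach (Renderer renderer in go.GetComponentsInChildren<Renderer>(true))
            {
                foreach (Material material in renderer.sharedMaterials)
                {
                    if (material != null && material.shader != simpleLit)
                    {
                        Undo.RecordObject(material, "Convert to URP Simple Lit");
                        material.shader = simpleLit;
                    }
                }
            }
        }
    }
}
#endif
```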

2 We then added the VRTK Input Mapping by adding the UnityXR.Oculus.Left{Right}Controller GameObjects, as described in this tutorial. These take the input bindings added to the Input Manager by the OVR Plugin and expose them as UnityEvents in a well-laid-out and type-driven way. Not essential, but incredibly useful. In the end, we did NOT use the VRTK Tracked Alias Rig — it gives you nothing that the OVR Rig does not do better, and it just adds runtime cost.


The only problem with the Input Mapping is that, for some reason, the controller GameObjects are not complete: they are missing all of the axis controls and the grip control. It is a bit of work, but not complicated, to add those (you can download the prefabs we created, provided with no warranty of any kind, for information only).
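
To illustrate the pattern, the missing controls amount to polling the legacy Input Manager axes registered by the OVR plugin and re-exposing them as UnityEvents. The sketch below is a generic stand-in: the axis name is an example only, and the actual VRTK prefabs use their own axis-action components rather than a script like this.

```csharp
using System;
using UnityEngine;
using UnityEngine.Events;

// Sketch of exposing a legacy Input Manager axis (e.g. a grip or trigger axis
// registered by the OVR plugin) as a UnityEvent, in the same spirit as the
// VRTK controller mappings. The axis name here is an example only.
public class AxisToUnityEvent : MonoBehaviour
{
    [Serializable] public class FloatEvent : UnityEvent<float> { }

    [SerializeField] private string axisName = "Oculus_CrossPlatform_PrimaryHandTrigger";
    public FloatEvent ValueChanged = new FloatEvent();

    private float lastValue;

    private void Update()
    {
        float value = Input.GetAxis(axisName);
        if (!Mathf.Approximately(value, lastValue))
        {
            lastValue = value;
            ValueChanged.Invoke(value);   // hook business logic up in the Inspector
        }
    }
}
```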

3 Both VRTK and XRIT provide locomotion systems — mostly teleport and jump turn. We are building a room-scale VR interface, so jump turns are counterproductive, and whilst teleportation might have its uses in gameplay, it is pointless for exploring data. We built a simple set of utilities to move the model we are looking at and to move ourselves around the model (using the left and right joysticks respectively), and it was very quick to integrate those scripts into the events from VRTK (10 minutes). We did the same thing with the Input System for non-VR use and it works pretty much the same way. The only pain was the need to create two complete sets of event callbacks, since the types are different! (For what it is worth, with no warranty and for information only, you can see our script here.)
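
A much-simplified sketch of that kind of locomotion script (hypothetical names, wired in the Inspector to the thumbstick Vector2 events) is:

```csharp
using UnityEngine;

// Simplified sketch of joystick locomotion: one stick translates the data
// model, the other moves the rig around it. The public methods are wired to
// the Vector2 UnityEvents exposed by the VRTK controller mappings (and, in a
// second set of callbacks, to the Input System for non-VR use).
public class SimpleLocomotion : MonoBehaviour
{
    [SerializeField] private Transform model;   // the data model being explored
    [SerializeField] private Transform rig;     // the OVR rig root
    [SerializeField] private float speed = 1f;

    private Vector2 modelAxis;
    private Vector2 rigAxis;

    // Wire these to the left and right thumbstick Vector2 events respectively.
    public void SetModelAxis(Vector2 axis) { modelAxis = axis; }
    public void SetRigAxis(Vector2 axis)   { rigAxis = axis; }

    private void Update()
    {
        model.Translate(new Vector3(modelAxis.x, 0f, modelAxis.y) * speed * Time.deltaTime, Space.World);
        rig.Translate(new Vector3(rigAxis.x, 0f, rigAxis.y) * speed * Time.deltaTime, Space.Self);
    }
}
```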

4 VRTK has a very nice laser pointer Interactor. It is mesh-based, so it has a real 3D feeling, and it comes as just the beam, so you can easily add it to the OVR controller and it looks OK. It works out of the box and, like the VRTK Input Mapping, provides simple-to-use UnityEvents for when a collider enters the raycast and when it leaves — which makes it easy to create your own callbacks for sending messages to entities based on the buttons. So that is what we did. The Interactors (from all of the frameworks) come with their own state model of activated, selected, grabbed, touched, etc., which I suppose is great for a simple game but is useless if you just want to send a message to the entity to do something, and usually want to send more than one type of “touched” message. We ended up writing all of that ourselves.
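
In outline, such a message layer is just a dispatcher that remembers what the ray is currently hitting and forwards button presses to it. The sketch below is hypothetical (the IPointerTarget interface and method names are for illustration, and a small adapter may be needed to match the pointer event's actual payload type):

```csharp
using UnityEngine;

// What an entity implements to receive our pointer messages (hypothetical).
public interface IPointerTarget
{
    void OnRayEnter();
    void OnRayExit();
    void OnRaySelect();
}

// Remembers the object currently under the ray and forwards button presses to
// it. Wire SetTarget/ClearTarget to the pointer's entered/exited events and
// Select to a trigger event from the controller mapping.
public class PointerMessageDispatcher : MonoBehaviour
{
    private IPointerTarget current;

    public void SetTarget(GameObject hit)
    {
        ClearTarget();
        current = hit != null ? hit.GetComponentInParent<IPointerTarget>() : null;
        current?.OnRayEnter();
    }

    public void ClearTarget()
    {
        current?.OnRayExit();
        current = null;
    }

    public void Select()
    {
        current?.OnRaySelect();
    }
}
```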

[Image: satellite imagery © Mapbox]

So we have a stereoscopic XR Rig integrated with the Oculus devices, with controller avatars, a working laser pointer, the ability to move around, and the ability to point at and manipulate entities with the pointer. Mostly out of the box, with some business logic written in C# and integrated using UnityEvent callbacks.

Menu of Despair

Finally, we come to the menu part of the UI. And this is where the technical gap starts appearing between the systems.

The Unity UI system does provide everything we need and it was relatively simple to create a world space canvas and add that, suitably scaled, to the left-hand anchor so that it appeared to be part of the left-hand controller.

Using the UnityEvents from the VRTK Input Mapping, it was also a few minutes’ work to make the menu appear only when controller button 3 is pressed.
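
In outline (hypothetical names; the menu is a world-space Canvas parented, suitably scaled, under the left-hand anchor):

```csharp
using UnityEngine;

// Sketch of the menu toggle: the world-space menu canvas lives under the
// left-hand anchor and is simply activated and deactivated. Toggle is wired to
// the UnityEvent raised when controller button 3 is pressed.
public class HandMenuToggle : MonoBehaviour
{
    [SerializeField] private GameObject menuCanvas;   // world-space Canvas, child of the left-hand anchor

    public void Toggle()
    {
        menuCanvas.SetActive(!menuCanvas.activeSelf);
    }
}
```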

But, you need to be able to press the menu buttons!

After trying many things, we settled back on XRIT as the only good way of integrating with the UI — but with a twist.

Setting up the XRIT for use with the UI is well documented. You must have an XR Interaction Manager and a UI EventSystem in your scene, the EventSystem must use the XRUI Input Module, and your UI canvas must have an additional raycaster. Nothing complicated, however.
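
Expressed as code purely to make the required pieces explicit (you would normally add these components in the editor; the class names assume the XRIT package exposes XRInteractionManager, XRUIInputModule and TrackedDeviceGraphicRaycaster as it did at the time), the setup amounts to:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.Interaction.Toolkit.UI;

// Sketch of the scene requirements for XRIT UI interaction, written as a
// runtime bootstrap only to list the pieces; normally this is editor setup.
public class XritUiBootstrap : MonoBehaviour
{
    [SerializeField] private Canvas worldSpaceMenuCanvas;

    private void Awake()
    {
        // An XR Interaction Manager somewhere in the scene.
        if (FindObjectOfType<XRInteractionManager>() == null)
            new GameObject("XR Interaction Manager").AddComponent<XRInteractionManager>();

        // A UI EventSystem using the XRUI Input Module.
        EventSystem eventSystem = FindObjectOfType<EventSystem>();
        if (eventSystem == null)
            eventSystem = new GameObject("EventSystem").AddComponent<EventSystem>();
        if (eventSystem.GetComponent<XRUIInputModule>() == null)
            eventSystem.gameObject.AddComponent<XRUIInputModule>();

        // The world-space canvas needs the tracked-device raycaster in addition
        // to the usual GraphicRaycaster.
        if (worldSpaceMenuCanvas.GetComponent<TrackedDeviceGraphicRaycaster>() == null)
            worldSpaceMenuCanvas.gameObject.AddComponent<TrackedDeviceGraphicRaycaster>();
    }
}
```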

You also need an Interactor. We discussed above the reasons for not using the XRIT pointer, and the VRTK pointer is not an XRIT Interactor. We could have used an XRIT direct interactor attached to the right controller to touch the buttons, but we could never make that work (we think it is some sort of interaction between the XR Controller that XRIT insists on adding to an Interactor and the OVR controller) and, anyway, we wanted to use the pointer.

So our solution was to make the VRTK pointer into an XRIT Interactor. This is certainly in the spirit of the documentation (although it does not say how to do it) and is allowed by the license. And, it is done through published APIs so should have some future. However, this is the weak point of this design.

EDIT — the previous version of this post used a box collider to cause the pointer to terminate on the UI element. This proved both to be unnecessary and to cause problems if the menu is moving in world space, and so it has been changed. You can still use a box collider if you want the visual effect of the pointer hitting the UI, but be aware that you will get unexpected artefacts if the menu is moving.

The interaction (and therefore the integration) comes in two parts:

1 We repurposed the existing XRIT Ray Interactor script — to get the base class and the methods that we needed to override. The key mechanism for integrating with the UI system is a model (i.e. a set of points that define the line, which might be a curve) that is passed to the UI raycaster. We added a method to the Interactor script to receive a model, and added this script to the pointer game object.

2 To get the model, we link the new method on the Interactor script to the Results Changed event on the Straight Line Cast script, on the StraightCaster child of the pointer game object.

The raycaster then determines if the pointer passes through the interactable and takes the appropriate action.

The model also contains a boolean for whether the Interactable should be selected. This was linked to the state of the right trigger.
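
Schematically, the pieces being wired up reduce to something like the sketch below. This is a much-simplified illustration of the data flow, not the actual script: in the real integration the class derives from the XRIT Ray Interactor so that the XRUI Input Module consumes its line data, and the ReceiveRay signature would need adapting to the actual payload type of the StraightLineCast event.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Much-simplified sketch of the VRTK-pointer-to-XRIT bridge. The real script
// derives from the XRIT Ray Interactor so the XRUI Input Module picks up its
// line data; here we only show the entry points wired to VRTK events, with
// hypothetical signatures.
public class PointerUiBridgeSketch : MonoBehaviour
{
    // Latest set of points defining the pointer line (two points for a straight ray).
    private readonly List<Vector3> linePoints = new List<Vector3>();

    // Whether the UI element under the ray should be treated as selected.
    private bool selected;

    // What the real UI-model update would read and copy into the model that
    // the UI raycaster consumes.
    public IReadOnlyList<Vector3> LinePoints => linePoints;
    public bool UiSelected => selected;

    // Wire to the StraightLineCast "Results Changed" event (adapting the
    // parameter to the event's actual payload type).
    public void ReceiveRay(IReadOnlyList<Vector3> points)
    {
        linePoints.Clear();
        if (points != null)
            linePoints.AddRange(points);
    }

    // Wire these to the pointer's activation events (e.g. the right trigger).
    public void Selected()   { selected = true; }
    public void Unselected() { selected = false; }
}
```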

The code we used is available with no warranty whatsoever and for information only. Read the comments for how to link this to the VRTK ObjectPointer — basically, you need to link the receiveRay method to the StraightCaster and the Selected and Unselected methods to the activation events — and then link the activated event of the ObjectPointer to the trigger (or whatever). As a note, we made our pointer always-on by going to each of the PointerElements and changing them to “Always On”. This very rough integration will only work with straight pointers (though it would not be that much work to make it work with curved ones), and we only implemented the functionality required for UI interactions. More work would be required to make the pointer work with other XRIT interactables.

[Image: Example — with Box Collider: satellite imagery © Mapbox]

Thus we have the pointer ray, which terminates correctly on the button, which correctly changes state to highlighted. Press the trigger and the button is clicked and then selected. All systems working correctly. The button has an OnClicked event for integration into the business logic.
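
Hooking business logic to that event is then standard Unity UI; for example (hypothetical button and handler names):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Example of wiring business logic to a menu button's OnClick event in code
// (the same hook can of course be set up in the Inspector instead).
public class MenuButtonBinding : MonoBehaviour
{
    [SerializeField] private Button loadLayerButton;   // hypothetical button and action

    private void OnEnable()
    {
        loadLayerButton.onClick.AddListener(OnLoadLayerClicked);
    }

    private void OnDisable()
    {
        loadLayerButton.onClick.RemoveListener(OnLoadLayerClicked);
    }

    private void OnLoadLayerClicked()
    {
        Debug.Log("Load layer requested");   // call into the business logic here
    }
}
```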

Wishlist

In case anyone cares, or has even reached this far, some lessons.

We managed to make a good VR interface using largely standard components. For that, we thank all who provided those components.

But I wish:

1 That all undeprecated modules and packages from Unity and major players like Oculus were URP-ready out of the box. There is really no excuse. Also, that Unity did not come with defaults that say “this is deprecated — use something else”. Why do I have to create a new project and then change to URP, the Input System and the XR plugins, and go around changing shaders, materials, etc.?

2 The VRTK Input mappings are really nice — but should not be necessary. This should have worked out of the box once the Input System and OVR Plugin are loaded. Unity and Oculus should be chasing each other to make that happen.

3 That, in general, the makers of interaction frameworks (like both XRIT and VRTK) did not base their integrations around their own XR Rig, and instead created frameworks that are easily integrated into any Rig, so that device providers like Oculus can provide the parts close to their hardware. VRTK is much better here than XRIT, the latter having very close integration with its own XR Controller entities, and I cannot really see why.

4 In general, XRIT does a very good job of integrating with the other Unity modules and should really have stopped there and specialised in providing a framework for other people to create XR Rigs, Interactors and Interactables. It is obvious from the documentation that this was the aim, but they seem to have got lost along the way. Then VRTK could focus its limited resources on creating good Interactors and Interactables without spending so much time on the plumbing, and it would get UI integration automatically.

5 Also in general, I wish that all of the Interaction components made fewer assumptions in their design. They all make assumptions about a state model fixed to touch and grab events. It would have been easier if they were architected around a set of customisable events (preferably unlimited), two of which just might be touch and grab. In our case, they were not.

