Developing A Client For ShadowVR: First Steps

Timothée Tallon
Published in Shadow Tech Blog
Jul 6, 2022

This article is part of the ShadowVR series; see the previous article for context.

As established in the previous article, the ShadowVR project consists of a SteamVR driver that runs in a Shadow virtual machine, and a client application that lets users connect from their own VR device. This article covers the first iterations of the client application, and the choices we made that would structure the whole project.

Shadow VR Client

To avoid too much complexity, we chose early on to restrict the development to a single class of VR devices: the Oculus Quest standalone headsets. This choice was motivated by several factors.

First, the Oculus Quest systems run on Android, which meant we could rely on the preexisting Shadow mobile application for Android and reuse a significant amount of code.

Second, the Oculus Quest systems are the most relevant use case of ShadowVR, as we would enable using any kind of VR content on a light, standalone headset, in line with the ambition we have at Shadow to free the users from the need for processing power.

And last, the Oculus Quest headsets are among the most widely used due to their simplicity and competitive price, and that is a plus for reaching as many users as possible.

For clarity, the ShadowVR client application’s role is to be the entry point for the users to access VR content on their Shadow. Just like any other Shadow application, it needs to provide a way for the users to log into their Shadow account, manage settings, and launch a streaming session, which could be a VR session to access VR content, or alternatively a desktop session to access the Shadow desktop directly. Unlike any other Shadow application, it needs to be adapted to be used in VR, which is important when designing the user experience.

Approach

Any VR application can be seen as a special kind of interactive real-time 3D application, not so different from video games or professional 3D design software. As such, VR development can be a challenging and time-consuming task: it requires many different components, such as a 3D renderer, an interaction engine, and possibly a physics simulation, all with high demands on performance and interactivity. In the industry, VR applications are mostly developed with game engines like Unity or Unreal Engine, delegating the complexity of 3D graphics and interactions to the engine and allowing developers to focus on the content.

In our case, the core of ShadowVR is the VR technology itself. We do not provide content, but a way to access already existing VR applications and games, in a way that should be as close as possible to a local experience. The best way to approach this is to control, as much as possible, everything that happens on the VR system: how frames are produced, processed, and displayed, and how tracking data is sampled on the headset and the controllers.

In the end, a 3D application running on Shadow will produce frames. These frames will be captured and sent over the network to the client, which will need to display them at a precise time to be in sync with the 3D application, while also controlling how they are displayed, taking into account the predictions used to produce a frame, and applying custom reprojection.
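To make the timing constraint concrete, here is a minimal sketch of the frame-pacing decision the client has to make: given the display time a streamed frame was predicted for, pick the vsync at which to show it. All names here are ours for illustration; the real pipeline relies on the headset runtime's own timing facilities.

```cpp
#include <cmath>

// Illustrative model of display timing on the client side.
struct DisplayClock {
    double lastVsyncSeconds;  // timestamp of the most recent vsync
    double refreshPeriod;     // e.g. 1.0 / 72.0 for a 72 Hz panel
};

// Return the timestamp of the first vsync at or after `targetTime`,
// i.e. the earliest moment the frame can actually reach the panel.
inline double nextVsyncAfter(const DisplayClock& clock, double targetTime) {
    double elapsed = targetTime - clock.lastVsyncSeconds;
    double periods = std::ceil(elapsed / clock.refreshPeriod);
    if (periods < 1.0) periods = 1.0;  // a late frame still goes to the next vsync
    return clock.lastVsyncSeconds + periods * clock.refreshPeriod;
}
```

A frame whose target time falls between two vsyncs is pushed to the later one; a frame that arrives too late is shown at the next opportunity rather than dropped into the past.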

Such control is not typically offered by the game engines mentioned above. We therefore chose not to use them, and instead to develop our own 3D rendering system with the OpenGL graphics API, managing the VR device through the low-level APIs provided by the headset manufacturers, in our case the Oculus VR API. Note that OpenXR, the industry-promoted VR development standard that has since replaced proprietary APIs, was not yet supported by Oculus when the project started.

We knew this choice would mean a lot of additional work, but it would also ensure that we could experiment in good conditions, in a controlled environment with low-level access to all the resources.

Android development

As stated before, the Oculus Quest is an Android-based system with a custom layer on top of it. This allowed us to leverage the experience we have at Shadow with Android devices and reuse many components of the Shadow mobile application. For instance, services like authentication, communication with the Shadow back-end, preferences management, and metrics collection were ready to use, written in Kotlin, and needed only minor adjustments.

The main difference from a classic Android application lies in the interactive 3D rendering needed to create a VR environment, and in the interaction with the VR system itself, both typically written in a low-level language for performance, C++ in our case.

To make these different elements work together, we wrote bindings using the Java Native Interface.

Native VR Development

The two main tasks at the beginning of the project on the client side were the setup of the VR system through the Oculus VR API, and the development of the low-level 3D renderer.

VR API

The Oculus VR API provides many functions to initialize and run a full VR application, and allows fine-grained control over what happens on the hardware. It took a while to figure out exactly how everything needed to be set up to run smoothly.

We designed an event-based system to keep track of all the system's state changes and forward them to the right component. For instance, we needed to correctly handle what happens when the user takes the headset off and puts it back on: clearing some resources while keeping others in memory to get back into VR quickly.
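The dispatch idea behind that event system can be sketched in a few lines. The event names and the `EventBus` class below are ours for illustration, not part of the Oculus SDK: state changes become events, and components register handlers for the ones they care about.

```cpp
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch of the event-based state forwarding described above.
enum class VrEvent { HeadsetMounted, HeadsetUnmounted, FocusGained, FocusLost };

class EventBus {
public:
    using Handler = std::function<void(VrEvent)>;

    // Register a component's interest in one event type.
    void subscribe(VrEvent e, Handler h) {
        handlers_[e].push_back(std::move(h));
    }

    // Forward a state change to every subscribed component.
    void publish(VrEvent e) {
        for (auto& h : handlers_[e]) h(e);
    }

private:
    std::map<VrEvent, std::vector<Handler>> handlers_;
};
```

With this shape, the renderer can subscribe to `HeadsetUnmounted` to release its swapchain while a session component keeps the connection alive, each reacting independently to the same event.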

One important aspect is how the rendering surface is handled: you cannot use just any surface or texture to produce and display frames, because the Oculus Quest system needs to keep a reference to it at all times in order to apply post-processing such as asynchronous timewarp. Instead, you ask the system to create a swapchain, i.e. a collection of OpenGL textures, by providing a texture description that matches your needs. Only these textures can then be used for rendering and submitted to the system for display.
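The usage pattern that results is a small ring of system-owned textures, rendered into and handed back one at a time. Here is a minimal model of that pattern; the integer handles stand in for OpenGL texture names, and in the real application the chain is created and owned by the Oculus VR API, not by this class.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Toy model of the swapchain pattern described above (our names, not the SDK's).
class SwapChainModel {
public:
    explicit SwapChainModel(std::vector<uint32_t> textures)
        : textures_(std::move(textures)) {}

    // Texture the renderer may write into for the current frame.
    uint32_t acquire() const { return textures_[index_]; }

    // Hand the texture back to the system for display and advance the ring.
    void submit() { index_ = (index_ + 1) % textures_.size(); }

private:
    std::vector<uint32_t> textures_;  // system-created texture handles
    std::size_t index_ = 0;
};
```

The ring guarantees the renderer never writes into a texture the system is still displaying, which is what allows the system to keep its reference and apply timewarp safely.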

Low-level Renderer

To render a frame onto these swapchain textures (either produced locally for the launcher part of the application, or remotely for streamed frames), we need a renderer: a system that is fed a scene’s geometry and materials along with a camera position, and that produces in real time the images displayed to the users.

Our renderer is classically composed of a rendering loop that runs continuously and receives render commands implementing a rendering pipeline, such as “clear that texture”, “draw a triangle from this point of view using this material”, and so on. These commands are sent from a separate “scheduling” loop, tasked with synchronizing the rendering with the headset panels’ refresh rate and with sampling the headset’s position and orientation prior to rendering.
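The hand-off between the two loops can be sketched as a thread-safe command queue. This is a simplification under our own naming: commands are modeled as closures, whereas a real renderer would more likely use typed command structs carrying GL state.

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Sketch of the two-loop structure: the scheduling loop pushes render
// commands that the rendering loop drains once per frame.
class CommandQueue {
public:
    using Command = std::function<void()>;

    // Called from the scheduling loop.
    void push(Command c) {
        std::lock_guard<std::mutex> lock(mutex_);
        commands_.push(std::move(c));
    }

    // Called from the rendering loop; executes all pending commands.
    void drain() {
        std::queue<Command> local;
        {
            // Swap under the lock so command execution happens lock-free.
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(local, commands_);
        }
        while (!local.empty()) {
            local.front()();
            local.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<Command> commands_;
};
```

Swapping the whole queue under the lock keeps the critical section tiny, so the scheduling loop is never blocked behind a long draw call.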

Note that the tracking data is not used as-is: it is predicted ahead of time, so that the pose used to render a frame is as close as possible to the user’s pose at the moment the frame is displayed. When rendering is done, the scheduling loop notifies the system that it can take ownership of the texture, apply distortion and reprojection, and finally display the frame to the user.

With these different elements up and running, we were ready for the next steps of the ShadowVR project: integrate the Shadow client library, connect to a Shadow VM, and start streaming. We’ll see all of this in the next article…
