How did Shadow use virtual reality concepts to develop an innovative new product?

Paul Ragot
Published in Shadow Tech Blog
Jul 6, 2022

Shadow has been an innovative company since its beginnings, so we are constantly interested in what is new in the technology industry. Virtual reality began to reach mainstream consumers almost 10 years ago, and many of our developers were really enthusiastic about it. Very early on, we looked into making VR headsets compatible with Shadow. Indeed, VR games that run on PC are particularly resource-intensive, requiring high-end GPUs. A few years ago, a VR setup cost nearly €2 500 (€1 000 for the headset and a minimum of €1 500 for the computer). VR devices compatible with a Shadow PC could drastically reduce any gamer’s expenses (€30 per month after a VR headset acquisition).

VR with local PC vs. VR with Shadow

When the Oculus Quest was released, we seized the opportunity offered by a completely autonomous VR headset with no wires or external sensors. This headset democratized VR through its low cost and ease of use. However, over the years many people at Shadow had acquired large libraries of VR games on the Steam store. These games were only compatible with PC headsets (HTC Vive & Oculus Rift), not with the Android-based Oculus Quest. We therefore decided to develop an application allowing the Oculus Quest to connect to a Shadow PC, so that our customers could play their Steam VR games on an affordable standalone headset.

Steam and HTC were pioneers of the VR market in its early days. They quickly brought the HTC Vive headset to market, giving access to VR games on the Steam store through software called SteamVR. The player had access to a 3D interface, a virtual world called SteamVR Home, in which they could navigate and ultimately access their games library. Developers could use the OpenVR SDK to build VR games compatible with SteamVR, but also to make any VR hardware compatible with SteamVR. This last point interested us the most: we wanted to create a virtual VR headset driver for SteamVR, allowing us to link any VR hardware to our Shadow PC in the cloud.

There are many differences between VR and classic video games. This article introduces the fundamental concepts used to produce a proper virtual reality rendering, and how our teams implemented them in our software.

Stereoscopy

When a game is displayed on a screen, the images are calculated using the projection of a single camera. For a third-person game, the camera sits a few meters behind the character being played; in a first-person game, it is located at the level of the character’s face. When playing on a screen, the player looks with both eyes at a single flat image, so it makes sense for the in-game camera to sit at the bridge of the character’s nose, between the two eyes. For a VR game, it’s a bit more complex: two images must be calculated, one for each of two cameras positioned at the level of the character’s eyes. The rendered images are displayed on two screens (or one screen split in two), positioned in front of the player’s eyes behind two converging lenses. The lenses provide a wider field of view, since the screens sit only a few millimeters from the player’s eyes. This configuration produces a 3D optical effect for the user: human depth perception is formed in the brain when it reconstructs a single image from the two flat, slightly different images coming from each eye.
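The two per-eye cameras can be sketched in a few lines. This is a minimal illustration assuming a yaw-only head pose and an average interpupillary distance; the names (`eye_positions`, `ipd`) are ours, not any engine’s API:

```python
import math

IPD = 0.063  # average interpupillary distance in meters (assumed value)

def eye_positions(head_pos, yaw, ipd=IPD):
    """Offset each eye by half the IPD along the head's local right axis."""
    # Right vector of a head rotated by `yaw` radians around the up axis
    # (convention: forward is -Z, right is +X at yaw = 0).
    right_axis = (math.cos(yaw), 0.0, -math.sin(yaw))
    half = ipd / 2.0
    left_eye = tuple(p - half * r for p, r in zip(head_pos, right_axis))
    right_eye = tuple(p + half * r for p, r in zip(head_pos, right_axis))
    return left_eye, right_eye

# A head standing 1.7 m tall at the origin, looking straight ahead:
left, right = eye_positions((0.0, 1.7, 0.0), yaw=0.0)
# The same scene is then rendered twice, once from each position; the two
# slightly shifted images are what the brain fuses into depth perception.
```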

Stereoscopy

Vertical Synchronisation

Swapchain

When a classic video game is rendered by the graphics card, it uses a texture swapchain: essentially a queue of textures ready to be presented to the player. In its rendering loop, the game calculates the pixels of the texture to be presented next (using shaders), and the result is written into the backbuffer of the swapchain (the last texture of the queue). During this time, the texture contained in the frontbuffer (the first texture of the queue) is displayed to the player. This is a typical 3D rendering concept. When the rendering of a texture is finished, the game performs a swap, which exchanges the frontbuffer and the backbuffer and presents the next texture to the player. The number of swaps performed in one second is the framerate of the game. This is where vertical synchronization comes into play: vsync synchronizes the framerate of the game with the refresh rate of the player’s monitor to avoid screen tearing (a torn look where the edges of objects fail to line up).
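The double-buffered swapchain described above can be sketched as follows. This is an illustrative model with made-up names; a real swapchain lives on the GPU and `swap()` would be a driver call:

```python
from collections import deque

class Swapchain:
    """Toy double-buffered swapchain: two textures in a queue."""

    def __init__(self):
        # Index 0 is the frontbuffer (on screen);
        # index -1 is the backbuffer (being rendered into).
        self.textures = deque(["texture A", "texture B"])

    @property
    def frontbuffer(self):
        return self.textures[0]

    @property
    def backbuffer(self):
        return self.textures[-1]

    def swap(self):
        # Exchange front and back: the freshly rendered texture is presented.
        self.textures.rotate(1)

chain = Swapchain()
presented = []
for _ in range(3):
    # ... shaders would fill the pixels of chain.backbuffer here ...
    chain.swap()
    presented.append(chain.frontbuffer)
# presented alternates between the two textures, one swap per frame.
# With vsync on, swap() would block until the display's next refresh,
# capping the framerate at the monitor's refresh rate.
```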

Screen tearing

Asynchronous reprojection

When VR headsets were developed, one of the main issues was the latency between the GPU and the headset, i.e. the time between when an image starts to be rendered and when it actually has to be presented on the headset’s display. With high latency, the user’s brain cannot correctly reconcile the information perceived by their eyes, ears and body, triggering something akin to motion sickness. One way to significantly improve latency, and limit motion sickness, is asynchronous reprojection. If the GPU is unable to keep up with the framerate of the headset, asynchronous reprojection intervenes by dropping the current frame and replacing it with the previous frame re-projected using predictions of the user’s head movement. A re-projected frame will have a slightly tilted appearance on the headset’s display, but the user feels less lag during their experience. The framerate can thus remain constant and high, guaranteeing the user a fluid experience, without judder. The reprojection is asynchronous because the process runs in parallel with the rendering. Most frames, even when not dropped, are reprojected by interpolating with the previous frame to significantly reduce the perceived latency.
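The compositor’s decision logic can be sketched like this. It is a deliberately simplified illustration: `Frame`, `reproject` and the yaw-only warp are stand-ins for the real GPU image warp, not any actual runtime’s API:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: str   # placeholder for the rendered image
    yaw: float    # head yaw (radians) the image was rendered for

def reproject(frame, predicted_yaw):
    # A real compositor warps the image on the GPU using the full head pose;
    # here we only record the rotation applied, which is what gives the
    # re-projected frame its slightly tilted look.
    shift = predicted_yaw - frame.yaw
    return Frame(pixels=f"{frame.pixels} rotated by {shift:+.3f} rad",
                 yaw=predicted_yaw)

def compositor_step(new_frame, last_frame, predicted_yaw):
    """Runs once per display refresh, in parallel with rendering."""
    if new_frame is None:  # the GPU missed the deadline: reuse the old frame
        return reproject(last_frame, predicted_yaw)
    return reproject(new_frame, predicted_yaw)  # fresh frames are warped too

# The GPU missed this refresh, so frame 41 is re-projected to the new pose:
last = Frame("frame 41", yaw=0.10)
out = compositor_step(None, last, predicted_yaw=0.12)
```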

ShadowVR: an innovative new product

At Shadow, we used all of these concepts to think about a new product: ShadowVR was born. We wanted to reproduce the behavior of a wired headset like the HTC Vive or the Oculus Rift, using a PC in the cloud: replacing the HDMI and USB cables (which usually connect the PC to the VR headset) with a network layer going from our data centers, where the GPU is located, to the user’s VR headset, via the Wi-Fi of their home.

As mentioned at the beginning of this article, SteamVR allowed us to develop our own virtual driver, making SteamVR games believe that a real VR headset is connected to the Shadow PC. Thanks to this driver, we were able to recover raw images from the game, with stereoscopy applied and without lens distortion. The game images are then encoded and streamed via our network protocol to our Android client for the Oculus Quest. In addition to the game frames, we send the timing and position information of the virtual headset, so the Oculus Quest can apply asynchronous reprojection (called asynchronous timewarp/spacewarp in the Oculus VR API). In return, our Android client communicates all framerate and vsync information to our data center, as well as the real headset and controller inputs. With these techniques, we were able to significantly reduce latency and achieve a completely smooth VR gaming experience on a cloud PC.
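The two directions of traffic described above could be modeled roughly like this. The field names are assumptions chosen for illustration, not Shadow’s actual wire protocol:

```python
from dataclasses import dataclass

@dataclass
class DownstreamPacket:
    """Data center -> Oculus Quest (per frame)."""
    encoded_frame: bytes      # game image, stereoscopy applied, no distortion
    render_pose_yaw: float    # pose the frame was rendered with
    render_timestamp_us: int  # timing info, needed for asynchronous timewarp

@dataclass
class UpstreamPacket:
    """Oculus Quest -> data center (per refresh)."""
    vsync_timestamp_us: int   # headset refresh timing
    headset_pose_yaw: float   # real headset pose
    controller_buttons: int   # controller inputs (bitmask)

down = DownstreamPacket(encoded_frame=b"\x00" * 16,
                        render_pose_yaw=0.10,
                        render_timestamp_us=1_000)
up = UpstreamPacket(vsync_timestamp_us=2_000,
                    headset_pose_yaw=0.12,
                    controller_buttons=0b1)
# The headset uses render_pose_yaw vs. its current pose to re-project each
# frame; the data center uses vsync timing to pace rendering and encoding.
```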

In the next article, we will talk about the development of the Android client for the Oculus Quest.
