CineShader VR Review

Lusion · Published in Lusion Ltd · Jan 21, 2021 · 6 min read

A cinematic immersive experience

When we first released CineShader one year ago, we knew that we wanted to make this environment accessible in VR. As CineShader is already a cinematic and contemplative experience, making it run inside an immersive virtual space was the logical next step for us.

Virtual Reality on the web

This project was an opportunity for us to give VR on the web a try, using the recent WebXR API. The fact that VR headsets have progressively become cheaper and more accessible was another good reason to try this new medium.

WebXR preview with Oculus Quest

Running virtual reality through a web browser first makes it easy to share a link with anybody and to be quickly immersed in a new world without installing any application. Most of all, it lets us keep the same codebase. Among other things, the promise of WebXR is to run our existing application directly in VR.
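In practice (with three.js, which CineShader already uses), very little boilerplate is needed to opt a renderer into WebXR. This is only a minimal sketch, not CineShader's actual code:

```js
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// Minimal WebXR bootstrap: the same scene and render code runs in VR.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true;                                   // let three.js manage the XR session
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));   // "Enter VR" button

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);

// setAnimationLoop (not requestAnimationFrame) so the loop is driven by the headset.
renderer.setAnimationLoop(() => {
  renderer.render(scene, camera);
});
```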

A first disclaimer about this promise: running an application in VR feels more like a mobile experience than a desktop one. Even though the Oculus Quest, our main target, is a powerful headset, you still have to render the scene twice and the browser limits you in some ways. But with some decisive choices and good optimisations, it's possible to run something smoothly. First, a lot of our post-processing effects had to be reduced in quality to keep the app running. You have to pick your battles wisely, and ours was framerate. On top of that, we found that a scene with too many vertices heavily impacts performance (we will come back to that point with the UI).

User journey inside the virtual world

Going from a two-dimensional website to an immersive three-dimensional experience opens up tons of design challenges. One of the main ones is the user interface.

Shader gallery: a trial-and-error story in 3 steps

1. The “museum desk”

The 3D desk idea, like an interactive screen in a museum

Our first idea was to imitate the small desks that present artworks in some museums. After a short time trying it, we quickly found out it was not the best solution:

  • the interface surface is small and not easy to use, because scrolling in VR is not intuitive
  • it's stuck to one physical position, which means you have to walk back to it every time you want to change the artwork
  • the text is only readable when you're very close

2. The “docked board” with existing libraries

After that, the “docked board” idea came naturally: it's the simplest way to display interactive, readable content in VR. To build it quickly, we tried using three-mesh-ui for the UI layout and troika-three-text to render the text in WebGL. The setup was easy and quick! However, we ran into performance issues: the libraries created a lot of unnecessary geometries and layers, which can be really slow to render, and it affected the overall user experience.
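For reference, here is roughly what a text label looks like with troika-three-text; the names and values below are ours, not the actual CineShader UI code:

```js
import { Text } from 'troika-three-text';

// Hypothetical label: troika renders SDF text as a regular three.js object.
const label = new Text();
label.text = 'Shader gallery';
label.fontSize = 0.08;            // world units
label.color = 0xffffff;
label.anchorX = 'center';
label.position.set(0, 1.4, -1);   // float the label in front of the user
label.sync();                     // (re)builds the text geometry
scene.add(label);                 // scene: your existing THREE.Scene
```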

3. The “docked board” without libraries

The final version of the interface uses canvas2D to draw images and text as textures on simple planes.

So, back to square one, we decided to write our own UI system using multiple canvas2D elements as textures on simple planes. It was surprisingly easy to do, more performant and very flexible (you can draw whatever you want on a canvas). The only tricky part was the gallery ‘selector’, where we had to determine the selected cell from the raycast position on the plane. In the end, we saved ~20fps and ~300KB on the JS build by making our own system!
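A minimal sketch of the approach, with a hypothetical 3×2 gallery grid (our own names, not the actual CineShader code): the panel is just a plane textured with a 2D canvas, and the selector uses the UV coordinates that three.js's raycaster already returns for the hit point.

```js
import * as THREE from 'three';

// Hypothetical gallery panel: a 2D canvas drawn once, used as a texture on a plane.
const canvas = document.createElement('canvas');
canvas.width = 1024;
canvas.height = 512;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#111';
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = '#fff';
ctx.font = '48px sans-serif';
ctx.fillText('Shader gallery', 40, 80);

const texture = new THREE.CanvasTexture(canvas);
const panel = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);

// Selector: the raycaster hit carries UV coordinates, so finding the selected cell
// is just a matter of scaling the UV into the grid (3x2 grid assumed here).
const COLS = 3, ROWS = 2;
function cellFromIntersection(intersection) {
  const uv = intersection.uv;   // (0,0) bottom-left .. (1,1) top-right of the plane
  const col = Math.min(COLS - 1, Math.floor(uv.x * COLS));
  const row = Math.min(ROWS - 1, Math.floor((1 - uv.y) * ROWS));
  return row * COLS + col;
}
```

Whenever the canvas content changes, setting `texture.needsUpdate = true` re-uploads it to the GPU.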

Interface familiarity

As mentioned previously, we tried different things for the interface, and the one that stayed is a dock similar to the one you find in the Oculus lobby. We thought that using something familiar is the first step to keeping the user inside the experience.

A tutorial briefly explains to users what they can do.

VR controllers: an improved mouse

We also used everything that was available to us, like the controllers, which are quite different from a mouse. Being able to shake them is a new micro-interaction that gives the user more feedback.

Even though users can physically move around to explore the room, we found that the joysticks were a nice way to walk around easily. And because moving virtually without moving physically can cause motion sickness, we also added a teleportation feature: selecting a point on the floor.

Ring animation to indicate the teleportation position
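A rough sketch of how such a teleport can be implemented with a three.js WebXR controller (hypothetical names: `floorMesh` is the floor, `playerRig` a group containing the camera, `controller` comes from `renderer.xr.getController(0)`):

```js
import * as THREE from 'three';

// Cast a ray from the controller; if it hits the floor, move the player rig there.
const raycaster = new THREE.Raycaster();
const tempMatrix = new THREE.Matrix4();

function tryTeleport(controller, floorMesh, playerRig) {
  // Build a ray from the controller's world transform (standard three.js XR pattern).
  tempMatrix.identity().extractRotation(controller.matrixWorld);
  raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
  raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);

  const hit = raycaster.intersectObject(floorMesh)[0];
  if (hit) {
    // Keep the rig's height; only move it horizontally to the selected point.
    playerRig.position.x = hit.point.x;
    playerRig.position.z = hit.point.z;
  }
}
```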

Finally, to completely immerse the user in front of this giant shader screen, we implemented background audio. With a positional audio source placed in the screen, the user really feels the space of the room.
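With three.js this boils down to a positional audio source parented to the screen mesh; a hedged sketch with assumed names (`screenMesh`, `ambience.mp3`):

```js
import * as THREE from 'three';

// The listener follows the camera; the sound is attached to the screen mesh,
// so it gets quieter and changes direction as the user moves around the room.
const listener = new THREE.AudioListener();
camera.add(listener);

const ambience = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('ambience.mp3', (buffer) => {  // hypothetical asset path
  ambience.setBuffer(buffer);
  ambience.setRefDistance(3);   // distance at which the volume starts to fall off
  ambience.setLoop(true);
  ambience.play();              // browsers require a user gesture before playback
});
screenMesh.add(ambience);        // screenMesh: the mesh showing the shader (assumed name)
```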

Visual adaptations

Since we want CineShader VR to run on a standalone Quest 1, we obviously needed to work on visual optimisation, otherwise it would not be an immersive experience for those users.

At an early stage, we did a quick test with the old version of CineShader on the Quest 1, and the experience ran at 8–10fps. So we had a lot to work on.

Firstly, we did the obvious optimisations by reducing the shader canvas size and the deformation vertex count. Then we used SpectorJS, a browser extension which helps spot unnecessary GL operations. For example, we noticed a mistake we had made in the old version: we simply used RGBA render targets everywhere, when we do not necessarily need the alpha channel in our experience.
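As an illustration of the kind of fix SpectorJS pointed us towards, an alpha-free render target could look like the sketch below on the three.js revisions of that time (THREE.RGBFormat has since been removed from newer releases, so treat this as indicative only, not as our exact code):

```js
import * as THREE from 'three';

// Render target without an alpha channel: less memory traffic, no alpha blending writes.
const sceneTarget = new THREE.WebGLRenderTarget(1024, 1024, {
  format: THREE.RGBFormat,   // RGB only (available in older three.js revisions)
  depthBuffer: true,
  stencilBuffer: false,      // skip the stencil buffer if it is never used
});
```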

After those obvious changes, we needed to make optimisation decisions that might impact the overall visuals. If you read up on whether you should use post-processing in a WebXR application, most WebXR handbooks will give you a straight answer: no. However, without the post-processing pass the visuals look completely flat, so we decided to keep it. Instead, we optimised the post-processing shaders and removed the unnecessary passes, and the result looks pretty much identical to what we had before.

The whole scene in CineShader was lit by one single textured area light. As you can imagine, a textured area light can be really expensive to run, especially on mid-to-low-end devices like the Quest 1. So we went for a cheaper alternative: simply sampling the texture based on the surface's reflected ray. Combined with bicubic mipmap sampling, we eyeballed the before/after results and were happy to find a middle ground between look and speed.
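The sketch below shows the general idea with our own simplified uniform names and a plain mipmap bias instead of the bicubic filtering we actually use: reflect the view ray off the surface, intersect it with the screen plane, and sample the screen texture at a blurrier mip level for rougher surfaces.

```js
import * as THREE from 'three';

// Hypothetical material: fake the textured area light by sampling the screen texture
// along the reflected view ray (the screen texture is assumed to have mipmaps enabled).
const screenReflectionMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uScreen: { value: null },                               // the shader canvas texture
    uScreenCenter: { value: new THREE.Vector3(0, 2, -5) },  // world-space centre of the screen
    uScreenNormal: { value: new THREE.Vector3(0, 0, 1) },
    uScreenRight: { value: new THREE.Vector3(1, 0, 0) },
    uScreenUp: { value: new THREE.Vector3(0, 1, 0) },
    uScreenSize: { value: new THREE.Vector2(8, 4.5) },      // screen width/height in world units
    uRoughness: { value: 0.4 },
  },
  vertexShader: /* glsl */ `
    varying vec3 vWorldPos;
    varying vec3 vWorldNormal;
    void main() {
      vec4 wp = modelMatrix * vec4(position, 1.0);
      vWorldPos = wp.xyz;
      vWorldNormal = normalize(mat3(modelMatrix) * normal);
      gl_Position = projectionMatrix * viewMatrix * wp;
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D uScreen;
    uniform vec3 uScreenCenter, uScreenNormal, uScreenRight, uScreenUp;
    uniform vec2 uScreenSize;
    uniform float uRoughness;
    varying vec3 vWorldPos;
    varying vec3 vWorldNormal;
    void main() {
      vec3 viewDir = normalize(vWorldPos - cameraPosition);
      vec3 r = reflect(viewDir, normalize(vWorldNormal));
      vec3 col = vec3(0.0);
      float denom = dot(r, uScreenNormal);
      if (abs(denom) > 1e-4) {
        // Intersect the reflected ray with the screen plane.
        float t = dot(uScreenCenter - vWorldPos, uScreenNormal) / denom;
        if (t > 0.0) {
          vec3 hit = vWorldPos + r * t - uScreenCenter;
          vec2 uv = vec2(dot(hit, uScreenRight), dot(hit, uScreenUp)) / uScreenSize + 0.5;
          // Rougher surfaces sample a blurrier mip level, approximating the soft area light.
          col = texture2D(uScreen, clamp(uv, 0.0, 1.0), uRoughness * 6.0).rgb;
        }
      }
      gl_FragColor = vec4(col, 1.0);
    }
  `,
});
```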

Good to know

VR Browser and WebXR

Before entering immersive VR, the user lands on a classic webpage shown in the Oculus Browser. Rendering complex animations and WebGL graphics inside this browser is really not smooth, as it seems to be capped by the device itself. So your landing page needs to be quite light, and only once the user launches the immersive WebXR mode can you let the magic happen.

CineShader’s landing version used for VR browsers. The background is an animated image instead of a 3D scene.

Animation system

WebXR relies on its own animation loop, which is not the same as the one used by most web-animation libraries. So when you use a library like GSAP, you need to update its ticker manually in order to use tweens inside your app.
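A hedged sketch of that wiring with GSAP 3 and three.js's XR loop (assuming `renderer`, `scene` and `camera` already exist):

```js
import { gsap } from 'gsap';

// Drive GSAP from the XR animation loop instead of its own requestAnimationFrame ticker.
gsap.ticker.remove(gsap.updateRoot);   // stop GSAP's internal rAF-based ticker
gsap.ticker.lagSmoothing(0);           // avoid catch-up jumps when frames are skipped

renderer.setAnimationLoop((time) => {
  gsap.updateRoot(time / 1000);        // advance all tweens with the XR frame time (seconds)
  renderer.render(scene, camera);
});
```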

Conclusion

Designing for VR can be quite challenging, as it's still quite new for the web industry. At the same time, it's exciting to innovate in this creative field, but it also requires making some mistakes to find the best solutions. We think that VR on the web opens new opportunities for users to dive into immersive experiences. For us, it's the possibility to use our own tools and workflow on a new medium. In the future, we look forward to pushing out more experiences like this one.

Thanks for reading!
Lusion
hello@lusion.co

Lusion is a creative studio specialising in immersive design, VR/AR, digital campaigns and installations.