W3C VR workshop: a quick trip to the future Web

Thomas Balouet
Oct 31, 2016 · 9 min read


by @thomasbalou

360° group photo from the very first W3C WebVR workshop! (360° version)

Two weeks ago I made the whimsical decision to book a flight from France to the San Francisco Bay Area in order to attend the W3C's first workshop on Virtual Reality (VR). The W3C is “an international community that develops open standards to ensure the long-term growth of the Web”. Basically, it is a joint effort steering the Web and discussing how it should be built so that it works in harmony everywhere and produces rich interactive experiences for everyone.

I started to work on WebVR almost two years ago. In April 2015 I managed to have the Web 3D social network Beloola working on an Oculus DK2. From this moment onward, I understood three things:

  • VR is going to radically change the way we use our software, from the information age to the experience age
  • The Web will be the Trojan horse of VR, slowly bringing it into our daily media experience
  • I want to be part of this world, and help in laying the groundwork of what we’ll soon call the Metaverse

Where Great Minds Meet

This W3C VR workshop was the perfect reunion of all the greatest minds, and I felt honoured to be able to join them: top-notch engineers from established browsers such as Brandon Jones (Google Chrome), Chris Van Wiemeersch (Mozilla Firefox) and Frank Olivier (Microsoft Edge), people building new VR-oriented browsers such as Laszlo Gombos (Samsung Internet for Gear VR) and Justin Rogers (Oculus Carmel browser), and many more.

WebVR is a Web API that brings VR into the browser, extending the medium's capabilities and handling each device's specifics at runtime (from plain 2D presentation on desktop to lens distortion on VR headsets, or device orientation for the ‘magic window’ mode on mobile). Today this makes the Web the most ubiquitous VR platform and the one most accessible to its audience, with very little friction.
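
To give a concrete idea of what the API looks like, here is a minimal sketch of the WebVR 1.1 flow as it stood around the workshop: find a headset, present a WebGL canvas to it, and render one view per eye each frame. The canvas variable and the #enter-vr button are assumed to exist on the page, and the actual WebGL drawing is left out.

```javascript
// Minimal WebVR 1.1 flow: find a headset, present a WebGL canvas to it,
// then render one view per eye on every frame the display requests.
navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) return;           // no headset: fall back to 2D
  var vrDisplay = displays[0];

  // Presenting must be triggered by a user gesture (e.g. a click).
  document.querySelector('#enter-vr').addEventListener('click', function () {
    vrDisplay.requestPresent([{ source: canvas }]).then(function () {
      var frameData = new VRFrameData();

      function onFrame() {
        vrDisplay.requestAnimationFrame(onFrame);
        vrDisplay.getFrameData(frameData); // pose + view/projection matrices
        // drawScene(frameData.leftViewMatrix, frameData.leftProjectionMatrix);
        // drawScene(frameData.rightViewMatrix, frameData.rightProjectionMatrix);
        vrDisplay.submitFrame();           // hand the frame to the compositor
      }
      vrDisplay.requestAnimationFrame(onFrame);
    });
  });
});
```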

That is why the W3C VR workshop was such an important event, showing VR as the next step for the Web. It demonstrated the industry's interest in WebVR, and it marked the start of many talks on product implementation and on defining common standards so the experience stays consistent across devices.

A glance at the future of the Web in VR

You can already find most of the presentations given during the workshop, and the notes taken during the event should come up soon. Other attendees will surely provide their own takeaways; Tony Parisi (VR serial entrepreneur), for one, has already published his review.

Meanwhile, I wanted to give you a glance at some of the main points of concern we talked about, from my point of view, to give you an idea of what the future of the Web could look like.

VR user interactions in browsers

The Web as we know it is a constellation of information about pretty much anything happening in our world. But as of today, it is mostly presented in 2D (with some early adoption of 3D models on some webpages).

Bringing the Web to VR means we have to rethink the way we use it, how we view it and how we interact with it. To me, three main points came up on this subject:

  • We need to identify a way to bring the current Web to VR, because there is already so much information we don't want to lose. We are looking for a way to enhance existing systems, like HTML and CSS, so they can render in VR. It could be by bringing the entire flat page into VR, or by adding parameters to render the page differently, in a 3D perspective (with depth)
  • We should be able to go seamlessly from one page to another in VR. As of today, you have to take off your headset to do so, which doesn't make for a good experience. But link traversal (going from one page to another without requiring user action outside VR) raises many questions about security (how do you know where you're going, how do you prevent spoofing?) and about how to do it without the user losing immersion
  • We also need to interact with those pages. From the single input button on Google Cardboard to controllers and room-scale tracking on high-end headsets, via the 3DoF controller on Daydream, we need to find a common scheme so the user always knows how to interact, or each method has to be intuitive enough for the user to adapt. We basically have to reinvent the desktop mouse. It could be laser-pointing, camera-tracked hand gestures, or something else we haven't thought of yet (a rough sketch of such a unified input follows the figure below).
VR interaction with a 2D web page, from Josh Carpenter's (Google WebVR designer) presentation
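
To make the input question concrete, here is a rough sketch (mine, not something defined at the workshop) of what a unified ‘select’ action could look like: the Cardboard-style screen tap and the primary button of any Gamepad-API controller are folded into one callback. The handleSelect function is just a placeholder for whatever the page wants to activate.

```javascript
// Hypothetical input abstraction: normalise 'select' across devices.
function onSelect(handleSelect) {
  // Google Cardboard: the single button is effectively a tap on the screen.
  window.addEventListener('click', handleSelect);

  // Tracked or 3DoF controllers are exposed through the Gamepad API;
  // poll them each frame and fire on a fresh primary-button press.
  var wasPressed = false;
  function poll() {
    var pads = navigator.getGamepads ? navigator.getGamepads() : [];
    var pressed = false;
    for (var i = 0; i < pads.length; i++) {
      var pad = pads[i];
      if (pad && pad.buttons.length && pad.buttons[0].pressed) pressed = true;
    }
    if (pressed && !wasPressed) handleSelect();
    wasPressed = pressed;
    window.requestAnimationFrame(poll);
  }
  window.requestAnimationFrame(poll);
}

// Usage: onSelect(function () { /* activate whatever is gaze- or laser-targeted */ });
```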

Multi-user VR experiences

The high potential of VR is social. Facebook's acquisition of Oculus and its work on a social application are one proof of it. Other VR applications like High Fidelity, JanusVR or AltspaceVR make social interaction their core business. And the Web is the most social place of all, where people from all across the world can meet and share things together.

But how do you enable gatherings in VR? How do you make the users “feel there” and interact with the other users in real time? How do you find places to go?

Avatars in JanusVR (top) and High Fidelity (bottom)

Some ideas were discussed. Boris Smus from Google talked about co-presence, enabled by positional audio and by direct mappings between the real world and the virtual one (for example, scaling the size of your avatar changes the sound of your voice). But we will also need to represent users in a way they can relate to. Every application has its own take on it, from human-like avatars in High Fidelity to any 3D model in JanusVR. And how will a user relate to another avatar, recognize it, and feel like they are talking to the real person behind the headset?
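
Positional audio, at least, is already within reach of today's Web Audio API. Here is a minimal sketch, assuming a remote user's voice arrives as a WebRTC MediaStream (called remoteStream here) and that the page updates positions from its scene every frame:

```javascript
// Spatialise a remote user's voice with the Web Audio API.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var voice = audioCtx.createMediaStreamSource(remoteStream); // remoteStream assumed

var panner = audioCtx.createPanner();
panner.panningModel = 'HRTF';        // head-related transfer function for a 3D feel
panner.distanceModel = 'inverse';    // farther avatars sound quieter

voice.connect(panner);
panner.connect(audioCtx.destination);

// Call every frame with the avatar's and the listener's world positions.
function updateAudio(avatarPos, listenerPos) {
  panner.setPosition(avatarPos.x, avatarPos.y, avatarPos.z);
  audioCtx.listener.setPosition(listenerPos.x, listenerPos.y, listenerPos.z);
}
```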

Multi-user VR, and VR in general, is also about how you move from a clickable space to a walkable one. The devices existing today imply different methods, and different user experiences are still being tested. From teleportation to automatic walk-through, every application, once again, offers its own take, and no perfect solution has been found yet.

Finally, Tony Parisi presented a collaboration with Mark Pesce on a Mixed Reality Service, taking longitude, latitude and other information and mapping it to a unique URI (Uniform Resource Identifier) representing a VR experience, a bit like domain name servers work today. It would help organize the VR world and enable people to access any part of it.
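
The details of the Mixed Reality Service were only sketched at the workshop, so the snippet below is purely illustrative: the endpoint, query parameters and response shape are all invented, but it shows the kind of coordinates-to-URI resolution the idea implies.

```javascript
// Purely hypothetical MRS lookup: resolve a real-world position to the URI
// of the VR experience anchored there (endpoint and fields are invented).
function resolvePlace(latitude, longitude) {
  var url = 'https://mrs.example.com/resolve' +
            '?lat=' + latitude + '&lon=' + longitude;
  return fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (record) { return record.uri; }); // e.g. a URL to a VR world
}

// resolvePlace(37.7749, -122.4194).then(function (uri) { window.location = uri; });
```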

Authoring VR experiences on the Web

That was another big point of the workshop, which translated into two breakout sessions on the second day. The attendees agreed that we need a way to simplify authoring VR on the Web. Some tools already exist, like A-Frame, Vizor or XML3D, and they are already pretty efficient at what they do.

A simple WebVR Hello World by A-Frame
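
The figure above embedded A-Frame's declarative HTML markup. Since A-Frame primitives are custom elements, a comparable hello-world scene can also be assembled from script; the sketch below assumes the A-Frame library is already loaded on the page.

```javascript
// Build a tiny A-Frame scene imperatively (A-Frame script already included).
var scene = document.createElement('a-scene');

var box = document.createElement('a-box');
box.setAttribute('position', '-1 0.5 -3');
box.setAttribute('color', '#4CC3D9');

var sky = document.createElement('a-sky');
sky.setAttribute('color', '#ECECEC');

scene.appendChild(box);
scene.appendChild(sky);
document.body.appendChild(scene);
```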

But the question was raised whether we should take this up to the browser itself, enabling a broader system and making better use of hardware resources in order to render a better experience.

This brings us back to asking whether we should enhance the existing Web stack, DOM and CSS, to bring existing content to VR and make it really easy for people to create new immersive content.

Another element to take into consideration is whether the browser should implement key features that are instrumental to every experience, like personal safety zones, within which elements are occluded or pushed away (from Josh Carpenter's talk), or a 360° background, which the Samsung Gear VR browser has already implemented. Security matters would also come into play: how do you represent link traversal, handle the transition to another site, or protect the user from a malicious experience?

The question of a common format for 3D models also arose. As of today, almost any 3D file format can be made to work on the Web, if you put in a little work… But we need a simple format that is easy to create, to share and to display. The glTF format is moving toward that role. It is an open standard from the Khronos Group and it could become the JPEG of 3D models, helping people create and publish in 3D for WebVR. The point here is not just to make yet another format, but to simplify the creation process for everyone.
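
As an example of the low friction glTF aims for, loading a model with a WebGL library such as three.js and its glTF loader comes down to a few lines. This is only an illustration: three.js was not part of the discussion, the import paths may vary with your three.js version, and the model URL is a placeholder.

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Load a glTF asset and drop its scene graph into an existing three.js scene.
const scene = new THREE.Scene();
const loader = new GLTFLoader();

loader.load(
  'models/helmet.gltf',                      // placeholder URL
  (gltf) => scene.add(gltf.scene),           // success: add the loaded hierarchy
  undefined,                                 // progress callback unused
  (error) => console.error('glTF load failed', error)
);
```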

360° video on the Web

reVeRies is an immersive platform for VR/360° videos on the Web

360° content is a huge topic for WebVR multimedia, as it is something users can create easily with the rise of 360° cameras. This area is of particular interest to me, as it is core to the reVeRies platform I'm working on at LucidWeb.

Beyond specifying metadata to tell the system how the video is laid out (top/bottom, side by side…) and other information, two main points were addressed:

  • The streaming problem, which is relevant to any 360° application, not just the Web. Streaming 4K (and larger) videos is heavy on bandwidth and heavy for the hardware to process. Plus, since in a 360° experience the user isn't looking at the whole picture, it doesn't make sense to send it all. Louay Bassbouss from Fraunhofer presented an early solution that divides the picture into tiles and sends only the portion the user is looking at. Other companies like Pixvana are working on streaming optimisation.
  • The display problem, which is inherent to the Web. As of today, each frame of an HTML5 video element is copied into a texture, applied through WebGL and rendered through WebVR, which makes the process very heavy and slow (a sketch of this per-frame copy follows the figure below). The proposal from David Dorwin of Google is to have browsers handle 360° videos directly, with something like a spherical DOM; it would greatly improve latency and performance.
Pixvana work on adaptive streaming for 360° content
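
To illustrate the display problem mentioned above, this is roughly the per-frame copy that happens today before WebVR can show a 360° video: gl is an existing WebGL context, video is a playing HTML5 video element, and the sphere geometry, shaders and draw calls are left out.

```javascript
// Today's path: copy each decoded video frame into a WebGL texture,
// then draw it onto a sphere around the viewer (draw calls omitted).
var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function uploadFrame() {
  // The expensive part: a full re-upload of the current video frame.
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  // ...draw the textured sphere for each eye, then submit the frame to WebVR...
  window.requestAnimationFrame(uploadFrame);
}
window.requestAnimationFrame(uploadFrame);
```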

Conclusion

It would take more than a week to sum up those two days of workshop. It has been a real pleasure to talk about all these exciting questions about the future, and to exchange ideas with other enthusiastic WebVR professionals.

WebVR is still a work in progress, and like the Web itself, it will continue to be worked on for as long as it exists. But the technology should come to your browser quite soon, starting with Chrome for Android by the end of 2016, accompanied by the first version of Carmel, the Oculus WebVR browser. Implementation will continue, and other browsers should announce deployments of their own in the beginning of 2017.

If you are interested in the subject, you can follow the work done in the W3C group, and if you want to get involved, join the WebVR mailing list to participate in discussions and future working groups on the matters you want to take part in.

To conclude, the trip was time well spent: new information and developments on WebVR were shared, and I learned a lot, including interesting ways to improve our reVeRies platform at LucidWeb. We are just at the start, so I will continue to work with the WebVR community to try to progress together. Stay tuned, more good news should come soon!
