John Riccitiello, the CEO of Unity, explains how real-time 3D is disrupting every industry, including virtual reality (at Disrupt 2018).

Also, how Unity gave us a sneak peek of its Oculus Connect 5 presentation (at GDC 2018), and how the VR/AR industry is going to go mainstream!

Silicon Valley Global News (SVGN.io)
9 min read · Sep 23, 2018


Article by Micah Blumberg, a journalist, analyst, researcher, and neurohacker.

Published Sept 22nd, 2018: At the Vision Summit in 2016, John Riccitiello presented his vision of the future of VR, centered on what he called the gap of disappointment.

The 2016 Vision Summit Keynote

I was sitting in the audience when he gave that keynote in 2016. In that talk he contradicted most analysts' predictions for the VR industry that year, which had VR growing in a straight upward line to become a mass-market item within a couple of years. Riccitiello instead laid out a longer-term vision that began with several years (the gap of disappointment) in which VR and AR would not become mainstream products, and how right he was. The reason, he said, was that consumer-targeted VR and AR hardware was, and in his view still is, in a sort of beta development stage. The devices are expensive and error prone, with problematic control systems and tracking issues; you still can't see your arms or your legs, and eyes and cheeks are not yet tracked in most headsets. Finally, there isn't much content available relative to the film and games industries.

Fast-forward two years: Nvidia surprised the world with a real-time ray tracing graphics card at SIGGRAPH 2018, and months before that, at GDC 2018, Unity unveiled its updated software pipeline for real-time ray tracing on the GPU, part of its broader push into real-time 3D. Unity's software works on the current PlayStation 4, Xbox One, and existing PC graphics cards like the GTX 10-series and AMD cards (which is why Shadow of the Tomb Raider, which Nvidia showed off as an RTX title, still looks great on PS4 and Xbox One), and we learned from OTOY (at SIGGRAPH 2018) that real-time ray tracing even works on the iPhone with its modern GPU.

Unity at GDC — High definition render pipeline, shader graph, GPU-based progressive lightmapper

The fact that ray tracing can work on an iPhone is the reason I think real-time ray tracing will work on the Oculus Santa Cruz. However, that's not going to be the same kind of experience you might get with desktop VR.

During the talks Unity gave at GDC 2018, the company unveiled a high-definition render pipeline, a shader graph, a GPU-based progressive lightmapper, and a scriptable render pipeline that it will discuss further at OC5. The importance of all this to VR was hidden in plain sight, for us to appreciate only in retrospect (hindsight is 20/20), because it wasn't until SIGGRAPH, when Nvidia finally unveiled the RTX card, that industry insiders could begin to grasp just how huge a shift this is going to be for the computer graphics industry and all the industries that rely on it.

Unity’s Scriptable Render Pipeline talk at GDC

During one of Unity's talks, "Unity at GDC — High definition render pipeline, shader graph, GPU-based progressive lightmapper," the company unveiled many new techniques that will be used to create next-generation VR and AR experiences and make them more immersive than ever. To start with, Unity is making it much easier to capture reality, including high-quality 3D objects as well as textures and materials, and import it into games and movies, using techniques like photogrammetry, the Shader Graph tool, and a variety of new tools such as the aforementioned scriptable render pipeline.

These tools allow small teams to create high-resolution, high-quality 3D assets efficiently and at low cost. In the Unity Asset Store you can find a large new collection (as of GDC 2018) of high-quality photogrammetry and material assets to use in your game.

The key components of Unity's real-time 3D (ray tracing) pipeline, or HD RP for short, are:

1. It's physically accurate: physically based rendering for materials and physical light units for lighting (ray tracing).
2. It's unified: one lighting and material response for opaques, transparents, decals, translucents, and volumetrics.
3. Baked lighting happens in real time (another reference to real-time ray tracing).
4. It gives you predictable results under all lighting conditions and all material responses.
5. It's highly configurable, with plenty of options for creating very specific content.
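"Physical light units" means lights are specified in real-world photometric quantities (lumens, candela, lux) rather than arbitrary intensity sliders. As a minimal sketch of the underlying math (standard photometry, not Unity's actual implementation): an isotropic point light spreads its luminous flux over the full sphere of 4π steradians, and illuminance falls off with the inverse square of distance.

```python
import math

def candela_from_lumens(flux_lm: float) -> float:
    """Intensity of an isotropic point light: flux spread over 4*pi steradians."""
    return flux_lm / (4.0 * math.pi)

def lux_at_distance(intensity_cd: float, d_m: float) -> float:
    """Illuminance obeys the inverse-square law with distance."""
    return intensity_cd / (d_m * d_m)

# Example: a hypothetical 1600 lm bulb (roughly a 100 W incandescent).
intensity = candela_from_lumens(1600.0)   # ~127.3 cd
illum = lux_at_distance(intensity, 2.0)   # ~31.8 lux at 2 m
```

Because the units are physical, artists can look up real-world values for a candle, a streetlight, or daylight and get predictable results, which is exactly the "predictable under all lighting conditions" point above.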

Some of the key features of HD RP include the ability to define region-specific lighting effects, and Unity has automated tessellation and displacement so that intense detail is added only as you get closer to what you are looking at. This lets you see a high-resolution world in VR while saving on GPU performance cost.
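The distance-based detail idea above can be sketched in a few lines. This is a generic illustration (the function name, distance thresholds, and maximum factor are my own assumptions, not Unity's API): a tessellation factor is faded linearly from a maximum near the camera down to 1 (no subdivision) at a far cutoff.

```python
def tessellation_factor(distance: float,
                        min_dist: float = 2.0,
                        max_dist: float = 30.0,
                        max_factor: float = 64.0) -> float:
    """Fade tessellation from max_factor (camera up close) to 1.0 (far away).

    distance   -- distance from camera to the surface, in meters
    min_dist   -- closer than this gets full tessellation
    max_dist   -- farther than this gets no extra tessellation
    max_factor -- subdivision level applied at min_dist or closer
    """
    t = (distance - min_dist) / (max_dist - min_dist)
    t = min(max(t, 0.0), 1.0)  # clamp interpolant to [0, 1]
    return max_factor + t * (1.0 - max_factor)

# Nearby rock gets heavy subdivision, distant rock stays a simple mesh:
near = tessellation_factor(2.0)    # 64.0
far = tessellation_factor(50.0)    # 1.0
```

In a real pipeline this computation runs per patch on the GPU (in the hull shader stage), but the cost-saving logic is the same: polygon density is spent only where the viewer can perceive it.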

Trees, leaves, rocks: everything in the environment can have photorealistic or cinematic-quality detail.

Materials can also be layered on top of other materials in really complex ways to create real world believable surfaces. This will help trick your mind into creating greater immersion in VR so you are more likely to get that wonderful feeling of presence that makes you feel like you are actually in some other place.

You can see light and shadow through leaf materials, for example. You can see how an amber material on a wood material collects light bouncing off the wood inside its material, adding a sense of realism that isn't possible without physics-based lighting effects (aka real-time ray tracing).

The technique is called subsurface scattering: light scatters below the surface of the first material, picking up the underlying materials before it exits. A related technique, skin subsurface scattering, is necessary to begin to reduce the uncanny-valley effect.
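Production renderers approximate subsurface scattering with diffusion profiles, but the core visual idea can be shown with the much simpler "wrap lighting" trick long used in real-time graphics: light is allowed to bleed slightly past the shadow terminator, mimicking photons that entered the surface on the lit side and exited on the dark side. This is a minimal sketch of that cheap approximation, not Unity's actual SSS implementation.

```python
def wrap_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
    """Wrap-lighting diffuse term, a cheap stand-in for subsurface scattering.

    n_dot_l -- dot product of surface normal and light direction, in [-1, 1]
    wrap    -- how far light bleeds past the terminator; 0.0 is plain Lambert
    """
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# A point just past the terminator (n_dot_l = -0.2) is pitch black under
# plain Lambert shading, but still softly lit with wrap lighting:
hard = wrap_diffuse(-0.2, wrap=0.0)  # 0.0
soft = wrap_diffuse(-0.2, wrap=0.5)  # 0.2
```

On skin this softened falloff is what removes the hard, plasticky shadow edge around noses and ears, which is a large part of what reads as "uncanny" in untreated renders.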

Hair, skin, fabric, eyes, and even soap bubbles will have realism that has never before been seen in VR.

At TechCrunch Disrupt 2018, John Riccitiello also explained how this new real-time 3D rendering is disrupting a host of industries, driving down the costs of capture and rendering. That includes the auto industry, which is going to be revolutionized from auto design to car configurators to autonomous driving. The same time and cost savings extend to architecture, engineering, and construction, where a lot of work that used to take weeks or months can now be done in real time.

The same is happening in the film industry, in games, and in virtual reality and augmented reality, or what's now called XR.

John Riccitiello noted that in the games industry (referring to Nintendo, PlayStation, Xbox) a console launch meant billions in marketing, billions of dollars tooling GPUs and CPUs the world had never seen before, and billions of dollars invested in content, so a huge number of software titles arrived alongside a console costing under 500 dollars. A great games console platform, he said, looks like it is designed to conquer the world, and we haven't yet seen that kind of push in VR or AR, so it's still early days.

It might be another 6 to 9 months before some of the big changes are felt (a reference to my talk with Jules Urbach and his prediction that getting the most out of the new RTX cards might take another 6 to 9 months).

What has happened is that core products like the Vive, Oculus, and HoloLens are seeing massive adoption by industry, by the world outside consumer technology. What we are all waiting for, I believe, is that multi-billion-dollar push to get a VR headset costing less than $500 into the hands of the mainstream consumer market.

Oculus Santa Cruz might be the first device capable of reaching the mass market if there is the kind of push behind it that John Riccitiello was talking about, but don't get your hopes too high. We are only two years into John's predicted 5-to-7-year timeline. It could still take another 3 to 5 years before VR and AR leave their beta development stage and gain mass-market acceptance. We can only hope it comes sooner rather than later.

Below is a video I created about this article; the story is expanded a little bit in the video.

Upcoming talks that you may want to check out include:

Thu, Sept 27, 2018

Event: Oculus Connect 5
Title: Unlock the Visual Potential of VR Using Unity’s Scriptable Render Pipeline

Location: Grand Ballroom 03:00 PM — 03:45 PM
Description: VR’s future is being transformed by stunning visuals and new rendering capabilities made possible by advancements such as Unity’s Scriptable Render Pipeline (SRP). As seen through Unity’s ‘Book of the Dead’ demo, these technologies are now available in VR for the first time. Unity’s award-winning Demo Team developed ‘Book of the Dead,’ a first-person interactive demo showcasing the capabilities of Unity 2018 for powering high-end visuals for game productions. In this session, learn how SRP provides enhanced customizability of Unity’s rendering architecture and can put more control in the hands of developers.

SPEAKERS
Bradley Weiers

Director of XR Product, Unity

Tony Parisi

Head of VR/AR Brand Solutions, Unity Technologies


Silicon Valley Global News: VR, AR, WebXR, 3D Semantic Segmentation AI, Medical Imaging, Neuroscience, Brain Machine Interfaces, Light Field Video, Drones