2020: The year of Real Time Ray Tracing

A look at a technology that is at once hot, new, and very old, now running in real time.

Daniel Rose
GameTextures
13 min read · Jan 24, 2020


An image from Control, which makes use of every ray tracing feature available.

2020 has kicked off, and the movers, shakers, and consumers in the console gaming space are already looking to the end of the year. The Xbox Series X has been shown in concept renders and a teaser trailer by Microsoft. Leaked shots of Sony’s PlayStation 5 development kits have been on the internet for months, and Sony has chosen to share tidbits of information too (we know it’ll have an SSD of some sort). Excitement among consumers abounds; there is nothing quite like a console launch, and with the viability of the home console seemingly proven by the successes of the PS4 and Nintendo Switch (the Xbox One did fine too), those of us who don’t have dev kits can’t wait to see exactly what Microsoft and Sony are up to.

One detail that has seemingly been confirmed about the upcoming consoles is a feature that high-end PCs with Nvidia graphics cards have had for over a year now: ray tracing support.

What is ray tracing? PCs have this now? How do we develop for it? Why does it matter?

We’re here to answer all of that. This is the GameTextures Primer on Ray Tracing in Real Time.

Metro Exodus. 4A Games chose to implement single-bounce Global Illumination, making the brights stand out while what’s shadowed is truly in shadow.

What is Ray Tracing?

Ray tracing is the primary form of rendering 3D scenes for use in film, TV, commercials, trailers, and the like.

That wasn’t enough, was it?

Ray tracing is the process of tracing the path of light backwards from the camera through the scene until it reaches the light source it came from, or it terminates. This can be controlled by ‘bounces’, often set in the rendering tool being used. One bounce means the trace will hit an object, then go straight to its light source. Ten bounces means that a light trace will bounce around the scene ten times, based on a ton of math (the BRDF, the surface normal, and more), before either hitting the light source on its own OR terminating. All of this is done on a per-pixel basis based on your target render output. If you have a target render of 1080p (1920x1080), this means you have about two million pixels that need to have traces cast for them. If you have a 4K output (3840x2160), that pixel count jumps to about 8.3 million. Each pixel is also typically rendered with a number of samples, determined by the camera, light, material, or any combination of those three (again, depending on the software). This is why, when creating renders using a program like Arnold or V-Ray, it’s important to test at a mix of low sample counts and low resolution. Sample counts and resolution are some of the biggest costs in terms of render time.
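
To make that cost structure concrete, here is a tiny, self-contained C++ sketch (purely illustrative; the numbers and the function are not from any real renderer) that estimates how the ray count scales with resolution, samples per pixel, and bounce limit:

    #include <cstdint>
    #include <cstdio>

    // Illustrative only: a rough upper bound on traced ray segments,
    // treating cost as pixels x samples x bounces. Real renderers terminate
    // rays early and add shadow rays, so this is just a scaling estimate.
    static uint64_t ray_budget(uint64_t width, uint64_t height,
                               uint64_t samples_per_pixel, uint64_t max_bounces)
    {
        uint64_t primary = width * height * samples_per_pixel; // camera rays
        return primary * max_bounces;                          // plus their bounces
    }

    int main()
    {
        // 1080p is ~2.07 million pixels; 4K is ~8.29 million.
        std::printf("1080p, 16 spp, 1 bounce:   %llu rays\n",
                    (unsigned long long)ray_budget(1920, 1080, 16, 1));
        std::printf("4K,    16 spp, 10 bounces: %llu rays\n",
                    (unsigned long long)ray_budget(3840, 2160, 16, 10));
        return 0;
    }

Even at one sample per pixel and a single bounce, a 4K target already means over eight million rays per frame, which is why sample counts and resolution dominate render times.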

A general explanation from Nvidia of DXR in DirectX 12. Naturally, slightly biased.

Ray tracing has been around for a very long time. The first record of a ray-traced CGI film I could find was The Compleat Angler from 1979.

Ray tracing in real time has been considered a ‘holy grail’ of sorts for graphics programmers for a long time. Due to the nature of bounce lighting, reflections, and translucent objects (glass), ray traced renders look more natural. Unfortunately, speed has always been the downside of this technique. You needed a lot of computing power to process high resolution images with high fidelity models and realistic lighting with ray tracing. Even today, to get the absolute best render quality available with a renderer like Arnold, a high number of samples per pixel and high resolutions are needed for both materials and lights in a scene. Denoiser technology has come a long way thanks to machine learning, but even Nvidia’s OptiX and Intel’s Open Image Denoise continue to benefit from moderate to high sample counts.

PCs have this now?

In late 2018, Nvidia released its new graphics cards, the GeForce RTX series. While reviewers were generally ‘meh’ on the performance improvements compared to the GTX 10 series that came out in 2016, the RTX series impressed with its feature set, including support for hardware accelerated ray tracing in DirectX 12 titles using DXR, or DirectX Raytracing.

Interestingly enough, this wasn’t the first time ray tracing had been shown running in real time. In a collaboration between Nvidia, Epic, Disney, and Microsoft, Epic showcased Reflections during GDC in 2018.

Reflections is a fun little video set in an elevator, showcasing real time reflections of lights on the armor of a few different Stormtroopers. Epic also showcased a few other demos at GDC that year, all with ray tracing support. Prior to the release of the Turing architecture that powers the RTX series, three Volta GPUs were used in SLI to render many of the demos shown that year.

Control, RT on the left, Traditional Rendering on the right

It’s important to note that DXR is a feature of DirectX 12 and isn’t unique to Nvidia. Nvidia, however, has engineered its hardware to support and accelerate this spec. It’s even possible for many GTX 1060 or higher series owners to get a taste of what real time ray tracing can do: earlier in 2019, Nvidia enabled those cards to process DXR calculations via their shading cores. Just be warned, it’s incredibly slow.

Along similar lines, Epic’s Unreal Engine is on the bleeding edge of supporting real time ray tracing. Its 4.22 release in early 2019 finally brought ray tracing to users of the engine, enabling game developers to test it out with their titles. At the time, I was working on product visualizations for Dreamline. I immediately got to work implementing ray tracing into my workflow and, thanks to some tips and tricks from Epic’s helpful tutorials, was able to get very high quality renders that came pretty close to what my co-workers were producing with V-Ray, all in significantly less time (on a GTX 1070, no less).

It should also be noted that Unity Technologies and Crytek, the makers of Unity and CryEngine respectively, are working on their own ray traced rendering options as well. Unity has ray tracing available in preview currently, and Crytek plans to ship its implementation in 2020.

CryEngine’s Ray Tracing Demo
Unity’s Ray Tracing Demo

How do we develop for it?

I am going to use Unreal Engine as the frame of reference and example, since I have more experience with that tool. Unity offers ray tracing support in 2019.3b, but it is currently in preview and is missing functionality, whereas Unreal no longer considers ray tracing experimental.

Working with ray tracing from a development perspective is exciting, especially for someone like myself. I have experience with Arnold, V-Ray, and the now-defunct Mental Ray. I have shipped countless games and assisted on many experiences that use a variety of tools. Ray tracing coming to real time opens up many more opportunities for the tools and workflows I know to be adopted by many new industries.

Unreal’s Arch Viz learning scene with RT Enabled
Same scene with no RT and full Screen Space Reflections

It also introduces new challenges. What changes when you’re building a scene? Do I still need reflection probes? Do I still need light maps? Will this work with LODs? How much will this hit performance? Can my work machine keep up? Do I need to change my textures around?

On the asset side, 3D models need no changes at all. If you’re making props or larger level geometry, the only real consideration is whether you have mesh holes that need to be closed because they are visible in reflections. LODs don’t need to change either; they function just as they always have and will be visible in reflections and the like too.

Texture work doesn’t really need to change much either, although artists might want to take more care in how often they mix highly reflective surfaces with non-reflective surfaces on a model. This is an oversimplification, but noisy roughness maps with a high variance in roughness values affect what can be called ‘ray coherency’, which in turn affects how efficiently the hardware can process rays and therefore how long the ray calculations take. Fully rough surfaces don’t need ray trace calculations done for reflections, while anything that is at or below the Max Roughness setting in a Post Process Volume will need some degree of additional calculation. Generally, it’s faster to either have a lot of reflective surfaces or very few, but not a noisy mix of both. Check out the video below for a more in depth explanation.

A great video covering almost all you need to know about Unreal’s Ray Tracing.

Unreal Engine’s current implementation allows users to set a roughness limit via a ‘Max Roughness’ setting in a Post Process Volume for both ray tracing and screen space effects. That setting tells the engine what the cut-off for ray traced reflections is, as it were. Smaller values run faster: with a setting of .5, anything at a roughness of .5 or less will be ray traced, and anything from .51 to 1 (MAXIMUM ROUGHNESS!!) will use cube map reflections generated by the reflection probes. This leads to another point as well: cube maps and shaders.

Cube map reflections are an absolute must today, and likely will be for some time. Ray tracing has the best visual improvements on highly reflective areas in a scene. For everything else that has moderate to no reflectivity, cube maps can continue to provide general reflection data. If you have a heavily smudged surface or a highly machined piece of metal with a very broad highlight, cube maps will suffice. In some limited testing, I’ve found .6 to be a solid setting for visualization scenes, while game scenes with more varied surfaces may work better at even lower thresholds.
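
As a rough sketch of how that cutoff behaves (this is not Unreal’s actual code, and every name in it is invented for illustration), the per-surface decision amounts to comparing roughness against the Max Roughness threshold and falling back to the pre-baked cube map when the surface is too rough to be worth tracing:

    #include <cstdio>

    struct Color { float r, g, b; };

    // Stand-ins for the real work; in an engine these would be a GPU ray
    // trace and a cube map lookup respectively.
    Color trace_reflection_ray(float /*roughness*/)      { return {1.0f, 1.0f, 1.0f}; }
    Color sample_reflection_cubemap(float /*roughness*/) { return {0.5f, 0.5f, 0.5f}; }

    // The cutoff described above: trace only when the surface is smooth enough.
    Color reflection(float roughness, float max_roughness)
    {
        if (roughness <= max_roughness)
            return trace_reflection_ray(roughness);   // expensive, accurate
        return sample_reflection_cubemap(roughness);  // cheap, pre-baked fallback
    }

    int main()
    {
        // With Max Roughness at 0.5, a roughness of 0.3 gets a traced
        // reflection while a roughness of 0.8 falls back to the cube map.
        Color traced  = reflection(0.3f, 0.5f);
        Color cubemap = reflection(0.8f, 0.5f);
        std::printf("traced %.1f, cube map %.1f\n", traced.r, cubemap.r);
        return 0;
    }

Tuning the threshold is then a straight trade between how many surfaces take the expensive branch and how many settle for the cube map, which is why a visualization scene can afford a higher value like .6 while a busy game scene may want something lower.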

Reflections on, GI off on the left. Translucency enabled on the right.

Shader and texture optimizations are also additional considerations, and thankfully Epic has made both pretty easy to implement. The number of materials in an environment can make a big difference in ray tracing performance, as each material necessitates a state change for a ray. Material instances cut down heavily on this performance cost. This could also mean that heavy use of packed texture maps for assets will be making a comeback (depending on the engine and ray tracing implementation, of course).

Material fallback options are another optimization that can be added to any material an artist makes. It’s a switch that activates on an object when it is being traced in reflections, refractions, or global illumination, and it replaces the entire logic of the material. For instance, I created a chrome ball that is visible as a chrome ball when viewed directly, but when viewed via ray traced visuals, it changes to a fully rough white surface (except in reflections, where it is black; I am unsure as to why). In practical terms, this should be used to simplify material logic down to its most important set of instructions so the asset can still be read in the reflection, while advanced material features are disabled for faster rendering.

Texture mip fallbacks (enabled in the project settings along with ray tracing) can help save texture memory by calculating which mip level should be shown on an object based on a voxel cone that factors in ray distance, texel density, and other aspects of the model.
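
Conceptually, the fallback idea looks something like the following sketch (again, invented names, not Unreal’s material system): the full shading path runs for the main view, while anything hit by a secondary, ray-traced evaluation gets a much cheaper stand-in.

    #include <cstdio>

    // Invented illustration of a material fallback: the full shading path
    // runs for the main view, while secondary (ray-traced) hits get a
    // stripped-down version so reflections, refractions, and GI are cheaper.
    struct Surface { float base_color; float roughness; };

    Surface evaluate_full_material()
    {
        // Imagine dozens of texture samples and node-graph instructions here.
        return {0.82f, 0.15f};
    }

    Surface evaluate_fallback_material()
    {
        // A handful of instructions: flat color, fully rough, nothing fancy.
        return {0.82f, 1.0f};
    }

    Surface shade(bool hit_by_secondary_ray)
    {
        return hit_by_secondary_ray ? evaluate_fallback_material()
                                    : evaluate_full_material();
    }

    int main()
    {
        Surface primary  = shade(false);  // what the camera sees directly
        Surface mirrored = shade(true);   // what shows up inside a reflection
        std::printf("primary roughness %.2f, mirrored roughness %.2f\n",
                    primary.roughness, mirrored.roughness);
        return 0;
    }

In practice the cheap path would usually keep the base color and drop everything else, so the object still reads correctly inside a blurry reflection.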

RT Final Gather for GI on, reflections off

The final question when creating content and assets for real time ray tracing comes in the form of lightmap UVs. Unreal and Unity both make use of lightmaps and baked lighting solutions to create believable and realistic bounce lighting in static scenes. This necessitates the creation of a second set of UVs that are used in engine to create small (or large, based on settings) textures that store shadow and light information. Theoretically, ray traced global illumination solves for this by calculating how light bounces around a scene in real time. It works too, as shown in games like Control and Metro: Exodus. At the same time, not every game is going to have the same budgets and engineering setup. Those two games use their own proprietary technology, giving the teams additional control over content. Regarding our example case, Unreal Engine, RTGI was very expensive prior to version 4.24. With the latest version, Epic added the Final Gather method of GI, which sped up their solution by quite a bit. Just look at the FPS counters below.

Final Gather (1 bounce, 16 samples/px), Brute Force (1 bounce, 16 samples/px), and Brute Force (2 bounces, 8 samples/px)

Epic’s previous GI method, coined ‘Brute Force’, functions a bit differently from Final Gather. Brute Force casts simulated photons from all of the lights that are eventually terminated at the camera. These photons carry light and color information from object to object before terminating, adding to the illumination of the scene. This is more physically accurate. Final Gather, on the other hand, casts light rays from the camera and samples the surrounding area, bringing the dark and bright areas a bit more into harmony with each other (as well as implementing color bleeding). It’s not physically accurate, but it works rather well. If you look above, the final image uses Brute Force GI, but with a twist: instead of a single bounce with 16 samples per pixel, it’s two bounces with 8 samples. It can be hard to notice in image form, but the upper right corner of that image is slightly brighter than in the other images. Brute Force supports multiple light bounces, exchanging performance for accuracy, while Final Gather makes GI more performant but only supports a single ‘bounce’, as it were.
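
A toy calculation, with made-up albedo and light values rather than anything from Epic’s implementation, shows why that second bounce brightens the corner: each additional bounce contributes light attenuated by one more surface interaction.

    #include <cstdio>

    // Toy numbers only: each bounce adds light attenuated by one more
    // surface hit, i.e. L*a + L*a^2 + ... + L*a^n for albedo a.
    float indirect_light(int bounces, float albedo, float light)
    {
        float total = 0.0f, throughput = 1.0f;
        for (int i = 0; i < bounces; ++i) {
            throughput *= albedo;
            total += light * throughput;
        }
        return total;
    }

    int main()
    {
        const float albedo = 0.5f, light = 1.0f;
        std::printf("1 bounce:  %.3f\n", indirect_light(1, albedo, light)); // 0.500
        std::printf("2 bounces: %.3f\n", indirect_light(2, albedo, light)); // 0.750
        return 0;
    }

The extra energy is real but modest, which matches how subtle the difference looks in the screenshots, and it comes at the cost of tracing an additional ray per sample.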

In the end, the answer to the question of the continued use of lightmap UVs remains “it depends”. If you want to make a video game targeting 60 fps or VR, you need lightmaps. Outside of that, it will depend heavily on your hardware, the project type (still images and movies vs. interactive scenes or worlds), and your overall workflow. Personally, I’ll still be making lightmaps for a while longer, even if I don’t use them.

Why does this matter?

Control, RT on left and SS on right
Metro Exodus, RT on the left. Metro uses a single GI bounce to add more color and light to parts of the scene, while the area behind the door is properly left in shadow.

It’d be natural to think that ray tracing is just another technical fad that Nvidia and Microsoft introduced in order to sell more GPUs. It’s worked…at least on me. I built a new PC in the fall and chose an Nvidia 2070 Super in part because of its RTX hardware. At the same time, I urge anyone to play Control on PC and to at least test ray tracing if you have a GTX 1060 or better. It’s a huge upgrade to the game. Crytek’s ray tracing demo runs on all hardware supporting DX 11, so I would suggest users with older hardware give that a go. Reflections can play a large part in a game like Battlefield V, giving players with advanced hardware just a bit more information than those without. Ray tracing has noticeably improved my gaming experience in the titles I’ve played that support it. It’s not world changing, but it’s a step even closer to truly having movie quality visuals in our interactive medium.

Ray tracing in real time is not perfect yet. As you may have noticed in many of the Control images, visual noise is pretty apparent. Even with denoising technology, the number of rays that can be afforded at playable frame rates is limited. Control is both ambitious and conservative at the same time with its ray tracing implementation. Reflections, refractions, and ambient occlusion are the stars of the show in Control. Global Illumination is in the game and looks great, but because of the high cost it’s pretty clear that it’s using a low sample rate per pixel. These are the trade-offs that must be made in games at this time. It should be noted that noise is an issue present to this day in V-Ray and its ilk. You need hundreds of samples per pixel in conjunction with the right render settings and implementations in order to remove noise without a denoiser.

Call of Duty: Modern Warfare. RT on the left. Infinity Ward went for a more subtle approach by only ray-tracing shadows from point and spot lights.

Where ray tracing really matters is in the explosion of real time rendering in different industries: architecture, product development, film, TV, and animation. We’ve already seen select shots from ILMxLab’s custom Unreal Engine make appearances in feature films (Rogue One, The Last Jedi), and Disney+’s The Mandalorian makes use of Unreal as well. Unity is used by a number of car companies for visualizations. Arch viz firms use both tools to create high quality walkthroughs for properties and scenes both in and out of VR. In an ever more on-demand, I-need-it-now world, real time delivery of high quality, ray traced scenes will eventually become the standard. Companies are taking notice, and if they can get 90% of the quality of Arnold or V-Ray in half the time or less, they will take that trade-off.

Ray tracing in real time is the future of not just games, but computer graphics as a whole. Just give it some time.
