Diving into the Depths of Unreal Engine 5: Early Access

An exploration into some of the new technology and techniques of Unreal Engine 5.

Daniel Rose · Published in GameTextures · 16 min read · Dec 1, 2021

Unreal Engine is truly a marvel of real time rendering technology. Created in the ’90s alongside the original Unreal and evolved through sequels, new franchises, and licensing agreements, the engine has now arrived at its 5th iteration. Unreal Engine has pushed the boundaries of real time since its inception, from the colored lighting and reflections that blew players away in 1996 to the massive upgrade in visual quality developers witnessed throughout Unreal Engine 3’s lifetime. Epic Games has a tall order for Unreal 5; Unreal 4 has been available to the public for nearly 7 years and has received many upgrades in that time, like the addition of Sequencer for cinematics and animations, automated LOD and Collision tools, and the ever clickbait-worthy Ray Tracing support. An argument could be made that Unreal 4, in its upcoming 4.27 release, doesn’t need much of an update.

That wouldn’t be fun, would it?

Epic didn’t think so, and on May 26th, 2021, they released an Early Access build of Unreal 5.0 to the world.

I have spent a good portion of my summer digging into the details of Unreal 5. I have included copious links to Epic’s video and documentation content as part of this breakdown; it’s excellent and helpful for expanding on some of my coverage. It’s worth noting that this is written from an environment artist’s point of view. While I have experience doing other work in Unreal 4, including some VFX and game prototype work, I haven’t had time to play around in all of the new tools that Epic has given us; Hot Vaxx Summer and all of that.

I am also not a graphics engineer, so while I’ll give a high level breakdown of most features, I can’t explain how they all work to the letter.

Remove Baking, Kill it if you have to.

Epic, Unity, Crytek, Amazon, Infinity Ward, Naughty Dog, Supergiant, Nintendo…name any developer and ask their opinion on Light Maps and Baked Lights, and the answer would be the same:

“It’s a necessary evil to hit frame rate, and I really hate it”.

Lights and shadows have been costly to render for the entirety of real time 3D. At the most basic level, the scene is rendered from the light’s point of view into a 2D depth map, recording how far the light can “see” in every direction before hitting a surface. When the scene is shaded, each pixel’s distance from the light is compared against that stored depth; if the pixel is farther away than what the light saw first, it sits in shadow. This technique has been modified and extended across the years to include bespoke solutions for specific lighting situations. Most open world games utilize a version of this technique (Cascaded Shadow Maps) for their sun or world lights to give players a good quality vs. performance trade off.
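
For the curious, the “mathematical magic” at the heart of this is a single comparison. A minimal sketch, assuming both depths have already been transformed into the light’s normalized space (the small bias is the standard fix for “shadow acne” artifacts):

```cpp
// A minimal sketch of the shadow-map depth test described above; this is an
// illustration, not engine code. depthFromLight is the value stored in the
// shadow map for this pixel's position as seen from the light; pixelDepth is
// the pixel's actual distance from the light in the same normalized units.
float ShadowFactor(float depthFromLight, float pixelDepth, float bias = 0.002f)
{
    // If the pixel is farther from the light than the first surface the
    // light "saw", something is blocking it: the pixel is in shadow.
    return (pixelDepth - bias > depthFromLight) ? 0.0f : 1.0f;
}
```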

What makes these types of shadows costly is the need to render them at frame rate, within the time budget each rendered frame gets before it is pushed to the screen. That budget is typically 16.66 milliseconds for 60 fps or 33.33 milliseconds for 30 fps (1000 ms divided by the target frame rate). Constantly updating the direction of the shadow and what is or isn’t shadowed can be costly.

Standard UVs on the left, lightmap UVs on the right. Automated tools help, but it can still be a tedious process.

This is why baked lighting was created. Baked lighting removed the need for lights and shadows to be rendered as the game is being played. Instead, they are created in an offline process by artists. Light intensity and shadows get calculated as part of a process called “baking”. This process takes the calculations and creates small textures for each item in the scene. These small textures then get packed into a single atlas for the scene and applied by the engine. While the specific process is different between Unity, Unreal, and other game engines that utilize baked lighting, the concept is the same.
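
As a rough illustration of why this is so cheap at runtime: all of the expensive light and shadow math collapses into one texture fetch and a multiply. A conceptual sketch with hypothetical names, not Unreal’s implementation:

```cpp
// Conceptual sketch of applying a baked lightmap at shading time.
struct Vec2 { float u, v; };
struct Vec3 { float r, g, b; };

Vec3 operator*(Vec3 a, Vec3 b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }

// Stand-in for a GPU texture fetch from the packed scene atlas, addressed by
// the model's dedicated (non-overlapping) lightmap UV set.
Vec3 SampleLightmapAtlas(Vec2 uv)
{
    return {1.0f, 1.0f, 1.0f}; // placeholder; a real engine samples a texture
}

// All the light and shadow calculation happened offline during the bake, so
// shading reduces to modulating the surface color by the stored lighting.
Vec3 ShadeBaked(Vec3 albedo, Vec2 lightmapUV)
{
    return albedo * SampleLightmapAtlas(lightmapUV);
}
```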

Examples of the final packed maps used by the engine.

Baked lighting can give some great results if you have the storage or RAM to utilize a larger shadow atlas. Because the lighting and shadow information is stored in a single texture, performance is often extremely good, and it is still suitable for many low end devices, or experiences that need extreme framerates like VR. The downside of this approach is that, even on modern hardware, baking an entire game level at an acceptable level of quality can take hours, even a full day. Artists also need to author a second (or third) UV set for each model, since no UVs can overlap when baking shadow information. These UVs can also be annoying to set up, since the textures are often very small. This can cause shadow information to be shared across areas where it shouldn’t, an error often called a “bleed”. Light maps also take up memory, and on some devices that premium can’t be given up. Finally, it’s impossible to change the lighting while playing the game, so you can’t have dynamic day and night cycles.

Crytek did away with baked lighting years ago, and Unreal developers have been clamoring for a true real time solution that performs well since the introduction (and eventual abandonment) of Light Propagation Volumes in the early days of Unreal 4. To be clear, LPVs did not perform well and weren’t terribly accurate either. I know because I used them in a scene once.

It took quite some time, but Epic heard the community and created a brand new way to get Global Illumination and high quality lighting and shadows without relying on lighting bakes. It’s called Lumen.

WIP being lit by Lumen.

Cutting out the Middlemen

Video game visuals have always had quite a bit of catching up to do with feature film visuals. That gap has closed significantly today, but it wasn’t always so close. Outside of shadows and lighting, one of the major problems games had with fidelity was polygon counts. Prior to the PS4 and Xbox One generation, polygons were always a concern. You couldn’t have too many on screen, they couldn’t be too thin, you couldn’t stack too many items…but you also needed to give an object enough geometry that it could easily be read by the player and look like the object in real life. It could be difficult.

This led to the creation of two different systems to handle complex detail: LODs and Tessellation.

LODs have been around for a long time. LODs, or Levels of Detail, are cut down versions of your final display model. During gameplay, the models swap based on distance or size on the screen. This actively reduces the number of polygons that need to be drawn at a given time while also reducing overdraw and alpha coverage (the screen area of a transparent item). This increases performance with a fairly minimal decrease in visual quality, unless you are forced to use only a handful of LOD models.
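
The swap logic itself is simple. A minimal sketch using camera distance as the metric (engines often use projected screen size instead, but the idea is identical):

```cpp
#include <vector>

// Conceptual sketch of distance-based LOD selection; not engine code.
struct LodLevel
{
    // ... mesh data for this detail level would live here ...
    float maxDistance; // farthest distance at which this LOD is shown
};

// Walk from most to least detailed and pick the first LOD whose range covers
// the current distance; beyond every threshold, use the coarsest model.
const LodLevel& SelectLod(const std::vector<LodLevel>& lods, float distance)
{
    for (const LodLevel& lod : lods)
        if (distance <= lod.maxDistance)
            return lod;
    return lods.back();
}
```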

LOD0 (high quality) on left, LOD2 (low quality) on right. A dense model, so not the best example, but this is the use case for LODs.

LODs have their downsides, mainly an increase in the amount of memory needed for a given mesh, as well as the need for artists to spend time generating each LOD model, although that has largely been addressed over the last decade thanks to automation from InstaLOD, Simplygon, Autodesk, Epic, and others. Thankfully, on the memory front, there is no longer a worry about fitting into PS3 RAM pools, so LODs are far more common across all games.

Tessellation is on for the left images (see the super ultra dense geometry) and off for the right. The visual impact is mixed here, but it can vary based on a variety of factors, from the height map’s format and quality to the division of the base mesh.

Tessellation refers to adding geometry to models through programmable shader instructions. It first saw wide use in 2010, when Metro 2033 used it to add more geometry to ears, noses, and other round features on human characters; it was a punishing benchmark game at the time. In the early days, tessellation only ran on PC, and it ran VERY slowly on anything that wasn’t Nvidia’s highest end GPU. With time and the launch of the 8th generation of consoles (PS4/Xbox One), tessellation became available to console developers. However, the GPU and RAM improvements over the previous generation allowed developers to significantly increase polygon counts, which enabled wider use of tessellation at a lower cost, since the GPU didn’t have to subdivide meshes nearly as much. Developers used it to add high frequency detail to objects close to the player, or to make materials pop or have extra depth. This was achieved through height maps and offsetting the vertices of the subdivided model, much like how feature films render high frequency detail to this day.
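
The offset step itself is straightforward: push each new vertex out along its normal by the sampled height. A conceptual sketch in plain C++ standing in for shader code, with a hypothetical height map sampler:

```cpp
// Conceptual sketch of height-map displacement, applied after the hardware
// tessellator has subdivided the mesh. Illustration only, not UE shader code.
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Placeholder for sampling the height map at this vertex's UV coordinates.
float SampleHeight(float u, float v) { return 0.5f; }

// Move the vertex along its normal, scaled by an artist-chosen strength.
Vec3 DisplaceVertex(Vec3 position, Vec3 normal, float u, float v, float strength)
{
    return position + normal * (SampleHeight(u, v) * strength);
}
```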

Much like everything else in game creation, this has a bit of a cost. Hardware tessellation has historically been expensive, and it can break down with extreme vertex offsets. Some shading features, like Parallax Occlusion Mapping, can be used to generate similar effects, but they can be limited by the power of your system. Finally, when using a height map to tessellate or to create parallax effects, that’s an extra texture that must be accounted for.

Epic wanted to find a way to simplify these different technical workflows and “extra steps” while simultaneously, and most importantly, increasing the fidelity of the actual models themselves. They needed a system that would work with Lumen and in turn created Nanite.

Explaining the Headliners: Lumen and Nanite

Nanite Visualization

Lumen and Nanite were created in an effort to alleviate a lot of the issues stated previously. In my tests and current project, I’ve found that you can remain pretty unaware of what Unreal is doing behind the scenes with both technologies and still create some pretty cool scenery. However, it’s worth taking some time to understand what Unreal is doing so you can debug your own work and take it to the next level.

Let’s start by dissecting Nanite.

A basic Nanite Overview from Epic. Must Watch.
Nanite visualizers.

Nanite’s main premise is the ability to import extremely dense 3D models, ranging from tens of thousands of triangles up to the millions, with the end goal being richly detailed meshes that do away with both the retopology process and the LOD creation work that many game artists have gotten used to over the past decade or more. It does this through ‘virtualized geometry’, and that is not a concept I can fully explain. It’s magic, it works, and it’s somehow fairly cheap: The Coalition’s tests for Alpha Point suggest a roughly fixed 2 millisecond cost for Nanite meshes no matter the scene density. That is magic.
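
Epic’s own deep dive talks are the real reference here, but the publicly described core idea is approachable: meshes are split into small clusters of triangles arranged in a hierarchy, and each frame the renderer keeps only the clusters whose simplification error would be invisible at the current screen size. A loose conceptual sketch of that selection, with every name hypothetical and none of it Epic’s actual GPU driven implementation:

```cpp
#include <vector>

// Loose sketch of error-driven cluster selection, the idea behind
// "virtualized geometry". Hypothetical names; not Epic's implementation.
struct Cluster
{
    float error;                   // geometric error this cluster introduces
    std::vector<Cluster> children; // finer-grained clusters beneath it
};

// Descend the hierarchy: if a cluster's error would project to less than a
// pixel on screen, draw it as-is; otherwise recurse into its children.
void SelectClusters(const Cluster& c, float pixelsPerErrorUnit,
                    std::vector<const Cluster*>& toDraw)
{
    if (c.children.empty() || c.error * pixelsPerErrorUnit <= 1.0f)
        toDraw.push_back(&c);
    else
        for (const Cluster& child : c.children)
            SelectClusters(child, pixelsPerErrorUnit, toDraw);
}
```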

Dense Nanite Mesh up close…
…becomes about 15 triangles really far away.

Within this magical land of limitless real time polygons are automated systems. These systems work under the hood to create the backbone for how Nanite works with the rest of the engine, and they allow for content to be built once and deployed everywhere: enter the world of Proxy Geometry.

Nanite Display Mesh
Nanite doesn’t display a “wireframe” as a traditional triangle mesh would.

Proxy Geometry is nothing new; it’s been around for years and has appeared in many of my favorite titles. I even used it early in my career when working on Uncharted: Golden Abyss. In practice, proxies act as extremely low poly representations of your final model. They are generally a bit smaller than your actual geometry, allowing them to remain hidden behind the final display model. Some proxies may have special material applications to assist with remaining hidden, while others may have no real materials applied at all. These proxies are typically used to render shadows in real time, or as collision models. This works because render geometry can be very detailed and ornate, causing shadows to either render slowly or miss some of the smaller greebles on a given model. Proxy models do away with the smaller intricate bits of a model and simplify the shape and polygon count, making them much cheaper to cast shadows from. This is how we cast real time shadows on the PlayStation Vita in 2011, and how Uncharted 4 handled its shadow work as well (with the key difference being automated systems in the latter title’s development).

Nanite Proxy

Nanite itself doesn’t need proxies to function, but if you’re working in Unreal Engine 5 and you would like to port to PS4, Switch, or another legacy system that does not support Nanite, you would otherwise be required to maintain two different sets of assets, at least based on how the current Nanite implementation functions. That would be a massive headache and would make it very difficult for Unreal users to build once and deploy everywhere, which has been the driving force behind some of their Unreal 4 tools like automated LOD creation, collision creation tools, light map generation, and so on. To maintain this goal while improving detail and work speed, Epic turned to proxies. When you import a model as a Nanite mesh, a Nanite Proxy is automatically created (although this CAN be disabled). The initial proxy is fully automatic, in both its accuracy and overall geometry count. Users can adjust this, and the recalculation time on a high end PC from 2019 is pretty small, even on dense meshes. It’s even possible to combine the Nanite Proxy with typical LOD tools, allowing for even more options when building content for a wide range of platforms.
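
For those who want to script this, the proxy settings live on the static mesh itself. Here’s a hedged sketch from editor-side C++; the NaniteSettings property appears in the Early Access source, but treat the exact member names as assumptions that may shift before release:

```cpp
#include "Engine/StaticMesh.h"

// Hedged sketch of adjusting a mesh's Nanite proxy from editor code.
// ASSUMPTION: member names are based on the UE5 Early Access source and may
// differ in your engine version; verify before using.
void ConfigureNaniteProxy(UStaticMesh* Mesh)
{
    Mesh->NaniteSettings.bEnabled = true;                  // treat mesh as Nanite
    Mesh->NaniteSettings.FallbackPercentTriangles = 0.05f; // proxy keeps ~5% of tris
    Mesh->Build(); // editor-only: triggers the rebuild, including the proxy
}
```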

So, Nanite’s Proxy Models are used to scale Nanite up and down with the content requirements of releasing games on multiple platforms. Nanite’s proxy models also have one other important helping hand to give when working in Unreal Engine 5: ray tracing support.

Real Time Ray Tracing, or RTX thanks to Nvidia’s branding power (RTT is often used for Render to Texture), is a powerful technology, and the hardware that accelerates it is phenomenal. However, tracing rays against tens of millions of triangles in a single scene would bring even a 3090 to its knees. To account for how ray tracing works in DirectX 12, as well as to mitigate the cost of tracing against Nanite geometry counts, hardware based ray tracing in Unreal 5 traces against the Nanite Proxy model. It’s important to keep this in mind if you’re working on visualizations or projects where accurate reflections are important (i.e., use more accurate proxies). Hardware ray tracing isn’t going anywhere, and Epic has stated it is dedicated to its continued development and support, but the technology still has limitations, and with the growth of large open worlds with millions of geometrically dense instances, a different solution was required.

Nanite got a lot of the headlines early and often within the development press, but Lumen has become the more impressive technology Epic has created for Unreal Engine 5. It’s imperfect, as the current performance of Lumen is a major factor in the “Early Access” designation for the publicly available engine, but it’s exceedingly easy to use and brings a big quality of life improvement for artists and developers.

So how does it work?

An overview of Lumen from Epic. Also a good watch!

Lumen is a combination of many techniques that developers have used over the years to implement reflections or global illumination, plus a few new tricks. Lumen starts by utilizing a Surface Cache, which acts as a quick way for Lumen to look up light hits and information about a given model in the scene (material info and the like). Once the cache is fully populated, Lumen does the math for direct and indirect lighting, giving us Global Illumination.

Lumen Cards on the left are used to generate the Lumen Scene on the right.

It’s oversimplified, but you can think of this element of Lumen as “Real Time Scene Light Maps + Temporal Anti Aliasing”. It uses generated texture data plus data from previous frames to give us GI. If you’ve wondered why it seems to take a split second to update dynamically, this is why.
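
That temporal half can be sketched in a few lines. Each frame’s noisy lighting result gets blended into a running history: low blend weights give stable results but lag behind sudden lighting changes, which is the split-second catch-up you see. An illustration of the general technique, not Lumen’s code:

```cpp
// Conceptual sketch of temporal accumulation; illustration only.
struct Vec3 { float r, g, b; };

Vec3 Lerp(Vec3 a, Vec3 b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Fold a small fraction of this frame's lighting into the history buffer.
// blend = 0.1f means roughly 10% new data per frame: smooth, but it takes
// several frames for a lighting change to fully settle.
Vec3 AccumulateLighting(Vec3 history, Vec3 currentFrame, float blend = 0.1f)
{
    return Lerp(history, currentFrame, blend);
}
```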

Lumen’s GI pass (WIP of course)

Lumen still has other techniques up its sleeve. It uses Screen Tracing in conjunction with Software Ray Tracing to further enhance the quality of what we see, both in terms of GI and reflections. Screen Tracing casts rays from the view of the player/screen and uses them to correct and enhance reflections or Global Illumination mismatches. Screen Tracing is fairly fast and has its roots in the SSR and SSGI techniques that debuted in the mid-2010s. Software Ray Tracing uses Signed Distance Fields, introduced back in Unreal 4.5 (so long ago!), to accelerate the trace. Since the trace happens against what is basically a volume texture representation of the scene, it’s fast enough for modern high end systems and consoles. Lumen performs a more accurate trace against individual mesh distance fields for roughly the first two meters, then falls back to a less accurate but more performant global scene representation after that.
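
What makes tracing against a distance field fast is the marching itself: at every point, the field tells you exactly how far away the nearest surface is, so each step can safely jump that entire distance. A minimal sphere tracing sketch of the standard technique (not Lumen’s implementation, and with a toy one-sphere scene standing in for the real volume texture):

```cpp
#include <cmath>

// Minimal sphere tracing against a signed distance field; illustration only.
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Toy scene: distance to a unit sphere at the origin. Lumen instead samples
// mesh/scene distance fields stored in volume textures.
float SceneSDF(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March along the ray, stepping by the sampled distance each iteration.
// The step can never overshoot a surface, so few samples are needed.
bool TraceSDF(Vec3 origin, Vec3 dir, float maxDist, float& hitT)
{
    float t = 0.0f;
    for (int i = 0; i < 64 && t < maxDist; ++i)
    {
        float d = SceneSDF(origin + dir * t);
        if (d < 0.001f) { hitT = t; return true; } // close enough: a hit
        t += d;
    }
    return false; // ray left the scene without hitting anything
}
```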

Mesh Distance Fields, a huge upgrade over the UE4 implementation. BSP brushes, used to block out parts of my scene, don’t have Distance Field representations.
Scene Distance Fields, lower quality representations of the overall scene. Used by Lumen for ray bounces more than 2 meters away from the camera.

Note how many different elements go into Lumen! Cache data, Screen Tracing, and Distance Fields all work together to create a really impressive system. It’s also worth noting that if you have hardware ray tracing capability, it can replace Lumen’s distance field tracing and generally give better results. But remember from the Nanite breakdown above that hardware ray tracing REQUIRES a Nanite proxy to trace against, and if you have 100,000 instances in your scene (or a very large world), it will not scale well performance-wise.

Other cool technology in UE 5

There is so much to talk about with Unreal 5 that it’s impossible to hit it all, and it’s impossible for me to understand it all at a fine enough level to break it down. Epic is going all in on highly detailed large worlds, and a lot of the technology here highlights that.

· Virtual Shadow Maps are a new way of casting shadows, intended to keep up with the detail level expected of Nanite based geometry. At their core, VSMs are really large shadow maps split into smaller tile pages that are then clipped (or mipped, depending on light type) based on distance and what is on screen. It’s in line with the goal of Nanite, Virtual Textures, and other technologies in Unreal Engine today: render only what you actually see. A rough sketch of the paging idea follows this list.

· While rendering technology that impacts workflow is always a headline grabber, the Open World Tools being added to Unreal 5 might impact more studios than you would think. World Partition acts as a replacement for the Sub Level workflow that many of us grew to…tolerate. Partition essentially allows for one large, streamable world broken up into grid cells, which helps to automate the open world creation process (in theory). There are a host of other tools that tie into this system that are better read about in Epic’s documentation.

· Control Rig and Full Body IK are huge improvements for those creating bespoke animations within the engine. Typically, animations happen outside your game engine on a skeletal mesh, and are then imported and applied through complex state machines, gameplay cues, or sequences. It’s also possible to store these animations as Alembic Cache files, which are vertex animations. With Control Rig and FBIK able to be set up and executed within the engine, the back and forth that some animations require should be reduced or potentially eliminated.
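
To make the Virtual Shadow Map bullet above a bit more concrete, here is the promised rough sketch of the paging idea: one enormous virtual shadow map addressed through a page table, where only the pages some visible pixel actually touches ever get allocated and rendered. All names are hypothetical; this is not Epic’s implementation:

```cpp
#include <cstdint>
#include <unordered_map>

// Rough sketch of virtual shadow map paging; hypothetical names throughout.
struct Page
{
    // A small tile (e.g., 128x128 texels) of shadow depth, rendered on demand.
};

struct VirtualShadowMap
{
    static constexpr uint32_t PageSize = 128; // texels per page side

    std::unordered_map<uint64_t, Page> residentPages; // only what's visible

    static uint64_t PageKey(uint32_t pageX, uint32_t pageY)
    {
        return (uint64_t(pageX) << 32) | pageY;
    }

    // Allocate (or fetch) the page covering this virtual texel. Pages no
    // pixel requests are simply never created, let alone rendered.
    Page& RequestPage(uint32_t texelX, uint32_t texelY)
    {
        return residentPages[PageKey(texelX / PageSize, texelY / PageSize)];
    }
};
```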

Content Authoring notes for Lumen and Nanite

Zbrush is back!

As is typical when new consoles or game engines come out, content workflows often need to be adjusted to take advantage of the changes. With PS4/Xbox One and Unreal 4/Unity 5, it was the shift to creating Physically Based Materials. With Unreal Engine 5, the biggest change mostly comes from navigating the creation of dense, detailed meshes, figuring out what to do with baked maps (if they’re even needed), and learning how to build scenes themselves.

A bench asset from one of my asset packs. Built for normal UE4 use.
Super dense modular floor. This has more triangles than 90 or so of those benches.

The largest change I am experiencing is the increased reliance on modular pieces for everything. I have a floor that is nothing but kitbashed tiles, similar to The Coalition’s work on Alpha Point. This is more of a Lumen limitation than anything else, as Nanite can support flat ground planes. Currently, due to a lack of support for traditional displacement mapping and, more importantly, Lumen’s inability to resolve flat, one sided geometry well (distance fields don’t like it), modularity is even more important than it was in the past.

Bench UVs: nice, simple, clean.
Floor tile UVs. Not bad to work with in Blender, but definitely not optimal.

The other big change I’m adjusting to is authoring geometrically dense content. My main workflow relied on quick, basic shapes from Maya or Blender that I then move to ZBrush for sculpting. I use a tool called DynaMesh to re-mesh my models in ZBrush, giving even geometry distribution for easy and clean sculpting. I then export the final sculpt and my decimated or ZRemeshed model for final UVs, baking, and the like. If a model needed a true retopology pass, I would give it one, but most rocks or sculpted tree trunks don’t need much more than a few minor fixes. With the level of model detail Nanite can afford, I’m working out ways to speed up my workflow. Presently, I’m still decimating models, but to a much lesser degree, and Blender’s selection tools help me UV my models fairly quickly. But it feels like extra steps, and extremely detailed models with millions of triangles may not play nicely in Blender’s UV editor. I’m hoping to dig into some of the ideas mentioned in Epic’s Nanite video to speed up the process.

Outside of these adjustments, working in Unreal 5 has been a joy. No light maps, no baking, and knowing the engine is automating the tedious parts of my work is a delight. I’m looking forward to the final release slated for early next year and creating content in it for years to come.

Looking for great textures for your own 3D journey? Check out the GameTextures library!
