Photogrammetry at Embark (Part 3)

Robert Berg
Embark Studios
May 16, 2019 · 6 min read

This is part of a series of posts that describe how we work with photogrammetry at Embark. It’s the third and last entry. You can read the first post here, and the second post here.

Documenting my scans on the phone.

It is time for the final part in our series about how we work with photogrammetry here at Embark.

Andrew gave you a high-level overview of photogrammetry, Pontus told you about how we work when we arrive at a location, and it’s my turn to detail how we go from having a bunch of photos to creating in-game assets.

I work as an environment artist at Embark. My own journey with photogrammetry began back in 2014, after playing The Vanishing of Ethan Carter, a great horror adventure game that showed what amazing results can be achieved with photogrammetry even with a very small team. You can check out my personal explorations as well as the professional work I’ve been involved in on my ArtStation.

Photogrammetry is sometimes seen as a way of cheating your way to high-quality results. While it can be a valuable shortcut, there's certainly more to photogrammetry than throwing a high-quality asset with high-resolution textures into a game.

When using photogrammetry as part of game development you need to balance high-end visuals with the memory constraints and render budgets of current gaming hardware. That’s where the fun begins, and that’s when you need to get creative.

So let’s dig into what we did after coming back from our latest photogrammetry trip on Tenerife.

The first order of business was to sort through the more than six terabytes of photos that we gathered while there. In our previous workflows, this used to be done manually and could take weeks.

We have some great minds working on improving these workflows, like our technical artist Paul (aka MoP), who wrote a tool for us called Sortie that does all that work for us. All we have to do is separate each scan set with a black image (by holding a hand in front of the lens and snapping a photo after finishing a scan), and Sortie then puts each set of scans into a folder of its own.
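To give a feel for how simple the underlying idea is, here's a minimal sketch of black-frame splitting in Python. This is an illustration under my own assumptions, not Embark's actual Sortie tool; the folder names and brightness threshold are made up:

```python
# Walk a folder of photos in capture order and start a new scan-set
# folder every time a near-black separator frame is found.
from pathlib import Path
import shutil

from PIL import Image  # pip install Pillow

BLACK_THRESHOLD = 10  # mean 8-bit brightness below this counts as "black"

def mean_brightness(path: Path) -> float:
    """Average pixel value of a heavily downscaled grayscale copy."""
    with Image.open(path) as img:
        thumb = img.convert("L").resize((32, 32))
        pixels = list(thumb.getdata())
    return sum(pixels) / len(pixels)

def sort_scans(source: Path, target: Path) -> None:
    set_index = 0
    for photo in sorted(source.glob("*.jpg")):  # sorted = capture order
        if mean_brightness(photo) < BLACK_THRESHOLD:
            set_index += 1   # separator frame: begin a new scan set
            continue         # the black image itself is discarded
        dest = target / f"scan_{set_index:03d}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(photo, dest / photo.name)

sort_scans(Path("raw_photos"), Path("sorted_scans"))
```

A production tool also has to deal with raw formats, subfolders and misfired separator shots, but the core loop stays about this small.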

Once we've made sure all photos are sorted correctly, it's time to get the images ready for processing. The priority at this stage of the workflow is to minimize the difference between light and shadow in the images, to flatten them, so to speak. This gives you a cleaner texture to work with, which helps a lot down the line when it's time to de-light the asset.
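There are many ways to do this flattening, and the exact adjustment is a matter of taste. As one assumed approach (not necessarily our exact pipeline), here's a rough numpy sketch that pulls each pixel's luminance toward the image mean, lifting shadows and taming highlights:

```python
import numpy as np
from PIL import Image

def flatten(path: str, strength: float = 0.5) -> Image.Image:
    rgb = np.asarray(Image.open(path).convert("RGB")).astype(np.float32) / 255.0
    # Per-pixel luminance estimates how lit or shadowed each area is.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    # Blend every pixel's luminance toward the image's mean luminance.
    target = lum * (1.0 - strength) + lum.mean() * strength
    gain = np.where(lum > 1e-4, target / np.maximum(lum, 1e-4), 1.0)
    out = np.clip(rgb * gain[..., None], 0.0, 1.0)
    return Image.fromarray((out * 255).astype(np.uint8))

flatten("scan_0001.jpg").save("scan_0001_flat.jpg")  # placeholder file names
```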

We then bring the images into RealityCapture, a program where the true black magic happens. RealityCapture finds matching points across several images, uses them to align the cameras, and creates a textured 3D model in the process.
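RealityCapture's internals are proprietary, but the core idea of finding point correspondences between overlapping photos can be illustrated with off-the-shelf tools. A minimal sketch using OpenCV's ORB features (the image file names are placeholders):

```python
import cv2

img_a = cv2.imread("rock_001.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("rock_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive feature points and compute binary descriptors for each.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Brute-force Hamming matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
print(f"{len(matches)} candidate point correspondences")
```

From correspondences like these, a photogrammetry package triangulates camera positions and a dense point cloud; that reconstruction step is where the real sorcery lives.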

Point cloud and camera angles of a scanned rock in RealityCapture.

Our record for this trip was a scan that combined 2,500 images, using both regular camera photos and drone footage. RealityCapture managed to align all these images, producing a 3D model with a whopping 1.4 billion triangles (THICK!).

While 1.4 billion triangles is an extreme case, it’s not rare for raw RealityCapture models to have hundreds of millions of triangles, far too many for any 3D application to handle with good performance.

That’s why we use a stand-alone program called xNormal to transfer detail from the high-poly scans to the in-game models. The benefit of xNormal is that it doesn’t have a viewport, which means it never has to display any of the models and can handle a seemingly unlimited number of triangles.

From here, the traditional way to finalize the in-game model would be to decimate the high-poly model toward a target triangle count and then manually clean up the geometry until it hits that target. This is a tedious task that can take a single artist hours, if not days, depending on the complexity of the asset.

Reference image of the volcanic biome.

At Embark, however, we’re a small team with few artists. Since we can’t throw lots of people at these tedious tasks, we instead strive to simplify our workflows and automate as much as we possibly can, giving artists back valuable time to spend on more meaningful work.

To that end, our technical artists have built a tool for us in Houdini that automates this process. The tool quickly creates the game model at an exactly specified triangle count, then UV-maps it and bakes the high-poly detail down to textures with the click of a few buttons.

But it doesn’t stop there. It also lets us mix and match different scans, combining separate models into a seamless mesh, which allows us to create completely new assets from existing ones. Just recently we built a similar plugin for Blender, a program many of us artists here have come to love since the release of the 2.8 beta!
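Our own tools aren't public, but the skeleton of this kind of automation is easy to sketch in Blender's Python API. The triangle budget and the steps below are illustrative assumptions, not our actual plugin:

```python
import bpy

TARGET_TRIS = 20_000  # hypothetical in-game triangle budget

obj = bpy.context.active_object
# Estimate the current triangle count (an n-gon fans into n - 2 triangles).
current_tris = sum(len(p.vertices) - 2 for p in obj.data.polygons)

# A collapse-decimate modifier with ratio = target / current lands close
# to the requested budget in one step.
mod = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
mod.ratio = min(1.0, TARGET_TRIS / max(current_tris, 1))
bpy.ops.object.modifier_apply(modifier=mod.name)

# Quick automatic UVs; a production tool would place seams far more carefully.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')
```

Baking the high-poly detail onto the result would be the next step, which Blender itself can also handle through its Cycles bake settings.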

Once the final game-res model is created and the textures from the high-resolution model are baked down, it’s time to clean up and de-light the asset.

Cleaning up, in this case, means filling in missing information, like an angle that you didn’t get enough pictures of, or spots that were too dark or bright for RealityCapture to read. The simplest way to do this is to bring the model into Substance Painter and clone-stamp over any missing spots, with the great benefit of stamping through all relevant channels at the same time.

De-lighting is the process of removing all the ambient and directional light that fell on the object at the time of scanning. Any traces of light left in the textures will conflict with the game engine’s own lighting, resulting in incorrectly shaded assets and sad lighting artists.

The same rock structure before (left) and after (right) de-lighting in Substance Painter.
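We do this in Substance Painter, but the underlying math can be approximated in a few lines. One common trick (an assumption here, not necessarily what Painter does internally) is to divide the captured color by a baked ambient-occlusion map, brightening the crevices that never saw direct light; the file names are placeholders:

```python
import numpy as np
from PIL import Image

lit = np.asarray(Image.open("rock_basecolor.png").convert("RGB")).astype(np.float32) / 255.0
ao = np.asarray(Image.open("rock_ao.png").convert("L")).astype(np.float32) / 255.0

# Dividing by occlusion lifts the shadowed crevices, leaving a flatter
# texture for the engine to re-light. Clamp AO to avoid blowing out pixels.
delit = np.clip(lit / np.maximum(ao[..., None], 0.05), 0.0, 1.0)
Image.fromarray((delit * 255).astype(np.uint8)).save("rock_basecolor_delit.png")
```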

To limit the memory footprint of the asset, we make heavy use of detail maps, which allow us to lower the resolution of the unique maps while keeping the high-frequency detail we need. Detail maps are an extra set of scanned textures that you tile on top of the unique textures, increasing fidelity without the need for huge textures.
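The savings are easy to see with back-of-the-envelope numbers (illustrative sizes, assuming uncompressed RGBA8 at 4 bytes per texel; real budgets and texture compression differ):

```python
def texture_mb(size: int) -> float:
    """Memory of a square RGBA8 texture, in megabytes."""
    return size * size * 4 / (1024 ** 2)

print(f"4K unique map:          {texture_mb(4096):.1f} MB")              # ~64 MB
print(f"1K unique + 512 detail: {texture_mb(1024) + texture_mb(512):.1f} MB")  # ~5 MB
```

And since a single detail map is tiled across many assets, its cost is paid once rather than per asset.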

A shot of the volcanic biome in Unreal Engine 4.

The video below is the end result of all the work we’ve done in the weeks since we got back from the trip. Our goal has been to nail the small- to medium-scale details of our biome, as opposed to the large-scale vistas and atmosphere that were the focus of our first environment test.

In this demo in UE4 we aimed to reach high visual fidelity on the small- to mid-scale.

Our ambition as we continue our work is to remove manual, repetitive tasks from the artists’ workflows. In these early stages of pre-production, we’re devoting a lot of time to developing quality-of-life tools. If something is time-consuming and repetitive, we aim to automate it!

That means we can focus on the fun and meaningful work, like composition, lighting and world building.

Another in-engine shot of the environment.
