Practical Photogrammetry — Digitizing Real-world places

Jon Baginski
10 min read · Jul 8, 2017


The Iceland DC3 VR Film running in Unreal Engine.

*Disclaimer: Photogrammetry is a skill that takes a lot of time and practice to get right. Quality in this field is paramount, and it's better to get into the business when you are ready, not when you aren't. Learn from everyone, share what you can, but never under-bid a project and deliver worse results than your competitors. Keep this industry healthy.

In 2006, while working for a games studio, I stumbled across this great little content creation process called photogrammetry. It's been an interesting ride since: learning lots of great workflows, tips and tricks from others, not to mention travelling the world and capturing some iconic places.

Paul Debevec (original inspiration): http://www.pauldebevec.com/

RISE VR by David Karlak (2013 inspiration for ultra-high-detail VR scenes): https://www.kickstarter.com/projects/1725739753/

This is the first part of a series where I’ll show my workflow and some of the not-so-common approaches to digitisation using photogrammetry.

Part One: The Kit

DJI Phantom Drone capturing top of the DC3 and ground plane reference

This is the simplest part to explain, and I hope not to dwell here too much. If I were to lay my commonly used equipment out on the floor, it would be as follows:

  • DSLR body (Nikon D810, Sony A7r2, Hasselblad)
  • Lenses: 14mm prime, 14–24mm zoom (anything wide and sharp is good)
  • Lighting: Speedlite, diffusers, polarising filters
  • Laser scanner: Z&F 5010X (optional)
  • Drone: DJI Mavic, Phantom IV
  • Lots of SD cards, batteries, hard drives
  • Lens cleaning cloths, X-rite chart and a reliable camera bag.

The Software.

This can be a little different from project to project, but the most common programs are: Lightroom, Photoshop, Zbrush, 3DCoat, Substance Designer, Substance Painter, Agisoft, Capturing Reality and Blender (it's better to preview a mesh in Blender than to import it straight into a game engine, due to the engine's automatic LOD/mip processing). Z&F's laser scanning software, Laser Control, supports cloud-to-cloud alignment for scan data.

Misc:

A laptop with as much power as possible, an EyeFi card for wireless transfer/tethering, a tripod for light probe capture and steady shots, a colour checker (Macbeth or X-rite is fine), a rain coat and, most important of all: patience and determination.

Part Two: Data Capture Process.

Everything begins with some scouting. You really want to establish how to break a complex area into simpler areas — and watch for things that change over time (moving furniture etc).

I usually begin with setting up the laser scanner so that there is a GPS world orientation and coordinate system and long-range reference for all of the photogrammetry. Without the scanner I would have to spend tedious amounts of time doing manual alignment or else lose a lot of photos/components in the process.

Keep in mind that a laser scanner has its own problems, namely reflections and noise due to atmospheric conditions.

Once the laser scanner is going, I’m good to shoot with the DSLR. Two birds with one stone is the aim here.

A good laser scan takes approximately 15 minutes at minimum, and you want as much detail as possible for the areas of interest, so move the laser scanner around strategically to get as much coverage as time permits. Meanwhile, think of the photogrammetry camera as a hand-held scanner that fills in the rest, if you will.

Stay out of the line of sight of the scanner, and use a wireless device to remote control the laser scanner if possible (pause/resume etc).

Using a narrow aperture (around f/8) and a wide lens, I usually tape the zoom ring (if present) to lock it down. Nowadays it doesn't matter so much, but the more careful you are with shooting, the fewer compounding errors you will get.
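
If you want to verify after the fact that nothing drifted during a shoot, a quick script can check the EXIF of every frame. This is just a minimal sketch, assuming a recent version of Pillow and a hypothetical folder name, not part of any particular package:

    # Minimal sketch: confirm focal length and aperture stayed locked for a shoot.
    # Assumes a recent Pillow; the folder name is a placeholder.
    from pathlib import Path
    from PIL import Image, ExifTags

    def exif_value(path, tag):
        # Focal length and f-number live in the Exif sub-IFD
        return Image.open(path).getexif().get_ifd(ExifTags.IFD.Exif).get(tag)

    shots = sorted(Path("shoot_day1").glob("*.jpg"))
    focals = {float(exif_value(p, ExifTags.Base.FocalLength) or 0) for p in shots}
    fstops = {float(exif_value(p, ExifTags.Base.FNumber) or 0) for p in shots}

    if len(focals) > 1 or len(fstops) > 1:
        print("Warning: focal length or aperture changed mid-shoot:", focals, fstops)
    else:
        print(f"{len(shots)} frames, all at {focals} / f{fstops}")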

Here's where I go a little off-track from traditional photogrammetry workflows.

I avoid shooting RAW unless it's absolutely necessary. SAY WHAT? you ask? It's easy to explain.

In film, we have the option to shoot S-Log, essentially an extremely flat profile, so that the maximum amount of information is contained in the image. The reason is simple: 4,000 RAW files eat up space and processing power/time. If you can take a colour chart reading and then set up an S-Log or flat profile, just shoot JPG! It saves you a ton of space and frees your camera buffer so you can shoot like a tourist photographer at 9fps.
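
As a rough back-of-the-envelope comparison (the per-file sizes below are my own assumptions for a ~36MP body, not measured figures):

    # Rough storage comparison for a 4,000-shot capture.
    # Per-file sizes are assumed ballpark figures for a ~36MP body.
    shots = 4000
    raw_mb, jpg_mb = 75, 20

    print(f"RAW: ~{shots * raw_mb / 1024:.0f} GB")  # ~293 GB
    print(f"JPG: ~{shots * jpg_mb / 1024:.0f} GB")  # ~78 GB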

Slog comparison from upcoming Redwoods VR Project

Here's what you need to decide when going down this path: is the subject matter heavily occluded? If so, you need more photos; there's no other way about it. This is especially the case for environments in VR, where missing areas show up much more evidently than in 2D space.

If it's something simple like a rock or a game/film/VR asset, then you are fine to use JPG. If it's some sort of historical preservation project where detail and colour accuracy are paramount, then you should consider shooting RAW with a JPG sidecar. Worth noting is the fact that a JPG from the camera still contains all the important metadata intrinsics.

Tip: Disable in-camera distortion correction for JPGs. You don't want the camera applying it, as the built-in lens profiles might not be good enough. Better to shoot a busy photogrammetry "test" area to get the correct distortion values for your lens, then update your data accordingly.

Note: The Brown lens distortion model corrects radial distortion, but not vignetting! You can still correct for vignetting in post if you like, but it's only worth doing if it's severe.
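
For reference, the radial part of the model is just a polynomial in the squared radius. The sketch below shows the forward model on normalised image coordinates; the coefficients are made-up numbers, and in practice your photogrammetry package estimates and inverts this for you:

    # Forward Brown radial model on coordinates normalised around the principal point.
    # The coefficients here are made-up; the solver estimates k1..k3 for your lens.
    def brown_radial(x, y, k1, k2, k3=0.0):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        return x * factor, y * factor

    # A point near the corner of the frame gets pulled inward by barrel distortion (k1 < 0)
    print(brown_radial(0.8, 0.4, k1=-0.12, k2=0.03))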

There have been several instances where I tried to shoot everything at the highest quality and it resulted in missed areas or not enough detail. Nowadays I regularly shoot upwards of 4,000 shots per project. Your camera shutter will hate you, but your clients will love you.

The process of actually covering an object is also straightforward: imagine painting the scene with your camera as a heat map, where green means you have covered an area well and red marks areas you need to go back to. It's a mental exercise that is hard to explain, but if you look up "heat maps" you will get the idea. Just don't use it on your significant other (bad joke).
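
Purely to illustrate the mental model (this is not a real tool): treat the site as a coarse grid and count how many frames cover each cell; zero-count cells are your "red" areas.

    # Toy illustration of the coverage heat map idea, not a real tool.
    from collections import Counter

    coverage = Counter()

    def log_shot(cells):
        # cells: grid squares (col, row) you judge the frame to have covered
        coverage.update(cells)

    log_shot([(0, 0), (0, 1), (1, 0)])
    log_shot([(1, 0), (1, 1)])

    for row in range(3):
        print(" ".join(str(coverage[(col, row)]) for col in range(3)))
    # Cells still at 0 are the "red" areas to revisit.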

Otherwise use a notebook and make sure you are highlighting completed areas. Shoot everything as though you will never get a chance to go back. Trust me: if you think for a second you will be back the next day to finish a job, imagine the horror of returning to find everything is different and you now have to restart the process.

I use a variety of things to help with coverage. Drones are great but not always allowed, for various reasons, so using a tall tripod and lifting it as high as possible in the air (fully extended) works pretty well. Bear in mind that you have a $6,000 camera attached to that tripod; take extreme care with it and especially with the people around you. Take out camera insurance and public liability insurance (a must!).

Using variable distance and height is essential to getting correct geometry in a scene. Photogrammetry apps have now figured out how to do frustum priority, meaning closer camera positions take preference over distant ones. This is a very useful thing.

Use a tripod to maximise those megapixels. When I first started doing landscape work, it became apparent that a medium format camera needs to be "hands off", otherwise you are not utilising all the megapixels it has to offer. Slight camera shake, although minimal, can cost you half your image resolution. It's entirely up to the camera operator to get this right.

When you are finally done capturing all the photos, make sure to take some 360° HDR photos to record all of the lighting info in the scene, and potentially get some free environment backgrounds for your VR project. The lighting info is used to remove things like directional shadows; it's not really effective in heavily diffused lighting conditions.
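
If you want to assemble the bracketed probe shots yourself rather than in dedicated HDR software, OpenCV's Debevec merge will do it. A minimal sketch; the filenames and exposure times are placeholders for your own bracket:

    # Minimal sketch: merge a bracketed light probe into an HDR image with OpenCV.
    # Filenames and exposure times are placeholders for your own bracket.
    import cv2
    import numpy as np

    files = ["probe_-2ev.jpg", "probe_0ev.jpg", "probe_+2ev.jpg"]
    times = np.array([1 / 1000, 1 / 250, 1 / 60], dtype=np.float32)  # shutter speeds in seconds

    images = [cv2.imread(f) for f in files]
    hdr = cv2.createMergeDebevec().process(images, times)
    cv2.imwrite("probe.hdr", hdr)  # Radiance HDR, usable as an environment/de-lighting reference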

Capturing HDR Light Probe for De-lighting (using nodal point adaptor)
Partial Lighting removal — running in Unreal (high poly count)

Remember to use the X-rite chart! Also, if the conditions change, group the photos and X-rite chart shots accordingly. And triple-check your white balance: you are not an art photographer here, but a technical photographer!

Get X-rite chart here: http://xritephoto.com/colorchecker-classic
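
One way to make the chart do the work: sample a neutral patch and derive per-channel gains from it. A rough sketch with hypothetical patch coordinates (in practice Lightroom's eyedropper or the chart vendor's software does this for you):

    # Sketch: derive per-channel white balance gains from a neutral patch of the chart.
    # Patch coordinates are hypothetical; sample them from your own chart frame.
    import cv2
    import numpy as np

    img = cv2.imread("chart_frame.jpg").astype(np.float32)
    y0, y1, x0, x1 = 400, 440, 620, 660  # neutral grey patch region
    patch = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # mean B, G, R

    gains = patch.mean() / patch  # scale each channel toward neutral
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
    cv2.imwrite("chart_frame_wb.jpg", balanced)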

Part Four: Photogrammetry Processing.

The general rule I use here is to group things into batches of 2,000 photos each. Why? Because, for various reasons, once you cross a certain number of photos it takes exponentially longer to calculate a mesh. This is a good reason why software such as Capturing Reality is set to a maximum of 2,500 photos. Trust me, it's better to produce assets faster and free up a workstation than to sit around waiting.

Photogrammetry app of choice: https://www.capturingreality.com/
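
The grouping itself is trivial; the judgement is in splitting by physical area rather than by file order. A sketch of the batching step with a hypothetical folder layout:

    # Sketch of the batching rule: keep each component at or under 2,000 photos
    # so alignment and meshing stay fast and the workstation frees up sooner.
    # In practice you'd split by physical area, not just file order.
    from pathlib import Path

    def chunk(items, size=2000):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    photos = sorted(Path("project/jpg").glob("*.jpg"))  # hypothetical folder
    for n, group in enumerate(chunk(photos), start=1):
        print(f"component_{n:02d}: {len(group)} photos")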

Preparing the photos is easy. I've seen people convert photos to TIFFs and all sorts of formats to preserve colour info. 8-bit is the devil, they say! But seriously, if that 8-bit file contains everything you need, it can be better than a 14-bit RAW photo that is under- or overexposed.

Having JPGs ready means I can instantly scroll through photos and check for sharpness, detail and so on. This alone saves me around 3 hours of processing. Flag and remove in Lightroom as needed.
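
If you'd rather let a script pre-flag the soft frames before the Lightroom pass, the variance-of-Laplacian measure works well enough. A sketch; the threshold is an assumption you'd tune against a few known-sharp frames:

    # Sketch: pre-flag soft frames with the variance-of-Laplacian sharpness measure.
    # The threshold is an assumed starting point; tune it per lens and subject.
    import cv2
    from pathlib import Path

    THRESHOLD = 100.0

    for path in sorted(Path("project/jpg").glob("*.jpg")):
        gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if score < THRESHOLD:
            print(f"possibly soft: {path.name} (score {score:.1f})")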

Generally I get 4x as many photos per card as when shooting RAW. And the fact that RAW doesn't apply grading and tweaks to your exported image by default means more time spent in Lightroom developing and syncing settings.

Adding a bit of contrast back and colour correcting the JPGs (or RAWs) can be done before or after the meshing process. Use the X-rite chart to get the colour spot on. As a rule, I only process while I sleep unless absolutely necessary; staring at progress bars is not productive, so batch process anything you possibly can. Learn command line tools to set timers, callback actions and so on, so that everything runs in a nice chain (learning Python is beneficial, but you can find most of these things on Stack Overflow or support forums).
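
The chain doesn't have to be clever. Here's a bare-bones sketch of an overnight batch runner; the commands are placeholders, not the actual command line of any of the packages above:

    # Bare-bones overnight batch runner: run each step in order, log how it went,
    # and keep going so the machine is never idle. Commands are placeholders.
    import subprocess
    import time

    jobs = [
        ["echo", "align component_01"],
        ["echo", "mesh component_01"],
        ["echo", "texture component_01"],
    ]

    for cmd in jobs:
        start = time.time()
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
        print(f"{' '.join(cmd)}: {status} in {time.time() - start:.0f}s")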

Tip: I rarely use high-detail settings for the mesh. I don't see enough gain to justify the time. Also, the mesh I export is usually filtered and reduced to something manageable; 5–10 million polys is generally fine.

Once the mesh is done, I export it to Zbrush, split it into groups based on topology and clean up anything I can using Dynamesh/ZRemesher. Once you are done with cleanup, it's time to use 3DCoat and get the unwrapping done. There are countless tutorials on this and it's not practical to go into detail here.

Tip: You can use a great piece of open source software called Instant Meshes, by Wenzel Jakob, to get clean topology and easy-to-work-with meshes. https://github.com/wjakob/instant-meshes

Zbrush and 3Dcoat cleanup result

Get Zbrush here: http://pixologic.com/

The end result should be a nice, clean mesh which you can collapse back together, keeping the alignment intact with the original export, and then re-project textures onto as needed. No magic here!

Rocks were decimated and ready to use directly from Capturing Reality

Before you do your texture projection, consider treating your original images at this stage. Since you are done with the meshing, you can now do whatever you want with the images, so long as you don't change the format, size or anything else that would throw an error in the photogrammetry package. It's also a good time to use the light probe capture to eliminate hard shadows, or to use de-lighting filters like the upcoming Unity de-lighting tool (available soon for free). Compare the results from your light probe capture against those of the de-lighting tool to see which works best.
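
Conceptually, probe-based de-lighting boils down to dividing the captured texture by the lighting you reconstructed from the probe. Dedicated tools do far more than this, but the sketch below, with hypothetical file names, shows the core idea:

    # Conceptual sketch of probe-based de-lighting: divide the captured texture by an
    # irradiance map baked from the HDR probe to approximate albedo. Real de-lighting
    # tools do considerably more; file names here are hypothetical.
    import cv2
    import numpy as np

    texture = cv2.imread("projected_texture.png").astype(np.float32) / 255.0
    irradiance = cv2.imread("baked_irradiance.png").astype(np.float32) / 255.0

    albedo = texture / np.clip(irradiance, 1e-3, None)
    cv2.imwrite("albedo_approx.png", np.clip(albedo * 255.0, 0, 255).astype(np.uint8))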

Closeup on the front of the DC3.
Closeup on the rear. The tail was ripped off by the owner/farmer.

If you plan to dynamically light your model in a game engine, consider going down the PBR workflow. There is a neat online tool called Artomatix which does a great job of splitting your textures into PBR components. If you skip this step, you will have visual mismatch issues between your CGI-generated assets and your photogrammetry assets.

Use Artomatix here: https://artomatix.com/

Tip: If you would like a realtime asset, consider exporting a mesh with fewer polys and as few texture maps as possible. 4K textures can be mipped inside the game engine editor automatically.
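
For a sense of what automatic mipping costs: each level is a quarter the size of the one before, so the full chain adds roughly a third on top of the base 4K texture.

    # Mip chain cost for a 4K texture: each level is a quarter of the previous one,
    # so the whole chain adds roughly a third on top of the base level.
    size, base = 4096, 4096 * 4096
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    print(f"extra memory from mips: {total / base - 1:.1%}")  # ~33.3%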

Once your final projection is done, your mesh is ready to throw into Unity/Unreal or application of choice.

Screencap from within Unreal.
Screencap from Unreal — Distance field turned off to show end of mesh.
The DC3 running in Unreal — 8x4k Textures total

From there you can decide to view in VR/AR or even make game experiences based on your real-world captures.

Part Five will cover Advanced topics and will be the result of feedback from the community. Ask away and feel free to email me by using the contact button at http://www.jonbaginski.com/bio/

Facebook: www.facebook.com/jonbaginskivfx

Short Bio: Jon Baginski is a technical artist currently freelancing between film and interactive entertainment. He specialises in LiDAR and on-set data acquisition for visual effects, and routinely works as a Visual Effects Pipeline TD and/or compositor.
