Getting Started with Photogrammetry: Part 2 — Interior Scanning

Interior Scanning: Gear, Capture Process, Mesh Cleanup.

Brief Introduction

This month, I started a series of tutorials (Read Part 1 here) on the basics of a 3D scanning technique called photogrammetry, which is used to create incredibly detailed and photorealistic 3D scans of environments, people, objects, anything!


Interior Photogrammetry

Scanning indoor spaces comes with its own set of tricks to learn.

Challenging subjects:

  • white walls / walls with no distinguishable features
  • windows/glass
  • reflective surfaces

Typically, LIDAR laser scanners have been used by construction companies to map out interior spaces; however, their price can be totally unreasonable (ranging from $10k–100k) for a one-man army like me.

This series covers the steps through which I've learned to capture places better, the necessary post-processing procedures, and how to end up with a model that can be imported into any game engine (e.g., Unity, Unreal).

Equipment:

Small enough to fit in a backpack!
  • DSLR/mirrorless camera (Sony A7RII/A9, Canon 5D, Nikon D5, Panasonic G9/GH5)
  • Wide-angle lens (ideally between 10–18mm)
  • Tripod/monopod (a monopod is smaller and more portable)
  • RealityCapture by Capturing Reality ($40/month on Steam)

Let’s talk about cameras for a second. In essence, a camera captures a plane within a volume of light. One photograph is only one viewpoint of that entire volume. With enough photographs, you get ever closer to capturing the entire “light volume.”

Think of a camera as a “light ray” capture module: it takes in incoming light and outputs an ordered set of rays and their colors, sampled from a specific plane within the volume.

It’s a completely different way of thinking about cameras.

Photography terms to know:

  • ISO: How much “gain” is applied to an exposure. The higher the ISO, the more noise is introduced. The lower, the better.
  • Aperture: stay away from the lower f-stops, which create a shallow depth of field and blur. The goal with each picture is to have everything in focus.
  • Shutter speed: keep it fast enough to avoid motion blur, so you can keep moving quickly.
  • Depth of Field: try to eliminate DoF by keeping the f-stop high.
  • Lens sharpness: If you’re looking to invest in a good lens for photogrammetry, do some research on DxOMark and make sure you’re using a lens with a higher sharpness score.
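To see how these settings trade off against one another, it helps to look at exposure value (EV), the standard way of expressing how much light a combination of f-stop, shutter speed, and ISO lets in. The function below is an illustrative sketch (not part of any camera API); the specific settings in the example are just plausible interior values.

```python
import math

def exposure_value(f_stop: float, shutter_s: float, iso: int = 100) -> float:
    """Exposure value (EV) referenced to ISO 100.

    Two different setting combinations with the same EV produce the
    same overall exposure, so you can trade a higher f-stop (for depth
    of field) against a slower shutter or higher ISO.
    """
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125s, ISO 100 -- a deep-depth-of-field interior setting:
print(round(exposure_value(8, 1 / 125), 1))

# Raising ISO brightens the shot (lower EV) at the cost of noise:
print(exposure_value(8, 1 / 125, iso=400) < exposure_value(8, 1 / 125, iso=100))
```

This is why the list above pushes you toward high f-stops: you can hold the same exposure by slowing the shutter or raising ISO, as long as you stay within the motion-blur and noise limits.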

Capturing process

Total capture time: ~10 min

I picked a concise and blocky environment with a ton of reference points (graffiti on the walls and ceiling as well as tons of points on the vinyl floor).

This environment is pretty ideal for photogrammetry because of the controlled lighting, tons of distinct looking reference points on the walls, and overall a cool place to visit in VR.

My thought process during the capture was to move in a smooth orbit, at varying heights and angles. I did about 4 different orbits (as seen in the gif above).
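The orbit pattern described above can be sketched as a simple camera-position plan. This is purely illustrative (the radius, heights, and shot counts are assumptions, not measurements from my capture), but it shows the idea: several rings of evenly spaced viewpoints, each ring at a different height.

```python
import math

def orbit_positions(num_orbits=4, shots_per_orbit=24,
                    radius=3.0, heights=(0.5, 1.2, 1.8, 2.4)):
    """Generate (x, y, z) camera positions: one ring of evenly spaced
    shots per orbit, each orbit at a different height. All numbers here
    are illustrative placeholders for a small room."""
    positions = []
    for orbit in range(num_orbits):
        z = heights[orbit % len(heights)]
        for i in range(shots_per_orbit):
            angle = 2 * math.pi * i / shots_per_orbit
            positions.append((radius * math.cos(angle),
                              radius * math.sin(angle),
                              z))
    return positions

shots = orbit_positions()
print(len(shots))  # 96 photos across 4 orbits
```

Roughly 100 photos like this gives every surface coverage from multiple heights and angles, which is what the alignment step needs.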

Exposure

Try to expose most images with the same settings to ensure alignment and keep the overall lighting consistent. However, it’s not always that easy.

Image sequence data set used for reconstruction

For any details, like capturing the overexposed art around the fluorescent lights, I had to gradually take photos while adjusting the exposure in small increments (similar to how bracketing works in photography, but without needing to actually stack the images in post-processing).

Change the exposure gradually (roughly 3–5 images)
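A gradual bracketing sequence like the one described above can be sketched as stepping the shutter speed down one stop at a time. The helper below is a hypothetical illustration (not a camera command), assuming each halving of the shutter time reduces exposure by 1 EV:

```python
def bracket_shutter(base_shutter: float, steps: int = 4, ev_step: float = 1.0):
    """Step the shutter toward shorter exposures, `ev_step` stops at a
    time, to recover detail in blown-out areas (e.g. around lights).
    Halving the shutter time reduces exposure by exactly 1 EV."""
    return [base_shutter / (2 ** (i * ev_step)) for i in range(steps)]

# Starting at 1/60s, four shots each one stop darker than the last:
for shutter in bracket_shutter(1 / 60):
    print(f"1/{round(1 / shutter)}s")
```

Because each neighboring pair of shots differs by only one stop, the photos still share enough consistent features for RealityCapture to align them, unlike a single hard jump in exposure.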

Body Positioning / Camera Positioning

The point of each image in your data set is to add a new perspective of the captured volume that hasn’t been covered from a specific angle.

Try to have parallax movement between any two shots that you take. No depth information is captured if you stand in one place and pivot the camera around you.
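The reason translation matters comes straight from stereo triangulation: depth is recovered from how far a feature shifts between two viewpoints, and that shift scales with the baseline (the distance you moved). A minimal sketch, with made-up example numbers:

```python
def depth_from_parallax(focal_px: float, baseline_m: float,
                        disparity_px: float) -> float:
    """Classic stereo triangulation: depth = f * B / d.

    With zero baseline (pivoting in place), the disparity of every
    feature collapses to zero and depth becomes unrecoverable.
    """
    if disparity_px <= 0:
        raise ValueError("no parallax -> no depth information")
    return focal_px * baseline_m / disparity_px

# Two shots 0.5 m apart, ~2000 px focal length, a feature shifts 100 px:
print(depth_from_parallax(2000, 0.5, 100))  # 10.0 m away
```

Photogrammetry solvers generalize this to many views at once, but the core constraint is the same: every useful image pair needs a real baseline between the camera positions.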

Sharpness:

Sharpness and maximizing the detail per pixel is the best way of getting super high quality texture results. The sharper each image, the better the photogrammetry model.

Compare these two images. Sometimes, just glancing through a photo without paying attention to the details results in blurry photos getting processed into the textures, which leads to a blurrier 3D result.
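If you'd rather not eyeball every frame, blur can be scored automatically. A common trick (used in OpenCV tutorials, sketched here in plain Python so it runs anywhere) is the variance of the Laplacian: sharp edges produce strong second derivatives, so blurry images score lower. The tiny synthetic images below stand in for real photos.

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over interior pixels:
    a simple no-reference sharpness score. `img` is a 2D list of
    grayscale values; lower scores suggest a blurrier image."""
    h, w = len(img), len(img[0])
    laps = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            laps.append(img[y - 1][x] + img[y + 1][x]
                        + img[y][x - 1] + img[y][x + 1]
                        - 4 * img[y][x])
    mean = sum(laps) / len(laps)
    return sum((v - mean) ** 2 for v in laps) / len(laps)

# A hard edge scores far higher than a smooth ramp over the same range:
sharp = [[0] * 4 + [255] * 4 for _ in range(8)]
blurry = [[round(x * 255 / 7) for x in range(8)] for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Scoring the whole data set this way and discarding the bottom outliers is a quick filter before handing images to RealityCapture.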

Cleaning up / Optimizing

Decimating

First, I simplified the model from ~70M triangles down to 100k using RealityCapture’s Simplify tool. Since this environment doesn’t have very detailed geometry, you can get away with decimating a ton of polygons.

Vertices in RealityCapture
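To put that decimation in perspective, a bit of quick arithmetic on the triangle counts:

```python
source_tris = 70_000_000   # raw reconstruction (~70M triangles)
target_tris = 100_000      # decimated target for real-time use

ratio = target_tris / source_tris
print(f"{ratio:.2%} of triangles kept")            # 0.14% of triangles kept
print(f"{source_tris // target_tris}x reduction")  # 700x reduction
```

A 700x reduction sounds extreme, but on mostly flat walls and floors the visual detail lives in the 8k texture, not the geometry, so the simplified mesh holds up well in a game engine.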

Mesh Cleanup

Now, no matter how precise you are with your capturing process, you’ll still end up with some noise in your mesh reconstruction. The simplest way to clean it up (without needing to learn ZBrush) is to use Autodesk’s Meshmixer.

Valerio Rizzo showed me how to use Meshmixer, so I went through and closed up any holes and smoothed out surfaces that needed to be straight:

Here are some examples of before/after the cleanup:

It’s still not perfect, but good enough for this tutorial. Next, I exported the mesh, re-imported it into RealityCapture, unwrapped it, and textured it for the final result!

END RESULT

The final model is 108k polygons with an 8k texture, easily importable into any 3D engine. Download it from Sketchfab!

Click through and take a look around!

PS: Download it and remix it in Tilt Brush / Blocks

If you want to get creative with your scans and paint/create some virtual elements with Tilt brush / Blocks, you can download the file from Google Poly: https://poly.google.com/view/e8oq2D723rL


Az Balabanian is the Director of Photogrammetry at Realities.io
Follow him on Twitter and Instagram to keep up with all things Drones + VR + Photogrammetry.

Download the FREE Realities app on Steam

Explore the results of our photogrammetry workflow in beautiful and detailed environments from around the world in VR.

Download from Steam for Oculus Rift / HTC Vive.

Follow the Realities.io Team

The Realities.io group travels around the world, capturing the most inaccessible wonders of the world. Follow our story on Youtube and Twitter.