Capture the Story in 3D, Tell It in Augmented Reality

Capturing and creating lifelike 3D models with any camera

Travis Daub
Jun 29
This photogrammetry model of a wrecked boat on Deception Island, off the coast of Antarctica, was created from 122 still digital photographs. View the published AR model here.

Augmented Reality for Field Production in Journalism is a project at the PBS NewsHour funded by a Knight Foundation ONA 360 challenge grant


What if any journalist could easily capture and create a lifelike 3D model of a real-world crime scene, a sculpture or a landscape of rainforest deforestation? Adding simple 3D modeling to the journalist’s toolkit gives any reporter in the field a compelling new format for telling stories and engaging audiences.

Photogrammetry is a process that recreates a 3D model of an object or a scene by stitching together multiple still images of that object taken from different angles. Software analyzes each photo, finds where they overlap, calculates angles and depth, and renders a model based on that data.

What makes photogrammetry truly magical is that it doesn’t require laser scanning equipment, deep technical knowledge, or even professional photography gear. A simple cell phone camera, point-and-shoot or prosumer DSLR can each produce excellent results. And photogrammetry scales to projects as large as landscapes captured by satellite images, buildings and city blocks captured by drones, or subjects as complex as a person’s face.

And now, thanks to the integration of augmented reality frameworks into many social media platforms and mobile operating systems, publishing these models in formats that let audiences view, manipulate and experience immersive elements is possible for a newsroom or reporting effort of any size.

Over the last 10 months, PBS NewsHour journalists have been experimenting with the best ways to use photogrammetry in the field to capture models that enhance their storytelling. Our effort was funded through an ONA Journalism 360 Challenge Grant. Our team has learned lessons about workflow, technology and the art of capturing models alongside our regular reporting and production, and we are sharing them here.

1. Selecting a subject for photogrammetry

While simple photogrammetry can be used to capture a wide variety of objects, the technology has some limitations, and certain objects or scenes will require more post-production than others. Since most journalism workflows require a quick turnaround for assets before publishing or broadcasting, we set out to limit our projects to situations that would hopefully require little backend work. Backend work can include additional 3D modeling to repair weak areas of the model, photo editing to improve the photos before processing, and photo masking to remove elements in the background of photos that are irrelevant to the final model.

To reduce the backend workload, and to generally pick the best subjects for photogrammetry models overall, it’s important that your photos of an object maintain consistent lighting, texture, depth of field and more. Here are some points to keep in mind:

  • The subject should be well-lit, and the lighting should remain consistent over the course of your photo shoot. Small variations are usually easy to manage at the software level, but an object that is lit by moving, leaf-dappled sunlight, for instance, would pose a serious challenge.
  • Objects should have visible textures on most, if not all, surfaces. Large regions of flat, featureless color or reflective surfaces like the paint on a car, shiny plastics, or water can be very difficult to nearly impossible to accurately capture and render. If a surface looks one way from one angle but, due to reflections, looks very different from a slightly different angle, the model will end up distorted in that region. Note that if you have a high-quality camera and shoot sharp photos, the surface textures don’t have to be extremely coarse or high-contrast for modeling to work.
  • You should have access to clearly photograph every surface of the object that you want to portray in the model, including the top, bottom and the inside of any large crevices. (Keep in mind that you might need a ladder, or to squeeze into tight places, to capture some objects.)
  • In general, models render well when the object stands out clearly from its background — when there’s plenty of contrast. Color contrast helps most. This isn’t a dealbreaker, as images can be masked to remove backgrounds, but this contrast can save work later.

Tip: In general, photogrammetry used for journalism is especially well suited to capturing natural, outdoor scenes, since they’re typically well lit by sunlight and natural features tend to be covered with many irregular textures.

2. Capturing the object — Equipment and Technique

Any camera that can take a reasonably sharp photo can be used to capture images for a photogrammetry model. However, there are some guidelines you must follow when shooting to make sure your photos will work.

  • Your camera’s field of view must remain consistent for every photo. In other words, if you have a zoom lens, it must stay at the same zoom level at all times.
  • To make processing your model easier, and to collect more usable data, try to shoot with an aperture that gives you a deep depth of field. The more of the object you can capture in sharp focus, the better.
  • Make sure your photos overlap, and follow the guidelines for shooting below.

On our projects we used cameras ranging from iPhones, to point-and-shoots, to a Sony Alpha 7 III.

The thumbnails in this graphic reveal the position of each photo that was taken to capture the model.

To capture most objects, start by shooting overlapping photos in rings around the object at a few different heights. Be sure to capture the top and bottom of any surface you want to show, and also make sure to capture any hidden surfaces that the rings cannot see.

Different objects and locations require different approaches. Agisoft, the company behind the popular photogrammetry package Metashape, provides the following guidance for shooting rooms, facades, and objects:

Based on our experience, the capture process can be simple or very complex, depending on the factors above.

Sometimes it’s easy — sometimes it’s not.

These two models were created by capturing images of sculptures by artist Gil Brevel.

The whaling boat featured at the beginning of this article took our correspondent less than 15 minutes to photograph, and he was able to capture the photos without disrupting his other reporting efforts during the trip. The boat was well lit, easy to access in the environment, and its surfaces didn’t pose any challenges.

In contrast, on another reporting trip, two NewsHour journalists spent a full day photographing two highly detailed sculptures located inside an artist’s studio. Our team brought lighting and high-quality photography equipment, and the process took hours. In the studio, lighting was challenging. The sculptures had some smooth surfaces with limited textures, and some of the surfaces were slightly reflective. Post-production on these models also took far more time than on the whaling boat.

A question of accuracy for journalists:

In approaching these two objects, we realized that we had different levels of tolerance for the quality of the final model and how perfectly it captured reality. When done well, photogrammetry can capture detail with a scientific level of accuracy, but when produced under the time constraints of journalism, some models may suffer minor deficiencies. These issues may be acceptable when they don’t impede the audience’s understanding of the story. In the case of the whaling boat, if the model had a subtle flaw, or if a texture wasn’t completely clear, the audience could still appreciate the scene without losing any of the story’s value. However, in the case of the sculptures, flaws in the models threatened to detract from the quality of the artwork — so achieving a high level of technical quality was imperative.

From the point of view of a journalist, this question about the quality of model production could have ethical implications. If a model is created of a crime scene and it contains a flaw that gives the audience an inaccurate perspective on the location, it could affect how people understand the overall story. It's up to journalists, artists, and developers, when building these types of models, to make sure the quality of the rendering matches the quality necessary to best tell the final story. Perhaps as photogrammetry software improves, this will become less of an issue.

Rendering the model:

Once photos of the object have been captured, the process of rendering the model can begin.

First, review all of the photos and discard any shots that might be out of focus, obscured, or that have other technical issues. By removing these photos from the set, you can reduce how much “noise” is introduced into the model.
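
If a shoot produced hundreds of photos, a short script can help with this triage. The sketch below uses a common sharpness heuristic, the variance of the Laplacian, via OpenCV; this is a general technique rather than anything specific to photogrammetry software, and the threshold is an arbitrary starting point to tune for your own camera:

```python
# Flag likely out-of-focus photos using the variance-of-the-Laplacian
# sharpness heuristic. The threshold is arbitrary; tune it per shoot.
import glob
import cv2

BLUR_THRESHOLD = 100.0  # placeholder value

for path in sorted(glob.glob("photos/*.jpg")):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"Review before processing (possibly blurry): {path}")
```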

We tested several software packages for creating models and found Agisoft’s Metashape offered a useful set of features at a reasonable price. Development of photogrammetry software is happening quickly — over the course of this project Agisoft released two major updates to their platform, and their competitors also released upgrades. Another popular package that is open source and actively being improved is Meshroom.

During our research, one useful resource was the photogrammetry subreddit, where amateur model-makers share their experiences with different platforms and models, and experiment with different approaches to model creation.

Rendering in Metashape

Agisoft offers excellent documentation on the basic workflow for creating your first model. Use the link above to learn more about how the workflow plays out in the software, step by step.

The basic steps in any workflow are the same; a scripted version follows the list below:

  • Align photos — at this stage the software evaluates each photo and finds where they align with others, creating tie points that begin to reveal the basic structure of the model.
  • Build dense cloud — at this step, thousands more points are calculated and created to fill in the space between the tie points.
  • Build mesh — a 3D mesh is created based on the data from the first two steps.
  • Build texture — a texture is created, stitched together from the original photos. It is overlaid on top of the mesh, giving the model a lifelike appearance.
Tie points are visible after aligning photos in Metashape.
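
For Metashape Pro users, these four steps can also be scripted. Here is a minimal sketch using the Metashape Python API as it existed in the 1.5-era releases we worked with; method names may differ in newer versions, and the file paths below are placeholders:

```python
# Minimal sketch of the four-step workflow via the Metashape Pro
# Python API (1.5-era; method names may differ in later releases).
# All file paths are placeholders.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load every photo that survived the initial review.
chunk.addPhotos(glob.glob("photos/*.jpg"))

# 1. Align photos: find tie points and camera positions.
chunk.matchPhotos()
chunk.alignCameras()

# 2. Build the dense cloud from depth maps.
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# 3. Build the 3D mesh.
chunk.buildModel()

# 4. Build the texture (keep the texture count at 1 for AR).
chunk.buildUV()
chunk.buildTexture()

# Export as OBJ for the AR steps described later.
chunk.exportModel("modelname.obj")
doc.save("project.psx")
```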

Some advice based on our experience:

  • Once you create a dense cloud of the model, if there are obvious areas where points have been rendered incorrectly — such as a blob of material in a space where there should not be one — you can delete those points by selecting them in Metashape. This can help correct your final model.
  • In some cases, you can skip the dense cloud step and build a mesh based on the initial tie points, called the sparse cloud. It doesn’t always yield a high-quality model, but for some objects, it works well.
  • If your computer has a discrete GPU, make sure it’s enabled in the software preferences — and if so, disable your CPU.
  • If you’re rendering your model on your local machine (see the tip below about cloud rendering), don’t be surprised if it takes hours or even days. One important point: many artists have tested different platforms for speed and found that PCs dramatically outperform Macs at some of the most demanding tasks.
  • Don’t be discouraged if you have to use trial and error to improve your model. It can take time to render and evaluate the outcome.
  • If you find that your model won’t render accurately, it’s likely due to low-contrast backgrounds blending with the foreground in your photos. Use the built-in tools in Metashape to mask out portions of the images that might be confusing the software. After alignment, you can also see in the photos where tie points have been assigned, and mask out the portions that are irrelevant to the model.
  • When you build a texture, make sure TEXTURE COUNT is set to “1”. This is critical for AR processing.
This image shows portions of a photo that have been masked in Metashape to improve the sparse and dense clouds.

Tip: Metashape comes in two flavors — regular and Pro. Pro provides access to Agisoft’s cloud processing service, which is currently in beta. In our experience, the cloud processing works exceptionally well, and faster than any computer we had on hand. The price difference is not insignificant — several thousand dollars — but Metashape Pro is available to download and try, with all features in place, for a 30-day trial.

After hours of processing and trial and error, you’ll end up with a model:

Most likely, at this point Metashape won’t know which part of your model is the top, bottom or sides, or how it sits in space, so you’ll need to orient it using the grid background on the X, Y, and Z axes. You won’t have to center the model at the zero point of the space, but you do want to make sure it’s upright and facing the direction you prefer.

To aid in this process, make the “GRID” visible in the workspace and use the object orientation tools to rotate your model until you’re happy with the placement.

Metashape, along with every other package we tested, allows you to export your model as an OBJ file, which is what you’ll need to move your project into AR. You’ll end up with three files:

modelname.obj : This is the file that contains your mesh.
modelname.jpg : This is your model’s texture, embedded in a jpeg image.
modelname.mtl : This file contains material definitions that link the mesh to its texture.

OPTIONAL:

Depending on the complexity of your model, the mesh might be made up of millions of faces. If your model is too large, it won’t work properly in many AR applications. We found a practical upper limit of around 1 million faces. Even that is extreme, though, and will lead to very large file sizes in your final project.

To reduce the size of your mesh, there are a few options. Metashape has a mesh decimate tool that does a serviceable job of shrinking your model, though the process can also remove quite a bit of detail from the object. There’s a tutorial below for this process.
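
If you prefer to script this step, the same tool is exposed in the Metashape Pro Python API. Here is a hedged sketch; the face-count target reflects the rough upper limit discussed above rather than any official recommendation, and the paths are placeholders:

```python
# Decimate the mesh to a face count AR apps can handle, using the
# Metashape Pro Python API (1.5-era). Paths and the face-count
# target are placeholders.
import Metashape

doc = Metashape.Document()
doc.open("project.psx")  # the project saved after rendering
chunk = doc.chunk

chunk.decimateModel(face_count=1000000)
chunk.exportModel("modelname_decimated.obj")
doc.save()
```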

You can also explore other software for decimating your mesh. Blender, an open-source modeling application, has powerful decimate tools. Meshlab is an open-source tool for mesh management with many features for refining models.

Before decimating, you may want to try the next steps to see how your AR model turns out, directly out of MetaShape.

Creating an augmented reality model

The next step is to take your model into augmented reality. One of the most accessible workflows and distribution platforms for AR right now is Apple’s ARKit and its built-in Quick Look functionality. Read more on these platforms here.

Details on the finished model can be highly accurate, down to the artist’s signature visible on the top of this sculpture by Texas artist Gil Brevel.

To view your OBJ model in Apple’s Quick Look, you’ll need to convert it to a different file format, called USDZ.

Apple’s documentation supporting USDZ is here.

To convert your file, you’ll need to download the latest version of Xcode and install it on your Mac. This will install the USDZ converter tool, which you can run via Terminal in macOS. The tool is short on documentation, but Apple developer forum users offer lots of helpful information to get you started.

Navigate to the folder containing your project and run the USDZ converter.
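
For a concrete example, here is roughly what that invocation looks like, wrapped in Python for consistency with the other sketches in this guide. The flag names come from the Xcode 10-era usdz_converter; confirm them on your install with `xcrun usdz_converter -h`, and treat the file names as placeholders:

```python
# Convert the exported OBJ to USDZ with Apple's usdz_converter.
# Flag names match the Xcode 10-era tool; confirm with
# `xcrun usdz_converter -h`. File names are placeholders.
import subprocess

subprocess.run(
    [
        "xcrun", "usdz_converter",
        "modelname.obj", "modelname.usdz",
        "-color_map", "modelname.jpg",  # texture exported by Metashape
    ],
    check=True,
)
```

The converter also accepts flags for the PBR maps discussed below, such as -metallic_map, -normal_map and -roughness_map.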

Some notes on using USDZ converter:

  • USDZ uses a process of defining lighting and subtle texture called Physically Based Rendering (PBR). It’s a very powerful process that you can use to make your model look even more lifelike. This is an excellent overview of PBR in the USDZ workflow.
  • A note on PBR — if your model looks dull and unlit, make sure you include a metallic map, and that map should be all black. This will brighten the colors in your final render. (See the sketch below.)
  • Via PBR, you can also add subtle surface textures to your project using a normal map. Normal maps can be generated from your texture JPEG in Photoshop using the 3D filter.
  • If your object should have shiny surfaces, you can also use Photoshop to generate a bump map, which is referred to as “roughness” in the USDZ workflow. Invert that file before you save it so that blacks and whites are reversed. We found Photoshop’s bump map filter to be a bit too aggressive, so we often adjusted the contrast or levels to soften the dark areas. On the finished roughness map, any area of the image that is non-white will be shiny, to the degree that it is black. Solid black is as shiny as glass or chrome.
  • If you produce a model but it’s upside down, sideways, or otherwise not positioned how you’d like, go back to Metashape, or another 3D program, and place it at 0,0,0 on the working plane.
Above is the bump map, used to give the sculpture a softly shiny finish.
This is the metallic map for the model. Not much to see here.
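
If you’d rather not build the simpler helper maps by hand, they can be scripted. Below is a hedged sketch using the Pillow imaging library; file names are placeholders, and the bump map is assumed to already exist (generating it remains a Photoshop job, but the inversion can be automated). The resulting files can be handed to usdz_converter through its -metallic_map and -roughness_map flags.

```python
# Generate the PBR helper maps described above. File names are
# placeholders; the bump map is assumed to already exist.
from PIL import Image, ImageOps

texture = Image.open("modelname.jpg")

# Metallic map: all black, matching the texture size, which
# brightens the colors in the final render.
metallic = Image.new("RGB", texture.size, (0, 0, 0))
metallic.save("metallic.jpg")

# Roughness map: invert the bump map so blacks and whites are
# reversed; black areas will read as shiny in the final model.
bump = Image.open("bump.jpg").convert("RGB")
ImageOps.invert(bump).save("roughness.jpg")
```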

What’s next?

This guide and the experiments conducted above were produced by journalists with limited 3D modeling skills, using easily available, inexpensive and free technologies. There are certainly aspects of this process that could be improved by seasoned 3D artists, and we welcome their feedback as we continue to improve our processes.

Please leave your thoughts, ideas, and links to your own experiments and share them with us on Twitter at @newshour or @tcd004.

Thanks for reading!

