A working ethics code for volumetric storytelling

Theresa Poulson
McClatchy New Ventures Lab
6 min read · Jul 10, 2019

When we set out to create two volumetric series for mobile AR, we had to revisit many questions visual storytellers have answered time and again in 2D media: How do you create a sense of place? How do you present characters? How do you add context to a short story?

To answer these questions, we often tried to toss out 2D conventions and rethink storytelling natively in 3D. (You can see the results in our app, Actual Reality, available on Android.)

But we didn’t leave all precedent behind.

When it came to ethics, it was important to the integrity of our work that we build on the standards and practices McClatchy has relied on for more than 160 years. Since there’s no rulebook for ethical storytelling in this medium — which often involves more post-production work compared to photo or video — we decided to start our own.

We gathered our cross-disciplinary team to discuss questions that had come up during production. For example: How much is too much when it comes to editing 3D images? We looked to industry standards for visual ethics. Kathy Vetter, senior director of news content, provided insight into how newsrooms at McClatchy have approached similar questions in photography and graphics.

We walked away with a working code that’s provided some basic parameters and has created a foundation for ongoing discussion as the tech changes and new questions come up.

The guidelines also allow for experimentation and acknowledge that some of the 3D storytelling tech that we use isn’t perfect. We (the journalism industry) can’t afford to wait until the tech is perfect to explore its potential. In some cases, we decided we may need to explain how a shortcoming in the tech affects the story audiences see before them. (See the section on motion capture below as an example.)

Transparency was also a theme throughout our discussion. While AR content can have more in common with a video game than a digital article page, we want to assure our audience that we approach storytelling in this medium with no less rigor than we bring to any other. To make that clear, we decided to explain our ethics code in our app.

What you see below is the epic editor’s note that appears in the “about” section of our app.

EDITOR’S NOTE

Our AR stories use a variety of 3D media, including photo-realistic models, realistic recreations and illustrations. The production processes are new to nonfiction storytelling. Maintaining your trust in our journalism is of the utmost importance to us, so we will be transparent about how and why we produce the 3D images you see in this app, including clearly noting instances where there might be confusion. We welcome any questions at newventureslab@mcclatchy.com.

Photo-realistic models

The photo-realistic 3D models of people, places and objects (photo-real here meaning a realistic recreation of the subject) are created using a process called photogrammetry, which combines hundreds of photographs taken from many angles.
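For readers curious about the underlying geometry (our production work relies on dedicated photogrammetry software, so treat this as a conceptual sketch, not our actual pipeline): once the same point has been spotted in photos taken from known camera positions, its location in 3D can be triangulated. The toy example below, with made-up camera parameters, shows that single step.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two images (DLT method)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Two toy cameras: one at the origin, one shifted 1 meter along x.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.2, -0.1, 4.0, 1.0])      # ground-truth 3D point
x1 = P1 @ point; x1 = x1[:2] / x1[2]         # its pixel position in photo 1
x2 = P2 @ point; x2 = x2[:2] / x2[2]         # ...and in photo 2

print(triangulate(P1, P2, x1, x2))           # ~ [0.2, -0.1, 4.0]
```

A full photogrammetry run repeats this matching-and-triangulating process for millions of points across hundreds of photos, then fits surfaces and textures over the result.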

Realistic recreations

Due to the nature of the technology, many of the photo-realistic, 3D images are edited for technical and visual clarity. We follow these guidelines when editing those objects:

To maintain faithful representations of people, places or objects, some of the visual information that was lost or damaged in the production process has been reconstructed by replicating the texture and shape of the original object.

When a photo-real, 3D object cannot be reconstructed, we might use an artist’s recreation in its place. We have done this only if:

1) The object is integral to the story,

2) Using a recreation is the only option for representing the object in the story, and

3) The recreation is as faithful to the original object as possible.

Any time we’ve used a highly realistic artist’s recreation to take the place of an object, it is noted at the beginning of the story you’re viewing.

Photo-realistic scenes through recombination

When a scene contains different types of 3D models, particularly models that move, it is often necessary to scan different components of the scene in isolation (separated either in time, location or both) and then recombine them to produce a single cohesive scene.

For instance, we might spend 20 minutes capturing a 3D model of a street corner, then afterward ask a character from that area to step into a different area where we can most effectively create a model of her/his body. That character might even take a break and come back later to have her/his motion recorded. Then we’d recombine all those elements to create one cohesive-looking scene.
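To make the scale-matching concrete (the real assembly happens in 3D authoring tools; the arrays and measurements below are invented for illustration): each separately scanned component comes out in its own arbitrary units, so it has to be rescaled against a reference distance measured on site before it can be placed into the shared scene. A minimal sketch:

```python
import numpy as np

def rescale(vertices, measured, actual):
    """Uniformly rescale a scanned component so a reference distance
    measured in the scan matches its known real-world length (meters)."""
    return vertices * (actual / measured)

# Toy vertex arrays standing in for two scans made in separate sessions,
# each in its own arbitrary unit system.
street_corner = np.random.rand(500, 3) * 7.0
character = np.random.rand(200, 3) * 1.2

# On-site reference measurements bring both into meters:
# a curb segment known to be 3.0 m long, a person known to be 1.75 m tall.
street_corner = rescale(street_corner, measured=7.0, actual=3.0)
character = rescale(character, measured=1.2, actual=1.75)

# With scales matched, the character can be positioned within the corner scan.
character += np.array([1.5, 0.0, 0.0])
scene = np.vstack([street_corner, character])
print(scene.shape)  # one combined vertex array: (700, 3)
```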

When we do have to use recombination to create a scene, we endeavor to recreate the scene with as much fidelity as possible, making sure that:

  • the scale of the components match,
  • any actions represented are authentic to the characters and the spaces they are in,
  • and no objects of editorial significance are omitted.

At times, we will create recombinations that are not intended to represent actual environments, such as when we show how objects compare or show several objects as a gallery. These instances will be clearly noted in the Editor’s Notes for that story.

Motion capture

In some cases we have added movement to photo-realistic, 3D people through a process called motion capture, to provide lifelike representations of the sources in our stories. To do this, we 1) record motion data with a portable sensor, 2) take hundreds of photographs from many angles (photogrammetry) and 3) combine them in software to create a moving, 3D model.
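As a rough illustration of step 3 (the real combination happens in animation software through rigging and skinning; the skeleton and numbers below are invented): the sensor essentially yields joint rotations for every frame, and posing the 3D model means walking down the skeleton and accumulating those rotations. A toy two-bone example:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis (enough for a flat, 2D-style example)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def forward_kinematics(bone_lengths, joint_angles):
    """Walk down a simple joint chain, accumulating each joint's rotation
    to find where every joint ends up in world space."""
    positions = [np.zeros(3)]
    world_rot = np.eye(3)
    for length, angle in zip(bone_lengths, joint_angles):
        world_rot = world_rot @ rot_z(angle)
        positions.append(positions[-1] + world_rot @ np.array([length, 0.0, 0.0]))
    return np.array(positions)

# A toy "arm" (shoulder -> elbow -> wrist) and two captured frames of angles.
bones = [0.3, 0.25]                                    # bone lengths in meters
frames = [np.radians([10, 20]), np.radians([35, 50])]  # mocap samples per frame

for t, angles in enumerate(frames):
    print(f"frame {t}:", forward_kinematics(bones, angles).round(3))
```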

As we experiment with this new technology, we aren’t always successful at capturing the real motion data, so we might use animation from a stock library to add movement to a person instead. Any time we do this, it is noted at the beginning of the story you’re viewing.

Volumetric video

In some cases we’ve presented characters using a technology called volumetric video. We use a tool called DepthKit, which combines frames from two different types of video cameras. One camera records normal color videos and the other records a low-resolution “depth” video that captures the shape of the character in 3D. Although you can move around these videos and view them from multiple angles, you might notice that the edges of the characters look warped from certain perspectives. To mitigate the effect, we’ve slightly altered the brightness of the color video with information from the depth video. You’ll be able to see the depth video overlaid as a series of light lines and dots. The resulting image looks a little like an animated topographic map.
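For the technically curious, here is a simplified sketch of what a depth frame makes possible (DepthKit handles the real alignment between the two cameras; this toy example assumes the color and depth frames are already registered pixel-for-pixel, which raw footage is not): each depth pixel can be back-projected into a 3D point and paired with its color.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth frame into 3D points with the pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy frames: a flat surface 2 meters away and a matching color frame.
# (Assumes the depth and color streams are already aligned; real footage
# from two physical cameras needs that registration step first.)
depth = np.full((4, 4), 2.0)                                    # meters
color = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

points = depth_to_points(depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
colored_points = np.hstack([points, color.reshape(-1, 3)])      # x, y, z, r, g, b
print(colored_points.shape)  # (16, 6)
```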

Illustrations

You’ll also see different kinds of artists’ illustrations throughout this app.

The illustrations are used for a variety of reasons: to create a sense of space, to recreate a memory, when a realistic reconstruction wasn’t possible (due to lack of access, privacy concerns, or both), to contextualize or add to a realistic 3D object, to visualize data or represent a concept.

These illustrations should be easy to differentiate from the realistic, photogrammetric objects. But we’ll err on the side of caution: Any time we think there might be a reason to clarify or add context to what you’re seeing, we’ll let you know in a note at the beginning of the story you’re viewing.

Thanks for reading to the end! We’ll continue to update the editor’s note as we learn more and try new techniques. What should we add or clarify? Comment below or email us at newventureslab@mcclatchy.com.
