Will smart glasses replace screens?

The future of media is ‘our native, 3D environment’, it’s around the corner, and the media need to take it seriously. This is what Dan Pacheco argues. A pioneer in the use of virtual reality for journalism and an XR consultant, he spoke with GEN about the latest updates on the technology that’s coming (from HoloLens to Magicverse) and laid out the implications for publishers, journalists, and audiences.

Global Editors Network
May 23, 2019

Dan Pacheco is a speaker at the GEN Summit and will talk about AR and its potential to be the future of immersive storytelling. Catch up with Dan at the GEN Summit and get your tickets now!

GEN: The public understands what immersive technologies can bring to entertainment, but what are the advantages of immersive storytelling for a journalist? Under what circumstances can it add to a journalistic narrative?

Dan Pacheco: VR, AR, and MR (what I and others are starting to call XR) headset makers currently cater to entertainment because they started in the gaming industry, and that’s where the big money is. But so did portable, high-quality colour screens that fit in your pocket. The first successful colour mobile device was Nintendo’s Game Boy Color. Ten years later, everyone was carrying around an iPhone or Android.

Using that Game Boy analogy, journalists should look past the entertainment value and take time to understand what XR devices are capable of doing from an information perspective. And that’s truly revolutionary. For the first time in history, when we are at a loss for words to describe a place or situation to our audiences, we can move beyond words and virtually transport them there.

Whether we’re transporting consciousness elsewhere, or bringing other things into the living room, the reason for doing it should be to help the viewer understand something better in a spatial way. Reading is a skill that must be learned, but experiencing is just hard-wired into our brains. How cool is it that we as journalists can now finally inform our audiences using the natural language of physical experience? That’s really what the XR revolution is about.

In my view, it’s a historical accident that all of our media up until now has been on flat rectangles. We live, breathe, work, and learn in three dimensions, so convenient three-dimensional media is our native media. It just hasn’t been possible to publish in our native language until now.

What are the best examples of immersive journalism you have in mind?

I like the way USA Today recently used a 3D model in “The City” to illustrate the story of a community in Chicago that suddenly found itself dealing with a trash heap. You can interact with the 3D model right inside the story on your mobile device. It’s embedded through Sketchfab, which has built-in integration with stereoscopic VR viewers like Google Cardboard.

Interact with the 3D model of Chicago and its trash issue

The things the New York Times is doing with 3D scanning and augmented reality are very cool, and some are a perfect example of “XR” because they go across VR and MR headsets. Their piece about a volcanic eruption in San Miguel Los Lotes in Guatemala incorporated a 3D scan of a truck covered in volcanic ash. You could project it into your room using AR on your phone, but they put the same model into Magic Leap so that you can literally walk around and through the damage in your room.

An interactive, 3D scan of the volcanic eruption in San Miguel Los Lotes in Guatemala by the NYT

I love everything Emblematic Group does, and I have been telling everyone for over a year to try out Greenland Melting in an HTC Vive. It is the most impactful, full-body interactive piece of journalism I have ever tried.

You can download the Greenland Melting experience on Steam

I also spend a good amount of time looking for examples outside of journalism that could be applied, and a lot of good ones come from museums and historical preservation organizations. One current favourite is an app called MasterWorks, which lets you virtually visit UNESCO World Heritage sites. As you move through the 3D models, all of which were captured through photogrammetry, you trigger audio, video, and articles that teach you about the places you’re visiting. They’ve created the experiential version of the documentary, which could be applied to any number of stories.

Travel to three continents and visit some of the most amazing UNESCO world heritage sites thanks to VR

What are the implications of producing an immersive story for those trained in traditional or legacy media? How would they have to reimagine their relationship with audiences?

First, there is no divide between an “immersive story” and a “traditional story.” There is a story, period. If that story has a strong spatial element — which can mean geography, a complex structure like Notre Dame cathedral, a physical place like a crime scene where the exact positions and activities in a place are central to the story, or anything where you find yourself having to describe physical layout or dimensions — you want to think about how you can use immersive XR technologies. And your goal should be to help people understand the information in a more natural way, not just to check off the “immersive” box.

After that, think about all the different ways that someone may move through that scene. There is no framing of a shot in XR, but you can set up the scene and the user’s position so that their attention naturally shifts toward where the most important parts of the story are likely to be discovered.

And third, discovery is key. There is often no set script in an immersive story, especially when it’s CGI-driven with interactive elements. In the case of MasterWorks, the entire architectural area is captured as 3D imagery using a process called photogrammetry, and the user moves through it using a hand controller. You may go left and I may go right, but if the hotspots and other indicators of interest are placed well, we will hopefully both pick up the same bits of information.

Traditional storytellers sometimes find this lack of control over how the story is experienced off-putting. I think this is why museums probably have a better idea of how to create immersive XR experiences than journalists do: they already do it in real space.

With advances in “light field” technology, computer-generated images could become indistinguishable from real-life images. What are the potential dangers for misinformation and emotional manipulation? How can they be avoided?

This is an area that concerns me greatly because people are just beginning to understand that there are fake 2D videos out there that are made up of public videos and photos, sometimes created automatically. These are sometimes referred to as “deep fakes.” Well, brace yourself because “fake experiences” are just around the corner.

Photogrammetry, the art and science of making measurements from photographs

Once you understand the mechanics of photogrammetry, which can generate a 3D image of anything with 40 or more 2D photos taken from different angles, you can see where fake news is heading. There are sites that let you upload these 3D scans and apply pre-recorded motions from people who performed the actions in motion capture suits. It is possible to create entire interactive scenes using scans of real people doing things that never happened.

Some journalists reading this may wonder why journalists should engage in this area at all. But think about what it means for society if journalists whose mission is to tell stories about what’s happening in the real world fail to use these new superpowers for good. The bad actors would take over. Just as it’s important for journalists to engage audiences with truth in social media, it will be equally important for them to use photogrammetry and motion capture to engage audiences with real experiences that inform people about what’s happening in the real world. And hopefully, by understanding how these technologies work they will also be able to identify and decry fake experiences.

In his 2018 TEDx Talk, Jens Franssen, a Belgian journalist (VRT NWS), said that powerful, immersive images can trigger PTSD in some viewers. When it comes to safety and ethics, what are the risks associated with immersive storytelling?

I agree that this is a risk, as it already is with deeply immersive movies. Beyond the potential for PTSD, there is the issue of sensitivities people may have to certain types of stories.

The example I always point to is 8:46, which was a virtual reality recreation of what it may have been like to be working in the twin towers when they were struck on September 11.

This virtual news game was created by French students who wanted to have a deeper understanding and connection with what happened in New York City on that day. Let’s just say, people in New York and pretty much all across the USA who heard about this almost universally decried it as insensitive and exploitative — with one exception.

Undergraduate students I teach, who were only five years old when 9/11 happened, thought it was a good way to connect with an event that shaped their world, in a manner that was meaningful and appropriate for them. It’s much the same way people my age wanted to understand D-Day by watching the movie Saving Private Ryan (which many soldiers who lived through it avoided).

The emotional context of the experiencer is key. You can never assume that a piece of immersive media will affect everyone the same way (which is why the “empathy machine” argument is faulty — sometimes these experiences can make people’s biases even stronger). Whenever you are publishing something in XR about a sensitive topic, it’s very important to tell your audience exactly what they will be viewing and that they could be triggered.

Screenshot of the experience in 8:46

In February 2019, Magic Leap unveiled its new concept, the Magicverse — “an Emergent System of Systems bridging the physical with the digital, in a large scale, persistent manner within a community of people.” Could you elaborate on this concept and the issues at stake?

I have talked to Magic Leap about this and also read their documents. It’s a bold idea illustrated by cyberpunk author Neal Stephenson in his seminal novel Snow Crash, and it’s a play on what he called the Metaverse. (And by the way, Stephenson doubles as the Chief Futurist at Magic Leap.)

What they are really saying is, what does it mean if everyone who puts on MR glasses sees the same virtual things in physical space? Think of the kind of persistent layering of virtual objects in the mobile game Pokémon Go, but applied to anything. Sometimes shared experiences like this will make sense, and sometimes not.

Magic Leap is clear that entertainment and games are just one layer and that there can be other layers related to communications, health, energy and other more serious topics like news and information.

This kind of shared virtual experience in physical space is incredibly compelling, and also essential to ensuring that MR glasses are socially acceptable. One of the major issues with virtual reality is that you can’t really engage with it when you are with other people because you've checked out of the real world. MR solves this to an extent because we can still see each other, but it doesn’t change the fact that we may also be seeing completely different things in the same space. At times when we’re not engaging in specific activities (like watching a sports game or reading the news), it makes sense that the digital information we do see is shared. Otherwise, the kind of community fragmentation that’s already happening with smartphones will be accelerated.

But having a shared experience also presents new challenges. Not everything can or should be shared with the world. The designers of this collective base layer need to think about how you can tell that someone is engaging with private information without making it look weird.


I also wonder what it means for society in the future for those who don’t have MR glasses. The digital divide could potentially get even larger. If not implemented with care, we could literally have two societies — one that sees the digital layer that connects with the global economy, and one that doesn’t.

What hope do mixed storytelling methods (using AR, MR, or VR) hold for traditional media? Could they help attract larger audiences and increase subscriptions?

This is really more of a long-term strategic play. I don’t see XR as a ticket to an immediate increase in quarterly profits, and that’s the wrong way to look at it, but it is a sure bet that your core connection with your audience will go away if you aren’t relevant on what many think is the next major platform.

I fear that editors and publishers are overly concerned with the near term to the detriment of the future of all media, which is almost guaranteed to move beyond the phone and into the fabric of our native, 3D environment. A decade from now we will all be using some kind of wearable device to consume and interact with 3D content in a natural way, just as we all use touch screens, Bluetooth earbuds, and AI-enabled speakers, which were considered nerdy luxuries not long ago.

For now, VR projects require lots of financial and human resources to produce. Limitations remain on the distribution side too, as headsets are far from being mainstream. How do you see these challenges being taken up?

It depends on what kind of content you’re talking about. The cost of 360 video has actually plummeted over the last two years, with stitchless 4K cameras that used to cost $60,000 now available for less than $500. The videos can also be edited in Adobe Premiere without any additional plugins. They can be published to YouTube and Facebook for people to view by holding their phones out, or in inexpensive cardboard viewers.

Even though 360 is so easy to produce, there are still news organisations that have yet to create a single one. And when you ask them about it, they claim that it’s too costly or that they don’t have the staff to do it. But is cost really the problem, or are they just dragging their feet on innovation? In many cases I find the problem is the latter.

CGI-based VR and AR currently require more expertise in coding and (for the moment) 3D modeling, but even that is changing. There’s a great plugin for Unity called Vuforia that lets you create AR without writing a single line of code. In the VR space, Mozilla’s A-Frame (aframe.io) makes it possible for someone with a rudimentary understanding of HTML and CSS to create a VR scene. And the cost for everything I just listed? It’s all free.
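To give a sense of how low that barrier is, here is a minimal sketch of an A-Frame scene (not from the interview; the release version in the script URL and the colour values are illustrative, but `<a-scene>`, `<a-box>`, `<a-sphere>`, and `<a-sky>` are standard A-Frame primitives). Saved as a plain .html file, it renders an interactive 3D scene in any WebGL-capable browser, with a Cardboard-compatible VR mode built in:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame library; version shown here is illustrative -->
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the camera, renderer, and VR entry button -->
    <a-scene>
      <!-- position is "x y z" in metres relative to the viewer -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <!-- a-sky wraps the scene in a background colour or 360 image -->
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Swapping the sky colour for a 360 photo, or the primitives for a photogrammetry model, is a one-line change each — which is what makes this approach plausible for a newsroom without dedicated 3D developers.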

As far as headsets go, anything you create in VR or AR can also be embedded in 2D on your existing web or mobile site with little additional work. That’s because WebGL (the Web Graphics Library) is now supported by every modern browser, including the browser on your smartphone.

Why did previous attempts to generalise VR use, such as Google’s “smart glasses”, not work? What is different about the upcoming products, such as the Magic Leap One, Microsoft HoloLens 2, and Samsung Gear VR?

Another player in smart glasses: Epson

Actually, Google Glass is still around and Google just released another version targeted to enterprise users. It just wasn’t the big consumer hit Google thought it would be back in 2013. But I find it interesting that the latest version of Google Glass removed the camera. People were really weirded out by the camera in the first version, and I think its presence contributed a lot to the so-called “Glasshole” perception.

The main difference with Mixed Reality glasses (HoloLens and Magic Leap) is that they project 3D information into physical space. Google Glass puts a layer of 2D information in front of your eyeball that moves with you as you walk around. MR glasses scan the physical structure of the space you are in, then put 3D objects and avatars in the room. As you walk around, those objects remain where they appear to be, like holograms.

What are your thoughts on the limitations for distributing augmented reality and do you see any upcoming solutions?

I think what’s holding AR back specifically right now is that you have to publish or update a mobile app in order for it to work. The average number of new apps that people install on their phones today is… zero! So right there, Apple and Android have created an artificial gating problem for AR. The place where most people experience AR is inside apps they already use. There is an economic disincentive to develop a new app specifically focused on AR because nobody’s going to download it.

The other issue here is that app development currently requires app coders for iOS and Android, so it’s expensive. As more no-code AR tools like Vuforia come out I think we will see a lot more experimentation with AR. I personally would like to see Vuforia support MR.

Ultimately though, the distribution for AR and MR needs to be through mobile web browsers instead of app stores. Mozilla has some initiatives around this, and in fact they have a special browser for testing out content in Magic Leap. This theoretically makes it possible for someone with Magic Leap to just go to a web address to bring up an MR experience.

As for monetisation, you always need to start with building an audience. The future scenario I described, where glasses replace screens, answers that. The entire Web will just become more immersive over time, and advertising and commerce will follow.

As you teach your students about the use of AR/VR/MR in journalism, what advice do you give to them?

I caution them not to get too caught up in becoming an expert in any one technology or platform and instead to think about how storytelling is different in the experiential medium that is XR. We currently use Unity as the engine and Insta360 cameras for filming, but the technologies always evolve.

The most important thing I want them to learn is not how to create XR content for today, but how to think about a story that viewers discover by interacting within a scene. I’m clear that we aren’t creating video games, but we borrow a lot of techniques from games that allow a story to be revealed by moving through it. I think this will be the most essential storytelling skill in the spatial computing world we’re all about to enter.

Interview by Ana Lomtadze

Dan Pacheco is a keynote speaker at this year’s GEN Summit in Athens, Greece, from 13 to 15 June, sharing his insights on the latest AR trends and the future of immersive storytelling.

Meet Dan Pacheco and other high-profile speakers at the GEN Summit and get your ticket now!

Dan Pacheco holds the Peter A. Horvitz Endowed Chair of Journalism Innovation at the Newhouse School at Syracuse University, and is a pioneer in the use of XR technologies for journalism. In 2014 he started and co-produced The Des Moines Register’s Harvest of Change VR project for the Oculus Rift, the world’s first large-scale use of virtual reality by a commercial news organization. He is currently working on journalism-related MR apps in Magic Leap and HoloLens.


Global Editors Network

The Global Editors Network is the worldwide association of editors-in-chief and media executives. We foster media innovation and sustainable journalism.