“Journalists will have superpowers” Futurist Amy Webb talks opportunities — and pitfalls — of journalism tech
We had the opportunity at CES on 8 January to sit down with Amy Webb, futurist, founder of the Future Today Institute and author of its annual Tech Trend Report, to discuss computer automation, fake news and immersive storytelling. Webb will deliver a keynote at the GEN Summit 2017, held in Vienna on 21–23 June.
GEN: How can computer automation improve how news is gathered, organized and distributed?
Amy Webb: The future of news can be made much more efficient, and probably also much more seamless, through the use of automated technologies. That does not necessarily mean the end of human reporters. But it does mean augmenting the abilities of human journalists. And that is nothing but exciting for newsroom leaders and for editors-in-chief. What it essentially means is that journalists will have superpowers. Using what I call a next generation of computer-assisted reporting — from learning about the crowd by observing its data to using algorithms to collect, customize, produce and distribute content — all of these scenarios are near-future realities, and they’re going to be great for newsrooms.
You just mentioned algorithms. How can they, along with bots, come together with journalists to enhance journalistic practices?
Bots are essentially little programs that take data in and respond to it. Some chat bots and bot platforms are intelligent, which is to say they have machine learning algorithms helping to power them. Others are really just advanced Q&A systems, which are interesting, but that’s not what to focus on. I think the more compelling offerings are artificially intelligent, “botable” platforms. Those hold huge promise for newsrooms.
We just had a shooting at an airport in Florida in the United States. We live in this age of social media, where there’s all this content being published by people, some of whom were there and others of whom weren’t. As usual, there’s a lot of speculation, and essentially, the local news media have lost control of that story. One interesting possibility would be to build templates and allow people to have conversations with the news organization that are grounded in evidence and fact, on the platforms they’re already using.
As a news organization, you could have a bot ready to go for something like this, which is obviously tragic. People could ask questions on Facebook Messenger or on Twitter. These systems can learn as they go, which is to say that you can add information to them and they can give information out. This basically mimics the behavior that your readers and viewers are already showing, but you become the purveyor of that news. So I think there are some real possibilities in a breaking news situation, but there are lots of other possibilities as well. The basic idea is: how can you, as a news organization, have a conversation with news consumers? Because that’s essentially what people crave.
Another way to think about this is a dynamic listicle. Most news organizations have published listicles at some point or another. But imagine a listicle that still conveys those short pieces of information, only in a conversational way. Now you’ve got a dynamic listicle, accomplished through a chat bot or a bot platform, that can become personalized for the people using it. That is a great opportunity going forward.
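A dynamic listicle of this kind can be sketched as a tiny keyword-matching bot. This is a hedged illustration in Python; the items, keywords and fallback text are invented placeholders, not any real newsroom content or bot platform’s API:

```python
# Toy "dynamic listicle": each listicle item is keyed by topic keywords,
# and a bot serves the items conversationally instead of as a static page.
# All items and keywords below are illustrative placeholders.

LISTICLE = {
    ("where", "location"): "Item 1: verified details about the location.",
    ("when", "time"): "Item 2: verified timeline of events.",
    ("who", "involved"): "Item 3: what is confirmed about those involved.",
}

def answer(question: str) -> str:
    """Return the first listicle item whose keywords match the question."""
    q = question.lower()
    for keywords, item in LISTICLE.items():
        if any(k in q for k in keywords):
            return item
    return "Nothing is confirmed on that yet; check back for updates."
```

A real deployment would sit behind a messaging platform and let editors update the items as facts are verified; the conversational surface stays the same while the underlying listicle changes.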
Which challenges do “fake news” pose for journalism right now and how can media innovation help address those challenges?
The problem with fake news is much more sophisticated than I think people realize. On the one hand, you have what seem to be credible news organizations. And “credible” is entirely in the eye of the beholder. Here in the United States, we have Breitbart, a conservative-leaning news organization. Some of their stories are rooted in fact; they may have a political slant, but they’re not making things up. But there are plenty of other stories where facts just don’t matter and which are completely made up. So what do you do in a situation where you’ve got a news organization that behaves this way?
That is not the same thing as obviously fake news websites, which for the most part aren’t politically motivated. They’re just trying to make money off of how polarized we all are, not just in the U.S. but in Europe and other parts of the world. So this is the problem: there’s not a single cause. You’ve got some people producing fake news just to make money. You have some people producing fake news for political gain. Publications like Breitbart sometimes publish salacious headlines and misrepresent facts because it helps their traffic, and other times publish perfectly credible, fact-based stories. We have governments producing fake news, like Russia Today, which is a massive propaganda arm of the Russian government. They routinely publish completely false stories, but they sound so convincing that people who aren’t familiar with that logo don’t know the difference. So you have that problem as well. Another issue is people misquoting statistics or misunderstanding them. There are all these different reasons for, and sources of, “fake news,” and they’re all different. So how do you come up with a solution to a multifaceted problem?
Another challenge is that Facebook and other platforms have a disincentive to stop “fake news.” The reason fake news spreads the way it does is that it’s salacious. People get excited about it. But at some point we have to think of the good of our collective democracy and the good of society. It’s time for Facebook and Twitter to step up and do something about this in a meaningful way. Facebook’s reporting system is ridiculously complicated. If you see a “fake news” story, you can technically report it, but it requires four clicks: first you have to click on the tiny ‘V’ at the top right of your screen. On the menu you’re presented with, you have to know what to click on, and that takes you to another menu. Ultimately you can report a fake story, but it’s a hard process.
What Facebook and Twitter could do instead is develop a different algorithmic system, one that demotes or de-prioritizes certain types of news — news that originates from obviously fake sources. If something is suddenly trending, or a post is suddenly popular, you can often go back and see that a bunch of fake accounts originated that content. The originator waits for a verified account to talk about it, then deletes the initial posts, and the story looks legit. That process is called a “hand-off.” We’ve seen Donald Trump retweet and promote content like this, perhaps unwittingly. There are technological solutions to this problem, but they require Google, Facebook and Twitter to decide that it’s in everybody’s best interest to combat the spread of misinformation.
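One hedged sketch of such a demotion rule, in Python. The account fields (`verified`, `account_age_days`) and the 0.5 threshold are my assumptions for illustration, not any platform’s actual ranking system:

```python
# Sketch: weight a trending post by the credibility of the accounts that
# originated it, demoting posts whose originators look inauthentic.
# The fields and threshold here are illustrative assumptions only.

def origin_score(accounts):
    """Fraction of originating accounts passing basic credibility checks."""
    if not accounts:
        return 0.0
    credible = sum(1 for a in accounts
                   if a.get("verified") or a.get("account_age_days", 0) > 365)
    return credible / len(accounts)

def rank_weight(engagement, accounts, threshold=0.5):
    """Scale an engagement-driven rank down when the origin looks fake."""
    score = origin_score(accounts)
    return engagement * score if score < threshold else engagement
```

The point of the design is that raw engagement no longer wins on its own: a post originated by throwaway accounts gets its reach cut even after a verified account picks it up in a “hand-off.”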
Immersive storytelling is no longer considered a curiosity but a standard option for journalism. What’s your advice to editors-in-chief with regard to using immersive storytelling in their newsrooms, both big and small?
Immersive storytelling offers great promise for certain kinds of stories. If you think about the future of virtual reality, we’ve seen some wonderful examples of news organizations publishing VR and 360-degree stories. But my challenge to editors would be to stop and think about the longer-term use case for immersive storytelling. For people to engage with a virtual reality story, they have to be sitting somewhere comfortable enough, and feeling secure enough, to strap themselves into a VR headset. That also means I can’t divide my attention; I have to be totally focused on the story. How often is that going to happen?
At the Future Today Institute, we created a flowchart to help editors figure out whether it’s actually worthwhile to create virtual reality-based content on a given subject. If it’s really the best way to tell a story, then go ahead and do it. Otherwise you wind up pouring too many resources into what seems like a very trendy and cool new storytelling technique. It’d be much better to put those resources into distribution to address the long-term financial problems your news organization probably has.
You’ve said that 99% of news doesn’t fall in the VR category. Why are newsrooms better off going towards AR?
My strong recommendation would be for editors to look into augmented reality. Virtual reality requires a lot of the individual consumer, and it’s expensive. AR is not necessarily ready for the mass market either; right now there are no commercially available headsets. But the advantage of mixed reality is that it doesn’t require sensory deprivation through a wrap-around headset and goggles. We will soon all be generating an invisible information layer, made up of our own personal data and the data coming from news organizations. Editors-in-chief can harness that invisible layer for distribution. That makes much more sense than VR for the long term. But it’s also not something that’s going to magically turn up next year.
Maybe this is the most important message: editors have to start making longer-term investments. Even if they don’t know what the revenue possibilities could be on the other end, they have to start thinking longer term — not 18 months, not five years, but 10 years down the road. I know that’s hard because they’re faced with real problems over the next 12 months, but if they don’t do this, they’re going to find themselves in a much worse place 10 years from now than they are in today.
What are some opportunities and potential pitfalls of Facebook Live?
I know people who work at Facebook, but I don’t know how they’re making decisions about what’s OK to live-stream. Why does a live stream of breastfeeding immediately get taken down while something like torturing another human being somehow doesn’t meet that criterion? That doesn’t make sense to me. But I’m not familiar with their decision-making process. To be honest, I don’t know whether they even have one. Maybe that’s the problem.
For news organizations, though — I saw that torture video on a lot of websites and broadcast on air. So the question is: what’s the story? Is it that four troubled youths did something horrible to another person? Or is the story that it happened on Facebook? To me, there are two stories, and I don’t know how beneficial it is for us to collectively navel-gaze and talk about what Facebook is doing. What would be more helpful is editorials about why this is maybe not a good practice.
I think that one of the most important things that every newsroom leader will have to contend with in the year 2017 and certainly beyond is ethics. Facebook itself may not have a policy, or maybe it does but it’s certainly not enforced in any uniform way, on what makes sense to show live and rebroadcast. News organizations need to start having these conversations.
What’s your take on Facebook Live Audio, which is expected to debut in early 2017?
To me, the question is: will Facebook be the next podcasting platform? Facebook audio could democratize audio in the same way that blogging tools democratized publishing. Plus, you’ve now got this built-in audience. I don’t see that as a huge challenge right now for news organizations. I don’t know that it’s going to have any dramatic impact.
On the other hand, there’s a possibility of it becoming an opportunity. But again, editors-in-chief should think about the use case and everybody’s existing behavior. People are not used to listening to podcasts on Facebook. They’re not used to listening to audio there. They are used to watching videos. If I were the newsroom leader making the strategic decisions, I would focus more on video than audio.
If you were an editor-in-chief at a national news organization, what would be your advice for implementing a news bot?
Bots have to do more than just give people basic information; otherwise they’re not going to get used. Editors-in-chief could develop a bot strategy for the purpose of building their audience and publishing information. They could also use it as a strategy to develop talent within their newsrooms. There are plenty of platforms that require only a tiny bit of programming, but it’s actually not building the bot in code that is so revealing. Instead, it’s building the corpus: the initial data, or initial training set, that powers the bot. It’s such a useful exercise for a journalist to sit alongside somebody more technical to build that corpus. It will illuminate biases that may exist in the newsroom and may have been hidden before. It’ll also help journalists gain a much clearer insight into how machines think, and that is critical going forward. So in a sense, I think having a couple of internal bot hackathons, without intending for the bots to actually be used, could be a tremendous learning opportunity for newsrooms. By the way: every editor-in-chief should participate in that as well. The same holds true for smaller regional newsrooms.
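As a rough illustration of what that corpus-building exercise produces, here is a hedged Python sketch. The intents, example phrasings and the audit rule are invented for illustration and are not tied to any real bot platform:

```python
# A bot corpus as journalists and developers might draft it together:
# intents mapped to example phrasings. Auditing for thinly covered
# intents is one simple way hidden newsroom blind spots can surface.

CORPUS = {
    "school_budget": ["how much does the district spend",
                      "school funding"],
    "crime_stats": ["is crime up", "how many burglaries",
                    "crime rate", "police report numbers"],
}

def audit(corpus, min_examples=3):
    """List intents with fewer than min_examples phrasings."""
    return [intent for intent, examples in corpus.items()
            if len(examples) < min_examples]
```

In this toy corpus the audit flags `school_budget` as thinly covered, which is exactly the kind of gap the exercise is meant to reveal: topics the newsroom assumed readers would ask about in only one or two ways.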
What do you make of object recognition?
This year’s edition of our annual report lists 159 tech trends. That means there’s a lot more for everybody to pay attention to than there ever has been. A key theme for newsrooms going forward is artificial intelligence — but editors-in-chief must learn and understand what AI actually is: a big category with many components, like machine learning, heuristics and logic. I am not advocating that every editor go off to learn advanced machine learning and coding, but they need enough of a basic understanding that they don’t start making promises of AI solving all of their problems.
Autonomy, along with collecting and using data in a smarter way, is also something we’re paying attention to. So is auto-completion, which is where object recognition comes in. Machine learning programs that watch videos, stitch together photos and essentially create videos on the fly — those are all things within our near-term reach.
Lastly — what’s your take on conversational computing?
Here at CES, Amazon Alexa has basically been a part of everything. Conversational platforms that are voice activated are here to stay. Right now, Alexa basically reads news stories — it’s just recitation, not conversation. But that will start to change. Consumers having a conversation with a machine about the news is absolutely the direction that we’re headed.
About Amy Webb
Amy Webb is an author, futurist and founder of the Future Today Institute, a leading future forecasting and strategy firm.
Amy teaches courses on the future of technology at NYU’s Stern School of Business and the future of media at Columbia University. She was a 2014–15 visiting Nieman fellow at Harvard University. Her third book, The Signals Are Talking, describes how to identify seemingly random ideas at the fringe as they converge and begin to move toward the mainstream. For the past 15 years, Amy has been dedicated to helping inform and shape the future of journalism. Her research while at Harvard centered on the future of journalism and journalism education, and resulted in the publication of her second book, How to Make J-School Matter (Again), which won a national Sigma Delta Chi award for distinguished research. In 2016, she was a David Letterman Distinguished Professional Lecturer at Ball State University. Amy works out of FTI offices in New York City and Washington, D.C.
Brodie Fenlon — CBC News
“Everybody has the ability to Facebook Live, but what we can offer the audience that someone on the street can’t is a rich understanding of the story and years of knowledge and journalism built around it.” (Journalism.co.uk, 16 August 2016)
Trent Rohner — Viceland
“As virtual reality becomes more prevalent in journalism, we looked into how the technology is changing the way we experience the news. What is the new role of the reporter when the viewer can experience the event for themselves? Can journalism be more objective when there is no frame and they can see everything?” (AdWeek, 30 November 2016)
Damien Radcliffe — University of Oregon
“Bots and automation are increasingly becoming a part of how journalism is produced and content is being consumed. […] They provide an opportunity to publish information more quickly than humans can.” (TODAY.ng, 5 November 2016)
Zach Seward — Quartz
“Bots are software you can talk to, either through text input or voice. They fit neatly into these new media platforms because, without a graphical interface to click or tap on, the only way to control them is often through conversation. And chatting with a bot — even in a stilted fashion — requires a level of smarts that has come to be known as AI.” (AdWeek, 29 November 2016)