Published in Our.News
Our Q+A with Cindy L. Otis

We caught up with the former CIA analyst and disinformation expert to discuss her new book, “True or False,” censorship vs. labeling content, and why fake news is an “all of us” problem.

Cindy L. Otis with her new book, “True or False: A CIA Analyst’s Guide to Spotting Fake News” (Photo courtesy of Cindy L. Otis)

What do you picture when you hear “CIA analyst”? Someone like Jack Ryan, nonchalantly walking away from an explosion as he puts on his sunglasses?

When Cindy L. Otis started working for the Agency, she too had glamorized notions of secret agents and spies. But she quickly learned her work was about something much more important: discovering the truth.

It was her duty to keep senior government officials informed about current events across the world. To do that, she had to comb through a torrent of information on terrorist attacks, wars and foreign policy from a vast number of sources.

After leaving the CIA, Otis figured she’d publish her first book in the fiction genre. She settled on “True or False: A CIA Analyst’s Guide to Spotting Fake News” — something of a compromise. It’s a non-fiction book, about fiction disguised as non-fiction.

Unlike Otis, most of us aren’t privy to top-secret classified intel. But we are constantly learning new information and making quick decisions about it. Do we trust it? Should we share it? Is it reliable? These are questions Otis helps us answer in “True or False,” which oozes cultural relevance but is also incisively pragmatic, essential to helping us navigate a world caught between a pandemic and an equally dangerous “infodemic.”

We recently caught up with Otis to learn more about her new release, what goes into her methodology, and why we could all benefit from a little more internet sleuthing.

JM: What inspired you to write this book?

CO: When I left the CIA in mid-2017, I kind of reluctantly joined social media and started writing about national security issues and the world of intelligence. I thought there was a lot of important context missing from a lot of the discussion about current events, but also about things like disinformation. The more I did some of that public-facing work, the more questions I got from people — messages, emails, tweets — they just wanted to know, “As someone who worked in the intelligence community, what skills do you employ? What tips and tricks do you have to help us? What are accurate sources? What is worth listening to? What questions should I be asking?”

And so I started thinking about it and I started writing articles on the subject, and then sort of realized: this isn’t an article — this is a book.

Early on in the book, you lay out a really clear definition of fake news. And I think in order to effectively solve any problem, it’s important that we’re working from the same definition. Otherwise, we’re all solving different problems. So how can we ensure that we’re all working from that same definition, especially when so many people are already convinced that they know what fake news is?

Part of what I’ve observed over the last couple of years is that by the time we start to gravitate toward a given term, the industry tends to move on. And we end up parsing things down, thinking we need a separate term for each element of misinformation. Meanwhile, the average information consumer is increasingly confused, because we’ve already told them they need to move on from the term they just started to understand, whether it’s misinformation, disinformation, or malinformation.

So from my perspective, it doesn’t do us a lot of good to keep changing what we’re talking about. I do think it’s incredibly important for us to coalesce, as a community of experts, around an agreed-upon term and then work together to educate the public on that term and that definition. And keep in mind: expecting the average information consumer to understand the difference between seven different terms is unrealistic.

In the book, you give plenty of historical examples of disinformation, including how Ben Franklin used his printing press to influence public opinion during wartime. Do you think “fake news” is actually a bigger problem now, or is it just more visible and accessible because of social media?

I do think it’s worse than it has been, in part, because social media allows it to spread further and faster and reach more people than ever before. But my argument is, the kind of narratives, the kind of emotional triggers, and the content they crafted is really the same today. We’ve seen over time, false information is used, for example, to undermine women in power, to target minority groups and justify violence and oppression against them. All of those things still happen today. So I thought it was important to show the history.

And I also wanted to address another problem, which is the increase in panic and fear felt by people who are consuming information. Are there any sources they can trust? Where do they go for information? Do facts still exist? I think that fear and panic is the opposite of what we want people to feel as they learn how false information is weaponized.

Pew Research Center conducted a survey last year on public attitudes on misinformation. And the data showed that people don’t blame journalists for creating most of the misinformation out there. But most people think the media has the most responsibility to fix the issue, more so than tech companies, the government, and the general public. What are your thoughts on that? Should it be a shared responsibility?

Yeah, I definitely think it should be a shared responsibility. This is an “all of us” problem. Every segment has a role to play. Social media platforms absolutely have a responsibility to promote resources to combat harmful content on their platforms. The federal government has a role in providing and supporting efforts to teach digital media literacy and making sure public schools are funded to do that sort of thing, as well as its regulatory responsibilities and national security initiatives. Academia has its role, as do nonprofits. And I think the point of my book is that the general public has a role as well, given that they’re the ones really creating this fake news content and they’re the ones falling for it.

Has the pandemic shown you anything new about how fake news spreads or how we consume it? Or has it really just reinforced your findings?

I finished the last draft of my book several months before the pandemic started. The pandemic has kicked off this era in history where we’re seeing the largest amount of misinformation ever. So it’s very interesting watching it unfold from that perspective. I think we’re definitely seeing some of the key things in the book, including people who use information as a way of financially profiting, or use it as a weapon to pursue priorities, whether ideological or financial. They’re absolutely taking advantage of this very chaotic time.

I think of my generation and younger generations, and I don’t think any of us thought we’d live through a pandemic. It’s not something we had on our Bingo cards. I think that’s really affected how people are consuming information. And some of the things I’ve seen as a result, with stress and anxiety and fear now at an all-time high, are a lot of people turning to conspiracy theories to find answers to questions they’re not getting from other sources. A lot of people who, under any other circumstances, would probably not gravitate to QAnon or an anti-vax group are increasingly embracing those ideas. I’m seeing people in my own network embrace extreme conspiratorial ideas.

And I also think social media platforms, for the first time, are really starting to understand something a lot of us have been screaming about for a while, which is that a conspiracy theory, or something as seemingly benign as an aggressive meme, can actually cause real-world harm and create very dangerous situations. They’re sort of grappling with understanding that information can lead people to take action. So some of the social media platforms are starting to take action on content they were unwilling to touch before the pandemic.

That brings me to my next question: Facebook and Twitter have been in the news quite a bit lately for censoring content. Do you think labeling problematic content is generally better or worse than censorship?

My approach on how social media companies handle harmful content, discriminatory content and false content, is that they should be providing as much information as possible to users on where their information is coming from, and then allowing users to make decisions about whether they’re going to trust it or not.

It shouldn’t be left up to the average Facebook user, a non-disinformation researcher, someone who doesn’t have an investigative background, to ask, “OK, why am I seeing this ad? Who’s actually paying for it? Who do they work for?”

Recently, Facebook and Twitter have both started labeling most state-run media. That sort of labeling is where I see a lot of value. People shouldn’t have to launch an investigation just to figure out who’s putting content in front of them.

What are your thoughts on our Newstrition approach?

I think it’s a really good, helpful way of approaching the issue. It’s very in line with what I think. The more information that helps people understand who is pushing what, where it’s coming from, who the source is — the better.

If you could design your own Newstrition label, what information would you be most interested to know?

I always want to know who operates the site, if there’s a company or a political organization behind it. I want to know when it was created. I want to know, without having to consult WHOIS, if it’s actually being operated in another country. I’m always interested in business information, if it’s tied back to an LLC, how many authors are on the site. And I’m always digging into ad infrastructure on the site. That’s going a little bit deeper, but it’s something I’m always interested in.

Do you think it matters what the public thinks about the news?

Yeah, one of my great concerns in this time period is that people feel there are no more facts, no more truth, and no more trusted sources of information, so they might as well go believe this person on YouTube with no credentials who’s spouting conspiracies. That’s sort of what it can lead to, so I do think it matters that people feel they have ways of accessing good and accurate information.

When you tell people in your network that they’re sharing fake news, there’s often a guard that goes up, right? They may or may not interpret that as an attack on their character or their intelligence. So how do you respectfully call out friends and family on peddling conspiracy theories and other forms of misinformation?

This is the number one question I get from folks these days. There are a couple of important points. When you’re going into a conversation, make sure you’re not going into it with the idea that you can win an argument. That’s an unhelpful construct. Also, using shame as a motivator is just not going to be successful. I think public callouts will always be less effective than private discussion.

But, I do end up finding some success in showing people where their information comes from. If it’s not coming from good sources, that can be an effective way to have the conversation.

In the book, you give a lot of great, practical tips for spotting fake news, including double-checking the URL, double-checking the date, and seeing who else reported it. These are all really good tips. But I think a lot of people often forgo that entire process because they instantly trust something. And if they instantly trust it, they instantly share it. So, given that people are both busy and impatient, how do we encourage them to slow down after they read the headline?

I wasn’t trying to turn everyone into a disinformation investigator with the book. It’s just about adopting better habits. So I think doing three or four things that take five minutes is going to save you a lot of grief and also make the information environment, and the world in general, a better place, and not just for you.

So the practical tips are important, but it’s also showing the impact of this stuff and therefore why it’s worth it. As more stories come out about people taking action based on false information that they saw, I think it will start to drive home the gravity of this. But all of this is a great challenge, because typically, there aren’t a lot of initiatives aimed at educating adults on these types of things. So we have to figure out how to get these tips, tricks and best practices in front of various demographics.
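The first tip above, double-checking the URL, is concrete enough that parts of it can even be roughed out in code. The sketch below is purely illustrative and is not from Otis’s book: the suspicious-suffix list and the subdomain threshold are assumptions chosen for the example (lookalike domains such as `abcnews.com.co` were a well-known tactic of fake news sites), and a real check would combine this with the other habits she describes.

```python
from urllib.parse import urlparse

# Illustrative list of lookalike suffixes; not an authoritative blocklist.
SUSPICIOUS_SUFFIXES = (".com.co", ".co.com")


def url_red_flags(url: str) -> list[str]:
    """Return simple warning signs found in a news URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    flags = []
    if any(host.endswith(suffix) for suffix in SUSPICIOUS_SUFFIXES):
        flags.append("lookalike domain suffix")
    if host.count(".") >= 3:
        # e.g. real-paper.com.fake.site — an unusually deep subdomain
        flags.append("unusually deep subdomain")
    if parsed.scheme != "https":
        flags.append("no HTTPS")
    return flags
```

None of these signals proves a story is false on its own; the point, as Otis says, is to build the habit of glancing at where content actually comes from before sharing it.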

Cindy, thank you so much for taking the time to chat with us. And congratulations on the release of your new book!

Thanks for reaching out!
