Reality Jamming: The Future of Information Online

Tow Center · Dec 11, 2017

As open communication platforms online become trusted sources of reality, we anticipate a future dominated by targeted digital battles not only for attention but for beliefs—unconstrained by consensus or veracity. In a panel hosted on September 29 at the New School, co-organizers Susan McGregor of the Columbia Graduate School of Journalism and Chris Wiggins of Columbia’s School of Engineering and the Data Science Institute were joined by four others to discuss the future of information online: Joan Donovan, Data & Society; Matt Jones, Columbia’s Department of History; Jonathan Albright, Columbia Graduate School of Journalism; and Sam Thielman, a journalist at Talking Points Memo.

The panel attempted to look ahead; rather than sharing illustrative prior case studies of propaganda, coordinated trolling, or fake news, we tried to focus on dynamics and impact such as:

  • the new scale or capability afforded by technology; why is this the same as or different from the prior century’s marketing and propaganda?
  • the role of solutions: what, if anything, can be done to defend against, inoculate against, or otherwise counter “reality jamming”?
  • what communities or groups are disparately impacted?

To best frame the discussion, consider a definition: “culture jamming” refers to playful attempts, often satirical, to combine media or to communicate digitally in ways that amuse, delight, shock, or inform. Reality jamming differs both in scale (easier to attain now that communications are largely software-based, e.g., social media marketing tools) and in intent. Rather than attempting hoax or satire, actors use a trusted channel to persuade targets to believe something false, fabricated, misattributed, or otherwise unreal. That is, the goal is to saturate, disinform, or jam what would otherwise be reliable sources of truth that people encounter digitally.

Below is a transcript of the Sept. 29 event, edited for clarity.

Chris:

This event started from an email from Justin in August, which was, “Hey, community, we should organize some workshops on interesting topics.” At the time I was interested in fake news and how it might expand; what I was calling reality jamming, meaning overwhelming the set of things that we use to define our reality. Not just in the format of news, but, as you get more and more reality from the palm of your hand, then the palm of your hand is now the attack surface, open to anybody who has user-generated content or any other signals that define your reality. That is going to be a scary way for all of us to define our views of reality.

One way to slice the problem is in terms of past, present, and future. The present is changing by the hour. I’m sure by the time my phone goes off airplane mode, I will have three tweets, possibly from Justin, about how the state of fake news and computational propaganda has changed in the last fifty minutes. So instead, I want to focus on past and future.

The future: What is to be done? In the short term, and in the long term, is there any hope whatsoever? Is there one solution? Is there a spectrum of solutions we should be arguing for?

And the past: To what extent is this new or not new? Is this like dropping leaflets on Japan, as we discussed on yesterday’s phone call? Or is this really qualitatively different? Is there a way we can say “more is different” here?

Matt and Jonathan, could I ask y’all to kick it off with the past?

Jonathan:

There’s been data-driven campaigning for years, but the targeting happening now is quite a bit different. If we think of the older kind of microtargeting as push — you could customize messages going out — the newer kind of targeting is something more akin to a pull system. It pulls you in. It’s emotional.

The difference is the real-time aspect. I read a statistic that between the 2012 and 2016 elections, smartphone ownership in the U.S. increased seventy percent. Those statistics are huge. And just the sheer amount of data that platforms, especially Facebook, have been collecting over time makes this a different kind of targeting than the kind you could use in 2012.

So there are similar methods, but the scale is completely different. And there’s the ability to use tools, what I would call list-matching tools. You can essentially upload a voter roll, with email addresses or zip codes, and it’s processed through Acxiom. And you can use that to find those real people. That can be done at scale. You can upload an Excel file and find whether a person has that email address as their Facebook account. That will instantly identify them as a real person. The power of that is not to be underestimated. It sounds simple, but after that all you’re doing is working to refine your data sets. So there’s a huge difference in scaling, and there’s a huge difference in sharing: we have no idea what data is being shared in the background.
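To make that matching step concrete: custom-audience tools generally require match keys such as email addresses to be normalized and SHA-256 hashed before upload, and the platform then compares the hashes against its own user records. Below is a minimal sketch, in Python, of preparing hashed match keys; the voter_roll.csv filename and its email column are assumptions for illustration.

```python
import csv
import hashlib

def normalize_email(email: str) -> str:
    # Match keys are normalized before hashing: trimmed and lowercased.
    return email.strip().lower()

def hash_key(value: str) -> str:
    # SHA-256 is the usual hashing scheme for custom-audience uploads.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def prepare_match_keys(path: str) -> list[str]:
    # Read a contact list (e.g., an exported voter roll) and emit hashed keys.
    keys = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            email = row.get("email", "")
            if email:
                keys.append(hash_key(normalize_email(email)))
    return keys

if __name__ == "__main__":
    # The platform compares these hashes with hashes of its own users'
    # emails; every collision identifies a real account behind a list entry.
    print(prepare_match_keys("voter_roll.csv")[:3])
```

The sketch is only meant to show how little is required: one column of contact data, hashed, is enough for a platform to resolve list entries to real accounts at scale.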

There are all sorts of ad tech firms and second-party data now. Second-party data is when your phone provider is selling your data and you don’t know. There’s no third party involved; it’s AT&T, or Verizon, or your ISP turning over and selling your, say, location data, even if you have location services turned off on your phone.

The quantity of data we have now is different, as are the kinds of tools we have to target at scale, to find real identities through digital traces.

Chris:

Wonderful. We already have a unit — the “baud” — for symbols per second of information, and we need to recognize that if you’re going to have information per second, you’ll also have lots of disinformation per second.

Jonathan, Susan has put up one of your posts. Anything you want to say to talk us through some of your research on this topic?

Jonathan:

I go to a lot of panels and events, and people argue about framing and about the content aspect, or the delivery of the messaging aspect, the strategic communication element. And people will spend hours arguing about the types of framing used, the terms being used to influence. But I would argue that a lot of these fake news sites, quote unquote, are as much about collecting data as they are about trying to persuade.

We already know that disinformation or misinformation isn’t necessarily always meant to persuade, right? It’s mostly to inject noise. It’s much easier to inject noise into something to confuse and distract than it is to persuade someone. Persuasion takes a lot more effort.

But look at a lot of these sites, and especially at the kinds of tracking on them. ABCnews.com.co, which was one of the go-to examples of fake news sites, has about 160 pieces of ad tech on it—that one site. These are bare-bones, minimalist WordPress sites rolled out at scale, and I would say their content is bait, bait used to A/B test people. The arguments about content are great, but these hoaxes, these viral hoaxes, are often just a means to an end: bait for data collection. Using [Facebook] custom audiences, you can refer back to that website and see how much of a video someone watched, whether five percent or ten percent, and whether they’re in a certain zip code. The kind of refining you can do after someone gets angry and shares an anti-jihad video is quite insightful; you can track that person, or those individuals, from that point onward and just watch where they go.

Just from that point they leave a trail, like a magic Harry Potter map.
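Much of the ad tech being described comes down to tracking pixels: tiny images whose real job is to log who loaded a page, from where, and when. Here is a minimal, self-contained sketch of the idea in Python’s standard library; the port, the vid visitor-ID parameter, and the logged fields are invented for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Smallest valid transparent GIF. Embedding
# <img src="http://tracker.example:8000/px.gif?vid=abc123"> on any page
# fires a request back here whenever that page is loaded.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"
         b"\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # Each hit ties a visitor ID to a page, a browser, and a timestamp;
        # aggregated across many sites, this is the trail described above.
        print({
            "visitor": params.get("vid", ["?"])[0],
            "page": self.headers.get("Referer"),
            "agent": self.headers.get("User-Agent"),
        })
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```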

Matt:

I’m a history professor, and I wanted to talk about a moment in which there were large amounts of text circulating from strange sources that were not verifiable. They were reprinted over and over again, and some of them may have been pushed by state actors to upset the policy of the United States. This concern was echoed by John Adams, who you all know because of Hamilton, who wrote, “The gazettes of Europe,” gazettes being newspapers, “still continue to be employed as the great engines of fraud and imposture to the good people of America.” Engine was, at the time, a technical term meaning a machine that produces energy. “Stock jobbers” (so there’s financial gain) “are not the only people who employ a set of scribblers to invent and publish falsehood for their own peculiar purposes. I verily believe,” he continued, “there are persons in every state employed to select out these things and get them reprinted.”

He’s referring to the circulation of paragraphs that are unverifiable, full of news that carried potential political and religious implications at a moment of strife. Why is this interesting? Because he’s writing at a moment of the creation of new media and a total lack of resolution about how that fits within democratic organization. How is the verification of sources going to happen when you have at once a celebration of new media in a democracy, the questioning of all standard sources of credibility, and a lack of technical means of verification?

Fake news is old news. Now, I don’t want to discount anything that Jonathan just said. Why might this be interesting? Because many of the same questions, about radically democratizing new media and the way it can be leveraged by state actors, financial actors, and others to lead one into a position of uncertainty, are indeed things we are living through very much today.

Fact production and verification, something that in some sense we had some certainty around in the United States in the middle of the twentieth century, are incredibly rare and fragile. They’re precarious. The default position, I want to say, is to be in the situation of fake news. The question then has to be: what kinds of solutions does one need, technical as well as social? I think it’s important to know that the solution in the nineteenth century, which I’m not going to go into, was not the blockchain of the nineteenth century. The creation of new forms of newspapers that comported with a democratic polity by the twentieth century involved both the constitution of new social organizations of credibility (something like The New York Times, or the solidity of academic institutions, which came to fit into democratic organizations) and also new technologies of potential verification, whether the byline in the newspaper or other forms of verification.

That is, it was about a moment of technologies and institutions of credibility that comport with the radical democratic order. Because what we’re living through is the way in which the possibilities of the massively decentralized web have been exploited by people who want to take advantage of it. The celebration that you would find in, say, John Perry Barlow on cyberspace in the late nineties was something that state actors recognized, feared, and have used as the grounds for an overinflation of sovereignty. The fearfulness about sovereignty being destroyed by cyberspace meant room for its exploitation. And that’s very much what Adams was getting at when he said that persons in every state are employed.

The U.S. Government used to define this, in the DOD definition, as perception management. And I won’t read this whole thing, but this is U.S. Government doctrine from 2001 about doing this: “Actions to convey or deny selected information to influence their emotions, motives, and objective reasoning.” The U.S. is famously bad at this, compared with Russian doctrine.

It’s now been obliterated by new euphemisms. But clearly what’s novel here is that there’s a particular toxicity that has happened when that kind of state operation is combined with massively decentralized democratic organs and the new analytic technologies that Jonathan was talking about.

Now, Chris wanted us very much to talk about what we can do, so I want to go back to an even greater moment of epistemic uncertainty. At the end of the seventeenth century, there was massive uncertainty about who owned anything, whether it was the church, or the state, or individuals. There had been a discovery of a vast number of forgeries of medieval documents, forgeries from the fourth century onward, and everything was called into question.

Now, out of this came an incredibly sleepy and dull discipline called “diplomatics,” which is the study of, essentially, the metadata of documents: what paper they’re made of, the kind of ink, the spelling of the words. It was a highly technical discipline and it’s the heart of the development of what we think of as forensics. And it turns out, if you study the history of this, the people you most want to talk to if you want to learn how to authenticate or de-authenticate a document are the forgers themselves.

Tony Grafton, a professor at Princeton, has written wonderfully on this. But the entire set of technologies for working against forgeries, social technologies as well as technical ones, was developed in alliance with the forgers. I think there’s a lot to that. So for every story about how AI is going to make it impossible for a fake video to go undetected, there’s an opportunity to leverage that, to counter-exploit the exploit.

Chris:

There’s so much to work with there! Two great takeaways from what you said. The first concerns the rates of production and verification of facts: I think the rate of production of facts has increased, but the rate of verification of facts has not. So we get to a point where it’s easy to make crap facts, but really slow for people to verify them. We were talking earlier this morning about how much work it takes to actually track down where a Photoshopped source came from. If you’re a technosolutionist like me, that’s one place where I’m like, “OK, I can write code to [whatever].”

And the second takeaway is this idea that we need white-hat fake news people. Nobody is better equipped to combat fake news than fake newsers, so how do we incentivize people to create a federation of white-hat fake newsers?

Sam:

I wanted to amen Jonathan’s point, that there’s this gigantic scale that is, with greatest respect, historically unprecedented. I wanted to ask you how malicious actors use that scale.

Joan:

Maybe I’ll just come out with a really contentious point: everything written and published online is fake news. Right? Because what you get is citizen-run media, and even though you have these large news organizations that are publishing things online, once they’re filtered through digital intermediaries like Google News, like Facebook, like Twitter, what you end up seeing is these cards where all content looks the same.

So the link that gets shared from The Guardian looks like the link that gets shared from The New York Times, looks like the link that gets shared from altright.com, looks like the link that gets shared from the Daily Stormer, looks like the link that gets shared from the WordPress blog. And as a result, you end up in a situation where people don’t really even reference the media institutions anymore. They say things like, “I read it on Facebook.” “I read it on Twitter.” And when you hear someone say that, you don’t instantly think: but where did you read it, though? And alongside those WordPress blogs, we’re also in this moment of citizen journalism, where hot takes come up next to articles, along with some more measured cold takes, which I appreciate. Maybe they’re a few days late to the party, but the cold take is something that I read.

In our research about the political opportunities posed by the change in the infrastructure of news, we think about groups of people who are actually very good social engineers. They know how to make content look good. And then two forms of legitimation end up built into the content card itself. One is repetition: you can see how many people share it, how many people like it. And when these numbers climb into the tens of thousands, as on YouTube videos where you see that 80,000 people have watched something, it looks like it’s probably a big deal. Maybe the YouTube video is of an event last week, like in Berkeley, where the Milo Yiannopoulos fiasco played out. There were maybe 300 people protesting overall, but supposedly 80,000 or 90,000 people were watching online. We know that people game these metrics, and they game them to make other people believe the event is more important than it is.

And then we have this massive repetition, where you start seeing the same links across platforms. Maybe you see it on Facebook, then on Twitter, and between the repetition and the metrics you start to get a sense of it being a story. You see lots of other people sharing it, retweeting it. You then become part of the process of amplification at scale, because then you think this is a big deal.

So, for instance, yesterday … Was it even yesterday? I know we’re not supposed to talk about the present, but I’ve got a hair across my ass about the Black Lives Matter Russia thing right now, and I’m very, very angry about the fact that you can purchase an ad and place it in people’s timelines in such a way as to sow dissent and rage. And nobody can verify whether that thing is officially from that organization. I can’t make an ad that defames Pepsi and put it on cable, right? There are gatekeepers for that. But here we have few gatekeepers, and we’ve meshed together all of the different ways in which we consume reality, from the things our grandma says to us, to first-person accounts, to advertising, to blogs. We end up being in on the … I don’t want to call it a game, because the Black Lives Matter Facebook scandal, I think, had real-life consequences. People are still serving jail time for things that happened in Ferguson and things that happened in Baltimore.

And so the Russian paying of ads that sow dissent, you know, makes it more and more difficult for people in those areas, community organizers, to argue for rethinking the narrative of what happened, and also getting their friends out of jail. Right?

And so that’s all to say that there are things happening, political opportunities that are part of the infrastructure itself, and if we don’t hold these platform companies accountable for that, it will imperil democracy. Of course, we know this. But it’s also coming for the economy, right? And this is another thing that we don’t really think about. When we don’t think about separating, parsing, legitimating, verifying, and also distributing information, we become beholden to systems that we have no control over.

Sam:

I’ve been a reporter for, I guess, twelve years, and those twelve years have seen the collapse of the news infrastructure, basically. I’ve seen everybody who’s my senior at these publications laid off. That’s all the institutional knowledge.

And Joan, I don’t want to spoil your research, but Joan was telling me there are conventions the older reporters know that younger reporters don’t, even if they’re very smart, know how to do what they’re doing, and went to the best journalism schools. So that’s a real loss. Seeing these places atomize, so that all the work is being done by freelancers who are writing for twenty-five dollars an article and thus trying to pound out five or ten articles a day, creates a systemic effect where nobody’s verifying anything except through other news organizations. In the best case, that’s going to trickle up to four or five newspapers that have the resources to actually go out and do original reporting. In the worst case, which is much more likely, it’s going to trickle down to jerks and assholes who are creating their own news and pretending it comes from places that are reputable. Excuse me.

So I think the problem here, is that we have these gigantic things that operate at this incredible scale. Facebook has one employee for every 77,000 users. Twitter, I’m not sure what the ratio is, but I’m sure it’s not better and it may be worse. And that’s daily active users, that’s not accounts. That’s people who are on Facebook every day.

These platforms are created explicitly to avoid human intervention, and that’s an important thing to understand because they’re not domain registrars. They’re not just allowing people to speak whenever they want about whatever they want. They’re already carefully controlling what it is you can and can’t say, and putting things into your newsfeed based on the way you respond to them. There’s a suggestion that regulating Facebook would stifle free speech, which I think is a worthy thing to be concerned about, but also does not quite understand the purpose of Facebook, which is to part you from your money. That’s the only reason Facebook exists. The problem comes when you have people interacting with Facebook in a sophisticated way who don’t want money, who want something different from money. Who want to amplify perspectives, like Nazism, that have been kept out of responsible news organizations because those news organizations understand that amplification increases the viability of that movement. There are people who want to mess with political campaigns, as we’ve seen from Russia.

So there is a way to interact with a billion people, even if that interaction is necessarily delimited by the platform, that will get you not money but power. And Facebook has not accounted for that, and Twitter has not accounted for that. And the news media is collapsing at a rate that is proportional to the rise of Facebook and Twitter in a way that is preventing it from countering it effectively. Not to spread despair.

Chris:

Now, we’re nearly at the midpoint, so I wanted to switch from the past to the future. I want to talk about people’s ideas for solutions, long term and short term.

Jonathan:

You know, I get this question a lot. When reporters call me, they ask, “What can we do?” And I hate to say it, but we need to define the problem. We need to understand its scale and scope. There’s going to be a lot of money thrown around. People are looking to throw money at foundations, even the State Department, the government. But we really have to understand what’s going on before we can move forward with initiatives to try to stop this. Otherwise, I think a lot of effort is going to be wasted.

We need to continue to watch and analyze and see what’s going on right now. Just the past few days have given us more information than we’ve probably had in almost a year. And this is a legal process, based on subpoenas, or likely subpoenas. More information is going to come to light. Every day, or every week, there will be revelations, and possibly other people will get pulled into this. It’s not unlikely that Google will get embroiled in this somehow.

In my own research, I found that YouTube was one of the centers of all this. It’s an emotional training ground; you can recruit people through Facebook into a YouTube video. It’s like a propaganda education system. And Google owns DoubleClick and other ad tech organizations, which makes its involvement all the more likely.

But I think that defining the problem is very, very important for moving toward solutions, and toward safeguarding the open web and safeguarding privacy. If we can reframe this as a personal information and data issue, we get past the tropes of “Oh, how do we regulate Facebook? Can Facebook self-regulate?” The fact that personal information and personal data in the United States are more or less managed by the Federal Trade Commission suggests right there that people’s information is treated like commerce, like a product.

Even if we ask companies for our data and they give us a table of our categories, we’ll never know the information they use to target us. We’ll never know who is targeting us and how. Even if we get our demographics list from whatever company, the methods by which we’re being influenced are not known, because they’re proprietary.

So I think that, yeah, maybe a privacy commission. I don’t know. Maybe an arm of the FTC that deals specifically with election data. Maybe a Department of Homeland Security online because we’re strengthening our physical borders, right? It’s not really the physical borders that we need to worry about. So maybe there needs to be a department of … I don’t know what it would be. The Department of Online Homeland Security, DOHS.

Joan:

I’m gonna go ahead and just say “no, thanks” to the government regulation being involved in that aspect.

Chris:

Exactly. I have to wonder: Jonathan’s solutions all involve government. Depending on what city you’re in, you can imagine people being like, “OK, well we’re going to disrupt fake news” or something like that. There’s probably a broad spectrum of possible solutions.

Joan:

I … yeah. We’re in this situation because of the government. For years under Obama we had Occupy, Black Lives Matter, Justice for Trayvon Martin, just to name a few movements that were antagonistic to the government. But not all movements can be distrustful of the government and fight for social change in this way. The Dreamers did register en masse on these government forms because they needed a legal path to citizenship, and you cannot win that without cooperating with the government. Right after the election, there was paranoia around “how do I make my phone more secure?” and “how do I stop leaking data constantly?” Like, why is this thing [the smartphone] a faucet? Right? This was the fear that the government was going to use my information and my participation to track me.

Maybe handing over power to the government puts us in this position too, where we’re giving them information. Right? Amelia Acker and I were talking about this. Why don’t we think about creating standards for our telephones, and our computers, so that they aren’t so leaky? Right? So that we know what’s coming in and what’s going out, and that there are more controls over that.

The other thing that I’ve done for years is that I don’t register things in my own name—when I sign up for things on my telephone, and my computer and things—but I know that the ISP is the big problem. There’s only so much obfuscation you can do with digital tools; eventually you need to connect into the tubes and wires. And it’s at that place that they capture your identity and then link things up. You can have fifty different email accounts, but it’s not really going to make a huge difference.

How do you instill better standards? Can you bring Silicon Valley to the table and say, no more? At the same time, we’re watching a burgeoning alt-tech movement that has outpaced the left. Right after Charlottesville, during this moment of corporate denial of service, an entire disaffected tech class emerged that is now aligned with white supremacists, with men’s rights activists, with trolls who have been part of the fake news problem, and that is trying to build an alternative internet based on the values of, quote unquote, free speech. And they are going to argue very strongly and persuasively for legislating silicon value …

Sam:

That’s a high-quality Freudian slip.

Joan:

I know, right? Silicon Valley. But we’re going to see some of the leftists dragged along this path of saying that what we need online is free speech instead of policies against hate speech and hate symbols. The legislation the far right wants is basically to make everything work like the telephone, and they want to ignore the new possibilities of amplification, because that’s their infrastructural opportunity: the political opportunity of amplification.

In the coming year we’re going to see a big push for net neutrality from all sides. People are going to be really like, yeah, it seems like a good idea. Twitter’s a lot like the phone, you know? Everybody should have a right. We have to then think about what it means: the right to an audience? And what kind of world do we get when everyone is acting, or playing, or ambivalent, or acting like a reality star?

Sam:

Right. You can’t treat these platforms like they’re neutral. You often hear people who slept through everything except day one of microeconomics say that there’s a free market. Of course, there is no free market. There are many markets, and they’re all established by the first person who goes into the market. That person says, “I have something to sell, here it is. Also, I’m establishing rules for this market and all of my competitors have to abide by those rules.” That’s the way most markets work in this country.

Twitter has pre-established zones of speech. Facebook is a different proposition, actually. But they both create these questions about what kind of speech you privilege and whether or not these things are platforms or publishers. I contend that you’re a publisher if you’re policing speech in any way.

I do think it’s important, though, to have some kind of regulation. I am very sympathetic to Jonathan’s point, having spent two years covering ad technology, that there’s too much data out there. There was, at one point, an interesting article in The Times about what the moral obligation was at these companies when they figured out they could fairly accurately predict whether or not somebody was about to commit suicide. They’ve definitely figured out how to tell whether or not you’re pregnant, and occasionally inform people’s parents before …

Chris:

Yeah. The Duhigg Target story.

Sam:

Yeah, exactly, which I don’t think I have time to tell.

Chris:

OK, but solutions, Sam, solutions.

Sam:

Yeah. Solutions, I think, are purgative. I think you have to go through and purge all of this data every few years. And it doesn’t have to be everything, but at some point, there’s no reason the OPM hack contained people’s biometrics from 1980. That’s nonsense. You don’t need people’s biometrics from 1980. You need to dump all that stuff after a certain period of time. But it is monetizable into infinity, and those companies are never going to do anything that interferes with their profit motive. They have to be forced by society.
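As a sketch of what “dump all that stuff after a certain period of time” might look like mechanically, here is a minimal retention job in Python. The events table, its ISO-formatted created_at column, and the three-year window are all hypothetical; the point is only that a retention window is a few lines of code once a policy exists.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 3 * 365  # hypothetical policy window

def purge_old_records(db_path: str) -> int:
    """Delete rows older than the retention window; return how many."""
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=RETENTION_DAYS)).isoformat()
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        deleted = conn.execute(
            "DELETE FROM events WHERE created_at < ?", (cutoff,)
        ).rowcount
    conn.close()
    return deleted

if __name__ == "__main__":
    print(f"purged {purge_old_records('analytics.db')} rows")
```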

That was one of the most disturbing things Zuckerberg said the other day, that he didn’t want to police, he didn’t want to have people check whether or not it was OK to say something, and he didn’t think society should want him to. Those are very disturbing words.

Jonathan:

I’m going to go old-school and take it back to the open web idea. The further things move into apps, the less we’re going to know. Extracting data from apps is extremely complicated; we have to capture traffic packets. The open web is being systematically targeted. Over the years, Facebook and Twitter have systematically deprioritized the URL, especially with things like embeds.

Platforms have also co-opted the URL by converting it to a short link. The URL is not a platform’s friend, because it takes you outside of that platform; in most cases, a URL will take you outside of the property. We also lose signals when we have embeds. This goes back to what Joan was saying. Even in iMessage now, there are embeds. When we see a domain, there is some trust involved; there’s signaling that lets us make some kind of decision about whether a source is reputable or whether it’s dangerous.
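One small counter to that loss of signal: redirect chains can be followed programmatically to recover the destination a short link hides. Below is a minimal sketch using the third-party requests library; the bit.ly address in the example is a placeholder, not a real link.

```python
from urllib.parse import urlparse

import requests  # third-party: pip install requests

def expand_short_link(url: str, timeout: float = 5.0) -> str:
    """Follow the redirect chain and return the domain it ends at,
    restoring the signal the shortener strips away."""
    # Some servers refuse HEAD requests; a streamed GET would be a
    # reasonable fallback that avoids downloading the body.
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    return urlparse(resp.url).netloc

if __name__ == "__main__":
    print(expand_short_link("https://bit.ly/example"))  # placeholder
```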

Chris:

Those are all good problems. Solutions?

Jonathan:

Showing your source, like the links. It’s very simple. A second solution would be removing emotional sharing mechanisms, or adding non-emotional ones. Everything on Facebook is based on a loaded emotion, and a like now counts for less than the reaction emojis in the weighting algorithm, though I’m not sure exactly by how much. So the reaction emojis actually have more impact in the news feed than a regular like.

Sam:

Wow.

Joan:

I remember when bit.ly came on the scene and it was really confusing. The history of the web is really tied to a bunch of librarians wanting us to have formatting information and knowledge so that we could share things consistently. Right? And so the hyperlink was an address. And somewhere along the way, URLs started to become a form of verification, and then they became brands, signals pointing toward corporations and things. We used to believe the URL would help us organize knowledge, help us with our universities, help us archive knowledge over time.

Some of us still feel that way about the URL. But at the same time, I just watch, and watch, and watch domain squatting and phishing attempts. There has got to be another system of verification layered into the URL. Recently, reporters were talking about wanting blue check marks on Google to say: this is a verified company or corporation, a verified news source, whatever. As weird as it is, that blue check mark does help us think about gatekeepers, about sources of verification and legitimation.

Sam:

Another solution: it’s something all you can go out and do, which is lose trust in Facebook and Twitter. If the public loses trust in Facebook and Twitter, and you all go home and delete your accounts and throw your phones into the sea, the world will be a much better place.

Chris:

There’s a big space of what it might mean to “solve fake news.” Jonathan points out, first of all, that we should define the problem. That’s hard. So let me do something cowardly and not try to do that. Instead, let’s focus on parsing possible ways you can think about solutions. One way to parse the solution space is: Who’s going to do the solving? Sam brings out that one set of people who could do the solving is us, the consumers. Jonathan started out by suggesting the government should do the solving. Nobody here brought up disrupting the market. Maybe if we were on a different peninsula somewhere else in America, some of you would say we should disrupt and disintermediate the platform companies.

In general, this gets back to a conversation Matt and I have been having for years about the three-party game that shapes the innovation economy. In Janeway’s book on doing capitalism in the innovation economy, there’s a three-party game among, first, the government, which regulates things and, to some extent, sets the guardrails for the market; second, the private companies that shape what the products are; and then the individuals. The individuals play three roles. They are the consumers, right? They’re giving money to these companies. They are the users, and if the users flee, then there is no product. And they’re also the engineers; that is, they are the people who actually populate these companies.

One set of solutions that we haven’t really talked about rests on the fact that these companies have employees. You can imagine situations in history in which scientists or technologists created a technology, afterwards realized that there were negative consequences, and took a role of public moral leadership to shape the way those technologies are used. Personally, I have not seen an abundance of that in the last couple of years among engineers building AI tools. Maybe there will be in the future.

Joan pointed out that, actually, it’s worse than that. In addition to an absence of people looking to correct the ways these capabilities are being used with mal intent, there actually is an alt-tech movement, and plenty of engineers who are saying, “Yeah, we like it this way; let’s build out more technology to pour kerosene on the fire.”

Joan:

Can I just add one other actor? So if y’all haven’t heard of two organizations, one is called the Center for Media Justice, which does a lot of wrangling around tech and policy and has a different kind of perspective as a civil society organization that is really trying to intervene in the policy realm, but also is trying to understand what a people’s terms of service would look like. If we were to define our relationship to these companies, what would that look like? What could we push for? What is possible?

They recognize why we end up needing a black press, for instance: none of these companies, corporations, or newsrooms always acts with the best of intentions. The Center for Media Justice is a really important resource for me when I’m trying to think through the complications of doing it this way, and who we silence when we decide that this is where legislation needs to go.

The other organization is called Color of Change, which was running a blood-money campaign right as Charlottesville happened, trying to get payment processors (MasterCard, Visa, Discover, and PayPal) to stop funneling money to white nationalist companies online. There’s a huge crowdfunding issue around this as well: these white nationalists do make and use each other’s products, and they’re investing heavily in new infrastructure right now. Cutting off the servicers was Color of Change’s move in the game. So while the media is covering the battle between white nationalists and anarchists, what’s really happening in the social movement fight is strategic: stopping the money. And if you stop the flow of money, we know they can’t build the infrastructure.

Jonathan:

One of the things I proposed to Facebook, months ago, was, what if there was a trust emoji? And I have no idea if it would work, but it would be interesting to test out what a trust emoji would do.

Chris:

It would be a blue check at the level of the individual post.

Jonathan:

The blue check is not meant, technically, to imply authority or influence. Unfortunately, it does. The only thing it’s supposed to do is authenticate identity; that’s the platform’s argument. Over time, the blue check mark has come to symbolize influence, authority, weight in the network.

Sam:

Well, it literally does that on Twitter.

Jonathan:

Yeah. So it gives you different options. There’s the fact that we have people like Richard Spencer who have blue check marks. And there are arguments that say, well, if he didn’t have a blue check mark, it would be even worse, because people would pose as Richard Spencer and incite riots or even more violence. You get into these arguments.

Matt:

I think the blue check mark is a way of conferring authority through trust in who counts as an authority, and that’s one part of a whole range of things you would want in verification. It’s a very thin reed to rest on when you aren’t able to go beyond the flat plane of the news story, as Joan was saying.

Sam:

But it is also the expression of an internal editorial voice.

Chris:

Yet another indication that these are not neutral platforms.

You democratize the production of information by allowing anybody to contribute content. There’s no way for anybody to consume all that content, so you have aggregator companies. Those aggregator companies are not neutral; despite what they say, they assume all sorts of editorial functions. And the way they scale the editorial function is to use user signals. That opens up the possibility for mal actors to shape the way content is curated, in addition to the mal actors generating lots of content.
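A toy sketch of that last point, with invented posts and weights (real ranking systems are proprietary): when curation is a function of user signals, fabricated signals reorder the feed exactly as genuine ones do.

```python
from dataclasses import dataclass

# Invented weights for illustration only.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "comments": 2.0}

@dataclass
class Post:
    title: str
    likes: int = 0
    shares: int = 0
    comments: int = 0

    def score(self) -> float:
        return (WEIGHTS["likes"] * self.likes
                + WEIGHTS["shares"] * self.shares
                + WEIGHTS["comments"] * self.comments)

feed = [
    Post("Local school budget passes", likes=120, shares=10, comments=15),
    Post("Fabricated outrage story", likes=5, shares=1, comments=2),
]

def ranked(posts):
    # The "editorial" decision is just a sort over engagement signals.
    return [p.title for p in sorted(posts, key=Post.score, reverse=True)]

print(ranked(feed))   # the real story leads

# A small botnet "likes" and shares the fabricated story ...
feed[1].likes += 400
feed[1].shares += 150

print(ranked(feed))   # ... and the curation now promotes it, no editor involved
```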

Question from the audience:

Most of the solutions that you guys talked about had some iota of regulation in them, right? One of the things we didn’t talk about is that the cost of producing content is effectively zero. The amount of content is effectively infinite at this point, so it’s arguably impossible to regulate.

Shouldn’t we be talking about more how we, as individual consumers of information, alter how we do that? How do we change how we look at things through an editorial lens? Do you guys have any opinions on that?

Sam:

I mean, I really hope that happens. It’s not just individual jerks on YouTube, although there is a surprising and growing number of people there producing incredibly influential work. It’s also the kind of alt-right news infrastructure that goes back to the first Clinton administration, I would say, and you can trace it all the way back to William F. Buckley at the National Review. There are all kinds of dog whistles, and overt white supremacy, in a lot of this material going back seventy years. But there was a period during the Clinton administration when until-then-staid outlets like The American Spectator started publishing insane things that didn’t comport with reality on any level. That has metastasized into a bunch of outlets that employ people who call themselves reporters, who even sometimes go on to work for other news organizations, and who just traffic in mendacious nonsense.

The Daily Caller, Breitbart, and so on are not quite on the same level as those zero-cost content aggregators, but morally, and in terms of the amount of truth in their writing, they are often exactly equal. I hope people will learn whom to trust, but I fear that people want to see their biases reinforced. And I’m no different; I try to be different, but it’s hard. I fear that will end up precluding any change.

Joan:

I’ll just say really quickly that what we find is that media literacy doesn’t work. And it doesn’t work because when people do try, what they do is self-investigate, and then you end up with Pizzagate, where people say, “I’m just going to go drive by a pizza parlor and see if there’s a child sex dungeon or not.” Right? We find that when people start self-investigating conspiratorial things online, they end up getting pulled into conspiracy theories and conspiracy message boards. Especially with things that could be real, or that seem outlandish: once you get into a community of people ready to validate that worldview, or that story, it goes downhill fast.

Chris:

In the future, if we create a sustainable business model for local news, you would experience these outlets digitally, but you would also run into them at the store or see them at local events. That multi-channel engagement with a source gives it a kind of veracity that a graphically denuded card, one that strips out all branding so you don’t even bother to read the source, cannot convey. It would return us to having some sense of veracity in sources.

Question from the audience:

Yes, thank you. I wanted to problematize the frame of the conversation for a second, because we’re talking about something as a problem that has solutions. That implies this is a bug that can be worked out, whereas isn’t this actually, as Matt was saying, a feature of Western civilization? Plato famously said—I’m going to butcher the quote—that writing is the enemy of truth. Fifty percent of peer-reviewed results can’t be replicated. We talked about whether this is a difference in kind or a difference in scale, but are we actually just witnessing civilization becoming woke to the tenuous and precarious nature of objective reality?

Chris:

There are, I think, qualitative differences in the intent of the source. Sometimes the source is actually intending to mislead. Sometimes the source thinks it’s true. Sometimes the source is purveying bullshit, to use the Frankfurtian technical definition of bullshit: somebody says something and doesn’t care whether or not it’s true. Other comments?

Joan:

I’m always sympathetic to that idea; early blogs and citizen journalism were where I started getting my news when I was younger, and I felt very connected to local events because people were reporting on the scene. What I’m trying to say is that we understand, and I think the broad public understands, that media is constructed and that it’s not always going to know everything about a situation. You still appreciate the gloss, at least so that you have a sense of an event, of what happened, you know, a forest fire or a hurricane, things like that. And it brings us into a common sense of reality where at least we all know the event happened.

But we’re tracking disinformation campaigns where people are faking chemical explosions, right? And that is different: it jams reality, rather than merely getting the details of an event wrong. When the event doesn’t happen but appears online as if it did, those are the kinds of things I’m thinking about as the problem.

Question from the audience:

What’s the difference of what you’re talking about between the start of the Iraq war and when Hearst got us into Cuba? Same thing. No difference.

Matt:

Those are, of course, explicit and well-organized campaigns to change public opinion and emotion. One of the things that’s very different, and a lot of people have struggled with understanding this in the Russian situation, is the extent to which this is explicit, stated, doctrinal policy of the Russian Federation coming out of the Soviet Union, and yet it’s amplified both by people in Russia and by a whole series of social phenomena that have made it remarkably stronger than anyone could have expected.

There are definitely people in our government who are upset that they can’t do more of this, because the legal infrastructure of the United States is very restrictive. When we learn more about what happened in the Russian hack of our election, a lot of it is not going to have been directed centrally, and yet it’s going to have had the effects that people like all of you have tracked so well; we’re going to understand that this is qualitatively different.

Another thing I think is quite important is this relativism. One of the largest goals of disinformation is simply a suspension of belief: a disbelief that there could be any possibility of verifying media at all, even with potential solutions. That, I think, is different from what you saw with the Maine and what you saw with Iraq. Because those were facts about which there was propaganda, rather than a world in which you are constantly suspended in the inability to know anything with certainty except what is immediately around you.
