The Fragmentation of Truth

danah boyd
Data & Society: Points
Apr 24, 2019 · 29 min read

Editor’s note: On February 23, 2019, Data & Society Founder and President danah boyd gave a talk at the Knight Media Forum. This text is the written version of her talk, which you can watch here.

Good morning! I’m going to begin today with a little bit of a stretching exercise. Because I think it’s a moment to take the temperature of this room.

1. How many of you consume news every day? [All hands raise.]

Good! You’re in the right room. Otherwise — if you didn’t raise your hand — you might want to exit or you’re going to be in trouble.

2. How many of you watch YouTube every day? [A few dozen hands raise.]

Okay.

3. How many of you consume your news primarily on YouTube? [A few hands.]

I’m here to officially tell you that you’re old. [Laughter]

YouTube

More seriously, my talk today begins with a discussion of YouTube because of its significant role in rearranging the information landscape. Tremendous ink has been spilled talking about Facebook and Twitter’s roles in the information and news landscape. Both have been dragged into Congress. But far too few people understand YouTube’s role in the information ecosystem, let alone the platform’s unique vulnerabilities, and what they show us about the current state of misinformation. The vulnerabilities of YouTube’s architecture allow media manipulators to shape public knowledge in ways that are profound. So, starting with YouTube, I’m going to talk through some of these exploits. I’ll back up from there to talk about the broader state of vulnerability within the information ecosystem and what we can do about it. I want you to understand this so that you channel your energies in the right direction to rebuild American communities.

If you talk to a person under the age of 25, you’ll quickly learn that they visit YouTube every day. It’s their MTV; they use it to watch music videos. But it’s also their primary search engine. Want to know how to tie a tie? You go to YouTube. How to cook pasta? Go to YouTube. How to do that calculus assignment? YouTube. As a result, YouTube is also the place where many young people start to consume news. It’s where they start to pay attention to broader conversations. But the news that they get there may not be what you imagine it to be.

The vulnerabilities of YouTube’s architecture allow media manipulators to shape public knowledge in ways that are profound.

Consider PewDiePie. How many of you watch his show on a regular basis? Do you even know who he is? [Only my friends raise their hands.] Your lack of familiarity with him should concern you. He hosts the most subscribed-to channel on YouTube, is one of the largest digital influencers on the internet, and makes millions of dollars per year shaping culture and the public’s interpretation of news events. PewDiePie is the screen name of Felix Kjellberg, a Swedish gamer who began his YouTube channel by posting gaming videos. Since 2013, he has had more subscribers on YouTube than anyone else (although he’s currently in a tight contest with a Bollywood account). He currently has over 87 million subscribers. He’s become famous for commenting on cultural and social issues. For the last year, he has hosted a comedic parody of CNN called “PewNews,” which he describes as “the most respected, trusted news source of all time.” Millions of people watch each of his videos. Thousands comment on every one of them. PewDiePie has become a hero to many young people, most notably young white men. His style is often crass, and he is frequently surrounded by controversy over comments that propagate tacit anti-Semitic and racist messages, although he vows that he’s not a Nazi. Still, most white nationalists and white supremacists appreciate his dog whistles.

Think about that. It is normal for the most widely watched YouTuber — a gamer who comments on culture on the primary search engine for under-twenty-fives — to espouse racist and hateful commentary on a daily basis.

Note to readers: This talk was written and given prior to the terrorist attack in Christchurch, New Zealand. Shortly before starting to murder innocent people, the terrorist encouraged those watching the livestream to “watch PewDiePie.” While the news media commonly interpreted this call-out as indicative of the terrorist’s appreciation, the terrorist appeared to be trolling PewDiePie. Kjellberg responded to the call-out with sadness and horror, explicitly rejecting the terrorist’s actions, values, and message.

Most people don’t know how important YouTube is at the intersection of news and search. They think YouTube is for watching music videos or Super Bowl ads. But as I mentioned, it’s a search engine. It’s also a recommendation engine. And it’s a social media site. It has discussion forums, influencers, and an auto-play feature that keeps people engaged for hours, watching video after video. It also has significant problems that can be easily exploited. And those exploits tell us about what’s going on today.

I’m going to start with one specific kind of exploit that goes across all sorts of search environments, including YouTube: “data voids.”

Search Vulnerability: Data Voids

“Data voids” is a term coined by Michael Golebiewski at Bing, which he uses to describe what happens when there is no high-quality information available for a search engine to return for a particular query. What happens when you search for something and the search engine doesn’t know what to do with it? It’s one thing to search for “basketball scores.” It’s another thing to search for something esoteric, some unique phrase with little to no content. That’s when exploitation is possible. And if you understand how this exploitation works, you can begin to see some of the vulnerabilities in our information architecture. Michael and I have four different kinds of data voids with which we’re obsessed. Let’s walk through them.

1. Breaking News.

On November 5, 2017, many news outlets blasted a notification to subscribers — many of you woke up to it that Sunday or saw it that day — alerting them that there was an active shooting in a church in Sutherland Springs, Texas. The remarkable thing about a town like Sutherland Springs, Texas, is that from what we can tell by going through Google and Bing records, no one had searched for that town in the two years before the shooting. Even if you are in Texas, you might not know about this town. So, what did people get when they started searching for it in November 2017? They got auto-generated content: Zillow listings. Census data. Things that just sort of exist for every town in this country, even if no one ever searches for them. Google has content for millions of queries that no one searches, simply as a byproduct of all of the data that it collects.

Where there’s limited data and a spike in search queries, Google quickly turns to other sources to offer fresher content.

In situations like this, where there’s limited data and a spike in search queries, Google quickly turns to other sources to offer fresher content. It pulls in data from Twitter and Reddit to try to get real-time information. But YouTube works differently. YouTube doesn’t pull from other sites; it waits for users to upload content directly to the platform. And in situations like this, what happens is that different types of adversarial actors try to jump on that news story in order to shape the landscape, both in text and in video. As journalists were scrambling to understand what was happening, far-right groups, including self-avowed white nationalists, were coordinating online to shape Twitter, Reddit, and YouTube to convey the message that the Sutherland Springs shooter was actually a member of Antifa, a group that they actively tried to pump up as a serious threat in order to leverage the news media’s tendency toward equivalency. (While there are people who identify with Antifa, far-right communities have made Antifa larger than it is. Thousands of Twitter accounts purporting to be Antifa are run by people associated with far-right movements.) Knowing that Google would get dominated by news articles within hours, they also used sockpuppet accounts on Twitter to engage journalists, hoping to shape their coverage.
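To make that fallback concrete, here is a minimal sketch of the kind of heuristic a text search engine might apply when a query spikes over an empty index. This is purely my illustration; the function, the thresholds, and the source names are invented, not anything Google or Bing has documented.

```python
# Toy sketch of a breaking-news "data void" fallback. Purely illustrative:
# the thresholds and categories are invented, not a real search engine's logic.

def serve_results(query: str,
                  indexed_quality_docs: int,
                  queries_this_hour: int,
                  baseline_queries_per_hour: float) -> str:
    """Decide which kind of results to show for `query`."""
    spiking = queries_this_hour > 10 * max(baseline_queries_per_hour, 1.0)
    void = indexed_quality_docs < 5  # almost nothing trustworthy in the index

    if void and spiking:
        # Breaking-news void: reach for fresher content from real-time sources.
        return "real-time sources (e.g., Twitter, Reddit)"
    if void:
        # Quiet void: only auto-generated listings and census-style data exist.
        return "auto-generated content (listings, census data)"
    return "ranked results from the index"

# Sutherland Springs, as described above: no quality coverage yet, essentially
# no baseline search interest, and a sudden spike in queries.
print(serve_results("sutherland springs texas", 0, 40_000, 0.0))
# -> real-time sources (e.g., Twitter, Reddit)
```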

One journalist who was targeted by these groups quickly put together a very decent article about it for Newsweek. He went into detail about the far-right groups that were staging this process, and the article was pretty thoughtful. There’s a problem, though: its title. The story ran as “Antifa Responsible for Sutherland Springs Murders, Says Far-Right Media.” The problem with that title is that when Google returns search results, it cuts off headlines to fit them into its pre-defined boxes. For the first 72 hours, if you searched for anything on Google related to this shooting, in the newsreel at the top you saw, “Antifa Responsible for Sutherland Springs Murders.”
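Here is a toy sketch of how an ordinary layout constraint does that damage. The 50-character limit and the truncation rule are my assumptions for illustration, not Google’s documented behavior, but the effect is the one described above: the attribution clause doing the journalistic work gets silently dropped.

```python
# Invented illustration of fixed-width headline truncation. The character
# limit and the rule are assumptions, not Google's documented behavior.
HEADLINE = "Antifa Responsible for Sutherland Springs Murders, Says Far-Right Media"

def truncate_for_display(headline: str, max_chars: int = 50) -> str:
    """Cut a headline to fit a fixed-width result box."""
    if len(headline) <= max_chars:
        return headline
    cut = headline[:max_chars]
    if not headline[max_chars].isspace():
        cut = cut.rsplit(" ", 1)[0]  # avoid cutting mid-word
    return cut.rstrip(",") + "…"

print(truncate_for_display(HEADLINE))
# -> Antifa Responsible for Sutherland Springs Murders…
# The attribution ("Says Far-Right Media") is gone, and with it the meaning.
```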

Google wasn’t the only site of manipulation. On YouTube, people immediately put up videos to try to create new associations. This is a way of shaping a breaking news story, a way of shaping the conversation. And compared to Google, YouTube takes a lot longer to clean up. Part of the problem is that YouTube isn’t a fast-moving news ecosystem. Yes, eventually ABC, NBC, CBS, CNN — they all upload their content in some way. But they don’t optimize for YouTube. They don’t go through the details of the metadata. They don’t optimize for related videos. They don’t engineer the connections between the different videos and terms. And if the metadata isn’t there and people aren’t building the link structure within YouTube, then those videos won’t be highly ranked, even if the content is there. That’s data void #1.

2. Strategic Terms.

In the early 1990s, a political operative named Frank Luntz became famous for his pithy talking points. He was really good at creating phrases that could shape the public imaginary. Every week, he’d host a meeting with Republican staffers on the Hill to offer them a new phrase that they should aim to get into the news media, attempting to create a “drumbeat” of terms. This was very effective. You know many of Frank Luntz’s terms. Partial-birth abortion. Climate change. Death tax.

When congressional members started using those terms, they got the news media to do the work of amplifying the underlying message. It created coherence. And when you have a term or a phrase that creates coherence, you see how it shapes a cultural logic. Regardless of what you feel about Luntz’s particular terms, there is no question about his efficacy.

Today, with the internet, we also have a lot of people throwing spaghetti at the wall with terms, in countless different environments that you probably don’t pay a lot of attention to. They try out terms to see what will get traction and then move those terms up in the information ecosystem. But rather than just seeing if the term will resonate, they create tons of content using it. They produce YouTube videos. They seed the term throughout Wikipedia. They talk about the term on Twitter. They try to get the term everywhere, to shape the entire ecosystem. And then they target journalists, baiting them into using the phrase as well. This is how extremist groups take conspiratorial logics to the mainstream.

Search doesn’t return high-quality content when high-quality content doesn’t exist.

Consider, for example, the term “crisis actor.” This term emerged in extremist online forums after the Sandy Hook shooting in order to undermine those who were speaking on TV about the massacre. By calling them crisis actors, conspiratorial and far-right groups argued that Sandy Hook didn’t actually happen, and that those talking about it on TV were hired by the “deep state” to fake shootings in order to push gun control. This is conspiracy. There were a large number of YouTube videos associated with that term, pushing that logic, long before Parkland happened. But when Anderson Cooper asked David Hogg on national TV if he was a “crisis actor,” people started searching for the term. And that’s the point. Those who went to YouTube to figure out what this term was about were introduced to a tremendous array of conspiratorial content.

You’ve probably heard many of the other terms that far-right, conspiratorial, and extremist groups have staged into the mainstream: Incel. Caravan. Globalist. Deep State. White Genocide. Journalists end up doing the work of conspiracy theorists and far-right groups when they amplify the terms that have been staged for them. The goal of the manipulators isn’t just to use journalists to influence the public. More important is how journalists are used to influence and shape the information landscape set up by Google and social media. In other words, getting journalists to use these phrases pushes those phrases across search. And through search results, you can get people to go to sites that have a much greater volume of extremist and white supremacist content. Welcome to the rabbit hole.

3. Terms Left Behind.

The third kind of data void comes from terms that were once in common parlance but no longer are, particularly terms that are understood to be pejorative now. To be specific about this, I want to use a term that is offensive in some contexts: “Negro.” Like the N-word, this word is pejorative and hateful in many contexts. But unlike the N-word, “negro” has not been reclaimed as a common in-group term.

Author’s note: After my talk, I had a thoughtful conversation with a woman who argued that “negro” can be used pejoratively, but in her circles, is primarily a positive word reflecting her community of black and African-American peoples. She pointed to organizations and books that continue to use this term now. I take her point, although I would say that this does not reflect the content available online associated with this term.

Very little new content is ever produced using this term. When you search on Google, you get explicit information telling you that “negro” is outdated and offensive. You get links to historical materials, dictionaries (including Urban Dictionary, which offers more pejorative information), and articles about contextualizing the term and recognizing its racist uses.

But what happens on YouTube? If you search for the N-word, you get content covering and reflecting on the racist history of this term. But if you search for “Negro,” you mostly get archival material, including clips of political figures and historical references — content that is not put in context.

A screen capture of a YouTube video featuring President Reagan making a racist remark.

Let’s dig into this video of our former president. What happens when I click on the Reagan link? I get a decontextualized, uncritiqued clip of our former President making a racist remark. And beside it, in the comments, I get a slew of racist commentary. Rather than being challenged to think about the racist contexts and history of terms like this, you are invited to normalize racist messages. For a curious young person trying to make sense of terms that have been left behind, this exploitable data void amplifies rather than challenges a history of hate.

4. Problematic Queries.

The fourth kind of “data void” starts with a problematic query. Who searches for “Did the Holocaust happen?” Needless to say, the results for such a query on YouTube are not full of educational material. Not many people spend time making videos trying to undo such conspiratorial thinking. And so, the first result on YouTube is a Holocaust denier. Yes, there’s a Wikipedia link to tell you that this is Holocaust denial. But the denial video isn’t even what’s most interesting. When you click on it, there are thousands of comments below it. And in just a few clicks… you find comments linking to the video viewers “really” need to watch if they want to know the truth about Hitler. And that video is a conspiracy video, framed as a documentary, designed to tell you that there was no real Holocaust. Many Holocaust denial groups consider this particular video the “red pill” that will convince people of the “truth.” Their goal is to get people to that video. And links pushing people toward that video are littered throughout the comments on any video with a related topic.

These four types of data voids highlight a major vulnerability in search. Search doesn’t return high-quality content when high-quality content doesn’t exist. But text-based search engines like Google and Bing are a lot easier to clean up, thanks to the work of the people who produce news. We have a much bigger problem in a video ecosystem. YouTube has a lot less content to work with. News organizations are slower to put their content there. The metadata associated with that content is much sparser. And manipulators have not only learned about these voids, they’ve learned how to target journalists in their exploitation of them.

Recommendation Vulnerability: Alternative Influence

As I said before, YouTube is not just a search engine, it’s also a recommendation engine. When you watch a video on YouTube, you are recommended a slew of “related” videos to watch. More importantly, once you finish a video, a new one auto-plays out of those recommendations. This is fantastic if you’re listening to music. You end up getting exposed to all sorts of new and interesting songs. But what if you’re listening to political commentary? What if you’re listening to news-y content?

Recommendation engines and auto-play features are designed to encourage you to continue to engage. The algorithms behind these features are constantly “improved” to maximize that outcome. The designers of these systems don’t pay attention to the content itself, but instead pay attention to a set of signals, trying to determine what makes something relevant to one person or another. For example, if someone watches Video A and then watches Video B (or, better yet, actively likes or comments on both videos), the next person who watches Video A will likely be recommended Video B. Pretty simple, right? But of course, that relationship is something that can be exploited. After all, making those types of connections isn’t something that the news community thinks about. They put the videos up, they leave the videos there.
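To make those mechanics concrete, here is a minimal sketch of a co-view signal. It is a toy model I am supplying for illustration, not YouTube’s actual system; the video names and counts are invented.

```python
from collections import Counter, defaultdict

# Toy co-view recommender, for illustration only (not YouTube's actual system).
# It counts "watched A, then watched B" pairs within a session and recommends
# the videos most often watched after the current one.
co_views: defaultdict[str, Counter] = defaultdict(Counter)

def record_session(watched: list[str]) -> None:
    """Count every ordered pair of videos watched within one session."""
    for i, a in enumerate(watched):
        for b in watched[i + 1:]:
            co_views[a][b] += 1

def recommend(video: str, k: int = 3) -> list[str]:
    """Return the k videos most often watched after `video`."""
    return [b for b, _ in co_views[video].most_common(k)]

# Ordinary viewers build a benign link between two (invented) health videos.
for _ in range(100):
    record_session(["cdc_vaccine_explainer", "measles_outbreak_report"])

print(recommend("cdc_vaccine_explainer"))
# -> ['measles_outbreak_report']
```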

The communities that are trying to shape these connections understand how to produce connections.

Consider what happens if you’re searching for vaccination information on YouTube. The first content you will almost always get is factual, high-quality video content. Many health organizations, including the Centers for Disease Control and Prevention, have responsibly produced videos that are available on YouTube to detail how vaccination is safe and important and to address the conspiracy theories around anti-vaccination culture. Their videos are, let’s be honest, a little dry, but they are still well-produced and informative. However, the conspiratorial anti-vaxx community is hell-bent on getting their message of doubt through to parents who might be wavering, who might be beginning to search for information on vaccines. And many search engines have struggled with the sophisticated SEO practices that the anti-vaxx conspiracy groups use.

YouTube has a different problem. If you watch a health organization’s video and then you follow the recommendations you’re given or allow auto-play to continue, within two videos you will almost always be watching a conspiracy video. Why? Because the communities that are trying to shape these connections understand how to produce connections. They know that comments matter, so they comment on both videos. They know that links matter. So, they help shape links. They know views matter, so they get their community to watch both videos. They strategically and intentionally train the algorithms to build a link between CDC content and anti-vaxxing messages. This means that when people search YouTube for videos about vaccination, they are highly likely to be exposed to anti-vaxxing messages.
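Continuing the toy sketch from above (still just an illustration with invented numbers), a coordinated community only needs to add sessions of its own; the counter has no way to distinguish organic viewing from brigading.

```python
# Continuing the co-view sketch above: a coordinated group co-watches the CDC
# video alongside its own (invented) video, outweighing the organic signal.
for _ in range(500):
    record_session(["cdc_vaccine_explainer", "vaccine_doubt_documentary"])

print(recommend("cdc_vaccine_explainer"))
# -> ['vaccine_doubt_documentary', 'measles_outbreak_report']
```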

My colleague Becca Lewis described another method to influence news content in her report, Alternative Influence. While the CDC will never host an anti-vaxxer for a “debate” about vaccination, many people in the political context use this format. They think it’s appropriate to create a false equivalency, to have two people debating. This occurs in mainstream news, but it also occurs on YouTube. How it plays out on YouTube, though, really matters. Mainstream commentators host people who take extreme views on their channels to debate. In doing so, they send a signal to YouTube that these channels should be linked in the recommendation system. And so, the next people who are watching the mainstream channel will likely be recommended the one hosted by someone with fringe perspectives. This is a way of creating a pathway, a connection. It’s a way of manipulating the actual network graph of these systems. You go to YouTube for reasonably informed information and within a few recommendations, you are exposed to fringe, extremist, or conspiratorial content.

When people first hear about this dynamic, their initial response is — well, remove extreme content. And I understand that sentiment. They want YouTube to not allow anti-vaxx material. Or hateful or conspiratorial content. I have a lot of sympathy for that response, but there’s a problem if you think about the long run. YouTube has already gotten rid of a LOT of content, it has “down-weighted” a lot of content, and there’s still so much there that’s utterly terrible. Partly this is because creators who have an agenda, like those in the anti-vaxx communities, have learned to skirt the lines. After all, most anti-vaxx content doesn’t tell people not to vaccinate; it asks people to question whether or not vaccinations are safe. That’s the process: seeding doubt. And it’s much harder to talk about removal with content focused on doubt. YouTube is especially sensitive to this because they don’t want to be seen as politically biased or removing content that is trying to promote dialogue.

They don’t send hateful messages. They just get their audience to doubt common ideas.

Of course, some media manipulators know how to exploit companies’ anxiety around political censorship to push the edge and promote anti-scientific frames. How many of you watch PragerU videos? How many of you are familiar with PragerU?

PragerU is produced by Dennis Prager, a conservative talk show host. PragerU releases a video a week. Their goal is to undo the “leftist” messages produced by universities. Their videos are popular on YouTube, but they are especially popular on Facebook. Their CEO claims that one-third of all U.S. Facebook users have watched a PragerU video on Facebook. That’s significant. If you talk to people who are teaching in universities right now, they are constantly getting questions that come from students watching PragerU.

PragerU exploits data voids on YouTube to invite people to doubt widespread values. They don’t send hateful messages. They just get their audience to doubt common ideas. For example, if you’re a teenager who just encountered the term “social justice,” you might throw it into YouTube. If you do, you won’t get a conversation about the history of the term or the different movements involved in it or why a commitment to addressing historical inequities is important. Instead, you’ll get a PragerU video telling you that “social justice” is a propagandist term, that the term is not meant to help you, but is actually meant to harm you. (Of course, who is included and excluded in this “you” is significant given that their target audience is often conservative and religious. Whether they mean to or not, they help encourage young white men to see themselves as the “real” oppressed people.) PragerU’s strategy works because “social justice” is another data void: racial justice movements have left the term behind and are no longer producing new content related to “social justice.”

Once you watch one PragerU video, you’ll be given a non-stop stream of them. There are hundreds of them. Maybe you’ll get a video titled “Why No One Trusts the Mainstream Media.” Or the one on “What They Haven’t Told You About Climate Change.” Let’s check out the latter.

A screen capture showing how YouTube includes a link to Wikipedia on PragerU’s video about global warming.

YouTube recognizes it’s in contested territory so it provides a link to a Wikipedia entry. But why the entry on “global warming” instead of “climate change?” I don’t know. “Global warming” is another left-behind term. So, if you search for “global warming” (which is a reasonable thing to do on YouTube), you’re going to get hoax videos. Climate change denial videos. YouTube helped climate deniers build that pathway.

The problem with these paths is that most of them are not total disinformation. They are arguments for doubting a particular line of thought. They invite you to question, to doubt, to look for more information — to do your own research. And if YouTube removes such content, no matter how conspiratorial, it is met with charges of censorship. It’s a very familiar strategy. The same thing occurred with Russia Today (RT) and its “Question More” campaign in the UK.

Russia Today poster questioning climate change.

RT put posters around the UK: “Is climate change more science fiction than science fact?” The posters seemed to invite you to question more, to consume all sides of the news. When the UK responded by removing them, RT put up its next round of ads: “This is What Happens. Redacted! Censorship!” In this way, RT staged a challenge to speech. It seeded doubt, and when that was called out as propaganda and the posters had to come down, it decried that its rights were being taken away.

I’m not convinced that removing conspiratorial or doubtful content actually gets us anywhere in the long run; it simply creates different types of polarization. Do I believe some types of content need to go? Absolutely. But we need to think about where the power is in this dynamic. How do we understand the link between curiosity and extremism?

How do we understand the link between curiosity and extremism?

There are other ways to approach this. Personally, I’m fond of a technique that Spotify has implemented. I don’t know how many of you use Spotify, but you know how when you listen to something and you’re in a groove, and then suddenly you’re interrupted and you think, “not that.” There are certain things that are known to be massive disruptions; for example, take the Christmas music problem. No matter how much you love Mariah Carey, if you’re grooving out to “We Belong Together,” you don’t want to follow it with “All I Want for Christmas Is You.” If you’ve chosen to listen to top 40 hits, you don’t want to be slammed with a Christmas album. If you’re listening to Christmas music, it’s because you’re already in the Christmas music thing. And once you’re there, all you want is Christmas music. It’s a separate universe. Same thing with kids’ music. You may like They Might Be Giants, but you don’t want their kids’ album. So, Spotify has had to actively break structural patterns in its data that are sensible according to many criteria. And that requires understanding content, and really understanding context. On YouTube, it’s more complicated, but I’d argue that the company needs to actively examine and break certain recommendations by recognizing the strategic deployment of doubt and conspiratorial thinking on its platform. To realize that recommendations are fundamentally about amplification and to think responsibly about their role as amplifiers.
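As a sketch of what “actively breaking structural patterns” could look like (my own guess at the shape of such a rule, not Spotify’s or YouTube’s actual implementation), a recommender can veto co-listen candidates whose category clashes with the current listening context:

```python
# Illustrative only: a context rule that vetoes recommendation candidates from
# "separate universe" categories, no matter how strong their co-listen score.
SEPARATE_UNIVERSES = {"christmas", "kids"}

def filter_candidates(current_category: str,
                      ranked_candidates: list[tuple[str, str]]) -> list[str]:
    """Drop candidates whose category would jar with the current context.

    ranked_candidates: (track_id, category) pairs already ranked by co-listen score.
    """
    kept = []
    for track, category in ranked_candidates:
        if category in SEPARATE_UNIVERSES and category != current_category:
            continue  # don't drag the listener into a separate universe uninvited
        kept.append(track)
    return kept

ranked = [("all_i_want_for_christmas_is_you", "christmas"),
          ("we_belong_together", "pop")]
print(filter_candidates("pop", ranked))
# -> ['we_belong_together']
```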

Epistemological Vulnerability: Scriptural Inference

Of course, any choice to design algorithms to amplify content or shape what people might see raises a different problem. Who decides what should be amplified and what shouldn’t be? What’s conspiratorial and what’s legitimate difference of opinion? When should disagreements be bridged and when should people not be exposed to different perspectives? My commitments as a social scientist mean that I believe it’s unethical to show people climate denial content when they’re looking to learn about climate science. The same is true for anti-vaxxer and Holocaust denial content. Suggesting that these are “sides” to a debate is a form of false equivalency that I believe is dangerous and irresponsible.

Yet, the lines aren’t always clear. As a researcher, I also recognize that people hold different and often contradictory truths. I respect that one person’s religion is another’s myth. I recognize that political commitments are exceptionally nuanced. And when it comes to knowledge, I accept that people hold different epistemologies. In other words, how people know what they know varies. I may be wedded to rationality, evidence, and reason, but I respect that some people start with experience or faith.

Since Eli Pariser first coined the term, many people have lamented the presence of “filter bubbles” on social media. In the political context, they’re seen as dangerous, especially on sites like Facebook. What does it mean to not be exposed to “the other side”? Researchers struggle with this, because often people choose to self-segregate regardless of what algorithms recommend. They double down on a world that’s just like them. When is it appropriate for recommendation engines to expose people to new information? Is that helping inform people? Is that a form of proselytizing? Is that an act of power that needs to be contested? And what happens when the starting points are in two radically different places?

When communities focus on getting to the “right” search query, manipulators can help stage content that people who search for different terms never see.

In Searching for Alternative Facts, my colleague Francesca Tripodi describes sitting in a Bible study in Virginia. After spending an hour analyzing a Biblical passage, the pastor turned to talk about the then-new Tax Reform bill. Diving into a particular passage, the pastor encouraged his congregants to apply the same tools of scriptural sense-making that they applied to the Bible to this political text. Using this method of analysis, their interpretation diverged wildly from how political wonks read the same information.

Francesca goes on to describe how the practice of scriptural inference is also applied to Google searches. Instead of using the search engine to research a topic, many of the conservative Evangelicals she observed approached Google for clarity and affirmation of something that they had been told. Within this context, leaders in the community — from pastors to talk radio personalities — asked listeners to search for specific terms to confirm the truth. They’re not encouraged to read the articles or try different paths, but to construct the “right” search query so that Google can provide the “right” information. And if you use the “wrong” search query, you get the “wrong” information. When communities focus on getting to the “right” search query, manipulators can help stage content that people who search for different terms never see.

This is important because where people start from sends them down different information paths, both because of the architecture of search and because of how people approach search differently. Consider the difference between searching on YouTube for “Vatican pedophiles” versus “Vatican homosexuality.”

Screen capture of YouTube results for search terms “vatican pedophiles,” “vatican sexual abuse,” and “vatican homosexuality.”

On one hand, it’s very responsible of YouTube that it keeps the search results for these two queries separate and distinct. After all, it’s a very dangerous thing to collapse “homosexuality” and “pedophiles” into one category. But on the other hand, this means that whichever of those search terms you use to look for information about recent scandals will send you into a completely different world of content. These terms send you down entirely different paths. Your information landscape, your recommendation engines, everything is shaped based on how you begin this process.

You’re not just interacting with misinformation through content. You’re dealing with it through all the surrounding information.

Talking to Republican voters during a primary, Francesca was curious to know how they determined which candidates to vote for. Everyone around her told her that they didn’t trust “fake news,” by which they meant mainstream news sources — usually symbolized by CNN. So, she knew they were going to find a different path that was not about news, per se. She expected to hear about community and different ways of trying to make sense of things. But they were like, “No. We go to Google.” And she thought — “Oh, I’m an academic. I know how to Google. Why is this so problematic?” And then she realized something important. These voters didn’t search to go look at the content. They didn’t click on the links. They put all of the candidates in a primary race side by side to see what Google would offer up, because Google provided “all sides.” And they felt this would provide perspective. In other words, those headlines and little clips served up as search results mattered more than any actual content. The same ends up being true on YouTube. So much of the information ecosystem there is not about watching videos, but about seeing the clips and comments that are surfaced during searches. The things that surround the videos, the text that is meant to get you to click on it — this text becomes the end of the story, not the beginning of it.

This was not how Google (or YouTube) was designed. When we talk to Google engineers about the fact that people are doing this, they flip out. Google isn’t designed to be liberal or conservative, but it is designed to be data-driven. It has an epistemology that assumes that the knowledge is within the data. And it is designed with the assumption that people will click through to that data, that content, not simply read the headlines. This is a fallacy that we struggle with in general. Those in the field of journalism know there’s a constant struggle over what headlines do. Are headlines trying to get people in to read, or do they do the work on their own? How is your worldview shaped through a collection of headlines written by editors who each stretch the story in order to drive clicks if people don’t actually click? And it’s not just that people only read the headlines, it’s also that people follow the world through the notifications they receive on their phone, pushed from Twitter or Facebook. You’re not just interacting with misinformation through content. You’re dealing with it through all the surrounding information. And that’s what manipulators know how to exploit.

Rebuilding from Here

Misinformation is not a new problem. Nor is the exploitation of the media ecosystem. Whenever a new medium gains power, there are people who will exploit it for personal or organizational gain, whether for profit, ideology, or politics. Unfortunately, engineers design systems for how they hope those systems will be used. Just as journalists write for how they hope their information might be consumed. And policymakers create laws for the change they hope those laws will make. Most people like to build, like to create, like to inform — to imagine all of the awesome ways in which their creations might be used. They don’t like to think about how they might be exploited.

The culture wars we’re facing are rooted in epistemological fragmentation.

I would love for us to get better at building the structures and processes to proactively think about how things might go wrong. I respect the security community for spending time thinking about how a system can be exploited in order to build up resilience. I think that mindset is really critical for community and for news: thinking about how things will go wrong in order to make them better. But building the right structures to allow people to both imagine a future that is amazing and imagine all the terrible things that might go wrong is a really hard organizational challenge. As we look at our culture wars, we don’t like to find the hard ways through. We like to find the things we can blame, or the one thing we can fix. And right now, that conversation is about technology. Yet, the culture wars of today weren’t created by technology, even if they are amplified by it.

The culture wars we’re facing are rooted in epistemological fragmentation. In other words, the differences between how people build knowledge about the world are fracturing the very social fabric of our country. This is happening along countless dimensions, although it’s most visible through our political debates and our access to information. But let me explain this in the context of news. It’s easy to say that people who watch Fox and people who watch CNN see the world differently. What’s more concerning is that they are increasingly less likely to know each other. Equally troubling is that the structures that used to bridge those sources of news have disappeared.

Tech knows how to universalize, it knows how to scale, it does not know how to bring it down to the root of local communities.

Local news has played many important roles in our social fabric. Many people, especially in the journalistic community, focus on its substantive role, and that’s awesome. But I want to focus on the structural role that local news played. Because of local news, people knew people in the business of news; they understood how news was made and were therefore more likely to trust the business of news. If you do not know anyone in news, you do not trust the business of news, and the national news becomes a real foreign actor. Local news also provided a bridge to national news. It allowed national news to become contextualized. It wasn’t just about reporting on local information, it was about building that conceptual connection. Local news anchored epistemological difference. Even in cities with multiple newspapers with different ideologies, local news offered a way of seeing the world differently, of seeing the range of views in the community. That’s a service, but that’s not a business. And certainly not in the way understood by hedge funds and private equity.

During the takeover years of the 1980s, financiers ate up local news organizations to extract real estate and obliterated the social value of those institutions along the way. And it gets interesting when we see the switch to the tech industry. Now, the tech industry did affect the business of local news, but that business was set up to fail by the financiers long before the internet came about. Major tech, however, was designed as a response to the takeover culture of the 1980s. That’s one of the reasons you see these controls over stocks, these attempts to not let hedge funds and private equity gain control. There is a belief within tech that we can do this better and we can do this right, and that’s why we’re at a moment right now where we can’t think of accountability within these frames. The thing about tech is that it fundamentally focuses on abstraction. It focuses on generalization, on scale, and, above all, on growth. When you think in terms of abstraction, you lose all the local value, all of the structure there. When an abstracted entity tries to produce local information, it will fail because it’s not actually rooted within community. Tech knows how to universalize, it knows how to scale, it does not know how to bring it down to the root of local communities. And that’s one of the reasons we’re seeing this fragmentation of epistemology. The more everything becomes abstracted and generalized, the more fragmented society will get because epistemologies fragment, and fragmented epistemologies drive division.

Our society isn’t simply divided; it’s fragmented — and increasingly fragmented. Many of us have the ability to choose to surround ourselves with people like us across whatever dimensions matter to us. And we do. This isn’t simply about echo chambers. It’s really hard work to build a society with people who share different epistemologies. Nowhere is that clearer than when we look around the globe and think about the role religion plays as a form of epistemology. How many countries have gone into civil war over religious differences? Bridging fundamental difference in epistemology is a societal project, one that countries have struggled with over and over again. Including this country. Getting out of that is hard and it isn’t simply about information, it’s about really humanizing and building those connections in the local fabric, in order to build it up as a whole.

Resilience is about having networks of people you can turn to. That you can ask questions of, that you can make sense of the world with. That you can trust.

Part of why war brings people together is that the military is a social project. You learn to be vulnerable with other people, to find common ground across difference, to lay down your life for someone who is different from you for a common cause. This isn’t accidental — it’s how soldiers are made. And that’s the funny thing: we’ve actually had all of these projects in the United States at a national scale, from American universities to the military to other structures that brought people together across difference and allowed them to be vulnerable together so that they could build up community in a sophisticated way. Of course, all of those are under attack in different ways.

Many of you in the room are involved in community. Many of you know how to bring people together for a common cause. And that is utterly critical — now, more than ever, we need you. We need to understand that it’s not just about coordinating people to achieve something, but that when you bring people together you are purposely and intentionally building the networks of community, building that fundamental social fabric. Facebook may have thought of itself as modeling the graph of this country, but you who are in community need to actually build those networks. And to build them smartly. It’s not just about building with people who are of the same kind, but instead building with people who bridge across difference. Because when we look at things like YouTube, resilience is not about removing certain content. Resilience is about having networks of people you can turn to. That you can ask questions of, that you can make sense of the world with. That you can trust.

What’s at stake isn’t simply information, it’s how we’re building an understanding of the world around us.

That’s why I was really delighted with the last recommendation in the Knight Commission on Trust, Media and Democracy’s report. The Commission recommended a commitment to national service. This can be read as a project to give back to the country, but it can also be implemented to strategically network the people of the country. Imagine building cohorts, as you would in the military. Picture people who have fundamental differences but are actively and intentionally structured to work alongside one another. Imagine how you might purposefully build teams not out of shared interest or diversity, but to intentionally bridge gaps in the social graph, to intentionally connect people and communities.

In closing, I want to say: truth is fragmenting. It will continue to be so until we realize that what’s at stake isn’t simply information, it’s how we’re building an understanding of the world around us. While there are many band-aids we might be able to put on our current information ecosystem challenges, the real solution comes down to thinking about the communities we all live in. Thinking about information as a thing that bridges across those communities, and being smart and strategic about that. If you want to address the vulnerabilities that I mentioned earlier in this talk, the key is to get down into the roots and address those communities. Many of us who built social media imagined that we would do it just by creating the networks. We were wrong. This won’t be done by social media. Or by better information. It will only be done when communities focus on building programs and projects that bring people together.

Thank you.

Acknowledgments: This talk was made possible with support from the John S. and James L. Knight Foundation, Craig Newmark Philanthropies, and the News Integrity Initiative. It builds on the work of Data & Society’s Media Manipulation Initiative. Editorial support was provided by Patrick Davison.

