Rebuilding legitimacy in a post-truth age
Co-written with David Rothschild
The current state of public and political discourse is in disarray. Politicians lie with impunity. Traditional news organizations amplify fact-free assertions, while outright fake news stories circulate on social media. Public trust in the media, science, and expert opinion has fallen, while segregation into like-minded communities has risen. Millions of citizens blame their economic circumstances on caricatures like “elites” rather than on specific economic forces and policies. Enormously complex subjects like mitigating climate change, or balancing economic growth and inequality, are reduced to slogans. Magical thinking (e.g., that millions of manufacturing jobs can be created by imposing trade restrictions; that everyone can have affordable, high-quality healthcare without individual mandates or subsidies; that vaccines cause autism) has proliferated.
The result has been called a post-truth age, in which evidence, scientific understanding, or even just logical consistency have become increasingly irrelevant to political argumentation. Indeed, a flagrant disregard for consistency and evidence may even be interpreted as a demonstration of power: the power to create one’s own reality.
The consequences are not to be minimized. As President Obama pointed out in his farewell speech on January 10, when stripped down to its bare details, democracy is fundamentally a system for arguing about ideas. But for an argument to have a constructive outcome, it must originate in some basis of shared reality. Call it what you want — science, facts, truth, reason, mutual respect — unless there is some source of information whose legitimacy is accepted by all parties to an argument, they have no hope of making progress.
The fundamental crisis of the post-truth age, therefore, is not the explosion of fake news, or even existential doubts about the truth itself (which are always present), but rather a crisis of legitimacy: Whom and what to trust. As we will argue, rebuilding legitimacy is a much harder problem than designing better algorithms, and is not something that any one entity, however powerful, can solve on its own. We will also argue, however, that the present crisis presents a rare opportunity for journalists, technologists, and social scientists to collaborate on finding solutions.
The problem of legitimacy
Creating legitimacy is hard in part because it is intrinsically self-referential. For example, why should I trust that claims made by scientists have more legitimacy than those made by politicians or salesmen? The answer, one would hope, is that scientists are required by their peers to follow the scientific method, which is the most reliable mechanism we have for uncovering facts about the world. But how do I know that the scientific method is reliable, and not just an elaborate hoax perpetrated by scientists to elevate themselves above others? The answer, again one would hope, is that we know science works because of all the useful facts it has established about the world. As a career scientist I happen to believe this argument (although, like any source of truth, science has its problems), but it is unavoidably self-referential: science is to be trusted because it establishes facts, and facts are to be trusted because they are established by science.
This self-referentiality is problematic, because if I suddenly decide to stop trusting science — both methods and results — there is no easy way for science itself to re-establish that trust. Nor is this problem unique to science. Most discussions about legitimacy end up veering back and forth between the trustworthiness of the outcome (in this case, information) and the trustworthiness of the process that generates the outcome. If the legitimacy of the media depends on their ability to deliver reliable information, but the media are also the main means of assessing that reliability, then once I stop trusting the media I no longer trust their assertions of reliability, which in turn undermines their ability to persuade me. If I think that fact-checking organizations are themselves biased, then their attempts to point out falsehoods may perversely reinforce my mistrust of the fact-checkers. In all these cases, once legitimacy is undermined it is hard to get it back.
One way to view the recent presidential campaign — and its aftermath — is as an all-out assault by the winning side on traditional sources of legitimacy. Scientists, experts, elites, government agencies, and the media have all been attacked not with fact-based rebuttals of specific claims, but rather with broad and sweeping innuendo. Climate science is a hoax. CNN is fake news. BuzzFeed is a failing piece of garbage. Meryl Streep is the most overrated actress in Hollywood. John Lewis is all talk and no action. Megyn Kelly had blood coming out of her whatever. There is no attempt in any of these characterizations to engage with the substance of the argument being made. Instead, they seek to manipulate public perception such that the source loses its standing to make arguments of any kind.
The role of technology
Battles over legitimacy are of course nothing new. Authoritarians have long sought to undermine the legitimacy of their opponents, or the press, to position themselves as the sole source of truth. US political culture likewise has a long history of ad-hominem attacks substituting for reasoned dialog, dating back to the founding fathers (e.g., the election of 1800 was especially nasty). And even the most respectable media outlets have long been vulnerable to accusations of groupthink, self-censorship (e.g., prior to the Iraq War), false-equivalency bias, and conflicts of interest. Whether the situation in 2017 is worse than it has ever been is debatable. Nevertheless, it is sufficiently bad to have generated extraordinary alarm.
It is also especially dispiriting because technology was supposed to help with the legitimacy problem, not make it worse. Long before Facebook and Twitter one of the most inspiring promises of the Internet was that it would dramatically reduce the cost of producing, disseminating, sorting, and aggregating information. The effect, it was imagined, would be to replace traditional, elitist institutions of knowledge creation and distribution — i.e., universities, think tanks, the media, etc. — with the “wisdom of crowds.” Let a thousand (or a million or a billion) flowers bloom, so the reasoning went, and by the magic of deliberative democracy, the best of them would flourish.
It was a nice idea, and consistent with a prominent strand of political theory, but it was always more of a vague aspiration than a well-specified design principle, and it was never tested in any systematic way. Instead, thousands of entrepreneurs and engineers built websites, added “social” features like comments, ratings, and badges, and tinkered with their designs (or copied others) to maximize traffic or engagement or revenue. The ideal of crowd wisdom may well have been there in the background but, with some possible exceptions like Wikipedia and Stack Overflow, it was never the first priority, or even necessarily a priority at all.
At any rate, it only half-worked. We got the dismantling of traditional authority structures, but the crowd wisdom that was supposed to improve upon them hasn’t materialized. Instead what arose was more like a popularity contest, where the “ideas” that rise to prominence are those that people find appealing or that are simply presented to them most frequently. Add to this the increasing geographic segregation of the country, along with the ability conferred by search and digital media to avoid both disconfirming information and inconvenient conversations, and we arrive at a world in which two people who disagree increasingly lack the motivation or means to resolve their disagreements respectfully.
What is to be done?
Not surprisingly, a lot of people are suddenly wondering what can be done, but much of the focus so far has been on the urgent-yet-narrow problem of fake news. For example, a number of ideas for flagging and filtering outright fabrications, checking claims for factual accuracy, placing facts in context, and linking to source data have been proposed or are in development. Although efforts like these are commendable and might well help some readers and viewers more accurately evaluate the information to which they are exposed, they should not be expected to have much effect on the legitimacy problem, for at least four reasons:
First, there are many ways to mislead readers without saying anything that is flat-out false. Cherry picking data, quoting sources selectively or out of context, omitting alternative explanations, conflating correlation with causation, attacking straw men, and insinuating a claim without explicitly making it (e.g., by posing it as a question), are all common strategies for manipulating a reader to reach a conclusion that isn’t clearly supported by available evidence, but would easily pass a straightforward fact check.
Second, many proposed solutions suffer from the same problem as existing “social web” designs: they assume that everyone will use them as their designers intended. Unfortunately, recent history suggests that as soon as any tool for checking facts or rating news sites is rolled out, malicious actors will look for ways to undermine it. For example, it took only weeks for fake news to go from being a rallying cry of the left to being used against it (see, e.g., #FakeNews). Similarly, climate science deniers have long tried to undermine science precisely by co-opting its language (e.g., by labeling themselves “skeptics”). Proposed solutions must therefore be designed with adversarial attacks in mind.
Third, more clearly differentiating between truth and falsehood will have little impact on public opinion unless citizens are exposed to disconfirming as well as confirming information, and more broadly encouraged to examine their own beliefs critically. In addition to providing better context for assertions made in mainstream or social media, any solution must also seek to increase the diversity of perspectives to which citizens are exposed.
Finally, when citizens do interact with others with whom they disagree, whether in comment threads of online news stories or on social media, exchanges quickly degenerate into shouting matches of unsubstantiated assertions and ad-hominem attacks. The result is that political arguments, far from yielding reflection and learning, instead harden existing attitudes and reinforce negative stereotypes. If arguing about ideas is central to democracy, then part of the solution must involve helping people to argue constructively.
Solving all these problems together presents an immense challenge. But if the problem is that the legitimacy of traditional institutions has been eroded beyond repair, then the solution may be to create new institutions that can restore the common ground of trusted information that we have lost.
A useful start may be the creation of a broad consortium of media and technology organizations that would publicly articulate and commit to clear standards of reporting, analysis, and argument. Ideally consortium members would come from across the political spectrum and would represent a wide range of opposing viewpoints. Nevertheless, they would be united in their commitment to making and evaluating arguments based on evidence and logic. Collectively consortium members would hold each other accountable for adherence to these standards and in return would support each other against outside attacks. They could also agree on the importance of information diversity, exposing their readers to alternative arguments, and presenting them with reasoned disagreements. Finally, they could collaborate on designing and deploying fact-checking and contextualization tools with the aspiration of subjecting all claims to truth to scrutiny, wherever they arise.
Industry collaboration on this scale would be unprecedented, and maybe impossible, but the growing sense that this is indeed a crisis might provide the motivation to take steps that would have seemed excessive only recently. And although even the broadest and most inclusive of consortia can still be attacked as illegitimate, a unified front based on clearly stated principles and transparent procedures at least renders such attacks more difficult.
Even so, journalists and technologists can’t resolve the crisis on their own. In order to solve a problem one must understand it, and there is much about the current state of the world that we simply don’t understand. It is easy to list Trump’s attacks on the legitimacy of the press, for example, but absent large-scale surveys of what is being read by whom, along with before-and-after polls on specific opinions, it is hard to quantify their impact. It is easy to reel off anecdotes about filter bubbles, but much harder to measure polarization and information diversity systematically across media and platforms. And it is easy to opine about the need to foster more deliberation across partisan lines, but no one really has any idea how to do it at scale.
Solving these problems will require significant advances in the social and behavioral sciences, a challenge with which social scientists are uniquely positioned to help. But to help in a meaningful way, social scientists must also deviate from their usual mode of production. Whereas academic research typically proceeds from theory to data to results, in this case the order would be flipped, with the problem to be solved coming first and the research agenda following. “Solution-oriented research” of this sort is rare in social science, and will likely be disruptive to business-as-usual, but once again the impending sense of crisis may be just what is needed to stimulate fresh thinking.
This country faces many vexing problems. We don’t have to agree on what all the problems are, and we certainly don’t have to agree on how best to solve them. But we do at least have to agree that there are problems that we share, and that it is in all our interests to try to solve them. If we can’t do that then we are in deep trouble.
Duncan Watts (@duncanjwatts) is a principal researcher at Microsoft Research, and author of Everything is Obvious: Once You Know the Answer (Crown Business, 2011). David Rothschild (@davmicrot) is an economist at Microsoft Research, and founder of PredictWise.