Misinfo Motives: developing a framework for the incentives behind spreading disinformation

--

Misinfo Motives is a project created during the 2020 Assembly Fellowship at the Berkman Klein Center at Harvard University. One of three tracks in the Assembly: Disinformation Program, the Assembly Fellowship convenes professionals from across disciplines and sectors to tackle the spread and consumption of disinformation. Each fellow participated as an individual, and not as a representative of their organization. Assembly Fellows conducted their work independently with light advisory guidance from program advisors and staff.

The Misinfo Motives project and this post were authored by John Hess, Michaela Lee, Isabelle Rice, and Brian Scully; the project team has backgrounds in human rights, government, and software engineering.

What can we do to stem the flow of disinformation? It’s a question we hear a lot these days, and one that seems overwhelming. Disinformation seems to come from everywhere, from shadowy actors who too often evade direct identification. Taken in its entirety, the question of how to shut down sources of disinformation is too large to tackle effectively. Our project “Misinfo Motives” proposes an initial framework for categorizing disinformation actors by motivation, breaking the looming monolith down into more manageable problems. We developed this framework, and drafted an accompanying white paper, during our time in the 2020 Assembly Fellowship at the Berkman Klein Center.

Mapping actors and their motivations

In our framework, the actors in a disinformation narrative include both the disinformers (intentional creators) and amplifiers (both knowing and unknowing). The disinformers include state actors, politicians, industries, conspiracy theorists, and grifters. The amplifiers include platforms, such as news media and social media, as well as ordinary citizens. All of these groups are further broken down by degree of complicity. Once the actors in a disinformation narrative have been identified, our framework offers avenues for examining their motivations.

[Figure: Misinfo Motives Framework]

We group motivations into four primary categories: financial, political power, social status, and ideology. These distinctions are clear in the abstract, but we recognize that in practice they are not cleanly divisible and may change over time. For example, financially motivated actors may come to believe the ideological narratives they push, and conversely, ideologues may come to place greater value on the financial benefits of their position. Despite this messiness, we propose that mapping actors to motivations is still deeply valuable, and in fact key to identifying the correct intervention for a specific disinformation actor. Existing frameworks, like the “ABC” framework (Actor, Behavior, Content), help address questions of who is involved and how they operate, but to date there is no detailed motivation framework for actors. We hope this draft white paper serves as a start toward better understanding what motivates bad actors and how understanding those motivations allows us to identify more effective interventions.
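To make the mapping concrete, here is a minimal, illustrative sketch of how the framework’s categories could be encoded in Python. This is our own toy encoding, not something from the white paper; the `Actor`, `Role`, and `Motivation` names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Motivation(Enum):
    """The four primary motivation categories in the framework."""
    FINANCIAL = auto()
    POLITICAL_POWER = auto()
    SOCIAL_STATUS = auto()
    IDEOLOGY = auto()


class Role(Enum):
    """Actors are either intentional creators or (knowing/unknowing) amplifiers."""
    DISINFORMER = auto()          # e.g., state actors, politicians, industries, grifters
    KNOWING_AMPLIFIER = auto()    # e.g., a platform or outlet aware of what it spreads
    UNKNOWING_AMPLIFIER = auto()  # e.g., ordinary citizens sharing in good faith


@dataclass
class Actor:
    """An actor in a disinformation narrative, mapped to one or more motivations.

    Motivations are a set because, as noted above, actors rarely have exactly
    one, and their mix can shift over time.
    """
    name: str
    role: Role
    motivations: set[Motivation] = field(default_factory=set)


# A hypothetical example: a grifter monetizing an ideological narrative.
grifter = Actor(
    name="supplement-selling conspiracy influencer",
    role=Role.DISINFORMER,
    motivations={Motivation.FINANCIAL, Motivation.IDEOLOGY},
)
```

Representing motivations as a set rather than a single value is the point of the sketch: it keeps the “messiness” described above explicit instead of forcing each actor into one box.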

Understanding motivation is key to intervening against disinformation campaigns

Over the last few years, tech companies, governments, and ad agencies have developed and tested measures to counter disinformation actors, including deplatforming, demonetizing, and labeling false information. This space is still relatively nascent, and new interventions continue to be developed and tested. For example, the advent of COVID-19 has spurred the development of new policies, tactics, and enforcement to deter misinformation and disinformation across social media platforms. Despite recent efforts, we still know very little about how to design effective interventions, particularly at the level of how individual interventions affect specific types of actors. This is where we believe applying an understanding of motivations to various interventions will be helpful.

We believe that interventions by tech companies can be grouped into three major categories: reduce prevalence and views, remove financial benefits, and educate/empower the user. Different companies enact these interventions in different ways, and we are only now learning about the effectiveness of each lever. The advertising industry, for example, focuses on ensuring companies do not buy ad space on malicious sites by tracking the quality of ad placement, increasing accountability for placement, and validating user traffic. Governments are dabbling in media literacy efforts, and some have gone so far as to enact laws against “fake news.”

In our draft white paper, we apply this framework of actors, motivations, and interventions to several case studies, including the anti-vax community and climate deniers. The case studies corroborated our assumption that actors typically have more than one motivation. For example, most actors spreading disinformation in the anti-vaccination movement have both ideological and financial motivations. This leads us to believe that layering interventions that target different motivations may increase the effectiveness of disinformation countermeasures.
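As a rough illustration of what layering could look like in practice, the sketch below extends the toy encoding above: it maps each intervention category to the motivations it plausibly targets, then selects every intervention touching at least one of an actor’s motivations. The mapping itself is a simplifying assumption on our part, not a finding of the white paper.

```python
# Continuing the toy encoding above: map each intervention category to the
# motivations it plausibly targets (an assumed mapping, for illustration only).
INTERVENTION_TARGETS = {
    "reduce prevalence and views": {Motivation.POLITICAL_POWER, Motivation.SOCIAL_STATUS},
    "remove financial benefits": {Motivation.FINANCIAL},
    "educate/empower the user": {Motivation.IDEOLOGY, Motivation.SOCIAL_STATUS},
}


def layered_interventions(actor: Actor) -> list[str]:
    """Select every intervention that targets at least one of the actor's motivations."""
    return [
        name
        for name, targets in INTERVENTION_TARGETS.items()
        if targets & actor.motivations  # non-empty set intersection
    ]


# For the financially and ideologically motivated grifter defined earlier,
# this suggests pairing demonetization with user education.
print(layered_interventions(grifter))
# ['remove financial benefits', 'educate/empower the user']
```

Even in this simplified form, the exercise shows why a single-motivation view falls short: an intervention that only demonetizes a financially and ideologically motivated actor leaves the ideological incentive untouched.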

Our hope is that, through the application of this framework, the problem of countering disinformation becomes more segmented, and the interventions against it more effective.

For more information on the Misinfo Motives project, visit the team’s website. Learn more about the Assembly: Disinformation program at www.bkmla.org.

--

Assembly at the Berkman Klein Center

Assembly @BKCHarvard brings together students, technology professionals, and experts drawn to explore disinformation in the digital public sphere.