Can Tech Rein in Conspiracy Theories, Algorithms?

MIT researchers study ways to curb misuse. Hint: AI may actually help.

MIT Initiative on the Digital Economy
Jun 14, 2024

By Peter Krass

The many accomplishments of digital technologies — notably in AI, social media and online platforms — are also raising concerns and sparking new research.

In particular, the spread of false conspiracy theories online and the credibility of social media platforms were among the research topics discussed at this year’s MIT Initiative on the Digital Economy’s (IDE) Annual Conference. Researchers presented ongoing work on several critical issues facing the digital economy. [See related blogs about AI talks at the conference.]

One especially pernicious problem is the way misinformation, including false conspiracy theories, is easily spread online. Magnified by this year’s U.S. presidential elections and other global events, it’s a topic under investigation by MIT Professor David Rand, who leads the IDE’s Misinformation and Fake News Group.

Rand, who studies both cognitive science and digital technology, told conference attendees that misinformation sharing is driven by three major factors:

  • A lack of attention: Online, it’s easy to be distracted.
  • Repetition: Hear a message often enough, and you’ll tend to believe it.
  • Partisan elites and political parties: Politics are complicated, so we often turn to our leaders for help. However, that’s not always a good idea.

Given the widespread concerns about misinformation-sharing online, Rand’s team hunts for deterrents. A promising idea is that technology itself could be used to curtail the spread of false conspiracy theories online.

Specifically, ChatGPT may offer help where humans fail.

Thomas Costello, a postdoc at MIT and a member of Rand’s IDE research group, described an experiment designed to blunt the growing impact of conspiracy misinformation. It’s an important issue: Costello cited a recent poll showing that fully half of all Americans believe in at least one conspiracy theory. And dissuading people from believing these false theories has proven to be extremely difficult. Until now.

In Costello’s experiment, researchers first asked participants to describe a conspiracy theory they believed to be true. The researchers then fed these descriptions to ChatGPT and prompted it to engage each participant in a discussion. During this discussion, the system tried to persuade the participants to change their views by showing how their conspiracy theory was unsupported by the facts. (A control group also chatted with the AI model, but about a banal topic.)

To measure the results, the subjects were also asked to rate their belief in their conspiracy theory on a scale of 0 to 100%, and to do so twice: once before the intervention with AI, and once after. The two ratings were then compared.
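For intuition only, here is a minimal sketch of that kind of before-and-after comparison, using made-up ratings rather than the study’s data:

```python
# Illustrative only: hypothetical belief ratings (0-100), not data from the study.
# Each pair is (belief before the AI conversation, belief after).
ratings = [
    (80, 60),
    (90, 70),
    (60, 50),
    (75, 65),
]

before = [b for b, _ in ratings]
after = [a for _, a in ratings]

# Compare average belief before and after the AI conversation.
avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
relative_drop = (avg_before - avg_after) / avg_before

print(f"Average belief before: {avg_before:.1f}")    # 76.2
print(f"Average belief after:  {avg_after:.1f}")     # 61.2
print(f"Relative decrease:     {relative_drop:.0%}") # ~20%
```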

Overall, Costello said, the AI interventions decreased the subjects’ beliefs in false conspiracy theories by about 20%.

“Evidence and arguments can change your beliefs about conspiracy theories,” he added. The good news is that “needs and motives don’t totally blind you once you’re down the rabbit hole.”

Social Media: How Credible?

Even when social media platforms are used for ethical purposes, questions arise about their credibility, especially when these platforms act as information gatekeepers. To avoid content overload, algorithms predict the kinds of content you’re likely to click on, comment on, and share. Then, based on those predictions, they serve you a ranked feed. Algorithmic recommendations are becoming ubiquitous. But are they reliable?

Alex Moehring, an MIT doctoral student and IDE researcher, described work that explores whether engagement-based rankings promote low-credibility content. His research also asks: If credibility is baked into a platform’s ranking algorithm, what’s the cost?

Moehring detailed an online conundrum: On social media sites, less-credible content is often the most engaging.

For instance, a headline saying, “Congress is voting on a measure next week,” might get no response. However, “The Pope endorses Mickey Mouse to be the next U.S. president”? Click!
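To make the trade-off concrete, here is a rough sketch, with invented posts and scores rather than any platform’s actual code, of an engagement-ranked feed versus one that blends in a credibility signal:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # model's estimate of clicks/comments/shares (0-1)
    credibility: float           # e.g., a source-quality score (0-1)

# Hypothetical posts with made-up scores.
feed = [
    Post("Congress is voting on a measure next week", 0.10, 0.95),
    Post("The Pope endorses Mickey Mouse for president", 0.90, 0.05),
    Post("Local library expands weekend hours", 0.30, 0.90),
]

def rank_by_engagement(posts):
    """Pure engagement maximization: show what users are most likely to click."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_with_credibility(posts, weight=0.5):
    """Blend engagement with credibility; `weight` trades one off against the other."""
    return sorted(
        posts,
        key=lambda p: (1 - weight) * p.predicted_engagement + weight * p.credibility,
        reverse=True,
    )

print([p.title for p in rank_by_engagement(feed)])        # clickbait rises to the top
print([p.title for p in rank_with_credibility(feed)])     # credible posts move up
```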

Surprisingly, Moehring said, “for the majority of users, things seem to work pretty well. Engagement maximization has a small but positive effect on the credibility of news content that users are engaging with.”

However, for a small but important subset of users, ranked feeds encourage them to increasingly engage with less-credible news content. Moehring based his findings on a study of Reddit, the social media platform with more than 100,000 active communities.

No Quantum Leaps

In other talks throughout the week-long conference, quantum computing and “geek” management were also key topics. Unlike conventional computing’s binary approach, quantum systems use quantum bits, better known as qubits, which can exist in a superposition of states rather than settling on a definite 0 or 1.
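For readers new to the idea, here is a minimal sketch, not drawn from any of the talks, of what superposition means for a single qubit:

```python
import math

# A single qubit's state is described by amplitudes over the basis states |0> and |1>.
alpha = 1 / math.sqrt(2)   # amplitude on |0>
beta = 1 / math.sqrt(2)    # amplitude on |1>

# Amplitudes must be normalized: |alpha|^2 + |beta|^2 = 1.
assert abs(alpha**2 + beta**2 - 1) < 1e-9

# Measurement collapses the superposition: the probability of reading 0 or 1
# is the squared magnitude of the corresponding amplitude.
print(f"P(measure 0) = {alpha**2:.2f}")  # 0.50
print(f"P(measure 1) = {beta**2:.2f}")   # 0.50
```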

IBM, an aggressive developer of the technology, hopes to have a system combining more than 4,500 qubits by 2025. But fully functional quantum computers won’t be available at scale anytime soon, according to Jonathan Ruane, an IDE research scientist.

Ruane said that complicated technical issues are slowing quantum’s progress.

For instance, when large numbers of qubits are put together, they become highly error-prone. “There’s this enormous chasm between where we are today with the number of error rates,” he said, “and how low we need to get that error rate before we can get into practical, commercial applications.”
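A back-of-the-envelope calculation, mine rather than Ruane’s, shows why the error rate matters so much: without error correction, even small per-operation error rates compound quickly as circuits grow deeper.

```python
# Illustrative arithmetic, not from the talk: if each operation succeeds independently
# with probability (1 - p), a circuit of n operations runs error-free with probability
# roughly (1 - p) ** n.

def success_probability(error_rate: float, num_operations: int) -> float:
    return (1 - error_rate) ** num_operations

for p in (1e-2, 1e-3, 1e-6):
    p_1k = success_probability(p, 1_000)
    p_1m = success_probability(p, 1_000_000)
    # Results near zero underflow to 0.0 in floating point.
    print(f"error rate {p:g}: 1,000 ops -> {p_1k:.2e}, 1,000,000 ops -> {p_1m:.2e}")
```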

Geeks Rule

Attendees also heard how “a bunch of geeks” have figured out a better way to run a business. That’s the conclusion of IDE Co-Director Andrew McAfee, author of a detailed study, The Geek Way.

As McAfee explained, “geeky” companies including Netflix and SpaceX are managed in a new way that has empowered them to topple long-standing industry leaders. For example, Netflix’s market value exceeds $270 billion, more than double that of either Disney or Warner Brothers.

How did the geeks do it? By adopting four new norms, McAfee said:

  • Ownership
  • Openness
  • Science
  • Speed

It’s another formula to navigate the complexities of the digital economy.

Peter Krass is a contributing writer and editor to the MIT IDE.
