Can Technology Stop the Spread of Misinformation?

Mackenzi Turgeon
Rivrb
4 min read · Oct 9, 2020


COVID-19, which has claimed more than 200,000 lives in the U.S. alone, continues to spread as we enter the colder months. But the pandemic’s impact has been worsened by a different kind of virus, one that is digital rather than biological: fake news. The fight against COVID-19 should be a non-partisan issue, and yet the public has divided itself further along party lines, in part because of the flood of misleading articles about the severity of the virus and the effectiveness of mask wearing. Sites like Facebook and Twitter have been slow to take action, with some posts gaining millions of interactions before being removed.

Misinformation’s effect on the pandemic is only one example of the ways fake news has misled the public, and social media giants have exacerbated the problem. A Pew Research Center study found that Americans see made-up news and fake videos as a major problem. Nearly 70% of U.S. adults say made-up news and information greatly impacts Americans’ confidence in government institutions, and more than half say it affects our confidence in our fellow citizens. Interestingly, the majority of the American public believes it is up to journalists to stop the spread of fake news, but experts see technology playing an important role in tackling the issue.

Experts predict that tech companies will emerge with new mechanisms for identifying and labeling both trustworthy and misleading content. Laurel Felt, a lecturer at the University of Southern California, imagines, “There will be mechanisms for flagging suspicious content and providers, and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address.” Others predict “new visual cues” that will help news consumers distinguish between trusted reporting and misinformation.
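To make that prediction concrete, here is a minimal, purely illustrative sketch of what a “trust rating” might look like to a plugin developer. The `TrustRating` shape, the `getTrustRating` helper, and the example domain are assumptions made for this post, not a description of any existing product or API.

```typescript
// Hypothetical shape of a "trust rating" a plugin could attach to
// a piece of content, an outlet, or an IP address.
interface TrustRating {
  subject: string;      // URL, domain, or IP address being rated
  score: number;        // 0 (untrusted) to 1 (highly trusted)
  flags: string[];      // e.g. ["unverified-source", "disputed-claim"]
  lastReviewed: string; // ISO-8601 timestamp of the last review
}

// Toy in-memory lookup standing in for a real ratings service.
const ratings: Record<string, TrustRating> = {
  "example-news.com": {
    subject: "example-news.com",
    score: 0.35,
    flags: ["unverified-source"],
    lastReviewed: "2020-10-01T00:00:00Z",
  },
};

// A browser plugin might run a lookup like this before rendering a cue.
function getTrustRating(subject: string): TrustRating | undefined {
  return ratings[subject];
}

const rating = getTrustRating("example-news.com");
if (rating && rating.score < 0.5) {
  console.log(`Low trust (${rating.score}): ${rating.flags.join(", ")}`);
}
```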

Rivrb Inc. is one such tech startup embodying these predictions: a social media site designed not only to let people interact with content they wouldn’t usually encounter, but also to provide users with a verification system they can trust. Social media has transformed the way we consume news because the algorithms in place subject news to the same popularity dynamics as selfies and funny videos. Amber Case, a research fellow at Harvard University’s Berkman Klein Center for Internet & Society, remarked: “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. In order to reduce the spread of fake news, we must deincentivize it financially.”

Rivrb hopes to become a social media platform that rewards trustworthy content through new algorithms and verification systems. By contrast, sites like Facebook are set up to lock users in an echo chamber, showing them clickbait and posts that align with their existing views, regardless of their informational value, because that is how they make money. A six-year study showed that the more active a Facebook user is, the more that user focuses on a small number of news sources that reinforce their views. Rivrb, however, values truth over capital and diversity of viewpoints over virality.

Our web app helps curious thinkers put a magnifying glass up to the content they’re consuming, quite literally. Later stages of the product will feature our verification symbol, a magnifying glass with a check, which will appear next to articles that have been verified by our team. But experts say it will take more than technology to foster a culture that values truth in media in an era that financially rewards virality and political polarization; public funding of quality journalism and the integration of information literacy into education will have to be key players in the fight against misinformation. Rivrb’s goal is not only to be a leader in information verification and social media transparency, but also to create Rivrb chapters of college students committed to truth in information, who can contribute to our open-source code and promote information literacy in their communities.
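As a purely hypothetical illustration of the kind of check a verification badge could rest on, consider the sketch below. The `VerificationRecord` shape, its field names, and the example URL are invented for this post and are not drawn from Rivrb’s actual codebase.

```typescript
// Hypothetical record a review team might keep for a checked article.
interface VerificationRecord {
  articleUrl: string;
  verified: boolean;
  reviewedBy: string; // reviewer or team identifier
  reviewedAt: string; // ISO-8601 timestamp
}

// Decide whether to show the magnifying-glass-with-check badge next to a link.
function shouldShowBadge(
  articleUrl: string,
  records: VerificationRecord[],
): boolean {
  return records.some((r) => r.articleUrl === articleUrl && r.verified);
}

// Example: one verified article in the review log.
const reviewLog: VerificationRecord[] = [
  {
    articleUrl: "https://example.org/mask-effectiveness-study",
    verified: true,
    reviewedBy: "review-team",
    reviewedAt: "2020-10-05T12:00:00Z",
  },
];

console.log(
  shouldShowBadge("https://example.org/mask-effectiveness-study", reviewLog),
); // true
```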

So, to answer the question the title poses: yes, innovative technology can help slow the spread of misinformation. But tech startups will be truly innovative only when they prioritize fostering a culture of information literacy alongside developing novel algorithms. Instead of pulling users down a rabbit hole of viral content like our competitors, we aim to create a world where Rivrb’s magnifying-glass verification symbol will be obsolete.

All quotes are taken from “Theme 3” of “The Future of Truth and Misinformation Online,” a 2017 study conducted by the Pew Research Center.

Mackenzi Turgeon is the CMO of Rivrb, an early-stage tech startup that aims to be a leader in online information verification and community information literacy.
