Can Twitter Stamp Out Misinformation? Should It?
We interviewed four social media experts on how Twitter can fix its enormous fake news problem. You’ll be surprised at the consensus in their recommendations.
By Max Eddy
While misinformation has many homes, Twitter is notable for its uncanny ability to spread incorrect information far and fast, with disturbing results. Former President Donald Trump’s heavy use of the platform, especially during his quixotic effort to overturn the 2020 election results, is just one of many examples of how Twitter misinformation has been weaponized.
Despite this obvious issue with misinformation, it is unclear how much Twitter can or will do to curb its spread. A recent whistleblower report presented evidence that Twitter was manifestly failing to control misinformation, particularly outside the US. An ongoing attempt to purchase the company by self-styled free speech proponent and billionaire Elon Musk has called into serious question Twitter’s future commitment to controlling misinformation.
With the first US national election since 2020 on the horizon, I put the same questions to several misinformation experts: What should Twitter do about misinformation on its platform, and is the company willing to do it?
While not everyone agrees, there’s surprising consensus. It turns out we know a lot about the problem of misinformation but may not necessarily be equipped to deal with it. Moreover, getting rid of misinformation might actually be worse than learning to live with it.
A Note on Semantics
When I first started writing about misinformation a few years ago, I knew I was going to have trouble telling it apart from its more insidious cousin, disinformation. Misinformation is anything that is factually incorrect, while disinformation is factually incorrect information specifically spread with the intent to deceive. To keep them straight in my mind, I made a little mnemonic device: mis-information is a mis-take, and dis-information is dis-seat (deceit).
My goal here isn’t to ferret out intent, so for simplicity’s sake, I assume the best possible intentions and use “misinformation” throughout, except where interview subjects use a different term. However, my word choice doesn’t mean that Twitter and other social media platforms have less of a problem with disinformation.
Odanga Madung on Misinformation in the Kenyan General Election
With the US midterm election still weeks away and the 2020 election a fading memory, I reached out to investigative journalist and member of Mozilla’s Tech and Society Fellows Odanga Madung to get a view on how Twitter handled a recent election in Kenya, a subject Madung has written about in depth.
“I can speak to the Kenyan context because I’m a Kenyan, and we’ve just come from what I consider to be a shit-show of an election information environment, thanks to the platforms,” Madung told me over our WhatsApp call. “This country essentially plunged into what we’re calling a post-election information dystopia. Many of us could really not tell what was real and what wasn’t. I’m not exaggerating,” he said.
Although I couldn’t see his face, it was easy to tell that Madung was not exaggerating, as his energetic voice dropped down to an earnest, quiet level.
This isn’t the first time Madung has criticized social media platforms’ involvement in Kenyan politics. In September 2021, Madung and coauthor Brian Obilo released a report that alleged Twitter’s trending algorithm was systematically manipulated during a particularly tense moment in Kenyan politics. They found 3,700 accounts generating more than 23,000 tweets across 11 different misinformation campaigns, eight of which were successful in being highlighted as a trending topic on Twitter. A statement from Twitter included in the report says that the company “took action” on 100 accounts the authors investigated and found evidence of “at least one network of coordinated accounts.”
Madung and Obilo go on to claim a conflict between Twitter’s business model and its commitment to curbing misinformation. The authors write, “The overall message this sends is that it’s okay to sow hate on the platform, so long as its owners can place ads next to the content and make money from it.”
Madung’s criticism of social media is not limited to Twitter. In my conversation with him, he spoke broadly about social media platforms in general. In June of this year, his research into TikTok posts related to Kenya’s general election led to that platform removing several videos.
When asked for comment on its role in the 2022 Kenyan general election, Twitter referred me to its blog post outlining the company’s efforts. The company told me, “We’re committed to providing a service that fosters and facilitates free and open democratic debate, while protecting the health of the electoral conversation, as demonstrated through our civic integrity and engagement work.”
Twitter and Transparency
Madung’s view of Twitter’s activities during the Kenyan general election was presaged by an internal report disclosed by whistleblower Peiter Zatko. That report says Twitter was unable to provide even a “scaled-back” version of the efforts it took during the 2020 US presidential election for a then-upcoming Japanese election. The report also states that Twitter staff lacked the language skills to properly address global misinformation.
To better control misinformation, Madung told me that Twitter needs to be more transparent about its interventions: “Without transparency, it’s almost impossible for us to know what has worked in this context and what hasn’t worked and therefore what interventions we should double down on and what should be recalled.” As an example, Madung told me that if Twitter is labeling misinformation, the platform should release information on how those labels have reduced the viewership of those tweets.
When I reached out to Twitter, the company provided some information about the effectiveness of labeling tweets—but in the context of the US midterm elections. Twitter said it tested redesigned labels last year and is rolling them out globally. The company told me, “Our redesigned labels increased click-through rates by 17%, meaning more people were clicking labels to read debunking content. We also saw notable decreases in engagement with tweets labeled with the new design: -13% in replies, -10% in retweets, and -15% in likes.” The company notes that it also removed tweets and otherwise reduced the visibility of tweets that violate its Civic Integrity policy.
Since I initially spoke with Madung, Twitter appears to have increased transparency by expanding access to data through its Twitter Moderation Research Consortium, which provides additional data to researchers who are approved for access. Madung said that this effort was commendable, but he remains skeptical.
“We need to be careful that we’re not falling for something similar to what greenwashing did to the environmental justice movement,” said Madung, referring to the practice of marketing that adopts the language of environmental concern without being backed by substantive action. He said that Twitter should “expand the scope beyond information operations and cover broader aspects of information disorder, which cover gray areas of information environments.”
Start Early, Don’t Stop Too Soon
Madung also criticized Twitter’s timing of election misinformation interventions, saying that it and other social media platforms move too slowly to begin working against misinformation and then halt their efforts too quickly. “Especially in a hyperactive information environment like the US, you can’t just go in three months before the election and say, ‘Now is when we decide to do things!’” quipped Madung. “No. It’s almost too late.”
As with the 2020 US presidential election, Madung said that problems on Kenyan Twitter continued after voting as well. “There was just a lot of electoral misinformation in a post-election context that was going around that was not necessarily debunked or labeled, and that was essentially allowed to run rampant around the platform, which greatly affected the voters’ perception of the post-electoral environment.” Madung told me that this confusion and misinformation created enormous anxiety in the Kenyan population.
After the 2020 US election, Twitter was criticized for halting enforcement of its civic integrity policy in March of 2021, just two months after the deadly January 6th attack on Congress that attempted to stop the certification of the 2020 election results.
I asked Madung if he thought Twitter would be able to make the necessary changes to curb misinformation on its platform. “It’s better to just speak from a point of evidence,” he said. “All you have to do is observe the record, and so far I don’t think the record is that good.”
Once again, Madung grew quiet and serious as we concluded our conversation. “Especially in light of the midterms, one thing I’m learning is that a lot of this stuff that the platforms are promising you guys is complete crap,” warned Madung. “[US voters] really have to hold them to account.”
Madung conceded that he didn’t know the specifics of what Twitter would do to protect the US midterm elections, but he speculated the company would “do the usual script,” which includes labeling posts that are potentially misleading, partnering with fact-checkers, and other similar efforts.
In a follow-up call with Madung, he added another warning: single-mindedness among advocates calling on Twitter to address misinformation can be counterproductive. “Focusing on narrow goals like avoiding violence or removing a specific problematic individual, while important, cannot be our sole focus.”
When I reached Twitter for comment about its approach to misinformation on its platform and the 2022 US midterm elections, the company referred me to its blog post that lists some of its remediation plans, such as labeling misleading tweets, partnering with news and fact-checking organizations, creating information “hubs,” policing recommended tweets, and addressing misinformation preemptively, among other efforts.
The company said, “People use Twitter to find real-time, reliable information about elections, and we take that responsibility seriously. Our approach to the US midterms is multifaceted and is applied across multiple languages. We continue to build on this work.”
Michael Caulfield on the Rhythms of Misinformation
A key challenge of election misinformation is that it doesn’t necessarily need to change individual behavior to be successful, according to Michael Caulfield, a research scientist at the University of Washington’s Center for an Informed Public (CIP). “The worry is not that your uncle Frank thinks the election is stolen,” said Caulfield. “It doesn’t really matter much. But it does matter if your Secretary of State thinks that.”
Caulfield spoke with me over a video call while sitting in front of a professorial bookcase. Like a professor, he spoke with easy authority, but with a grim smile, given the focus of our conversation.
Some of the misinformation he and his colleagues at the CIP have seen was entirely predictable, since it frequently appears around elections. I asked if he could give me a sense of what to expect. “Let me get my calendar,” he said.
The Misinformation Calendar
We first spoke in early September, and Caulfield said that the next predictable piece of misinformation to expect would concern mail-in ballots, because that’s when some states and organizations start sending applications. Caulfield predicted that some examples of these applications would be posted online and called, mistakenly or not, actual ballots, with the implication that they were proof of fraud.
He also expects that there will be conspiracy theories based on stray marks or misprinted mail-in ballots, and mail theft will be portrayed as suspicious. “There’ll be mail theft, because there’s always mail theft,” he said.
Toward the middle of September, as poll workers are trained for the election, Caulfield expects that training materials will be used as fodder for misinformation. It will likely take the form of “out of context or surreptitiously recorded [material] that could appear partisan or biased,” he said, while noting that it is possible some training actually could be improper.
Another source for misinformation will likely be the testing of voting machines and tabulators—that is, the machines used to count paper ballots. “Sometimes when you test the machines, you find that things were set up wrong and there’ll be some error,” said Caulfield. “That will be portrayed as the machines are actually rigged.” The irony, Caulfield continued, is that testing the machines is actually ensuring that they work properly on election day.
Although he made it clear that he could continue predicting misinformation for as long as I was willing to listen, he had one more scenario to warn me about. In early October, “Election authorities will be getting rid of materials from the 2020 election.” Certain states, he explained, require that election materials be retained for 22 months after the previous election and then destroyed. He suggested that this would be portrayed as a “vast plan to cover up 2020,” or that people would “interpret them as destroying materials from the current election.”
I followed up with Caulfield to see how many of his predictions had come to pass. Regarding poll worker training being leaked, he said, “I think we were a bit early on this one. Or wrong.” Similarly, the destruction of election materials being re-contextualized incorrectly has yet to appear, but he warned that it was still possible.
He said, however, that misprints and stray marks being used as evidence of a conspiracy had popped up a few times and gained “medium traction,” as had voting machine testing being portrayed incorrectly as attempts to rig voting. Ballot applications and other non-voting materials being portrayed as actual ballots have been the most significant.
“This is one of the bigger events we’ve seen this cycle,” Caulfield told me in an email. The most significant current example is a controversy in Colorado, where registration cards with instructions on how to register were being portrayed as approved voter registrations or actual ballots. Caulfield pointed out that he’d written about the potential for just this sort of misinformation a few weeks earlier.
When asked about Twitter’s role in spreading misinformation, Caulfield had some praise for the company. He said that toward the end of the 2020 election, Twitter tagged some posts to make them less shareable. Instead of simply banning the person or the content, he said, Twitter was able to slow the spread of misinformation. The company noted in its documentation that it had employed methods that suppressed the shareability of misleading posts.
“The place where Twitter did comically bad was labels,” he said. “Particularly the generality of labels.” He described a scenario where a shocking misinformation video, showing ballots being shredded, for instance, might get a label that merely says that the 2020 election was safe and secure. A better label would directly address the content of the post and call it inaccurate.
“[Twitter users] have a question in their head,” said Caulfield. “The context needs to apply to the question the reader has.”
At least in policy, Twitter appears to be moving in that direction. In its undated policy on manipulated media—which covers re-edited, deepfaked, or recontextualized media that is not satirical—Twitter says that tweets with manipulated media may be labeled as misleading, with additional limitations placed on how much the media can be shared. The policy also allows Twitter to require the deletion of manipulated media, lock or suppress the accounts that posted it, or apply some combination of these measures.
When I contacted Twitter on this subject, the company cited its partnerships with news outlets such as the Associated Press and Reuters as an effort to provide context to readers. “We’re able to more frequently create Trends with contextual descriptions and links to reporting from trusted sources. We’re also able to proactively provide context on topics garnering widespread interest including those that could potentially generate misleading information,” the company told me.
Living With Misinformation
When I asked Caulfield whether he thought Twitter would do what’s necessary to quash misinformation on its platform, I was surprised how similar his thinking is to Madung’s. “I’m not sure you want to ‘quash’ misinformation,” he said. Doing so would invariably also suppress important, factual information: “A healthy information environment is going to have a level of misinformation.”
Instead, social media platforms need to find balance. “Our traditional responses to misinformation are not adequate to the new speed and scale,” he said. What’s better is a system that gives institutions time to respond and gives readers the context they need in a timely fashion. “Corrections come,” Caulfield continued, “but far after the initial explosion of misinformation. And when [corrections] do come, they trail at a level far below the number of people exposed.”
“It still ends up being a struggle, but it’s more evenly matched,” he said.
Caulfield believes that finding balance is possible, as is providing useful information fast enough, but that it might be difficult for Twitter to withstand the costs that will come with it. “I think it’s hard,” he said. “The fact is, part of their work is going to result in them making wrong decisions from time to time, and they’ll pay a really heavy price.”
Jillian York on Seeking Parallels With Medical Misinformation
After the political chaos brought on by misinformation in the 2016 and 2020 US elections, it’s easy to forget that another misinformation epidemic bloomed in 2020, too, about the COVID-19 pandemic. From the earliest days of the virus arriving in the US, the internet has thrummed with misinformation about its origin, the safety of vaccines (they are very safe), and bogus cures.
I reached Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, at an office in Berlin. She told me that a central challenge of addressing medical misinformation is that companies too often rely on automated removals, which York pointed out is the most affordable measure. However, automating the process means that other conversations are removed as well—including, for example, discussions about the long-term impacts of COVID-19. York said it’s been particularly problematic in communities where people might be discussing medical treatments and the possible downside of those treatments.
“We’ve seen that censorship often backfires,” York told me. “It’s okay to take things down, but you can’t take things down and disappear them,” she continued. “You have to convince people what’s actually right.” Mere censorship provides no alternatives to misinformation.
“An authoritarian approach to disinformation is not going to fix the problem,” York said. “Censorship is tempting, and maybe it’s the right answer—in limited quantities.”
Looking for Lessons in Medical Misinformation
York stressed that, especially with medical misinformation, the majority likely isn’t malicious. “There’s some really toxic stuff coming from people seeking to destroy or make a profit,” said York, “but there are a lot of people who are just scared.”
“We should be looking more toward the individuals who are often promoting misinformation as opposed to the individuals who are just questioning what they heard,” York told me.
To address the roots of medical misinformation, York said that companies cannot rely on overly simple moderation efforts built on inflexible, one-size-fits-all policies enforced by non-experts. “I have a fairly rare cancer, and I would not trust a moderator being paid less than $15 an hour from some third-party company,” she told me. She explained that while some medical alternatives are dangerous, some have merit, and the discussion is valuable for the people affected.
Room for questioning and even dissent is important, especially for communities that have historically been badly served by the medical establishment. “Remember that just within our lifetimes, LGBTQ people were considered mentally ill,” said York. An example of this fraught history can be seen in the Diagnostic and Statistical Manual of Mental Disorders classification of homosexuality. “It’s understandable that not everyone has trust in the medical industry.”
The Profit Motive
When it comes to whether Twitter will adequately address misinformation, York conceded that it was “trickier than some make it out to be,” but stressed that there are fundamental problems with what social media has become. She said that for her, the good old days of Twitter were when her feed was just the people she followed. Twitter, Instagram, and most other social media platforms rely on algorithmic feeds that display promoted content, posts out of chronological order, and posts from people you don’t follow.
“In being profit-driven, these companies are constantly trying to innovate for more clicks, more eyeballs,” said York. “It becomes a soulless system.”
On the subject of transparency around its medical misinformation interventions, Twitter pointed me to its approach to COVID-19. “Through enforcement of our misleading-information policies (civic integrity, COVID-19, crisis), our teams work to protect conversations [about] health on Twitter, while ensuring people have the context they need to make informed decisions about content they encounter,” said Twitter. “Our approach to misleading information on Twitter is iterative, and we continue to share updates on this work.”
In addition to the aforementioned labeled tweets, the company cited its Read Before You Tweet feature, which prompts you to read linked articles before posting them. The company said, “Early tests showed that people opened articles 40% more often after seeing the prompt and 33% more often before retweeting.”
Twitter also referred me to its expanded Birdwatch program and said early results are promising. “According to pilot surveys, a person who sees a Birdwatch note is, on average, 20%–40% less likely to agree with the substance of a potentially misleading tweet than someone who sees the tweet alone. They are also, on average, 15%–35% less likely to like or retweet a tweet than someone who sees the tweet alone.”
Jevin West on the Science of Misinformation
Jevin West, associate professor at the University of Washington and one of the cofounders of the Center for an Informed Public, broke down for me the theory and tools the CIP uses to track misinformation, particularly rumors.
Part of West’s work has been building the infrastructure to ingest millions of public conversations hourly from social media to observe the rise and fall of misinformation. “We’re tracking trends, we’re looking at topics that are rising, hashtags that are rising, individuals who are receiving excessive likes,” explained West, who was careful to point out that this work is done with the oversight of an institutional review board and does not include private conversations, which he and his team do not have access to anyway.
The result is the ability to track rumors and “see how they’re amplified and who’s amplifying them,” said West. Importantly, West and his colleagues can see how content moves between social media platforms, and how it trickles into traditional media.
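West didn’t describe the CIP’s tooling in implementation detail, but “hashtags that are rising” suggests the familiar shape of anomaly detection: compare a tag’s current volume against its trailing baseline. The sketch below is a minimal illustration of that idea; the function, thresholds, and data are my assumptions, not the CIP’s actual pipeline.

```python
from collections import Counter

def rising_hashtags(current_hour, trailing_avg, min_count=50, ratio=3.0):
    """Return hashtags whose hourly volume is several times their trailing average.

    current_hour: mapping of hashtag -> count in the most recent hour of public posts
    trailing_avg: mapping of hashtag -> mean hourly count over a longer window
    """
    rising = []
    for tag, count in current_hour.items():
        baseline = trailing_avg.get(tag, 0.0)
        if count >= min_count and count > ratio * max(baseline, 1.0):
            rising.append((tag, count, baseline))
    return sorted(rising, key=lambda item: item[1], reverse=True)

# Hypothetical hour of data: the surging tag is flagged, the steady ones are not.
now = Counter({"#election": 9_000, "#ballotfraud": 4_200, "#weather": 300})
avg = {"#election": 8_500, "#ballotfraud": 250, "#weather": 280}
print(rising_hashtags(now, avg))
```

A real pipeline at the scale West describes would run a check like this over millions of posts per hour, but the core signal is the same: volume far above a tag’s own history.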
The goal is to understand misinformation, who amplifies it, and how it spreads, and to intervene if possible. Because West and his team aren’t working at any social network, those interventions usually take the form of public reports, working with journalists, and sometimes giving warnings directly to social media companies.
The work, unsurprisingly, has not been without challenges. Misinformation, West explained, often contains information that is both true and false. Moreover, numerous stakeholders and actors are involved, some of whom are automated bots posing as humans. “It’s a challenging space to be in, especially because the content can weigh on you psychologically,” said West.
Building a Theory of Misinformation
I reached West by phone while he was in between meetings. He spoke quickly and energetically, especially when we started talking about the research paper he and several co-authors published in the journal Nature Human Behaviour. While others offer first-hand experience and historical knowledge, West and his team are building theories that explain and predict the spread of misinformation. In the complicated world of misinformation, it seemed as close to certainty as I was likely to find.
West explained that in the paper, he and his team used contagion models similar to what epidemiologists use when tracking the spread of diseases like COVID-19. This let his team understand how misinformation spreads, and test the effectiveness of interventions intended to slow that spread.
This is important because, as West told me, there’s a lot of experimentation happening in the realm of misinformation interventions. “That’s what we developed with this method,” said West. “We can now take a new intervention and plug it into the model and then see how effective it is.”
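As a rough sketch of what such a model looks like, here is a bare-bones SIR-type (susceptible-infected-recovered) simulation with an intervention “plugged in” as a cut to the transmission rate. Every parameter and name below is an illustrative assumption, not a value from the paper; the team’s actual model is far more sophisticated.

```python
# A minimal SIR-style contagion model for rumor spread, in the spirit of the
# epidemiological models West describes. All numbers here are illustrative.

def simulate(beta=0.3, gamma=0.1, intervention=0.0, days=60,
             population=1_000_000, seed=100):
    """Return (peak concurrent spreaders, total users ever exposed).

    beta: base transmission rate per contact per day
    gamma: rate at which spreaders lose interest or see corrections
    intervention: fractional cut to transmission (0 = none, 0.4 = 40% cut)
    """
    s, i, r = population - seed, float(seed), 0.0
    peak = i
    b = beta * (1 - intervention)
    for _ in range(days):
        new_infections = b * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak, population - s

# "Plug in" an intervention and compare it against the baseline:
for cut in (0.0, 0.4):
    peak, exposed = simulate(intervention=cut)
    print(f"intervention={cut:.0%}  peak={peak:,.0f}  total exposed={exposed:,.0f}")
```

Running the same simulation with and without an intervention is exactly the kind of comparison West describes: the model makes a prediction about how much spread should drop, which can then be checked against what actually happens on the platform.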
He walked me through some of these interventions, such as “circuit breakers,” which watch for surging interactions and intentionally depress the visibility of inauthentic accounts or misinformation. There’s also tagging or labeling, where a social media platform marks a post as potentially misleading. This last effort has had mixed results, West said, in agreement with the other experts I consulted. Labeling, he said, has sometimes caused misinformation to spread further rather than curtailing it.
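West didn’t spell out how a circuit breaker would be implemented, but the core idea reduces to a threshold check against a trailing baseline. The class below is a hypothetical sketch of that logic; the window, threshold, and naming are all assumptions rather than any platform’s real system.

```python
from collections import deque

class CircuitBreaker:
    """Flag posts whose engagement surges far beyond their recent baseline."""

    def __init__(self, window=12, surge_factor=5.0):
        self.window = window              # trailing hourly samples in the baseline
        self.surge_factor = surge_factor  # multiple of baseline that counts as a surge
        self.history = deque(maxlen=window)

    def record(self, interactions_this_hour):
        """Return True if distribution should be throttled pending review."""
        baseline = (sum(self.history) / len(self.history)) if self.history else 0.0
        self.history.append(interactions_this_hour)
        return baseline > 0 and interactions_this_hour > self.surge_factor * baseline

breaker = CircuitBreaker()
for hour, count in enumerate([120, 130, 110, 140, 125, 3_800]):
    if breaker.record(count):
        print(f"hour {hour}: surge detected; throttle reach and queue for review")
```

The design choice worth noting is that a circuit breaker slows distribution rather than removing content, buying time for human review without an outright takedown.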
West said a more promising intervention was prebunking, where people are shown examples of misinformation that they might see in the future before they encounter it in the wild. In its blog post about the 2022 US midterm election, Twitter called out prebunking as a technique it planned to employ. Similarly, the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency issued an announcement in early October that seemed to prebunk concerns about election disruption from cyberattacks.
Twitter specifically cited prebunking when I asked for comment on its approach to misinformation in the US midterm election. “We continue to deploy Twitter Moments (prebunks and debunks), which provide reliable election resources.” The company also referred me to its Elections Hub, which contains national and localized election news in both English and Spanish, as well as “voter education public service announcements (PSAs) in English and Spanish, created using information from nonpartisan government and voting advocacy organizations.”
When I spoke to West, he did caution that the model his team developed is “just a model,” and that real life might play out differently. Still, he believes he has good reason to think it’s valuable. “We can actually test it,” said West. “We can look at when certain interventions are employed and then see how much it drops and then see if the model predicts the same thing.”
“Sometimes you can’t,” he conceded. “But we look for ways—as far as we can—to test it.”
Openness Is Important
It is perhaps telling of the scientific mindset that West was as enthusiastic about testing and proving his model to a skeptical audience as he was about the model itself. But West is aware that his model is limited by the data that is available to researchers. He argues that if social media companies—beyond just Twitter—shared more data with researchers, it would provide a clearer view of the misinformation universe.
“I have to give Twitter more credit than some of the other social media platforms,” said West. “They do have a good API, they make data available for researchers,” he continued. “Some of these other social media platforms, like those owned by Meta, are not as open.”
More data, West argued, allows researchers to take the role of independent third parties, operating above the fray of competing social media platforms. “I’m partial toward that because I’m an academic,” said West. “But I really believe it.”
Since I first spoke with West, Twitter has announced that it has further expanded access to its Twitter Moderation Research Consortium. In a follow-up email, West told me he thought this was a step in the right direction but didn’t fully solve the problem.
“The platform manipulation campaigns and information operation examples are mostly identified by Twitter,” he said. “It is a good thing because Twitter does some of the hard work and is in the most convenient position to do this work. It is a bad thing because it loses out on opportunities for other groups with other methods to identify campaigns not found by Twitter.”
I asked Twitter if the company was aware of West’s work or if it was using his team’s models, but I did not receive an answer. When asked if the company could tell me how many researchers it had partnered with as part of the Twitter Moderation Research Consortium, and to characterize the data it shared, Twitter told me: “We’re focused on developing a global group of Consortium members, and to date, have accepted applications from researchers around the world. So far this year, we’ve shared 15 data sets, including platform manipulation campaigns originating in the Americas, Asia Pacific, Europe, the Middle East and North Africa, and Sub-Saharan Africa.”
Moderation in Moderation
The work of West and his co-authors has already yielded some interesting results. He told me that using the models of misinformation spread, his team concluded that moderate application of several different interventions is more effective than taking any one intervention to an extreme.
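One intuition for why layering moderate interventions can win, under the simplifying assumption that each acts independently on transmission, is that their effects multiply. The numbers below are purely illustrative, not results from the paper.

```python
# If each intervention independently cuts transmission by some fraction,
# three moderate interventions can leave less spread than one extreme one:
# (1 - 0.3)^3 = 0.343 versus (1 - 0.6) = 0.40.

def remaining_transmission(cuts):
    share = 1.0
    for cut in cuts:
        share *= 1 - cut
    return share

moderate = [0.30, 0.30, 0.30]  # e.g., labels + circuit breakers + prebunking
extreme = [0.60]               # e.g., aggressive deplatforming alone
print(f"moderate mix: {remaining_transmission(moderate):.1%} of spread remains")
print(f"one extreme:  {remaining_transmission(extreme):.1%} of spread remains")
```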
“Extreme interventions would be to deplatform people, or to deplatform a whole bunch of people,” explained West. Follow-up work would be to look at how effective deplatforming is at stopping the spread of misinformation. “I can say that the results so far show that deplatforming may not be as effective at reducing the spread of misinformation as we originally thought,” West told me. “That, of course, may change as we work through the full analysis.”
In all my interviews, I asked people whether they thought Twitter would be willing to do what it takes to curb misinformation on its platform. For West, I asked if he thought the social media company would be willing to follow the recommendations of his paper.
“Twitter has a lot bigger problems in terms of finance and dealing with Elon Musk,” joked West, referring to the mercurial billionaire’s on-again, off-again purchase attempt. “I don’t see them making those changes anytime soon, because those could have dramatic changes in how users use their platform, and I don’t think they want to take that risk. And that’s understandable!”
On a higher level, West said that he hoped Twitter would engage with researchers and actually look at the published research. “We’re spending a lot of time thinking about the methods and the ways you would test this, and they’ve got the data.”
West told me he hoped that Twitter could adopt his team’s model so that it could do its own evaluations of how interventions might work. This would, he said, allow them to experiment within “theoretical worlds” instead of our own.
“Because right now, most of social media—not just Twitter, but most social media—they’re just kind of experimenting,” said West, growing suddenly emphatic over the phone. “But they’re experimenting with democracy.”
For him, the stakes are that high. “It might sound a little melodramatic but, really, the role that social media plays in our democracy and the health of our democracy depends on how we address this challenge.”
“Most social media has a lot of benefits but it has a lot of negative effects. It’s had a lot of negative effects on society and we, as a society, need to address them,” said West, who suggested a combination of corporate and government policies. “That’s going to be a long-term discussion.”
Cause for Hope, Cause for Concern
These four interviews—and the responses I got from the company—give me a lot of cause to hope that Twitter, and other social media outlets, could get a handle on misinformation. Misinformation is complex and requires a complex response, but it is far from mysterious. We’ve seen it in numerous countries and contexts and have begun to understand what works and what doesn’t. We can anticipate misinformation, map its movements, and predict how it will progress. Stamping it out completely isn’t feasible, or maybe even desirable, it turns out. We’re starting to understand better ways to react when bad information begins to spread. I am surprised to see that Twitter’s policies do align with what the experts recommend, at least in some ways.
There is also much cause for concern. Twitter and social media companies don’t have a great track record when it comes to addressing misinformation, and most of the people I spoke to doubted that companies would do the right thing: not because controlling misinformation is impossible, but because doing so could hurt companies financially.
Again, a wild card in the conversation is Elon Musk. Musk’s stated preference for few limitations on speech of any kind would seem to lack the nuance experts say is needed to effectively counter misinformation. So would his reported plan to cut Twitter’s workforce. What impact Musk will have on Twitter’s policies and ability to moderate misinformation is an open, unsettling question.
Misinformation is frightening because it can replace the truth, warping reality in ways that are not easily undone once a misinformed narrative takes hold. Twitter’s approach to controlling misinformation will be tested in the upcoming US midterm elections. Even if Twitter succeeds there, it will be tested again in countless moments before and after as well. If it ever fails, we may never even know for sure, because the truth will be so obscured in the chaos.
Originally published at https://www.pcmag.com.