Combat election interference on social media by regulating content recommendation systems

Linda Q. Yu, Ph.D.
Published in SciTech Forefront · Jul 21, 2022

Executive summary

The misinformation campaign run by Russia’s Internet Research Agency during the 2016 U.S. presidential campaign illustrates the degree to which social media influences behavior that is critical to the health of democracy. Psychological research shows that once misinformation takes hold in people’s beliefs, it is hard to correct. Regulating the social media algorithms that recommend content and ads based on attributes inferred from users would help mitigate the effects of misinformation campaigns. Giving users transparency and choice over these ad and content recommendation systems, and promoting sources that users themselves rate as trustworthy, are promising ways to reduce the spread and influence of false information.

Key messages and recommendations

· Russian interference in the 2016 election spread misinformation widely and deepened polarization.

· Psychological research suggests that misinformation is hard to correct due to motivated reasoning.

· Regulating attribute-based content recommendation systems would be an effective way to combat misinformation without running into issues of censorship and free speech.

· Policies should require social media companies to become more transparent and to provide opt-outs for attribute-based targeting of advertising to users.

· Platforms can use best practices from psychological research to nudge users to pay more attention to the accuracy of headlines and to promote sources that users rate as credible.

Introduction

Russia’s Internet Research Agency (IRA) interfered in the 2016 U.S. presidential election by spreading misinformation via social media outlets. On Facebook, most of the ads placed by the IRA used attribute-based targeting of African Americans and liberals. On Twitter, Russian troll accounts targeted both liberals and conservatives, with conservative users tweeting or retweeting Russian troll accounts more than 5 million times. These ads and tweets contained false claims and were especially polarizing, involving topics such as immigration, school shootings, and police. Such tactics threaten a functioning democracy because they cause voters to operate from differing sets of facts and heighten the emotional stakes, and they are enabled by recommendation systems on social media platforms designed to match content to the attributes of desired users.

The effects of misinformation are lingering and persistent. Psychological research shows that biases make people not only more likely to choose to read and believe fake news that agrees with their viewpoints, but also less likely to accept corrections of those beliefs. This means that targeted misinformation campaigns like those run by Russia in 2016 are likely to have a lasting influence on people’s beliefs and behavior.

There is broad appetite for regulating social media across the ideological spectrum. Polls show that 56% of US adults support government regulation of social media, and 62% do not trust social media companies themselves to moderate false or harmful information. At the same time, there is deep concern about censorship: Texas recently passed a bill (HB20) preventing social media platforms from banning accounts based on political viewpoints. Therefore, a policy that seeks to regulate social media must be effective at discouraging the promotion of misinformation while remaining sensitive to concerns about limiting free speech.

Congress is considering action on regulating social media: the new Platform Accountability and Transparency Act (PATA), sponsored by a bipartisan group of senators (S. 4066), would require social media companies to disclose internal data to independent researchers, including data on user targeting in advertising. In addition, the Honest Ads Act (S. 1989 in the 2017–2018 Congress, S. 1356 in the 2019–2020 Congress, now folded in as a subtitle of the For the People Act voting rights bill, H.R. 1) promotes transparency in political advertising and prevents foreign actors from purchasing political ads on social media.

Figure 1. Malicious actors can spread misinformation by using social media tools to target specific groups. The proposed solutions would let users see how they are being targeted, allow them to opt out, and encourage them, through psychological interventions, to be more thoughtful about the content they consume.

Both PATA and the Honest Ads Act are good first steps to establish more transparency on social media platforms. However, the following additional provisions, summarized in Figure 1, would give more choice to users and preserve the freedom of speech, while limiting the ability of malicious actors to distribute harmful and false content to interfere in future elections.

Policy options and recommendations

· Add more requirements for transparency and informed choice for users on attribute-based advertising. While PATA would require that data used for content recommendation be released to researchers, users should also be given transparency and choice about which algorithms are applied to them. Targeted attributes are currently not well disclosed to users. For example, the “why am I seeing this ad?” feature on Facebook discloses only a single targeted attribute at a time, with preference for the more popular and less sensitive attributes (e.g., demographics), potentially allowing malicious actors to cloak their intent when they target more specific groups. The European Union’s proposed Digital Services Act, by contrast, would require greater transparency and choice for users about the attributes advertisers use, and it imposes this obligation only on large social media platforms to reduce the burden on smaller ones. This provision is also low-cost for the government and would be broadly popular with the public. (A sketch of what a fuller disclosure could look like follows this list.)

· Add requirements for accountability for misinformation and risk-mitigation measures. While the Honest Ads Act would prevent foreign actors from placing paid political ads, it does not prevent unpaid harmful content from circulating. The EU’s Digital Services Act includes provisions requiring large social media platforms to remove illegal content and to mitigate the risks of both legal and illegal harmful content, including mandatory risk assessments and independent audits. The definition of illegal content depends on the jurisdiction, but it usually includes child sexual abuse material and terrorist content. Beyond such content, polls suggest strong public support for removing false and misleading information, particularly posts that lead to violence, from social media platforms; however, these requirements would put more onus on the government to conduct oversight of platforms.

· Mitigate the spread of misinformation by using recommended practices from the psychological literature. Psychological research has shown that much of the sharing of misinformation comes down to simple inattention to the accuracy of the content rather than ideology. Interventions that make readers more critical and reflective, such as having them occasionally rate the accuracy of a non-political headline, have been shown to be very effective at reducing subsequent sharing of misinformation by the same users. Content recommendation systems could also use aggregate user accuracy ratings to down-rank fake news sites and promote credible sources; a sketch of this kind of trust-weighted ranking also follows this list. These crowdsourced ratings match expert ratings of reliability, with mainstream sources on the left and the right (e.g., CNN, Fox News) rated as more trusted by both liberals and conservatives than fake news or hyper-partisan sources. The advantage of this policy is that it is cheap (relative to having experts rate the accuracy of headlines), easy to implement, and effective.
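
To make the transparency recommendation concrete, here is a minimal sketch in Python of what a fuller ad-targeting disclosure could look like. All of the names (TargetingAttribute, AdDisclosure, the example advertiser and attributes) are hypothetical illustrations, not any platform’s real data model or API; the point is simply that every targeted attribute, and the user’s opt-out status for each, is surfaced at once rather than one attribute at a time.

```python
# Hypothetical data model for fuller ad-targeting transparency: the user sees
# every attribute the advertiser targeted, plus a per-attribute opt-out flag.
from dataclasses import dataclass, field

@dataclass
class TargetingAttribute:
    category: str          # e.g. "demographics", "interests", "custom audience"
    value: str             # e.g. "age 25-34", "interest: immigration policy"
    user_opted_out: bool = False

@dataclass
class AdDisclosure:
    advertiser: str
    attributes: list[TargetingAttribute] = field(default_factory=list)

    def summary(self) -> str:
        """List *all* targeting attributes, not just the most common one."""
        lines = [f"Ad paid for by {self.advertiser}; you were targeted because:"]
        for attr in self.attributes:
            status = "(opted out)" if attr.user_opted_out else ""
            lines.append(f"  - {attr.category}: {attr.value} {status}".rstrip())
        return "\n".join(lines)

disclosure = AdDisclosure(
    advertiser="Example PAC",
    attributes=[
        TargetingAttribute("demographics", "age 25-34"),
        TargetingAttribute("interests", "immigration policy"),
        TargetingAttribute("custom audience", "lookalike: political page followers"),
    ],
)
print(disclosure.summary())
```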
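
Similarly, the crowdsourced-rating idea in the last bullet could in principle be folded into feed ranking by weighting engagement with an aggregate source-trust score. The sketch below is illustrative only; the sources, ratings, and simple multiplicative weighting are made-up assumptions, not any platform’s actual ranking algorithm.

```python
# Illustrative sketch: scale each post's engagement score by the average
# crowdsourced trust rating of its source, so low-trust sources are
# down-ranked rather than removed.
from statistics import mean

# Hypothetical crowdsourced ratings on a 0-1 scale, aggregated across many users.
source_trust_ratings = {
    "mainstream-news.example":    [0.8, 0.9, 0.7, 0.85],
    "hyperpartisan-site.example": [0.3, 0.2, 0.4, 0.25],
}

def trust_score(source: str, default: float = 0.5) -> float:
    """Average of crowdsourced ratings; unknown sources get a neutral default."""
    ratings = source_trust_ratings.get(source)
    return mean(ratings) if ratings else default

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by engagement weighted by source trust."""
    return sorted(posts,
                  key=lambda p: p["engagement"] * trust_score(p["source"]),
                  reverse=True)

feed = [
    {"id": 1, "source": "hyperpartisan-site.example", "engagement": 900},
    {"id": 2, "source": "mainstream-news.example", "engagement": 600},
]
# The higher-trust source ranks first despite lower raw engagement: [2, 1]
print([p["id"] for p in rank_feed(feed)])
```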

Linda is a Ph.D. cognitive neuroscientist studying learning and decision-making.