Social media self-regulation has failed. Here’s what Congress can do about it.

Nina Jankowicz

Remarks delivered at US Helsinki Commission Briefing “Lies, Bots, and Social Media,” November 29, 2018.

From dismissing the influence of malign foreign actors on our electoral discourse and processes as a “pretty crazy idea” to inviting regulation, however begrudgingly, the social media platforms have come a long way since 2016.

Facebook, Twitter, and Google have made political advertising more transparent, creating searchable databases of political ads and tightening restrictions on who can purchase them. In order to reduce the amount of fake news spread through ads, Facebook has updated its policies “to block ads from Pages that repeatedly share stories marked as false by third-party fact-checking organizations.” Twitter’s policies no longer allow the distribution of hacked materials.

Facebook has attempted to increase authenticity and transparency around the governance of Pages, an influence vector Russia’s Internet Research Agency utilized in 2016. It claims that administrators of Pages with large audiences undergo extra verification to weed out fake accounts; Facebook has also made other adjustments to arm users with information about the pages they follow.

All of the platforms have adjusted their algorithms in an attempt to combat disinformation. Facebook did this by prioritizing content from “friends and family.” Google’s “Project Owl” changed the search engine’s algorithm to surface more “authoritative content,” and Twitter has reverted its news feed to a more chronological timeline with less algorithmic intervention. Facebook and Twitter have also invested more in content moderation to identify and remove content that violates the platforms’ policies, including those related to false information, fake accounts, and hate speech.

This is not an exhaustive list of the changes platforms have made, but rather an overview of the more well-known and purportedly messianic features meant to deliver us from all manner of Internet evil.

They are not enough.

Among the features I’ve just described, loopholes have been exploited, missteps have gone unanticipated, and pernicious disinformation has been allowed to flourish to the point where there is no question that social media self-regulation has been a failure. Just a day before the midterm election, over 100 Facebook and Instagram accounts likely controlled by the IRA were still active; Facebook only removed them after a tip from the FBI. This problem is more complicated than playing whack-a-troll to remove fake accounts and increasing transparency around political ad buys.

These measures are, of course, important first steps toward ensuring authentic, healthy online discourse. But even a cursory look through the performance metrics of the ads released by the House Democrats reveals that plenty of the 2016 IRA disinformation performed very well organically. Because the IRA had built trust and community over time with its audience of sometimes hundreds of thousands on each Page, many people saw and engaged with that content without the purchase of a single ad.

Today, much of this type of content spreads through Facebook groups, which the platform’s algorithm favors on the misguided assumption that they promote content between “friends and family,” and which are not subject to the same level of content moderation.

Yes, groups are a means of connecting people, but they are also breeding grounds for disinformation due to their privacy settings. “Closed” and “secret” groups are not searchable or transparent and the content shared in them is only visible to members, so Facebook is less likely to moderate content within them. What’s more, the platform still incentivizes and promotes group activity.

Groups were a key vector in my investigation for BuzzFeed News into fake profiles supporting an independent candidate for Senate in Massachusetts. A number of fake personae, controlled not by lines of code but by a human, and thus able to slip past some of Facebook’s detection tools for fake accounts, would astroturf groups with posts about their candidate, creating the guise of grassroots support for the campaign. Columbia University’s Jonathan Albright has also researched how groups support the spread of disinformation on Facebook and has noted that brands such as InfoWars often move their activity to closed groups after their public pages are banned.

Finally, the spotty and opaque enforcement of the platforms’ terms of service, including for brands like InfoWars that have a record of spreading hate speech and disinformation, undermines the entire discussion of content moderation. Legitimate voices are silenced for infractions such as repeated use of profanity, while groups with considerable public reach that violate much more serious clauses of the platforms’ terms are allowed to continue their diatribes until public outcry becomes too great.

Relatedly, transparency around takedowns has been lamentable. While Twitter has released the entire archive of state-backed content removed from its service, a truly laudable move, Facebook releases this content selectively, and Google rarely does at all. This contributes to the opacity of the problem and to a lack of both Congressional and public understanding of how best to solve it.

However, there is an opportunity for Congress to join together in a bipartisan manner to address this issue. Here are a few areas where it might start:

To begin with, the Honest Ads Act should be passed before the 2020 election. There is no reason that online political advertising, which in 2018 saw at least a 200% increase in spending compared with the 2014 midterms, should be subject to different rules than television, radio, and print ads. The sooner these rules are harmonized across platforms (including smaller online advertisers) and integrated with existing FCC and FEC regulations, the safer and more equitable our electoral processes will be.

But as I noted earlier, regulating advertising only covers a fraction of the malicious information shared on social media. Congress should pressure platforms to increase transparency surrounding the administration of groups and pages.

Further, Congress should explore the establishment of a specialized independent regulatory and oversight mechanism that could:

  • Harmonize definitions of concepts such as hate speech, abuse, and disinformation across the Internet;
  • Define and require that platforms obtain informed and meaningful consent to their terms of service, serving as an awareness-building mechanism about data privacy issues and the limits of speech on the platforms;
  • Serve as a neutral appellate body for users who feel their content has been unjustly removed; and
  • Conduct public audits of algorithms, account takedowns, and data stewardship.

Congress must also consider the role of education (in particular, media and digital literacy, critical thinking skills, and civics) in protecting online discourse and empowering citizens. Specifically, Congress might consider earmarks or grants for educational initiatives in these areas, which I detailed in my testimony before the Senate Judiciary Committee in June 2018, as well as the use of taxes or fines paid by social media companies to fund such initiatives or public awareness campaigns, as the U.K. is considering. These are generational investments, and ones that Congress must begin now. No regulatory or oversight solution can be complete without an informed and discerning electorate.

Finally, it is important to note the critical awareness-building and oversight role that Congress can play without the passage of any new legislation. Pressure from investigative journalists and Congress is what has led the social media platforms to begin to reform. That should continue and be strengthened in the new Congress.
