Do you favor tech regulation? Why?

Adriana Lamirande · Published in The Startup · May 7, 2020 · 8 min read

Originally written for Harvard Kennedy School Digital Platforms, Journalism & Information course, April 2020.

In amassing vast user bases and becoming the hub for modern communication, social media platforms have transformed into online town halls and de facto editors. At its inception, Web 2.0 peddled a utopian vision of a free flow of ideas and expression. But over time, the same companies proclaiming their service to the greater good began leveraging crowdsourced content and community building to collect private data and line advertisers’ pockets. Networks once centered around affinity devolved into oppositional mobs through a collapsing of identity and opinion that inevitably feeds the rise of hate speech and online disinformation. Prioritizing and rewarding controversy and conflict reflects an ad-based monetization model, and we’ve collectively removed our rose-colored glasses to glimpse our role as users: cogs in its machinery.

“Any system that allows for freedom of choice will create some balkanization of opinion.” As Sunstein deftly points out, the trouble is that choice is now imposed upon us, as algorithms make selections based on what will keep users on sites longer, strategically populating news feeds and profiting off them. Our online presence is defined by private entities predicting and controlling how we communicate, whose posts and opinions enter our field of vision, and the likelihood that we’ll amplify the biases they feed back to us, emboldening us to click through for more and entrenching us in cognitive in-groups. Content amplification is merely a by-product of our online activity, and one it is unclear we consented to participate in.

The ease with which users can find like-minded individuals on the internet helps facilitate and strengthen fringe communities that share a common ideology. An unfortunate consequence of the technology’s nature is that many disaffected users (sometimes anonymously, sometimes not) feel comfortable spewing radical beliefs and hate speech, and even inciting violence IRL. Those inclined toward racism, misogyny, or homophobia have found niches that reinforce such views and goad them toward dangerous behavior, with the added opportunity to publicize their acts to a wide audience.

While Facebook, Twitter, and YouTube have pledged renewed policy efforts to improve parameters around user speech, sovereignty, and safety, even recently retooling Community Guidelines and putting new protocols in place to specifically address hate speech, their content moderation efforts have had varying degrees of success. We also know that such moves have hurt the companies’ bottom lines and growth numbers, which raises the question of what incentives (if any) are driving them to stamp out defamatory posts, fabricated content, and problematic bot accounts in order to improve newsfeed health and enable open public discourse.

The galvanizing effect of social media empowers extremists and conspiracy theorists to reach audiences far broader than their core viewership or readership. We’ve read countless news reports of mass shooters posting manifestos and garnering support from fellow incels on Facebook and YouTube before committing deadly rampages. A specialist quoted in “#republic” extends the trend to recruitment efforts, noting that “new patterns of social connection that are unique to online culture have played a role in the spread of modern networked terrorism.”

Where controversy invites clicks, the platforms cash in. Users’ experiences online are mediated by algorithms designed to maximize engagement and inform precision ad targeting, which often inadvertently promote extreme content. Capturing and reselling attention is at the core of the so-called “economic logic of digital markets,” which the Shorenstein Center’s Dipayan Ghosh and Ben Scott identify as effectively fueling societal divisions by compounding pre-existing biases, affirming false beliefs, and fragmenting media audiences. YouTube’s autoplay feature is especially pernicious. Critics note that rather than removing offensive videos, like those fueling homophobic harassment of journalist Carlos Maza, the company instead cut the offending user off from sharing in its ad revenue.

The AI and machine learning driving these algorithms aren’t sophisticated enough to tackle the dilemmas inherent in identifying and addressing hate speech and disinformation online, and our en masse content consumption is undoubtedly shaping related public interest and civic issues. As such, it is necessary and urgent to combat the ad hoc “platform law” devised by enigmatic self-regulators, where consistency, accountability, and remedy are non-existent, and the only way forward is through government-sanctioned regulation. Presented with a minefield, Zuckerberg himself has called on federal regulators to help set baseline standards around free speech and content integrity to inform smarter practices at scale, a few of which I will elaborate on below.

Codify Santa Clara Principles Into Content Moderation Hypertransparency Law

Internet platforms are businesses built on asymmetric information and secrecy. In order for regulators to ensure these companies are best serving the public, they need a view into how the platforms’ technology and internal decision-making work.

Just last week, the EFF put out a press release calling for feedback and suggestions from organizations and individuals around the world to update the landmark 2018 Santa Clara Principles on Transparency and Accountability, which establish a set of practices social media platforms should undertake to provide transparency around content moderation, encouraging the disclosure of:

  • The number of posts removed and accounts permanently or temporarily suspended due to violations of content guidelines.
  • Notice and a clear explanation when a user’s account is taken down or suspended.
  • Opportunities to appeal content removal and suspension on a case-by-case basis.
  • As an extension to these conditions: appeals data and accompanying analysis outlining the journey of any post or content creator that was notified of a guideline violation, flagged or reported by another user, or whose content was suspended or removed from the platform.

I would push regulators to extend such calls for public reports beyond content moderation practices to include curation practices. The government should ratify a law requiring social media companies to publish quarterly briefs outlining content curation procedure and execution. These briefs would illuminate the instructions behind the algorithms deployed in instances where hate speech was flagged or reported by a user, where creators peddling false information about a public interest issue or event (elections or pandemic vaccine advances, for example) were notified their accounts would be temporarily or permanently suspended, or where bot accounts were detected.
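To make the idea concrete, here is a minimal sketch of what one record in such a quarterly curation brief might look like. All field names, categories, and figures are hypothetical, not drawn from any platform’s actual schema or reporting.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

# A minimal sketch of one record in a hypothetical quarterly curation brief.
# Every field name and figure here is illustrative only.
@dataclass
class CurationDisclosure:
    quarter_start: date
    quarter_end: date
    trigger: str                # e.g. "hate_speech_flag", "election_misinfo", "bot_detection"
    ranking_signals: List[str]  # plain-language description of the signals the algorithm weighed
    actions_taken: Dict[str, int]                          # e.g. {"posts_demoted": 1200}
    appeal_outcomes: Dict[str, int] = field(default_factory=dict)  # e.g. {"reversed": 5}

report = CurationDisclosure(
    quarter_start=date(2020, 1, 1),
    quarter_end=date(2020, 3, 31),
    trigger="election_misinfo",
    ranking_signals=["keyword match", "repeat-offender history", "volume of user reports"],
    actions_taken={"posts_demoted": 1200, "accounts_suspended": 45},
    appeal_outcomes={"upheld": 30, "reversed": 5},
)
print(report.trigger, report.actions_taken)
```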

Additionally, input from human moderators should be automatically recorded during filtering and threat mitigation operations. What moderators are expected to “catch” that the algorithm cannot should also be publicly disclosed, so that the public and federal bodies can better understand the automated landscape they traverse and weigh in on more efficient methods. A separate recommendation could be made to companies to double down on hiring moderators as full-time employees instead of contractors, in a bid to accommodate more frequent shift changes that minimize fatigue, trauma, and PTSD, as well as to ease the burden of the sheer volume of content to monitor.
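As a rough illustration of the kind of automatic record-keeping this would require, the sketch below logs a structured audit entry each time a human moderator confirms or overrides the automated filter. The function and field names are assumptions for illustration, not any company’s internal tooling.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("moderation_audit")

def record_moderator_decision(post_id: str, moderator_id: str,
                              automated_verdict: str, human_verdict: str,
                              rationale: str) -> None:
    """Append a structured audit entry whenever a human confirms or overrides
    the automated filter, so later reports can show what people catch that
    the algorithm cannot."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "post_id": post_id,
        "moderator_id": moderator_id,
        "automated_verdict": automated_verdict,  # e.g. "no_violation"
        "human_verdict": human_verdict,          # e.g. "hate_speech"
        "rationale": rationale,
    }
    log.info(json.dumps(entry))

record_moderator_decision("post-123", "mod-7", "no_violation", "hate_speech",
                          "coded slur missed by keyword filter")
```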

This could create a pathway to unveil the black-box operations of algorithms and the companies building them, instilling trust amongst civil society members and political and community leaders. But enforcement remains a hurdle. A year after the recommendations were initially proposed, New America found that YouTube’s report failed to provide adequate transparency around the role automated tools play in content takedowns, that Facebook’s report lacked basic information such as how many pieces of content were removed for violating Community Standards, and that Twitter’s report provided information on the number of accounts that were flagged and subsequently acted upon, but not on the amount of content that was removed.

A remedy would be to impose hefty sanctions, EU-style, on offending companies if they do not release comprehensive quarterly reports in a timely manner.

Legislate Public Interest Features in the Vein of a Fairness Doctrine for the Internet

A return to public interest technology would mean a renewed focus on making tools available to users that help verify content trustworthiness, surface multiple viewpoints on a topic, and direct them to obvious, accessible channels for flagging purported disinformation.

Regulators could require that platforms tweak recommendation systems to present alternative information alongside “fake news” or extreme speech, so users are exposed to verified links to Wikipedia and other vetted sources that debunk and discredit misguided claims, harmful speech inciting violence, or posts that definitively lean to the extreme left or right. One of Sunstein’s proposals is debuting “opposing viewpoint” and “serendipity” buttons on posts to involve users and experts in productive discourse.
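A toy sketch of the first idea, assuming a classifier has already flagged a post: surface vetted context links alongside it. The topic table and substring matching are placeholders; real systems would rely on trained classifiers and editorial partnerships.

```python
from typing import Dict, List

# Illustrative only: vetted context links keyed by topic.
VETTED_SOURCES: Dict[str, List[str]] = {
    "vaccine": ["https://en.wikipedia.org/wiki/Vaccine"],
    "election": ["https://en.wikipedia.org/wiki/Electoral_fraud"],
}

def attach_context(post_text: str, flagged: bool) -> List[str]:
    """Return vetted links to display alongside a post a classifier has flagged."""
    if not flagged:
        return []
    text = post_text.lower()
    return [url for topic, urls in VETTED_SOURCES.items() if topic in text for url in urls]

print(attach_context("New vaccine rumors are spreading fast", flagged=True))
```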

Furthermore, users could undertake voluntary self-regulation through a technical tool that gives them customizable control over their feed’s ranking algorithm, or the option to opt out of algorithmic filtering altogether and view their timeline in chronological order.
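A minimal sketch of that opt-out, assuming a simple per-user preference flag and a stand-in engagement score (real ranking systems weigh far more signals than this):

```python
from datetime import datetime
from typing import Dict, List

def build_feed(posts: List[Dict], user_prefs: Dict) -> List[Dict]:
    """Order a timeline by the user's own preference: engagement ranking by
    default, or plain reverse-chronological if they opt out of filtering."""
    if user_prefs.get("opt_out_of_ranking"):
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    # Default path: a placeholder engagement score stands in for the ranking model.
    return sorted(posts, key=lambda p: p.get("predicted_engagement", 0.0), reverse=True)

posts = [
    {"id": 1, "created_at": datetime(2020, 5, 1), "predicted_engagement": 0.9},
    {"id": 2, "created_at": datetime(2020, 5, 6), "predicted_engagement": 0.2},
]
print([p["id"] for p in build_feed(posts, {"opt_out_of_ranking": True})])   # [2, 1]
print([p["id"] for p in build_feed(posts, {"opt_out_of_ranking": False})])  # [1, 2]
```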

Offering dissenting opinions a “right of reply,” alongside “must carry” policies, as Sunstein proposes, would promote education and bring attention to issues of public interest, truly allowing “consumer sovereignty.” To narrow the editorial challenge posed by so much content, regulators and companies could together convene a panel of experts across sectors to consistently monitor and suggest a limited number of major topics in which fake news or hate speech may cause serious harm. Congress could fund public library tutorials and workshops to ensure citizens understand how to leverage these functions in their networks, and to get civil society members on the same page about what constitutes well-rounded, locally inclusive media consumption today.

Recommitting to serving and fostering a well-informed polity that is empowered by democratic self-rule is the best path forward for platforms to return to their virtuous essence. Arming users with more options to choose a healthier information diet to combat the “infotainment” phenomenon is a first step.

Ensure Conspicuous Display and Ease of Access to Hate Speech and Disinformation Guidelines

All content-related policies centering on hate speech and disinformation should be displayed on platform home pages with clear signage and a high-level bulleted summary of prohibited actions and content parameters.

Partnering with platforms to build systematic redirection to Terms of Service and Community Guidelines, triggered by keyword identification or other algorithmic signals denoting potentially false, harmful, or prohibited content, could also be useful in the moment. This automated, contextual notification would serve as a gentle reminder of the rules and the consequences of violating them, perhaps prodding someone to think twice before posting for fear of account suspension.
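One way such a pre-post reminder might work, sketched with simple regex triggers and placeholder guideline URLs; a production system would use trained classifiers and its own policy links rather than string matching.

```python
import re
from typing import Optional

# Illustrative patterns and URLs only.
POLICY_TRIGGERS = {
    r"\bmiracle cure\b": "https://example.com/community-guidelines#misinformation",
    r"\bwipe (them|it) out\b": "https://example.com/community-guidelines#violence",
}

def pre_post_reminder(draft: str) -> Optional[str]:
    """Return a link to the relevant guideline section if a draft matches a
    trigger pattern, so the author sees a reminder before publishing."""
    for pattern, guideline_url in POLICY_TRIGGERS.items():
        if re.search(pattern, draft, flags=re.IGNORECASE):
            return guideline_url
    return None

url = pre_post_reminder("This miracle cure will fix everything")
if url:
    print(f"Heads up: this post may violate the rules. See {url}")
```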

While these propositions address just the tip of the iceberg of free speech and content moderation disputes online, there is no doubt that challenges will abound. Recent Congressional hearings highlight the chasm between Democrats and Republicans on the issue, namely around criticism that social media companies’ rules disproportionately censor conservative speech, with Ted Cruz going so far as to threaten Facebook, Google, and Twitter with charges of anti-conservative bias. Platform decrees will likely continue to be battlegrounds for partisan conflict over whose speech is permitted and why such decisions remain so arbitrary. The suggestions above serve to create some order and may help break what has devolved into a deeply partisan stalemate.

In rethinking a social contract for today’s digital democracy, the government can aid in breaking internet companies’ mass preference formation, a form of unfreedom that shackles us to our echo chambers and harms the foundational values of free speech, democratic debate, and truth. In so doing, it can also recognize the unprecedented and nuanced complexity of holding these companies lawfully accountable.
