Since the 2016 presidential election, Facebook has been under fire for the ease with which misinformation spreads on its platform. A recent New York Times investigation shed light on how the company’s leadership deflected, denied, and secretly counterattacked critics as it sought to conceal the extent of the fake news, election meddling, and harassment infesting its platform.
The tech giant has been in damage control mode since the story broke. After denying some of the allegations in the investigation, Mark Zuckerberg said Facebook would create an “independent body” to help make decisions on content moderation.
Zuckerberg and his company have to solve a genuine paradox: calibrating the right amount of censorship on a platform touted as a digital space of free expression and communication. One way to solve this dilemma would be to take a different approach to content moderation, one that may run against the company’s bottom line.
While tech companies assert that their social media platforms are open and neutral, the economic incentives to get views, likes, and shares often prevent them from seeing how the negative aspects of society are amplified on their platforms. Digital spaces can be bastions of free speech, a place for friends and family to keep in touch, but they can also be gamed to spread misinformation and political propaganda.
The independent bodies created in Silicon Valley often focus too much on code and not enough on effective moderation. What Facebook needs is fewer engineers, designers, and lawyers, and more thinkers aware of how social dynamics play out in the digital world. It needs people who study things like race, gender, linguistics, social media culture, and historical movements.
Facebook should put a range of academics on this independent body.
Scholarship isn’t a panacea for how social media inflates political divisions and turmoil, but it would help tech innovators make better decisions about how to design their platforms to discourage misuse.
Perhaps Facebook could combat fake profiles if it had people who understood speech patterns. If the company consulted with experts on video game culture or group identity in chat rooms, maybe it could mitigate the tribal sorting function and outrage amplification of social media. Maybe Facebook could have foreseen Russia exploiting our society’s existing racism and misogyny if people with backgrounds in African-American studies or gender studies were part of the moderation team.
Facebook should talk to scholars like Sarah T. Roberts or Tarleton Gillespie, who study how content moderation shapes social media. Safiya Noble would show them how societal inequalities and biases become encoded into digital tools. Ramesh Srinivasan or Renata Avila could explain how the cultural biases of Silicon Valley affect the world. Whitney Phillips could provide a sense of how trolling changed the culture of the internet, and even politics. Joy Buolamwini’s research could help the company cultivate a culture of “algorithmic accountability.” And though it would be bitter medicine to swallow, consulting critics like Jaron Lanier or Siva Vaidhyanathan could be a huge step toward changing the toxic feedback loop the platform incentivizes.
Many more names could be added to this list, but the point is that Facebook needs to look beyond computer science to tackle its pressing issues.
Criticism of the idea that academics would improve Facebook mainly stems from the company’s business model, which is based on increasing engagement and connecting users to advertisers. To borrow an analogy from Lanier and former Google design ethicist Tristan Harris, we can think of Facebook as a casino. Yes, casinos create a sociable atmosphere (bright colors, live entertainment, food and drinks, comfortable furniture, ornate decorations, an array of games and machines, etc.) where people can come to have a good time. But we shouldn’t forget that casinos are designed to facilitate one goal: getting people to gamble away their money. Though Zuckerberg asserts he is “driven by a sense of purpose to connect people and bring us closer together,” Facebook’s goal as a business is to keep people on Facebook. It’s possible that hiring academics wouldn’t change anything.
Facebook has more than 2.2 billion users, more than the population of any country on the planet, yet crucial moderation decisions are being outsourced. What users see is mediated by those who likely do not have the comprehensive, culturally specific knowledge and analytical training to make these choices.
Facebook is a business, and its governing structure is accordingly top-down rather than democratic, but making content moderation more democratic could help Zuckerberg make more informed decisions. Whether Facebook or any other social media is a “neutral platform” misses the point because the world we live in is anything but neutral. Having an independent body that brings together experts from various backgrounds would help Facebook deal with the dissonance between its founder’s purported values and the reality of how the platform is used in an increasingly complex world.
The problems sparked by technology can’t be solved by technologists alone. Facebook needs people with the right expertise to spot individuals and governments seeking to misuse its platform. Academics have the skills to spot certain social patterns that A.I., algorithms, and outsourced workers cannot.
Facebook isn’t the only tech giant facing criticism. Platforms like Twitter have garnered their own share of scrutiny. Just as tech companies draw talent from the top technology programs in the world, the solution to many of their most pressing problems could come from employing those who understand the intersections between human social behavior and technology.