Regulating ‘Acceptable Speech’ on Facebook: Bad Idea, or Worst Idea?
Facebook co-founder Chris Hughes is the latest prominent figure to call for government regulation of the tech industry.
In a new op-ed in The New York Times, Hughes argued that Facebook should be broken up into smaller pieces — a stunning admission from someone who helped design the platform (and earned as much as $430 million for his efforts). He also claimed that Facebook CEO Mark Zuckerberg wields far too much power:
“I don’t blame Mark for his quest for domination. He has demonstrated nothing more nefarious than the virtuous hustle of a talented entrepreneur. Yet he has created a leviathan that crowds out entrepreneurship and restricts consumer choice. It’s on our government to ensure that we never lose the magic of the invisible hand.”
The ultimate solution to the problems raised by Facebook, Hughes added, is regulation in the form of a federal agency tasked with monitoring the tech industry. Not only would this hypothetical agency protect user data; it would also establish guidelines for “acceptable speech” on social media:
“This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling ‘fire’ in a crowded theater, child pornography, speech intended to provoke violence and false statements to manipulate stock prices. We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.”
What Hughes is proposing is extremely messy, especially since Facebook and other social networks have spent years hiding behind Section 230 of the Communications Decency Act, which allows these platforms to claim they're not responsible for whatever content users post (similar to how Verizon isn't responsible for what you say on a phone call). But if social networks move to aggressively police content, they start to look like publishers — with all the legal exposure that comes with that designation. (And that's before we get into the extra-super-messy issues related to, um, [checks notes] the First Amendment.)
For smaller app and web developers, the potential consequences of “acceptable speech” are huge, even if they don’t run anything that even remotely resembles a social network. What if such policies are put in place, and extend to messaging apps and coding repos in addition to “traditional” social networks? (That’s not such a far-fetched assumption, considering how Facebook is trying to evolve in a “messaging first” direction, and already owns prominent messaging hubs such as WhatsApp.) Once you argue for the regulation of any sort of user-generated content, everyone is potentially affected.
And smaller tech firms don’t have the time to police all of their user-generated content — or hire the lawyers necessary to ensure they stay aligned with new speech regulations. In essence, firms such as Facebook that have the money and technical know-how to police “acceptable speech” would benefit the most, and all others would likely suffer (especially if the Facebooks of the world developed sophisticated machine-learning algorithms that automatically handled many of these content-management tasks).
The good news: there's likely zero chance of a sweeping "acceptable speech" law surviving in court. But that might not prevent tech companies from cracking down harder on content that's legally (and perhaps morally) questionable.
Any attempt at censoring activity will lead to firestorms in the press and online, as well. According to new data from the Pew Research Center, 72 percent "of the public thinks it likely that social media platforms actively censor political views that those companies find objectionable." Unless Facebook executives want to end up in front of Congress (again), taking a super-active role in regulating content probably isn't the best idea from a PR perspective.
Facebook’s (and Tech’s) Privacy Future
After Hughes’s editorial, Zuckerberg responded via an interview with the French media: “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference.”
And chances are good that the U.S. government won't move to break up Facebook anytime soon, if we're being realistic about it (lobbyists and campaign donations have a funny way of swaying lawmakers onto your side). When it comes to privacy, though, momentum seems to be accelerating for the U.S. to adopt more stringent regulations — a "Data Bill of Rights" similar to what the European Union imposed with the General Data Protection Regulation (GDPR).
Last year, Rep. Ro Khanna (D-CA 17th District) consulted with prominent members of the tech industry (including Tim Berners-Lee and Nicole Wong) and drafted such a "Bill of Rights." Its tenets included the right to opt into the collection of personal data, as well as the ability to easily port data.
That sort of thing is potentially great for consumers, and if the GDPR rollout proved anything, it's that companies can adapt to more stringent data rules.
Indeed, as Google demonstrated during its huge I/O conference this week, companies are already moving to place more privacy controls in the hands of users. For instance, Google is introducing an "incognito" mode to Google Maps. While those features haven't exactly thrilled privacy experts, it's clear that Google (along with Facebook, which is suddenly very privacy-centric) recognizes that it needs to offer stricter privacy controls before the federal government does it for them.
In the meantime, it will be interesting to see how much traction Hughes's editorial gets, considering he co-founded Facebook.