Facebook has apologized for the way it handled “hate speech” against women on the social network, after repeated complaints from advocacy groups alleging that it was turning a blind eye to what was clearly offensive behavior. This has been hailed by some as a victory, since Facebook has admitted that its policies around such content are weak and weren’t applied properly. But even if its policies are improved — as the network says they will be — do we really want Facebook to be the one deciding what qualifies as hate speech?

What makes this kind of topic so difficult to discuss is that much of the content Facebook was accused of harboring is unpleasant in the extreme: some of the pages that were mentioned in the complaint by the group Women, Action and the Media advocated violence against women, promoted or glorified rape, and made jokes about sexual or physical abuse (one of the more tame examples was a page called “Kicking Your Girlfriend in the Fanny Because She Won’t Make You a Sandwich”). No one in their right mind would argue that this kind of content is worthwhile, or that it isn’t offensive and disturbing.

Facebook decides what speech is free

The larger problem with making Facebook take this kind of content down, however, is that it forces the network to take an even more active role in determining which of the comments or photos or videos posted by its billion or so users deserve to be seen and which don’t. In other words, it gives Facebook even more of a license to practice what amounts to censorship — something the company routinely (and legitimately) gets criticized for doing.

To take just a few examples, Facebook has been repeatedly accused of removing content that promotes breast-feeding, presumably because some users see it as offensive — or perhaps because it trips the automatic filters that try to detect offensive content and send it to the team of moderators who actually police that sort of thing. The social network has also come under fire for removing pages related to the Middle East, as well as pages and content published by advocacy groups and dissidents in other parts of the world.

As Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, has pointed out, the entire concept of “hate speech” is a tricky one. In France, posting comments that are seen as homophobic or anti-Semitic qualifies as a criminal act, and Twitter is currently fighting a court order aimed at having the social network identify some of those who posted such comments. The company is resisting at least in part because it has staked its reputation on being what general counsel Alex Macgillivray has described as the “free-speech wing of the free-speech party.”

It’s an increasingly slippery slope

Some groups have tried to convince Facebook that pages promoting heterosexuality qualify as hate speech, while others have complained that pages making fun of people who are overweight should fall into the same category. Many people would undoubtedly see the kind of content that Women, Action and the Media is complaining about as clearly offensive in a way that these other pages aren’t — but not everyone would agree.

Where does Facebook draw the line on this particular slippery slope? Is it only the content that draws the most vocal criticism that gets removed, or only the campaigns that manage to influence advertisers?

As more than one free-speech advocate has noted, if popular protest had determined what content we were able to see or share a few decades ago, anything promoting homosexuality or half a dozen other topics would have vanished from our sight. There is at least a case to be made that the simplest course of action for a network like Facebook would be to remove content only when it is required to do so by law. But then what happens to the kind of content it just apologized for?

Private entities making their own rules

To its credit, the social network has tried to find other ways of discouraging these kinds of pages — including asking page administrators to identify themselves (although the company’s “real name” policy raises some equally troubling questions). And while Facebook’s behavior looks and feels a lot like censorship, it isn’t legally an infringement of free speech, because Facebook is a corporate entity and free-speech protections like the First Amendment only apply to governments.

And that central fact about Facebook — that it is a proprietary platform controlled by private interests — is part of what makes this situation so complex. For large numbers of people, the social network is a central method for connecting and sharing information with their friends, a combination of water cooler and public square. But like Twitter, it is not a public square at all: it is more like a shopping mall, with private security that determines what behavior is tolerated and what isn’t.

That’s not a problem when you want security to remove the people who are offending or disturbing you, or when you agree with the company’s decisions — but it’s quite a different thing when you are the one being accused of being offensive or disturbing, and you have no recourse. And Facebook has provided plenty of evidence that it can make just as many wrong choices as right ones.