Freedom of Speech and Social Media Regulation in Australia

Madeline Whitting
Published in The Public Ear
Sep 27, 2019

If you’re an Australian and didn’t hear about the Israel Folau case earlier this year, you must have been living under a rock. If you are one of those people, I’ll quickly break it down for you.

On April 10, the rugby star posted to Instagram that "hell awaits for drunks, homosexuals, adulterers, liars, fornicators, thieves, atheists and idolaters". Rugby Australia was quick to announce that his comments breached the game's Code of Conduct and that it would terminate his recently signed four-year contract. Folau then hit back, filing a claim that he was unlawfully dismissed because of his religion, in breach of Section 772 of the Fair Work Act.

Issues such as this really bring to light the blurring of the public and private spheres. People like Folau once voiced opinions like these within their own private sphere, but social media now gives them the opportunity to share those thoughts publicly. Panic is running high about this unregulated expression and the growing exposure to potentially "dangerous" and offensive content.

So, what does this mean for issues around freedom of speech? And, if people are so panicked by this overriding of social strictures, how can we attempt to regulate political expression on social media?

Drawing the line between protecting free speech and legislating against hate speech can be difficult, especially as the internet promised freedom of speech, a right that underpins the core structure of a democratic society. That promise is now under threat from the new challenges of a networked society.

Some suggest regulating social media platforms the same way as other internet intermediaries. Social media platforms stand both between users and other users, and between policymakers and the people they seek to regulate. As the 'publishers' of their users' content, platforms are called on to bear some responsibility for the actions of their users. In Australia, these intermediary laws are an outright mess, and the majority of the legislation predates social media, making it finicky to apply.

In cases like Folau's, regulation around hate speech and discrimination could also be enforced. Once again, law enforcement may find itself in murky waters, as laws vary across Australia's states. Federally, we do have the Racial Discrimination Act, which forbids acts done outside the private sphere that offend, insult, humiliate or intimidate others on the basis of race, colour or national or ethnic origin. But like the intermediary laws above, this legislation is also old: it makes no mention of gender identity or sexual orientation, which are key targets of online hate today.

Even if these laws could be applied, offending content can be difficult to pinpoint and police, because social media allows it to spread across jurisdictions. This calls for a universal law, which would be extremely challenging to develop and implement.

While one question is how social media platforms should be regulated, the flip side of the coin is how platforms govern themselves. Although they are not legally required to do so, many social media platforms police their own sites. Most have strikingly similar user guidelines when it comes to problematic content, establishing boundaries around sexual or violent images, hate speech, harassment of other users and the promotion of illegal activity.

Arguably, it is impossible for a social media network to find a solution that ensures no hateful or illegal content will ever appear. Employing moderators to examine each post before it is published is simply infeasible at the scale of these platforms. The only workable approach is 'publish, then filter': content that breaches the guidelines will always be live for some time before anyone responds. Most platforms rely on the community to 'flag' offensive content for moderators, as the sketch below illustrates.
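To make the 'publish, then filter' model concrete, here is a minimal toy sketch in Python. Everything in it (the Post class, the flag threshold of three, the review queue) is a hypothetical simplification for illustration, not a description of any real platform's internals.

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # hypothetical: flags needed before a post reaches moderators

@dataclass(eq=False)  # eq=False so membership checks compare identity, not field values
class Post:
    author: str
    text: str
    flags: int = 0
    removed: bool = False

class PublishThenFilter:
    """Toy 'publish, then filter' pipeline: posts go live immediately,
    and moderation only happens after the community flags them."""

    def __init__(self) -> None:
        self.feed: list[Post] = []
        self.review_queue: list[Post] = []

    def publish(self, author: str, text: str) -> Post:
        # No pre-screening: the post is visible the moment it is created.
        post = Post(author, text)
        self.feed.append(post)
        return post

    def flag(self, post: Post) -> None:
        # Community flagging: enough flags escalate the post for human review.
        post.flags += 1
        if post.flags >= FLAG_THRESHOLD and post not in self.review_queue:
            self.review_queue.append(post)

    def moderate(self, violates_guidelines) -> None:
        # Moderators only see flagged posts, so guideline-breaching content
        # has already been live for some time before it can be removed.
        for post in self.review_queue:
            if violates_guidelines(post):
                post.removed = True
        self.review_queue.clear()
```

The structural point is that publish() never blocks: the gap between a post going live and moderate() running is built into the design, not a failure of enforcement.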

These difficult questions all point to responsibility: should the government take responsibility for its citizens, or should the platforms take responsibility for their users? Ultimately, I think social media platforms and government regulation have to work together to combat hate speech online. The platforms, as intermediaries, have to develop better methods of correctly identifying and deleting offensive content. Alongside this, the government needs to produce new regulation that sets out what content should be removed and what deserves a financial penalty.

My points parallel those of Facebook founder and CEO Mark Zuckerberg, who earlier this year called on governments to play a more active role in stopping hateful content online. He admits that platforms such as Facebook have too much power over freedom of speech, and argues that these platforms should not be making regulatory decisions on their own.

As policymakers continue to wrestle with the idea of social media regulation, we have to keep in mind the importance of freedom of expression. Any laws made must be careful not to tip the scales disproportionately and undermine freedom of speech and the public interest. This will be an ongoing challenge in a fast-evolving media environment, which could mean regulation is always one step behind.
