When Top Tech Companies Are Complicit in Crimes

Srinthan Hampi
Published in Kubo · Sep 15, 2021

It is common knowledge today that social media platforms, and the internet in general, are used to commit crimes. Such acts are most often committed in spaces where tracing the perpetrator is nearly impossible. However, setting aside crimes that occur on the dark web (the sale of illegal substances, the sharing of illicit pornography and software, and so on), offenses are also routinely committed on popular social media platforms. These include defamation, hate speech, and other similarly damaging acts. While the proprietors of social media platforms have historically avoided liability by trying to prevent these offenses to the best of their abilities, a recent decision by the High Court of Australia may have completely changed the dynamics of liability incurred by companies online.

The case of Fairfax v Voller turns the status quo on its head, inviting radical changes in how liability is ascribed to stakeholders when crimes are committed online.

The case arose when media companies failed to prevent the publication of defamatory and trauma-inducing comments on their official Facebook pages, culminating in a lawsuit filed in 2017. The court found that Fairfax, along with several other mainstream Australian media outlets, made no attempt to filter comments on posts relating to Voller’s juvenile detention, which was the subject of the discussion on Facebook. The media companies, as administrators of the Facebook pages, had the ability to screen comments and preemptively delete those that might unfairly damage Voller’s reputation. This lack of oversight, despite the tools available to prevent the publication of defamatory statements, convinced the court that Fairfax could be held liable as a ‘publisher’ of such content.
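To make the idea of ‘screening comments’ concrete, here is a minimal sketch of keyword-based pre-moderation, the kind of blunt tool a page administrator could plausibly apply. Everything in it (the blocklist, the function name, the review workflow) is a hypothetical illustration, not any platform’s actual moderation API:

```python
# Hypothetical sketch of keyword-based comment pre-moderation.
# The blocklist, names, and workflow are illustrative assumptions,
# not any platform's real API.

BLOCKLIST = {"criminal", "abuser", "liar"}  # terms an admin might flag

def should_hold_for_review(comment: str) -> bool:
    """Return True if the comment contains a blocklisted term."""
    words = {word.strip(".,!?").lower() for word in comment.split()}
    return bool(words & BLOCKLIST)

comments = [
    "Great reporting, thanks for covering this.",
    "Everyone knows he is a liar.",
]

for comment in comments:
    if should_hold_for_review(comment):
        print(f"HELD FOR REVIEW: {comment}")
    else:
        print(f"published: {comment}")
```

A filter even this crude illustrates the kind of oversight the court found entirely absent.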

This marks a massive departure from how such cases are currently treated, under Safe Harbour protections and third-party liability exemptions. Under such exemptions, administrators and proprietors of online platforms cannot be held liable for criminal acts committed by third parties. The reasoning is simple: it would be unfair to penalize companies that merely maintain forums for discussion for the actions of a small minority of criminals. In most common law countries, Safe Harbour exemptions are contingent on the platform’s administrators meeting thorough due diligence standards, after which they can assume the role of ‘innocent disseminators’.

But wait: this case and this story may not be as consequential as you think. The case penalized only the administrators of the Facebook pages, not Facebook itself. As long as Facebook gave users and administrators the tools to ensure safe and legal discussion on its forums, there was no basis to charge it with enabling the defamatory comments on the Fairfax pages. Still, this could be the start of a wave of public sentiment that forces companies to answer for crimes committed on their own platforms.

However, the core question remains: whom do we hold responsible for crimes committed against innocent users online?

It is extremely hard to argue that companies themselves should be held liable for crimes committed on their proprietary platforms, simply because it is practically impossible to filter out every single instance of defamation online, despite the many checks and balances corporations have put in place. Moreover, social media giants may choose to abandon entire markets if the potential liability becomes too costly. But does that absolve Facebook and similar platforms of all responsibility for the crimes committed on them?

To draw a relevant parallel, consider the use of copyrighted content online. YouTube’s Content ID may be the single most sophisticated copyright-infringement detection system in the world today, and similar systems exist on all of the world’s biggest social media platforms to ensure that nobody unfairly profits from intellectual property they do not own. Yet on YouTube, the claims and appeals system appears to be broken or non-functional much of the time. This never seems to dissuade YouTube from cracking down on content creators, even when their content may be protected by fair use. In this instance, it is fair to say that YouTube values the monetary interests of copyright holders over the rights of its innocent content creators.
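For intuition, fingerprint-style matching of this kind can be sketched in a few lines. The toy below uses exact hashing purely for illustration; real systems like Content ID rely on proprietary perceptual fingerprints that tolerate re-encoding, cropping, and pitch shifts, so treat every detail here as an assumption:

```python
# Toy illustration of fingerprint-style content matching.
# Real systems use robust perceptual fingerprints; the exact
# SHA-256 hashing here is a deliberate simplification.

import hashlib

def fingerprint(chunk: bytes) -> str:
    """Produce a fingerprint for a fixed chunk of media data."""
    return hashlib.sha256(chunk).hexdigest()

# A rights holder registers fingerprints of their work in advance.
registry = {fingerprint(b"frame-data-from-a-copyrighted-film")}

# Each new upload is split into chunks and checked against the registry.
upload_chunks = [b"original-user-footage", b"frame-data-from-a-copyrighted-film"]

for chunk in upload_chunks:
    if fingerprint(chunk) in registry:
        print("Match found: flag the upload for a copyright claim")
```

Even in this toy, the asymmetry the article points to is visible: matching is automated and instant, while disputing a match is a manual, slow appeals process.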

If this level of interference is applied to something as (relatively) inconsequential as copyright violations, why isn’t the same level of intervention practiced when dealing with actual, harmful crimes committed on YouTube? Following this line of reasoning, we may actually be justified in forcing all social media giants to protect the rights of their users, even to the extent of holding these giants responsible for crimes committed under their noses.

This, however, is an extremely unpopular standard to hold social media companies to. It becomes even harder to enforce, considering how much influence these companies hold in legislatures all over the world. Imposing such a rule may inadvertently nudge them into shutting their doors in entire countries (as Facebook may have to do in Australia in the very near future).

With the internet and social media essentially ubiquitous in 2021, it is safe to say that we are on the brink of a radical change in how social media is viewed as a tool of progress. The more we rely on social media for connectivity and business, the greater the burden on companies to prevent their users from being harmed.

Project Tinker is a Bangalore-based startup aimed at giving ideators the tools they need to build amazing ideas. To learn more about our services and philosophy, visit project-tinker.com
