The Unintended Economic Implications of Limited Liability Regimes for Online Platforms
An article by Stefan Herwig, with an introduction by Neil Turkewitz
A few weeks ago, I came across an article by Stefan Herwig in which he examined the political and economic foundations of an online environment that facilitates the distribution of harmful materials. He reminds readers and policymakers that the current problems we are encountering on the internet are rooted in rational economic conduct by major actors. If we want to change the results, we need to change the underlying incentives, or we are doomed to what is essentially a pattern of rinse and repeat. I found his article quite interesting, and I reached out to ask his permission to post a translation of the original (written in German). He agreed, and the translation appears below. I have made some minor adjustments so that the language flows better in English, but have otherwise kept his wording intact. I hope readers find it interesting.
Of particular significance are his arguments about how limited liability operates to stifle competition by rewarding scale, and how the economic incentive to rely on algorithms rather than human content moderators is responsible for both over- and under-moderation. He closes with a plea to avoid “state failure,” a plea that I echo and one that highlights how present internet governance, or the absence or failure thereof, is largely informed by the cyberlibertarianism of Barlow’s Declaration of the Independence of Cyberspace, which views states as unwelcome in the kingdom of the mind. In other words, present internet norms are predicated on the failure of the state and the irrelevance of traditional principles of law and economics. That is the starting point. Perhaps it is time to rethink our assumptions?
Translation of article
Following the revelations by whistleblower Frances Haugen and last week’s failure of Facebook’s infrastructure, politicians and the media are outbidding each other with demands for regulation, such as breaking up the corporation. But the most sensible options are being ignored.
Last week offered a one-two punch of bad news for Facebook CEO Mark Zuckerberg. First, his network of social media and messenger platforms, Facebook, Instagram and WhatsApp, failed completely and worldwide for roughly seven hours due to an internal DNS configuration error. Then the source of a series of scandalous leaks revealed herself to the public: former Facebook employee Frances Haugen had leaked internal studies to the Wall Street Journal, according to which Instagram, for example, has a negative effect on the psyche and body image of young people, with consequences ranging from eating disorders to depression to suicidal thoughts in roughly one-third of female Instagram users. Facebook, however, decided to keep these findings under wraps, refusing to change its profit-generating algorithm, which feeds users the information that is most interesting to them but also most polarizing.
Testifying before the U.S. Congress last week amid great public interest, Haugen confidently sharpened her accusations, saying Facebook was knowingly profiting from hate and polarization, the byproducts of its algorithm. The company had also failed to contain disinformation, she said, and had allowed too much misinformation about the Covid-19 pandemic on its platforms. Haugen’s revelations that social media algorithms have negative effects on users and society are by no means new. Numerous scientific studies have long concluded that the recommendation systems of the major social media platforms lead to radicalization, polarization and hate speech. Their “digital fallout” is thus the unintended side effect of a highly profitable business model.
What is new in this case, however, is that Haugen has the incontrovertible receipts that Facebook knew about the harm it was facilitating and took no effective action to address its role.
In response to these revelations and the almost simultaneous failure of the group’s infrastructure, which also made clear how closely the three platforms Facebook, Instagram and WhatsApp are technically interlinked, there were calls from various sides for more regulation. Green MEP Rasmus Andresen tweeted his call for the group to be broken up, even though a U.S. company like Facebook could hardly be effectively broken up from within Europe. German Federal Justice Minister Christine Lambrecht as well as EU Commissioner Thierry Breton now also see renewed cause for more far-reaching regulation. Hardly anyone is likely to believe the constant assurances that Facebook is on the mend.
But how do you regulate such a powerful corporate conglomerate, which has almost inexhaustible financial resources, continues to grow even in times of crisis, and whose algorithm is ultimately so opaque that its social effects only become visible years later through scientific studies?
How the “fallout” from the products of economic actors should be dealt with in regulatory terms is actually a problem that economics solved long ago; those solutions have simply been ignored in digital policy for years. In addition, the political framework in the U.S. and Europe has itself contributed significantly to the problem.
When the products and services of economic actors generate such unintended but harmful side effects, economics even has its own term for them: they are called “external effects” or “externalities” when the originator does not have to bear the resulting costs, or bears them only to an insufficient extent. This is precisely the case with Facebook’s algorithmic amplification of toxic content. For years, Facebook has been able, lawfully, to outsource responsibility for the effects of its algorithms to third parties: to journalistic fact-checkers who counter disinformation, for example, or to police investigators who pursue hate speech and terrorist propaganda. In the worst case, the consequences are borne by individual victims or by society as a whole, since often neither platforms nor perpetrators can be prosecuted. What is explosive here is that the platforms’ ability to outsource the damage to third parties is more or less covered by law. Facebook, like other social media corporations, has very limited responsibilities under the laws of most nations, which follow the lead of Section 230 in the US and the E-Commerce Directive in the EU. Both sets of laws granted platforms a kind of general absolution for the effects of their own conduct. Only when they become aware of an individual violation of the law do the platforms have to delete the content in question. In the US under Section 230, there is not even a requirement to delete illegal content once the platform becomes aware of it; whether they delete or not does not affect their immunity, unless the platform’s own misconduct rises to the level of violating federal criminal law.
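To make the textbook concept concrete, here is a standard formulation of an externality. The illustration is mine, not part of Herwig’s original text, and the cost terms are generic placeholders rather than anything Facebook reports:

```latex
% Standard externality decomposition (illustrative only, not from the article).
% C_priv: costs the platform actually bears (moderation staff, infrastructure, fines);
% C_ext:  costs shifted onto third parties (victims, fact-checkers, police, society).
\[
  C_{\text{social}} = C_{\text{priv}} + C_{\text{ext}}
\]
% Under broad liability immunity the platform optimizes only over C_priv, so it
% "produces" more algorithmic amplification of toxic content than is socially
% optimal. Internalization means moving C_ext back into the platform's own cost
% function, for example through liability rules, so that the privately rational
% decision and the socially optimal one coincide.
```

Read in these terms, the rest of Herwig’s argument is about who currently bears C_ext and how to move it back onto the platforms.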
This legal situation is so advantageous for the platforms in part because it has enabled them to automate a large share of their deletion tasks. That saves staff costs, but it often produces disastrously poor results, especially when the deletion algorithms are not checked in parallel by humans.
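A minimal sketch can illustrate why purely automated deletion tends toward both over- and under-moderation. The thresholds, labels and function names below are hypothetical and are not drawn from Facebook’s actual systems; the point is only the structural difference between a pipeline with a human-review band and one without:

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    score: float  # hypothetical classifier score in [0, 1]: estimated probability the item is harmful

# Hypothetical thresholds; real systems tune these per policy, language and context.
AUTO_DELETE = 0.95   # above this, delete automatically
AUTO_KEEP = 0.30     # below this, keep automatically

def route(item: Item) -> str:
    """Route an item to an action. Everything between the two thresholds goes
    to a human review queue instead of being decided by the model alone."""
    if item.score >= AUTO_DELETE:
        return "delete"
    if item.score <= AUTO_KEEP:
        return "keep"
    return "human_review"

def route_fully_automated(item: Item, cutoff: float = 0.5) -> str:
    """Single-cutoff variant: cheaper, because no human queue exists, but every
    borderline case becomes either an over-deletion or an under-deletion."""
    return "delete" if item.score >= cutoff else "keep"

if __name__ == "__main__":
    borderline = Item("ambiguous political satire", score=0.55)
    print(route(borderline))                 # -> human_review
    print(route_fully_automated(borderline)) # -> delete (a likely over-moderation)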
However, as the platforms’ user numbers and media reach grew, the social problems caused by their content grew as well, exponentially. Only when hate speech, disinformation and polarization had spread throughout society were platforms such as Facebook and YouTube forced into more effective deletion practices, and so far only through selective tightening, such as the German Network Enforcement Act. That law, however, only regulates the deletion of criminally punishable content such as hate speech. It does not, for example, cover the effective handling of user objections. As recently as last month, Facebook dismissed user objections to wrongfully deleted content with the automated response that it could not process all objections due to the “coronavirus pandemic,” and that users should please be understanding. The political regulation of content moderation at Facebook, and of course at other social networks as well, thus leaves considerable room for expansion.
So what use is the realization that Facebook’s “digital fallout” can be understood, in economic terms, as a market failure due to externalities? It helps the legislator insofar as it makes clear that such problems can no longer be left to the market itself, because the market is failing in the economic sense. The diagnosis also shows where regulation in response to Haugen’s revelations would actually have to start: an economic operator immunized from liability will tend to choose the solution with the lowest cost to itself rather than the most effective one. Facebook’s strategy of outsourcing content moderation to, among others, Filipino minimum-wage workers, as documented in the 2018 film “The Cleaners,” is just one piece of evidence of how the corporation repeatedly tries to keep the costs and overhead of its global network down with wholly inadequate measures.
Although it is now almost certain that the radicalization and polarization of societies, particularly noticeable in the USA, can be traced back to the effects of information algorithms, legislators are not acting, or are acting only in ways that do not fully respond to the scale and nature of the problem. In Brussels in particular, there are signs of another massive policy failure: the rules for platforms are currently being renegotiated there as part of two legislative packages, the “Digital Services Act” and the “Digital Markets Act,” yet the liability privileges of the platforms, whose basic features date back to the ISDN era of the internet, are to remain virtually untouched. This would mean that the legal subsidy is maintained, even though the legislation was actually intended to politically contain the various harms facilitated by online platforms.
In terms of regulatory policy, the correct response to the externalized damage of “digital fallout” would be to internalize its costs back into the market and pass them on to the polluter. That, however, would only be possible through a fundamental change in the general liability framework and a substantial improvement in the content moderation of social networks.
The problems are by no means limited to the platforms of the Facebook group. Other platforms such as YouTube, Twitter or TikTok would also need effective content moderation: moderation that deals with ALL content-related problems on the platform promptly and effectively, and that is carried out by qualified people rather than predominantly by automated deletion algorithms. Only when the cost of effective content moderation grew so high that it significantly cut into a social media giant’s profits would it become cheaper for the platform to change its algorithm so that fewer deletion requests and objections arise in the first place. This threshold could be reached quickly, especially at mega-platforms such as Facebook, YouTube or TikTok, whose user numbers run into the billions.
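Herwig’s crossover argument can be sketched with a deliberately crude back-of-the-envelope model. Every number and function name below is hypothetical, chosen only to show where the threshold sits, not as an estimate of any platform’s actual finances:

```python
# Toy model of the crossover described above: once mandated human moderation costs
# more than the extra profit the polarizing algorithm generates, changing the
# algorithm becomes the cheaper option. All figures are hypothetical.

def moderation_cost(flagged_items_per_year: float, cost_per_human_review: float) -> float:
    """Annual cost of reviewing every flagged item with qualified staff."""
    return flagged_items_per_year * cost_per_human_review

def extra_profit_from_polarizing_algorithm(engagement_uplift: float,
                                            baseline_ad_profit: float) -> float:
    """Additional annual ad profit attributable to the engagement-maximizing algorithm."""
    return engagement_uplift * baseline_ad_profit

# Hypothetical inputs for a platform with billions of users.
flagged = 5e9           # flagged items per year
cost_per_review = 0.50  # dollars per human review
uplift = 0.05           # 5% profit uplift from engagement-driven amplification
baseline_profit = 40e9  # dollars of baseline ad profit per year

liability_cost = moderation_cost(flagged, cost_per_review)                       # 2.5 billion
algorithm_gain = extra_profit_from_polarizing_algorithm(uplift, baseline_profit)  # 2.0 billion

if liability_cost > algorithm_gain:
    print("Cheaper to change the algorithm than to moderate its fallout.")
else:
    print("Cheaper to keep the algorithm and pay for moderation.")
```

With these made-up inputs the moderation bill exceeds the algorithm’s extra profit, which is exactly the point at which a profit-maximizing platform would rather tune down the amplification than staff up the clean-up crew.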
The network would then have to implement more socially acceptable modifications to its algorithm of its own accord, even if they slowed growth and profits. That in turn would lead to less polarization and less hate speech on the platforms, not only because more harmful content would be deleted or moderated, but also because better algorithms would amplify less toxic and polarizing content in the first place. Another important side effect of internalizing the harm is that it would be precisely the largest platforms, those with the greatest potential for harm, that would lose the competitive advantage of liability absolution, opening up the market to competitors again.
The legislator has so far failed to understand that its far-reaching limitations on platform liability have contributed significantly to platform corporations such as Facebook or Google being able to scale to immeasurable proportions. After all, the largest company, the one allowed to discharge the most digital sewage sludge into society’s bathing lakes with impunity and at low cost, also derives the greatest economic benefit from this political subsidy, while at the same time closing the market to smaller competitors. With an appropriate re-regulation of the liability regime in the E-Commerce Directive, one could therefore not only finally assign the problems to those who caused or exacerbated them; one could also ensure that large platforms finally have to bear the costs of their unrestrained scaling, and not just reap the profits. Perhaps politicians would then also realize that for decades they have granted the most lax legal framework to the very digital economic players that were able to scale most easily. Despite all the criticism of the poor management of the Facebook group, the problem of the “digital fallout” is ultimately a political one. In economics, there is a very specific term for a legislator’s failure to act in the face of a market failure, a term that shows once again how important the analysis of market failure due to external effects actually is: “State Failure”.
/end
Stefan Herwig runs Mindbase, a think tank that analyzes internet policy issues scientifically. He advises companies and politicians on the social implications of digitization and its regulation.