He Who Will Not Delete Must Feel

So far, the story of social media regulation has been one of failure. Rushing ahead with a bold draft law to bring social networks under national jurisdiction, the German Ministry of Justice wants to end this tragedy. By Lei Lang & Tim Otto

April 11, 2017

It must have been painful for John Perry Barlow to follow the ongoing debate about the regulation of social networks and online communities. Indeed, the spirit of his Declaration of the Independence of Cyberspace seems to have retreated into the far distance. The once rhetorical question of whether the internet and its users should be independent from nation-state sovereignty has become the very pragmatic one of how this sovereignty can best be extended onto online platforms and their users.

The internet population in 2017. Source: Hootsuite

In fact, Barlow may have had a different vision in mind when he crafted his quasi-constitutional statement in 1996. Back then, the internet seemed to be a well-structured cosmos, sparsely populated by a rather homogeneous group of researchers, nerds and new-world explorers. Twenty years later, this demographic has fundamentally changed: not only has the online population grown to a heterogeneous mass of 3.7 billion people, according to the ITU’s latest report on the information society; it has also become a Petri dish for a multitude of commercial businesses, from innovative start-ups to multinational enterprises.

Thus, it is not really surprising to see national and supranational legislation reaching over into the virtual realm. Indeed, we have already witnessed plenty of regulation framing cyberspace’s technical procedures (e.g. network traffic control), commercial services (e.g. the EU’s e-commerce directive) and peripheral infrastructure (e.g. spectrum management). We saw competition law enter the digital sphere, as we did with intellectual property rights. Yet the current debate on social media regulation shatters the Barlowian ideal more strongly than any of those former eruptions. For one, it seems truly novel and has no explicit “real world” equivalent. Another reason might be its inevitable impact on the very philosophy underlying the declaration: can online dialogue and virtual communication really remain that last resort within the public sphere that lies beyond state control?

The struggle of regulating social media

Leaving all philosophical ideals behind, one has to state clearly that social media have gained enormous socio-political weight, with Facebook at the gravitational center. With almost 2 billion users, the Silicon Valley giant easily surpasses the population of China, India or the European Union. Furthermore, it does not merely compete with local bulletin boards, newspapers and town hall meetings; it actually replaces them, offering better functionality at any time and in practically any location. Thus, the push for regulation by numerous governments seems not only reasonable but indeed necessary: given that the internet has clearly become a public space by the mere volume of its users, Facebook (and platforms of similar relevance such as YouTube, Twitter or Google) have not only fragmented this space but privatized it. They have more or less excluded national governments from their sovereign monopoly on executive power, as shown throughout the endless debates on “Hate Speech” and “Fake News”.

Of course, Mark Zuckerberg and co. are nowhere near being advocates for hate speech. It is rather a clash of inherent cultural norms that fuels this controversy. On the one side: the (mostly) European actors, deeply marked by the continent’s horrific track record of political extremism, hate crimes and genocide. On the other side: the Valley’s brightest stars, raised on cyberspace ideologies (such as the one by the aforementioned Barlow) and on the US Constitution’s First Amendment, which traditionally treats the freedom of speech as an untouchable good. It is this cultural divergence, and the subsequent reluctance to delete hateful content (at all, or within a short time), that mobilizes politicians against social media platforms and makes them call for regulation.

Media company or not: does it even matter?

This call for regulation is loud and continuous but, unfortunately, also slightly monotone: so far it has been almost exclusively about treating Facebook and co. as media companies or news publishers. This legislative trick would have the welcome side effect that social networks would become legally liable for any content published on their platform. Subsequently, they would be forced to introduce some sort of “gate-keeper” function. In radio, television or print, those “gate-keepers” (mostly journalists or editors) filter content either to secure quality standards or to comply with the legal limits of freedom of speech (such as the prohibition of incitement to hatred).

The politicians’ call received support from representatives of the news and publishing industry, yet driven by a different motivation: Facebook and Google make tremendous profits from online advertising, a market which they more or less dominate due to their immense user traffic (see graphic). Traditional news publishers claim that their original content is at least partially creating this traffic. Thus, they would like to see social networks pay licensing fees for referencing it. The British News Media Association, among others, asked the government to conduct “a coherent review of the regulatory status of Google and Facebook, and whether they should continue to be considered mere intermediaries”. The hopes the media associations attach to such regulation might be too optimistic though: in 2014, Google answered a similar push by Spanish publishers by shutting down the national “Google News” section. The result was a traffic drop of up to 14 percent on the publishers’ side. Independent evaluators found that “there is no theoretical or empirical justification for the introduction of a fee paid by news aggregators to publishers for linking to their content”.

Furthermore, however understandable all those claims may be, they suffer from an undeniable inconsistency: social networks are not really publishing companies. There are significant differences regarding the nature of the authors (a manageable number of professionals vs. billions of private users), the communication (linear vs. multidirectional) and the programming (editorial planning vs. individual preferences and social connections). Naturally, CEO Mark Zuckerberg and COO Sheryl Sandberg have made a strong case for Facebook being a technology platform:

“When you think about a media company, you know, people are producing content, people are editing content, and that’s not us. We’re a technology company. We build tools. We do not produce the content. We exist to give you the tools to curate and have the experience that you want, to connect with the people and businesses and institutions in the world that you want.” Mark Zuckerberg, August 2016

Even one of the most vehement supporters of mandated regulation, German Minister of Justice Heiko Maas, had to admit that Facebook “does not correspond to the media concept of television or radio”.

More pragmatic minds might argue (as Heiko Maas actually does) that it doesn’t really matter whether Facebook fits the scheme of a traditional media company. After all, their main concerns, such as the lack of “gate-keepers” or missing content licences, would still be addressed through this legal classification. However, pleased by a short-term success, politicians might miss out on the chance, if not the urgent need, to finally design a dedicated legal framework for social media platforms. After all, it has been almost ten years since Facebook started its triumphal march. And while there is specific legislation for television and radio, social networks still go without, although they rival traditional media in both social impact and user numbers. Surprisingly, it was Mark Zuckerberg who, in the aftermath of the Trump election and increasing pressure from global legislators, departed from his initial position and pointed out the uniqueness and novelty of social media platforms:

“Facebook is a new kind of platform. It’s not a traditional technology company. It’s not a traditional media company. We don’t write the news. But we know that we do a lot more than just distribute news, and we’re an important part of the public discourse.” Mark Zuckerberg, December 2016

If you don’t want to listen, find out the hard way

It’s not that social networks weren’t offered the chance to collaborate on a voluntary basis. Earlier in 2016, Facebook, Twitter, Google and Microsoft agreed on a code of conduct with the European Commission to remove the majority of reported illegal hate speech within 24 hours. However, only a few months later, a first evaluation showed that the social media giants had reviewed only 40 percent of those requests within the proposed interval. Following up on the report, EU Justice Commissioner Vera Jourova warned the platforms:

“If Facebook, YouTube, Twitter and Microsoft want to convince me and the ministers that the non-legislative approach can work, they will have to act quickly and make a strong effort in the coming months.” Vera Jourova, 2016

German Minister of Justice Heiko Maas (SPD) has been advocating stricter regulation of social networks for many years.

Judging by the latest initiatives of European politicians, the social networks’ efforts do not seem to have been too convincing: Italy’s lower house President Laura Boldrini just recently called again for more compliance. And in March 2017, Heiko Maas rushed ahead with a bold draft law that might drastically toughen regulation for social media platforms.

The “Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken” (roughly: “Act to Improve Law Enforcement in Social Networks”) would impose several obligations on commercial social networks:

  • Quarterly reports on the handling of complaints about illegal content
  • The deletion or blocking of obviously illegal content within 24 hours, and of less obvious cases within 7 days
  • Documentation of all deleted content for evidence purposes
  • The obligation to inform users about the deletion/blocking of their content
  • The immediate deletion/blocking of all reproductions of deleted content
  • Effective measures against the renewed storage of banned content
  • The installation of a legal point of contact in Germany

Any infringement of these obligations can be punished with fines of up to 5 million euros against the responsible individuals and up to 50 million euros against the infringing company itself.

A step in the right direction…

Now, what is one supposed to think of this? Clearly, the German draft addresses the aforementioned issue of non-specific, ill-fitting regulatory frameworks for social networks. From this perspective, it can be seen as a step in the right direction. Indeed, the installation of a national legal point of contact is another welcome and urgently needed modification. Up to now, lawsuits against Facebook and co. have had the bad habit of escalating into international legal marathons. As Facebook users by default submit to Californian law on registration, such a marathon normally starts with a preliminary dispute to establish the responsible jurisdiction, as seen in the famous case concerning “L’Origine du monde”. This practice would be curbed by a legal representation in the respective states. Furthermore, a direct contact person would allow police and judiciary to pursue crimes and infringements more efficiently. In general, this rule could rebalance the sovereignty loss of nation states and make Facebook less untouchable when in conflict with national laws.

However, the serious concerns about the proposed legislation far outweigh its positive effects. Criticism comes from consumer protection associations, the press and the digital economy. Even the “Amadeu Antonio” Foundation, the third-party institution previously partnering with Facebook in Germany to detect and delete hate speech, has joined the critics.

…yet far beyond the target?

There are several points within the text that give rise to concern. First, there is the scope of the draft, which reaches far beyond its target:

Telemedia providers which, with the intent to make profit, operate internet platforms that allow users to exchange arbitrary content with other users, to share it or to make it publicly available (social networks). Platforms offering journalistic and editorial content are not considered social networks under this legislation. (Translated from German)

The range of networks concerned is further narrowed by a minimum threshold of 2 million users. While this definition obviously applies to big players like Facebook, Twitter, Instagram, Pinterest, Snapchat and Youtube, its blurred wording might equally cover messenger services like WhatsApp, Skype and iMessage, cloud services like Dropbox or webmail providers like GMX.

Another problem might result from an extensive use of content filters. According to Facebook, 510,000 comments and 136,000 photos are uploaded to the platform every minute. Given this huge amount of content and the short reaction intervals proposed by the law, the use of filtering algorithms is more or less mandated by design. Similar filters are already applied by Facebook to identify pornographic content or nudity. The problem is that hateful content is far more subtle and only partially detectable through patterns in images or specific keywords. Automated filtering in combination with drastic fines for compliance failures does not only incentivize social networks to apply an overly strict and proactive approach: it actually forces them to do so. The result might be an extensive “overblocking” of legal content, hence a deep incursion into the freedom of speech.
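
To illustrate why such filters tend to both overblock and underblock, consider a minimal keyword-based filter. This is a purely hypothetical sketch: the terms, logic and examples are our own illustrative assumptions, not any platform’s actual system.

```python
# Purely illustrative sketch of a naive keyword filter; the terms and the
# matching logic are hypothetical and do not reflect any real platform.

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder terms

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any blocklisted term."""
    words = {w.strip(".,!?\"").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

# Context is invisible to such a filter:
print(is_flagged("slur_a people should leave"))             # True  -> actual hate speech
print(is_flagged("He called me a slur_a, please report"))   # True  -> counter-speech, overblocked
print(is_flagged("A subtle dog-whistle without keywords"))  # False -> hate speech slips through
```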

A third issue arises from the actual classification of hateful content. As “obviously illegal contents” have to be deleted within 24 hours, there will hardly be any time left to assess them in a proper judicial procedure. As mentioned above, this will likely result in a heavy reliance on filtering algorithms. But even where the decision is left to humans, it still represents a transfer of judicial power to a private institution. For many people this is already a problem per se (for good reasons). Certainly, affected users can theoretically file an appeal, yet this procedure would invert the constitutional principle “nullum crimen, nulla poena sine lege”. Given the potential expense and effort of legal proceedings, it is likely that many users will opt for the path of least resistance, again a bitter defeat for both the freedom of speech and the rule of law.

A good idea, badly executed

Actually, the German draft is a classic example of a good idea that is badly executed. It was almost painful to watch politicians trying with all their force to press such a disruptive phenomenon as social networks into the guise of classic television or print media. It was about time (to say the least) to acknowledge the unique role that social networks play in modern society and to engage them with dedicated legislation and tailor-made regulation. Indeed, what is currently rather a side note of the text might be its most important modification: the installation of national legal representatives might give executive authorities and law enforcement agencies judicial access. Furthermore, it might improve the quality and efficiency of national prosecution within the online realm.

But as explained above, the law also overshoots the target. It entails blatant risks for fundamental rights, namely the freedom of speech and the rule of law. This might all seem somehow acceptable when it concerns xenophobic content, but it is indeed very concerning to see censorship bypassing legal authorities and wandering into the hands of private actors and/or algorithms. In our opinion, these passages have to be reworked. A possible solution could be to impose an algorithmic change: instead of deleting reported content, it could be decoupled from the EdgeRank factors, especially the “post weight”. Thereby, a reported post would not gain further visibility and reach through likes, shares or any other user interaction. Thus, the post would not spread virally until it has been approved or disapproved by a legal body.
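
A minimal sketch of what such a decoupling could look like in a simplified, EdgeRank-style scoring function. The fields, weights and formula are hypothetical assumptions for illustration; Facebook’s actual ranking is not public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    affinity: float     # relationship between viewer and author
    interactions: int   # likes, shares, comments (drives the "post weight")
    time_decay: float   # freshness factor between 0 and 1
    reported: bool = False

def feed_score(post: Post) -> float:
    """Simplified EdgeRank-style score: affinity x weight x decay.

    For reported posts, the interaction-driven weight is frozen at a
    neutral value, so likes and shares no longer amplify reach until a
    legal body has approved or rejected the report."""
    weight = 1.0 if post.reported else 1.0 + post.interactions
    return post.affinity * weight * post.time_decay

viral = Post(affinity=0.8, interactions=500, time_decay=0.9)
print(feed_score(viral))   # high score: the post spreads widely
viral.reported = True
print(feed_score(viral))   # reach capped while the report is under review
```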

Furthermore, the law has to redefine its scope. This might be harder than it sounds. In the age of digitization, change is constant and technological advancement is fast-paced. It is therefore crucial to design a potential legislation as “technology-neutral” as possible in order to keep it sustainable over a longer period of time. The German text tries to do so by framing several key attributes of social networks without going into too much detail or naming specific technologies and algorithms. This is, again, a well-thought-out yet poorly executed idea. An adjustment could, for example, tie the definition to spreading and weighting mechanisms based on content interactions. This would still be fairly neutral in terms of algorithm design, but would exclude, for example, mail providers or cloud services from the list.
