When it comes to responsibility for online content, where does the buck stop?

Enrique Dans

Jan 22, 2023


IMAGE: A person holding a smartphone in black and white (Pankeyson Photos / Pixabay)

Gonzalez v. Google LLC, now before the U.S. Supreme Court, looks set to be an acid test of technology companies’ liability for the content on their platforms. Several companies have submitted briefs to the court asking it to uphold the protections of Section 230, which shields online platforms from liability for content posted by their users, warning that many of them would go out of business otherwise.

Gonzalez v. Google pits the relatives of an American woman killed in the November 2015 terrorist attacks in Paris against YouTube, which they accuse of helping radicalize the perpetrators through its recommendation algorithm. The Supreme Court is concurrently reviewing a similar case, Twitter, Inc. v. Taamneh, a lawsuit brought against Twitter, Google and Facebook by the family of Jordanian citizen Nawras Alassaf, who was killed in a 2017 ISIS attack in Istanbul; the suit accuses the companies of failing to control terrorist content on their platforms.

Platforms’ responsibility for the content they host or recommend through their algorithms is under the spotlight. On the one hand, it makes sense to give platforms some leeway, provided they develop mechanisms to remove harmful or dangerous material, whether through user flagging systems or active monitoring.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)