Tired of being ripped off by AI companies, artists are booby-trapping their work

Enrique Dans
3 min read · Dec 19, 2023


IMAGE: A white label with a skull indicating poisonous content that reads “Deadly Nightshade”, a preparation made from the plant Atropa belladonna (Noorataijala, Pixabay)

It wasn’t long after the appearance of DALL-E, followed by other generative image models such as Midjourney and Stable Diffusion, that a big problem became apparent: the companies behind them had accumulated huge collections of images labeled with descriptions, and then trained their algorithms on them.

Where did they acquire these huge collections of images? By scraping websites, mostly image repositories. Getty Images’ lawsuit against Stability AI, the maker of Stable Diffusion, made the origin of the images so obvious that in many cases the generated images contained distorted versions of Getty’s watermark, because the algorithm had interpreted it as just another part of the image.

The legal problem was obvious: we have spent years saying that if something is public on the web, it can be subject to scraping. There are legal precedents of all kinds that affirm someone’s right to go to a web page and take all of its content for whatever purpose they see fit. Because of its complexity, the case in question…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)