AI’s Lone Banana Problem: Art, Ontology, and the Unseen Threat
AI’s Encroachment on the Plane of Immanence: A Threat to Human Ontology
A friend who works as an ontologist at a company handling large data sets posted a link to this research paper on generative AI’s inability to create an image of a single banana. The paper is ‘What the Lone Banana Problem Reveals About The Nature of Generative AI’ by Kai Riemer and Sandra Peter.
Kai Riemer sums up the ‘problem’ as follows:
Have you heard of the ‘lone banana problem’? And what it reveals about generative AI?
The lone banana problem refers to the seeming inability of even the latest image generators (such as Midjourney or Leonardo.ai) to create an image of a single, lone banana. Instead, what you get is bunches, or at least two bananas.
Why is this interesting? Well, it reveals an important truth about generative AI models — that these models represent the world (or more precisely their data) in a way that is very different from how we understand the world. — Source
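Riemer’s point about models representing their data rather than the world can be made concrete with a toy sketch. If the captions a model trains on overwhelmingly show bananas in bunches, the word ‘banana’ becomes statistically entangled with plurality, and sampling regresses toward the dominant pattern. The captions below are invented purely for illustration, not drawn from any real training corpus:

```python
from collections import Counter

# Hypothetical toy captions standing in for a training corpus.
# The skew mirrors the paper's observation: bananas rarely
# appear alone in real-world imagery and its descriptions.
captions = [
    "a bunch of bananas on a table",
    "bananas hanging in a market stall",
    "a bunch of ripe bananas",
    "two bananas next to a bowl",
    "a single banana on a plate",
]

# Count how often 'banana' appears in a plural/bunch context
# versus alone.
contexts = Counter()
for caption in captions:
    if "bananas" in caption or "bunch" in caption:
        contexts["plural"] += 1
    else:
        contexts["single"] += 1

print(contexts)  # plural contexts dominate even this tiny corpus
```

A model fitting such a distribution does not store a concept of ‘one banana’ to be retrieved on demand; it reproduces the associations in its data, which is why prompting for a lone banana tends to yield a bunch.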
The paper argues for a paradigm shift in how we perceive generative AI models like ChatGPT and Midjourney. Rather than viewing them as conventional information systems that accurately depict the ‘real’…