I feel weird every time I hear these narratives about AI that create a sense of danger or insecurity about our ability to discern ‘fake’ (whatever that means) digital or printed information (images or anything else) from ‘authentic’ (again, whatever that means) information. I genuinely don’t care much about this, even though it’s a controversy that stretches back to the invention of photography, long before recent advancements in AI.
I don’t care much, particularly because I’ve never given more importance or credibility to a piece of information just because a person articulated it with their voice or I saw them physically moving on a screen. On the contrary, I tend to attribute much more value to ‘things’ (knowledge, stories, announcements, contracts…) that are written, preferably in books and literature whose worth has been validated by time (when something is printed, or digitally published, and endures), by culture (when a book is translated, or re-edited), and by academia (when a text is used in education). This notion of the primacy of the written word over oral tradition also stretches far back in time, long before AI existed, to the period known as Protohistory.
For that reason, the inherent relevance I attribute to a TikTok video of a real person recorded saying anything is the same as what I attribute to a hyperrealistic AI-generated model of a person talking, posted on TikTok: zero. The value lies in the information transmitted by humans, not in the visual representation of a human. Furthermore, the medium that transmits that kind of value is language, not a computer screen.
In my opinion, generative AI is not a problem for humanity. A big problem, though, is cultural and intellectual decadence.