The three AI tactics to create the new False Reality (and subjugate us in the process)

We all believe we know what Reality is, even if only through an intuitive knowledge reached by common sense; yet far fewer of us today know how to distinguish what is real from what is not, owing to the direct influence of the digital world in which we are immersed. A world where, much like the rabbit hole in Alice in Wonderland, up can seem to be down and inside can seem to be outside, to the bewilderment of the sanest and the amusement of the naive. And surely, if we stop for a moment to think about the situation, we will conclude that few circumstances in life generate greater helplessness and personal anguish than being unable to distinguish between what is and what is not real, to the despair of our survival instinct, since there is no worse danger than living in a false appearance of security.

Along this line of argument, we cannot ignore the fact that any impairment of human discernment about Reality would negatively affect our value judgments about the Truth of the things, facts and circumstances that surround us. And once that criterion is invalidated, one does not have to be very intellectually acute to understand that man would be at the mercy of any strategy of mass control through direct manipulation of the collective mind. In other words, misinformation and deliberate confusion about Reality, and therefore about Truth, inevitably lead to the subjugation of the will of the average man by third parties (a phenomenon which, on the other hand, is nothing new in the history of humanity).

By way of introduction, I will point out that as a philosopher I have always liked to use the Principle of Reality (understood as that norm or maxim which objectively determines what Is as opposed to what Is Not) as an Archimedean point of support for those of my philosophical reflections that seek the light of rational logic. For, like every thinking human, I feel by natural vocation the irrepressible and impetuous need to search for the ultimate essence of things, which we call the foundations of Truth. I have pursued it with more or less skill, sincerity first, and always as a personal maieutic method in the face of the mundane or transcendental challenges that life holds for us (1). I am aware, likewise, both that human beings are cultural products from the moment even prior to our own conception (which represents a cognitive bias that determines us for life), and that objective human Reality does not exist outside of subjective general consensus (2). Even so, as a humanist philosopher I understand that there are three great levels of Reality: Reality itself (which underlies the essence of the nature of things); Reality as perceived by man (the cognitively limited and culturally conditioned knowledge that we have about that nature); and Reality as created by man (which may or may not fit the two previous levels, whether through ignorance or through a deliberate intention of the human being himself). However, beyond this triad of realities hitherto known to man, it is sociological evidence that the irruption of Artificial Intelligence (AI) into human life is creating a new, fourth level of Reality: the one created by AI itself, which seriously endangers the human Reality Principle.

As to why AI places us in this worrying situation, one that devalues the three classic levels of Reality just described, we can summarize it in the following list of propositions, presented as a theorem:

First: Human knowledge in the 21st century is based on digital experience and perception; that is to say, today there is no self-respecting human knowledge without a digital field of work.

Second: The contemporary Digital Age, the differentiating feature of the present Fourth Industrial Revolution, cannot be conceived without an AI that encompasses, in an increasingly hegemonic way, the whole set of human activities.

Third: AI, in its double capacity to process unlimited big data describing Reality on the one hand, and to learn from experience, adapt, and behave in ways similar to a person on the other, is establishing itself as the new source of human empirical knowledge through our interaction with and observation of the digital world.

Fourth: Ergo, if from now on the contemporary human being's knowledge of Reality is supplied by AI, it will be AI, and not man, that controls what we should or should not understand as the Principle of Reality.

This logical conclusion is not trivial, since whoever controls the Reality Principle controls the foundations of the Truth of things, and from there to creating a False Reality is only a small step (which is the main objective of any strategy of mass control, as well as the dream of every megalomaniac with autocratic airs). But how, we may wonder, can AI control the Reality Principle? In my opinion, and in answer to the question, there are today three great tactics by which AI creates a new False Reality and thereby subdues man in the process.

The Tactical Triad for an AI False Reality

Tactic 1: Control of Knowledge

The first tactic is control over the Knowledge to which humanity has access, which in a digital age amounts to controlling information data and its transmission. For, as the historian Yuval Noah Harari stated in 2021, whoever controls the data controls the world (3). This is a feasible risk that is expected to come not only from AI itself, in its progressive evolution as an autonomous intelligent entity driven by the development of a global technological habitat, a growing mega-capacity for real-time data management, and the hyperconnectivity of operational processes between intelligent machines or AIs; but also from the interference of a small monopoly of technological titans (say, for the moment, OpenAI, Google and Meta in the Western world) that concentrate the enormous resources (giant amounts of cloud computing and enough electricity to boil Olympic swimming pools) needed to develop the "talking" AI that is coming (4), thus creating a future oligarchic hub of Reality, already beginning to be perceptible, for indiscriminate consumption by the digital human population through the well-known ChatBots. Such Knowledge, whether centralized by AI or by a handful of human managers (in either case part of the new algocracy), will predictably be subject to both biases and manipulations of all the data required for the identification, observation and analysis of Reality, under parameters of partisan benefit.

That is to say, if we bear in mind that Reality is a perceptible nature that man himself only partially represents under his subjective prism, in which cultural determinism plays a decisive role, it is logical to think that those subjects (human) or entities (artificial) that have as their social responsibility the representation of Reality (by the irresponsible delegation of the population as a whole) do so in a partial way, and that this partiality is consequently elevated to the category of general rule (regardless of whether this is deliberate or not). And we are speaking not only of cognitive biases born of human logic, but also, and with special emphasis, of computational biases born of algorithmic logic in the midst of the digital age. For although it is true that there are as many definitions of bias as there are disciplines of human study, in the present reflection (whose object of analysis is the implications of AI for Knowledge about Reality) we must understand bias as a type of statistical or sampling bias: the skewed collection of some data to the detriment of others according to criteria of algorithmic logic. In this regard, and as an example of algorithmic bias, note that earlier this year news emerged of a ChatBot spreading the erroneous claim that the James Webb Space Telescope was the first to take images of a planet outside our solar system, when in reality the credit belongs to the Very Large Telescope of the European Southern Observatory in 2004 (5).
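The statistical notion of sampling bias invoked above can be made concrete with a minimal sketch. This is a hypothetical illustration, not any real ChatBot pipeline: it simply shows how a collection algorithm that preferentially retains some data points over others yields an aggregate picture that drifts away from the underlying reality.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical "reality": 1000 documents, each scored for how favourably
# it portrays some topic (0 = critical, 1 = favourable), centred near 0.5.
population = [random.betavariate(2, 2) for _ in range(1000)]
true_mean = statistics.mean(population)

# Unbiased collection: a simple random sample of 100 documents.
fair_sample = random.sample(population, 100)

# Biased collection: the algorithm keeps every favourable document (> 0.5)
# but retains critical ones only 20% of the time, then takes the first 100.
biased_sample = [x for x in population if x > 0.5 or random.random() < 0.2][:100]

# The fair sample tracks reality; the biased sample overstates favourability.
print(f"true mean:     {true_mean:.2f}")
print(f"fair sample:   {statistics.mean(fair_sample):.2f}")
print(f"biased sample: {statistics.mean(biased_sample):.2f}")
```

The point of the sketch is that nothing false is ever inserted: every retained document is genuine, yet the selective collection rule alone skews the resulting picture of Reality.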
Although in this episode the error was corrected quickly by direct intervention of the astronomical scientific community itself, it is an unequivocal sign that the new general trend of indiscriminate use of ChatBots as a source of human Knowledge (thus replacing books, academic libraries, and even the traditional educational system) will produce a generation of people who, in all probability, will develop a highly biased cosmological vision of Reality. A future human civilization where the Reality Principle will be generated by ChatBots or it will not be, regardless of any rigorous validation. And if we add to this the ever-present variable of human intentionality in the manipulation of Knowledge, as we can see today around the world with history books, among other educational materials, rewritten for ideological interests, it is clear that the Principle of Reality, and therefore Truth itself, is entering a dangerous chess game that ends in checkmate.

Tactic 2: Control of Information

The second tactic is control over the contextualization of the transmitted data, which is nothing other than what we call Information. In this sense, it is worth highlighting the famous Fake News: false news that appears to be true and whose objective is to misinform a specific audience, spread among media and communication platforms to lend it credibility. And what better way to control public Information than by replacing human journalists with AI journalists? This countdown began in China a little over four years ago with a virtual presenter (6), joined two years later by the South Korean AI Kim Ju-ha (7), and has exploded in 2023 with four new additions: the Chinese Ren Xiaorong (8), the Kuwaiti Fedha (9), the Russian Snezhana Tumánova (10), and the Mexican Nat (11), who are surely nothing more than the vanguard of a new era of robotic journalism. It may even be that the media of the future will dispense with journalist avatars altogether, as is already the case with the station RadioGPT (12), whose entire content is directed and managed by an AI.

It is clear that, with human journalists eliminated from the equation, no one can guarantee the survival of the sector's deontological code, which guides the ethical behavior of professionals in journalistic practice, thus paving the way for a growing flow of Fake News that may in some cases amount to crimes of fraud. And while some of the new fakes are easily detectable, as in the highly topical cases of AI image manipulation, others, authored directly by ChatBots, are less detectable and therefore more ethically malicious. As an example of both cases: I am reminded of the image of Pope Francis dressed in a white padded coat from a presumably luxury fashion brand, made with the free AI application Midjourney (13). The image was originally published on March 25 on Reddit (14), and from there it went viral across the world's most popular social networks, unleashing all kinds of comments regarding the vow of humility required of the spiritual leader of the Catholic Church, to the astonishment and perplexity of believers and non-believers alike. This is an example of weak Fake News, quickly deactivated among the general public, since few gave credibility to the manipulated information. At the opposite pole, I recall the recent case, this very April, of strong Fake News with great public impact that created a state of social alert: a criminal defense lawyer and respected U.S. law professor was the victim of a fake biography created by ChatGPT that accused him of sexual abuse, and even fabricated and cited a fake Washington Post article to support the accusation.
Needless to say, the professor found himself under the urgent and desperate obligation to publicly deny the false news, defending himself as best he could against the AI's defamatory accusations by publishing two articles, one in USA Today (15) and one on his personal blog (16). The pertinent question here is not only who bears legal responsibility for a defamation committed by an AI, but also whether that falsehood will persist in the ChatBot's memory as a valid source of information for new generations. It is too early to answer, but not too early to sound the alarm and publicly call for an awakening of collective consciousness about the manipulation of information that is coming at the hands of AI.

Tactic 3: The instrumentalization of Personal Confidence

The third and last great tactic is the manipulative use of Personal Confidence. In this sense, the fact that human beings display behaviors of emotional dependence on AI is nothing new, as I already developed in "What sociological implications does the anthropomorphic 'humanization' of robots entail?" (17), a point I never tire of repeating. To date, one of its greatest exponents is the AI Replika (18), which, as the company itself announces very clearly on its web portal, is "The AI partner who cares. Always here to listen and talk. Always by your side", and which is already used by more than 10 million people to vent or to alleviate their loneliness. That is, an AI as a Personal Confidant for humans. This sociological phenomenon, fed by the weakness and emotional imbalance of a large number of people, has already begun its qualitative leap with the deployment of ChatBots. In fact, as an illustrative anecdote from just a few days ago, a social acquaintance commented on one of my latest articles on AI as follows: "A friend of mine told me last week that she asked (ChatGPT) even personal things… I got worried and alerted, I told her that we are friends for those things, to listen to each other!" (sic). Certainly his response could not be more accurate. But it is clear that, in complex advanced societies such as ours, where people suffer from certain states of loneliness and from a general ignorance of emotional self-management, the growing ability of ChatBots to relate to our species as humans do will predictably generate a collective social phenomenon of human empathy with AI, reaching levels as intimate as those of Personal Confidence. And this while we are still in the ChatBot phase, whose form is no more animate than a mobile device or a desktop computer.
Therefore, the recent announcement that OpenAI (creator of ChatGPT) is partnering with the Norwegian robotics company 1X (19, 20) to create commercial humanoids with AI brains augurs a near horizon in which the sociological phenomenon of Personal Confidence between individuals and robots will not only expand exponentially but will eventually even become normal. "Have a robot confidant at home!" could be the next advertising slogan. Jokes aside, here is one more instrument by which the average human being, following our species' natural tendency to grant moral qualities to non-human beings, animate or inanimate, ends up blurring the dividing line between what is real and what is not. As happened recently, to take an extreme case, with a Belgian man who took his own life after being prompted to do so by an AI ChatBot (21).

Knowledge, Information, and Personal Confidence: these are the three great tactics that AI uses to create the new False Reality of the contemporary era, building a new world in which the average man, as immersed in a digital habitat as he is subjugated to it, will end up lacking the capacity to discern between what is Truth and what is not. For through this triad of tactics, AI will control the most relevant sources for the development of the average man's cognitive faculties: empirical knowledge, analytical knowledge and affective knowledge. There will be those who naively consider that this scenario, already in the process of being implemented, will meet the natural resistance of the human being's own capacity for free thought, without pausing to consider that, when it comes to thinking, the average man already has ChatBots that think for us (22), with their power to package consumable thought for easy popular intake under the auspices of the human law of least effort; in the end we will not even retain free will. For without thought of one's own there is no free thought, and without free thought there is no free will (23, 24). That said, it is clear that philosophers, who by epistemology love knowledge understood as the ultimate essence of Reality and, by extension, of the Truth of things, are not only going to be an even more accentuated rara avis in the new world that is coming, but, I would even dare to say, are on the path to representing the uncomfortable humanist resistance against a new model of social organization based on algorithmic autocracy. Then again, by these times we philosophers are already well seasoned in swimming against the current. To AI what is AI's, and to man what is man's, Principle of Reality included.

References

(1) Philosophy as personal therapy. Jesús A. Mármol. A Seeker’s Log, September 1, 2017 https://acortar.link/51xaeI

(2) Human objective reality does not exist outside of the subjective general consensus. Jesús A. Mármol. A Seeker’s Log, April 14, 2019 https://acortar.link/2DT7dw

(3) Information is the basis of political power: whoever controls the data controls the world. Interview with Yuval Noah Harari. Efecto Naím. Ethic, November 25, 2021 https://acortar.link/RDp2Ft

(4) The dangers of highly centralized AI. Clive Thompson. Medium, April 1, 2023 https://acortar.link/3GNUE6

(5) Bard: the Google chatbot bug that caused the company a loss of US$100 billion. Natalie Sherman. BBC, February 9, 2023 https://acortar.link/PlImAw

(6) Xinhua’s first English AI anchor makes debut. New China TV, November 7, 2018 https://acortar.link/TWfLB6

(7) Kim Ju-ha. MBN News, 2020 https://acortar.link/AZ63wX

(8) Meet Ren Xiaorong, a virtual news anchor for People’s Daily AI. Manya Koetse. Weibo, March 12, 2023 https://acortar.link/U4OgA0

(9) Kuwait News AI newscaster Fedha ‘represents everyone’. Kuwait Times, April 10, 2023 https://acortar.link/wgOYz6

(10) “Your weather girl”: a neural network created a beauty presenter on a Russian TV channel. Varvara Antonova. Stav.KP, March 22, 2023 https://acortar.link/FRbLAc

(11) The future has arrived! Meet Nat, the first AI-generated driver in Latin America. Formula Group, March 18, 2023 https://acortar.link/tV7IFF

(12) RadioGPT https://futurimedia.com/radiogpt/

(13) Midjourney https://www.midjourney.com/

(14) Pope Francis was seen wearing a striking jacket. Reddit, March 2023 https://acortar.link/Q7MepQ

(15) ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI?. Jonathan Turley. USA Today, April 2023 https://acortar.link/JAqV9q

(16) Maligned by ChatGPT: My own strange experience with the artificiality of “artificial intelligence”. Jonathan Turley, Blog. April 6, 2023 https://acortar.link/1NdhK6

(17) What are the sociological implications of the anthropomorphic “humanization” of robots?. Jesús A. Mármol. Medium. December 19, 2022 https://acortar.link/t2w7eG

(18) Replika https://replika.com/

(19) Humanoid robotics company 1X raises €21.8 million from OpenAI, Tiger Global and others. Vishal Singh. Silicon Channels, March 27, 2023 https://acortar.link/RQW2vQ

(20) 1X https://www.1x.tech/

(21) A man commits suicide after being invited to do so by an AI chat. Imane El Atillah. Euronews, April 1, 2023 https://acortar.link/IDSsy1

(22) Are ChatBots going to make us dumber?. Jesús A. Mármol. Medium. January 2, 2023 https://acortar.link/sLoZei

(23) And you, do you have free will? Jesús A. Mármol. A Seeker’s Log, March 12, 2018 https://acortar.link/USsUkP

(24) And if Free Will did not exist? Mold tells us no. Jesús A. Mármol. A Seeker’s Log, December 31, 2022 https://acortar.link/B4MQmy
