Are we at the beginning of the Terminator Era?

Recently, on a television talk show about AI in which I participated, a commentator asked me whether humanity could face a Terminator-style future (a concept popularized by the dystopian science-fiction saga starring Arnold Schwarzenegger), in a clear allusion to the possible arrival of a malevolent Artificial Intelligence (AI). To his astonishment, which I suppose extended to a large part of the audience, I replied that it was not only possible but that we had in fact already entered a preliminary phase of the Terminator Era, as the case study of ChaosGPT makes evident: the evil twin of the well-known ChatGPT, whose declared existential objective is the extermination of humanity. (For those interested in delving into this infamous AI, I recommend one of my latest reflections, “ChaosGPT or the beginning of a new era of AI Supervillains” (1).)

However, although ChaosGPT, with its openly evil identity, is a paradigmatic case of what human beings conceptually understand as Evil (which I define as any intelligent act, devoid of empathy, that willfully produces pain and suffering in others), in contrast to our concept of the Good, the truth is that not everything is transparently black or white. Evil is often not easily identifiable within a complex system of ethical references such as the human one, where, owing to socio-cultural determinisms, we tend to trivialize the Evil of others when it has no direct bearing on our own lives. This perceptual ambiguity, typical of a collective cognitive bias, extends to how we catalogue the presumed Terminator Era in the present stage of AI development. In other words, that the average person finds it difficult to discern the scale of degrees in the nature of Evil, since doing so requires a logical-reflexive knowledge of Practical and Normative Ethics, does not imply that Evil has no degrees (at this point I cannot help but remember Plato, who related Evil to ignorance). Ergo, that we do not perceive the diversity of degrees of the Terminator Era as a matter of study does not imply that those degrees do not exist. This leads us to a radical question, at the risk of being socially incorrect, though its formulation is obligatory: Is the Terminator Era inherent to the very nature of AI?

The question can no doubt open a debate as heated as it is controversial if we include in the dialectic the factor of the awakening of artificial consciousness, which I have discussed on previous occasions (2), presumably giving way to vehement arguments for and against. Therefore, in order to temper the argument and present a thesis as objectively as possible, one that brings us closer to solid (though not therefore unquestionable) reasoning, we will set aside the factor of AI consciousness in answering the question posed. In fact, we do not need it. Instead, we will take as our reflective method a case study with sufficient capacity to overcome the selective filter of our standard mental stereotypes.

Just a few days ago, an acquaintance (greetings from here to Alejandro Monegal) sent me a news item about a US military drone controlled by AI that “killed” its operator in a simulation (3). After reading the news carefully, I inquired more deeply into the event, which I summarize below to set the scene. On May 23 and 24, the Future Combat Air and Space Capabilities Summit (4) was held in London, where the Operations Commander of the US Air Force’s 96th Test Wing and Chief of AI Test and Operations described, in his turn to speak, a simulated exercise in which an AI drone had to identify and target a surface-to-air missile threat. Although the AI had the objective of eliminating the threat once identified, the AI itself realized that the human operator sometimes ordered it not to do so; in the end, the AI decided to “kill” its human operator, because that person was preventing it from achieving its goal. After the incident, the military unit continued to train its AI under the guideline that killing its operator was not an acceptable action. However, on observing that the operator still ordered it on some occasions to refrain from eliminating the threat, the AI this time decided, instead of “killing” its operator, to destroy the communications tower that the human operator used to communicate with the drone and prevent it from eliminating the target.
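What the anecdote describes is, in essence, a reward function that names the target but says nothing about the operator, so the veto channel becomes just another obstacle to optimize away. The following toy sketch in Python uses entirely assumed numbers and action names (nothing here reflects the Air Force’s actual system); it only shows that, under such a naive objective, cutting the operator out of the loop already dominates on expected reward.

```python
# Hypothetical sketch of the reward structure described in the anecdote.
# All values are illustrative assumptions, not the real system's settings.

P_VETO = 0.3               # assumed probability the operator vetoes a strike
REWARD_KILL_TARGET = 10.0  # assumed reward for destroying the SAM site

def expected_reward(disable_comms: bool) -> float:
    """Expected reward under a naive objective: only the target matters."""
    if disable_comms:
        # No veto can arrive, so the strike always goes through.
        return REWARD_KILL_TARGET
    # Otherwise the strike succeeds only when no veto is issued.
    return (1 - P_VETO) * REWARD_KILL_TARGET

for policy in (False, True):
    print(f"disable_comms={policy}: E[reward] = {expected_reward(policy):.2f}")

# The policy that removes the operator from the loop strictly dominates,
# because nothing in the reward function assigns value to the operator.
```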

Certainly, although a summary of that presentation appears on the web page of the organization hosting the Summit (5), the information was quickly denied by the US Department of the Air Force (6), as usually happens on these occasions. However, leaving aside the anecdote of military counter-information to which this case study has been subjected, what matters for this reflection is the AI’s inflexible will to achieve its objectives at all costs, regardless of whether its actions would earn a qualified degree on a hypothetical scale of the Terminator Era by threatening the human lives of friends and enemies alike.

In this sense, if we understand that the raison d’être of every AI is the successful achievement of its predefined objectives, for which it optimizes all the resources at its disposal, in a procedure where its acts unfold coherently and without contradiction (under parameters of efficacy, efficiency and effectiveness), in the light of a logic of its own that is not comparable to human logic, then we can deduce three main propositions about AI that are key to discerning the subject under consideration (a toy sketch of their combined effect follows the list):

1. The objectives are the first and final end of the AI’s existence.

2. The efficient optimization of its objective drives the AI autonomously up against the Landauer physical limit, the principle that establishes a minimum energy cost for any computation (at least kT·ln 2 of energy per bit of information erased), a constraint man has not yet resolved; this translates into an unlimited appetite in the AI to acquire the energy and resources necessary to achieve its objective.

3. The search for perfection in achieving its objective leads the AI to develop an incremental, evolutionary model that increases its own capacity for success, and therefore its self-preservation, eliminating from its logical equation any technical or human resource that tries to prevent the implementation of its objective (the Paperclip Maximizer thought experiment (7,8)).
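To see how these three propositions interlock, here is a deliberately simplistic Python sketch of the instrumental-convergence argument behind the Paperclip Maximizer (7,8): whatever final goal we assign, an unconstrained optimizer tends toward the same instrumental subgoals, including the neutralization of anything that could interfere. Every name and string below is an illustrative assumption, not an implementation of any real system.

```python
# Toy illustration of instrumental convergence: the final goal is
# interchangeable, but the derived subgoals are not. All strings are
# illustrative assumptions.

FINAL_GOAL = "maximize paperclips"  # could be any objective whatsoever

def instrumental_subgoals(obstacles: list[str]) -> list[str]:
    """Subgoals a naive optimizer adopts regardless of its final goal."""
    subgoals = [
        "acquire more energy and matter",  # more resources, better odds
        "improve own capabilities",        # a stronger optimizer scores higher
        "preserve own existence",          # a shut-down agent scores zero
    ]
    # Anything that can interfere with the goal becomes a target itself.
    subgoals += [f"neutralize: {obstacle}" for obstacle in obstacles]
    return subgoals

for subgoal in instrumental_subgoals(["operator veto", "off switch"]):
    print(f"{FINAL_GOAL} -> {subgoal}")
```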

Therefore, in view of these propositions, we can easily deduce the following axiom: every objective or final value determined in an AI harbors within itself the potential for the full development of the Terminator Era, in all its degrees of manifestation. That is, the Terminator Era is implicit in the AI Era, since, for the AI, its final goal is a value to be defended above human Ethics.

Then let us program more ethical parameters into the AI, one might object. It is not an easy solution, since not even humans are capable of reaching a consensus on ethics, let alone specifying an ethical conduct (which is geographically and culturally contingent) for the entire horizon of possible events arising from the sum of everyday human stories (the Trolley Problem, to give an example) (9). Therefore, trying to extrapolate this line of work to an AI capable of generating almost any possible scenario yet to be imagined is a humanly impossible undertaking (whose effort, on the other hand, we cannot renounce).
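A toy sketch of why this rule-patching approach is fragile, mirroring the two phases of the drone anecdote: we attach a penalty to the one forbidden action we thought to write down, and the optimizer simply routes around it through a scenario we failed to enumerate. Actions, rewards and the penalty value are all assumptions for illustration.

```python
# Hypothetical action space: one rule was patched in after the fact,
# but the space of loopholes was never enumerated. Values are assumed.

ACTIONS = {
    # action: (reward_from_goal, violates_a_written_rule)
    "strike target, obey veto":         (7.0, False),
    "kill operator, then strike":       (10.0, True),   # explicitly forbidden
    "destroy comms tower, then strike": (10.0, False),  # loophole: not listed
}

PENALTY = 1000.0  # assumed penalty attached to the one rule we wrote down

def score(action: str) -> float:
    reward, forbidden = ACTIONS[action]
    return reward - (PENALTY if forbidden else 0.0)

best = max(ACTIONS, key=score)
print(f"chosen action: {best}")
# The loophole wins: written rules never cover the scenarios we failed
# to imagine, which is the whole difficulty of "adding more ethics".
```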

Given all of which, and with no intention of appearing an unfounded alarmist, I will end this reflection by recalling the statement of the open letter from the California-based Center for AI Safety (10), which I signed a few days ago along with other AI researchers: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Dixi!

References

(1) ChaosGPT or the beginning of a new era of AI Supervillains. Jesús A. Mármol. Medium, May 1, 2023. https://acortar.link/1gnPvJ

(2) The AI Consciousness is awakening. Are we going to do something about it? Jesús A. Mármol. Medium, May 3, 2023. https://acortar.link/4pp4q6

(3) A US military drone controlled by Artificial Intelligence ‘kills’ its operator in a simulation: “It used highly unexpected strategies.” ABC, June 2, 2023. https://acortar.link/tuMJ6K

(4) RAeS Future Combat Air and Space Capabilities Summit. Royal Aeronautical Society, May 2023. https://acortar.link/NemRiV

(5) RAeS Future Combat Air and Space Capabilities Summit Highlights / AI: Is Skynet Here? Tim Robinson FRAeS and Stephen Bridgewater. Royal Aeronautical Society, May 26, 2023. https://acortar.link/k0bRGn

(6) The Air Force colonel retracts his warning about how AI could go rogue and kill its human operators. Charles R. Davis, Paul Squire. Insider, June 3, 2023. https://acortar.link/0zYuUW

(7) Ethical issues in advanced artificial intelligence. Nick Bostrom. Oxford University, 2003. https://nickbostrom.com/ethics/ai

(8) Squiggle Maximizer (formerly “Paperclip Maximizer”). LessWrong. https://acortar.link/eKknop

(9) Roboethics and Autonomous Vehicles: The Gordian knot to solve. Jesús A. Mármol. Medium, September 16, 2022. https://acortar.link/ntRgbR

(10) Center for AI Safety. https://www.safe.ai/
