Military AI’s ring of invisibility makes the just man unjust

Few know that the famous film saga of “The Lord of the Rings”, based on the work of the same name by the British writer Tolkien, in which a magic ring grants invisibility to its wearer at the price of corrupting their morality, finds its origin in the myth of the Ring of Gyges recounted by Plato in The Republic. There, a shepherd named Gyges finds just such a magical ring of invisibility and, instead of using it for good, uses it to seduce the queen, kill the king, and take over the kingdom. The myth suggests that human beings tend by nature toward unjust behavior when they feel unpunished, driven by the selfish impulse to take what does not belong to them for their own benefit. This axiom was already posed by the Greek philosopher Glaucon who, though less known than Plato, was (as a didactic note for the reader) both Plato’s brother and an interlocutor in The Republic itself.

Yes, impunity, like a feverish state that alters men’s consciousness, diverts people from acting correctly, since they know they are safe from any external value judgment and, therefore, from all responsibility for their actions. This reaffirms Hobbes’ old thesis that man is a wolf to man, and therefore requires a normative social system (laws) to control his selfish and aggressive impulses. And there is no greater perception of impunity than that offered by an acquired capacity for invisibility over one’s actions. In the field of armed conflict, that capacity for invisibility is granted by Artificial Intelligence (AI) in current wars such as those in Ukraine (1) or Gaza (Palestine), and it therefore fully affects the Ethics of contemporary war and, by extension, the Ethics of International Law regarding the legitimate use of violence by States. This is a topic I will not develop in this article, because I dealt with it in some depth a year ago under the title “Is it Ethical to Create Killer Robots?” (2). This dissertation should therefore be understood as limited to the relationship between the warlike capabilities of AI and their moral implications for those who manage it while perceiving themselves exempt from any sanction for its possible criminal use.

Without intending to list every current war that uses AI, it is worth noting, as a small sample, that in the latest conflict in Gaza alone (which began after the Hamas terrorist attack of October 7) (3), the Israelis, thanks to technological and weapons support from the US, are using almost every type of existing military AI technology against the Palestinian population: Killer Robots, which detect and eliminate targets autonomously; Centaur warfare (by analogy, a human thinking head on a non-human body), where AI makes tactical decisions under human supervision; Minotaur warfare (a non-human thinking head on a human body), where tactical decisions are supervised by the AI itself; and Mosaic warfare, roughly a mixed version of the previous two, in which strategic decisions are optimized by AI under human supervision (4). Today, beyond the AI algorithmic systems that support and make decisions at the tactical and strategic level, this military technology is visible on the battlefield in the form of autonomous weapons, as well as drones, both autonomous and remote-controlled, that act as deadly bombers. In fact, so far in the current Gaza war, 50 percent of Israeli weapons deployed in attacks are AI-guided (5) from a long, safe distance.

It is clear that countries immersed in the logic of war seek to increase their military intelligence ratios, protect their combat forces and especially their soldiers, and maximize their firepower, in order to operate on the ground with a tactical advantage that reaches a level of radical asymmetry over the enemy. In this sense, AI has become the military panacea for countries like Israel with cutting-edge technology, a disruptive factor in the known history of war. Military AI technology allows a high degree of impunity to be achieved through weapons “invisibility”: managed from a safe distance from the combat front, attacks go unnoticed by the targets to be eliminated, by the civilians who become helpless collateral victims, and by internal and external observers alike (Western public opinion included). That is, military AI is the new Ring of Gyges.

However, when a country at war reaches radical asymmetry over its enemy thanks to AI, its magical ring of invisibility, Ethics does not take long to be blown apart. Under the foreseeable feverish effects of a supervening sense of self-impunity among megalomaniac politicians and soldiers on duty, the temptation to cross the line between what is morally fair and what is not is, probabilistically, very high for greedy and vengeful human nature. Without restrictions, man, as Hobbes said, becomes a wolf. As a complementary note, this empirical evidence in the light of History reminds me of the latest killer-drone incident in Nigeria (one of many similar episodes in that country), in which 85 civilians celebrating a Muslim holiday were recently killed (6), an attack the Government described as an error typical of a routine military operation.

Following this line of argument, it is evident that Justice, as an instrumental means that seeks to ensure normative and practical Ethics in human relationships, has just moved to a new level internationally, where it is subject to the technological advantage between countries over these AI rings of invisibility/impunity. It follows that man currently holds a great power that demands great responsibility; otherwise these AI rings will end up corrupting the morality, not to mention the humanity, of their bearers (just as happened to Gollum in The Lord of the Rings, or to the shepherd Gyges in Plato’s myth), to the desperate helplessness of the rest of humanity. Yes, the new AI weapons technology can redefine Justice in the world and, with it, International Law itself, since the latter always ends up being rewritten and imposed by the prevailing military powers.

At this point in the presentation, the relevant question can only be how we can internationally regulate the military use of AI, with the aim of ensuring a humanist Ethics that protects Human Rights and safeguards them from the delirium of men of power driven by political and/or religious fundamentalism. Some will allude to implementing greater human supervision, or increasing the effectiveness of tactical precision, or applying principles of proportionality in the use of force, or even regulating the nature and dimensions of military missions. But the truth is that, under human anthropological logic, the question posed has no effective answer. The reason is clear: in wars between men there is no rule other than destroying the enemy, and in the midst of an AI arms race no country wants to remain defenseless through technological disadvantage. There could be an intermediate solution in the medium and long term, drawn from previous experience with the nuclear arms race between powers: mutual deterrence, at least between blocs of equal strength. We must also take into account the inherent obstacle that human Ethics is not universal but geographical (7), so that what is ethical for some is not ethical for others and vice versa, owing to cultural determinisms.

In the meantime, let us prepare for the invisibility of the AI force to make just men unjust, with impunity for their atrocious acts in the eyes of the rest of the world (to everyone’s derision). Man has discovered the new Ring of Gyges, and those who possess it fall ill with an indelible power that, although it corrupts them morally, they neither want nor are able to give up. It is a new reality in the history of humanity in which philosophers, like contemporary Glaucons, can only bear critical witness, preaching, certainly, from our small personal deserts.

References

(1) Ukraine’s AI drones seek out and attack Russian forces without human supervision. David Hambling. Forbes, October 17, 2023. https://acortar.link/WqJ650

(2) Is it Ethical to Create Killer Robots? Jesús A. Mármol. Medium, December 19, 2022. https://acortar.link/0HqG68

(3) Genocide in Palestine: the Whole (the people) by the Part (the terrorists). Jesús A. Mármol. Bitácora de un Buscador, December 13, 2023. https://acortar.link/Rd0ODJ

(4) AI and the future of war. Paul Lushenko. Bulletin of the Atomic Scientists, November 29, 2023. https://acortar.link/qRC8n9

(5) In an apparent world first, the IDF deployed swarms of drones in the Gaza fighting. Judah Ari Gross. The Times of Israel, July 10, 2021. https://acortar.link/ZAbvFU

(6) Nigeria: The Government orders an investigation after an Army attack that leaves at least 85 dead. Juan Pablo Lucumí. France 24, December 6, 2023. https://acortar.link/QFcQi9

(7) Robotethics, like Ethics of AI, is today an intentional fallacy. Jesús A. Mármol. Medium, December 20, 2022. https://acortar.link/lr6Qa3
