The Creeping Ouroboros

Reaction Paper #5 for Artificial Intelligence Law and Policy

ys
trialnerr0r
Apr 22, 2019

To achieve autonomy in weapon systems, artificial intelligence has been leveraged extensively for machine vision, targeting systems, and more. The growing sophistication of smaller automated components brings us closer to full autonomy, but the advancement of autonomous weapon systems (AWS) is also controversial, for weapons are directly linked to wars and conflicts, and neither is something we welcome or wish to happen. One of the most discussed topics in the AI-IHL sphere is what attitude should be adopted toward autonomous weapons: should we ban their use completely, or should the use be allowed but regulated? The remedies suggested are mostly based on rules of law, if not other related humanitarian law. Seeing all those “legal” approaches makes me wonder why the focus is on legal remedies. Why are we not talking as much about ethics, something higher and holier, in this specific area, where more people’s lives are at stake?

Why do we fight wars?

To respond to these questions, I first try to understand why we fight wars. Some say “wars were fought for fun and profit”[1]; along the same line, it is easy to think of reasons why we fight: territorial gain, profit, religion, defense, etc. Superficially, profit, religion, and defense are all different reasons, but they are the same if assessed from the “forced obedience” aspect. Entities with power can force others into surrendering land, changing religions, or doing anything else they desire. But why do we surrender? Why do wars have such influence on nations? Here is a more economic evaluation: wars consume enormous resources, and nations surrender because they cannot afford the destruction wars cause; they exchange peace for their counterparts’ demands because the latter cost them less. Of course, a nation could choose to continue the war, sacrificing all of its civilians in pursuit of victory, but when all bridges are burnt and lives are the only commodity left, rulers, who are also human, would very rarely choose to continue. So why fight wars? It is a brutal yet simple way to exploit the defeated, since they have no other card left to play.[2]

War times, when laws and ethics align

Wars are complex, yet the core of international humanitarian law (IHL), such as the Geneva Conventions and their Additional Protocols, is simple. The innocent, such as civilians, should not be involved, so we have the principle of distinction[3]; collateral damage should be considered, thus the idea of proportionality[4] is written down. Besides minimizing the costs wars bring, IHL also strives to preserve the belief in humanity: it bans treachery, and any misuse of the emblems of the ICRC or of medical services is prohibited. To me, IHL is oddly the most humane set of international law (even compared with other conventions that have “rights” in their titles). It seems counterintuitive at first to have the most humane rules govern the most chaotic times. But then I realized: maybe it is precisely in the killing that we need to keep the sense of the human in us, which is why compulsory rules like IHL embody such an amount of humanity. Only by reminding ourselves that our enemies are no less than us can we be empathetic and show the basic decency owed to an equal; otherwise we come closer to whatever we don’t wish to be, killer robots maybe. Morality is codified into law for extreme situations, e.g. IHL; law and ethics align. Therefore, we appear to focus on legal remedies, yet de facto we are already embodying the ethical rules. So here we are again, in the middle of ethics and laws.

The ouroboros we cannot seem to escape from

AI in the war zone seems new, yet I would argue that the issues we face are not. The debate over killer robots is the same as the debate over whether machines should ever be given the chance to decide over human lives. Whether AWS should be allowed in war zones repeats the question of whether AI should be allowed in high-stakes situations. The chain of command being disrupted by AI echoes algorithmic accountability, except this time accountability carries even more weight. At the same time, we still face the old hardships of implementing IHL, such as the difficulty of correctly discriminating between combatants and non-combatants, of determining whether attacks are excessive, and of assessing whether we can afford the collateral damage. Things have not changed much; we are only now in a more extreme situation.

AI does not necessarily change existing structures; to me, it is its ability to amplify the nature of whatever it operates on that makes everything hard, thereby producing the changes. Proportionality and discrimination have always been major issues for IHL participants; AI only shines a light on all the details that had been left in the shadows under human operation. We leverage AI to tackle complex matters that cannot otherwise be solved; AI does generate satisfying results, but now we don’t understand how AI achieves them. We then resort to AI (not necessarily the same one) in the hope of resolving the complexity again.

Trying to make sense of one kind of black magic with more black magic creates the ouroboros we cannot seem to escape. An ouroboros is a snake that bites its own tail; since it never ends, it symbolizes forever. It is not inherently good or bad; it just happens to represent a situation where two ideas feed on each other quite well. Usually forever means steadiness, as we long for the idea of infinity, but when the ouroboros creeping into the AI sphere feeds on complexity and opaqueness, its presence is not as welcome.

“Stop using black-box AI in high-stakes areas” might be the one sword that could cut through this Gordian-knot-like ouroboros, yet it is almost impossible, since we will not be able to summon the consensus to make the call for the cut. We are extra careful (or even hesitant) to make moves when dealing with humans, but it is precisely because humans are involved that we really need to. Spot the dilemma.

Maybe “turtles all the way down” really is the wrong saying; the world is not really a flat plate supported on the back of a giant tortoise, with nothing but turtles underneath.[5] Maybe this world is actually a huge Gordian knot tied up by ouroboroses. Or maybe Hindu mythology has had it right all this time: under the world tortoise there will still be an ouroboros governing us, and no matter where we go, we will always have the ouroboros creeping in the back.

[1] Paul Krugman, Why We Fight Wars, The New York Times, https://www.nytimes.com/2014/08/18/opinion/paul-krugman-why-we-fight.html

[2] It is impossible for the defeated to have literally nothing left in their hands, but when civilians, fellow human beings, are the only resource remaining, that is a common stopping point. So even starting from an economic standpoint, in extreme times such as war, humans and humanity are still the obvious answer.

[3] ICRC, Rule 1. The Principle of Distinction between Civilians and Combatants, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule1

[4] ICRC, Rule 14. Proportionality in Attack, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule14

[5] Stephen Hawking, A Brief History of Time, Bantam Dell Publishing Group: “A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: ‘What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.’ The scientist gave a superior smile before replying, ‘What is the tortoise standing on?’ ‘You’re very clever, young man, very clever,’ said the old lady. ‘But it’s turtles all the way down!’”