Elon Musk and 116 other experts in AI (artificial intelligence) and robotics have signed a petition “calling on the United Nations to ban lethal autonomous weapons, otherwise known as ‘killer robots.’” They believe such weapons would lead to a “third revolution in warfare” (with gunpowder being the first and nuclear weapons the second). Proponents of the technology say that these systems could more reliably identify known targets (via facial recognition) and strike them. But many feel there is something immoral about not having a human approve each strike. The experts signing the letter say that autonomous weapons that kill without human intervention are “morally wrong.”
· Point: Could this just be another example of moral relativism that we eventually accept and become accustomed to, much like our growing acceptance of drone strikes or women in military positions?
· Counterpoint: Isaac Asimov (1920–1992) was an American science-fiction writer and professor of biochemistry at Boston University. The Oxford English Dictionary credits his science fiction with introducing the word “robotics.” Amazing! He is perhaps best known for his “Three Laws of Robotics” (written way ahead of his time!):
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So killer robots plainly violate his First Law.
We call on technologists to employ “Empathic AI” — programming that holds paramount human safety, health, and welfare. Utilitarianism, on the other hand, is a moral code that urges us to make decisions resulting in the greatest good for the greatest number of people. So would killer robots be a justifiable utilitarian compromise to Empathic AI, reducing the number of terrorists and enhancing our overall well-being?