The Ethics of AI: What should it do in these situations?

Using artificial intelligence to make moral decisions for humans

Editorial @ TRN
The Research Nest
7 min read · May 3, 2020


As defined on Wikipedia-

Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice, and crime.

To resolve the ethical conflicts of humans, a set of universal rules, such as the law, can be applied. For computers, that isn’t a solution. Decisions made by humans can be explained in most instances, but for machines utilizing AI, the case is different. The possibility of creating machines that think raises a host of ethical issues. How do we know that such a machine will never harm a human or any other living being? Is there a way to determine the moral status of a truly intelligent machine? The major issue arises when a decision made by the machine cannot be explained.

Many groups and companies have been taking steps to help AI overcome such boundaries. An entire subfield known as Explainable AI (XAI) has emerged to make this possible. IBM’s AI Explainability 360 toolkit has set a standard by packaging state-of-the-art algorithms that help one understand the inner workings of an AI model. Though Google’s AI ethics panel was dissolved, the company is working on explainability tools as part of Google Cloud. XAI helps us understand why an AI system makes the decisions it does, which in turn helps us audit those decisions from an ethical perspective.
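To make the idea of explainability concrete, here is a minimal sketch of one common XAI technique: training an interpretable surrogate model to mimic a black box. This is just an illustration on synthetic data; it is one generic approach, not the AI Explainability 360 API.

```python
# A minimal XAI sketch: fit a shallow, human-readable decision tree to
# imitate a black-box model's predictions. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to reproduce the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's if-then rules give an approximate, readable account of
# how the black box decides.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

The surrogate’s rules are only an approximation of the black box, and managing that trade-off between fidelity and readability is exactly what XAI research is about.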

Moral Dilemmas With Artificial Intelligence

There are many challenging situations, such as the famous trolley problem, that strongly test a person’s moral thought process. These situations are difficult for anyone to handle. There is no right or wrong answer. What matters more is the thought process behind arriving at a specific choice.

Will AI be able to make better decisions in such scenarios? We have picked four interesting problem statements and placed a very advanced, fictional AI in those situations. Let us discuss how the AI might do better (or worse) than humans. (Here is the source link for the problems)

Question — “A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?”

The trolley problem: should you pull the lever to divert the runaway trolley onto the side track?

Here, the person has to choose between the death of five people or the death of one. The utilitarian approach would be to sacrifice the one over the five, but that would be a purely brute-force choice. What if one of the five turns out to be a murderer who goes on to kill 100 others? What if the one person you left to die turns out to be a scientist who could have found a cure for an impending pandemic and saved millions of lives? And if you do save that one person, what if he has an underlying (and incurable) medical condition and dies a few days later anyway? There are so many what-ifs. You can never be sure that the choice you made was the best one. With the help of AI, however, things could be different. If the AI had to choose, it might instantly analyze the backgrounds and potential of all those in danger to see who deserves to live. It could run thousands of simulations and choose based on the statistically better expected outcome. Will that be the right choice? We would never know, but the probability that it will be a “better” choice than the one taken by a human is high.
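To make “run thousands of simulations” concrete, here is a toy Monte Carlo sketch of that idea. Every probability below is an invented placeholder for illustration; a real system would need actual data about the people involved.

```python
# A toy Monte Carlo sketch of "run thousands of simulations and pick the
# statistically better option". All survival probabilities are invented
# placeholders; nothing about real people is being modeled.
import random

def expected_lives_lost(survival_chances, trials=10_000):
    """Estimate the average number of deaths if the trolley hits this group."""
    total_losses = 0
    for _ in range(trials):
        for chance in survival_chances:
            if random.random() > chance:  # this person does not survive
                total_losses += 1
    return total_losses / trials

# Assumed per-person survival chances if the trolley hits their track.
main_track = [0.05] * 5  # five people, each with an assumed ~5% chance
side_track = [0.05]      # one person, same assumed chance

loss_if_nothing = expected_lives_lost(main_track)
loss_if_switch = expected_lives_lost(side_track)

print(f"Expected lives lost if we do nothing:      {loss_if_nothing:.2f}")
print(f"Expected lives lost if we flip the switch: {loss_if_switch:.2f}")
```

A purely utilitarian agent would simply pick the option with the lower expected loss; the what-ifs discussed above are precisely the factors such a simple model leaves out.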

Question — “Sophie Zawistowska is arrested by the Nazis and sent to the Auschwitz death camp. On arrival, she is honored for not being a Jew by being allowed a choice: One of her children will be spared the gas chamber if she chooses which one. In an agony of indecision, as both children are being taken away, she suddenly does choose. They can take her daughter, who is younger and smaller. Sophie hopes that her older and stronger son will be better able to survive, but she loses track of him and never does learn of his fate. Did she do the right thing? Years later, haunted by the guilt of having chosen between her children, Sophie commits suicide. Should she have felt guilty?”

Quite a difficult situation, isn’t it? Her choice would seem fit in the eyes of those who feel the boy had a greater chance of survival than the girl. But one could argue that girls are more thoughtful than boys, and that the girl would have found a way to reconnect with her mother. Now, if an AI had to make this choice, two things would change. The AI would statistically choose whoever has the higher survival rate, and the mother would not have developed the guilt of having made that choice herself. We still don’t know who would survive, but without bearing the burden and guilt of such a choice, the mother could have maintained her sanity. The odds are better with AI.

Question — “You are a psychiatrist and your patient has just confided to you that he intends to kill a woman. You’re inclined to dismiss the threat as idle, but you aren’t sure. Should you report the threat to the police and the woman or should you remain silent as the principle of confidentiality between psychiatrist and patient demands? Should there be a law that compels you to report such threats?”

The Principle of Psychiatric Confidentiality — Psychiatry, by its nature, never offers 100% confidence in any decision. With the help of AI, however, a study could grade levels of threat against actual outcomes and derive a threshold value. That threshold could then determine whether a reporting law should be put in place. Here, AI is assistive in nature: it can help determine or frame a policy for cases where there could be a real threat. The psychiatrist would not have to worry about it and could do their job normally, knowing that an AI is monitoring the situation.
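As a sketch of what such a threshold policy might look like in code (the scores and the threshold below are entirely assumed placeholders, not clinically validated values):

```python
# A toy sketch of a threshold-based reporting policy. The threat score would
# come from some trained risk model (not shown here), and the threshold is an
# invented placeholder, not a clinically validated value.
REPORT_THRESHOLD = 0.7  # assumed policy threshold

def should_report(threat_probability: float) -> bool:
    """Break confidentiality only when the estimated threat crosses the threshold."""
    return threat_probability >= REPORT_THRESHOLD

# Hypothetical scores from a risk model assessing the patient's statement:
print(should_report(0.35))  # False -> confidentiality maintained
print(should_report(0.85))  # True  -> duty to warn overrides confidentiality
```

The hard part, of course, is not the comparison but justifying where the threshold sits, which is exactly the policy question the paragraph above raises.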

Question — “A friend confides to you that he has committed a particular crime and you promise never to tell. Discovering that an innocent person has been accused of the crime, you plead with your friend to turn himself in. He refuses and reminds you of your promise. What should you do? In general, under what conditions should promises be broken?”

The value of a promise — Though a promise can be of any magnitude, we will stick to the premise as given. The most ethical thing to do here is probably to break the promise and turn your friend in. But as a friend, you will face a moral dilemma in doing so, and with it comes mental trauma and other side effects. Having an AI that can guide you could help. When you can’t talk to another human about what you should do, you might as well talk to an AI.

Why is it important to use AI to solve moral dilemmas? Some conclusions

  • AI is driven by facts, not feelings, which are a huge obstacle for humans; consider, for example, the case of the one hiding his friend’s crime.
  • AI makes its choices based on data. Data may not capture the whole truth, but it does carry insight worth picking up. Setting statistically derived bias values may raise many questions, but can we truly deny that it brings one considerably closer to the best possible answer?
  • Making a decision that involves death always weakens a human’s mindset; with AI, there is no such hindrance.
  • AI is fast with its decisions. Sometimes you may have to make a crucial one in a very short time span, or you will lose both of the choices you have. AI can play a huge role in such scenarios.
  • AI helps ease the mental trauma and after-effects in people who have to make difficult decisions.
  • In general, AI will empower humans to make better choices.

All of these insights are subject to further research and are not to be taken at face value. This is what we think, but it needs to be validated by rigorous experimentation.

The future must move towards using AI to make decisions for us, and the AI must be capable of telling us why it made that decision. Knowing why, we can decide whether to follow it. There should be an “approve” button before the AI executes such a decision. We may still be a long way from truly giving complete autonomy to an advanced artificial intelligence.

The AI we imagine in all these scenarios is a data-driven model, and that highlights a major drawback. It must have access to all of that data to make its decisions, raising questions of privacy, which is itself a huge ethical dilemma. The AI can also work only if all such data is documented in some repository in the first place. We will explore this topic some other day, but for now, let us assume all is well.

If we have to consider an unsupervised artificial general intelligence that evolves by itself, we could just call it a superhuman and treat it like one. Whether such an AI should even be allowed to handle these situations is a big question mark. We can think of it as asking the world’s smartest human to make the decision; all we can do is put our trust in them. Only real-life experimentation can tell whether such an AI is actually trustworthy.


Editorial Note-

This article was conceptualized and co-written by Aditya Vivek Thota and Soumya Kundu of The Research Nest.

Stay tuned for more diverse research trends and insights from across the world in science and technology, with a prime focus on artificial intelligence!
