The Growing Role of Ethics in AI Development
By some strange paradox, ethics today deals with arguably the world's, and society's, most important questions, yet these questions are accepted as ones that will never reach a conclusion. Ethical dilemmas exist to sustain a moral dialogue that indirectly shapes law and societal values.
As we begin developing artificial intelligence, however, and endowing these artificial creations with moral compasses, we will have to make conclusive decisions about how different ethical dilemmas should be resolved.
Take the trolley problem, in which a train is bearing down on three people on a track: one can either let the three die, or push someone off a bridge, killing him but saving the others. Today this is largely mental chess, amusing friends at dinner parties and challenging world-class academics alike, but it is also largely irrelevant: one would almost never face such a situation, and anyone who did would have no time to weigh the ethical consequences of different actions. They would simply act on impulse.
But if the same problem arose with a self-driving car, which could either swerve off the road and kill the one person inside, or stay on the road and kill the three people in its path while saving the rider, then philosophers would have to arrive at a conclusive decision and state which move is more ethically justified.
‘Ethical programmers’ would have to build the capacity for ethical decision-making into the self-driving car’s software. But how would this be done? Philosophers today cannot agree on the simplest of questions, such as whether a tree falling in a forest makes a sound; how could they all agree on the value of a human life?
Could we implement some sort of democratic system, in which people vote on which normative ethical theory a given technology should follow? A website could present an array of ethical theories (virtue ethics, utilitarianism, deontology, and so on), each paired with its corresponding computer code, and the public could vote on which one self-driving cars should rely on. At the very least, this democratic process would allow a company such as Tesla to justify a car swerving to avoid three people, killing the rider in the act, on the principles of utilitarianism, in which the majority is valued over the minority. The computer could even weigh, in a millisecond, factors such as how many dependants each person has, their age, their health, and so on, and thereby determine how much each life is worth. Would a computer be justified in killing three elderly cancer patients in order to save a mother of five?
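To make the idea concrete, here is a minimal sketch of such a utilitarian decision rule. Everything in it is hypothetical: the weighting factors, the numbers, and the function names are my own illustrations, not anything a real vehicle manufacturer uses.

```python
# A hypothetical utilitarian decision rule for the swerve-or-stay dilemma.
# All weights and factors here are illustrative assumptions only.

def life_weight(age, dependants, healthy):
    """Assign a (purely hypothetical) weight to a life from crude factors."""
    weight = 1.0
    weight += 0.2 * dependants          # more dependants -> higher weight
    weight -= 0.005 * max(age - 60, 0)  # discount advanced age slightly
    if not healthy:
        weight -= 0.1
    return max(weight, 0.1)             # every life retains some weight

def utilitarian_choice(actions):
    """Pick the action that destroys the least total weighted value.

    `actions` maps an action name to the list of people it would kill,
    each person given as a tuple (age, dependants, healthy).
    """
    def cost(victims):
        return sum(life_weight(*person) for person in victims)
    return min(actions, key=lambda a: cost(actions[a]))

# Three pedestrians on the road vs. one rider in the car:
actions = {
    "stay":   [(30, 2, True), (45, 0, True), (70, 1, False)],
    "swerve": [(40, 3, True)],
}
print(utilitarian_choice(actions))  # -> "swerve" under these weights
```

The uncomfortable part is not the code, which is trivial, but choosing the weights: every coefficient in `life_weight` encodes a moral judgement that someone would have to defend.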
Today, a great restriction on ethical theories is their sheer impracticality: a human cannot weigh the consequences of different actions in real time. Computers and artificial life could. Ethical theories would therefore begin to have practical applications.
For example, one could examine Bentham’s hedonic calculus, which determines the value of different actions based on seven factors:
- Intensity — How intense is the pleasure or pain?
- Duration — How long does the pleasure or pain last?
- Certainty — What is the probability that the pleasure or pain will occur?
- Propinquity (nearness or remoteness) — How far off in the future is the pleasure or pain?
- Fecundity — What is the probability that the pleasure will lead to other pleasures?
- Purity — What is the probability that the pleasure will not be followed by pain?
- Extent — How many persons are affected by the pleasure?
For a human, it is impossible to make these determinations in real time when weighing different actions: we simply cannot calculate all the possible outcomes, given our limited knowledge of others and our inability to process such large data sets in milliseconds.
But if the hedonic calculus were deemed a correct and justified moral theory, and voters decided it was the theory on which they wanted artificial life to base its decisions, one could theoretically program it into artificial life and allow it to quickly weigh the consequences of its actions, applying the seven factors to every situation and choosing the action that would provide the greatest amount of pleasure to the greatest number of people.
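A sketch of what programming the seven factors might look like follows. Bentham gives no formula, only the factors to weigh, so the way I combine them below is an assumption of mine, and the scenario numbers are invented for illustration.

```python
# A hypothetical scoring function over Bentham's seven factors.
# The combination rule is an illustrative assumption, not Bentham's own.
from dataclasses import dataclass

@dataclass
class Outcome:
    intensity: float    # how intense the pleasure (+) or pain (-) is
    duration: float     # how long it lasts
    certainty: float    # probability it occurs at all, 0..1
    propinquity: float  # discount for remoteness in time, 0..1
    fecundity: float    # probability it leads to further pleasures, 0..1
    purity: float       # probability it is NOT followed by pains, 0..1
    extent: int         # how many persons are affected

def hedonic_score(o):
    """Expected weighted pleasure of one outcome across everyone affected."""
    base = o.intensity * o.duration * o.certainty * o.propinquity
    return base * (1 + o.fecundity) * o.purity * o.extent

def best_action(actions):
    """Choose the action whose outcomes sum to the greatest total score."""
    return max(actions, key=lambda a: sum(map(hedonic_score, actions[a])))

# Helping three people vs. doing nothing, with invented numbers:
actions = {
    "help":   [Outcome(5, 2, 0.9, 1.0, 0.5, 0.9, 3)],
    "ignore": [Outcome(1, 1, 1.0, 1.0, 0.1, 1.0, 1)],
}
print(best_action(actions))  # -> "help" under these numbers
```

A machine could evaluate thousands of such outcomes per decision, which is exactly the practicality that humans lack; the open question is who supplies, and justifies, the numbers.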
Many ethical theories’ greatest restriction is their impracticality, but if we can make them practical, through robots that consider enormous data sets and algorithms in a matter of milliseconds, how will ethics’ role in society change? Might it pass from a group of extroverted, ‘deep’ people to the future lawmakers and police of society? I don’t know, but I hope so, as I’m going to study ethics at university.