The Growing Role of Ethics in AI Development

Giving self-driving cars the capacity for moral decision-making. https://goo.gl/images/zFXR91

In a strange paradox, ethics today deals with arguably society's most important questions, yet it is accepted that these questions will never reach a conclusion. Ethical dilemmas exist to provoke a moral dialogue that indirectly shapes law and societal values.

However, I believe the role of ethics in society is going to grow dramatically as we begin developing artificial intelligence and equipping these artificial creations with moral compasses. We, as a society, will have to make conclusive decisions about how different ethical dilemmas should be resolved, and will have to decide which normative ethical theory is most applicable to each scenario.

Normative ethics is fundamentally the study of ethical action. It is the branch of philosophical ethics that investigates the questions that arise when considering how one ought to act. Moral dilemmas are situations that challenge normative ethical theories and give them practical application.

In this post, I will present a famous ethical dilemma that I have recently been studying, the trolley problem, and explain how it can test the practicality of an equally famous normative ethical theory, utilitarianism, when that theory is implemented in an artificial intelligence's decision-making process, specifically in self-driving cars.

The Trolley Problem

The trolley problem presents the following dilemma: a trolley is approaching and three people are on the track; one can either let the three die, or divert the trolley, killing one innocent bystander instead. Today this is largely mental chess, as likely to amuse friends at a dinner party as to challenge world-class academics, because one would almost never run into such a situation, and anyone who did would not have time to weigh the ethical consequences of different actions; they would simply act on impulse. After decades of debate, and after countless opinions from prolific philosophers, one inevitably arrives at the somewhat disappointing conclusion that there is no conclusion; there is no correct answer to this dilemma. It is simply impossible to decide, in an instant, whether a group of people's lives are more valuable than one person's life, as there are too many factors to take into consideration.

However, as developments in computing power and the growth of machine learning have shown, more and more factors (inputs) can be considered in a mere instant. With this in mind, if this problem were to arise with a self-driving car, which could either swerve off the road and kill one person on the sidewalk or stay on the road and kill three people in its path, then philosophers would have to arrive at a conclusive decision and state decisively which is the more ethically justified move.

Utilitarianism

Modern utilitarianism, as originally formulated by Jeremy Bentham in 1789, essentially argues that the most morally justified action is the one that provides the greatest amount of pleasure to the greatest number of people. While this theory has been instrumental in the development of modern societal values, its real-time application is limited by its impracticality. It is unrealistic to imagine a human considering, in the moment, every outcome of her actions and the amount of pleasure each would produce for each person affected.

Today, utilitarianism's impracticality stems from the limits of human processing speed and knowledge, which make it impossible to apply to day-to-day decision making. It may, however, be possible for a self-driving car to apply it, using machine learning algorithms that can process enormous data sets in a matter of milliseconds.

More specifically, this could be developed into code through Bentham's own hedonic (or felicific) calculus, which determines the value of different actions based on seven factors; a rough code sketch follows the list below.

  1. Intensity — How intense is the pleasure or pain?
  2. Duration — How long does the pleasure or pain last?
  3. Certainty — What is the probability that the pleasure or pain will occur?
  4. Propinquity (nearness or remoteness) — How far off in the future is the pleasure or pain?
  5. Fecundity — What is the probability that the pleasure will lead to other pleasures?
  6. Purity — What is the probability that the pleasure will not be followed by pain?
  7. Extent — How many persons are affected by the pleasure?
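To make this concrete, here is a minimal sketch, in Python, of how the seven factors might be represented and combined into a single score. The field names, the 0-to-1 scales, and the simple weighted sum are my own illustrative assumptions, not an established implementation.

```python
# A minimal sketch of Bentham's seven factors as a data structure plus a
# weighted score. All names, scales, and the weighted-sum combination are
# illustrative assumptions rather than a standard implementation.
from dataclasses import dataclass

@dataclass
class HedonicFactors:
    intensity: float    # how intense the pleasure or pain is (0 to 1)
    duration: float     # how long it lasts (0 to 1, normalised)
    certainty: float    # probability that it actually occurs (0 to 1)
    propinquity: float  # how soon it occurs (1 = immediate)
    fecundity: float    # chance it leads to further pleasures (0 to 1)
    purity: float       # chance it is not followed by pain (0 to 1)
    extent: float       # share of people affected (0 to 1, normalised)

def hedonic_score(factors: HedonicFactors, weights: dict[str, float]) -> float:
    """Combine the seven factors into one utility estimate via a weighted sum."""
    return sum(w * getattr(factors, name) for name, w in weights.items())
```

A real system would need far richer models than a single weighted sum, but even this skeleton shows where the philosophical choices, namely the weights, enter the code.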

Now, how might we apply these criteria to real-time decision making for self-driving cars, within the trolley dilemma, when considering whether to swerve and kill one person or stay straight and kill three? Let us assume, for argument's sake, that the three people are all extremely old and have terminal diseases, whereas the person walking alone on the sidewalk is a mother of five.

While humans could never consider all of these factors in real time, a machine learning algorithm theoretically could. Here are a few rough ideas for how the self-driving car might arrive at a decision based on these factors:

1. Intensity — After checking the medical records of all parties, the car determines that, due to painkillers taken by the three elderly people, their overall pain would be less intense than that of the young mother.

2. Duration — The three elderly people would die faster and experience pain for a shorter time than the mother.

3. Certainty — There is a 5% chance that the mother survives, whereas there is a 0% chance that the three elderly people survive.

4. Propinquity — Not a deciding factor, as the outcome is immediate either way.

5. Fecundity — Not a deciding factor, as neither outcome leads to further pleasures; both result only in pain.

6. Purity — The three elderly people have no family and are all terminally ill, so their deaths would cause roughly 80% less overall displeasure than the death of the mother, who, judging by her Facebook profile, has 1,000 friends and a large family who actively comment on her posts.

7. Extent — Three people are killed in one situation, whereas only one is killed in the other.

While it would be unlikely for this exact situation to arise, the computer system could still quickly skim through all of the data for both options and, based on a set of criteria such as these, determine the overall amount of pleasure or pain each option would cause. Based on the information provided, it would likely deem killing the three people, as opposed to the one, the more morally justified action: although more people are killed by staying straight, the mother has more dependants and a larger social network, so the pain felt would be far more wide-reaching. This would, of course, all depend on the weight, or importance, attributed to each factor.
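To illustrate how such a weighting might play out, here is a toy comparison of the two options in Python. Every number and weight below is a made-up stand-in for the hypothetical scenario described above (the 5% survival chance, the mother's large social network, and so on), and with different weights the decision could easily flip.

```python
# A toy comparison of the two options, with all values and weights invented
# purely to mirror the hypothetical scenario in the text.
weights = {"intensity": 1.0, "duration": 1.0, "certainty": 1.0,
           "propinquity": 0.5, "fecundity": 0.5, "purity": 1.5, "extent": 1.0}

# Each factor is scored here as the *pain caused* (0 = none, 1 = maximal),
# so the option with the lower weighted total is preferred.
options = {
    "swerve and kill the mother": {
        "intensity": 0.9, "duration": 0.8, "certainty": 0.95,
        "propinquity": 1.0, "fecundity": 0.0,
        "purity": 0.9,   # read loosely, as in point 6: much follow-on grief
        "extent": 0.2,   # one person directly killed
    },
    "stay straight and kill the three elderly people": {
        "intensity": 0.5, "duration": 0.4, "certainty": 1.0,
        "propinquity": 1.0, "fecundity": 0.0,
        "purity": 0.1,   # no family, so little follow-on grief
        "extent": 0.6,   # three people directly killed
    },
}

def total_pain(scores: dict[str, float]) -> float:
    """Weighted sum of the seven factors for one option."""
    return sum(weights[factor] * scores[factor] for factor in weights)

decision = min(options, key=lambda name: total_pain(options[name]))
print(decision)  # with these particular weights, the car stays straight
```

Plain dictionaries are used here to keep the example standalone; the point is not the specific numbers but that the outcome hinges entirely on how the factors are scored and weighted.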

But how can this normative ethical theory be decided upon? There are many issues with utilitarianism. For example, if there were a situation in which five bullies were beating up one person and deriving pleasure from it, a crude utilitarian calculation would justify the action, since the pleasure of the five outweighs the suffering of the one.

Similar issues arise in politics, where a certain political ideal may be valid in many respects yet have its limitations in others.

The way society deals with this kind of dilemma is through democracy: allowing the public to vote.

Could we therefore implement some sort of democratic system in ethical decision making, in which people vote on which normative ethical theory they want a certain technology to be based upon? A website could present an array of different ethical theories, such as virtue ethics, utilitarianism, and deontology, each accompanied by its corresponding computer code, and the public could then vote on which one self-driving cars, for example, should rely on. This would at the very least allow companies such as Tesla to justify a car killing three terminally ill elderly people rather than swerving and killing a mother of five, on the grounds that it simply provides a greater amount of pleasure to a greater number of people.
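As a very rough sketch of the mechanism (not of any real platform), the tallying itself is trivial; the hard part would be everything around it, from framing the options to verifying the voters. The theory names and ballots below are purely illustrative.

```python
# A toy tally of votes over candidate normative theories; names and ballots
# are invented for illustration only.
from collections import Counter

ballots = ["utilitarianism", "deontology", "virtue ethics",
           "utilitarianism", "deontology", "utilitarianism"]

tally = Counter(ballots)
chosen_theory, votes = tally.most_common(1)[0]
print(f"The technology should implement: {chosen_theory} ({votes} votes)")
```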

Suppose the hedonic calculus were deemed correct, and voters decided that this was the moral theory upon which they wanted artificial intelligence to base its decisions. One could theoretically program Bentham's hedonic calculus into all aspects of artificial intelligence, ranging from self-driving cars taking the actions that provide the greatest pleasure, to AI in medicine; for example, a hospital with minimal resources pulling the plug on one dying patient in order to transfer the medicine to another patient with more friends and family.

The basic premise of providing the greatest pleasure to the greatest number could be applied to all developments in artificial intelligence, although this may result in a horrifying dystopia in which people are given a right to life proportional to their online popularity.

Today, the greatest restriction on many ethical theories is their impracticality. But if we are able to make them practical, through machines that can work through enormous data sets and algorithms in milliseconds, how will the role of ethics in society change?