Who Is Moral Enough to Teach Morals to Machines?
Teaching machines right from wrong might prove to be an impossible task

Del Spooner never trusted robots. He resented them for their cold logic, grounded solely in Asimov's laws, a rather mathematical approach to morality. The box office hit I, Robot highlighted the grey areas of what it means to be human, and, in reverse, what it means to be a genuinely honest machine.
Although scientists and engineers have yet to build artificial general intelligence (AGI), often described as the highest form of machine intelligence, asking how such an AGI-capable machine would act, feel, and reason is imperative. Even today's narrow AI applications must react correctly to their surroundings; an autonomous vehicle, for example, needs to recognize a stop sign. As AI-powered applications become a larger part of our everyday lives, the downsides of this trend have emerged as well. Humans must therefore figure out how, and by whom, morals should be given to machines so that they do not cause harm.
What type of intelligence does a moral machine need?
There is one big obstacle to teaching morals to a machine: we don't know what intelligence is. As humans, we have an intuitive notion of what intelligent behavior looks like, but scientifically speaking, intelligence is hard to define. In psychology, it functions as an umbrella term that bundles aspects of human intellectual capability, manifested in sophisticated cognitive accomplishments, high levels of motivation, and self-awareness.
So what type of intelligence does a machine need? In Western countries, being smart is associated with being quick: the fastest person to answer a question correctly is considered intelligent. In other parts of the world, however, a smart person takes time and approaches a problem in the most contemplative, well-thought-out fashion; there, being smart means considering an idea thoroughly before answering.
This example shows what is still in store for us: not only are we in the dark when it comes to defining intelligence, we also lack global agreement on how intelligence is expressed. From this perspective alone, diverse AI ethics boards are essential, because answering these questions requires different points of view, realities of life, and opinions.
Together, we should answer the following question: Is it conceivable that someday robots will be “good” decision-makers?
Acting according to and from ethical principles
In other words: will some machines someday act according to, and from, ethical principles? James H. Moor introduced four types of moral agents, ranging from the weakest form (ethical impact agents) to full ethical agents.
His classification is essentially about what kind of decision has to be made and how the agent can respond to the outcome of that decision. The agent types have increasing ethical competence, up to a machine with free will. According to Moor, these full ethical agents have "central metaphysical features that we usually attribute to ethical agents like us — features such as consciousness, intentionality, and free will. Adult humans are our prime example of full ethical agents".
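To make the taxonomy concrete, here is a minimal sketch in Python that represents Moor's four categories as an ordered enumeration (the two middle categories, implicit and explicit ethical agents, also come from his classification). The class and function names are invented purely for illustration; Moor's categories are conceptual distinctions, not a programming interface.

```python
from enum import Enum


class MoralAgentType(Enum):
    """Moor's four categories of moral agents, ordered by ethical competence."""
    ETHICAL_IMPACT_AGENT = 1    # any machine whose actions have ethical consequences
    IMPLICIT_ETHICAL_AGENT = 2  # ethical behavior built into the design (e.g., a safety interlock)
    EXPLICIT_ETHICAL_AGENT = 3  # represents ethical categories and reasons about them
    FULL_ETHICAL_AGENT = 4      # consciousness, intentionality, and free will: the adult human standard


def is_more_competent(a: MoralAgentType, b: MoralAgentType) -> bool:
    """Return True if agent type `a` sits higher on Moor's scale than `b`."""
    return a.value > b.value


if __name__ == "__main__":
    print(is_more_competent(MoralAgentType.FULL_ETHICAL_AGENT,
                            MoralAgentType.ETHICAL_IMPACT_AGENT))  # True
```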
Even before we live in a world with full ethical agents, we cannot answer the question of what AI should be allowed to do simply by pointing to what we currently allow it to do. We will need to come to a global understanding of what is fair and right versus unfair and wrong, and try to translate that shared human understanding into accepted algorithms. And we need to achieve this rather quickly, for all AI applications.
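To show, at the very simplest level, what "translating a shared understanding into accepted algorithms" might look like, here is a deliberately toy, hypothetical sketch: a hard-coded list of agreed constraints and a check of a proposed action against them. Every name here (Action, CONSTRAINTS, is_permitted) is invented for illustration, and real moral reasoning cannot be reduced to a lookup like this; the hard, unsolved part is agreeing on the constraints in the first place.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A proposed machine action, annotated with the properties we agreed to check."""
    description: str
    causes_harm: bool
    is_deceptive: bool


# Stand-in for a globally agreed set of moral constraints.
CONSTRAINTS = [
    ("do no harm", lambda a: not a.causes_harm),
    ("do not deceive", lambda a: not a.is_deceptive),
]


def is_permitted(action: Action) -> tuple[bool, list[str]]:
    """Check an action against every constraint; return the verdict and any violated rules."""
    violations = [name for name, check in CONSTRAINTS if not check(action)]
    return (not violations, violations)


if __name__ == "__main__":
    braking = Action("brake for a pedestrian", causes_harm=False, is_deceptive=False)
    print(is_permitted(braking))  # (True, [])
```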
Or, as I once put it in my blog post The Moral Machine: Teaching Right From Wrong: “If we cannot achieve this, then maybe, just maybe AI should outsmart us and ultimately teach us something — and when/if AI does, then it might not only be teaching globally but universally.”