The way we talk about Ethics is broken
Alex Lenail

This is a very interesting article. From an ethics perspective I would dispute two things. First, not everyone thinks Trolley Problems are useful. There are many vocal opponents who regard them as "misleading" and argue that the "intuitions" they provoke do not constitute trustworthy data for moral analysis (e.g. Allen Wood's response to Parfit in On What Matters, Vol. 2, p. 69). However, I would not contest that they have a certain "canonical" status.

Second, there are far more "moral options" on the table than Deontology vs. Utilitarianism. Personally, I rely on notions from Needs Theory (Soran Reader, Needs and Moral Necessity) and notions of fairness derived from Kant and Contractualism (Rawls, Scanlon) to build AI formalizations for solving Trolley Problems.

Noting the question of feedback loops (as distinct from feedforward loops), and mindful of Tononi and Koch's suggestion that consciousness and all that goes with it (free will, ethical autonomy) may never be possible with feedforward architectures alone, it still seems to me that Deep Learning could get some traction on moral problems, provided the moral-domain equivalents of the diagonal lines, faces, and cats of the cat/dog classification domain can be defined. Clearly, deep learning morals is considerably more complex than deep learning the difference between cats and dogs.

The AI might need some kind of "story processing" to deep-learn cause and effect, as well as to classify events as harmful or beneficial to humans, with the ultimate aim of classifying a proposed plan of action as right or wrong. These lower-level neural nets would cover needs (of agents), wants, risk assumption, benefits, burdens/harms, innocence, desert, and the like. These various factors contribute to the top-level classification of an act as "right" or "wrong."
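The two-level structure described above can be sketched in code. This is a minimal illustrative toy, not a real moral model: the factor names come from the paragraph, but the scoring functions, weights, and the linear combination at the top level are all my own assumptions standing in for the lower-level nets.

```python
# Hypothetical sketch of the two-level architecture described above:
# lower-level scorers (needs, wants, risk, benefit, harm, innocence,
# desert) feed a top-level "right"/"wrong" classification.
# All weights and scoring rules are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Scenario:
    """Toy representation of a proposed plan of action (all values 0..1)."""
    needs_met: float      # degree to which agents' needs are met
    wants_met: float      # degree to which agents' wants are met
    risk_assumed: float   # risk voluntarily assumed by affected agents
    benefit: float        # overall benefit produced
    harm: float           # overall burden/harm inflicted
    innocence: float      # innocence of those harmed
    desert: float         # how far outcomes track what agents deserve


def lower_level_scores(s: Scenario) -> dict:
    """Stand-ins for the lower-level nets: each maps the scenario
    to one morally relevant factor score (sign indicates valence)."""
    return {
        "needs": s.needs_met,
        "wants": s.wants_met,
        "risk": s.risk_assumed,
        "benefit": s.benefit,
        "harm": -s.harm,                     # harm counts against the act
        "innocence": -s.innocence * s.harm,  # harming the innocent is worse
        "desert": s.desert,
    }


# Illustrative weights for the top-level combination (pure assumption;
# a trained net would learn these rather than have them hand-set).
WEIGHTS = {"needs": 2.0, "wants": 0.5, "risk": 0.5,
           "benefit": 1.0, "harm": 2.0, "innocence": 1.5, "desert": 1.0}


def classify(s: Scenario) -> str:
    """Top-level classification of an act as 'right' or 'wrong'."""
    score = sum(WEIGHTS[k] * v for k, v in lower_level_scores(s).items())
    return "right" if score >= 0 else "wrong"
```

For example, a plan that meets needs at little cost (`Scenario(0.9, 0.5, 0.1, 0.8, 0.0, 0.0, 0.5)`) comes out "right", while one that harms the innocent for no benefit comes out "wrong". In a deep-learning version, each scorer and the final combination would be learned from labeled stories rather than hand-coded.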
