Excellent post. You raise many good points. My personal viewpoint on this problem is: who invited you to play 'god' (not you the author, but the observer in the scenario) and decide who lives and who dies? It concerns me that this problem is so regularly raised with regard to self-driving cars. Now, I accept that accidents happen, even when we try to avoid them using very clever technologies. What causes me concern, and is something that I really cannot accept, is that 'someone' could program a 'solution' to this problem into such cars. It is one thing to have an accident in which people not riding in the car (pedestrians) die; it is a whole other thing for the 'car' to actively decide who should die. By all means try to minimise the impact of an accident, perhaps even to the point of destroying the car so that no pedestrians can be killed, but do not under any circumstances let a 'machine' choose who lives and who dies. It is simply not qualified to do so; in fact, no one is. This feels like one of those situations where, just because we can do something (i.e., build such logic into a car), we should not actually do so.
