Should The Robot Save The Embryos?
The Trolley Problem and objective vs. subjective morality
On Twitter, Patrick S. Tomlinson challenged “the pro-life crowd” to a thought experiment: There’s a fire at a fertility clinic and you can only save a five-year-old child or a container of 1,000 viable human embryos. Which do you choose?
Tomlinson claims people who say an embryo has the same moral value as a born child would rescue the child, proving that their pro-life stance is a sham. In The Weekly Standard, Berny Belvedere counters that “in crisis situations our decisions don’t inexorably follow what we believe to be right.” Though he holds pro-life views, Belvedere would save the five-year-old, because he “would not be able to stare into the eyes of a child in perfect fear and pass him by. The embryos cannot do anything, in their present state, to match the terror and the dread of a child about to be engulfed by the flames.”
This is sufficient to answer Tomlinson’s challenge — a pro-life individual choosing to save the child does not, on its own, disprove the pro-life position — but it ducks the most important question.
Was saving the child morally wrong? Should the embryos have been saved instead?
Individual decision-making is subject to in-the-moment human emotion, but morality shouldn’t be. Similarly, public policy should be based on rational analysis, factoring in how humans act in crises, but ultimately remaining detached from in-the-moment emotion.
The fertility clinic hypothetical thus leads to a practical question: Should we train first responders to save the embryos?
Being human, they might not be able to carry out the instructions in the moment, and it would be understandable if they saved the child. But it won't be long before robots assist human first responders, and robots are not subject to emotion. Should we program them to save the embryos?
The Morality of Self-Preservation
Belvedere answers Tomlinson’s hypothetical with a hypothetical of his own: given the option, would you save 100 strangers or your own spouse or child?
The “standard liberal position,” Belvedere writes, “crucially involves the view that every individual has equal value.” Therefore, if the standard liberal would save his family member rather than 100 strangers, his moral stance is, in Tomlinson’s words, a sham.
Belvedere’s point is that both his and Tomlinson’s thought experiments are flawed, because decision-making in moments of crisis is emotional. For that reason, the standard liberal’s decision to save his family member does not invalidate his belief in moral equality.
Nevertheless, his decision violates the principle that every individual has equal value. Belvedere’s argument implies that, though saving the family member was an emotional, spur-of-the-moment decision and therefore understandable, it was still morally incorrect.
In this, Belvedere blurs objective and subjective value. However, it’s not contradictory to believe all human beings have the same objective value while also believing some human beings have greater subjective value than others.
The latter position is both natural and a core basis of society. Every person — every animal, really — has an instinct for self-preservation. Acting according to that instinct is morally defensible. If given the choice between saving myself from certain death and saving two strangers, I would pick myself. To me, my life has greater subjective value than the lives of two strangers. Almost everyone believes the same about themselves.
When someone risks their life to save others we consider it heroic. The action doesn’t require guaranteed self-sacrifice — the value of self-preservation is so great that just putting oneself at risk to help strangers makes one a hero.
The instinct for self-preservation naturally extends to family members. Those we love are, in a sense, part of ourselves. That’s obvious with children, since the same evolutionary logic that creates an instinct for self-preservation applies to preserving offspring — perhaps even more so. But something similar can apply to spouses, parents, siblings, and other loved ones.
Society is built on an extension of this principle. Extended family, clan, and tribe all spring from the notion that protecting one’s own is not just acceptable, but moral. Political constructions — city, nation, state — rationalize and channel the instinct for extended self-preservation to create larger, more diverse groups, capable of greater flourishing than clans or tribes.
But subjective value remains. A government should place the lives of its people over the lives of others. The point of a nation-state’s military is national security and advancing national interests. It is reasonable to want countries to act for the good of humanity as a whole, but that will always be secondary. Protecting all people at all times is impossible. Therefore, as Just War theorists argue, if a state is forced to choose between the lives of its people and the lives of others, it is morally obligated to defend its own.
Every person has equal objective value but subjective value varies. Moral philosophy begins with the bedrock assumption that every individual is equal because its goal is discerning objective value. But it would be impossible to build a society upon objective value alone. Rejecting subjective value is so contrary to human nature that any attempt would be dystopian.
For this reason, saving a single family member from a fire rather than multiple strangers does not contradict the objective principle that all people are equal. However, unlike a random individual, a firefighter acts on behalf of the group. All else being equal, the first responder should save the greater number of lives.
The child-or-embryos question is more complicated. If one embryo and one five-year-old have the same moral value, a first responder should save two embryos over one child; 1,000 embryos would be a no-brainer.
However, I’d argue that the first responder should still save the child. The embryos are not value-less, and a first responder should accept more risk to save that container than one holding, say, liquid anesthetic. But they also do not have as much moral value as the five-year-old.
The child has subjective experiences and, in the vast majority of cases, a family. From those family members’ perspective, this five-year-old has immense subjective value. The embryos have neither their own experiences nor the same love from a family, giving them less moral value.
I recognize that people who embrace the pro-life position will dispute this. However, I challenge them to wrestle with the question of whether a first responder should prioritize saving two embryos over one five-year-old. And if the first responder should save the child, consider whether the calculus changes as the number of embryos increases.
Self-Driving Cars and the Trolley Problem
To illustrate the flaws in Tomlinson’s hypothetical, Belvedere uses the famous thought experiment known as the Trolley Problem.
A runaway trolley (a train) is set to kill five people stuck on the tracks unless you pull a lever that diverts it onto another track, where it will kill only one person. Assume all six people — the five on the first track and the one on the second — are morally on a par: it's not as if the five are convicted rapists and the one is on the verge of curing cancer. Most people conclude that, in that situation, they would pull the lever. Their reasoning is simple: if we have to choose, saving five lives is preferable to saving one.
However, in a variation of this thought experiment in which you must push a large person into the trolley's path, rather than simply pull a lever, fewer people would do it. Even though the trade is the same (one life for five), the action is sufficiently different that many people make a different choice. Pushing a person is violent; redirecting a trolley is dispassionate.
This supports Belvedere’s claim that humans do not always act according to rational principles in the moment. The reason someone who would pull the lever wouldn’t push a bystander into the trolley’s path is the same reason a pro-life individual would save a child over multiple embryos.
However, thanks to self-driving cars, the Trolley Problem is no longer just a hypothetical.
Suppose a tree falls in the street and your car is going too fast to stop. Your only options are to swerve onto the sidewalk and hit pedestrians or crash into the fallen tree.
Almost everyone would instinctively swerve. If this killed bystanders, all but a sociopath would feel terrible. But when reviewing the awful them-or-me decision, many would find swerving not just acceptable in the moment, but morally defensible. And that number would rise if the driver’s child was in the car.
But what should a robot do? The argument from objective value says to count the pedestrians, estimate how many the swerving car would hit, and, if that number exceeds the number of passengers, crash into the tree. But the argument from subjective value instructs the car to prioritize its passengers.
Companies producing self-driving cars have to make this decision. It’s more complicated than the thought experiment I laid out — the machine can calculate risk probabilities, cars include measures that protect passengers in the event of a crash, etc. — but ultimately programmers must tell the car to prioritize passengers or consider them equal to pedestrians and the occupants of other vehicles.
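The contrast between the two policies can be made concrete with a minimal sketch. Everything here is illustrative and assumed — the function name, the two policy labels, and the casualty counts are invented for this essay, and no real manufacturer's logic looks this simple — but it shows how starkly the objective-value and subjective-value rules diverge once written down:

```python
# Illustrative sketch only: a "passenger" policy embodies subjective value
# (protect the occupants first); an "equal" policy embodies objective value
# (minimize total expected deaths). All names and numbers are hypothetical.

def choose_action(passengers: int, pedestrians_hit_if_swerve: int,
                  policy: str = "equal") -> str:
    """Return 'swerve' (onto the sidewalk) or 'brake' (into the tree)."""
    if policy == "passenger":
        # Subjective-value rule: the occupants always come first.
        return "swerve"
    # Objective-value rule: compare expected deaths on each path.
    if passengers <= pedestrians_hit_if_swerve:
        return "brake"   # fewer (or equal) deaths by hitting the tree
    return "swerve"      # fewer deaths by swerving

# One driver, three pedestrians: the two policies disagree.
print(choose_action(1, 3, policy="equal"))      # brake
print(choose_action(1, 3, policy="passenger"))  # swerve
```

The entire ethical debate is compressed into that one `if policy == "passenger"` branch: the question for engineers, regulators, and voters is which branch the law should permit.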
I want the automated driver to do what I’d do — prioritize me and mine — and I think most people would say the same. But should the law require autonomous vehicles to treat everyone equally?
Belvedere’s main point does not help answer this question. Engineers, companies, governments, and voters are not making this decision in the heat of the moment, and can approach it calmly and rationally.
A similar question applies to the fertility clinic hypothetical. In the near future, automated robots will assist first responders. They can arrive at a disaster more quickly, and enter areas that are too dangerous for humans. If they respond to a disaster at a fertility clinic, should they save a child or a container of embryos?
Does everyone who holds a pro-life position on abortion really want programmers to teach the robot to let the child die?