Enhancing Virtues: Fairness

J. Hughes
Published by the IEET, 2015-02-14


From Moral Intuition to Moral Reasoning

Just as we have ancient neural architectures for bonding with our fellow mammals, we also appear to have evolved deeply wired neural intuitions about fairness and morality. One of our most deeply ingrained moral intuitions is that it is wrong to cheat, and that cheaters need to be punished. This impulse can be demonstrated in a laboratory experiment called the ultimatum game. One participant is given some money and instructed to offer a portion of it to the other participant. They can offer any fraction of the amount, or none at all, but they don’t get to keep any of the money if the other person rejects the split. Three quarters of participants offer something between 40% and 50%. When the splitter offers less than half, it triggers a disgust reaction in the amygdala of the person who must choose to accept or reject the split. When that disgust reaction is strong enough, usually when the offer is less than 40%, the person will reject the split even though it means giving up whatever they were offered. That self-sacrifice to spank the “cheater” at a cost to oneself is known as “altruistic punishment.”
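The responder’s behavior in the ultimatum game can be caricatured as a simple threshold rule. The sketch below is purely illustrative (the 40% cutoff, function names and payoffs are my assumptions, not a model from the experimental literature):

```python
def responder_accepts(offer_fraction, disgust_threshold=0.4):
    """Toy decision rule: reject the split (altruistic punishment)
    when the offer falls below the responder's disgust threshold."""
    return offer_fraction >= disgust_threshold

def play_round(pot, offer_fraction, disgust_threshold=0.4):
    """Return (proposer_payoff, responder_payoff) for one round.
    If the responder rejects, both players get nothing."""
    if responder_accepts(offer_fraction, disgust_threshold):
        offer = pot * offer_fraction
        return pot - offer, offer
    return 0.0, 0.0

# A 30% offer of a $10 pot is rejected; a 45% offer is accepted.
print(play_round(10, 0.30))  # (0.0, 0.0): responder pays to punish
print(play_round(10, 0.45))  # (5.5, 4.5)
```

The rejection branch is the interesting one: the responder gives up a guaranteed $3 purely to deny the proposer $7, which is what makes the punishment “altruistic.”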

These intuitions can be observed in our simian cousins and human children. When chimpanzees and human children are set up in ultimatum situations they also mostly offer fairish splits, and their willingness to sacrifice rewards to punish cheaters is the same as in adult humans.[1] [2] Even human infants under two years old react negatively when they observe unequal rewards given to others.[3] According to Paul Bloom, one of the leading researchers on the moral life of infants and the author of Just Babies [4], infants exhibit four moral sensibilities:

  • moral judgment: some capacity to distinguish between kind and cruel actions.
  • empathy: suffering at the pain of those around us and wishing to make this pain go away.
  • fairness: a tendency to favor those who divide resources equally.
  • justice: a desire to see good actions rewarded and bad actions punished.[5]

We experience these biologically rooted moral intuitions differently than we do other kinds of values. A group at DePaul University in Chicago surveyed students about a variety of moral attitudes. Some had been ranked by previous researchers[6] as biologically determined and heritable, such as attitudes towards premarital sex, racism and the death penalty, and others as only weakly influenced by biology and genes, such as attitudes about privacy. They found that the stronger the likely genetic influence on a value, the more deeply held the students’ beliefs about it were.[7]

Just as empathy has to be cultivated by intelligence to become a mature theory of mind and social intelligence, our moral intuitions can only take us so far. Is affirmative action fair? Is collateral damage in a war morally justified? Should a poor man steal bread? In order to cultivate the virtue of fairness we need to move from innate moral intuitions to mature moral reasoning.

Liberal and Conservative Brains

The psychologist Jon Haidt adds to this picture by showing that we are not all equally sensitive to inherited moral intuitions. Haidt began his research on moral intuitions by studying reactions to topics like cannibalism and incest. By unraveling how people felt about these deeply emotive questions he eventually identified a set of core moral intuitions which he, and the other proponents of “Moral Foundations Theory,” believe have evolutionary and neurobiological roots:

  • Care/harm: protecting others from harm
  • Fairness/cheating: treating others in proportion to their actions
  • Liberty/oppression: judging whether subjects are tyrannized
  • Ingroup loyalty: to your race, group, family, nation
  • Respect for authority/hierarchy
  • Sanctity/purity: avoiding degradation and disgusting things, foods, actions

Haidt found that conservatives, liberals and libertarians differ in their sensitivity to these innate, monkey-brain moral sentiments. Liberals are more sensitive to the first two, the impulses to protect others from harm and to fairness. Conservatives are less sensitive to these, and more sensitive to the impulses to protect the in-group, to defer to authority, and to have disgust for the profane. For instance, liberals are more likely to agree with the statement “I wish there were no nations or borders and we were all part of one big group,” and conservatives are more likely to agree that “Respect for authority is something all children need to learn.” Libertarians are more sensitive to the liberty/oppression intuition, and less sensitive to the other five.[8]

Iyer R, Koleva S, Graham J, Ditto P, Haidt J. Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians. PLOS One. 2012; 7(8): e42366

Using survey responses from 25,000 people, assessed both for their responsiveness to these moral intuitions and for their views on political issues, Haidt and his team found that the moral intuitions predicted positions on issues ranging from gay marriage and immigration to global warming and defense spending.[9] These innate moral sentiments help explain why our political debates so often sound like we are speaking different languages. We simply can’t understand how the other side can take certain kinds of arguments or sentiments seriously, and not see the importance of our views. As Haidt and his collaborators recently framed it, liberals and conservatives are as different as people from entirely different cultures.[10]

These political differences are deeply rooted in neurobiological differences. The idea that political ideology has biological roots seems counterintuitive, since political views seem so determined by the time and place we find ourselves in. Also, what evolutionary advantage could there have been for humans to develop such divergent moral and political views? But a recent study by researchers from Harvard University, Brown University and Penn State University dramatically illustrates how deeply biological political ideology appears to be. They recruited twenty-one adults, ten strongly liberal and eleven strongly conservative. The participants were asked to bathe with scent-free soap and refrain from smoking, drinking, deodorants, perfumes, sex and sleeping with humans or pets. They then taped a gauze pad under their arms for twenty-four hours. The pads were frozen in vials, and later thawed to be smelled by 125 participants whose politics had also been ascertained to be either strongly liberal or strongly conservative. The smellers rated each vial on a scale of 1 to 5 for attractiveness of the body odor. Controlling for gender, conservatives found the smell of other conservatives more attractive, and liberals liked how liberals smelled better. Somehow the biological bases of ideological preferences were being communicated through body odor.[11]

The evidence that these biological determinants of ideology are genetically heritable is now quite strong. A 2014 meta-analysis of the effects of genes on politics looked at nineteen studies of 12,000 twins in five countries spanning three decades.[12] As in studies of genetic influences on intelligence, they did not find any single gene that explained a significant amount about the twins’ political views. But they did find a significant and substantial genetic influence on political views across a wide range of issues in every country and time period. Attitudes towards things as diverse as school prayer, the death penalty, gay rights, foreign aid, feminism, taxation and global warming were all genetically influenced.


PFC vs. Amygdala

One of the most popular scenarios used in the emerging field of experimental philosophy is the trolley dilemma.[13] In the first trolley scenario the participant is told to imagine standing beside a train track, watching a runaway trolley about to hit five men down the track. The participant is standing next to a lever which can switch the trolley to a track on which only one man is standing. Will the participant switch the trolley to kill just one man instead of five? This is a classic utilitarian choice; the greater good for five outweighs the harm imposed on one. Most people choose to pull the lever.

In the second scenario, the “footbridge dilemma,” the participant is standing next to a very fat man on a bridge over the track. The participant is told that (however implausibly) the only way to stop the trolley hitting the five men is to push the fat man onto the track. Most people say they wouldn’t or couldn’t push the fat man, even though the result would be the same as in the first scenario: one man dies, five live.

Since neuroscientist Josh Greene and colleagues first used fMRI to watch the brains of people making these trolley decisions, more than a decade of experiments has shown that the utilitarian decision in the first scenario is largely handled by the rational prefrontal cortex, while the second “footbridge dilemma” strongly stirs up the emotional centers of the brain, overriding rational utilitarian calculation.[14] [15] Passive moral judgments based on intuitions such as “it’s never OK to push someone to their death” are based in the amygdala, while active moral reasoning, such as the reasoning necessary to rationalize pushing the fat man onto the track, relies on parts of the prefrontal cortex.[16] People with larger, more active and better connected prefrontal cortices are better able to filter and channel the hot moral intuitions — including the desires to protect others and punish others, but also disgust, loyalty, and submission to authority — bubbling up from our amygdalas. On the other hand, when people are sleepy, distracted, pressed for time or under stress they are less likely to make rational, utilitarian judgments.[17] [18] [19] [20] [21] [22]

Another way of understanding the genetic influences on moral and political thought is that our genes partly determine the relative influence of the prefrontal cortex versus the more emotional parts of the brain, like the amygdala, on our moral and political decision-making. Conservatives have larger and twitchier amygdalas than liberals and libertarians, startle more easily, and react more strongly to bad smells and unpleasant images.[23] [24] [25] [26] [27] [28] Conservatives are therefore more sensitive to the discomfort of uncertainty and cognitive dissonance, and work harder to avoid them.[29] Sensitivity to the two liberal moral intuitions, care and fairness, is correlated with larger volumes in the PFC, while sensitivity to the conservative moral intuitions, deference to authority, ingroup loyalty and purity/sanctity, is correlated with larger volumes in the emotive limbic system.[30] When the influence of the PFC over the amygdala is reduced by alcohol or other cognitive burdens, people express more racial bias[31] and more conservative opinions,[32] and people become more conservative and morally judgmental when the amygdala’s disgust response is triggered by bad odors or the feeling of stickiness.[33] [34]

Liberal Virtues

How then can we understand liberal versus conservative ideas of virtue? As Jon Haidt and colleagues recently observed, the intuitive style of thought favored by conservatives is the human default, while the analytical style more common among liberals has to be learned.[35] Liberals and conservatives don’t actually differ in their moral intuitions about authority, ingroup loyalty and sacred values. Both liberals and conservatives have prefrontal cortices that have been taught Enlightenment values, and amygdalas pinging them with disgust and alarm reactions. Rather, their differences emerge because the prefrontal cortices of liberals filter out the signals from the amygdala more successfully than those of conservatives. When liberals feel impulses toward deference to authority and hierarchy, they are checked by reminders of the importance of equality and of questioning authority. When liberals feel uneasy about outgroups, or impulses to favor their own kind, they are checked by reminders of the importance of tolerance and universalism.[36] When liberals feel revulsion at the breaking of taboos, such as seeing two men kiss, the feelings are checked by reminders that “They aren’t hurting anyone…”

The real difference between liberal virtue and conservative virtue then is why and how the two tribes come to moral conclusions. Conservatives believe that moral intuitions are self-justifying. Liberals believe that reason needs to interrogate our intuitions. This leads liberals to be more tolerant and humble in their moral and political claims, a cautiousness and diffidence that conservatives interpret as weakness and uncertainty.

Political IQ: https://www.psychologytoday.com/blog/the-scientific-fundamentalist/201003/why-liberals-are-more-intelligent-conservatives

Intelligence, Personality and Ideology

In 2010 the evolutionary psychologist Satoshi Kanazawa published an article provocatively titled “Why liberals and atheists are more intelligent.” Kanazawa reviewed the large body of evidence that correlates intelligence with atheism[37] and political liberalism,[38] [39] and proposed the “Savanna-IQ Interaction Hypothesis.”[40] The theory starts with the observation that human brains first evolved in the African savanna between 2.5 million and 130,000 years ago. Then, as we faced environmental challenges and started migrating around the globe, we had to evolve new cognitive abilities to deal with novel situations. This flexible form of learning and problem-solving is the basis of general intelligence, which then allowed us to invent tools, agriculture, and civilization. Individuals and groups with more of this ability are more open to novel experiences, more tolerant of ambiguity and complexity, and more open to novel ways of thinking such as atheism and liberalism.

Earlier I reviewed how the personality trait of openness to novelty is partly genetic and correlated with intelligence; it is also correlated with political liberalism.[41] Across more than 70 studies of personality and politics reviewed by Sibley and Duckitt, people who scored higher on openness to experience were less right-wing, racially prejudiced and authoritarian.[42] Just as variations in serotonin genes may partly explain why some populations are happier, geographic variations in the genetic settings for personality may be influencing the politics of countries and American states. Using personality data for 600,000 Americans, a group at the University of Illinois found that the liberalism of a state was strongly related to its citizens’ level of openness to experience.[43]

Liberals are not without their own cognitive biases of course, and there are intelligent conservatives and stupid liberals. Both liberals and conservatives are prone to tune out information that doesn’t fit with their worldview.[44] But the biases of liberals and conservatives are not symmetrical. The psychological factors that tend toward liberalism undercut cognitive bias in ways that conservative psychology does not. Liberals are far more invested in the project of a deliberative democracy guided by science and rational discussion.

Jon Haidt, however, has drawn a very different conclusion from the differences between liberals and conservatives. To Haidt, liberals are deaf to important conservative moral intuitions and should work harder to appreciate them. But this is a version of the “naturalistic fallacy,” the idea that something is right simply because it exists. The root of liberal deafness to conservative moral intuitions is not that liberals lack a cognitive faculty, but that, in general, they are better at exercising their cognitive faculties.

The enhancement of fairness, moral reasoning and the “liberal virtues” is therefore part of the larger project of cognitive enhancement, focused on becoming increasingly aware of and independent of one’s own cognitive biases.

Building a Fairer Society

Education

Much of the spread of the liberal virtues of tolerance, anti-authoritarianism, egalitarianism and secularism can be attributed to rising levels of education, which both spreads those norms and strengthens the prefrontal cognitive faculties and habits of reflection that enable them. For instance, educational level is the strongest predictor of Americans’ tolerance of sexual and racial minorities and general liberalism,[45] [46] and of Europeans’ acceptance of immigrants.[47] Education also predicts endorsement of the fairness and caring moral intuitions. In an analysis of almost 60,000 people who had taken the Haidt et al. Moral Foundations survey, Leeuwen, Koenig, Graham and Park found that people with more education were more likely to endorse the caring and fairness moral intuitions.[48]

Class and Social Equality

The structure of society, and our position within it, also has a powerful effect on the way we view morality and fairness. People not only become more tolerant as they are exposed to higher education, but also as they become more financially secure[49] in more equal societies.[50] Citizens of more equal societies are also generally more supportive of redistributive policies; acceptance of social inequality is both a cause and an effect of actual social inequality.[51] [52] On the other hand, the affluent — influenced by their vested interest in society — are generally less supportive of egalitarian redistribution than the poor.[53]

So the natural political polarization along class lines is between an egalitarian but racial-nationalist, moralistic and authoritarian working class, and tolerant and cosmopolitan but inegalitarian middle and upper classes.[54] There is less of this moral polarization in more equal countries, however; the relatively equal Finns and Danes have a higher moral consensus around the importance of an equal and tolerant society than the relatively unequal Britons and Swiss.[55] In other words, social inequality and social class distort the impact of liberal virtues on moral cognition, especially by weakening the egalitarian moral intuitions of educated and affluent cosmopolitans, while liberal virtues are expressed more consistently and broadly in more equal societies.

Training to Reduce Implicit Racial Bias

Implicit Bias Test https://implicit.harvard.edu/implicit/

One especially timely application of fairness enhancement is the attempt to reduce implicit racial biases in policing, spurred by the disproportionate killing of black men by American police. But evidence for the ubiquity of unconscious biases about race, gender and all kinds of things has been accumulating for sixty years, since a study of racialized attitudes towards dolls helped convince the US Supreme Court to decide the desegregation case Brown v. Board of Education. The most common tool used to test for implicit racial bias today is the Harvard Implicit Association Test (IAT). The IAT asks subjects to rapidly match positive and negative words on a computer screen with white or black faces. Associating positive words with white faces and negative words with black faces more rapidly than the reverse pairings is taken as a measure of unconscious racial bias, which is often at odds with the professed values of the subject. The test finds unconscious negative associations with black faces in both white and black subjects.
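Mechanically, the IAT reduces to a standardized reaction-time difference, conventionally reported as a “D score”: the mean latency difference between the two pairings divided by a pooled standard deviation. The sketch below uses made-up latencies and a simplified version of the scoring (the real algorithm also drops error trials and extreme latencies):

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: the difference between mean latency in
    the 'incompatible' block (e.g. black faces + positive words) and
    the 'compatible' block (e.g. white faces + positive words),
    divided by the pooled standard deviation of all trials.
    Larger positive D = faster on the stereotype-consistent pairing."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical reaction times in milliseconds for one subject.
compatible = [620, 650, 600, 640, 610]
incompatible = [720, 760, 700, 740, 730]
print(round(iat_d_score(compatible, incompatible), 2))  # 1.78
```

Conventionally, D scores around 0.15, 0.35 and 0.65 are read as slight, moderate and strong associations; the synthetic subject above would count as strongly biased.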

Many strategies for reducing biases have been attempted, but only now are we systematically evaluating their efficacy. As with the rethinking of psychotherapeutic approaches to trauma, which has discovered that some forms of talk therapy reinforce rather than dampen trauma, research on anti-racism programs has found that some can actually cause resentment and reinforce racial antagonism.[56] Some of the most effective interventions turn out not to be discussions of racism or the importance of fairness, but rather exercises that bind positive associations with stigmatized groups, such as reading about the heroism of black soldiers, using a black avatar in a video game[57] or imagining oneself being rescued by a black firefighter.[58] Loving-kindness meditation, which explicitly works on associating positive emotions with people you don’t like, has also been found to be effective in reducing implicit racial bias. In one controlled trial that compared whites randomly assigned to practice loving-kindness meditation, talk about loving-kindness or do nothing, the loving-kindness meditators saw significant declines in implicit racism.[59]

These methods work by changing the emotional valence of the stigmas bubbling up from the amygdala. Another approach however is to slow down those reactions and give the prefrontal cortex a chance to intercept and reject the biases. There is evidence, in fact, that people with stronger executive function exhibit less implicit bias.[60] [61] By shining a light of awareness on our biased sentiments we can develop our moral muscles.[62]

One study that demonstrated the effects of bias awareness looked at the calls made by National Basketball Association (NBA) referees before and after a major report on referee racial bias was published. The report showed that referees were more likely to call personal fouls against basketball players who were of a different race than the referee.[63] The report was released in May 2007, and received a lot of attention in basketball circles. When the team looked for the same patterns after the report had been published, however, they had disappeared.[64] The referees, along with society, had examined their behavior and overcome their unconscious biases.
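The disparity the report measured is, at bottom, a difference in foul-call rates between same-race and other-race referee-player pairings (the actual study used regressions with many controls). A toy sketch with invented counts:

```python
def foul_rate(fouls, minutes):
    """Fouls called per 48 player-minutes officiated."""
    return 48 * fouls / minutes

# Invented aggregate counts: (fouls called, player-minutes officiated).
same_race = (4100, 44000)    # referee and player share a race
other_race = (4600, 44000)   # referee and player differ

gap = foul_rate(*other_race) - foul_rate(*same_race)
print(f"same-race:  {foul_rate(*same_race):.2f} fouls/48min")
print(f"other-race: {foul_rate(*other_race):.2f} fouls/48min")
print(f"own-race bias gap: {gap:.2f}")
```

A positive gap of this kind, surviving statistical controls, was the report’s evidence of own-race bias; its disappearance after publication is what the follow-up study found.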

Practicing Mindfulness of Biases

Would the same effect have been achieved, however, if only the referees had become aware of their racial biases, and not society as well? One suggestive study found that bilingual people make more utilitarian decisions in the trolley dilemma when they use their less-used language; having to think harder slows down the instinctive reaction of the amygdala to reject pushing the fat man.[65] There is also accumulating evidence that mindfulness meditation can dampen biases such as ageism and racism,[66] change political cognition, and actually shrink the size of the amygdala.[67]

In 2013 geneticist James Fowler and some colleagues[68] recruited 139 people for an experiment on the effect of mindfulness on political opinion. The participants were told they would be shown some disgusting images, and were assigned to one of three groups. The first group was given this instruction to mindfully re-appraise feelings of disgust:

As you view the images, please try to adopt a detached and unemotional attitude. Or, you could think about the positive aspect of what you are seeing. Please try to think about what you are seeing objectively, watch all images carefully, but please try to think about what you are seeing in such a way that you feel less negative emotion.

A second group was instructed to suppress feelings of disgust:

As you view the images, if you have any feelings, please try your best not to let those feelings show. Watch all images carefully, but try to behave so that someone watching you would not know that you are feeling anything at all.

The third group was given no instruction. Then the three groups were shown images of things like cockroaches and dirty toilets, and asked to fill out the Moral Foundations Questionnaire that Jon Haidt developed to test moral intuitions. The mindful re-appraisal group was significantly less disgusted, and significantly less likely to express moral purity concerns on the Moral Foundations questions.

Next, they recruited 119 people and first asked them to answer political questions. Then they wired them up to track their heart rate, and asked them a series of questions to measure their sensitivity to disgust, such as whether they would touch a dead body. Then they were randomly assigned to the three groups (re-appraisal, suppression and no instruction) and shown disgusting images. The mindful re-appraisers’ heart rates did not respond to the images, while the other two groups’ did. Then they were tested on moral intuitions and policy views. Disgust-prone subjects remained more conservative in the suppression and no-instruction groups. But for the mindful re-appraisers, disgust sensitivity was no longer related to adopting morally and politically conservative views.

Fairness Reminders and Ethical Assistance Software

In a sense, we have used exocortical aids to improve moral decision-making since the beginning of civilization, in the form of amulets, tattoos, clothing and haircuts designed to remind us and our community of our moral commitments. Today the moral exocortex has expanded to include “What Would Jesus Do?” bracelets and electronic Bible and Koran apps. But many secular digital aids are also emerging.[69] The New York State Bar Association, for instance, has created an app that gives users access to more than 900 decisions of its Professional Ethics Committee on issues confronting judges and attorneys. The MoralCompass app provides a flowchart of moral decision-making questions, and the SeeSaw app allows users to query other users about which action they should take in a situation.[70]

Caregiving robot http://phys.org/news/2012-03-ethical-robots.html

Secular ethics assistants will also likely emerge from the efforts to design “moral machines”[71] and ethical artificial intelligence.[72] Some of this work is being done to provide onboard rules of engagement for autonomous battlefield robots, but moral decision applications are being considered for robots in many occupations, including industry, transportation, and medicine. Should your autonomous car drive you into the river to avoid killing five others?[73] How should a robotic home caregiver react when a demented patient refuses to bathe, eat or take medication?[74] The effort to codify and balance all the factual and value considerations involved in messy, human moral decision-making will be very complicated, and will result in multiple possible morality settings, since there is wide moral variability among humans. As Wallach and Allen have argued, fully replicating recognizable human moral decision-making in machines will probably require both human-level cognitive abilities and the program of character development and moral reasoning that produces mature morality in humans.
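What “multiple possible morality settings” might mean in practice can be sketched as different weightings over the same moral considerations, with the machine ranking candidate actions under whichever weighting is selected. Everything here (the settings, actions and scores) is invented for illustration; it is not any deployed system:

```python
# Candidate actions for the caregiving-robot dilemma, each scored 0-1
# on two competing moral considerations.
ACTIONS = {
    "insist_patient_bathes": {"care": 0.8, "autonomy": 0.2},
    "defer_and_retry_later": {"care": 0.5, "autonomy": 0.9},
    "alert_human_caregiver": {"care": 0.6, "autonomy": 0.6},
}

# Two "morality settings": different weights on the same considerations.
SETTINGS = {
    "paternalist": {"care": 0.8, "autonomy": 0.2},
    "autonomy_first": {"care": 0.3, "autonomy": 0.7},
}

def choose_action(setting):
    """Pick the action with the highest weighted moral score."""
    weights = SETTINGS[setting]
    def score(action):
        return sum(weights[k] * v for k, v in ACTIONS[action].items())
    return max(ACTIONS, key=score)

print(choose_action("paternalist"))     # insist_patient_bathes
print(choose_action("autonomy_first"))  # defer_and_retry_later
```

The arithmetic is trivial; Wallach and Allen’s point is that the hard part is deciding what the considerations, scores and weights should be in the first place.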

Eventually, as these morality AIs become more sophisticated, and woven into our environment and exocortices, and then tied directly to our brains, they will become a seamless part of our own cognition, allowing us to choose consciously to achieve levels of moral consistency that are currently impossible for most. [75]

But what if our inner AI angel reminders aren’t as loud as the persistent voice of our hind brain devils? Are there ways that we can affect the way our brain works to strengthen the hand of fairness and moral cognition?

Zapping Yourself to Fairness

One line of research that both illuminates the neurology of morality and suggests future avenues for fairness engineering of the brain is the investigation of the effects of transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (TDCS) on moral cognition. Teams in the Netherlands and at Harvard have used TMS to alternately suppress the different parts of the brain involved in the interplay between beliefs, empathy and emotions in making moral decisions. When the part of the prefrontal cortex (the right DLPFC) that helps rational, utilitarian decisions win out over emotional ones was suppressed, subjects made moral decisions guided more by empathy. But when the empathy-related part of the brain (the TPJ) was suppressed, they made more impersonal, rational, utilitarian decisions.[76]

Fairness Drugs

As we’ve discussed earlier, drugs and neurotransmitters have a complicated effect on moral cognition. For instance, drugs like Prozac which increase serotonin in the brain increase sensitivity to other people’s pain, reducing both rational and emotive impulses to hurt or punish others. Depending on the situation, that may or may not increase the fairness of a decision. A real fairness drug would ideally change the balance of influence between the instinctive amygdala and the executive functions of the prefrontal cortex. Stimulants would presumably have this effect, but there is as yet little research on the effect of stimulants on implicit biases. One suggestive exception is a study that found that subjects dosed with caffeine were more likely to consider alternatives to deeply held beliefs.[77]


Conversely, tamping down the urgency of the signaling from the amygdala would also help in thinking more fairly. For instance, the blood pressure medication propranolol reduces the flickers of anxiety generated when white subjects see black faces, and a team at Oxford found it also improves their performance on the implicit bias test.[78] On the other hand, the same group also found that propranolol reduced the likelihood that subjects would push the fat man onto the track, so the effect of the drug is clearly not a simple dampening of the amygdala.[79]

Another possible candidate is the psychedelic drug psilocybin, which decreases the reactivity of the amygdala,[80] helps silence conditioned fear,[81] and generates a persistent improvement in the openness personality trait,[82] all of which suggests it might reduce cognitive biases and improve fairness. But as yet there is no research on the effects of psychedelics on moral sentiments and cognition such as implicit racial biases.

Fairness Gene Therapy

As I reviewed earlier, there is substantial evidence of genetic determinants of the relative strengths of our prefrontal cortices and our amygdalas, and thereby of our political and moral predispositions. One can certainly imagine these variants becoming future gene therapies for moral and political enhancement, although the prospect of such therapies being encouraged or mandated by governments immediately raises cognitive liberty concerns that I will address later.

We can, however, imagine someone plagued by intense and disabling xenophobia voluntarily submitting to a therapeutic regime of bias reduction exercises, complemented by drug or gene therapies to modulate a hyperreactive amygdala and strengthen executive function.

Effects of Enhancing Fairness

Conservatives have intellectual defenses of conservative virtues, and of their own interpretations of caring and fairness. It is possible to imagine a conservative moral enhancement that pursued the same strengthening of prefrontal executive function and reduction of amygdalar reactivity proposed here, and yet resulted in rational and utilitarian defenses of ingroup loyalty, respect for authority, and the sanctity of group symbols. That is in fact the social functionalist position that Jonathan Haidt has developed: that conservative virtues play an essential role in society.

On the other hand, perhaps the path of social progress is for liberals to fight for Enlightenment values, and then for conservatives to embed those values in prerational emotions and conservative virtues. After all, most American conservatives today defend the sanctity of the American Constitution and deference to the authority of elected government, not slavery or the divine right of kings.

If, however, the spread of Enlightenment values, critical reason, and liberalizing cognitive faculties is eroding the neurological basis for conservative virtue, then the outstanding question is whether conservative virtues can do the same work when rooted in the prefrontal cortex rather than the amygdala. In other words, we may recognize that we need to believe in some common, collective, prerational values, but can coming to that conclusion through reason ever be as binding as belief was? Can we simply decide to believe in unicorns once we know they don’t exist?

Yuval Levin framed the problem concisely in his “The Paradox of Conservative Bioethics.”[83] Conservatives are obliged to argue for their taboos through democratic debate, but taboos wither under rational, democratic scrutiny.

Conservatism traditionally leans on and seeks to protect the implicit wisdom contained in age-old institutions and social arrangements. It goes beyond this of course, and makes arguments and is at home in liberal democratic politics. But much of its appeal, and many of its arguments, are rooted in a sense that certain of the old assumptions have some value and some truth. A conservative bioethics, however, is forced to proceed by pulling up its own roots, and to begin by violating some of the very principles it seeks to defend.

The neurological and psychological evidence on how we think about fairness strongly suggests that modernity is already enhancing the brain and our virtues in a liberal direction, and that practices, devices, and therapies that dampen biases and enhance fairness could push that process further. The result of enhanced fairness will hopefully be more self-awareness, less racism and xenophobia, more capacity for peaceful, rational civil discourse, and more support for egalitarianism. If, however, the enhancement of liberal virtue results in moral relativism and a lack of social cohesion, as Haidt warns, this would be another example of the need for the excesses of one virtue to be complemented and tempered by others.


[1] Proctor D, Brosnan SF, de Waal FBM. How fairly do chimpanzees play the ultimatum game? Communicative & Integrative Biology. 2013; 6(3): e23819.

[2] Proctor D, Williamson RA, de Waal FBM, Brosnan SF. Chimpanzees play the ultimatum game. PNAS. 2013; 110(6): 2070–2075.

[3] Sloane S, Baillargeon R, Premack D. Do Infants Have a Sense of Fairness? Psychological Science. 2012; 23(2) 196–204.

[4] Bloom P. Just Babies: The Origins of Good and Evil. 2013. Crown.

[5] Bloom P. Did God Make These Babies Moral? New Republic. 2014; http://www.newrepublic.com/article/116200/moral-design-latest-form-intelligent-design-its-wrong

[6] Eaves LJ, Eysenck HJ, Martin NG. Genes, Culture and Personality: An Empirical Approach. 1989. San Diego, CA: Academic Press.

[7] Brandt MJ, Wetherell GA. What Attitudes are Moral Attitudes? The Case of Attitude Heritability. Social Psychological and Personality Science. 2012. 3(2) 172–179

[8] Iyer R, Koleva S, Graham J, Ditto P, Haidt J. Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians. PLOS One. 2012; 7(8): e42366.

[9] Koleva SP, Graham J, Iyer R, Ditto PH, Haidt J. Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality. 2012; 46: 184–194.

[10] Talhelm T, et al. Liberals Think More Analytically (More “WEIRD”) Than Conservatives. Pers Soc Psychol Bull. 2015; 41(2): 250–267.

[11] McDermott R, Tingley D, Hatemi PK. Assortative Mating on Ideology Could Operate Through Olfactory Cues. American Journal of Political Science. 2014; May: 1–9.

[12] Hatemi PK, et al. Genetic Influences on Political Ideologies: Twin Analyses of 19 Measures of Political Ideologies from Five Democracies and Genome-Wide Findings from Three Populations. Behavior Genetics. 2014; 44: 282–294.

[13] Thomson JJ. The trolley problem. Yale Law Journal. 1985; 94: 1395–1415.

[14] Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science. 2001. 293: 2105–2108.

[15] Shenhav A, Greene JD. Integrative Moral Judgment: Dissociating the Roles of the Amygdala and Ventromedial Prefrontal Cortex. Journal of Neuroscience. 2014; 34(13): 4741–4749.

[16] Sevinc G, Spreng RN. Contextual and Perceptual Brain Processes Underlying Moral Cognition: A Quantitative Meta-Analysis of Moral Reasoning and Moral Emotions. PLOS One. 2014; 9(2): e87427.

[17] Olsen OK, Pallesen S, Eid J. The Impact of Partial Sleep Deprivation on Moral Reasoning in Military Officers. Sleep. 2010; 33(8): 1086–1090.

[18] Starcke K, Ludwig AC, Brand M. Anticipatory stress interferes with utilitarian moral judgment. Judgment and Decision Making. 2012; 7(1): 61–68.

[19] Trémolière B, et al. Mortality salience and morality: Thinking about death makes people less utilitarian. Cognition. 2012; 124(3): 379–384.

[20] Conway P, Gawronski B. Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology. 2013; 104(2): 216–235.

[21] Greene J, et al. Cognitive load selectively interferes with utilitarian moral judgment. Cognition. 2008; 107(3):1144–1154.

[22] Suter RS, Hertwig R. Time and moral judgment. Cognition. 2011; 119 (3):454–458.

[23] Schreiber D, et al. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLOS One. 2013; 8(2): e52970.

[24] Oxley DR, et al. Political Attitudes Vary with Physiological Traits. Science. 2008; 321: 1667–1670.

[25] Dodd MD, et al. The Left Rolls with the Good; The Right Confronts the Bad: Physiology and Cognition in Politics. Philosophical Transactions of the Royal Society Biological Sciences. 2012; 367(1589): 640–649.

[26] Smith KB, et al. Disgust Sensitivity and the Neurophysiology of Left-Right Political Orientations. PLOS ONE. 2011; 6(10): e25552.

[27] Kanai R, et al. Political Orientations Are Correlated with Brain Structure in Young Adults. Current Biology. 2011; 21(8): 677–680.

[28] Helzer EG, Pizarro DA. Dirty Liberals! Reminders of Physical Cleanliness Influence Moral and Political Attitudes. Psychological Science. 2011; 22(4): 517–522.

[29] Nam HH, Jost JT, Van Bavel JJ. ‘‘Not for All the Tea in China!’’ Political Ideology and the Avoidance of Dissonance-Arousing Situations. PLOS One. 2013; 8(4): e59837.

[30] Lewis GJ, Kanai R, Bates TC, Rees G. Moral Values Are Associated with Individual Differences in Regional Brain Volume. Journal of Cognitive Neuroscience. 2012; 24(8): 1657–1663.

[31] Bartholow BD, Dickter CL, Sestir MA. Stereotype activation and control of race bias: Cognitive control of inhibition and its impairment by alcohol. Journal of Personality and Social Psychology. 2006; 90(2): 272–287.

[32] Eidelman S, et al. Low-Effort Thought Promotes Political Conservatism. Personality and Social Psychology Bulletin. 2011; 38(6) 808–820.

[33] Schnall S, Benton J, Harvey S. With A Clean Conscience: Cleanliness Reduces the Severity of Moral Judgments. Psychological Science. 2008; 19(12): 1219–1222.

[34] Adams TG, Stewart PA, Blanchar JC. Disgust and the Politics of Sex: Exposure to a Disgusting Odorant Increases Politically Conservative Views on Sex and Decreases Support for Gay Marriage. PLOS One. 2014; 9(5): e95572.

[35] Talhelm T, Haidt J, Oishi S, Zhang X, Miao FF, Chen S. Liberals Think More Analytically (More “WEIRD”) Than Conservatives. Personality and Social Psychology Bulletin. 2015; 41(2): 250–267. DOI: 10.1177/0146167214563672

[36] Amodio DM, Devine PG, Harmon-Jones E. Individual differences in the regulation of intergroup bias: The role of conflict monitoring and neural signals for control. Journal of Personality and Social Psychology. 2008; 94: 60–74.

[37] Zuckerman M, Silberman J, Hall JA. The Relation Between Intelligence and Religiosity: A Meta-Analysis and Some Proposed Explanations. Personality and Social Psychology Review. 2013; 17(4): 325–354.

[38] Hodson G, Busseri MA. Bright Minds and Dark Attitudes: Lower Cognitive Ability Predicts Greater Prejudice Through Right-Wing Ideology and Low Intergroup Contact. Psychological Science. 2012; 23(2): 187–195.

[39] Carl N. Verbal intelligence is correlated with socially and economically liberal beliefs. Intelligence. 2014; 44:142–148.

[40] Kanazawa S. Why liberals and atheists are more intelligent. Social Psychology Quarterly. 2010; 73, 33–57. doi:10.1177/019027

[41] Verhulst B, Eaves LJ, Hatemi PK. Correlation not Causation: The Relationship between Personality Traits and Political Ideologies. American Journal of Political Science. 2012; 56(1): 34–51.

[42] Sibley CG, Duckitt J. Personality and Prejudice: A Meta-Analysis and Theoretical Review. Personality and Social Psychology Review. 2008; 12:248–279.

[43] Mondak JJ, Canache D. Personality and Political Culture in the American States. Political Research Quarterly. 2014; 67(1) 26–41.

[44] Nisbet EC, Cooper KE, Garrett RK. The Partisan Brain: How Dissonant Science Messages Lead Conservatives and Liberals to (Dis)Trust Science. Annals of the American Academy of Political and Social Science. 2015; 658(1):36–66.

[45] Kozloski MJ. Homosexual Moral Acceptance and Social Tolerance: Are the Effects of Education Changing? Journal of Homosexuality. 2010; 57:1370–1383.

[46] Davis JA. A generation of attitude trends among US householders as measured in the NORC General Social Survey 1972–2010. Social Science Research. 2013; 42: 571–583.

[47] Borgonovi F. The relationship between education and levels of trust and tolerance in Europe. British Journal of Sociology. 2012; 63(1): 146–167.

[48] Van Leeuwen F, Koenig BL, Graham J, Park JH. Moral concerns across the United States: associations with life-history variables, pathogen prevalence, urbanization, cognitive ability, and social class. Evolution and Human Behavior. 2014; 35: 464–471.

[49] Carvacho H, et al. On the relation between social class and prejudice: The roles of education, income, and ideological attitudes. European Journal of Social Psychology. 2013;43: 272–285.

[50] Milligan S. Economic Inequality, Poverty, and Tolerance: Evidence from 22 Countries. Comparative Sociology. 2012; 11(4): 594–619.

[51] Kerr WR. Income inequality and social preferences for redistribution and compensation differentials. Journal of Monetary Economics. 2014; 66: 62–78.

[52] Trump KS. The Status Quo and Perceptions of Fairness: How Income Inequality Influences Public Opinion. Dissertation submitted to the Harvard University Faculty of Arts and Sciences. 2012.

[53] Andersen R, Yaish M. Public Opinion on Income Inequality in 20 Democracies: The Enduring Impact of Social Class and Economic Inequality. AIAS GINI Discussion Paper 48. 2012.

[54] Flavin P. Differences in Income, Policy Preferences, and Priorities in American Public Opinion. Paper presented at the annual meeting of the Midwest Political Science Association 67th Annual National Conference. 2009. http://citation.allacademic.com/meta/p362665_index.htm

[55] Kulin J, Svallfors S. Class, Values, and Attitudes Towards Redistribution: A European Comparison. Eur Sociol Rev. 2013; 29 (2): 155–167. doi: 10.1093/esr/jcr046

[56] Moss-Racusin CA, et al. Scientific Diversity Interventions. Science. 2014; 343(7): 615–616.

[57] Peck TC, et al. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition. 2013; 22(3): 779–787.

[58] Lai CK, et al. Reducing implicit racial preferences: I. A comparative investigation of 17 interventions. J Exp Psychol Gen. 2014;143(4):1765–85. doi: 10.1037/a0036260.

[59] Kang Y, Gray JR, Dovidio JF.The nondiscriminating heart: lovingkindness meditation training decreases implicit intergroup bias. J Exp Psychol Gen. 2014;143(3):1306–13. doi: 10.1037/a0034150.

[60] Diamond BJ, et al. Implicit Bias, Executive Control and Information Processing Speed. Journal of Cognition and Culture. 2012; 12(3–4): 183–193.

[61] Ito TA, et al. Toward a comprehensive understanding of executive cognitive function in implicit racial bias. Journal of Personality and Social Psychology. 2015; 108(2): 187–218.

[62] Fitzgerald C. A Neglected Aspect of Conscience: Awareness of Implicit Attitudes. Bioethics. 2014; 28(1): 0269–9702.

[63] Price J, Wolfers J. Racial Discrimination Among NBA Referees. The Quarterly Journal of Economics. 2010; 125 (4): 1859–1887.

[64] Pope DG, Price J, Wolfers J. Awareness Reduces Racial Bias. NBER Working Paper. 2013; http://www.nber.org/papers/w19765

[65] Costa A, et al. Your Morals Depend on Language. PLoS ONE. 2014; 9(4): e94842.

[66] Lueke A, Gibson B. Mindfulness Meditation Reduces Implicit Age and Race Bias: The Role of Reduced Automaticity of Responding. Social Psychological and Personality Science. 2014; 1–8.

[67] Holzel BK, et al. Stress reduction correlates with structural changes in the amygdala. SCAN. 2010; 5: 11–17.

[68] Schreiber D, Fonzo G, Simmons AN, Dawes CT, Flagan T, Fowler JH. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLOS One. 2013; 8(2): e52970. doi:10.1371/journal.pone.0052970

[69] Selinger E, Seager T. “Digital Jiminy Crickets.” Slate. 2012. https://slate.com/technology/2012/07/ethical-decision-making-apps-damage-our-ability-to-make-moral-choices.html

[70] Statt N. Seesaw App Could Bring “Wisdom Of The Crowd” To Moral Dilemmas. ReadWrite. 2013 27 Apr. https://readwrite.com/2013/04/27/seesaw-app-could-bring-wisdom-of-the-crowd-to-moral-dilemmas/

[71] Wallach W, Allen C. Moral Machines: Teaching Robots Right from Wrong. 2009. Oxford University Press.

[72] Anderson M & Anderson SL. Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine. 2007. 28(4). DOI: https://doi.org/10.1609/aimag.v28i4.2065

[73] Goodall NJ. Machine Ethics and Automated Vehicles. In Meyer G, Beiker S (eds.), Road Vehicle Automation. Springer. 2014; 93–102.

[74] Lin P, Abney K, Bekey GA. Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. 2011.

[75] Savulescu J, Maslen H. Moral Enhancement and Artificial Intelligence: Moral AI? Romportl J, et al. (eds.), Beyond Artificial Intelligence. Springer. 2015: 79–95.

[76] Jeurissen D, Sack AT, Roebroeck A, Russ BE, Pascual-Leone A. TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing. Frontiers in Neuroscience. 2014; 8(18): 1–9.

[77] Martin PY, Hamilton VE, McKimmie BM, Terry DJ, Martin R. Effects of caffeine on persuasion and attitude change: The role of secondary tasks in manipulating systematic message processing. European Journal of Social Psychology. 2007; 37: 320–338.

[78] Terbeck S, et al. Propranolol reduces implicit negative racial bias. Psychopharmacology. 2012; 222:419–424

[79] Terbeck S, Kahane G, McTavish S, Savulescu J, Levy N, Hewstone M, Cowen PJ. Beta adrenergic blockade reduces utilitarian judgement. Biological Psychology. 2013; 92(2): 323–328.

[80] Kraehenmann R, et al. Psilocybin-Induced Decrease in Amygdala Reactivity Correlates with Enhanced Positive Mood in Healthy Volunteers. Biological Psychiatry. 2014; doi:10.1016/j.biopsych.2014.04.010

[81] Catlow BJ, et al. Effects of psilocybin on hippocampal neurogenesis and extinction of trace fear conditioning. Exp Brain Res. 2013; DOI: 10.1007/s00221-013-3579-0

[82] MacLean KA, et al. Mystical experiences occasioned by the hallucinogen psilocybin lead to increases in the personality domain of openness. J Psychopharmacol. 2011; 25(11): 1453–1461.

[83] Levin Y. The Paradox of Conservative Bioethics. The New Atlantis. 2003; 1: 53–65.



James J. Hughes PhD

James J. Hughes is Executive Director of the Institute for Ethics and Emerging Technologies, and a research fellow at UMass Boston’s Center for Applied Ethics.