The 100-year history of morality as a science

Payal Lal
Cognitive Handshakes
7 min read · Dec 1, 2018

Morality has been a scholarly subject for centuries, but only recently has it become a subject of scientific inquiry. Psychologists now address some of the biggest questions philosophers have long been asking, using field research, experimentation, and even neuroscience: Where does morality come from? Does morality stem from reasoning or from emotion? Is there a biological underpinning to morality?

The case for reasoning

Psychologists began looking into morality around the mid-twentieth century from the perspective of cognitive developmentalists such as Piaget and Kohlberg, key theorists in a subfield of psychology focused on the development of thinking and decision-making.

This period in psychology, and in history, was dominated by the view that human beings were rational agents, capable of using higher-order cognitive functioning to reason through difficult decisions. This emphasis on reasoning came about largely in response to the shortcomings of structuralism, functionalism, psychodynamic theory and behaviorism, each of which fell short of fully accounting for the complexity of the human mind and behavior. Cognitive psychology was able to address, at least in part, the mechanisms responsible for this complexity by focusing on the parts of the mind and brain that contribute to perception and experience: attentional control, memory, inhibition of irrelevant or unwanted thoughts, and reasoning. When it came to understanding something like morality, the focus at this time was on the way people reasoned their way through a moral dilemma to reach a judgment of right or wrong, should or should not.

For instance, Kohlberg developed a stage theory of moral reasoning in which increasingly sophisticated forms of reasoning mapped onto age-related stages of development. By presenting children of different ages, as well as adults, with complex moral dilemmas (e.g., is it okay for a man to steal medicine to save his dying wife?) and asking them to reason their way to a moral decision, Kohlberg was able to map different features of reasoning onto periods of moral development, from least to most sophisticated, and identify the cognitive mechanisms, as well as the social environment, responsible for a particular type of judgment. To illustrate, younger children (pre-school aged) were found to be more egoistic in their judgments, focusing on whether actions resulted in good or bad outcomes for themselves. This makes sense given the limited cognitive development that has occurred by this stage: children are only able to focus on outcomes for the self rather than for others, because perspective-taking abilities are limited to non-existent. In addition, the social environment of the pre-school aged child is one where children are exposed to a great deal of rule-based morality in a limited social context: caregivers hand out rewards and punishments for following or breaking rules, and peer interactions are not as regular as they will become when children enter school.

The bottom line is that this period in psychology emphasized reasoning and, with respect to morality, reasoning about harmful outcomes.

Emotions take over

Towards the end of the twentieth century, however, psychology began to shift its focus. The backdrop for this shift was the transition out of the industrial age, a practical age in which people were seen as self-determined and rational, able to use hard work and mental effort to achieve their goals. As society moved into a technological age, this view of human beings lost ground, in large part due to the impact of the computer. In a comparison with a computer system, the human mind didn't fare so well. It became evident that our minds had relatively low processing power and capacity compared to the computer and, most importantly, poor accuracy in computation because of two problems that will never plague the machine: emotions and biases.

We began to see just how impactful emotions were in influencing our judgments. The field of psychology was focusing more and more on the impact of emotions on judgments in its research, exposing the lack of rationality that was previously thought to exist in decision-making. In addition, social psychology was burgeoning as a subfield in response to dramatic world events and shocking results from social experiments. With respect to the former, social psychology was heavily invested in understanding human behavior during WWII, asking how ordinary people could be compelled to commit such egregious violations of human rights. Social psychology research also exposed some ugly truths about compliance and obedience through experiments such as the infamous Milgram shock study, and again called into question the idea of human rationality and reason. Furthermore, work on attitudes in the 1970s became increasingly popular in response to racial tensions in the United States between blacks and whites. Experiments were showing that attitudes were not always conscious and well thought out. People could report their conscious attitudes as being favorable to members of other racial groups while holding non-conscious implicit attitudes that directly contradicted those they expressed. For example, a white person could report a favorable attitude towards blacks but in reality hold a very different attitude outside of their conscious awareness. These non-conscious implicit attitudes could carry a great deal of weight in shaping actual decisions and behaviors without a person even being aware of the impact.

Morality was no exception to this shift away from rationality and reasoning. Psychology researchers found that moral judgments were made largely on the basis of strong, affective reactions to situations: the greater the intensity of the emotional reaction, the stronger the judgment of moral wrongness. You could even remove the harm from a situation altogether, with no victim and no transgressor, and still find that a strong affective reaction could produce a judgment of wrongness as if harm were present. For instance, asking people whether it was morally acceptable to eat their already dead pet dog, where no harm occurred (the dog died of old age), elicited strong reactions of anger and disgust that culminated in a judgment that the action was very morally wrong.

What resulted from this shift in psychology, and the subsequent research on moral judgment, was a theory that morality stems from non-conscious gut feelings or intuitions that are either the result of, or a response to, a flash of emotion. These intuitions were believed to be the driving force of moral judgment, with reasoning placed on the backburner. Reasoning, it was argued, serves the intuitions solely by helping to convince others of our moral convictions; the convictions themselves, however, live outside of our conscious awareness. So, if this is the case, how did we develop these intuitions in the first place?

Morality as a part of our biological makeup

This brings us to a focus in psychology on the individual, and in particular on the individual's physical and biological processes that give rise to perception and experience. With technology improving research techniques, neuroscience was growing in popularity. As a result, the focus again shifted towards a biological rather than a psychosocial understanding of human beings, in an effort to reach more objective conclusions. Arguably, with objective measures such as brain scanning growing in popularity and accessibility, objective answers should be on the horizon. This focus leads us to look for morality within the individual, somewhere in a physical process. Perhaps morality can be understood through neural connectivity in the brain or, at least in part, through an understanding of our genetic makeup.

A biologized and individualized morality sits nicely with the idea that moral judgments are unreasoned and instead based on intuition and emotion. The question of where intuitions come from can then be answered with an evolutionary explanation, one in which morality is a universal, objective genetic trait, which would explain why we are not in the driver's seat when it comes to making moral judgments. The evolutionary story for morality, simply put, asserts that morality is functional for survival and is therefore encoded into our genetic makeup through natural selection. Being able to detect a threat (the basis of harm) quickly and efficiently, for example, is a protective mechanism. These seemingly ethereal intuitions that result in moral judgments are therefore the product of our evolutionary history, and are merely the expression of genetic moral traits. Now morality is not only unreasoned and non-rational, but also more objective in the sense that we can understand it as the result of a physical process, just as we understand the expelling of tears as a parasympathetic nervous system response to stress associated with feelings of sadness.

Today, we're left with some unanswered questions in the face of this shift away from viewing humans as rational beings when it comes to making moral decisions. The first is this: how do we explain the rich and variable nature of moral change throughout modern human history if we are merely products of these non-conscious biological influences? And the second: how do we justify condemning immoral people on the basis that they have the power of moral choice if we are merely responding to the influence of our uncontrollable moral genes?

About the Author

This post is written by Nina Powell. Nina is faculty in the psychology department at the National University of Singapore (NUS) and Yale-NUS. Her work involves theoretical and empirical research on morality and ethics, the nature of consciousness and human development. She is the co-founder of Cognitive Handshakes.
