Moral Signatures of the Brain

Morality is a hard quality to define, and yet it is one of the cornerstones of human civilization. Whether we look at religious passages in the Bible or stories from Aesop's Fables, lessons on morality are ubiquitous in our society, and moral values are imparted to us at a tender age. Rather than relying on explicit rules like "Don't steal" or "Don't be greedy", although those do exist in formal texts like the Ten Commandments, we tend to use narratives and stories as examples of moral virtues. Case in point: a child is more likely to understand the consequences of lying from hearing "The Boy Who Cried Wolf" than from simply being told not to lie.

In fact, according to Dr. René Weber of the University of California, Santa Barbara, the presence of a moral conflict is what makes stories and narratives engaging in the first place. A conflict that depicts the violation of a moral value, whether that be freedom, loyalty, respect, or love, is more effective at reaching the audience and conveying a message than an amoral one.

So what kinds of moral conflicts do we encounter in our everyday lives? And how prevalent are they?

To analyze this, the Media Neuroscience Lab at UCSB has developed the Moral Narrative Analyzer (MoNA). In a nutshell, the system scrapes the web every 15 minutes for any kind of text (news, movie scripts, Twitter feeds, etc.) and deploys big data techniques to map how moral frames are changing in real time.
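The article doesn't describe MoNA's internals, but the collection step it sketches, polling a set of text sources on a fixed interval and keeping what comes back, can be illustrated with a minimal loop. Everything here (the `poll_sources` function, the source names) is a hypothetical sketch, not the lab's actual code:

```python
import time

def poll_sources(fetchers, store, interval_s=900, cycles=None):
    """Fetch each text source on a fixed interval (900 s = 15 minutes)
    and hand every non-empty document to `store`.

    fetchers: {source name: zero-argument function returning text}
    store:    callback taking (source name, document)
    cycles:   number of polling rounds, or None to run forever
    """
    done = 0
    while cycles is None or done < cycles:
        for name, fetch in fetchers.items():
            doc = fetch()
            if doc and doc.strip():
                store(name, doc)
        done += 1
        if cycles is None or done < cycles:
            time.sleep(interval_s)
```

In a real pipeline the fetchers would wrap news APIs or scrapers and `store` would write to the lab's database; here they are just stand-ins.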

But how exactly do you extract moral values from texts? After all, morality is an implicit quality that cannot simply be scanned for in a document.

MoNA first runs sentiment analysis over the entirety of the text. It generates a graph of the general sentiment, indicating how moral paradigms shift over the course of the text. By looking at the inflection points in the graph, researchers at the lab can pre-select the scenes most likely to carry interesting moral information for a more detailed round of human coding. A person then simply reads the pre-selected portions and highlights the moral dilemmas they encounter.
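As a toy illustration of that pre-selection step (the lexicon words and the scoring rule are invented for this sketch, not MoNA's actual method): score each sentence with a small sentiment lexicon, then flag the points where the running sentiment reverses direction.

```python
# Tiny hand-made sentiment lexicon (hypothetical words).
POSITIVE = {"loyal", "honest", "brave", "kind"}
NEGATIVE = {"betrayed", "lied", "stole", "cruel"}

def sentence_score(sentence: str) -> int:
    """Positive-word count minus negative-word count."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def inflection_points(scores: list) -> list:
    """Indices where the sentiment trend reverses: candidate scenes
    for a closer round of human coding."""
    points = []
    for i in range(1, len(scores) - 1):
        left, right = scores[i] - scores[i - 1], scores[i + 1] - scores[i]
        if left * right < 0:  # the slope changes sign at position i
            points.append(i)
    return points
```

A production system would use a trained sentiment model and smoothing rather than raw word counts, but the selection logic, "look where the curve turns," is the same idea.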

Example of a Graph Generated by Sentiment Analysis

But this pipeline only catalogs a sample of texts: if you are scraping the web every fifteen minutes, pre-selecting scenes for human coding at that scale is obviously not feasible. So the lab uses the database of samples obtained by MoNA to create moral dictionaries: the pre-selected sections highlighted by lab personnel are scanned for words that co-occur with certain moral categories. For example, under the moral category of authority, words like 'congress', 'signed', and 'sentencing' appeared quite frequently, while words like 'evidence', 'fraud', and 'votes' were associated with the moral category of fairness.

In this way, the lab is able to generate a moral profile of any given text.
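The two steps above, building dictionaries from co-occurrence counts in the coded samples, then scoring new texts against them, can be sketched in a few lines. The mini-corpus, stopword list, and scoring rule below are all invented for illustration; the lab's real dictionaries come from a much larger coded database:

```python
from collections import Counter

# Hypothetical mini-corpus of human-coded passages, standing in for the
# MoNA samples described above (all data invented for illustration).
CODED = [
    ("congress signed the sentencing bill", "authority"),
    ("the court signed the order", "authority"),
    ("evidence of fraud in the votes", "fairness"),
    ("votes were counted without fraud", "fairness"),
]

STOP = {"the", "of", "in", "a", "and", "were", "without"}

def build_dictionaries(coded, top_n=3):
    """Collect the words that co-occur most often with each moral category."""
    counts = {}
    for text, category in coded:
        words = [w for w in text.lower().split() if w not in STOP]
        counts.setdefault(category, Counter()).update(words)
    return {cat: {w for w, _ in c.most_common(top_n)} for cat, c in counts.items()}

def moral_profile(text, dictionaries):
    """Score an unseen text by counting dictionary hits per moral category."""
    words = text.lower().split()
    return {cat: sum(w in vocab for w in words) for cat, vocab in dictionaries.items()}
```

The resulting profile is just a hit count per category; a real system would normalize by text length and use far richer dictionaries.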

But why do we care about doing any of this? What are the implications?

Here’s the general thought process behind such an endeavor. Most of the events that are reported on in the news or spoken about in the media can be morally framed. Whether it is a protest, a call to action, a trade sanction, or an immigration policy, there is always a moral aspect to the conflict with one party challenging the other’s moral framework. If we have a real time analysis of when, where, and with what intensity these moral paradigms are changing, we can predict when that bubble of peace might burst.

But there are two sides to every coin. The concept of morality in and of itself is fascinating; humans appear to be the only social beings with such an implicit level of moral conduct in a societal setting. As academics, we question why morality is so intertwined with our decision-making processes. Might it have served an evolutionary purpose?

Unfortunately, we can't go back in time and examine the brain structures of our ancestors to track this evolutionary change. But nature has shown that our bodies evolve in specific ways to suit our functions. Was morality selected for? If so, then the way the brain processes moral scenarios might be a window into the past.

Are the networks in the brain that process different moral paradigms dissociable? And if these moral paradigms trigger certain discernible activity in the brain, can we then apply that backwards and use those models in our brain to predict behavior in morally challenging situations?

I know this sounds super crazy, so let’s break it down a bit.

How do our brains decode moral information?

The Media Neuroscience Lab measured this by placing subjects in fMRI scanners and presenting them with moral dilemmas. Subjects rated how morally wrong each dilemma was on a discrete scale from 1 to 4, and brain scans were taken for each scenario.

A machine learning classifier then learned the associations between activated brain regions and the stimuli. Specifically, the researchers found that the precuneus and medial prefrontal cortex were activated when the subject was reading about a moral violation. From the ventral view, you can also see activation of the insula and the temporal lobes.
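The article doesn't specify which classifier the lab used, but the general idea of learning a mapping from activation patterns to stimulus labels can be shown with a toy nearest-centroid classifier over simulated "voxel" vectors. Every number below is invented; this is a sketch of the technique, not the lab's analysis:

```python
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(patterns):
    """patterns: {condition label: list of 'voxel' activation vectors}."""
    return {label: centroid(vecs) for label, vecs in patterns.items()}

def predict(model, vector):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vector))

random.seed(0)

def scan(violation):
    # Pretend the first two features are precuneus/mPFC voxels that
    # respond to moral violations; the other three are uninformative.
    signal = 1.0 if violation else 0.0
    return ([random.gauss(signal, 0.1) for _ in range(2)]
            + [random.gauss(0.0, 0.1) for _ in range(3)])

model = train({
    "violation": [scan(True) for _ in range(10)],
    "neutral": [scan(False) for _ in range(10)],
})
```

Real fMRI decoding works on tens of thousands of voxels with cross-validation and proper preprocessing, but the logic, learn a pattern per condition, then match new scans against it, is the same.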

Activation of Brain Regions when Subjects Encounter Moral Paradigms

Ok, so we know that moral paradigms correspond to the activation of certain areas in the brain. But we still haven’t answered whether different paradigms trigger the activation of different networks.

To answer this question, the lab grouped moral values into two domains, drawing on their correlations with the political left and the political right.

Those who tend to align with the political right value virtues like group cohesion, hierarchy, and loyalty. Dr. Weber’s team aptly categorized these attributes as the binding moral domain. On the political left, people tend to value fairness and freedom, which make up the individual moral domain.

Equipped with these two overarching domains of morality, Dr. Weber's lab employed a "searchlight technique" to see if these domains corresponded to different structures in the brain. That is, is the binding moral domain neurologically different from the individual moral domain?

The searchlight technique used in the fMRI study found that the binding moral domain strongly activated the medial prefrontal cortex and temporal-parietal junctions compared to the individual moral domain.
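Conceptually, a searchlight slides a small sphere of voxels across the brain and asks, at each location, how well the local activation pattern separates the two conditions. Here is a much-simplified one-dimensional sketch (invented data and a leave-one-out nearest-neighbour rule; real searchlights use 3-D spheres and cross-validated classifiers):

```python
def searchlight_accuracy(trials, labels, radius=1):
    """For each voxel position, classify every trial using only the
    voxels within `radius` of it, via leave-one-out nearest neighbour.
    High accuracy at a position means the local pattern there
    discriminates the conditions."""
    n_vox = len(trials[0])
    accuracies = []
    for center in range(n_vox):
        lo, hi = max(0, center - radius), min(n_vox, center + radius + 1)
        patches = [t[lo:hi] for t in trials]  # the "searchlight" window
        correct = 0
        for i, patch in enumerate(patches):
            # Nearest neighbour among the remaining trials.
            others = [(sum((a - b) ** 2 for a, b in zip(patch, p)), labels[j])
                      for j, p in enumerate(patches) if j != i]
            correct += min(others)[1] == labels[i]
        accuracies.append(correct / len(trials))
    return accuracies
```

Positions where accuracy peaks are the candidate regions, which is how a searchlight analysis can point at areas like the medial prefrontal cortex for one moral domain but not the other.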

What does this mean?

The way I look at it, there are two implications. One that is more immediate and relevant, and one that requires a little development.

The immediate implication is that it serves as a window to better understand ourselves. If conservatives rely on a different moral framework compared to liberals, maybe we can use this knowledge to communicate in a way that is less polarizing to different groups of people. Acknowledging that different moral paradigms are not just a social construct but entail differences at the neurological level brings us one step closer to changing the way we communicate with those that don’t share the same moral frameworks. We can take steps towards productive discourse instead of demonizing the opposing side.

The second implication is about predicting behavior from these brain scans. Can we, in theory, train a model to look at fMRI scans and predict a "moral profile" for an individual? Although the ethics of this are a little murky, this knowledge could help in targeting different messages to different groups of people, appealing to their unique moral profiles, because, as we know, appeals to morality are key to inciting action and engagement.

Sources and Studies

If you want to read more about the studies in this article, I have included the citations below:

René Weber, Jacob T. Fisher, Frederic R. Hopp & Chelsea Lonergan (2017): Taking messages into the magnet: Method–theory synergy in communication neuroscience, Communication Monographs, DOI: 10.1080/03637751.2017.1395059

René Weber, J. Michael Mangus & Richard Huskey (2015) Brain Imaging in Communication Research: A Practical Guide to Understanding and Evaluating fMRI Studies, Communication Methods and Measures, 9:1–2, 5–29, DOI: 10.1080/19312458.2014.999754

Media Neuroscience Lab:


