Tribalism: Policy Considerations
“Truth isn’t truth.” — Rudy Giuliani, NBC’s Meet the Press, 8/19/18
What can be understood about tribalism that might inform policies aimed at creating a future in which minorities are safe and the rule of law is created and enforced with blindness to location, race and religion? Tribalism is a growing concern from the US to China as identity politics replaces the liberal philosophies that have prevailed since the end of WWII. This is not surprising: the liberal worldview aspires to bring different people together through the abstract idea of common values; tribalism unites people around tangible issues like land and resources, biological proxies (e.g., race) and explicit social affiliations (e.g., religion). These tangibles appear to drive in-group identification and out-group differentiation through cognitive architecture that has evolved over our primate past. Some of these architectural features drive behavior toward in- versus out-group individuals based on biased perceptions that are often accepted as factually true, providing an illusory “objective” foundation upon which we base our social perspectives.
This raises an increasingly important question: are my perceptions affected by an understanding of whose team I am on?
The answer is yes.
Is There a “Reality”?
On 23 November 1951, Dartmouth faced Princeton in a highly anticipated football game. The game was rough and both sides cried foul — but how each side cried foul bears some examination. The Princeton Alumni Weekly wrote:
…there was undeniable evidence that the losers’ tactics were the result of an actual style of play, and reports on other games they have played this season substantiate this.
The Dartmouth, the student newspaper, wrote:
The game was rough and did get a bit out of hand… Yet most of the roughing penalties were called against Princeton while Dartmouth received more of the illegal-use-of-the-hands variety.
Albert Hastorf (Dartmouth) and Hadley Cantril (Princeton) explored this event and published what has become a well-cited article in the social psychology literature. First, they used a questionnaire to establish that there were no differences in how well the Princeton or Dartmouth students knew players on their own football team, whether they had ever played football and how well they knew the rules of the game. Second, they showed a movie of the game to students at each school, asking them to identify fouls and rate each as “mild” or “flagrant.”
Nearly all the Princeton students felt the game was “rough and dirty”; the vast majority thought that Dartmouth had started the rough play. When Princeton students watched the game, they saw the Dartmouth team make more than twice as many fouls as their own team. Furthermore, they rated Dartmouth’s fouls as “flagrant” over “mild” by about two to one, while rating Princeton’s fouls as “mild” over “flagrant” by about three to one.
Dartmouth students also saw the game as rough — but nearly half rated it “rough and fair,” and the next-largest group saw it as “clean and fair.” While many Dartmouth students felt their own team was to blame for the roughness and a few felt Princeton had started it, most felt both sides were to blame. When watching the game, Dartmouth students saw the two teams commit roughly equal numbers of fouls, with a one-to-one ratio of flagrant to mild fouls on the Dartmouth side compared to one flagrant to two mild for the Princeton team.
The authors concluded:
…it is inaccurate and misleading to say that different people have different “attitudes” concerning the same “thing.” For the “thing” simply is not the same for different people whether the “thing” is a football game, a presidential candidate, Communism, or spinach. We do not simply “react to” a happening or to some impingement from the environment in a determined way… We behave according to what we bring to the occasion, and what each of us brings to the occasion is more or less unique.
Hastorf and Cantril argued that their evidence showed that
…there is no such “thing” as a “game” existing “out there” in its own right which people merely “observe.” The “game” “exists” for a person and is experienced by him only in so far as certain happenings have significances in terms of his purpose. Out of all the occurrences going on in the environment, a person selects those that have some significance for him from his own egocentric position in the total matrix.
Asserting that there is no such thing as a ‘game’ outside our individual perceptions is likely an overstatement. Another possible conclusion: there was an objective game event, but our personal perceptual relationship to it can vary significantly depending on our background and experiences. This is what a research group in Australia led by Pascal Molenberghs has found.
Different Brains, Different Perceptions
Dr Pascal Molenberghs is a research psychologist in Melbourne interested in the neuroscience of in-group bias. In a 2013 article, his research team reported on a group of 48 volunteers randomly assigned to either a “red team” or a “blue team.” These volunteers first participated in a team-building exercise in which they competed to press a button as quickly as possible after a “Go” signal. Participants were then shown a randomly generated “RED WINS” or “BLUE WINS” message on their screens; each participant “won” 50% of their trials to avoid any sense of team superiority. If a participant’s response took longer than 700 ms, the opposing team won; this rule kept the feedback plausibly tied to performance, so participants remained unaware that the “WINS” notifications were randomized.
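A hypothetical sketch of that feedback rule may make the design clearer. Nothing below comes from the authors’ code; the function, names and trial values are illustrative, and only the 50/50 split and the 700 ms cutoff come from the description above.

```python
import random

def feedback(reaction_time_ms, own_team="RED"):
    """Return the 'WINS' message shown after one button-press trial (illustrative)."""
    other_team = "BLUE" if own_team == "RED" else "RED"
    if reaction_time_ms > 700:
        # responses slower than 700 ms always lose, so the feedback still
        # feels contingent on performance rather than pre-randomized
        return f"{other_team} WINS"
    # otherwise the winner is random, so each participant wins ~50% of trials
    return f"{own_team} WINS" if random.random() < 0.5 else f"{other_team} WINS"

for rt_ms in (350, 520, 810):  # illustrative reaction times
    print(rt_ms, "ms ->", feedback(rt_ms))
```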
Next, 24 of the participants performed a task in which they viewed pairs of video clips of “blue team” and “red team” arms quickly reaching for a button to press — very much like the team-building exercise (see figure 1).

All clips were 1500 milliseconds (ms) long and were edited so that the time from the onset of movement until the arm reached the button was strictly controlled to be 233, 300, 367 or 433 ms. Therefore, when comparing any two clips, the time difference between them could be roughly 0, 67, 133 or 200 ms. Every participant saw the exact same set of clips, so judgements could not differ because of subtle physical differences in the videos. After each pair of clips, the participant was asked to judge which of the two actions was faster — own team or other team.
If team affiliation had no influence on participants’ judgement, then the point at which participants judged their own team’s actions as faster on 50% of trials should coincide with the two clips being identical in speed. However, Molenberghs found that participants actually judged the actions of their own team members as roughly 30 ms faster than identical actions performed by other team members. A statistical test put the probability of a bias this large arising by chance at less than 0.1% (p < 0.001). While 30 ms may seem too short to matter, the real story is that people’s actual perception of an event was measurably shifted by a trivial, arbitrarily assigned group identity.

Figure 2 summarizes their results. The graph plots the probability that a participant chose his or her own team as faster (y-axis) against the actual time difference between the video clips (x-axis). For example, even when the other team’s clip was actually 133 ms faster (look for -133), participants still said their own team was faster about 15% of the time. If the clips were identical (no time difference, look for 0), participants said their own team was faster about 60% of the time. As the time difference between the clips in a trial approached the relatively long 200 ms, participants became much more accurate in judging which clip was faster.
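One way to make the 30 ms figure concrete is to fit a curve like the one in figure 2 and read off its point of subjective equality (PSE): the time difference at which a participant picks either team equally often. The sketch below is illustrative only; the 15% and 60% values come from the text, while the other proportions and the logistic form are assumptions, not the study’s data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """P(own team judged faster) as a function of the true time difference x (ms)."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# negative = other team's clip was faster; positive = own team's clip was faster
time_diff_ms = np.array([-200, -133, -67, 0, 67, 133, 200])
p_own_faster = np.array([0.05, 0.15, 0.35, 0.60, 0.85, 0.95, 0.99])  # placeholders

(pse, slope), _ = curve_fit(logistic, time_diff_ms, p_own_faster, p0=[0.0, 50.0])
print(f"estimated PSE = {pse:.0f} ms")
# a negative PSE means clips feel "equal" only when the own-team clip is actually
# slower, i.e. an own-team bias of roughly that many milliseconds
```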
The conclusion: your perception of how fast a hand moves from rest to push a button depends in part on whether you think that hand is on your team or another’s, at least when you are not paying explicit attention to this facet of your experience.
Molenberghs then placed the subjects in an fMRI scanner and modified the paired-video task, keeping only pairs with a time difference of 0 or 67 ms and thus making it harder to pick the faster clip. When the paired clips were exactly equal in time-to-the-button, the 24 participants judged the actions of their own team as faster only 53.9% of the time (versus 60% in the previous experiment), still a statistically significant bias. When the investigators looked only at the 13 of 24 participants who selected their own team as faster on more than 50% of trials, that subgroup’s average was 60.8% (p = 0.001), essentially identical to the previous experiment. The other 11 participants showed no significant bias when judging two video clips of equal duration.
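As a rough sense-check of why a rate like 53.9% can still differ meaningfully from chance, here is a minimal sketch assuming a placeholder trial count; the real study had its own trial numbers and very likely used a different statistical test.

```python
from scipy.stats import binomtest

n_trials = 1000                          # placeholder: pooled equal-duration trials
n_own_faster = round(0.539 * n_trials)   # 53.9% own-team-faster judgements
result = binomtest(n_own_faster, n_trials, p=0.5, alternative="greater")
print(f"{n_own_faster}/{n_trials} own-team-faster judgements, p = {result.pvalue:.4f}")
```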
Molenberghs then administered an Implicit Association Test (IAT) to these 24 subjects. Introduced by Greenwald in 1998, the IAT asks people to associate specific responses with a visual cue. Easy tasks, like saying “hello” if a face is male and “goodbye” if it is female, are done quickly and with low error rates. As association tasks become more complex, reaction times and error rates increase. What if the task is not especially complex, but the association runs counter to an individual’s beliefs? Greenwald’s paper found that white subjects took longer to associate African American faces with positive words than with negative words. The IAT paradigm, while not without controversy, has been widely used to explore implicit biases that many subjects would not feel comfortable expressing directly: people do not want to stand up and say, “I feel negatively about African Americans; it is easier for me to associate them with crime than with rainbows.”
Molenberghs asked participants to associate positive versus negative words with the pictures of the blue and red teams used in the fMRI experiment. The 11 participants who showed no bias on the fMRI paired-video task showed no significant difference in response times on the IAT; the 13 participants who did show bias on the fMRI paired-video task showed a significant bias in favor of their own team on the IAT. Their mean response time for own-team/positive pairings was 654 ms, while own-team/negative pairings produced a significantly longer mean of 723 ms. It should be noted that this association was not without some statistical doubt: the difference between the biased and non-biased groups failed to reach significance.
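The analysis behind those response times is a simple paired comparison: each of the 13 biased participants contributes a mean reaction time under the own-team/positive mapping and one under the own-team/negative mapping. The sketch below only shows the shape of such an analysis; the arrays are invented placeholders centered on the reported means, and the paired t-test is an assumption rather than the paper’s stated method.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
rt_own_positive = rng.normal(654, 40, size=13)   # placeholder mean RTs (ms), one per participant
rt_own_negative = rng.normal(723, 40, size=13)   # placeholder mean RTs (ms), one per participant

result = ttest_rel(rt_own_negative, rt_own_positive)
diff_ms = np.mean(rt_own_negative - rt_own_positive)
print(f"mean difference = {diff_ms:.0f} ms, t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```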
Finally, Molenberghs’ group analyzed the fMRI data from the 11 non-biased and 13 biased participants. While these participants were watching the paired videos, they were also shown single video clips of either a red- or blue-sleeved arm reaching for a button; these clips were not actively compared to one another but were viewed in the context of the paired-video task. Meanwhile, the fMRI scanner acquired images of brain activation for each participant. The investigators then contrasted the activation patterns of people with bias against those without bias and found a single, highly statistically significant cluster within a region of the brain called the inferior parietal lobule, or IPL (figure 3). To double-check this difference in the IPL, they looked for fMRI differences when participants passively observed own-team and other-team actions without any paired-video component. No difference in fMRI signal was found when participants passively watched button-pushing trials without assessing which team was faster, both when all 24 participants were evaluated and when just the data from the 13 biased participants were analyzed. More recent research has also found that the parietal cortex “may provide top-down influence to selectively facilitate the visual representation in favors for [sic] the higher-valued option.”
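For intuition about what “contrasting activation patterns” between the two groups means mechanically, here is a toy sketch, not the paper’s pipeline: the data, voxel count, threshold and two-sample t-test are all invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_voxels = 5000                                          # placeholder voxel count
biased = rng.normal(0.0, 1.0, size=(13, n_voxels))       # per-subject own-vs-other contrast maps
non_biased = rng.normal(0.0, 1.0, size=(11, n_voxels))
biased[:, 100:120] += 1.5                                # pretend one small "IPL-like" cluster differs

result = ttest_ind(biased, non_biased, axis=0)           # biased vs non-biased, voxel by voxel
survivors = np.where(result.pvalue < 0.001)[0]           # real analyses use cluster-level correction
print("voxels surviving the (uncorrected) threshold:", survivors)
```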

Implications
These findings have some real-world implications.
First, while conscious judgement and action take place anterior to the central sulcus (CS) of the brain, preconscious sensory information is acquired and integrated posterior to the CS (figure 4). It is not until sensory information passes to the front half of the brain that we can contemplate how to act — courteously, recklessly, and so on. That the IPL lies in the posterior half of the brain strongly suggests that bias is encoded into our actual perception of incoming information. This makes bias somewhat resistant to conscious attempts at “fairness” or “objectivity” — if you fail to appreciate that your experience of the world is perceptually biased, how could you reasonably be expected to question your own fairness? Instead, we often question others’ fairness based on deep confidence in our own perceptions — an underlying dynamic of conflict.

Second, we have become used to thinking about evolution as synonymous with DNA — small variations in an individual’s genetic code mean that each organism has a unique relationship with the environment, and some relationships are more successful (elephants) than others (woolly mammoths). But evolution has other media through which to act. Just as genetic variability drives traditional genetic evolution, cognitive variability drives Neural Darwinism: the theory that many of the connections in our brains are shaped by our life experiences. We do know that environment affects perception. For example, the amygdala is a small brain region which, in White Americans, has been clearly shown to become more active on fMRI in response to African American faces. Telzer et al found that White American children with more diverse peer groups showed a dampened amygdala response to African American faces. That some participants in Molenberghs’ study showed an fMRI-positive IPL and others did not is itself variability, implying that this function is subject to evolutionary forces. The “ability” to perceive sensory information that is biased, yet prompts no soul-searching regarding one’s fairness, can foster in-group affiliation and group success — but it also has liabilities. Perceptual bias interpreted as “Truth” can foster out-group differentiation by blinding us to available information, generating overconfidence in our personal perceptual reality, and creating conditions that lower the probability of humility and introspection about shared circumstances.
Group affiliation is clearly important and confers survival advantages across many species: ants, wild dogs, primates and so on. However, successfully “syncing” an individual human brain to other human brains to enjoy this advantage comes at a cost: implicit biases in perception that occur before we have the chance to contemplate “what is fair?” Because conflict pervades so many places and affects so many people, a greater appreciation of the neural mechanics of in-group bias can perhaps help inform effective policies that mitigate tribalism and establish a fair rule of law.
