Personality delusions

Daniele Orner Ginor · Published in Brave · Jun 17, 2019

Personality assessments are popular. That doesn’t make them trustworthy.

In Jorge Luis Borges’s short story Funes the Memorious, a nineteen-year-old stable hand suffers a horse-riding accident while working on a remote ranch in southwestern Uruguay. After falling off a wild horse, he loses consciousness. He wakes up paralyzed, but he also discovers that he has acquired the gift of a perfect memory.

This supernatural gift turns out to be a burden, though, for Funes is incapable of general, abstract ideas. He finds ambiguity in the word “leaf” because he not only remembers “every leaf on every tree of every wood, but even every one of the times he had perceived or imagined it”. He has difficulties understanding the generic term “dog”, because dogs come in so many shapes and sizes.

If it weren’t for our brains’ capacity to categorise data, navigating daily life would be extremely challenging. Imagine that, like people with colour agnosia, you were unable to consistently group together the 100,000 discriminable colour hues that are reported to exist. Or imagine not being sure how to operate a doorknob every time you encounter one with a slightly different design. Funes, in Borges’ story, is surprised every time he sees his own face in the mirror!

Thankfully, our brains are usually able to organise the enormous influx of information coming through our senses. This wonderful ability comes with uncomfortable side-effects, however, when the subject of our categorisation is people rather than things.

If you are reading this, chances are high you’ve come across an online personality test — one of those tests that ask whether you “prefer to interact with few or many people at parties” or whether you “often misplace things”, and then, magically, tell you what type of person you are compatible with, or which career path suits you best.

More likely than not, such a test would be a variation of the Myers-Briggs Type Indicator (MBTI), one of the most widely used frameworks for categorising people in the past 100 years. The MBTI grew out of work begun in the 1920s by Katharine Cook Briggs and her daughter Isabel Briggs Myers, building on a theory of psychological types by the Swiss psychiatrist Carl Jung. The ambition of Myers and Briggs — neither formally educated in psychology — was to help the growing number of women entering the workforce find jobs suited to their specific personalities.

The MBTI test classifies people according to 4 “temperament” dichotomies: extraverted and introverted, intuitive and sensing, thinking and feeling, judging and perceiving. The resulting combinations then segment test-takers into one of the 16 personality types.
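For the arithmetically inclined, here is a minimal sketch (in Python, purely for illustration; it is not any official MBTI tooling) of how four binary dichotomies combine into the 16 type codes, using the conventional one-letter abbreviations:

```python
from itertools import product

# The four dichotomies, using the conventional one-letter codes
# (E/I for extraversion/introversion, S/N for sensing/intuition,
#  T/F for thinking/feeling, J/P for judging/perceiving).
DICHOTOMIES = [
    ("E", "I"),
    ("S", "N"),
    ("T", "F"),
    ("J", "P"),
]

# Every combination of one pole per dichotomy: 2 x 2 x 2 x 2 = 16 types.
types = ["".join(combo) for combo in product(*DICHOTOMIES)]

print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```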

[Image from: https://www.16personalities.com/personality-types]

Each type comes with easy-to-understand descriptions, with the fun side-effect of comparing you with fictional characters who may or may not have the same temperament as you. Winnie the Pooh, for the record, is apparently a “Defender” (introverted, sensing, feeling, judging) — as is Ophelia in Shakespeare’s Hamlet. Marvel’s Black Panther is a “Commander” (extraverted, intuitive, thinking, judging).

It is hard to overstate how popular these assessments have become. An estimated fifty million people have taken the Myers-Briggs test since it was added to the portfolio of the Educational Testing Service in 1962. Around 10,000 companies, 2,500 colleges and universities, and 200 government agencies in the United States use it every year. CPP, the private company that publishes the MBTI, is estimated to make $20 million from it annually.

Because these tools are used for decisions as critical as screening potential employees and finding marriage partners, the ethical implications are huge.

Let’s look at a case that was prominently featured in the news recently. Cambridge Analytica, a British political consulting firm, was accused of harvesting millions of Facebook users’ personal data, gathered by a University of Cambridge researcher under the pretext of an academic study, and of using it to target voters during the 2016 Trump presidential campaign. The controversy sparked a global debate around data privacy and the use of psychographic profiling.

It also helped shed light on the scientific validity of personality tests, and how good they actually are at predicting people’s behavior.

Because, as it turns out, these tests don’t really work, or at least not the way we think.

Do personality assessments really predict anything?

In the first quarter of 2019, 7 million advertisers were promoting their products and services on Facebook. Advertisers love Facebook because it has gathered enough data on how its users behave online to predict their commercial preferences and market only to the most relevant groups.

Cambridge Analytica made a claim that goes beyond traditional advertising: it held that it could use psychometric methods to sway votes. In other words, that it could leverage Facebook data to understand the personality of its users and then use this psychological information — more than any other type of information — to match online political ads to the users most likely to click on them.

In hindsight, after all the dust, stupor, and sensational headlines settled, it has proven difficult to assess how much impact political micro-targeting (as the method is called) really had on the election results — if it had any impact at all. Voters still seem to prefer general messages to targeted ones: messages based on broad principles and collective benefits, such as health, immigration, and the economy. And if anything, research shows that targeted messages can actually create a backlash and reduce political support.

More importantly, recent studies suggest that personality might not be a particularly good predictor of voting preferences at all. Researchers have found that your parents’ political orientation, for instance, is a far stronger predictor of your own political choices. And so is, well, simply asking people how they intend to vote.

“Personality traits are correlated with political values, but the correlation is generally weak”, says Eitan Hersh, a professor of political science at Tufts University, and author of the book “Hacking the Electorate: How Campaigns Perceive Voters”.

And correlation doesn’t mean causation. Another study, by Brad Verhulst and Peter Hatemi (of Virginia Commonwealth University and Pennsylvania State University), found no evidence whatsoever of a cause-and-effect relationship between personality traits and political attitudes. The fact that someone is liberal does not make them more tolerant, just as being tolerant does not make someone liberal.

Such findings provoke fierce antagonism. The authors explain: “Our papers are wildly unpopular as countless scholars are dedicated to a theory where personality traits cause the formation of attitudes.” The topic, they write, is “filled with alarming levels of vitriol that at times overshadows the substantive scientific findings that ought to be the cornerstone of academic discourse”.

This situation is not likely to go away soon, as Maria Konnikova from the New Yorker observes: “The desire for causality, or at least some basic truths — Of course those Republicans are closed-minded people! Of course those damn Democrats are neurotic! — persists. And despite studies like Verhulst’s, we can’t seem to let it go.”

If this is true for personality and politics, could it be true for personality predictions in general? Could the science of putting people into boxes be tainted by unhealthy levels of wishful thinking?

The many ways personality assessments go wrong.

Broadly speaking, there are two categories of personality assessments, each problematic in its own way: the populist ones and the elitist ones.

MBTI-type tests are used extensively by introspective individuals as well as by Fortune 500 companies, the military, educational institutions and other organisations to assess employees, students, soldiers, and potential marriage partners. Perhaps because they are so popular, they are assumed to be true — and in that sense, they are populist. However, neither Jung’s theory nor Myers and Briggs’ application of it emerged out of controlled experiments. Theirs is a conceptual framework, a mental model, rather than a scientific instrument. To quote University of Pennsylvania professor of psychology Adam Grant: “In social science, we use four standards: Are the categories reliable, valid, independent, and comprehensive? For the MBTI, the evidence says not very, no, no, and not really”.

Then there is the Big Five Inventory, a personality test favored by academics and used in hundreds of peer-reviewed studies. It is also known as OCEAN, after the traits it measures: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Its endorsement by authoritative experts might be seen as evidence that it is true — and in that respect, it is elitist. But despite its more respectable position in the academic community, it is far from perfect.
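In practice, a Big Five questionnaire typically asks you to rate short statements on a 1-to-5 Likert scale and then averages your answers per trait, reverse-scoring negatively keyed items. Here is a simplified sketch of that scoring logic; the items and keys are made up for illustration and do not come from any official inventory:

```python
# A simplified illustration of Big Five scoring: rate statements on a
# 1-5 Likert scale, reverse-score negatively keyed items, average per trait.
# The items and keys below are invented for illustration only.

ITEMS = [
    # (statement, trait, reverse_keyed)
    ("I start conversations.",      "Extraversion",      False),
    ("I prefer being alone.",       "Extraversion",      True),
    ("I pay attention to details.", "Conscientiousness", False),
    ("I often misplace things.",    "Conscientiousness", True),
    ("I am full of ideas.",         "Openness",          False),
    ("I sympathise with others.",   "Agreeableness",     False),
    ("I get stressed out easily.",  "Neuroticism",       False),
]

def score(responses, scale_max=5):
    """Average the 1..scale_max responses per trait, reversing keyed items."""
    totals = {}
    for (statement, trait, reverse), answer in zip(ITEMS, responses):
        value = (scale_max + 1 - answer) if reverse else answer
        totals.setdefault(trait, []).append(value)
    return {trait: sum(values) / len(values) for trait, values in totals.items()}

# Someone angling for a sales job can simply max out the two items that
# obviously feed Extraversion:
print(score([5, 1, 3, 3, 3, 3, 3]))
# {'Extraversion': 5.0, 'Conscientiousness': 3.0, 'Openness': 3.0,
#  'Agreeableness': 3.0, 'Neuroticism': 3.0}
```

The transparency of that mapping from item to trait is precisely what makes the test easy to game, as the second criticism below spells out.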

Here are some of the many criticisms of the Big Five framework:

1. Voter preferences notwithstanding, Big Five personality traits are predictive of some things, but these are often boring things. It has been claimed, for instance, that the conscientiousness trait is a good predictor of success at work. But what does conscientiousness actually mean? Because the concept is vague, using it for prediction tends to yield platitudes that are clearly true for most people but not very insightful. For example, “high conscientiousness predicts academic performance” translates into plain language as “people who work harder tend to get better grades” — an observation that probably didn’t require the brain power of multiple PhDs in the first place.

2. It is nearly impossible not to cheat on the Big Five test. If you are taking the test for a specific purpose, for example to woo a potential employer, then you will do everything you can to make a good impression, whether consciously or subconsciously. That’s because, like most humans, you are susceptible to a cognitive bias known as social desirability bias.

Since it is fairly obvious what each question on the Big Five is meant to measure, a little common sense makes it easy to guess which of the five traits would be deemed most suitable for a given job. If you are applying for a job in sales, for example, you should probably choose answers that reflect a high level of extraversion. So don’t mention that you never “start conversations” or that you “prefer being alone.” The Big Five test is practically an invitation to fabricate the most desirable personality for every occasion.

3. It is WEIRD. Like most studies in psychology, the Big Five originated in the dusty labs of an Ivy League university. It was therefore first validated on a sample of students who were all Western, Educated, Industrialized, Rich and Democratic, a topic we covered extensively in the last post. Attempts to replicate the findings on similarly WEIRD samples are usually successful; elsewhere, they are not necessarily so. For example, the Openness to experience trait appears to be practically nonexistent in Asian countries, while amongst farmers in the Bolivian Amazon, none of the five traits seems to emerge at all.

4. It doesn’t tell the complete story. Dan McAdams, from Northwestern University, has called the Big Five test “psychology of the stranger,” because it refers to traits that are relatively easy to observe in a stranger. Other defining elements of one’s personality — the more private ones, or those that depend on external circumstances — are excluded from it. What about, for example, honesty, humility, manipulativeness or self-esteem? And what about life satisfaction, personal traumas, spiritual orientation, motivations and priorities in life?

The Big Five categories might be more statistically valid than the Myers-Briggs ones, but the claim that they are comprehensive is still highly debatable.

Medical students learn to “first, do no harm”: if you’re not sure a treatment will help, but there’s a risk that it will hurt the patient, then it is better to do nothing at all. It is unclear how insightful personality tests really are, yet misusing them can completely change a person’s career and life. That is a big responsibility, and there’s a lot at stake. As recruiters and workforce planners, we at Brave understand that no single psychological evaluation approach should be used in isolation to describe a person’s rich inner world, let alone predict their behavior — at work or otherwise.

In the next post of this series, we’ll delve deeper into what it is about our brains that makes us so attached to these personality assessments, and how we should shift the paradigm to embrace the complexities of human nature with the help of new advances in computational methods.

Authors

Daniele Orner is Chief Scientist at Brave, where she uses her background in linguistics, software engineering, and behavioral psychology to develop models for understanding people and their talents.

Amina Islam has a Ph.D. in engineering and is currently putting her skills and academic background into doing evidence-based research on the impact of informal learning programs.

Get in touch to hire, to get hired, or to join our team: brave.careers
