We are polarized in America along political lines. People who identify as Republicans and Democrats disagree on just about every issue. To get a sense of just how polarized the nation is, consider this graphic from a report issued by the Pew Research Center in early 2019. The graphic shows what Republicans and Democrats think are important priorities for the country. There is almost no agreement. The only priority that people who identify as Republicans and people who identify as Democrats agree on is jobs. By contrast, in 1999, Republicans and Democrats alike ranked education, crime, health care, and Social Security as top priorities.
There is no shortage of explanations as to why we are this way.
A primary explanation is that a flooded and fragmented media environment makes it possible for people to live in echo chambers. This is the argument in Fault Lines, a historical account of polarization by Princeton historians Kevin Kruse and Julian Zelizer. Kruse and Zelizer examine four major divisions (political, class, racial, and sexual) and argue that these lines have been deepening since the 1970s. These fault lines, they argue, were amplified by a saturated, fragmented, opinionated media landscape. Or, as the authors put it, "technology balkanized the public square." Republicans could retreat into a Fox News and talk radio echo chamber, while Democrats could retreat into a Daily Show, NPR, and liberal blogger echo chamber.
Other explanations center on social media. These place the blame on algorithms that put users in a "filter bubble" of sorted information. In an attempt to hold a user's attention, these algorithms serve up news stories and other bits of information that reinforce preconceived notions. Automated bots are also cited as possible culprits: they sow discord on social media platforms by spreading fake news or making comments aimed at increasing tensions.
But something is missing in this dialogue.
There is a step between being exposed to information and developing a viewpoint or conclusion. Even though people are in filter bubbles and echo chambers, they still must make choices as to what information to value, how the information is soldered onto ideas already held, and when to stop taking in new information. These choices are often automatic and subconscious, but they are the paths to polarization.
Paths to Polarization
I teach a course at my university on social science research methods. It is a standard course that introduces students to how researchers in social science fields such as psychology, sociology, and political science produce knowledge.
Early in the course, I talk about the knowledge that people produce as they live their lives. People are like amateur social scientists trying to understand the patterns in the world. We ask ourselves questions: how to get promoted, how to impress someone we are attracted to, the best way to avoid traffic, whom to socialize with, and so on. Like scientists, we collect data through talking with others and through our own observations. We then develop a theory to answer our questions.
Unfortunately, people are subject to cognitive biases that lead them to draw weak or completely erroneous conclusions. Stereotypes, prejudices, and moral panics are examples. I believe that the specific paths to polarization are the many cognitive biases we apply to the wealth of information we are presented with each day.
The halo and horns effect. When our general impressions of a person in one context affect our evaluations of them in another, unrelated context, we are giving them a halo or horns. When those evaluations are positive, it is the halo effect: "I will buy the car pitched by that handsome man because I liked him in that mediocre comedy." When they are negative, it is the horns effect: "That child psychologist supported Donald Trump; I don't want to buy his book on parenting." In both cases, the conclusions are not supported by evidence. The good-looking movie star is not an automotive engineer or someone who evaluates cars for a living, and the Trump-supporting psychologist likely has a wealth of research and experience on parenting.
The halo/horns effect is especially pernicious in the current information ecosystem, particularly on social media. People can become influencers on social media platforms by dint of hard work, talent, and personality. They can bypass many of the traditional institutions of training and education that acted as filters in the past. We generally see this liberating quality of the Internet as a good thing. But it also means that people with large followings on YouTube, Twitter, and other platforms, people with halos, can throw their weight behind ideas even if they have little expertise on the subject. The phrase "retweets do not mean endorsement" is common on Twitter, and we understand in theory what it means. In practice, however, retweets function as endorsements: the person who shares the information believes that the information is worthy of being shared.
The horns effect, in my opinion, is more integral to political polarization. Once a stigma attaches to someone in one context, their ideas become devalued in future, unrelated contexts. Nothing they say can have any value to you. For example, suppose Ben Carson were offered a professorship in a department of medicine at a university known for its liberal student body. How would he be received? If he is rejected, despite demonstrated world-class knowledge of medicine, that rejection says something not just about his politics but about his worth as a person. In effect, putting horns on a person amounts to "othering" and dehumanizing them.
Overgeneralization. When we use a few examples about someone or something to draw a general conclusion, we are committing the error of overgeneralization. If we see three dog walkers talking to themselves while they work, we should not conclude that there is some link between being a dog walker and mental illness; the inference is silly on its face. Overgeneralization is likely familiar to many in Western societies in another form: we are constantly told to avoid stereotyping people. We know it is not only morally wrong but also inaccurate to assume that all Hispanic people are day laborers or domestics.
Despite society’s taboo against overgeneralizing based on race, people still draw false conclusions about people and groups. I believe this is because our current information ecosystem is excellent at providing biased observations. How can someone guard against calling all police racist if everything in their social media feed shows cops being racist toward black people? Similarly, how can someone not think college campuses are filled with students looking to protest a conservative speaker if all the podcasts and news articles they consume say so? In these situations, drawing general conclusions appears entirely valid.
Selective observation. Let’s say a trickle of contrary information slips into our echo chamber: we see or read something that goes against what we expect. Suppose a pro-open-borders person happens to follow someone who shares a link to an article showing the negative impacts of immigration. Even if she reads it and acknowledges the careful reasoning and strong evidence, somehow the article vanishes from her memory. This is selective observation: the tendency to disregard or downplay information that is contrary to our preconceived notions.
This is similar to confirmation bias, in which our interpretation of what we see tends to be biased. Here, I am talking about what we choose to pay attention to, a step that comes before any interpretation begins. We take note of events that confirm what we already know and dismiss information that runs counter to it.
The current information ecosystem withers our defenses against selective observation by filtering out most contrary information. If we repeatedly had to confront information that made us uncomfortable, eventually we would be forced to accommodate it.
False consensus. It is hard for people to distinguish between what they themselves think and what other people actually believe. We imagine that people share our views and that our viewpoint is the majority one. This is false consensus. This cognitive error makes it easier to imagine an “us” that is a normal majority and a “them” that is a crazy, immoral minority.
False consensus powers much of the quick rejection of contrary opinions. Why engage with a piece of information, or a person, when you believe their ideas are so beyond the pale that they do not need to be considered? This cognitive bias makes it easier, for example, for conservatives to dismiss claims from the progressive academic left: they must be crazy, because everyone I know is rational and thinks like me.
Like selective observation, false consensus is made easier by a media environment that removes contrary information. We have fewer occasions to encounter someone who holds a different opinion than ours. Moreover, because much of our interaction is online, we can easily remove ourselves from such situations instead of doing the hard work of making a connection and finding consensus.
Premature closure. When we come to conclusions too soon, without collecting sufficient evidence, we have committed premature closure. Premature closure, as its clinical-sounding name suggests, is a cognitive bias that professionals in the medical field must guard against. A primary care doctor may see hundreds of patients who relay similar symptoms but must resist the tendency to make a quick (and possibly incorrect) diagnosis.
Premature closure happens in everyday life and is similar in function to prejudice. As with stereotyping, we try to avoid prejudging people. But it is hard to apply this diligence to other areas of life. We often make snap judgments about issues based on insufficient evidence. Consider what happens after an accusation of racism, sexual harassment, or homophobia is made. The accused party is tried and convicted on social media before anyone knows the specifics of the case. There may even be calls for boycotts, cancellations, and terminations. Yet only the accuser, the accused, and the legal professionals involved really know anything about the event. This is premature closure in action.
The Paradox of Choice — In a Good Way
We are all trying to navigate the world as best we can with the information available to us. Unfortunately, our cognitive biases, combined with the echo chambers and filter bubbles produced by modern communication technologies, will inevitably lead most of us to draw simplistic conclusions about the reality around us.
So how do we deal with this problem? We can’t change the information ecosystem. And we certainly can’t change how our brain works. But we can make the information ecosystem work for us by constructing an environment cluttered with contrary information.
This is easy on social media. All you need to do is follow people, pages, and channels that support contrary ideas for a few months. If you feel that this gives attention to ideas and people you don’t like, think of it instead as sharpening your own arguments by listening to the “other side” and anticipating what they might say in a debate. Even if you adopt this rationale, the simple act of listening to contrary ideas will activate the natural processes associated with taking in contrary information. When exposed to information, your brain will automatically try to process it.
At the very least, you will gain an appreciation for the nuances of any given issue. That is already a step forward because now you are accepting the fact that the world is extremely complex.
For some people, it may cause deliberation and equivocation. This is the paradox of choice: more options lead to less action. But when it comes to complex issues such as gun control, criminal justice reform, racism, economic inequality, and so on, we need fewer knee-jerk reactions. I think this would be the best possible outcome.