Why I argue online.

Source: xkcd.com

This was written by Albert on his blog and imported to Chiasma’s blog as it might be relevant to our readers.

Anyone who knows me will tell you that I like debating in general. I also argue online. A lot. I can stick with a Facebook or Reddit thread for days, and some of my prime destinations on the web are forums where people debate topics ranging from philosophy to politics, from dissecting a One Piece manga chapter to arguing about the latest Hyperloop design.

Arguing online is different from arguing in “real time”. Arguing online has a very specific temporality that removes the instantaneity of conversations where one has to listen and reply almost immediately. We have a tendency, especially on polarizing topics, to want to defend an opinion more than to reach a consensus, mainly in order to avoid “being wrong” or “losing face”; it is almost a knee-jerk reaction. When arguing online, on the other hand, one has the time to read, pause, think, research, prepare an answer and then post. One would then imagine that online arguing would be calmer and more rational than its AFK (Away From Keyboard) counterpart. Unfortunately, a quick visit to any of the websites I mentioned earlier, or to any random comment section on any large website or community, is enough to notice that this could not be further from the truth. Online “discussions” are often closed and polarized, with the infamous “troll” representing the apex of this facet of online communication.

It is inevitable then that, every once in a while, some of the people I know ask me why I “waste” my time arguing on the internet, often with people I have never met, or with people on my Facebook friends list whom I never see or talk to except for the occasional skirmish in the comments section of some status about homeopathy (www.howdoeshomeopathywork.com). This question is often followed by an incredulous: “It’s not like you believe that you’re going to change their mind, do you?”. I’ve always found these questions to be unsound, mainly because they reduce discussions to changing the opinion of the person we’re arguing with; and while this is obviously one of the goals of any discussion, it is far from being the only one. On the same note, although some research does support the view that it is very hard to change someone’s opinion in an online discussion (and that arguing can even reinforce it in some cases), things are not as clear cut. To address these points, we have to look at how opinion change happens. How do our opinions flow? Do we get swayed by opinions that diverge from ours, or does opposition just make us more convinced of what we already believe? I will attempt to explain my motivations for online arguing through the published research on mental flexibility/rigidity and on how opinions usually function and flow. Let’s start by clarifying one of the major arguments against arguing online.

The backfire effect is one of the most cited and counterintuitive effects studied in argumentation between two opposite positions. It was formalized by Nyhan and Reifler in 2010. Nyhan and Reifler found that when you present someone with evidence against a deeply held belief, that belief actually gets stronger. So when you start breaking out all your references, links and proof to your “conspiracist” friend about the moon landing, you’re actually pushing them to believe even more strongly that we never went to our natural satellite. Nyhan and Reifler were adding yet another brick to the edifice of what is called “motivated reasoning”: a biased way of thinking where a person uses their reasoning to arrive at an already defined conclusion. We tend to assume that when we reason, we are a bit like a scientist: we look at the information in front of us and try to follow it to a conclusion. In motivated reasoning, we act less as scientists and more as lawyers: we want to defend our client, and we have already decided on the conclusion as we set out to find the elements needed to prove it. This concept had been touched upon by many researchers but was first formalized by Kunda in 1990. ‘Motivated reasoning’ has been massively developed since Kunda, and one of the main reasons for its existence is the reduction of cognitive dissonance, which we will not get into in this article (I wrote an article in French on cognitive dissonance that you can find here. I will update this post when the English version is up).

Source: phdcomics.com

Are motivated reasoners a hopeless case then? Is our conspiracist friend bound to forever dismiss the astonishing testimonies of the Apollo crew?

It appears that, even in people with deeply held beliefs, this is not necessarily the case. While the backfire effect is real, Redlawsk, Civettini and Emmerson (2010) propose the existence of an affective tipping point where motivated reasoners eventually end up “getting it”. The authors agree that, while a motivated reasoning voter does become more supportive of a preferred candidate in the face of negative information about that candidate, they probably cannot hold this position ad infinitum. To do so would require continued motivated reasoning even in the face of extensive disconfirming information. They propose that motivated reasoning processes can be overcome simply by continuing to encounter information incongruent with expectations, and they show experimental evidence that such an affective tipping point does in fact exist. They also show that anxiety increases as this tipping point is reached, which is coherent with cognitive dissonance theories. “The existence of this tipping point suggests that voters are not immune to disconfirming information after all, even when initially acting as motivated reasoners.” Because of cognitive dissonance reduction and other factors, this tipping point is extremely hard to reach in one go: it takes a massive amount of counter-information and repeated exposure. Still, all hope is not lost. An example of an affective tipping point in recent history is the change of public opinion on smoking as the evidence piled up. Smoking was, for a long time, considered “good for your health”, and it took a long battle to switch public opinion. That battle is still ongoing in many places around the globe: even though research has linked smoking to negative health outcomes for almost a century, banning indoor smoking remains an uphill battle in most countries.

This all seems quite grim for the usefulness of arguing online. However, there are two major caveats that often get lost in the science news cycle:

  1. The backfire effect and other motivated reasoning research conclusions do not apply to all online discussions: they are most potent when arguing with a person who has deeply held beliefs and polarized opinions. This is a very important factor to keep in mind. If you’re using facts to debate a topic that neither you nor your interlocutor is deeply invested in, finding a middle ground is a highly likely outcome. At the beginning of this article, I said that it was quite reductive to only argue with someone if we think we’re going to change their mind, and, as we’ve seen so far, doing so is generally extremely hard. But we were mainly talking about highly motivated people who are very much committed to their opinions. While a lot of online discussions happen with highly engaged people, not everyone is, and refusing to engage in conversations because a person might be polarized means losing a lot of interesting exchanges with people who hold a different opinion but are mentally flexible enough to absorb new information.
  2. Contrary to AFK arguments, which happen one-on-one or in small group settings, online there are other people reading who might not be as engaged and attached to their opinions. An online argument is also directed at them. When I’m arguing in a comment section, there is the person I’m debating with, but my arguments are also directed at the larger audience of readers who might come across our exchange. A “silent reader majority” of sorts. Those readers are probably much less motivated reasoners than my interlocutor, and using sound, rational, evidence-backed arguments online is also a way for those readers to be exposed to different opinions, even more so than the person replying to me, especially if that person holds a highly polarized opinion. This is an extremely important point, because the alternative would be to have readers exposed only to those with highly motivated reasoning, and any nuanced, middle-ground positions would be lost because of the “why would you argue online with them?” rationale.

We’ve seen how opinions change in highly polarized positions with the tipping point and backfire effect examples, and why it is nonetheless very important to engage in conversations despite these effects. In the next section we’ll look at how opinions fluctuate in cases of less strongly held beliefs (in our case, the passive audience), in both the presence and absence of extremely polarized opinions, and how moderately opinionated people can develop such extreme positions. To do that, we’re going to use a model that describes how interconnected individuals can influence one another’s beliefs, called the Deffuant-Weisbuch (DW) or Relative Agreement (RA) model, developed in 2002, and its reexamination by Meadows and Cliff (2012).

A small note: The next part is a bit more technical than the first one, because we go into the modeling of opinion change, but I wanted to include the details for those interested. Otherwise, feel free to jump to the conclusion :)

The RA model describes a collection of agents (modeled people), each holding opinions with a certain degree of confidence. Agents can affect each other’s opinions based on how confident they are in their own beliefs. In this model, extremists are people who hold minority opinions and are extremely confident in those opinions. If moderates are confident in their opinion, extremists will have a hard time swaying it, and vice versa. Deffuant et al. argued that there were three possible outcomes of the simulation:

  1. Moderate agents converge towards the center (central convergence) [fig 1.]
  2. Moderate agents split into two approximately equal-sized groups, one of which converges towards the positive opinion and the other towards the negative opinion (bipolar convergence) [fig 2.]
  3. The majority of the moderate agents converge towards a single extreme (single extreme convergence).
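To make the mechanism concrete, here is a minimal sketch of the Relative Agreement update rule in Python. The population sizes, the extremists’ uncertainty of 0.05, and the convergence speed `mu` are illustrative choices for this sketch, not the exact settings from Deffuant et al.; the core idea is only that an agent’s influence grows with the overlap between the two agents’ opinion segments relative to the influencer’s uncertainty.

```python
import random

def ra_step(x, u, mu=0.2):
    """One random pairwise interaction under the Relative Agreement rule.

    Each agent holds an opinion x in [-1, 1] with uncertainty u, i.e. the
    segment [x - u, x + u]. Agent j influences agent i in proportion to the
    overlap of their segments, so confident agents (small u) sway others
    more than they are swayed themselves.
    """
    i, j = random.sample(range(len(x)), 2)
    # Overlap between the two opinion segments.
    h = min(x[i] + u[i], x[j] + u[j]) - max(x[i] - u[i], x[j] - u[j])
    if h > u[j]:  # j only influences i if they "relatively agree"
        ra = h / u[j] - 1.0  # relative agreement, at most 1
        x[i] += mu * ra * (x[j] - x[i])  # opinion drifts towards j's
        u[i] += mu * ra * (u[j] - u[i])  # confidence drifts towards j's

def simulate(n_moderates=200, n_extremists=20, moderate_u=1.2, steps=100_000):
    """Moderates with uniform random opinions, plus +/- extremists."""
    x = [random.uniform(-1, 1) for _ in range(n_moderates)]
    u = [moderate_u] * n_moderates
    for k in range(n_extremists):
        x.append(1.0 if k % 2 == 0 else -1.0)  # extreme opinions...
        u.append(0.05)                         # ...held with high confidence
    for _ in range(steps):
        ra_step(x, u)
    return x, u
```

Playing with `moderate_u` is what distinguishes the outcomes listed above: confident moderates (a small `moderate_u`) tend to produce the central convergence of fig 1, while very uncertain moderates drift towards the extremes, as in fig 2.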

Let’s look at cases 1 and 2, as case 3 could not be replicated by Meadows and Cliff.

Fig 1: Central convergence. X-axis: iteration; Y-axis: range of opinions; color: confidence (green = not confident, red = very confident). Deffuant (2002)

In this figure, the extremists hold their position at high confidence levels (in orange); their beliefs are not swayed and they remain at the fringes. However, in this simulation the general uncertainty was set to a low level, so moderates were also fairly confident in their opinions (yellowish-green), which leads to a central convergence where the system stabilizes in a moderate position. This is the situation for well-established topics that are only put in doubt by a small fringe of the population, albeit with extreme confidence; an example would be the “flat earth society”.

However, if the moderates are not very confident in their opinion on a certain topic, they become more prone to being swayed by an extremist:

Fig 2: Bipolar convergence. X-axis: iteration; Y-axis: range of opinions; color: confidence (green = not confident, red = very confident). Deffuant (2002)

In this scenario, moderate agents have very low confidence in their opinions and are thus swayed towards one extreme position or the other. This creates a locked-down system that is harder to “depolarize”, as the more extreme an agent is, the harder it is to sway their opinion.

However, what is interesting in the RA model is that what sways the agents in the middle is not necessarily the extremists themselves but also the people “leaning” towards an extreme, who are in contact with both the center and the extreme positions. Someone leaning up might have a greater effect in polarizing opinion towards that position than the extremist all the way up in red, since the leaner is already under that extreme’s influence.

This would seem to argue for “cutting off” extreme positions: censoring them and denying them a platform to polarize opinion. However, this could not be more wrong. There will always be people with extreme positions, and if we cut them off, we deny them a potential tipping point and create another system, cut off from the general population, that will only radicalize further. We have already seen this happen in the past year, in what are often referred to as “echo chambers”: Trump supporters and Brexit supporters often felt “cut off” from the public debate, created “bubbles” on social media or turned to specific media outlets that echoed their highly polarized opinions, and won both votes.

The RA model presents a solution to this conundrum, which brings us back to arguing online, a topic we have strayed from for a while now: creating higher confidence in people’s existing beliefs. Education instead of moderation. Reducing uncertainty reduces the influence of extremists without needing to attack them. That way, we get a double bonus: by not attacking them, we avoid triggering their backfire effect, AND we give readers information that reduces their uncertainty. This double action of debating online in a non-attacking form, when possible, while keeping the rest of the readers in mind, makes arguing online if not useful, then at least not as useless as my friends seem to think.
There are also other methods of debating without confronting or attacking a point of view, such as asking people to explain their position in detail, which can expose the illusion of explanatory depth (more to come on that in a future post).

Arguing online can be tiring and taxing, and a lot of the time it seems like a quixotic exercise; however, it also has its positives. It is good training in self-control, as the challenge is often not to lose patience, resort to name-calling or fall back on close-ended replies, especially with highly motivated reasoners. It also helps in fighting our own polarization: spelling out one’s own opinion is a very good exercise in mental flexibility (as long as it is not motivated reasoning) and a good way to notice the shortcomings of our opinions when they are motivated. Knowing about the affective tipping point, the RA model, and the silent readers who might come across a comment thread, a Reddit post or a heated conversation about the best NBA player of all time is an even stronger motivation to keep going.

So next time someone asks me why I argue online, or if someone asks you why you do it, send them this way and let’s get a debate going :)

Originally published at medium.com on January 8, 2017.

We are a group of people interested in mental flexibility. Get in touch at facebook.com/chiasmaparis
