The False Dichotomy of Online Discussions
Since its inception, many people have viewed the Internet as a potential vehicle for democratic deliberation and political engagement. The Internet allows people with a wide range of perspectives and backgrounds to come together to discuss current events. Furthermore, anonymity allows people to speak on a “level playing field” — preconceptions and biases cannot enter the conversation if you have no idea who you are speaking to. People should be able to have respectful conversations with those who hold differing views, and the parties involved should come away from the discussion having learned something. It should be easier to be politically involved and engaged.
However, life rarely works the way you want it to. For all its promise, the Internet has not facilitated the level of discussion that many had hoped it would. The “filter bubble” is one reason for this — people simply don’t speak to as many people with different views as we would hope. People also aren’t as polite as deliberative discourse scholars had hoped. Incivility is rampant online, and trolls in particular fuel this fire. One of the features of the Internet that people had hoped would encourage deliberative discourse — anonymity — also allows for harassment and cruelty. The original thought was that under a veil of anonymity, people would have to listen and give equal weight to the thoughts of those whom they might dismiss or belittle in person. Instead, anonymity allowed some of humanity’s nastier urges to be indulged — people are able to attack each other through their devices in ways they would not dare to in person. These attacks often target people for the very things we hoped anonymity would mask — race, gender, sexuality, religion, and so on (Hughey & Daniels, 2013; Lenhart & Ybarra, 2017; Marwick & Caplan, 2017). Anonymity does not protect those we had hoped it would protect. Instead, it shields attackers and bigots from repercussions for their actions.
For a while now, people have been searching for a solution to online harassment. Some researchers are creating Artificial Intelligence programs that detect and prevent it (Pew Research Center, 2017). One solution favored by some experts is the removal of anonymity. People are less likely to post abuse under their own names, so attaching people’s names to their words online could curb the problem significantly (Pew Research Center, 2017). Of course, eliminating anonymous participation has potential problems of its own. If everything we say online can be tracked and linked to our real identities, then governments and other powerful institutions could monitor what we say, potentially leading to the suppression of free speech. Anonymity might still exist, but only in hidden, enclosed spaces that foster a single point of view, deepening the “filter bubble” effect. Free discussion could be suppressed, and where it did exist, it could be severely limited. This may sound like a paranoid dystopian idea, but it is a fear that some experts hold (Pew Research Center, 2017).
A tension is thus produced. Should anonymity exist, if it enables abuse and harassment? Or should everything possible be done to prevent harassment, even if doing so may stifle free speech? Proponents of anonymity tend to argue that anonymity allows for free speech and unfettered discussion, and that harassment, while unpleasant, is a necessary evil that makes possible the kinds of discussions that deliberative discourse scholars idealize. On this view, people should just “grow thicker skin” and be less sensitive. However, this line of thinking assumes that the discussion existing alongside harassment is not influenced by the harassment’s presence. Unfortunately, this does not seem to be the case.
Exposure to online incivility (even as a lurker) appears to have several effects. Exposure to incivility provokes anger, and anger promotes the defense of pre-existing attitudes and makes people less likely to critically evaluate new information (Phillips & Smith, 2004; Borah, 2014; Gervais, 2015; Weeks, 2015). Exposure to uncivil comments can polarize perceptions of the topic being discussed, as well as make people feel that society itself is more polarized (Anderson, Brossard, Scheufele, Xenos, & Ladwig, 2013; Hwang, Kim, & Huh, 2014). Uncivil comments also decrease people’s perceptions of the credibility of the source of the original message (Ng & Detenber, 2005). The implication is that if we want people to critically examine their beliefs and update them after encountering and evaluating new information, then harassment and incivility need to be eliminated. Their presence does not just offend people; it changes the discussion itself and makes people less likely to think critically about new information. There cannot be deliberative discourse in the presence of incivility and harassment.
Furthermore, harassment and incivility do not impact everyone in the same way. They tend to be experienced differently by different people, and these differences often fall along traditional lines of power, which allows existing power differentials to be reinforced (Marwick & Miller, 2014). Reinforcing these power structures online undermines one of the supposed benefits of anonymity — allowing people to speak from a “level playing field”. Even if people conceal their identities online, witnessing harassment of this nature can shut down or reshape the discussion. Whatever harassment and incivility are present are both visible and persistent. For instance, if something awful is said about one race of people online, even if it targets no specific person, many people can see the comment. Not only can they see it, but it remains present after the topic changes or the discussion moves elsewhere. One comment can therefore influence the entire audience, regardless of whether the audience takes part in the discussion. This can freeze people out of the conversation and, in the process, create the impression that whatever the harassment espouses is the view most other participants accept. If someone spews noxious racism and it goes uncontested, then those reading will come away believing that most others in the discussion support the racist statement. This shapes how people perceive the views of society, and reinforces current power structures.
This view, then, that anonymity allows for helpful deliberative discourse with a small side of hatred is not entirely correct. It would be more accurate to say that anonymity allows for discussions that are at least partly shaped by incivility and harassment. That is not to say, necessarily, that anonymity should be disposed of. It does have several benefits. However, incivility and harassment need to be curbed. How? As I mentioned earlier, many think that Artificial Intelligence programs will be able to detect and prevent incivility and harassment. However, the construction of these programs will need to be considered very carefully. When building them, we will have to decide what constitutes harassment and incivility, and what the response will be to its different kinds and severities. These decisions will be made by the programmers working on the program — and those people will make them through the lenses of their own experiences. Certain values will get built into these programs, and those values may or may not reflect wider societal values. Programmers do not tend to be a very diverse group, and it is possible that the values built into these programs will be out of touch with the reality of most users, or of the users who are most often targeted by incivility and harassment. What, then, can be done? If technological interventions aren’t the sole answer, then we must turn to social interventions. Online communities are created, and they develop their own norms to govern themselves. If communities can be built by the wider set of people that the Internet allows us to connect, then these norms can be set so that they serve pro-social goals. Interestingly, it also looks as if small technical interventions by news organizations can encourage different forms of socialization in their comments sections (Stroud, Muddiman, & Scacco, 2016).
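To make concrete why the construction of these programs embeds values, consider a deliberately simplified sketch. This is a hypothetical toy filter, not any real moderation system (real systems use trained classifiers rather than word lists); the blocklist terms and the threshold are placeholders, and choosing both is exactly the kind of value judgment the programmers end up making:

```python
# Toy moderation filter (hypothetical sketch, not a real system).
# Both choices below are value judgments made by whoever writes the code:
BLOCKLIST = {"slur1", "slur2"}  # which words count as abuse (placeholder terms)
TOXICITY_THRESHOLD = 0.5        # where to draw the removal line


def toxicity_score(comment: str) -> float:
    """Return the fraction of words in the comment that are blocklisted."""
    words = comment.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for word in words if word in BLOCKLIST)
    return flagged / len(words)


def should_remove(comment: str) -> bool:
    """A comment is removed when its score meets the chosen threshold."""
    return toxicity_score(comment) >= TOXICITY_THRESHOLD


print(should_remove("slur1 slur2"))                       # True: all words flagged
print(should_remove("a mostly fine comment with slur1"))  # False: below threshold
```

Note that the second comment still contains an offensive term, yet survives the filter — whether that outcome is acceptable depends entirely on the threshold and word list the programmers picked, which is the point being made above.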