Multiple testimonies before Congress, a presidential accusation, and a public apology: There is no doubt that big tech has become the new target in a decades-old argument about the media’s supposed anti-conservative bias.

The fears may be misplaced. My own research on conservative news practices demonstrates the complexity around this issue and the ways in which voices are silenced online. In fact, conservatism abounds in media, depending on what you search. Google isn’t shaping our ideological positions: It’s the other way around.

In other words, what we believe shapes what we search, how we search, and when we search for it.

Nonetheless, platforms like Facebook, Twitter, and YouTube do have the power to diminish content creators who espouse liberal or conservative positions. That runs counter to the ideals of Web 2.0, which brought with it a promise that our ideas would not be hampered by the power of a few large conglomerates.

But algorithms and their human controllers at large tech companies aren’t all that determine social media visibility. My research demonstrates that users largely determine which topics and conversations are heard and seen.

Using data collected over the past five years, I aim to draw attention to the role the public plays in determining which ideas get shared and which don’t. The silencing tactics I have observed occur along two crosscutting dimensions: visibility and action. Following Stuart Hall’s distinction between overt and inferential racism, I first classify “silencing” as either an implicit or an explicit action. “Visibility” refers to whether those implicit or explicit actions take place in the so-called front or back stage. Drawing on Erving Goffman’s dramaturgical analogy, front-stage silencing is highly visible, while back-stage silencing is hidden from public view.

Conceptualizing silencing as implicit/explicit (action) and front stage/back stage (visibility) elucidates four powerful ways to silence: avoidance, harassment, reappropriation, and deletion. These processes are neither platform-specific nor mutually exclusive; rather, they exist along a continuum, opening and closing opportunities for expression.

Avoidance

An inconspicuous and invisible form of silencing that signifies a topic is unimportant. Also used as a strategy to circumvent harassment.

At its most inconspicuous level, silencing is avoidance. Despite the proliferation of open-source platforms, avoidance remains a powerful tactic for obscuring content.

For example, on Wikipedia (dubbed “the free encyclopedia that anyone can edit”), fewer than 18 percent of English-language biographies are about women. People often respond to this statistic with another — most of the people who create Wikipedia content are men, thus explaining the gap. Yet this response only exemplifies the power of avoidance — putting the onus on the historically marginalized to achieve adequate representation.

By actively avoiding topics, those in a dominant position can steer the conversation, affording visibility to only some voices. Suggesting that more women editors would increase the number of biographies about women also fails to account for the fact that Wikipedia can be hostile to women and may be resistant to female participation. Given this environment, my research demonstrates that women tend to avoid sharing their expertise.

This avoidance is not exclusive to Wikipedia. In my professional life, I know of many women who avoid weighing in on social media discussions because they don’t want to deal with the harassment that ensues.

Harassment

A direct and visible form of silencing that intimidates individuals/groups.

In contrast to the passive behavior that underlies avoidance, harassment is a form of silencing that is explicit, direct, and visible.

Numerous reports and research projects demonstrate that women and people of color are continuously at risk of harassment online. (There’s good research on this from the Pew Research Center and the Guardian.) These studies document the problems women and people of color face, but they tend to focus exclusively on visible forms of harassment, including unwanted sexual advances or innuendo, stalking or repeated contact, verbal threats of violence, doxing, revenge porn, and simulated rape.

Harassment is used by others in a community to silence the kinds of expression they don’t like or don’t agree with, yet many fail to consider how the fear of harassment also drives avoidance, a form of self-censorship. While legislation surrounding online harassment has improved, those who are frequently harassed are often left with few resources to combat the problem and begin avoiding those spaces altogether.

Reappropriation

An implicit and visible form of silencing that shifts the power dynamic of a phrase.

Sociologists and anthropologists use the concept of reappropriation to refer to the process by which a group reclaims artifacts or terms that were previously used in a disparaging way. However, my silencing matrix draws on intersectional and gender theorists to analyze how concepts created to empower historically marginalized groups are diluted in the spirit of inclusivity.

Instead of taking a term that has a negative connotation and giving it power (for instance, the evolution of the word “queer” within the LGBTQ+ community), reappropriation as silencing flips the power of the expression back in favor of those already in a position of privilege, effectively muting or depoliticizing subversive expression.

An apt example of reappropriation as silencing is changing the hashtag #BlackLivesMatter to read #AllLivesMatter. While this subtle action still allows the original hashtag to persist, it also creates a canned response meant to disempower the Black Lives Matter movement. On the surface, #AllLivesMatter seems to foster inclusivity, but this subtle change in text makes Black Lives Matter seem exclusionary and aggressive. Drawing on the “naturalized representation” of #AllLivesMatter, the act of reappropriation enables racism by refusing to engage with social and political issues (like police brutality and racial profiling). #AllLivesMatter not only shifts power back to those who already hold it but also lends itself to the memetic repurposing seen across social media.

Deletion

An explicit but invisible form of silencing woven into most content moderation decisions.

Unlike harassment and reappropriation, deletion is a relatively invisible form of power. This form of silencing starts with a flag: a signal from others in the community that the content is inappropriate or does not meet their standards for inclusion. While these flagging systems are often created with the best intentions, a sociological analysis of the results reveals some troubling findings.

In my extensive research on Wikipedia, I find that women are more likely to be categorized as “non-notable” and nominated for deletion. My study of Yik Yak, a defunct social media app that was once popular on college campuses, found that racist content was regularly removed via its flagging system (downvoting) but that the community removed resistance dialogue (for example, Black Lives Matter content) with the same fervor. These findings indicate that the content moderation process is largely determined by the opinions of a loud few who take the time to flag content for review.

What my silencing matrix demonstrates is that the freedom of some to be heard and seen is coupled with the “unfreedom of many to communicate without response.” Not only are topics routinely avoided in participatory media environments, but my research also demonstrates more concerted efforts to silence. The power to intimidate, rewrite, or delete the contributions of those we don’t agree with is not bestowed on a relative few at the top of the big-tech food chain. Discussions that focus exclusively on “the algorithms” or the deliberate decisions of top executives distract us from the larger situation at hand: In an era of participatory media, we all have the power to silence, and it comes in many forms.

Silencing can be a subtle reframing of a topic so that it better aligns with our existing ideological positions. It can be the simple gesture of a flag designed to remove content we simply don’t like or agree with. It can also be malicious: the deliberate harassment of individuals or organizations intended to push them off a platform entirely. And the fear of harassment or rejection can lead us to self-censor, avoiding participatory media spaces altogether.

The point of starting this conversation is to better understand the relative importance of community silencing. We regularly argue that technology silences the public, but we have little knowledge regarding how much censorship is bottom-up versus top-down.

In an era of binge watching, status-update alerts, scrolling newsfeeds, and news that trends on the strength of storylines and upvotes, the power of the crowd to silence has disastrous consequences. More attention to this matter is needed, and I’m calling on readers to help. Can you think of other examples of avoidance, deletion, reappropriation, or harassment? If you feel comfortable, please share them in the comments below. One of the few ways to combat silencing is to amplify the ideas that others are trying to push out.