Facebook and Democracy: A Primer
[A] society which is mobile, which is full of channels for the distribution of a change occurring anywhere, must see to it that its members are educated to personal initiative and adaptability. Otherwise they will be overwhelmed by the changes in which they are caught.
— John Dewey, Democracy and Education, 1916
They are peculiar things, the distant, fragile connections that we collect on Facebook. I understand and appreciate their appeal, not just as a fledgling academic grateful for a built-in audience, but as a human who appreciates that not all that is past is lost. I’ve come to realize that these brittle connections, however, may not have a ton of inherent value if we consider the toxic ecosystem in which they persist.
Take what happened to me this week. I decided to chime in on a conversation thread started by a Facebook employee I went to middle school with. His post insisted that former Facebook exec Chris Hughes’s efforts to break up Facebook meant that Hughes should forfeit his Facebook earnings. It was a very weird take. I decided to let him know, aware that this proxy argument would give way to the more central one that would and should be held between two otherwise-disconnected people like ourselves in a public forum: Should Facebook be broken up? When should it be considered a public utility? On my side, someone who teaches in a university and does research on democracy in the digital era, and on the other, a Facebook employee and his network of colleagues.
They found any criticism of their employer unwarranted. Facebook, their argument went, provides “millions” of jobs. The welfare of its employees, contractors, and those whom it enables to do business is too important for Facebook to be broken up. After all, the only vocal opponents of Facebook are from archaic industries like newspaper publishing, deservedly flailing in changing economic environments. The topics of privacy, democracy, fake news, harassment… it never so much as occurred to them that these could be areas of reasonable complaint. This seemed absurd to me, in light of all the whistleblower exposés, sweeping GDPR legislation in the EU, Cambridge Analytica, Zuckerberg’s congressional testimony, AND A ROHINGYA GENOCIDE, just to name a few things off the top of my head. After all, less than three days before this happened, Facebook paid five billion dollars to settle a privacy case.
I started to bring these topics up, feeling both incredulous and frustrated. I ignored my own rules about social media disagreements (see below), and it felt good to engage self-righteously. “Fuck these guys,” I told myself. However, I regret not taking a more strategic, tactful approach. Before long, dude said something fucked up about me always being crazy and unfriended me. I was taken aback, but I soon recognized that nothing about this should be surprising.
Everything we know about Facebook’s pernicious effects on democracy leads to this inevitable result. Here we have siloed networks of quasi-public exchange, polarized by a profit-maximizing ecosystem. Facebook privileges self-congratulatory messaging and denounces detractors in order to maximize our engagement, and it aggressively mines our personal data to cloister us to this end. Of course a long-time Facebook employee would reject my challenges on their face: accepting critique would not just disrupt everything within his curated social media frame but impugn the very way he makes a living as harming the world. If I’m not crazy, then he’s faced with some uncomfortable truths he set aside a long time ago.
Welcome to the current chapter in our massive natural experiment known as democracy. I present to you the latest challenge of this form of conjoined living: Rapid advances in communicative technology — much of which has occurred within our lifetime(s) — challenge our ability to create a more informed public that can functionally participate together in an already fragmented society. What follows is a reminder: Little discussed here hasn’t been discussed elsewhere, but it occurs to me that many of us still don’t recognize what is going on. So consider this a brief primer on our current dilemma with the public sphere, one that may help set the stage for anyone having such a discussion moving forward.
So, what is this “public sphere,” exactly?
In 1989, Jürgen Habermas wrote about the emergence of something he coined the public sphere. He theorized that in the 18th century there was a historic opportunity for ideas to be exchanged, ostensibly, on the strength of their individual merits and not the background of the speaker. When an idea became consensus through this process, it was a powerful, public check on the ruling class. Most important was that it was accessible to all, regardless of background. Habermas located the public sphere as we see it today in the establishment of the fourth estate, more commonly known to us as journalism.
It has three basic functions:
- Journalism is a watchdog; it keeps government officials accountable for their actions.
- It is the site of public debate and promotes civic duty.
- Society at large benefits from the consensus it builds, which strengthens the social fabric.
During our first two centuries as a republic, however, the reality was that not everyone had the opportunity to be a full participant, which has always been the severe limitation of Habermas’s contribution. Other academics, most notably Nancy Fraser in 1990, developed theories about subaltern counterpublics. These smaller counterpublics act as important sites of organization and resistance, but they rarely command broad recognition. The reality is that opportunities for participation in central conversations of broad public concern have historically been dominated by a few elites.
As technology advanced beyond that of the 18th and 19th centuries, power only grew more centralized. Publishers, then radio broadcasts, followed by television and cable networks expanded access to information as they cemented their own influence. By the middle of the 20th century, the Commission on Freedom of the Press (1947) warned us about the few wielding tremendous power over the many, as large media conglomerates emerged to exert troubling amounts of market control. Famously known as the Hutchins Commission after its chair, Robert Hutchins, then president of the University of Chicago, it was assembled in response to rising public concern about media ownership resting in the hands of the few. The commission — composed entirely of white, male academics — indicated that hulking media conglomerates must learn to be responsive to the broader needs of a diverse democracy.
Little action followed their many recommendations.
Then (“then” doing a ton of work here, I know), the advent of the internet appeared to fulfill the democratic goals of the commission. It changed the scope of who the relevant actors are by opening the gates to everyone. The public sphere became crowded. Tons of noise was being generated by people NYU Professor Clay Shirky calls “shouters.” Shirky argues that shouters, in and of themselves, are neither good nor bad. Allowing many more voices into the public sphere means those unjustly silenced are now heard (great!). Think Black Twitter.
They are heard, however, right along with the thoughtless and evil among us (oh, oh no). Think 4chan, Holocaust denialism, etc. It can be both good and bad to turn up the volume on everyone’s personal microphone. We get new ideas, but we also give those who favor the status quo an equal opportunity to drown out those new voices. Owen Fiss refers to this as the irony of free speech: when speech itself inhibits speech, “the classic remedy of more speech rings hollow.”
Beyond the complication of our “shouters” is that while the content is ostensibly democratized, the terrain within which we now exchange information is controlled by even fewer elites with even fewer democratic constraints. This is an important point that is routinely glossed over, and one that Facebook only embraces when it suits them. For example, Facebook has incredible editorial influence through its News Feed, and if there’s just one thing to take away from this piece, I would hope it would be that.
At least with television there remained some veneer of public interest. While we can now participate ostensibly on our own terms, the way Facebook controls the environment to ensure that they profit from your participation has unintended consequences. Their incentives revolve around your engagement and the monetization of your private information. Despite public statements that indicate otherwise, they have little demonstrable interest in making sure participation serves democracy more than their financial interests. Engagement metrics prioritize the extreme; the polarization and cloistered echo chambers that result serve to do little more than drive us apart. They also elevate fringe voices.
And because of the degree of penetration of this technology, not logging on is tantamount to not participating in democracy. Many of us who wish to disabuse ourselves of Facebook and other social media find ourselves confronted with a dilemma. Where and how do we participate if doing so is counterproductive? Twitter? Nextdoor?
The News Feed and the Erosion of the Public Sphere
Facebook provides a massive public sphere with over two billion monthly users. In doing so, they are to be credited for the inclusion of historically excluded voices, but they are also culpable for the preservation of the status quo through the creation of filter bubbles: algorithmically-driven enclaves of people and ideas that reinforce our already-held personal opinions. The ecosystem has exploited human nature to prevent us from engaging in the democratic, cosmopolitan promise of the internet. Billed as the great bridger of worlds, the internet has instead cloistered us into echo chambers in which we only hear what we want to hear, fitted with dopamine-releasing feedback loops by increasingly powerful mathematical engines.
As of 2015, almost all ad targeting on the internet still used simple logistic regression modeling. This is why much of the advertising online feels repetitive or completely irrelevant. One of my sources in tech rolled their eyes at me when I explained this to them, pointing out that Facebook’s feature engineering has long been beyond basic logistic regression. Now they employ engagement experimental design mechanisms that “can run real-time perfectly, characteristically matched cohort a/b testing of every engagement metric, constantly,” whatever that means. What matters is that as Facebook combines more sophisticated machine learning techniques with the amassed data at its fingertips, its ability to change our behavior can only grow.
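To make that concrete, here is a minimal sketch of what simple logistic-regression click prediction looks like. Every feature name and weight below is a hypothetical illustration of the general technique, not Facebook’s actual model.

```python
import math

def click_probability(weights, bias, features):
    """Score one user/ad pair with a logistic regression model:
    the sigmoid of a weighted sum of engagement features."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [past clicks on similar ads, minutes on site, friend overlap]
weights = [2.0, 0.5, 1.5]
bias = -3.0

engaged = click_probability(weights, bias, [1.5, 2.0, 1.0])     # heavy user
disengaged = click_probability(weights, bias, [0.1, 0.2, 0.0])  # light user
```

The crudeness is the point: a model like this keys almost entirely on past behavior, so it keeps serving you more of whatever you already clicked on.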
So how do they actually influence our behavior? Facebook’s move to a non-chronological timeline, the News Feed, primarily exposes users to content its algorithm has determined they are more likely to click on. The Wall Street Journal’s Red Feed/Blue Feed remains a powerful testament to this insulation, which breeds more usage and hence more revenue. Increased time on the site and increased clicks on the articles it serves are converted into real dollars for the organization.
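The shift from a chronological timeline to an engagement-ranked one can be sketched in a few lines. This is an illustrative toy, assuming a `predicted_click_prob` score that stands in for whatever Facebook’s real engagement model produces.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch; newest is largest
    predicted_click_prob: float  # stand-in for a model's engagement score

def chronological_feed(posts):
    """The old-style timeline: newest first, regardless of engagement."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts):
    """The News Feed approach: whatever you're likeliest to click floats to the top."""
    return sorted(posts, key=lambda p: p.predicted_click_prob, reverse=True)

posts = [
    Post("local newspaper", timestamp=300, predicted_click_prob=0.05),
    Post("outraged uncle", timestamp=100, predicted_click_prob=0.90),
    Post("college friend", timestamp=200, predicted_click_prob=0.40),
]
```

Note what the ranking change does: sober-but-unclicky reporting falls to the bottom while the most provocative post rises, even when it is the oldest thing in the feed.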
Facebook not only profits off of creating a personalized web that effectively walls you off from agitation or real public discourse. It also has the additional unforeseen function of entrenching you in your beliefs by algorithmically prioritizing misinformation that vilifies your opponents. The filter bubble is an effective metaphor because it also takes into account how social media can mediate your experience when and how you encounter views that differ from your own.
An investigative report by Frontline and James Jacoby revealed that Facebook executive Sheryl Sandberg aggressively ramped up Facebook’s surveillance mechanisms upon reports of flattening revenue; this, despite a public stance emphasizing privacy by her and their robotic CEO just weeks before the ramp-up. According to Antonio Garcia Martinez, a product manager from 2011–2013, “she basically said, like, we have to do something. You people have to do something. And so there was a big effort to basically pull out all the stops and start experimenting way more aggressively.” One implied result: the News Feed algorithm, the cornerstone of Facebook’s revenue generation, was engineered to use the new swaths of surveillance data to drive engagement. This accelerated the anti-democratic chaos.
On Facebook, fake news stories and misinformation spread faster and deeper than sober, factual reporting. The NY Times and The Wall Street Journal appear in your feed with identical framing as Stormfront and other White Nationalist, formerly fringe, online publications. And because of the vast number of Americans who get their news through Facebook, many publications depend on it as a portal. This lends legitimacy to publications with anonymous, unaccountable authorship.
There have been a handful of examples around the world of the consequences of algorithmically assisted tribalism. In the case of Myanmar, where Facebook is the de facto vessel of the internet, Facebook execs admitted that the News Feed algorithm contributed to the genocide of the nation’s Rohingya Muslims. The Rohingya were subject to widely circulated misinformation and racist fake news campaigns that falsely accused their people of crimes.
Shortly after the peaceful revolutions in Tunisia and Egypt, which Facebook was initially credited with making possible, there was a quick and violent regression to social disorder, as suggested by Egyptian activist Wael Ghonim:
What was happening in Egypt was polarization. And all these voices started to clash. And the environment on social media breeded that kind of clash, like that polarization, rewarded it… If you increase the tone of your posts against your opponents, you are going to get more distribution [through Facebook]… The hardest part for me was seeing the tool that brought us together tearing us apart. These tools are just enablers for whomever. They, they don’t separate between what’s good and bad. They just look at engagement metrics.
Dividing the Divided and Identity Politics
It’s important to note that partisanship is not new. Divisions have existed as long as there have been countries to divide. A now-famous study of how people formulated their opinions during the 1952 presidential election between Dwight D. Eisenhower and Adlai Stevenson noted how partisan divides create “us vs. them” mentalities that preclude the sense of we.
[A]ctual cleavage within the community is deepened by the voter’s perception of it. Differences in perception are product of social stratification, and they reinforce it. Perceptual distortion increases the objective differences between “we” and “they.” …This makes for a unidimensional or monolithic distinction between the good people and the bad people (in religion, in status, in culture, and in politics), and it is a danger to a pluralistically organized democracy. (p. 86)
When you no longer view your fellow citizen as a partner in a democracy, you lose the capability to have any rational discourse with them. Engagement-focused algorithms foster this division as they privilege sensationalist fictions that blur the distinction between opinion and fact. You know, #fakenews.
This is why the emergence of identity politics as a salient touchstone in the public sphere is worth our attention as an illustration. As professors Lilliana Mason and Julie Wronski point out, our social groups are integral to our beliefs. “[P]artisanship in American politics is directly related to individual-level comprehension of party-group alliances.” And as we are ushered by high-engagement, often fictitious, social media posts toward one extreme or another, we take on social identities that engender their acceptance.
While Republicans and conservatives are often quick to identify Democrats as the proprietors of identity politics, Democrats are far more likely to incorporate a wide collection of group interests. Republicans, however, are a party of strict ideological purity. Again, from Mason and Wronski:
What we find is that Republican “purity” applies to in-party social homogeneity. A Republican who does not fit the White, Christian mold is far less attached to the Republican Party than one who does fit the mold. This effect is stronger among Republicans than among Democrats, who include significantly more individuals whose racial and religious identities do not match those of the average Democrat… Republicans are more reliant than Democrats on their social identities for constructing strong partisan attachments.
This has observable consequences. Take the incoming freshman class to the House of Representatives as an example of Republican purity.
We also see Republicans relying heavily on the testimony of “out group” minorities to reinforce their beliefs. Privileged attention is given to women who denounce feminism, gays who are comfortable with homophobia, Blacks who argue against racism, and Latinxs who hold anti-immigrant beliefs. They are all given outsized attention for their supposed ability to rise above the herd mentality of their group.
Republicans are also quick to bemoan the persecution of Christian white males while not only ignoring but challenging claims of systematic oppression of other groups. This is because “[w]hite males’ sense of persecution may be based on (somewhat) diminishing privilege.” People tend to view their social standing with recency bias, basing their beliefs on recent developments more than on any actual comparative reality, something that filter bubbles can preserve.
This is all pretty messed up. What can we do?
There is no conclusive research showing that one party is any less susceptible than the other to the pernicious effects of manipulation in a social media environment. But there are some things you can do to resist a conversation’s decay, at least from your end. So if you’re trying to engage on a topic you feel you know a thing or two about, there are some things to keep in mind when navigating a social media argument, to avoid my mistake(s). It’s really important to establish, off the bat, whether the person you’re talking with even wants to engage fruitfully. There’s nothing very helpful about arguing with a troll. Although some would argue that debate, in general, is less about the people engaged in it than about the people watching or lurking from the sides, trolls are often very good at what they do, which is creating division.
In research on good-faith online discussions, researchers have found that “first-person pronouns (‘I’) indicate an opinion is malleable, but first-person plural pronouns (‘we’) suggest the opposite.” So stick to the first person singular. Changeable opinions are also expressed more calmly and more positively, using words including “help” and “please,” and more adjectives and adverbs. This makes perfect sense in light of the work discussed earlier regarding the overlapping circles of identity and ideology. It’s worth mentioning that modifying your own language with these terms will go a long way, and I always recommend throwing in a “like” or two on your opposing interlocutor’s posts and acknowledging the points they make when they make them.
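A toy way to see the finding in action: the sketch below scores a post for the linguistic markers of a malleable opinion. The exact word lists are my own illustrative guesses based on the categories the research describes, not the study’s actual instrument.

```python
import re

# Illustrative marker lists, loosely based on the research's categories
MALLEABLE_MARKERS = {"i", "help", "please"}   # first-person singular, calm/polite words
ENTRENCHED_MARKERS = {"we", "us", "our"}      # first-person plural

def malleability_signal(text):
    """Positive score suggests an opinion open to change;
    negative suggests an entrenched, group-identity framing."""
    words = re.findall(r"[a-z']+", text.lower())
    open_count = sum(w in MALLEABLE_MARKERS for w in words)
    closed_count = sum(w in ENTRENCHED_MARKERS for w in words)
    return open_count - closed_count
```

For example, “I think you could help me understand this, please” scores positive, while “We all know our side is right” scores negative, which tracks the identity/ideology overlap discussed earlier: “we” language signals the speaker is defending a group, not weighing an idea.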
Now that you’re engaged in an actual discussion, and not tossing invectives back and forth, there are a few techniques you want to keep in mind, and some you may want to consider leaving behind. Direct refutation of facts with your own facts can backfire. In a series of studies addressing beliefs that vaccines cause autism, researchers found that “[r]efuting claims of an MMR/autism link… decreased intent to vaccinate among parents who had the least favorable vaccine attitudes.” What did work, however, was a technique known as replacement. Instead of directly countering the belief that autism is linked to vaccines, researchers provided anecdotal evidence of the effects of preventable disease, paired with photos of children suffering from those diseases. They showed them pictures of kids with measles. This was a far more successful strategy, especially with parents for whom these risks hit closer to home, who became far more likely to change their opinion. It also fits a moment when capital-T truths are no longer self-evident and sourcing is easily dismissed.
The next thing to keep in mind has a lot to do with something I was discussing earlier. When conservatives use minority figureheads to argue for positions that are often at odds with that group’s expected beliefs, they are leveraging something called source credibility. People who speak from surprising or unexpected positions are often ascribed higher credibility than those speaking from positions one would expect. A climate scientist denying climate change, for example, would have far more credibility than an oil tycoon doing the same. Conversely, an oil tycoon speaking about the grave consequences of global warming would have more source credibility than a climate scientist. A 2017 experiment took advantage of misinformation circulating about the Affordable Care Act (ACA), better known as Obamacare, with regard to PolitiFact’s 2009 “Lie of the Year”: Death Panels.
The source of the misinformation isn’t exactly clear, but several prominent Republicans, including Sarah Palin, claimed that there was a provision in the bill that would allow so-called Death Panels to decide whether elderly patients actually warranted care. This, of course, was a negligent fiction. Findings for Democrats on the issue were not statistically significant, as they were far less likely to believe in Death Panels in the first place.
Republicans, however, demonstrated a statistically significant shift in their stance when given a refutation by a Republican legislator compared to one given by a Democrat. The unexpected source lent increased credibility. This indicated that calling in expert testimony is going to be far more effective if you can find one such unexpected source.
I may be personally better off without the flimsy connection to random middle-school dude, but the problem our interaction uncovered remains. In the de facto arena of public debate, there was no room for us to have an important conversation. It was over before it began. Facebook is instead dividing us, exploiting human nature and our private information for profit, and then packaging it all in an ironic veneer of togetherness or First Amendment freedoms. Here we were, poised to have a critical conversation about Facebook, on Facebook, with a Facebook employee as one of the primary interlocutors. But no critical information was actually shared, nor was there any real potential for rational discourse. Instead of demonstrating its ostensible value, this experience reveals the platform’s tragic limitations.
Facebook remains one of the most powerful and unchecked influences on our democracy, and now I have no access to this person for debate. He remains blissfully ignorant of his own actions, happily wandering the decadent, gentrifying hallways of their HQ. I’ll continue to do my best, on the other side, to draw college undergrads into these important conversations before they are swept up by Facebook and their ilk to potentially become part of the problem.
This is by no means a comprehensive look at the pitfalls of social media and their root causes, nor is it a complete set of argumentative techniques that will let you win every (or even any) debate to “own the libs.” Hopefully, this is just a bit of knowledge that you can use to inoculate yourself against a social disease for which there is, as yet, no cure. Use these tools as you see fit, and every time you concern yourself with those who have the wool pulled over their eyes, consider that as long as you’re online with them in such an ecosystem, you’re not exactly outside Plato’s cave.
The line between private and public is to be drawn on the basis of the extent and scope of the consequences of acts which are so important to need control.
— John Dewey, The Public and its Problems, 1927