You are your bubble. Stop worrying and learn to love it.

A projection of the Hopf fibration onto R³. Source.

Let’s examine one of the more insidious pieces of propaganda swirling through our networks: the social media “filter bubble” that is “destroying democracy”.

Immediately following the election, several articles called attention to the apparent problem of the “filter bubble” created by the social media “echo chamber”. There were interviews with the person who coined the term, tips for how to escape the bubble (including the notorious list of questionable sources), and even apps for that. Quite a lot of the scorn was directed at Facebook for encouraging the creation of these bubbles and allowing “fake news” to spread. The bubble meme hit its crescendo last week with an SNL skit that localized the center of the bubble on the class of sheltered hipsters in Brooklyn.

The worry goes like this: social media encourages users to surround themselves with like-minded people and bias-confirming news, and to block persons who disagree, so that we’re confronted only with our own perspective reflected back in our streams. We set our networks up to pander to our beliefs instead of challenging them with diversity and dissenting views. Thus, social media’s natural narcissism results in homogeneous digital tribes shielded from any opposing views and incapable of working towards common ends. Trapped in isolating digital bubbles, we turn against each other with petty tribal bickering and identity politics, further eroding any chance of democratic cooperation and mutual support.

This worry is horseshit.

It is not merely false. It is insidious propaganda that is maddeningly difficult to address and counteract. Our problems are numerous, complex, and extremely diverse, but your bubbles are not among them.

In this essay I want to work through this filter bubble story and how it plays out in the media. I want to do this somewhat carefully, because it is an excellent example of propaganda in the digital age. But for those with tweet-length attention spans, here’s my position:

The rest of this post will attempt to defend this view, albeit in nearly the reverse order. Let’s start by getting clear on propaganda.

The Hopf fibration projected onto a 3-ball. Source.

Propaganda turns political ideals against themselves

The word “propaganda” has overtones of 1984-style totalitarian thought control and so carries the whiff of a conspiracy theory, as if implicating a hidden cabal of malicious forces. This is unfortunate, because it lends itself to the mistaken impression that control must always emanate from a single, coherent source. It is harder, but more important, to think about distributed systems of control, where many independent agents can interact to maintain a higher-order coherent state. In such a system, the discourse can be very tightly regulated to stay within strict boundaries, even when no “one” agent has a dominating position.

One intuitive model for thinking about distributed multi-agent control systems is the way the norms of a language change over time. Languages persist through their use within an interacting community of users, and the norms which govern their use bind the community together and make its members intelligible to one another. These norms are demonstrated and reinforced by every speaker who engages the community, with every instance of participation. In this sense, control of the norms is distributed across the entire population of users.

Which is not to say that all speakers hold equal influence over the norms of the language. In fact, some speakers can wield considerable influence over a language despite holding relatively little power in the community itself. Consider, for instance, how the slang of urban youth trickles out and shapes the language of their wealthier suburban counterparts. In one sense, no single participant is actively coordinating the population’s use of slang, and so no single source is responsible for the resulting flow. At the same time, every participant in the culture is engaged in the same network of activity generating these norms, and in this sense we’re all contributing to and responsible for its dynamics.

These paradoxes of individual and collective responsibility are not new, but the digital age has brought our failures to reconcile them into high relief.

Propaganda is best understood within this context of distributed, interactive control systems. Jason Stanley’s How Propaganda Works offers a theory of propaganda as political rhetoric that controls the discourse by exploiting an ideal. Here’s a snippet from a review of the book for a flavor of his view:

From Jonathan Wolff’s review of Jason Stanley’s How Propaganda Works

In the example, a democratic ideal of reasonableness is exploited to block potential objections to racially inflammatory language, by making the objections themselves appear unreasonable. In this way, the ideal is employed for its own subversion. Provoking a concern for reasonableness provides cover for the expression of racism, and undermines any attempt at critical objection. Stanley calls this rhetorical exploitation of an ideal “undermining propaganda”.

How Propaganda Works, pg. 53

The alternative, which he calls “supporting propaganda”, isn’t always good in the moral sense, because the rhetoric may be in the service of a neutral or unworthy ideal. However, supporting propaganda is characterized by a consistency between the expression and realization of an ideal — supporting propaganda practices what it preaches. Stanley recognizes how propaganda presented with integrity can be a valuable asset in a spirited public defense of democratic ideals.

But his primary concern is with the ways democratic ideals are subverted in countries like the US, which regularly evoke the vocabulary of liberty, justice, and opportunity in order to “mask the gap” between those ideals and the political realities. Stanley argues that propaganda is typically deployed in the service of “flawed ideologies”, or systems of belief with strongly held but false assumptions. In a perfectly rational society, these false assumptions would be exposed and corrected. But in the real world, flawed ideologies generate imbalances of power that can be profitably exploited. Thus, anyone with a stake in this exploitation (that is, anyone with power) has a natural interest in shielding these flaws from critique in order to prevent the disruption of power. No further coordination is required to motivate the creation and spread of propaganda that subverts democratic ideals. When packaged to appear as a concern for the ideals it undermines, even those with a genuine desire to protect those ideals can be put in the service of their subversion.

The filter bubble narrative is clearly structured to appeal to the democratic ideals of diversity, inclusivity, and cooperation. People expressing concern about their bubbles and looking for ways to pierce them appear to be animated by precisely these worthy democratic ideals. If this concern took the form of supportive propaganda, we’d expect the resulting discourse to support the realization of these ideals. Conversely, if this were undermining propaganda, we’d expect the presentation of these ideals to actively undermine their realization. We’d also expect to find a “flawed ideology” being shielded by the propaganda.

With these definitions established, the next step of my argument is to describe how the filter bubble story functions as undermining propaganda, actively subverting the very ideals it purports to endorse. That’s not to say the journalists and social media users worried about their bubbles are lying, or that they are deliberately hostile to democracy. On the contrary: my claim is that our earnest endorsement of these democratic ideals is being used against itself.

Understanding how so many well-meaning people can be manipulated into subverting their own ideals is not easy. Part of the challenge is to appreciate the structure of the media environment we’re immersed in, and the role of “the filter bubble” within it, since it is confusion over these very structures that makes such subversion possible.

Whose bubble is it, anyway?

There’s a deep and subtle tension, rarely made explicit, that lies at the heart of the concern over social media filter bubbles: whose bubble is it? Facebook puts a lot of effort into designing the algorithms that control what appears in your streams, and these algorithms have considerable influence over the content and direction of the national discourse. Facebook is not shy about this influence, but uses it explicitly to, for instance, mobilize voter turnout. So on the one hand, it appears that Facebook is the primary agent responsible for our bubbles in virtue of designing the very algorithms that construct them.

But on the other hand, those algorithms function by customizing the stream to fit a user’s particular interests. The content in my stream is a direct product of the people I’ve friended, the various bits of content I’ve liked and engaged with, and other activity I’ve displayed for the network. If anyone is responsible for the content in my stream, it’s me. So the filter bubble story is often cast in terms of the narcissism or gullibility of social media users, with the implication that it’s our job to improve our bubbles.

So is it my bubble or not?

When Eli Pariser coined the phrase “filter bubble” in 2010, the concern was aimed squarely on the so-called “personalized web” that provided customized results on the basis of user engagement. In Pariser’s terms, the “filter bubble” refers to “that personal ecosystem of information that’s been catered by these algorithms to who they think you are.” He continues:

from a 2010 interview with Pariser in The Atlantic

Notice that Pariser is not attacking any person for the construction of any bubble. Instead, the threat derives from the “invisible” technologies that “lock us in” to an artificially narrow media environment. The code controlling your stream on Facebook is optimized for maximizing engagement in order to sell targeted advertising. This gives social media an incentive towards compulsive, clickbait content that is more likely to flatter your biases than to challenge them. The result is an artificially constructed world designed to pander to your interests and stroke your ego, connected to reality only by the thin fiber prism of what you want to see.

While Pariser’s critique is aimed at social media software, his 2011 book does briefly discuss the psychology of compulsion and cognitive biases that reinforce the filter bubble’s central attraction. His aim is not to criticize the users of social media so much as explain why they find the personalized reinforcement so seductive. Ian Bogost raised a similar criticism of compulsive “freemium gaming” with his Cow Clicker parody game released around the same time. Both critiques target the software developers exploiting their users with aggressively manipulative digital media and Machiavellian ethics.

Invisible technologies or narcissistic millennials?

But in the five years since Pariser’s book, the media-centered focus of this critique seems to have been lost within the fog of the culture wars. Much of the commentary linked at the top of this essay is framed to suggest personal or cultural failures that give rise to the filter bubble. They attack Facebook too, but not without some hits at the devolution of its users. (We’ll talk more about what’s wrong with Facebook in the next section.) The SNL skit above exaggerates this critique and makes it explicit, not to parody its falsehood but to underscore the ways it rings true. The skit plays off the stereotype of sheltered millennial hipsters, implying that filter bubbles are the product of a desperate attempt to avoid reality. In this way, the threat of algorithmic control is subsumed by the identity politics that otherwise dominates the national conversation.

This shift of focus, from the corporate incentives controlling social media algorithms to the entitled perspectives of social media users, is where the undermining nature of this propaganda comes clearly into view. If the filter bubble is simply another front in the culture wars, then the bickering and incompetence seen online is a direct product of the divisions and failures of the broader culture. If millennials prefer a sheltered reality, no wonder they find themselves trapped in a bubble! What did they expect?

This reasoning is deployed not only to explain our political circumstances, but also to locate the blame on ourselves as social media users/consumers. This explanation appears satisfying because we’re already overwhelmed with guilt about our social media use, and we’ve been primed for hostility towards the stereotypes implicated in the critique. But the explanation does not suggest any obvious solution. Since the bubble is presented as arising from the same irresolvable tensions that fuel the ongoing culture wars, a search for solutions gives way to hopelessness, helplessness, and righteous indignation at your scapegoat of choice. At best, users are asked to reflect on their own social media habits, to look for ways to pierce their bubbles. In other words, the only avenue of critique is directed inwards, towards self-improvement, while the technologies responsible run free.

This dynamic — of shame, guilt, helplessness, scapegoating, and self-criticism — should by now be a familiar refrain from social media under capitalism. In 2013, Sarah Gram described a more focused and vicious version of the same manipulative rhetoric targeting the selfies taken by social media’s young girls. She writes, “The Young-Girl is the model citizen of contemporary society not because we worship her, but because by expending her energy on the cultivation of her body, her potential as a revolutionary subject is neutralized.” She continues:

The same undermining rhetoric that Gram identifies in the selfie is at work in our current discussions of the filter bubble. This logic has extended beyond the control of young girls, and now covers the 60%+ of Americans who primarily get their news through Facebook, all of whom are now presented as legitimate targets of political disgust for their construction of their narrow bubbles. Just as young girls are expected to cultivate their bodies for society’s use, now the general public is expected to cultivate their media filters for the same neutralizing, exploitative ends. To riff on Gram: in an economy of attention, it is a disaster for democracy that citizens should take up media space for their own political perspectives and identities, space that could go to more important, “real news” sources. What are we to make of the filter bubble, but that we first code the stream and then punish its users?

As Gram recognizes with the selfie, the discussion of filter bubbles is ultimately about digital labor under capitalism. We are not only expected to participate in this corporate media environment, we must now accept this framework as the primary vehicle through which democratic civic engagement occurs. Despite proprietary ownership, the complete lack of transparency or democratic oversight, and absent any recognition or legal protection for the labor we’ve poured into the system, we’re asked to accept this corporate framework as a legitimate authority for managing and monitoring the public sphere.

This proposal has seen no popular votes apart from active user counts, and has undergone no expert review. At no point have we paused to think carefully about whether any media source should have the reach and influence of Facebook, let alone CNN or the NYT. Instead, we’re offered a critique of the filter bubble that bypasses any careful evaluation of the media ecosystem and jumps directly to the self-loathing conclusion that we’re using it wrong. This hasty critique allows us to move on to the comfortable post-election habit of characterizing the exact demographics to blame. As a result, any potential for deliberation or critical reflection — any potential for democracy — is neutralized.

What was first presented as an earnest concern for civic engagement has now been recast as a method for subverting it. This places the filter bubble discourse firmly within Stanley’s definition of undermining propaganda, although we have yet to fully identify the flawed ideologies it serves to protect. To be clear, I’m not claiming that any specific party to the discussion has the explicit intent of subverting these democratic ideals. At the same time, the subversion of these ideals is not accidental or an afterthought. This propaganda is the product of widely distributed interests in perpetuating a flawed ideology. The filter bubble discourse confirms our existing biases, and it protects the flawed ideologies we’re invested in exploiting. We perpetuate the propaganda willingly because it is in our interests to do so.

Two steps in the argument to go. In this next section, I’ll talk more about Facebook’s role as a shill in the greater media ecosystem, and the ideological flaw it imposes. I’ll close in the final section with some thoughts about the structure of our bubbles.

An artist’s impression of a spin foam in loop quantum gravity. Source.

Facebook and the Art of the Shill

I have been strongly critical of Facebook for years. Facebook has an unreasonable degree of control over our social and political interactions, and this bias has compromised our very capacity for community organization and engagement. We need an open, transparent, user-centered (as opposed to ad-centered or stream-centered) networking service as a next-generation alternative to the current slate of aging social media offerings. As soon as a viable option arrives, I will be happy to join the exodus. As one of the last people on Earth who still uses Google+, let me assure you that no viable alternatives yet exist.

But in all fairness, the very same manipulative propaganda discussed above is being used against Facebook for the same subversive ends. In the week since the filter bubble story popped, the focus has shifted again to the viral “fake news” spreading across the streams. We’ve seen interviews with its creators, viral lists of obfuscation, and accusations of foreign tampering. On this issue Facebook took the bait, describing the active role it plays in monitoring the content of the stream. NPR recently reported that an FB employee makes a censorship decision every 10 seconds. With this mounting pressure on FB over the issue of fake news, the attention and resources invested in monitoring and “curating” the stream can be expected to increase.

Again, it is striking how far this response is from realizing any of the democratic ideals supposedly animating the “fake news” concern. No attempt is made to explain how improved monitoring and curation of the stream is supposed to achieve a more free and open press, or a more informed and engaged public. The discussion is entirely focused on how best to control and augment the stream.

Leaving aside the surveillance issues for a second, consider the proposed technical solution to the filter bubble: serendipity. The idea is this: if users are inclined to lock themselves into a narrow view, we might counteract this inclination by sprinkling the stream with opposing perspectives in order to artificially raise its diversity and scope, exposing users to new ideas and sparking cross-community engagement. Similarly, if the problem is fake news, the solution is to scan for offending sources and remove them from the stream. The operating assumption appears to be that if we can just engineer the flow of content on social media to provide the right balance of information and perspective, we’ll be able to produce model citizens fully informed and prepared to discharge their civic duty.
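To make that operating assumption concrete, here is a toy sketch of the “serendipity” proposal as described above (all names, the tag-overlap scoring, and the serendipity ratio are invented for illustration; this is not Facebook’s actual ranking system):

```python
import random

def build_stream(candidates, user_likes, blocked_sources, serendipity=0.2, k=10):
    """Toy model of the 'serendipity' proposal: rank posts by similarity to the
    user's past likes, drop blacklisted sources, then deliberately reserve a
    fraction of the stream for low-similarity ('opposing') posts."""

    # Similarity = fraction of a post's topic tags the user has liked before.
    def similarity(post):
        if not post["tags"]:
            return 0.0
        return len(set(post["tags"]) & user_likes) / len(post["tags"])

    # "Fake news" fix: scan for offending sources and remove them outright.
    pool = [p for p in candidates if p["source"] not in blocked_sources]
    ranked = sorted(pool, key=similarity, reverse=True)

    n_diverse = int(k * serendipity)       # slots reserved for opposing views
    familiar = ranked[: k - n_diverse]     # the usual bias-confirming picks
    rest = ranked[k - n_diverse :]
    diverse = random.sample(rest, min(n_diverse, len(rest)))
    return familiar + diverse
```

Even in this crude sketch, the assumption is visible in the parameters: someone upstream must decide the blocklist and the “right” serendipity ratio, which is to say the stream is still being engineered on the user’s behalf.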

Of course, this proposal is absurd. The claim that social media lacks serendipity has been challenged repeatedly on empirical grounds. Facebook knows well that the degrees of separation between its users have decreased steadily for years, indicating a greater density of connections and less room for isolating bubbles. But for our purposes, the assumption motivating the proposal is more important: that the stream must be controlled and sanitized for the sake of democracy. This assumption is fundamentally at odds with democratic ideals, but it frames the entire discourse of social media. This critical assumption is the ideological flaw that the filter bubble propaganda is employed to protect. Our genuine concern for civic engagement is turned into a justification for ceding ever more control.

Again in fairness, Facebook appears to be as blindsided by this propaganda as anyone else. Facebook is not innocent, and has worked hard to amass the influence it has. But despite its size, Facebook does not control the media. The company has relatively little power or authority compared to traditional media sources, and little experience with the vicious realities of politics. This puts FB in a compromising position, and they take it from all sides: an irate user base filled with malicious agents; a disgruntled tech community concerned about transparency, security, and interoperability; a mainstream media that loves to fight turf wars and has the resources to fund them; and a wide variety of governing bodies who are thirsty for regulation and jealous of the monitoring and PSYOPS potential FB is carrying under the hood. Facebook’s habitual response to political pressure is to give in, all while parroting the very ideals it subverts. Facebook is known to work with police departments to provide information about criminal activity and political activists. In other words: the political pressure works.

Facebook functions happily as a willing, if not entirely aware, Pollyanna for this form of propaganda and social control. This results in a comfortable arrangement for those few who enjoy the current configuration of power. The government gets free access to pervasive surveillance tools, without any of the constitutional difficulties involved with building or maintaining them. Traditional media gets a scapegoat to distract from its overwhelming failures to inform the public, and has the opportunity to put an upstart new media company in its place. At the same time, traditional media now has a unified vehicle through which to run its traditional advertising propaganda. Facebook has all the diversity and youthful vigor of early cable TV, while maintaining the same control and consistency of form as a major network. From the perspective of a traditional media that still hasn’t come to terms with the digital age, it is the best of both worlds.

Everyone gets to use the service without paying for its construction and operation, and without shouldering any of the responsibility or public ire for its failures. The potential for democratic civic engagement is neutralized, but the public has easy access to an outlet for venting its fears and emotions, and any radical impulse generated by this venting is quickly swept up with censorship. In exchange for being the public’s whipping boy, Facebook gets to assume the position of The Default Network, and enjoy all the privileges and influence that come with its high-ranking servitude. Everyone wins but democracy.

Wikipedia defines a shill as follows: “In most uses, shill refers to someone who purposely gives onlookers, participants or “marks” the impression of an enthusiastic customer independent of the seller, marketer or con artist, for whom they are secretly working.” In Stanley’s terms, a shill is essentially a propagandist: purporting to exhibit some ideal, usually honesty or neutrality, so that the mark doesn’t realize they are being ripped off. The word “secretly” in the definition implies a conspiracy, but we must resist this implication. The claim is more general: Facebook is working openly to support existing power structures and reinforce their flawed ideologies at the expense of the democratic ideals they subvert. FB is not compelled by a conspiracy of agents, or even by malicious intention, but simply because many different agents have independent reasons for preferring that these structures be maintained, and Facebook does not have the stomach to resist. Instead, Facebook believes it has an obligation to defend the ideals of democracy by taking the very actions that subvert them. Facebook isn’t the creator of this propaganda; it is merely its biggest and most earnest consumer. And all of us strapped to the network have no choice but to hang on for the ride.

Thanks for sticking through this far! One more section to go, on how to build bubbles that resist.

M. C. Escher (1956) Bond of Union

A Facebook monadology

To this point, we’ve described a media environment where many distinct agents have independent motivation to subvert the ideals of democracy, and where the “filter bubble” story serves as propaganda towards these ends. Furthermore, we’ve identified the assumption that the discourse must be sanitized and controlled as the critical flaw in an ideology opposed to these democratic ideals, which gives some explanation of the motivation to subvert them, even for agents who might explicitly endorse these ideals. While it is true that Facebook’s algorithms have an inordinate influence on social media, we’ve discussed some ways this concern can be used to misrepresent the problems with social media in order to maintain an imbalance of power.

But one important detail has not been covered. What would social media look like without the flawed assumption of centralized, uniform control? In other words, what would social media look like as a form of supportive propaganda, where ideals expressed align with ideals realized? If our concern is for an inclusive, diverse, free and honest medium for civic engagement and cooperation, what technologies or ideologies are required for realizing these ideals and protecting them from subversion? It is a good question, and there aren’t a whole lot of proposals on the table for answering it. A systematic proposal designed explicitly for protecting these democratic ideals might better orient the discussion towards these goals, and might help break the cycle of propaganda that obscures them.

One place to begin is with a social media story that went around about a decade before Pariser’s book on the “filter bubble”, long before anyone knew what a Facebook was. In 2001, Davenport and Beck wrote a book called The Attention Economy, which contained many of the seeds of our current filter bubble concern. Taking cues from Herbert Simon, D&B imagined attention as the selection process that sorts through an overwhelming stream of incoming data in order to decide how to act. Since the supply of data always outstrips the supply of attention, they treated attention as the limiting resource, so that the problem of action lies in deciding how to spend it. We can’t attend to every signal, so cognition filters the inputs to emphasize only what’s important for deciding what to do next. D&B’s book explores the implications this cognitive limitation has for digital media, especially web design.
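Simon’s framing can be captured in a few lines (a toy model with invented relevance scores, just to fix the idea):

```python
def attend(stream, budget):
    """Attention as a scarce resource: only `budget` items can be processed,
    so some selection procedure must decide which signals are worth acting on.
    Here the 'filter' is simply: keep the most relevant items we can afford."""
    by_relevance = sorted(stream, key=lambda item: item["relevance"], reverse=True)
    return by_relevance[:budget]

# The data stream always exceeds the attention budget (scores are made up).
incoming = [
    {"msg": "urgent", "relevance": 0.9},
    {"msg": "spam", "relevance": 0.1},
    {"msg": "news", "relevance": 0.6},
    {"msg": "meme", "relevance": 0.3},
]
attended = attend(incoming, budget=2)  # only two items fit the budget
```

The interesting question is who assigns the relevance scores: whoever controls that function controls what gets acted on, which is the point the rest of this section develops.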

But the use of attention within digital media is just one application of a much deeper philosophical point. The point is that action always depends on a selection procedure, and any limits on this selection correspond to limitations on the possibilities for action. If we’re talking about cognition, then cognitive attention is the selection procedure that constrains action, and any manipulation of attention will correspond to a manipulation of those potential actions. In this light, the filter bubble worry is simply another framing of the attention economy. Your filter bubble is the selection procedure by which you tame the data stream, by which you decide what is or is not worthy of your attention.

To make the point clear: it is the fundamental job of any cognitive system to construct a filter on its world, which is a necessary step in making progress on any action. The better constructed your filter, the better equipped you’ll be for whatever actions you take. Too broad a filter and the important signals will be lost in the noise; too narrow a filter and you might not register the signal at all. What you want is a filter tuned just right for accomplishing whatever projects lie before you. And in a very deep sense, you are your bubble. The process of constructing a social identity is identical to the process of deciding how to act, which is identical again to the process of filtering and interpreting your world. Thus, any constraints imposed on your filter are also constraints on your possibilities for action, constraints on the freedom of your decisions and the construction of your world. If you are your bubble, then any attempt to control or manipulate your bubble is likewise an attempt to control you.

The metaphor of the “bubble” motivates people to try and “pierce” it with serendipitous interactions. But if we understand the bubble as the construction of your identity, and if we’re motivated by the democratic ideals in which one is encouraged to express their identities as a contribution to civic engagement, then we shouldn’t be piercing bubbles at all. From this perspective, piercing bubbles is another way of compromising one’s identity, allowing it to be constructed in accord with someone else’s will.

The alternative would be to provide a social network with the tools for constructing and maintaining radically personalized and divergent identities, tuned to optimize the specific goals and interests of their users. Instead of homogenizing our identities into an undifferentiated lumpy mass, we should be trying to specialize our identities by letting our freak flags fly.

Such radicalized identities will likely tend to divide the population into distinct groups of relatively like-minded cliques. But democracy is meant to function through each citizen contributing their own interests and values, and this function is lost if we gloss over our differences. Democratic ideals assume that even radically distinct people can work together towards common ends without sacrificing this diversity. So allowing users to radicalize, specialize, and develop in relative isolation cannot be itself taken as a subversion of democratic ideals.


In fact, it is possible that allowing communities to radically differentiate might make it easier to see our common ends. If communities A and B share nothing else in common but the need for health care, and there’s no political pressure forcing agreement on any other issue, then perhaps A and B can learn to work together, despite their differences. Allowing them to develop a sufficiently personalized and independent community identity can make this agreement much easier to swallow.

Because after all, this is how organization works in practice. All organisms start off as a clump of identical, undifferentiated cells. As they mature, these cells differentiate into specialized cells with unique and specific functions. Importantly, the differentiation of some cells will trigger the differentiation of others, so homogeneity serves as an impediment to development. It is precisely because these cells are allowed to differentiate at will that the system as a whole is able to organize and maintain itself. Absent differentiation, an organism won’t develop.

robot. made of robots.