Designing to Escape the Filter Bubble

Arunabh Satpathy
Published in The Graph · 5 min read · Jan 27, 2017

NOTE: This column was written for The Daily of the University of Washington and was originally published HERE.

Illustration by Ben Celsi

The filter bubble and its effect on the election have been well documented by now. Donald Trump drew a massive turnout in the November general election, much to the surprise of coastal liberals.

In its turbulent wake, the election raised several pertinent questions about media consumption today. Why are Americans so balkanized? What are the causes? How could the internet be so open and yet obscure the massive support for Trump?

Designing ourselves out of the filter bubble has become crucial.

Searches for the filter bubble peaked on November 20, 12 days after Trump’s election.

For the uninitiated, the filter bubble is the “bubble” of information and misinformation created by social media platforms that prize engagement over veracity. It could be argued that social bubbles have always existed. The theory of “selective exposure” shows that people gravitated toward information reinforcing their existing views before electronic filter bubbles were recognized. However, the internet has massively amplified these extant tendencies.

The first thing to recognize is that designed systems are not neutral; they usually carry the biases their designers build in. For instance, a core problem with platforms like Reddit is that links that are upvoted early become more visible and therefore keep getting upvoted. On Reddit this feedback loop is relatively benign, but it takes on a more sinister tone in more serious settings.
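To make that feedback loop concrete, here is a minimal simulation sketch. It is not Reddit’s actual ranking algorithm, and the post counts and voter numbers are arbitrary assumptions; it only shows how visibility proportional to existing votes lets early, random advantages compound even when every post is equally good.

```python
import random

# Minimal sketch (not Reddit's real algorithm): identical posts compete for
# votes, but a post's chance of being seen is proportional to the votes it
# already has, so whichever post gets lucky early keeps getting upvoted.

def simulate_upvote_feedback(num_posts=10, num_voters=5000, seed=42):
    random.seed(seed)
    scores = [1] * num_posts  # every post starts with a single vote

    for _ in range(num_voters):
        # Visibility bias: higher-scored posts are more likely to be shown.
        shown = random.choices(range(num_posts), weights=scores, k=1)[0]
        scores[shown] += 1  # the voter upvotes whatever the feed surfaced

    return sorted(scores, reverse=True)

if __name__ == "__main__":
    print(simulate_upvote_feedback())
```

Across runs, a handful of posts end up with most of the votes purely because the interface surfaced them first, which is the upvote bias in miniature.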

One example came when Donald Trump managed to distract the media by making shocking claims about millions of illegal voters on the same morning that The New York Times published a story on his financial conflicts of interest. Platforms like Twitter, which are built and biased for engagement and virality, amplified the more shocking story; the media spent its resources disproving it, and the power of the financial conflicts story was blunted.

But the upvote bias can at least be predicted. Some platforms are also prone to emergent bias, which only reveals itself once the platform reaches scale. At the micro level, or during a beta test, it is hard to tell how large an impact articles with unfounded claims will have on Facebook; at scale in 2016, they became enough of a problem for Mark Zuckerberg to address the issue explicitly.

So the question is: How can we design against the filter bubble?

One solution might be to hand back control to editors who can filter political content based on a more balanced viewpoint. According to a New York Magazine article, “These algorithms aren’t optimizing for journalism, they’re optimizing for engagement.”

If Facebook hired editors thoroughly trained in journalistic ethics, they could tweak the algorithms if they spotted people balkanizing on the internet.

Of course, since the platform is inherently biased toward likes, shares, and advertising revenue, editors cannot turn Facebook into an ideal media company. But because it is a media aggregator, its algorithmic principles could be counterbalanced by journalistic ones, making sure that personalization doesn’t run amok. A better-balanced example comes from the past, when newspapers weighed advertising revenue against editorial integrity.

“The way things were designed was that you picked up a paper and saw a whole,” said Batya Friedman, professor in the Information School at the UW. “To the extent that people are getting really personalized headlines, we’re losing the sense of society as a whole.”

This solution has its own problems, including the personal biases of the editors and the fuzzy edges of “political content.” However, humans in the content delivery chain might be more responsive to the environment than rigid algorithms are. The sketch below shows one way such counterbalancing could work.
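As a rough, hypothetical illustration of that counterbalancing (nothing here is Facebook’s actual ranking; the Story fields, the diversity weight, and the domains are my assumptions), a feed could blend an engagement score with a small bonus for sources the reader has not seen yet, so pure engagement optimization is softened by a source-diversity principle:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    source: str
    engagement: float  # e.g., a predicted click/share probability

def rerank(stories, diversity_weight=0.5):
    """Order stories by engagement plus a bonus for not-yet-seen sources."""
    ranked, seen_sources = [], set()
    pool = list(stories)
    while pool:
        def blended(s):
            bonus = diversity_weight if s.source not in seen_sources else 0.0
            return s.engagement + bonus
        best = max(pool, key=blended)
        pool.remove(best)
        seen_sources.add(best.source)
        ranked.append(best)
    return ranked

feed = [
    Story("A", "siteone.com", 0.9),
    Story("B", "siteone.com", 0.8),
    Story("C", "sitetwo.com", 0.5),
]
print([s.title for s in rerank(feed)])  # ['A', 'C', 'B']: C rises because it adds a new source
```

In a scheme like this, a human editor’s job could be to set and audit the weights rather than to pick individual stories.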

There are also a number of apps and extensions available that can alter our access to information. Sean Munson, assistant professor in the Department of Human Centered Design & Engineering at the UW, was part of a team that created Balancer, a Google Chrome extension that “analyses your web browsing to show you the political slant of your reading history.” If you’re massively “off-balance,” it can make suggestions on what you can read.
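A back-of-the-envelope sketch of the idea behind such a tool (not Balancer’s actual implementation; the domains and slant scores are placeholders I have made up) would score a reading history against a lookup table of outlet leanings:

```python
# Placeholder slant labels on a -1 (left) to +1 (right) scale; a real tool
# would need a much larger, carefully sourced table.
SLANT = {
    "exampleleft.com": -0.8,
    "examplecenter.com": 0.0,
    "exampleright.com": 0.8,
}

def reading_slant(visited_domains):
    """Average slant of the classifiable sites in a browsing history."""
    scores = [SLANT[d] for d in visited_domains if d in SLANT]
    if not scores:
        return None  # nothing we can classify
    return sum(scores) / len(scores)

history = ["exampleleft.com", "exampleleft.com", "examplecenter.com"]
print(reading_slant(history))  # about -0.53: noticeably off-balance, so suggest other reading
```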

But this solution assumes that people are looking for balance in the first place.

“This happens when people are curious about a topic, when they feel it is particularly important to be correct (and may doubt their existing belief a bit), and when they are reminded of a norm of fairness,” Munson wrote in an email. “These all have potential, but one of the weaknesses is that they often require that people read more, and people have a lot competing for their attention.”

According to Munson, sites like Facebook could get people to reflect on how diverse their networks are and whether they are missing important voices and perspectives. Another approach is reminding people who disagree just how similar they are, so that they have some common assumptions on which to base their worldviews.
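One hypothetical way to power that kind of reflection (my assumption, not something Munson or Facebook has described building) is a simple diversity score over a person’s connections, for instance normalized Shannon entropy across viewpoint buckets:

```python
import math
from collections import Counter

def network_diversity(leanings):
    """Normalized Shannon entropy: 1.0 = evenly mixed views, 0.0 = everyone alike."""
    counts = Counter(leanings)
    total = sum(counts.values())
    if total == 0 or len(counts) == 1:
        return 0.0
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))

friends = ["left"] * 90 + ["center"] * 8 + ["right"] * 2
print(round(network_diversity(friends), 2))  # ~0.34: a skewed network that might warrant a nudge
```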

Or we could all just be more disciplined in how we approach information. The fact that on Facebook you mostly add people you are already friends with may itself bias you toward a particular crowd, and Facebook’s design is an amplified version of how people interact in real life. So if we were more conscious about talking to people we disagree with, finding common ground or shared principles from which to discuss, and developing a sense of “information hygiene” in real life, the systems we design would mold themselves to create a better society.

About the author: Arunabh Satpathy is a UX/UI designer and journalist studying at the University of Washington in Seattle. Visit his website at arunabh.space or tweet at @sarunabh.
