Richard Reisman
Nov 12, 2016

Making algorithms that effectively counter filter bubbles is all too clearly an urgent need, and there are promising strategies (as noted by Tim and in many comments), but it will take great care to improve them and use them wisely. Drawing from a 2012 post of mine:

Balanced information may actually inflame extreme views — that is the counter-intuitive suggestion in a NY Times op-ed by Cass Sunstein, “Breaking Up the Echo” (9/17/12). Sunstein is drawing on some very interesting research, and this points toward an important new direction for our media systems.

Sunstein’s suggestion is that what we need are what he calls “surprising validators”: people one gives credence to who suggest that one’s view might be wrong. While all media and public discourse can try to leverage this insight, an even greater opportunity is for electronic media services to exploit the fact that “what matters most may be not what is said, but who, exactly, is saying it.”

Quoting Sunstein:

People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”

Our initial convictions are more apt to be shaken if it’s not easy to dismiss the source as biased, confused, self-interested or simply mistaken. This is one reason that seemingly irrelevant characteristics, like appearance, or taste in food and drink, can have a big impact on credibility. Such characteristics can suggest that the validators are in fact surprising — that they are “like” the people to whom they are speaking.

It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.

Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.

My post picked up on that:

This struck a chord with me, as something to build on. Applying the idea of “surprising validators” (people who can make us think again):
• The media and social network systems that are personalized to serve each of us can understand who says what, whom I identify and agree with in a given domain, and when a person I respect holds views that differ from views I have expressed and might be wrong about. Such people may be “friends” in my social network, or distant figures that I am known to consider wise. (Of course it is the friends I consider wise, not those I like but view as misguided, who need to be identified and leveraged.)
• By alerting me that people I identify and agree with think differently on a given point, such systems can make me think again — if not to change my mind, at least to consider that reasonable people can differ on it.
• Such an approach could build on the related efforts, noted above, for systems that recognize disagreement and suggest balance. …But as Sunstein suggests, the trick is to focus on the surprising validators.
• Surprising validators can be identified along a variety of dimensions of values, beliefs, tastes, and stature that can be sensed and algorithmically categorized (both overall and by subject domain); a minimal sketch of such a ranking follows this list. In this way, the voices for balance most likely to be given credence by each individual can be selectively raised to their attention.
• Such surprising validations (or reasons to re-think) might be flagged as such, to further alert people to the blinders of biased assimilation and to counter foolish polarization.
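
To make the mechanics concrete, here is a minimal sketch (in Python) of how such a ranking might work. It is illustrative only, not a description of any real system: the per-topic stance scores, the pairwise affinity scores, and the thresholds are hypothetical quantities that an actual service would have to estimate from behavioral signals.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    user_id: str
    stances: dict = field(default_factory=dict)   # topic -> stance in [-1.0, +1.0]
    affinity: dict = field(default_factory=dict)  # user_id -> identification in [0.0, 1.0]

def surprising_validators(user, candidates, topic,
                          min_affinity=0.6, min_disagreement=0.8):
    """Rank people the user credits who nonetheless disagree on a topic.

    A candidate is "surprising" when affinity is high (hard to dismiss
    as biased, confused, or mistaken) AND their stance on the topic
    differs sharply from the user's own.
    """
    my_stance = user.stances.get(topic, 0.0)
    scored = []
    for cand in candidates:
        affinity = user.affinity.get(cand.user_id, 0.0)
        if affinity < min_affinity:
            continue  # too easy to dismiss; their disagreement won't surprise
        disagreement = abs(my_stance - cand.stances.get(topic, 0.0))
        if disagreement < min_disagreement:
            continue  # essentially agrees; nothing to rethink
        # Credibility times distance: the harder to dismiss and the
        # sharper the disagreement, the stronger the signal.
        scored.append((affinity * disagreement, cand))
    return [cand for _, cand in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```

A feed could then flag the top-ranked people as “surprising validators” alongside their posts, rather than simply injecting opposing content from strangers.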

This provides a specific, practical method for directly countering the worst aspects of the echo chambers and filter bubbles…

This offers a way to more intelligently shape the “wisdom of crowds,” a process that could become a powerful force for moderation, balance, and mutual understanding. We need not just to make our “filter bubbles” more permeable; much like a living cell, we need to engineer a semi-permeable membrane that is very smart about what it does or does not filter.

Applying this kind of strategy to conventional discourse would be complex and difficult without pervasive computer support, but within our electronic filters (topical news filters and recommenders, social network services, etc.) this is just another level of algorithm. Just as Google took old academic ideas about hubs and authorities, and applied those seemingly subtle and insignificant signals to make search engines significantly more relevant, new kinds of filter services can use the subtle signals of surprising validators (and surprising combinators) to make our filters more wisely permeable.
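
To sketch that analogy, the snippet below folds a surprising-validator signal into a filter’s ordinary relevance score. Again this is a hedged sketch: base_score, validator_signal, and the blend weight are assumptions for illustration, not parts of any existing ranking system.

```python
def rerank(items, base_score, validator_signal, weight=0.25):
    """Blend ordinary relevance with a surprising-validator signal.

    base_score(item):       the filter's existing relevance estimate in [0, 1].
    validator_signal(item): strength of surprising validation in [0, 1]
                            (e.g., the score from the sketch above).
    weight:                 how far to tilt the feed toward credible
                            disagreement without swamping relevance.
    """
    def blended(item):
        return (1 - weight) * base_score(item) + weight * validator_signal(item)
    return sorted(items, key=blended, reverse=True)
```

A small weight keeps relevance dominant while still letting credible disagreement surface; tuning it well is part of the care such algorithms will require.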

(My original post, Filtering for Serendipity — Extremism, “Filter Bubbles” and “Surprising Validators,” also suggested broader strategies for managed serendipity: “with surprising validators we have a model that may be extended more broadly — focused not on disputes, but on crossing other kinds of boundaries — based on who else has made a similar crossing…”)

--

Richard Reisman

Nonresident Senior Fellow: Lincoln Network | Author of FairPay | Pioneer of Digital Services | Inventor, Innovator & Futurist