Phase VII: Preliminary Research Plan (Filter Bubble Edition)

Study Ideation #1:

1. First, ask the participants where they think they lie in either of these matrices (the two serve different purposes, and I’m leaning more towards the first one, as it’s more straightforward and truer to the study’s purpose):
source 1: https://www.politicalcompass.org/analysis2
source 2: Dan Kahan’s Cultural Cognition Project

2. Second, have the users navigate through a politically neutral Facebook or Twitter feed page, and observe each of their clicks, likes, etc., as quantitative data.

fig. 1, flow chart: both reading and skipping a post contribute to how users shape their own filter bubbles; the more they interact with a post, the larger their filter bubble becomes.

3. I would then take the answers and visualize the data as a mind map or flow chart (fig. 1), with each like/comment/share action on a post rated on a semantic differential scale (fig. 2) according to how much it contributes to the participant’s filter bubble (a rough scoring sketch follows this list).

4. Probe how they think they are taking in the various kinds of information (ranging from domestic, social, or leisure-related to educational or political) on Social Media (SM) platforms, for qualitative analysis.

fig. 2, semantic differential system: the second step in measuring how loud or quiet the user’s political leanings are, based on the behavior analyzed in step #1.

5. Show them the flow chart of what they just did (and present their score on the semantic differential scale that measures their bias), then ask the following preliminary questions:

  • “Were you aware that you were doing it, and how far you were leaning toward one side of view? Does it matter to you? Are you comfortable with your score on the scale? Would you like to be more aware in the future? Why, and how? What would you do to change it?”
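
As a concrete illustration of steps 2–3, here is a minimal sketch (in Python) of how the observed clicks, likes, and shares could be rolled up into one score on a semantic-differential-style scale. The interaction weights, the -3..+3 range, and the per-post leaning values are all illustrative assumptions on my part, not values the study has defined.

```python
from dataclasses import dataclass

# Hypothetical weights: how much each interaction type contributes to the
# filter bubble, relative to simply skipping a post (all values assumed).
INTERACTION_WEIGHTS = {
    "skip": 0.0,
    "read": 0.25,
    "like": 0.5,
    "comment": 0.75,
    "share": 1.0,
}

@dataclass
class Interaction:
    post_leaning: float  # -1.0 (strongly left-leaning) .. +1.0 (strongly right-leaning)
    action: str          # one of the keys in INTERACTION_WEIGHTS

def bubble_score(interactions):
    """Average leaning of the posts a participant engaged with, weighted by
    how strongly they engaged; scaled to a 7-point-style -3..+3 range."""
    weighted = 0.0
    total = 0.0
    for i in interactions:
        w = INTERACTION_WEIGHTS.get(i.action, 0.0)
        weighted += w * i.post_leaning
        total += w
    if total == 0:
        return 0.0                    # no engagement -> neutral midpoint
    return 3.0 * (weighted / total)   # map the -1..+1 average onto -3..+3

# Example session: a participant who mostly engages with right-leaning posts.
session = [
    Interaction(post_leaning=0.8, action="like"),
    Interaction(post_leaning=0.6, action="share"),
    Interaction(post_leaning=-0.4, action="skip"),
]
print(round(bubble_score(session), 2))  # 2.0 on the -3..+3 scale
```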

After this initial round, I would evaluate the participant responses, then come up with several redesigns of the social media newsfeeds for prototype testing, to gauge whether their filter bubble score changes at all compared to the current newsfeed display. The ultimate goal in this round of research and rapid prototyping would be to attempt to neutralize users’ bias.

Open Questions

  • Am I measuring what I think I’m measuring (i.e., with the semantic differentials, etc.)?
  • Are the participants under- or overestimating other users’ bias?
  • Are they potentially underestimating their own bias? And if so, what research methods do I use to gauge it?
  • In the end, which side they really believe in matters less; the important question is how they perceive their own and each other’s biases, and how much of the information on SM platforms is contributing to that.

Study Ideation #2: with a focus on the influence of the platforms’ UI

Lay two different headline versions of the same article side by side, worded and pictured differently, then ask: Which would you click on, and why? Would you click on the other one as well, and why?

The exact research methods are yet to be determined, but I would analyze which parts of the current grid systems of Google, Twitter, and Facebook (and Reddit?) are aggravating the filter bubble effect (what about the UI keeps users clicking on similar things? Does it mislead them into assuming anything about the nature of the news?). Is there a correlation? Then, either:

1. Redesign or reconfigure the ways in which this information could/ought to be shown, based on ‘basic human values’, then re-test (user research) to see if participants’ perceptions have changed, and list out findings to speculate on what the next moves could be for Design & CS; or

2. Based on the ‘principles’ or ‘basic human values’, explain why the SM platforms should amplify those ‘missing elements’ discovered through the research; or

3. Speculate on some kind of design intervention, then conduct user testing to see how users react to it and how their interactions might change (per Cameron’s suggestion to design by research). Examples might include:

  • Live data visualization that displays the level of your bias with each button/post you click on
  • A bar graph of categories indicating which of the Principles (from this post: openness, civility, acceptance, honesty, etc.) a given post reflects
  • Color-coded newsfeed posts with different shades of blue and red to indicate how strongly a post leans toward liberal or conservative (or Democratic and Republican) ideas (a rough color-mapping sketch follows this list).
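
As a quick illustration of the color-coding idea, here is a minimal Python sketch that maps a post’s leaning score to a shade between blue and red. The -1..+1 leaning scale and the specific endpoint colors are assumptions for illustration only.

```python
# Illustrative endpoint colors (assumed, not from the study).
BLUE = (59, 89, 152)     # "liberal" endpoint
RED = (203, 32, 39)      # "conservative" endpoint
WHITE = (255, 255, 255)  # neutral midpoint

def leaning_color(leaning):
    """Blend from white toward blue or red as |leaning| grows.
    `leaning` runs from -1.0 (strongly liberal) to +1.0 (strongly conservative)."""
    leaning = max(-1.0, min(1.0, leaning))
    target = RED if leaning > 0 else BLUE
    t = abs(leaning)  # 0 = neutral, 1 = fully saturated endpoint color
    r, g, b = (round(w + (c - w) * t) for w, c in zip(WHITE, target))
    return f"#{r:02x}{g:02x}{b:02x}"

print(leaning_color(0.0))   # '#ffffff' (neutral)
print(leaning_color(-0.7))  # a bluish shade
print(leaning_color(0.9))   # a strongly red shade
```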

Some of the interesting feedback from my colleagues was:

  • What if I consider a specific case study where UI exacerbated the nature of social media virality? E.g., sometime in 2014, the ALS Ice Bucket Challenge went viral on SM channels around the same time as the Ferguson shooting. Because of the nature of Facebook and Twitter’s UI elements (the “like” buttons and the sharing feature), they encouraged the spread of the former content and not the latter. It might be worthwhile to consider the different characteristics of the visual elements perpetuating the cycle of SM virality.
  • Am I keeping the topic of politics too open? Would I maybe benefit from narrowing the scope so as to have a more definitive story (e.g., gender politics)?
