Phase V: Research Plan for Transparency in Google Search

Abstract update

Min Kim
Breaking Out of Filter Bubbles
Dec 4, 2016

--

With the digitization of our everyday lives come needs and implications no one has considered before. Machine-learning algorithms now shape the minutiae of our lives, from social media feeds to credit scores. We marvel at how quickly these algorithms help us make decisions, yet the ethical reasoning behind those decisions goes unchecked. I would like to explore a space where design can intervene: to provide transparency into this black-box system, to mediate trust between machines and their users, and to empower users with more control.

Exploratory Interview Questions Regarding Data Awareness

  • are you aware of what kinds of personal data you’re being mined for?
  • are you aware of what that data is being used for?
  • how comfortable are you with it?
  • knowing that this data is primarily used to curate relevant products/content for you, would you still opt in to being mined? If you’re hesitant, tell me more about that.
  • (note to self: interview users to surface their actual problems, and map the readings back to real users as well)
  • can you think of a time when you conducted a search for something, maybe something important, and got distracted because you saw an ad for something you researched in the past that no longer applies to you?
  • or can you think of a time when you conducted a search and were confused or frustrated because the results weren’t what you expected?
  • were you distracted? how did you overcome it?

Regarding the Research Methods

  • What can I test? It’s actually quite difficult to figure out exactly what kind of discrimination is practiced in Google Search around jobs, loans, etc. (Location-based strategies proved effective for marketers, with geo-fencing averaging an above 0.90 percent click-through rate, followed by city, ZIP code, DMA, audience and third-party place-based audience targeting. The report also reveals that marketers are increasingly embracing location-based targeting, with the number of location-based campaigns across the Verve network jumping from 17 percent in 2011 to 36 percent in 2012.) http://www.mobilemarketer.com/cms/news/research/14731.html
  • It’ll be tough to target and test for that 0.90 percent difference; there are too many variables I’d need to control for. Could I instead make assumptions about the future and research people’s receptivity to the idea of having multiple different digital models of themselves, as with speculative design?
  • I could focus on: 1. purchase practices, or 2. research practices.
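One way the “what can I test?” question could be made concrete is a paired-profile audit: issue the same query from two different user profiles and measure how much the result lists diverge. This is only a sketch with made-up data — the profiles, URLs, and query are all illustrative, not real Google results:

```python
# A minimal sketch of a paired-profile audit. Assumes we have already
# collected ranked result lists for the same query issued from two
# hypothetical user profiles. All URLs below are illustrative.

def jaccard_overlap(results_a, results_b):
    """Jaccard similarity between two sets of result URLs (1.0 = identical sets)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def rank_displacement(results_a, results_b):
    """Average absolute rank shift for URLs shown to both profiles."""
    pos_b = {url: i for i, url in enumerate(results_b)}
    shared = [url for url in results_a if url in pos_b]
    if not shared:
        return None
    return sum(abs(i - pos_b[url])
               for i, url in enumerate(results_a) if url in pos_b) / len(shared)

# Illustrative result lists for one query, e.g. "entry level jobs"
profile_a = ["jobs.example/eng", "jobs.example/sales", "ads.example/loan"]
profile_b = ["jobs.example/sales", "jobs.example/eng", "jobs.example/care"]

print(jaccard_overlap(profile_a, profile_b))    # → 0.5 (half the results are shared)
print(rank_displacement(profile_a, profile_b))  # → 1.0 (shared results moved 1 rank on avg)
```

Low overlap or large rank shifts between otherwise-identical profiles would at least flag queries worth a closer look, without having to explain the underlying model.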

Goals & Open Research Questions

  • what’s a good use case where ML gives rise to a future privacy issue?
  • how to test the algorithm for discrimination?
  • how to prove that the AI/ML is biased?
  • how to make the algorithm’s decision-making process (the “black box”) more transparent?
  • what do I want to achieve by making this transparent?
  • how to make this transparency fair to everyone, so that people don’t dig for loopholes and take advantage of the system?
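For the “how to prove that the AI/ML is biased?” question, one standard approach from audit studies is a two-proportion z-test: compare how often a particular ad or result is shown to two groups of test profiles. The counts below are made up for illustration; only the statistic itself is standard:

```python
# A hedged sketch of quantifying bias as a difference in shown-rates
# between two groups of simulated profiles. All counts are illustrative.
import math

def two_proportion_z(shown_a, total_a, shown_b, total_b):
    """z-statistic for the difference between two shown-rates."""
    p_a, p_b = shown_a / total_a, shown_b / total_b
    p_pool = (shown_a + shown_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# e.g., a job ad shown to 180 of 500 profiles in group A
# but only 120 of 500 profiles in group B
z = two_proportion_z(180, 500, 120, 500)
print(round(z, 2))  # → 4.14; |z| > 1.96 means the rates differ at the 5% level
```

A result like this wouldn’t explain *why* the black box behaves differently, but it would give me defensible evidence that it does, which is the first step toward arguing for transparency.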
