Phase III: Philosophical Foundation

Min Kim
Breaking Out of Filter Bubbles
Dec 4, 2016

Among the readings we’d done at Seminar III, four philosophers’ ideas stood out to me as especially relevant to the context of algorithm models (AM):

  • Michel Foucault, along with Steven Dorrestijn’s interpretation of Foucault’s writings on technology, discipline and punishment, and power dynamics
  • Bruno Latour and Actor-Network Theory, in which actants on all sides play an equal part in influencing one another; in this context, those actants are the algorithm models, their users, and the effects they have on one another.
  • Langdon Winner and the concept that artifacts have politics, with algorithm models’ lack of transparency and selective decision-making raising questions of discrimination.

On Power

  • You have “control” and “power” because they are wielded over someone.
  • Power is wielded from the top down because it is strategically effective for surveillance and offers ease and speed, so it proves advantageous when you want to control something (the functional momentum of standing fully upright over another being).

The Panopticon

Foucault’s writing illustrates a form of power that isn’t “power over,” but an interesting, lateral view of it. When prisoners feel they are being watched, they regain a kind of self-consciousness; artificially re-imbuing that consciousness leads them to surveil their own behavior, and at that point they can be released. The Panopticon isn’t about all-pervasive surveillance. Much like the idea of a God with all-seeing, x-ray vision, it leads people to monitor their own behavior. Foucault asks: if you live in such a society, how do you resist it? How do you discover new pleasures?

There’s pleasure in “feeling my way through the matrix,” and pleasure in taking in information from the world, processing it internally, and then sending data back out (e.g. the Apple Watch, selfies, etc.).

Our current black-box systems work like a Panopticon: as a user, you see what they’re doing, but you don’t hold any power over how they should or should not be used. They control a large number of people, and yet there’s no check on that tyranny because there’s zero transparency.

“Bentham develops the idea that disciplines could be dispersed throughout society. He provides a formula for the functioning of a society that is penetrated by disciplinary mechanisms. There are two images of discipline: one) the discipline blockade — an exceptional enclosed space on the edge of society; and two) the discipline-mechanism — a functional mechanism to make power operate more efficiently.”

“The panopticon develops out of the need for surveillance shown in the plague. Plague measures were needed to protect society: the panopticon allows power to operate efficiently.” (source)

“The disciplinary society is not necessarily one with a panopticon in every street: it is one where the state controls such methods of coercion and operates them throughout society. The development of a disciplinary society involves socio-economic factors, particularly population increase and economic development. … more sophisticated societies offer greater opportunities for control and observation. Foucault assumes that modern society is based on the idea that all citizens are free and entitled to make certain demands on the state. Foucault is not against such political ideals: he merely argues that they cannot be understood without the mechanisms that also control and examine the citizen.” (Dorrestijn)

On Transparency

So why, then, do we need a panopticon?

Along these lines, something else quite fascinating was brought to my attention by my advisor, Peter Scupelli: the concept of the blockchain. With Bitcoin, everyone gets to see the history of every transaction; everyone is accountable, so there’s none of the bias or opaqueness of algorithmic decision-making (black-box) models.
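
To make that contrast concrete, here is a minimal sketch in Python of the idea behind a public, hash-chained ledger. It illustrates the principle only, not how Bitcoin actually works, and the names (entry_hash, append, verify) are just mine for illustration: every entry references the hash of the entry before it, so anyone holding a copy of the history can recompute the hashes and detect tampering.

    import hashlib
    import json

    def entry_hash(entry):
        """Deterministically hash a ledger entry (stable JSON with sorted keys)."""
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append(ledger, transaction):
        """Append a transaction, chaining it to the hash of the previous entry."""
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        entry = {"prev": prev, "tx": transaction}
        entry["hash"] = entry_hash({"prev": prev, "tx": transaction})
        ledger.append(entry)

    def verify(ledger):
        """Anyone with a copy can check that no entry in the history was altered."""
        prev = "0" * 64
        for entry in ledger:
            recomputed = entry_hash({"prev": entry["prev"], "tx": entry["tx"]})
            if entry["prev"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True

    ledger = []
    append(ledger, {"from": "alice", "to": "bob", "amount": 5})
    append(ledger, {"from": "bob", "to": "carol", "amount": 2})
    print(verify(ledger))            # True: the shared history checks out
    ledger[0]["tx"]["amount"] = 500  # quietly rewrite the past
    print(verify(ledger))            # False: everyone can see it was altered

The code itself isn’t the point; the property is: when the record is shared and verifiable by anyone, opacity (and quiet tampering) stops being an option, which is exactly what black-box models lack.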

Are transparency and accountability two different things to tackle? The first gives users much more power, because transparency leads to actionable behaviors (e.g. ending their association with a company); the second simply lets them know that some entity is actually to blame, that the system is indeed biased. But accountability might be a better starting point, and potentially more applicable in the long run, considering that black boxes and their applications of machine learning may well change over time.

So, then, should my focus be:

  1. To provide “transparency”? If so, how, and why? What would be the purpose, i.e. what would I want to motivate people to do? What’s the actionable behavior I want them to derive from this? Or is it simply:
  2. To make people question their versions of reality?

I’m growing quite fond of the latter idea.
