Big Data, Big Consequences
Big data, algorithmic analysis, resistance, privacy paradox, collectivism
Baruh & Popescu argue that the collection of big data, and the associated loss of privacy, is becoming normalized in Western culture. Withdrawal and assimilation are common forms of resistance, but the authors argue for a solution grounded in the collective nature of privacy and in social action.
“Beyond the technical aspect of big data processing and its practical applications, big data seem to be generating a new social organization of knowledge that normalizes a climate of privacy loss while reproducing or even accentuating existing inequalities” (2)
“Namely, the ideology of big data naturalizes algorithmic analysis of quantitative data as the paramount expression of truth” (5)
“Recent empirical evidence (e.g. Turow et al., 2015) suggests that one of the main reasons for the privacy paradox is the lack of a meaningful privacy choice resulting in an attitude of ‘Why bother?’” (9)
A concept that struck me this week was the intersection of big data, privacy, and inequality. As the authors discuss, privacy becomes a commodity when it is exchanged for the services we use, and it can also be marketed as a service that helps limit the data available to corporations. This made me think about the other inequalities that emerge when elements of humanity are commodified. As Baruh & Popescu claim, privacy loss can “accentuate” inequalities, so I assume that other factors come into play here.
Accessibility, both to privacy literacy and to the systems that support it, becomes a social issue, one involving socioeconomic class and its affordances (or lack thereof). This brings to mind a question posed by Selinger & Hartzog, who ask who benefits from technologies like facial recognition. Clearly, corporations and social media conglomerates benefit, but the authors also note that faceprints can be read for race, gender, and age. By consequence, those who can afford to educate themselves and others, and those who have access to customizable privacy systems, also reap the benefits of a societal system that favors some of these identity categories over others.
This may also be seen in the de-contextualization of data from its sources. As the authors observe, big data analysis becomes “paramount” and “truthful” in making recommendations and predictions, but Oremus reminds us that “people are too complex” to be analyzed by algorithm alone. By collecting data out of context based on user activity, algorithms overlook important factors that shape user interactions with content. Offering ads and recommendations based solely on this information could be seen as reproducing the inequalities, rooted in privacy and other factors, that persist in our society. The idea that the privacy paradox results, in some sense, from users giving up in the absence of meaningful choice mirrors the structures that affect people, and therefore their “shadows,” both on and off the web.
In what ways have you resisted privacy intrusions (withdrawing from certain platforms, educating yourself on policies, etc.)? Do you feel differently about these efforts when you consider privacy as a collective effort rather than an individual one?