Data Culture

Week 7

How worried should I be? If anything, reading and watching these pieces on big data culture has left me with a new sense of paranoia. I found myself asking, “How much of my data is out there?” “What can I do to limit my exposure to, and the possible negative ramifications of, the data already collected about me?” and “Should I even put up a fight?”

Big data and the systems already in place have undoubtedly collected an enormous amount of information about me. I think of WMDs in the banking and insurance industries, the digital footprint at the stores I regularly shop at (Target, Trader Joe’s, and Giant Eagle), and a whole host of other profiles built for me by the internet services and social media platforms I use: Facebook, Google, Verizon, etc. What options do I have, really?

When I got my first iPhone in December 2013 (yes, I know, late adopter; I liked my flip phone, what can I say?), within a few weeks I found a menu that showed a map of everywhere I had been since getting the phone. Cool? Sort of, I suppose, if I wanted to go back and figure out which restaurants I went to or when I visited my friend in DC. But more than anything it was worrisome. What if someone else got their hands on my phone? Perhaps much more worrisome was that I was not the only one with this data. If I had access to it, Verizon, my cell phone provider, certainly did. What about Apple? Maybe. Probably.

Source: http://mix108.com/your-iphone-remembers-everywhere-youve-been-when-you-got-there-and-how-long-you-stayed-heres-how-to-turn-off-location-history/

This certainly left me feeling rather uncomfortable. My metadata could be used against me in countless ways, or monetized without a single cent coming back to me.

“Metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content.”
David Cole http://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/

It’s not like I had anything to hide by taking a trip to DC for the weekend, but there are just too many what-ifs. Cole points out that “[The NSA] was authorized to collect data on all callers one, two, or three steps removed from the suspect number.” What if I happen to be in the wrong place at the wrong time, or the unwilling recipient of a wrong number? According to Cole, this could easily “land you in the NSA’s database of suspected terrorist contacts.” That might be acceptable if you’re actually guilty of something, but what about the majority of us who are not?
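Cole’s “three steps removed” detail is worth pausing on, because the numbers grow fast. Here is a rough back-of-the-envelope sketch in Python; the figure for how many distinct numbers an average person calls is an assumption I made up for illustration, not anything from the readings.

```python
# Rough illustration (not a description of real NSA methodology): how many
# phone numbers a "three hops from the suspect" rule could sweep in, assuming
# each person has called some number of distinct contacts and contact lists
# barely overlap. All figures are invented for the sake of the example.

def swept_in(avg_contacts: int, hops: int = 3) -> int:
    """Upper-bound estimate of numbers reachable within `hops` steps of one suspect."""
    total = 0
    frontier = 1  # start from the single suspect number
    for _ in range(hops):
        frontier *= avg_contacts  # everyone the previous ring has called
        total += frontier
    return total

if __name__ == "__main__":
    # If the average person calls ~100 distinct numbers, three hops could
    # implicate on the order of a million people, almost all of them bystanders.
    print(swept_in(avg_contacts=100))  # 1,010,100
```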

Granted, the likelihood of my being classified as a terrorist is (hopefully) quite low. But how else could my data be used against me?

Cathy O’Neil talks to us about WMDs, or weapons of math destruction. If it isn’t already, my information could easily be part of one used by my bank and/or insurance company. What if the WMD decides that my interest rate is too low, or increases the cost of my car insurance policy? According to O’Neil, there’s not much I could do about it. And regardless of what effect it has on me, that says nothing about its effect on anyone else. O’Neil writes:

“But the point is not whether some people benefit. It’s that so many suffer. These models, powered by algorithms, slam doors in the face of millions of people, often for the flimsiest of reasons, and offer no appeal. They’re unfair.”

If a WMD deems me ‘good’ or lowers my premium, I’m happy. But what about everyone else, for whom the opposite occurs? It also seems far too likely that each and every one of us could fall prey to these systems. That’s a risk I’m certainly wary of.
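To make that worry concrete, here is a deliberately crude sketch of the kind of opaque scoring rule O’Neil describes. The features, weights, ZIP codes, and cutoff are all hypothetical (nothing from any real insurer); the point is that the inputs are proxies for who I am rather than what I did, and the person being scored never sees the arithmetic.

```python
# A hypothetical premium-scoring rule in the spirit of the WMDs O'Neil
# describes. Everything here is invented for illustration; what matters is
# that the model penalizes proxies (ZIP code, shopping habits) and offers
# no explanation or appeal to the person it scores.

HIGH_RISK_ZIPS = {"15201", "15210"}  # invented "risky" ZIP codes

def premium_multiplier(zip_code: str, late_payments: int,
                       shops_at_discount_stores: bool) -> float:
    score = 0
    if zip_code in HIGH_RISK_ZIPS:
        score += 2            # penalized for where I live, not what I did
    score += late_payments    # at least this one reflects my own behavior
    if shops_at_discount_stores:
        score += 1            # a proxy for income, not for driving skill
    # The scored person just sees a higher bill, never this arithmetic.
    return 1.0 + 0.15 * score

print(premium_multiplier("15210", late_payments=0,
                         shops_at_discount_stores=True))  # 1.45
```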

In addition to the WMDs that have my data, what about social media platforms? I use them with the notion that ‘the less I put in, the less they know about me.’ But that’s like saying the less I drive, the fewer miles my car will have. I’m still driving. I still have a car and have paid the money. The information was already given to the state government and an insurance company. Bottom line: they still know more about me than I’d like them to.

One interesting thing I took from the Generation Like Frontline video was its description of likes as a form of social currency. Many of the kids shown talked about how the only form of validation for their posts or videos was how many views/likes/retweets they got. The video makes the point that this taps into our desires for attention and validation.

Well, guess what!? Those kids aren’t the only ones. I’m guilty of it too. When I post a particularly pretty shot of a sunset with the Pittsburgh skyline in the background to Instagram, I’m conditioned to think, “Oh, this is really cool, I should get a lot of likes on this.” I will even check back a few times later in the day to see how well it’s ‘doing.’ While I am by no means going to the lengths these kids do to get likes and even corporate ‘swag,’ I’m driven by the same emotions they are.

Tyler Oakley on Frontline

But the idea that kids like Tyler Oakley are being used by major corporations as tools to market products to their peers, who in turn use the same tactics to get more likes and more validation for themselves, is scary. A large part of what makes it scary is that so many people are unaware of it or just flat out don’t care.

‘Marketing’ (however that word is currently defined) is an essential part of our society. But how far should it go? What unnerved me most while watching the program was that many of the kids didn’t even know what the expression “sell out” meant. Are you kidding me?

So I think I will stick with my initial question. How worried should I (or should we all) be? Are these trends going to continue, and are the methodologies behind them going to become even more persuasive and seamless? At what point will we no longer be able to tell what is an ad and what isn’t? Will these WMDs create a ‘digital caste’ system that makes it very hard for us to improve our data in the eyes of the almighty algorithm?

I surely hope not, but with the way these technologies and strategies are advancing, I have an uneasy feeling. Can our government change laws to better protect us? If so, what will those laws look like, and how long will they take? Referencing the USA Freedom Act, a bill created to rein in the NSA’s spying on Americans, Cole asks, “…if Congress and the White House, Republicans and Democrats, liberals and conservatives, all now agree on reform, how meaningful can the reform be?”

That’s something I’m eager to see.


Your mission: Referring to at least two of the readings and also at least one of the videos below, discuss the implications of algorithms and data mining on your digital experience. What is the single biggest question that these various pieces leave you with?