Facebook, data discrimination and what YOU(TH) need to know about it.

Kristen Loritz
Published in The 430th
8 min read · Nov 12, 2015
Photo credit: Cyber Security at MoD

In August 2015, while everyone else was on vacation, Silicon Valley was bustling. Facebook quietly filed for a patent — a patent that could help banks approve or deny customers a loan based on their Facebook friends.

With 1.19 billion active users who frequent the social site, banks can now tap into Facebook’s gold mine of data. A detailed reading of the patent summary reveals:

“In a fourth embodiment of the invention, the service provider is a lender. When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.”
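
Stripped of legal language, the embodiment describes a simple gating rule. Here is a minimal sketch of that rule in Python; the names, scores and 650-point threshold are all hypothetical, standing in for whatever data and cut-off a lender would actually use:

```python
from statistics import mean

# Hypothetical friend-to-score lookup. In the patent's scheme these
# ratings would come from members connected through "authorized nodes";
# here it is just a plain dict for illustration.
CREDIT_SCORES = {"alice": 710, "bob": 580, "carol": 645}

def continue_loan_application(friends, minimum_credit_score=650):
    """Return True if the lender keeps processing the application,
    mirroring the patent's fourth embodiment: average the credit
    ratings of the applicant's connections and gate on a threshold."""
    scores = [CREDIT_SCORES[f] for f in friends if f in CREDIT_SCORES]
    if not scores:
        return False  # the patent text does not address unrated networks
    return mean(scores) >= minimum_credit_score

# (710 + 580 + 645) / 3 = 645, just under a 650 threshold: the
# application is rejected regardless of the applicant's own score.
print(continue_loan_application(["alice", "bob", "carol"]))  # False
```

Note what the rule never consults: the applicant’s own credit history. Eligibility rides entirely on the average of the network.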

News flash: this type of profiling, more broadly known as data discrimination, is not unique to Facebook. In fact, online bots, cookies and algorithms have been crawling through our information for years. As more ‘Facebook patent’ stories take the spotlight in news media, citizens should be wary of these new discriminatory practices online.

“Data discrimination is the use of personal information, aggregated data or metadata, gathered from offline or digital sources,” said Jonathan Obar, an assistant professor of Communications and Digital Media Studies at the University of Ontario Institute of Technology. “This information is used to make eligibility decisions that unfairly privilege or marginalize an individual or group.”

Obar earned a PhD in Mass Communications from Penn State University and his research focuses on communication policy, policy making and digital activism. In the case of the Facebook patent, banks would have the ability to marginalize certain users. The user data that Facebook sells them becomes equivalent to a catalogue of credit scores. And if you have good credit? Tough luck. Your poor friends may be dragging you down.

Vulnerable groups such as students, many of whom are just starting to build their financial credit, may not be able to take out a student loan, depending on who their friends are. The actuarial reports of the Canada Student Loan Program predict:

“Student financial need will continue to rise, which will drive up the amount of student loans required to pay for post-secondary education. The federal government predicts tuition fees will rise at a rate of 2.5 percent above inflation annually over the next 25 years. At this rate, it is expected that fees will increase from an average of $5,959 in 2014–15 to $19,900 in 2035–36” (source).
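
The arithmetic behind that projection is ordinary compounding. As a rough check, growing $5,959 at about 5.9 percent a year for 21 years lands near $19,900; the rate is my inference from the quoted start and end figures, since the report itself states only “2.5 percent above inflation”:

```python
# Back-of-the-envelope check of the tuition projection. The ~5.9%
# nominal rate is inferred from the quoted start and end figures,
# consistent with 2.5% above an assumed ~3.4% inflation rate.
tuition = 5_959           # average fees, 2014-15
nominal_growth = 0.059    # inferred annual nominal growth
years = 21                # 2014-15 through 2035-36

projected = tuition * (1 + nominal_growth) ** years
print(f"${projected:,.0f}")  # ~$19,860, close to the report's $19,900
```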

If young people are struggling to pay their fees now, imagine a world dictated by data-informed loans. While some may celebrate, the rest who are flagged and targeted may be forced to live a financial nightmare.

“When we hear about Facebook collecting data, I’m not surprised. What’s shocking is the way it is used for data discrimination,” said Professor Nicole Cohen, a CCIT faculty member at the University of Toronto. “It seems unfair, unjust and offensive. You have to think Facebook must respect users in some way, but when events such as these are in the news, we recognize the main interest for companies like Facebook lies in business opportunities rather than equality for its users.”

Cohen, who holds a PhD in Communication and Culture from York University, believes young people need to be informed and think critically about these issues.

“Youth are always the most targeted by advertisers. There’s a big market of kids asking their parents for financial help, which not only gives marketers access to parents but also to their wallets, and a chance to groom the consumers of the future.”

“We face pressure to participate in web activities including SNSs, and though it’s fun, we also know that it’s creating a power imbalance and young people are at the bottom,” said Cohen.

While the free web enables us to access an abundance of information, it has also become a Wild West: minimal data regulations give powerful players the reins to keep using, divvying up and selling user information at will.

“In the United States, it was recently announced that the Federal Communications Commission (FCC) can’t force Google and Facebook to stop tracking users,” said Cohen. “The reason given was that these sites fall outside the jurisdiction, but then again, you have to wonder if the money spent on lobbying the government has anything to do with it.”

Government transparency group MapLight reported that Google spent $5.4 million lobbying the federal government in the first quarter, a 43 percent increase from the same period last year.

Google is one example of many tech giants that dish out millions to influence our national decision makers, while also selling citizens’ data to them. Meanwhile, citizens spend their time helping Google generate that money just by browsing, clicking, sharing and using the platform.

You can’t hide

We know about the larger macro issues on a national scale, but data discrimination isn’t just a macro problem. In fact, categorizing data can actually be an effective way to crack down on crime, terrorism and other disruptive societal activities. But what does data discrimination look like on a day-to-day basis? These are the marketing forces you may not think twice about.

A study conducted by MediaSmarts found that 68% of students believed that if a website had a privacy policy in place, the platform would not share their information with others.

“It is common to ignore privacy and terms of service policies,” said Obar.

In fact, according to a study by PhD candidate Aleecia M. McDonald and associate professor Lorrie Faith Cranor of Carnegie Mellon University, it would take 244 hours a year to actually read these policies.
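
The figure falls out of three inputs: how long the average policy is, how fast people read, and how many unique sites they visit in a year. The values below are the approximate ones reported for the study, so treat the reconstruction as illustrative:

```python
# Rough reconstruction of the McDonald & Cranor estimate. The inputs
# are approximate values reported in coverage of the 2008 study.
words_per_policy = 2_514   # average privacy policy length, in words
words_per_minute = 250     # assumed reading speed
sites_per_year = 1_462     # unique websites visited per person per year

hours = words_per_policy / words_per_minute * sites_per_year / 60
print(f"about {hours:.0f} hours a year")  # ~245 hours, in line with the study
```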

“When we utilize ‘quick join’ options, ignore privacy policies, or skim them, or consistently fail to understand them in great detail when we actually do read them, how are we to understand a quickly evolving set of privacy and data related concerns?” said Obar.

We continue to ‘accept’ these contracts in order to access our favourite sites, but that doesn’t mean we really ‘agree’ with the terms.

Opting out isn’t an option, not when the sites we frequent act as hubs for public discourse. Unplugging ourselves from these outlets can be alienating, so we give up our information, privacy rights and actually help perpetuate data discrimination practices in order to participate online.

The variety and intimacy of the information we share is also growing. A 2013 Pew Research Study reported 84% of teens post their interests, 71% post their city or town, 53% post their email address, 20% post their cell number and 92% post their real name. All of this information, whether ‘protected’ by privacy settings or not, is purchasable by marketers to create individual profiles that have the potential to marginalize certain users.

“Every interaction that we have with a computer, whether at the grocery store, on the highway, on our person (wearables) or in our hand (mobile), might contribute to the data stockpiles,” said Obar. “Every decision we make, whether it produces data we can see or not, might eventually contribute to an eligibility decision that could change the course of our life.”

We don’t know what those data profiles contain, who is viewing them and how we are being judged based on their content. Then again, what if we have nothing to hide?

“One other layer to this challenge is that we don’t really know what is happening behind the scenes,” said Obar. “I’ve published research looking into what we call data privacy transparency, especially by organizations like ISPs that manage and share our data. How are we to understand Big Data management, and the relevance of analytics, if companies aren’t transparent?”

If we don’t know how our information is being used, it becomes impossible to claim, “I have nothing to hide.” As professor Daniel Solove of the George Washington University Law School wrote, “everyone has something to hide from someone.”

The trouble with algorithms

It can be difficult to pinpoint where much of the data discrimination begins, but algorithms may prove a good place to start.

“An algorithm is essentially a set of steps that a computer takes to complete a task,” said Obar. “More often than not, discrimination concerns result not from algorithms themselves, but from data collection procedures that misrepresent individuals or groups, and data management procedures that place individuals in hierarchies and categories.”

While segmentation may arise from collection procedures, there is no denying that algorithms themselves are often presented as an objective technology.

“They are precise and scientific, but algorithms are also designed by humans with different assumptions and perspectives,” said Cohen. “These variances in perspectives get baked into algorithms that shape the information we get, from a basic search to job applications, credit scores and news. Companies like Google certainly aren’t always altruistic.”

Jonathan Obar among a team of experts discussing algorithms on The Agenda with Steve Paikin.

If more transparency existed on how companies sort data into hierarchies and categories, citizens could combat discriminatory practices online. Obar explains it’s not that simple.

“The challenge is that every sector of the global economy is categorizing people in their own way, and within each sector there is also variation. This leads to a mosaic of approaches as well as potential algorithms. What needs to be better understood is the extent to which data collection/management is labeling individuals unfairly.”

Cohen illustrates the vastness of these concerns, but how many people actually know about them?

“If we look at big data, there are tons of examples such as police using mapping tools to target neighbourhoods where crime is higher,” said Cohen. “These types of tools promote prejudice by targeting racialized or low income communities because data is being used without asking critical questions and it is often used out of context.”

While these data collection and implementation practices are generally promoted as protecting citizens through predictive measures, many are simply a bad excuse to exploit vulnerable groups.

Even if you don’t live in a racially targeted community or a low-income bracket, it doesn’t mean you’re not exploited in other ways. Everyone has to search for a job at some point, which is why data discrimination practices in the workforce are also a disturbing but prevalent trend.

“There are instances of employers using data to select job candidates or screen people. Some companies even look at what internet browser a candidate uses. If you have a default browser, rather than an installed browser, there is a higher chance you won’t get the job interview.”

Discrimination on the Internet is rearing its ugly algorithmic face, and as much as we would like to hide behind our privacy settings, we can’t. Discriminatory practices are becoming more niche on the net, and power-hungry players like marketers, state agencies and corporations are able to maintain their tight grasp on our information.

Moving forward

We know the buzzwords, we know the implications, so what do we do now?

“In Canada, citizens are protected by the Personal Information Protection and Electronic Documents Act (PIPEDA),” said Obar. “I’d say it’s more important that students engage with the Office of the Privacy Commissioner and the privacy advocacy community to give a voice to their concerns.”

There are several ways that youth can take part in shaping the future of online activity and data collection. Informing yourself is the first step.

“Youth can join the conversation by engaging with the privacy advocacy community, asking for their data from video surveillance operators, ISPs, social media sites and data brokers, and pushing the government to make data privacy transparency a norm,” said Obar. “Also, generate demand for infomediaries like LifeLock.”

The Facebook patent serves as an uncomfortable reminder of how data can be used to target and discriminate. So ask questions and pay attention. The powerful players will find a way to use your information against your will, just like Facebook did. And when they do, there will be no place to hide.
