Facebook and Cambridge Analytica: What’s this really about?
Given the privacy panic and disregard for the facts, it’s tempting to believe the outrage over Facebook and Cambridge Analytica is entirely justified. However, once you peel back the rhetoric and hysteria, you realize that Facebook’s main crime was that it simply placed too much trust in a university researcher.
Facebook has provided access to users’ public profiles for research purposes, such as identifying flu epidemics and finding areas underserved by doctors or otherwise overlooked. Unfortunately, as recent news has uncovered, not all researchers have honored their contracts with Facebook regarding privileged access to public data.
A University of Cambridge professor, Aleksandr Kogan, developed a questionnaire app for his research. Kogan paid 240,000 users up to $2 to take his app quiz, which required users to log in with their Facebook ID and give Kogan access to the public data of their Facebook friends. While allowing the collection of this public data, Facebook prohibited Professor Kogan from sharing his data with anyone else. That is the contract that Professor Kogan breached.
This kind of access to friends and contacts is not unique to Facebook apps. Think about the times you've installed apps on Apple or Android devices where the app requested access to your contacts list. Nonetheless, because of potential privacy concerns, Facebook stopped giving apps direct access to the public profiles of friends in 2015.
In 2015, Facebook learned that Professor Kogan had broken his contract by selling or sharing public profile data he collected through his app with Cambridge Analytica. Facebook then ordered both Kogan and Cambridge Analytica to destroy the data and received assurances that all data had been erased.
All of this was publicly reported by the Guardian in 2015. So why the privacy panic now? Two reasons: politics and the growing anti-tech movement.
The media in 2008 trumpeted Facebook and its data collection abilities as “the most groundbreaking piece of technology developed” for Obama’s presidential campaign — even helping him win the Democratic nomination over Hillary Clinton. Time Magazine said Obama’s clever use of the data was “certain to be the norm.”
In 2016, the Hillary Clinton campaign boasted about its massive data collection and analysis teams, celebrating that it had more statisticians than the entire staff at Trump’s headquarters. As one expert put it, “I have never seen a campaign that’s more driven by the analytics.”
So what has really changed since then? Is it just that these same data analysis tools were also successfully used by the Trump campaign? And for the Brexit campaign in the UK?
Another factor in the furor over Cambridge Analytica comes from the troubling new trend of populist suspicion about the tech industry. Tech’s traditional rivals have thrown gasoline on this fire, as the movie and music industries seek some payback, and telecom companies exact revenge for Net Neutrality.
Once we move away from politics and anti-tech suspicions, it’s clear that technology is like any tool — it can be used for good and for bad. These are the same tools that promoted social change during the Arab Spring, fueled challenges to police violence and gun laws, and gave voice to disenfranchised people.
These tools give us access to more data than at any time in human history, but tools can be abused. That is why the tech industry is already subject to state and federal laws, and the Federal Trade Commission and the states can bring enforcement against any business for unfair or deceptive trade practices.
So rather than blame technology for election outcomes and call for more regulation, let’s start by seeing this incident for what it is: a university researcher who breached his contract has made it harder for anyone to ever again do good work using data from Facebook public profiles.