Demystifying Facebook’s Latest Controversy — and How to Avoid the Next One

Alexander Golden
Published in Slalom Technology
5 min read · Mar 23, 2018
Photograph: Dado Ruvic/Reuters

On March 17th, the New York Times reported that Cambridge Analytica, a startup based in the United Kingdom, had used millions of records of illicitly obtained Facebook data as the basis for a business that profiled voters and then resold those profiles to political campaigns. Facebook shares promptly dropped over 17% on the news, and angry constituents are contacting their lawmakers to insist that both companies be brought to account.

For anyone who builds mobile applications or websites, however, this practice of sharing users’ information isn’t news at all. In fact, it’s been the operating model of the Internet for years. Or in other words, Facebook isn’t the only company doing this, and it’s not a bug. It’s a feature.

OAuth: A Primer

The basic technology that facilitated the transfer of information from Facebook to Cambridge Analytica is known as OAuth (pronounced oh-auth). It is an authorization standard that defines a pattern for permitting access to information. There’s a good chance that, in your Internet travels, you’ve encountered OAuth without even realizing it. Just a few of the companies that use OAuth to grant permission to access users’ data include: Google, Facebook, Twitter, FitBit, Microsoft, and Apple. Any sound familiar?

The basic flow of OAuth is as follows:

1. A website or mobile app (in this case, Waze) requests access to data stored on another website or service (in this case, Facebook).
2. The user is redirected to the site that holds the data, which prompts the user to grant permission to share it.
3. The user is redirected back to the original website or app. If permission was granted, that site receives the requested data and can use it to personalize the user's experience.
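The steps above can be sketched in a few lines of code. This is a minimal, illustrative sketch of the OAuth 2.0 authorization-code flow, not Facebook's actual API: the endpoint URL, client ID, and redirect URI below are placeholder assumptions.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical authorization endpoint; a real integration would use the
# provider's documented URL.
AUTH_ENDPOINT = "https://provider.example/dialog/oauth"

def build_authorization_url(client_id, redirect_uri, scopes, state):
    """Steps 1-2: the requesting app sends the user to the provider,
    naming the specific scopes (pieces of data) it wants."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": ",".join(scopes),
        "state": state,           # random value, guards against CSRF
        "response_type": "code",  # ask for a one-time authorization code
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

def extract_code(redirect_url, expected_state):
    """Step 3: after the user grants permission, the provider redirects
    back with a one-time code the app later exchanges for an access token."""
    qs = parse_qs(urlparse(redirect_url).query)
    if qs.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch - possible CSRF")
    return qs["code"][0]

# The app would redirect the user's browser to this URL:
url = build_authorization_url(
    "my-app-id", "https://waze.example/callback",
    ["email", "public_profile"], state="xyz123",
)
```

The key design point is that the user's password never passes through the requesting app; only a scoped, revocable grant does.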

Perhaps the most common (and simplest) use of OAuth, demonstrated in the flow above, occurs when a website provides the ability to log in with your Facebook account. The requesting website receives your email address, name, and friends list from Facebook, which can be used to pleasantly personalize your experience. You get to use the website without having to create yet another account and remember yet another password. Everybody wins… Right?

Good Intentions Gone Bad

The problem, of course, occurs when websites abuse OAuth by requesting data that is not necessary or by then using that data in ways that do not ultimately benefit the user.
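One way to make "requesting data that is not necessary" concrete: compare the scopes an app asks for against what its stated purpose actually requires. The scope names and the login baseline below are illustrative assumptions, not a real provider's policy.

```python
# Hypothetical baseline: the scopes a simple "log in with Facebook"
# integration actually needs.
NEEDED_FOR_LOGIN = {"email", "public_profile"}

def excessive_scopes(requested):
    """Return the scopes a requester asks for beyond what a plain
    login needs; anything in this list deserves scrutiny."""
    return sorted(set(requested) - NEEDED_FOR_LOGIN)

# A survey app asking for friends and likes data raises a flag:
excessive_scopes(["email", "public_profile", "user_friends", "user_likes"])
```

A check like this could run on either side: a provider vetting apps, or a privacy tool warning users before they click "accept".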

In the case of Cambridge Analytica, this is exactly what happened. The company created a personality survey that requested gobs of personal data from Facebook. Users were prompted to grant the survey wide-ranging access to their personal data, and they did. What they didn't know at the time, of course, is what the owner of the survey would ultimately do with that data. Now we know: record it for later resale to political campaigns and other high-paying, but possibly nefarious, customers.

Which raises the question: If people are so offended by the use of their personal data in this manner, why did they agree to give it away in the first place?

The answer is surprisingly simple: People don’t read. Or in other words, users have become so accustomed to long and confusing terms and conditions pages that they simply click ‘accept’ without understanding the implications of what they’re agreeing to.

Cracking the Nut

It’s a vexing problem: Users and websites both benefit when user information is responsibly shared and utilized. But there will always be bad actors out to trick users into sharing more than they intended — and then to use that data in a morally repugnant fashion. What, then, can we do to encourage ‘good’ use of data and punish ‘bad’ use of that same data?

As with most complex problems, the solution will likely need to be multi-faceted: A combination of action from individuals, the private sector, and the government.

As individuals, we can and should be the first line of defense in protecting our own data. Pay careful attention when you’re prompted to share your information with other websites and apps, and ask yourself: Is the information the site is asking for appropriate and necessary? Do I trust the site to use the information in a responsible fashion? Does the site have a clear privacy policy explaining how user data is collected and utilized? In the case of Cambridge Analytica, the answer to all three was most certainly a resounding ‘No’.

Sites and corporations that collect, store, and share users’ personal data (such as Facebook) also have a responsibility to be better stewards of the data with which they’ve been entrusted. This includes placing common-sense restrictions on what data can be shared and under what circumstances. Although it was a case of the proverbial ‘closing the barn door after the horse had bolted,’ Facebook would later tighten its rules for accessing individuals’ and their friends’ data, and put a more stringent review process in place for those sites asking for large volumes of data.

From a governmental perspective, the European Union has led the way in implementing regulations intended to mitigate the problem. The latest is the General Data Protection Regulation, which mandates that companies disclose not only what data will be collected, but how that data will ultimately be used. And citizens of the EU retain ownership of their personal information at all times, with the ability to request that companies destroy that data.

And now the million-dollar question: Will the US follow the EU’s lead? With a professedly anti-regulation Congress and administration in place, it seems unlikely. But politicians have a way of changing their tune when a groundswell of popular opinion dictates otherwise. Only time will tell.
