Facebook users have always been the product sold, not the customers served. For years, the users behind the platform's one billion-plus active accounts have traded their memories and experiences in exchange for a free way to stay connected and get information.
Users felt that exchange was negligible and fair if it meant they might see the occasional advertisement for something they were interested in. Then Facebook went too far: if recent media reports are to be believed, Facebook allowed a major data mining operation to harvest the user information of tens of millions of accounts as part of a sociopsychological propaganda campaign that almost certainly contributed to the eventual outcome of the 2016 presidential election.
There is no reason to assume that Facebook’s business relationship with Cambridge Analytica was the first of its kind for the social network, and plenty of reasons to believe it wasn’t. A former data engineer from Cambridge University who passed a data set of Facebook users along to Cambridge Analytica recently told BBC Radio he was assured by both companies that the data harvesting practice that has drawn so much recent criticism was a “routine” practice done by “tens of thousands of apps.”
What makes the Cambridge Analytica situation so problematic isn’t that the company harvested data from users who opted to take “personality quizzes” on the social platform — Facebook users regularly grant third-party services access to their profiles in exchange for certain benefits — but that Cambridge Analytica was also able to extract data from those connected to the users who took the quizzes. Those affiliated users didn’t grant Cambridge Analytica permission to review their data and likely didn’t know their data had been collected until news reports detailing the company’s activities began appearing last week.
In fact, every year — like clockwork — Facebook has responded to criticisms of lackluster security and data exposure by rolling out “improvements” to its privacy offerings. More often than not, Facebook has heralded the changes as enabling users to take better control of their data; in reality, the changes have led to confusion and frustration:
- In 2008, Facebook users were grouped primarily into work and school networks. Users began complaining that status updates and photos intended for a small handful of people were being exposed to a wider audience on the company’s new “news feed” feature. Facebook responded by offering users the ability to categorize connections, grouping them into family and friends lists. The change required most users to sort through dozens if not hundreds of connections and manually group each connection accordingly — and the feature still didn’t prevent private moments from being publicly discovered.
- In 2009, Facebook began offering users the ability to limit the audience of individual Facebook posts, satisfying those who complained that the 2008 changes didn’t go far enough. At the same time, Facebook revoked an option to make some personal information private. Users complained about the change, which required that a person’s full name, gender and city be publicly listed on the platform.
- In 2011, Facebook completely overhauled its privacy settings. The overhaul was so complex and so confusing that Facebook required all of its users (at the time, more than 700 million accounts) to watch a tutorial that explained the new changes. Facebook also removed an option that prevented friends from tagging a user in a location “check in,” a move that was criticized by the Electronic Frontier Foundation.
- In 2013, Facebook unveiled an option that allowed users to mass restrict prior posts. At the same time, it removed an option that allowed users to hide their profiles from searches. The Washington Post reported Facebook’s intention was to force users to “control their privacy on an item-by-item basis.” Presumably, Facebook figured users would be more likely to share data publicly — data the company could then use internally and funnel to third parties — if those users had to adjust their privacy settings every time they made a post, uploaded a photo or took some other kind of action on the site.
- In 2015, Facebook said it was concerned about the privacy of its users, particularly in the wake of reporting that the U.S. government harvested user information. Then the company made billions of Facebook posts publicly searchable.
- In 2016, Facebook announced new and “confusing” features that would allow users to opt out of most advertisement-based tracking both on and off the website. But the website was still being criticized for allowing far too much information about its users to be publicly available.
- In 2017, Facebook once again changed its privacy menu in an attempt to make it “easier for people to find tools for controlling their information on Facebook.” At the same time, the company successfully fought off a lawsuit that correctly alleged the website was tracking the web activities of its users even when they were not logged on to the website. In an editorial published by the New York Times, former Facebook employee Sandy Parakilas wrote that the website “prioritized data collection from its users over protecting them from abuse” and had “no incentive to police the collection or use of that data.”
- Just two months ago, Facebook began publishing tutorial videos for its users that showcased many of the website’s privacy features. It also published an internal set of rules that the website said governed how Facebook reappropriates user data. “Not everyone wants to share everything with everyone — including with us,” Facebook’s privacy officer wrote. As it has done every year for the last 10 years, Facebook admitted it would continually change its privacy features, but spun the acknowledgement as a flexible and beneficial way for users to control the privacy and use of the data they share with the company.
Users who became invested in Facebook as a lifeline may have complained about all of those changes, but almost all of them acquiesced. Facebook always came out on top.
But now, things are different. Here, Facebook and Cambridge Analytica are being accused of misappropriating data that most users never knew was fair game. It was one thing for the companies to pass along data from those who opted in — a condition of taking personality quizzes offered by Cambridge Analytica. But it is a very different, and indeed more sinister, thing for the companies to harvest data from users who were merely connected to the first set. Those users didn’t knowingly pass along their information. They didn’t opt in.
Facebook is facing a crisis of trust. Any privacy terms that allowed Facebook and Cambridge Analytica to do what they did must be re-examined without regard to either company’s profit motive. Facebook has had no problem executing changes to its privacy features in the past, and it should have no problem doing so now.
The author used his phone to conduct research for — and ultimately write and publish — this post. Your patience with any linking or style errors (including mistakes in spelling or grammar) is appreciated. Contact the author here to report any such problems. Photo courtesy QuoteCatalog.com, reused here under terms of permission.