Over the last week, many of you have sought my opinion about the latest Facebook scandal, which has hit the markets, raised multiple legal issues and even made the cover of The Economist; so here goes. The story sounds like something from a thriller: a researcher at the UK’s University of Cambridge who specializes in the social and psychological manipulation of voters, with links to the University of St. Petersburg, is contacted by an ultraconservative multi-millionaire who finances an obscure company called Cambridge Analytica that uses ethically questionable methods to help an unlikely and dysfunctional candidate like Donald Trump win the 2016 US presidential election.
There you have it: are we talking about some kind of sophisticated attack, an unprecedented data leak or the exploitation of some unknown vulnerability that allowed Cambridge Analytica to steal personal data and trace relatively sophisticated psychological profiles of fifty million Americans, with between three thousand and five thousand facts about each of them? Sadly not (if you don’t believe there can be so much data about you, you can download the data Facebook has collected about you since you started using it). No, we are talking about something that has been possible on Facebook without too many problems from practically the beginning of its advertising activity.
Cambridge Analytica used a trivial and superficial personality test that, once taken, also exposed the data of the taker’s friends, giving it access to millions of Facebook profiles. Tests of this nature, or games such as FarmVille, MafiaWars or PetSociety, have been a feature of the social network for many years, and Facebook, regardless of its talk of “platform abuse”, has allowed them as part of a strategy to gain more information about its users.
In short, Cambridge Analytica, Robert Mercer or Alexander Nix are not evil geniuses: Facebook’s API not only allowed third parties access to your data by setting superficial quizzes, but also gave them access to all your friends’ profiles. By getting 270,000 people to take the test, many of them paid through Amazon Mechanical Turk, and to grant access to their personal profiles in breach of the terms of service, Cambridge Analytica gained access, based on a conservative average of 185 friends per user, to the profiles of almost fifty million users. Nice one.
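The back-of-the-envelope arithmetic behind that figure can be checked in a few lines (a simplified sketch: it uses the numbers cited above and assumes, for illustration, no overlap between users’ friend lists):

```python
# Rough estimate of Cambridge Analytica's reach via the friends-of-takers API access.
# Figures from the text; assumes no overlap between friend lists (a simplification).
test_takers = 270_000   # people who took the personality quiz
avg_friends = 185       # conservative average number of friends per user

profiles_reached = test_takers * avg_friends
print(f"{profiles_reached:,}")  # → 49,950,000 — almost fifty million profiles
```

In reality friend lists overlap heavily, so the true figure depends on deduplication; the point is simply that a small seed of consenting quiz-takers multiplies into tens of millions of non-consenting profiles.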
The problem here is not the existence of a free platform for people to share information about their lives with friends and family. If Facebook did not exist, somebody else would have invented it: I sincerely believe that being able to see the photos and videos my friends have uploaded from their vacations in a couple of clicks makes the world a better place; as does my mother being able to know where I am in the world, what I had for dinner and where I ate it without having to leave her house 600 kilometers away from where I live, and the same applies to a former school friend I might otherwise have lost contact with.
Equally, the problem is not that all this is financed through advertising, particularly if that advertising is not of the all-singing, all-dancing intrusive type. The problem arises when Facebook allows unsupervised access to its platform by third parties to obtain data from millions of people and then manipulate it to their own ends.
The problem is when Facebook knows what companies like Cambridge Analytica are up to, but does nothing about it until the publication of a phenomenal piece of investigative journalism by The Guardian, The New York Times and Channel 4. Facebook knew exactly what Cambridge Analytica was doing for more than three years (possibly with the exception of the use of academic data for profit), but worked hard to play down the importance of it so as not to harm its advertising business, even threatening those publications with legal action. The procedures that allegedly forced Aleksandr Kogan or Cambridge Analytica to destroy the data they had obtained underhandedly, as well as the blocking of their access to the platform, failed completely, revealing the naivety of the company and the people running it.
Researching people’s Facebook data is not a bad thing either; it helps us understand our society and to develop all kinds of interesting tools, as well as possibly preventing suicide, and better understanding harassment, abuse or information flows on the internet. The solution is not now to block access to our data, but rather the opposite: to give more access to researchers, but with better controls and total transparency about what they are going to do with it. Access to Facebook data cannot simply be a matter of paying for it, which is what has happened until now. Facebook did not sell anyone’s data, but instead allowed third parties with nefarious intentions access to detailed information about its users’ lives, their preferences, their attitudes …
The problem really comes into focus when naivety transforms into stupidity, when Facebook fails to see that the people accessing its platform are not using data for academic research. It is absolutely unacceptable that just because somebody we know takes a personality test on Facebook, we end up being used for electioneering and to spread false news based on our private information. Facebook deserves a hefty fine and it may be that we now have to contemplate further regulating the internet. In the light of what happened, we can assume that many of us will be a lot more careful about what we share on Facebook from now on. Facebook’s naivety will affect its relationships both with companies such as Cambridge Analytica and with its users. There are limits to ingenuousness.
Facebook is guilty of culpa in vigilando: by having failed to supervise the platform properly, another person or company has caused damage, the civil responsibility for which must be assumed by those who failed in their responsibilities. In his candid interview with CNN and his comments in a recent post, Mark Zuckerberg claims Cambridge Analytica and Aleksandr Kogan breached the trust Facebook placed in them; but there was another breach of trust, a far more serious one, between Facebook and its users. The former may have been due to a combination of naivety and stupidity, but the latter, unless Facebook makes huge and highly visible efforts to correct it, could bring about the company’s collapse. And if it does, it won’t be for a lack of warnings.
(In Spanish, here)