It’s been over a week since Facebook announced that, thanks to a coding vulnerability, access tokens for at least 50 million* accounts were stolen. Access tokens are important. As Facebook explained in its blog detailing the hack, they are “the equivalent of digital keys that keep people logged in to Facebook so they don’t need to re-enter their password every time they use the app.”
The hack also impacted Facebook’s Single Sign-On, which lets people use one account to log into other sites, meaning the impact of the breach is perhaps wider than even Facebook initially reported. Still, at the moment, there’s no way to know how big of a problem it is, or will be. Nor do we know who did it. We’re in the dark for one simple reason: Facebook has said next to nothing about what it knows — or if it knows much at all.
Bad as Facebook's silence might have been, something worse has happened: its community has filled the void. Over the weekend, a hoax circulated on Facebook. Users reported seeing a message, purportedly from a friend, claiming to have received a strange friend request from them. The message urged the user to send a mass warning to everyone, telling them to avoid accepting bizarre friend requests.
Here’s an example:
Hi….I actually got another friend request from you which I ignored so you may want to check your account. Hold your finger on the message until the forward button appears…then hit forward and all the people you want to forward too….I had to do the people individually. PLEASE DO NOT ACCEPT A NEW friendship FROM ME AT THIS TIME.
In other words, in the absence of real news from Facebook on a massive hack, fake news about the massive hack took over Facebook. As a parable of the ad-based platform economy, there is perhaps no better example than this. For, at their heart, ad-driven platforms are designed around, and tend to succeed thanks to one thing: our vulnerability.
Across Silicon Valley this week, another tech giant has been grappling with a data problem.
On Monday, the Wall Street Journal reported that, this past spring, Google uncovered a problem within its long-derided Google+ social platform. To anyone paying attention over the last year to social network privacy concerns, the issue sounded familiar. A bug, which Google uncovered as part of a sweeping company-wide audit of its application programming interfaces (APIs), “gave outside developers potential access to private Google+ profile data between 2015 and March 2018,” the Journal reported.
In a blog, also published Monday, Google specified that the data was “limited to static, optional Google+ Profile feeds including name, email address, occupation, gender and age,” and did not include “any other data you may have posted or connected to Google+” or other Google services. Still, Google suspects that as many as 500,000 users might have been affected. Google also claims there is no evidence developers were aware of the bug, or that profile data was misused.
While the Google API bug and Facebook’s access token hack can’t be compared in terms of scale, nor in terms of the sorts of data breaches they are, they do ultimately reveal something. What the two share, as products of giants of the ad-based platform world, is a tendency to make us more susceptible, either by design or by accident, to a variety of things to which we may not otherwise have been subjected: misinformation, targeted messaging, and personal security threats.
Dismissing this increased vulnerability or openness as a price to pay for access — that taking on this risk is part and parcel of being a platform user — isn’t an adequate response, if for no other reason than that the tradeoff is rarely, if ever, explicitly put to us when we sign up. Instead, we’re offered something quite different.
“We’d like to bring the nuance and richness of real-life sharing to software. We want to make Google better by including you, your relationships, and your interests,” Google explained when it launched Google+ in 2011. “You and over a billion others trust Google, and we don’t take this lightly. In fact, we’ve focused on the user for over a decade: liberating data, working for an open Internet, and respecting people’s freedom to be who they want to be.” Facebook’s language — even now — is similar. Facebook, CEO Mark Zuckerberg wrote in 2017, “stands for bringing us closer together and building a global community.”
Even at times when ad-based platforms have helped people be who they want to be, or build communities, that vulnerability — to the influence of foreign actors, to persuasion, or to security lapse — never disappears. In some cases, those expressions of self and those communities only increase it.
Between 2015 and 2017, the Internet Research Agency (IRA), otherwise known as Russia’s “troll farm” that’s been linked to targeted misinformation campaigns on Facebook in the run-up to the 2016 election, was particularly focused on Black Lives Matter (BLM).
The movement was successful at gaining support and attention by using social platforms like Facebook. It granted a voice and audience to activists and supporters, and connected them with like-minded people in their cities and states. Yet that activism and connection made the people who joined BLM groups on Facebook vulnerable — not necessarily to authorities like police agencies, but to other, unexpected forces, like the IRA.
As an investigation by the U.S. House Intelligence Committee revealed earlier this year, BLM activists and groups were frequently targeted by the IRA. The trolls sought to exploit the movement’s legitimate goals in order to deepen ideological discord between its members and other sectors of American society.
As April Glaser wrote at Slate, the IRA were using Facebook’s “ad-targeting tools just as they were intended: to reach specific groups of people with specific interests, as revealed through their Facebook likes and listed enthusiasms.” And they were able to do that because, behind the scenes, algorithms could work away at testing those groups — what messages they liked most, which ones were more likely to make them annoyed — and, simultaneously, other groups, too — like people who might hate Black Lives Matter — on the same metrics.
Facebook’s ad-targeting setup “can be exploited by anyone looking to target people based on negative stereotypes, racial profiling, and extremely specific points of interest, hitting people with just the right kind of messaging that will provoke a reaction,” Glaser wrote. Eventually, all kinds of users, not just those who voluntarily joined BLM groups or activities on Facebook, might have been susceptible to messaging framed around it. This messaging could have shifted their viewpoints without them even knowing how.
This is another thing that links the Google+ API bug disclosure and the Facebook hack: how unaware we are, as users, of what is happening on the social platforms beyond our immediate line of sight.
As the Journal reported Monday, though Google made its discovery months ago, it chose not to tell the public “in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage.” The Journal’s reporters reviewed a memo “prepared by Google’s legal and policy staff and shared with senior executives [that] warned that disclosing the incident would likely trigger ‘immediate regulatory interest’ and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.”
It was probably an astute assessment by the Google team, but one that ultimately proved fruitless: the API bug disclosure came to light only days after news of Facebook’s hack. Yet even if the gamble of waiting had paid off, any attempt to separate the two stories would be a fool’s errand. They are connected by design, in the invisible vulnerabilities they create.
*On Friday, Facebook issued an update on the hack. It now says that, “of the 50 million people whose access tokens we believed were affected, about 30 million actually had their tokens stolen.”