Facebook (probably) isn’t listening to you, but the reality’s kind of worse
If you’ve signed up for a Facebook account, you’ve been supplying that friendly blue corporation with a lot of information about yourself, which it uses to deliver ads tailored to you as an individual — someone with preferences, an ethnicity, a location, desires and hates.
Agreements with websites outside of Facebook mean stuff you’re doing elsewhere is part of this tailoring process. Cruising the New York Times, for instance, informs a cluster of other sites, as Ghostery and Disconnect (two privacy add-ons for Chrome) reveal:
Facebook uses your location, the things your friends like and the things that you like to cram you into a bunch of weirdly specific categories (if you’re logged into Facebook, you can view them here), and supplies these to advertisers who want to target people in a specific area. To their credit, they also make some of this information available, along with bits and pieces on why specific ads were targeted to you:
This information isn’t hard to find, but it’s also one step down a menu — probably a big reason so many people aren’t aware of it. Even in these sections, we’re only granted a tiny peek into the complex instructions Facebook is giving to its computers, or the complex, highly individualised datasets that feed into those instructions, hence the cautious wording when explaining why we’re seeing things. There’s a reason the phrases are filled with weasel words: this is the magic sauce that makes the company (very) profitable, so it makes perfect sense that they’d clutch the ingredients of the recipe tight to their chests.
Recently, there has been a slew of anecdotal reports that one of the data points feeding into the magic sauce is keywords pulled from audio — Facebook secretly activating the microphone, then ingesting and transcribing what it hears. It’s certainly possible to do this with an app, as researchers have demonstrated, but Facebook denies it unequivocally — their Vice-President of ads responded to PJ Vogt, host of the Reply All podcast, with this:
That episode of Reply All is jam-packed with creepy stories of conversations about products immediately followed by Facebook serving ads precisely related to the discussion. PJ and Alex Goldman, the hosts, aren’t convinced. They’re of the school of thought that Facebook isn’t listening to your conversations — but that debunking the belief is nearly impossible. Goldman exasperatedly says,
“The problem here, which is the same problem with reporting out this story, is that Facebook not only is like a black box that tends to not want to tell you about how their stuff works. It is done using so many complex algorithms, that they don’t even know. If I was like, “Hey tell me how this ad got served to this gentleman,” the people of Facebook would say like, “I don’t know the answer to that.””
So who’s right? The thing is, despite the data they collect being complex, and the formulas that blend this data into a froth being secret, I think this is a testable claim. Last week, I decided to act on an idea I had a few months ago, and test it.
It’s been a week since I started. First, I grabbed three old, crappy Samsung phones lying around in a box, and reset each to factory settings. Then, I created three fake identities — one man, two women, all born at the same time. For each, I created one Gmail account and one Facebook account, using an incognito browser window in Chrome (I won’t post details because I want to keep my test going).
Before I could even access the standard Facebook home page, lo and behold, Facebook was urging me to add my real account, Kim (my partner) and about 30 of my actual, real-life friends (and even some acquaintances), on two of the three fake accounts.
Jeez. How the hell did that happen? Facebook could have used my IP address, the name of the Wi-Fi network I was on, or the phone’s GPS to accurately link the three fake accounts with my real account, which I assume are all geo-located down to a few metres. It was pretty startling, but a great illustration of how pervasive their feelers are. It probably wasn’t great for keeping these new accounts on a totally pure, unadulterated baseline, but I pushed ahead anyway.
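To get a feel for how little signal this kind of linking might need, here’s a toy sketch of the idea. To be clear, this is pure speculation on my part — the grouping logic and every name in it are invented for illustration, and Facebook’s real systems are certainly far more sophisticated:

```python
# Toy sketch: linking accounts that sign up from the same network.
# Purely illustrative -- not Facebook's actual method; all data is invented.
from collections import defaultdict

def link_accounts(signups):
    """Group signups that share an (ip, wifi_ssid) pair."""
    groups = defaultdict(list)
    for account, ip, ssid in signups:
        groups[(ip, ssid)].append(account)
    # Any key with more than one account is a candidate
    # "same household / same person" cluster.
    return [accs for accs in groups.values() if len(accs) > 1]

signups = [
    ("real_account",   "203.0.113.7",  "HomeWiFi"),
    ("fake_account_1", "203.0.113.7",  "HomeWiFi"),
    ("fake_account_2", "203.0.113.7",  "HomeWiFi"),
    ("stranger",       "198.51.100.2", "CafeWiFi"),
]
clusters = link_accounts(signups)
# clusters -> [["real_account", "fake_account_1", "fake_account_2"]]
```

Even this naive version clusters my three sign-ups together instantly — add GPS, device fingerprints and contact graphs, and the real thing barely has to try.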
Each day this week, I turned each phone on for a few minutes and spoke a few preset topic words to it:
- Account 1: KFC, Dinner, Chicken, Burger, Wrap
- Account 2: Marriage, Engaged, Ring, Wedding, Proposed
- Account 3: Renovation, Home, Kitchen, Builder, Plumber
Why yes, I did sound utterly ludicrous hollering random words at an inert slab of circuits:
The other thing I did while each phone was on was scroll through the newsfeed for between three and five minutes, noting down any ads I saw. Here’s the odd thing — for almost the entire week, I didn’t see a single ad. Each account had no friends, and had only liked ABC News, 9 News and The Oz.
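The whole protocol boils down to a simple check: did any observed ad contain one of the spoken topic words? A minimal sketch, using the accounts and topic lists above (the matching logic and the log format are my own invention, just to make the test concrete):

```python
# Sketch of the daily test protocol as data plus a matching check.
# Topic words are the ones from the experiment; the matching rule is invented.
TOPIC_WORDS = {
    "account_1": {"kfc", "dinner", "chicken", "burger", "wrap"},
    "account_2": {"marriage", "engaged", "ring", "wedding", "proposed"},
    "account_3": {"renovation", "home", "kitchen", "builder", "plumber"},
}

def ad_matches_topics(account, ad_text):
    """True if any of the account's spoken topic words appear in the ad."""
    words = set(ad_text.lower().split())
    return bool(TOPIC_WORDS[account] & words)

# One week's observations: almost no ads at all, and none topic-related.
observed_ads = {
    "account_1": ["NT engineering consultancy promotes fracking"],
    "account_2": [],
    "account_3": [],
}
hits = {acc: [ad for ad in ads if ad_matches_topics(acc, ad)]
        for acc, ads in observed_ads.items()}
# hits -> no topic-related ads for any account
```

A single keyword hit wouldn’t prove listening, of course — but a week of empty `hits` is at least consistent with nobody eavesdropping.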
The only non-news item in my newsfeeds came on Saturday, when one of the three accounts (the one that should be seeing KFC and junk-food ads) prompted me to smash the like button on a Northern Territory engineering consultancy using Buzz Lightyear from Toy Story to promote fracking (???).
So, after a week with a trio of (mostly) blank-slate accounts exposed only to a specific collection of spoken words, Facebook isn’t serving ads based on those words. I think this is another sliver of evidence to add to the ‘they’re not listening’ conclusion, but it’s not conclusive:
- Maybe there’s a minimum amount of information required from a variety of sources before *any* ads are served — so the next stage will be filling out these accounts, while keeping at the topic words
- Maybe they don’t do it for every account, only for accounts with certain features, or perhaps not for anyone in Australia
- Maybe seven days isn’t enough (but people did report stories after single instances)
- Kim suggests my actual phone heard me discussing my plans with her and warned its human owners of my dastardly plans….
Controlling for the weaknesses of this admittedly clumsy test is pretty tricky, but I’m going to like a few more pages, try to add a few more friends, and see if I can detect a signal in the noise that is the Facebook news feed.
It’s tempting to put public theories about microphone activation down to cognitive bias and illusion, and there’s little doubt that these play a part in the stories flooding social media (and Reply All’s mentions).
Facebook PR guy Adam Isserlis blames the ‘Frequency Illusion’ (aka the ‘Baader-Meinhof phenomenon’), where selective attention means we start noticing things we’ve recently become aware of (your friend just bought a red Mazda, and now you seem to see them everywhere you go…).
There’s a bit of good old confirmation bias too, I’d posit: we seize on things that confirm our theory, but totally ignore things that counter it. We might be served 2,000 ads on Facebook unrelated to any conversation, but the one that coincides with a chat sticks in our minds.
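A quick back-of-the-envelope shows why the occasional creepy coincidence is exactly what chance predicts. The 2,000-ad figure is the one above; the one-in-a-thousand odds are a number I’ve invented purely for illustration:

```python
# Back-of-the-envelope: how often should an ad echo a recent conversation
# purely by chance? The probability here is invented for illustration.
ads_seen = 2000          # ads served over some period
p_chance_match = 0.001   # invented: odds any one ad happens to echo a chat

# Expected number of chance "creepy" coincidences.
expected_coincidences = ads_seen * p_chance_match   # 2.0

# Probability of seeing at least one such coincidence.
p_at_least_one = 1 - (1 - p_chance_match) ** ads_seen   # about 0.86
```

Even at one-in-a-thousand odds, a couple of spooky hits are expected — and those are the two ads we remember, while the other ~1,998 vanish without a trace.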
What’s incredibly clear here is that even though Facebook may not be actively using your microphone to serve ads, they’re still largely responsible for the sentiment that drives these theories.
Specifically, Facebook’s poor communication and shortfalls in transparency around their advertising magic sauce drive us to fill the gaps with our own theories — theories that are far simpler than the complex, incomprehensible reality of Facebook’s shockingly clever data snaffling.
I felt that exact jolt of weird, gross surprise when a totally new account, created in an incognito browser window, suggested my real account and my closest friends within seconds. Both Facebook and Twitter are toying with better transparency around advertising, and there’s already some information there if you’re compelled to dig. But these explanations need to be more detailed, more honest and significantly more prominent if they’re going to stave off the creepiness that’s starting to dominate reactions to their services.
Machines that know us because their creators use secretive techniques rightly make us nervous, and when we’re nervous, we seek solace in solid, strongly held beliefs. This is why Vogt failed to convince any of Reply All’s callers of an alternative explanation. Once people are alienated and angry, they sink deeper into a theory.
The suspicions the community holds about these opaque systems may not always be accurate, but the instincts that drive them are spot on. We need to understand and judge the ethical nuances of the instructions handed to computers. “Sometimes data behaves unethically”, wrote Antonio Garcia-Martinez, a former member of Facebook’s advertising team. Yep. It’s true — ProPublica found you could target “Jew Haters” on Facebook:
“Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”
It seems there are many good reasons to figure out ways to better test and understand this mysterious machine that wants to quantify our desires with machine accuracy.
I’m going to keep at my test and plug in some new variables, but I suspect I’ll keep getting negative results. The important thing, friends, is not the veracity of the theory, but the sentiment that drove people to cling so strongly to it. This is one of many cases of large-scale community backlash caused by a lack of algorithmic transparency, and a weird disdain for the humans behind the data points these companies are soaking up.