There’s a flip-phone in this patent filing.

Is Facebook Licensing this Intrusive Google Patent?

The Urban Legend That Won’t Die Until It’s True

Michelle Shevin
Adventures in Consumer Technology
7 min read · May 20, 2017


Ever get an extremely niche ad on Facebook that hits way too close to home? A content recommendation that seems ripped from your own words — not those typed in a search bar, but ones that came out of your mouth. Perhaps a travel site with an auto-generated headline about the random little town in Oregon your friend just mentioned on the phone. Or an ad for a urinal, so out of place it has you scratching your head, until you remember you complained out loud an hour ago that your bidet was broken.

Three friends this month (one repeatedly) have insisted Facebook is serving them content and advertising based on conversations it overhears via their phone mic.

Ever heard of McMinnville, OR? My friend Ruthie has, and so has Facebook.

Of course, Facebook has flat-out denied this persistent Reddit conspiracy theory more than once, claiming that its microphone usage — for those who enable it — is limited to 15-second bursts during status updates in order to identify background music or TV content.

In 2012, while working for a pre-trend sensing service, I took note of a Google patent (filed in 2008) to use “ambient” environmental data — including background noise — to serve advertising. The relatively broad patent claims that sounds (words, music, etc.) picked up during active microphone use could become fodder for personalized recommendations and tailored advertising.

Hypothetically, would Facebook — famous for initially requesting extensive microphone permissions through the Messenger app — license that patent? Or would it quietly partner with Google, taking a percentage of the ad revenue generated from Google’s use of Facebook as an advertising platform? After all, sponsored Google results don’t perform as well as sponsored Facebook content, and cross-promotional content built on Google IP could be a win-win business model.

In 2012, Google brushed off privacy concerns related to this patent by explaining that they often file for IP on employees’ ideas with no current intention to develop the tech. Subtext being: we want to own the capability, whether or not it is currently technically feasible or ethically viable.

But here’s the rub: the common consumer belief that Google or Facebook would indeed go to this length to serve relevant ads could, over time, normalize that (reduced) level of expected privacy.

The more people believe this is something Facebook would do, the less surprised they might be to see expanded permissions requested in an updated Terms of Service agreement, either in Messenger or another application. In Facebook’s inevitable VR play, for example, ToS would naturally require broad microphone permissions and likely use collected sensor data to “improve the user experience” (read: deliver personalized content). By then, will anyone bat an eyelash at the idea?

Spun as intrusions of privacy, the Google patent and the alleged Facebook ad targeting are downright creepy. But cheeky brand behavior is already widely celebrated, especially in response to consumer activity. Many people already have their debit card linked to Facebook, and in a near future when drones can deliver goods on-demand, social platforms could plausibly serve as a sort of wish-granting genie in a bottle.

So while common wisdom holds that tech companies should steer clear of the uncanny valley, it’s actually a strategic advantage for Facebook if its users believe it has permeated every aspect of their lives. Provided they don’t get creeped out enough to deactivate their profiles, this slow slide into “well, of course Facebook knows what I need” positions the service for a lucrative and influential future.

Our Algorithmically Intermediated, Immersive Advertising Future

Facebook is far from the only tech company pushing the boundaries of consumer privacy and comfort to optimize ad delivery. Personalized content and ad targeting based on voice is currently undergoing a major market test, led by Amazon’s Alexa Voice Service and the Google Assistant. The distinction between serving up content based on voice commands versus overheard conversations is already a fine line — and shrinking.

For years now, the Microsoft Xbox One has been always watching and always listening. Play Halo at your friend’s house once, and this disembodied little black box will recognize you by face and greet you by name as soon as you walk in the room — forever. How long until our phones and social networks capitalize on the inevitable desensitization that occurs when we believe all of our devices are always spying on us?

The revenue potential is undeniable, so it’s only natural that retailers and advertisers will seize on related capabilities. And as is often the case with new technology, regulations won’t be able to keep up. By drawing semantic distinctions between things like microphone “access” and “listening”/“recording,” sensor data could potentially be analyzed without running afoul of privacy laws. And proponents of emerging techniques like differential privacy could claim they effectively resolve these concerns, semantics aside. (Update: as of 2023, a focus on “privacy enhancing technologies” that narrowly aim to protect identity is indeed threatening to distract from the complex range of rights and values embedded in the concept of privacy.)
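Since “differential privacy” gets invoked as a cure-all, it’s worth seeing how narrow the actual guarantee is. The technique answers aggregate questions about a dataset while adding calibrated noise, so no single person’s record can be confidently inferred from the answer. Here is a minimal Python sketch of the classic Laplace mechanism applied to a count query; the data and names are hypothetical, and real deployments are considerably more elaborate:

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.1):
    """Differentially private count of records matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy. Smaller epsilon means more noise and
    stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical: how many ambient-audio transcripts mention a brand?
transcripts = ["saw a cinnabon kiosk", "nice weather", "cinnabon smells great"]
print(dp_count(transcripts, lambda t: "cinnabon" in t))  # 2, plus noise
```

Note what the guarantee covers: it protects an individual’s presence in the dataset, not the question of whether ambient audio should be transcribed and queried in the first place. That gap is exactly the distraction the 2023 update above describes.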

But “privacy” itself — already a vague term — often stands in as a proxy for more intangible concerns like insecurity, stigmatization, and exposure. New thinking and new laws will have to account for these vulnerabilities with more specificity, particularly as new capabilities push the sensory and categorical boundaries of what we would call “privacy intrusions.”

Earlier this month, a leaked document showed two Facebook executives had planned to monetize the mood states of Australian teenagers, using sentiment analysis to target timely ads to teens whose posts indicated they felt “useless,” “anxious,” and “stupid,” among other moments of vulnerability. It’s not the company’s first brush with research ethics, and it lays bare an exploitative corporate mindset in which you ask for forgiveness, not for permission.
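If that sounds like it would require sophisticated technology, it mostly doesn’t. Here is a deliberately crude Python sketch of lexicon-based mood flagging, just to show how little machinery such targeting requires; the word list, function names, and ad category are all hypothetical, not Facebook’s actual system:

```python
# Hypothetical lexicon of distress language; a real system would use a
# trained sentiment model, but the pipeline shape is the same.
VULNERABLE_TERMS = {"useless", "anxious", "stupid", "worthless", "overwhelmed"}

def flag_vulnerable(post):
    """Return True if a post contains language suggesting vulnerability."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & VULNERABLE_TERMS)

def queue_timely_ads(posts):
    """Queue an ad category for each recent post that was flagged."""
    return ["confidence-boosting product" for p in posts if flag_vulnerable(p)]

print(queue_timely_ads(["I feel so useless today", "great weather!"]))
# -> ['confidence-boosting product']
```

Real sentiment models are far more sophisticated than a word list, but the distance from “detected vulnerability” to “queued ad” is exactly this short.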

Beyond the particular privacy shock of targeting children, there is an impressive and powerful simplicity to this type of mood targeting. Emerging techniques could bring near-clinical precision to mood analysis, as Apple ResearchKit studies are beginning to demonstrate. Such capabilities hold exciting potential for personalizing learning and health interventions, but by now it should come as no surprise that when a private company collects the information, it is first and foremost for sale. This data brokerage forms the backbone of a new economy, and its transactions will get just about as weird as we’ll let them.

This week, Google announced Google Lens, a search interface accessed through the camera. This UI reboot of reverse image search builds on computer vision and deep learning algorithms to serve relevant content based on visual information. Google aims to use this new search interface to deliver not just relevant images but also contextualized information. And wherever services are innovating around content delivery, it’s a safe bet they’re aiming to personalize advertising as well.

Essentially, it’s only a matter of time before we become convinced we’re suddenly seeing ads for Cinnabon because we just walked by the airport kiosk with Instagram open to the camera interface. Consciously, all we were aware of was the smell of a delicious breakfast pastry, but for days afterward we’ll be bombarded by the ads every time we go online.

In a way, the very fantasy paves the way for a future in which our every utterance, feeling, and focal point — perhaps someday even our thoughts — becomes the purview of myriad recommendation engines.

This insidious form of surveillance capitalism is connected to a larger co-optation of the means of value creation. The old adage that “if the service is free, you’re the product” only begins to hint at what’s at stake. The aim of this new immersive, personalized, algorithmically intermediated advertising is not just to influence consumer purchasing but to design human behavior. Jaron Lanier, a leading thinker on these implications, has warned about the long-term economic consequences, as well as the persistent societal and corporate fantasies that shape the types of futures we will put up with occupying.

In the near term, readable privacy policies, sensor ethics, and independent ethical review thus become increasingly important, not just for creep-factor and PR reasons but also because of the long-term trajectory such precedents set in motion. The ethical demands we make regarding voice technology today lay the groundwork for what we will expect from companies brokering our data and activities in virtual reality and beyond. It bears repeating that the information consumers freely give away in exchange for services trains the very algorithms that generate value, concentrating more wealth among the few corporations big enough to amass a controlling stake in data. This is a powerful consumer position, but that power is decentralized, requiring broad education and new forms of digital organizing. We are barreling toward new immersive realities, and we must do so with our eyes open.

Convinced you’ve been served content based on something your device overheard? Tweet me @michebox or email me at michelle.shevin@gmail.com.

The views I’ve expressed are my own and do not necessarily reflect the views of my employer.
