Big Tech Regulators Are Missing the Point
For people familiar with how social media platforms and companies operate, watching government regulatory sessions with Big Tech has been a tragic saga. For many young people, this began with U.S. lawmakers’ questioning in Congressional hearings: sessions that revealed how little, frankly, elder legislators understood about social media. For those of us who study modern technology and the way it has mutated capitalism into an entirely new beast, however, the frustration continues and intensifies as lawyers, government officials, and everyone else engaged in mainstream regulatory discourse keep missing the point.
This is primarily because regulators do not seem to understand the actual imperatives guiding Big Tech. While they aim at antitrust, they tip their hand in outlets like the New York Times, admitting that the case is harder to make than they expected. Because they are not deeply versed in the paradigms that guide Big Tech, they fail to realize why their case is so hard. Companies like Google, Facebook, Apple, Amazon, and Microsoft have not been motivated by user products for over a decade; they are focused on data and prediction products. The disconnect between that older understanding of how capitalism works and how Shoshana Zuboff’s aptly named “surveillance capitalism” works today is ruining any chance of actually reining in Big Tech. A deeper understanding of surveillance capitalism and its imperatives is urgently needed to truly reveal the danger Big Tech poses to all of us, and to move toward substantive regulation.
Surveillance Capitalism?
Zuboff’s paradigm-shifting work, The Age of Surveillance Capitalism, is a prerequisite read for anybody who dares to challenge Big Tech’s hegemonic influence. I’ll detail a few key concepts that motivate the regulatory arguments against Big Tech and best show why the current antitrust cases will likely fall embarrassingly flat.
First is the idea that companies like Google, Facebook, and Amazon are not in the business of making their user products better. Zuboff calls the old cycle of product improvement the “behavioral reinvestment cycle”: in the early days, Google might have used data on how its search bar was being used to improve the search bar itself, perhaps by adding a new feature like search suggestions. This cycle closely mirrors the capital reinvestment cycle of industrial capitalism, where we can imagine a company like Ford Motor reinvesting its profits back into its production lines or the cars themselves.
This is not how Big Tech companies operate, and this point could not be more important. Companies playing the surveillance capitalist game are not interested in changing their products to better serve users. The actual products these companies sell are predictions. That is why Google is in the advertising business: it predicts what you are feeling and thinking, and what you may feel and think next, in order to serve you a perfectly timed and tailored advertisement. You are not the customer for Big Tech companies. You are the raw material: you generate behavioral data that they analyze, and they sell predictions to their actual customers, advertisers.
All of this points to the true incentives Big Tech follows, which Zuboff calls the extraction imperative. Their prediction products improve as they harvest more behavioral data from you. There is therefore a strong incentive to extract more data from you, i.e. to make you use their platforms more, and in different ways. There is also an incentive under the extraction imperative simply to collect as much data as possible, and acquiring diverse companies facilitates this. Facebook is not after Instagram or WhatsApp to improve their user interfaces or messaging capabilities; it is after these companies to acquire more of your behavioral data to feed into its machine learning prediction models.
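For the technically inclined, the logic of the extraction imperative can be seen in miniature. Below is a deliberately toy sketch in Python; everything in it is synthetic and invented for illustration (the “behavioral signals,” the click outcome, the model choice), and it resembles no company’s actual pipeline. It shows only the general machine learning property the imperative exploits: the same model, fed more behavioral data, makes better predictions.

```python
# Toy sketch: prediction quality improves with the volume of behavioral data.
# Entirely synthetic; illustrates the extraction imperative, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "behavioral signals" (imagine scroll time, likes, session length)
# and a binary outcome to predict (will this user click this ad?).
n_total = 20_000
X = rng.normal(size=(n_total, 10))
true_weights = rng.normal(size=10)
y = (X @ true_weights + rng.normal(scale=2.0, size=n_total)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train the same model on ever-larger slices of the data:
# more extraction, better prediction.
for n_users in [100, 1_000, 10_000]:
    model = LogisticRegression().fit(X_train[:n_users], y_train[:n_users])
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{n_users:>6} users of data -> {accuracy:.1%} click-prediction accuracy")
```

The precise numbers do not matter; the steady climb in accuracy is the point, and it is the same climb that makes every additional stream of your behavior commercially valuable.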
Under these incentives, companies like Facebook have spent years biding their time and taking flak from privacy scandal after privacy scandal, because their entire business relies on gathering more data from you. For example, in 2014, Facebook faced intense privacy backlash after acquiring WhatsApp and vowed to keep the two apps’ data in separate silos. Almost seven years later, however, in today’s New York Times article on the antitrust cases, it is taken as common sense that the apps are being integrated. The article states,
“In September, 18 months after the initial announcement that the apps would work together, Facebook unveiled the integration of Instagram and its Messenger services. The company anticipates that it may take even longer to complete the technical work for stitching together WhatsApp with its other apps.”
Zuboff points out that this is part of a pattern Big Tech companies have used since the early 2000s: do something that shocks us and raises privacy concerns, apologize and promise to protect privacy, then wait until everyone forgets and do it anyway. She calls this the “dispossession cycle,” and it is crucial knowledge for any regulator trying to grasp how these companies operate.
How Regulators Should Proceed
In light of these ideas, which drastically shift how Big Tech is understood, regulators need to shift their strategies commensurately. The narratives that Facebook and Google have become expert at blasting out in blog posts will trump regulators’ narratives unless regulators, and the public, truly understand what these companies are after.
Instead of trying to argue that product-based competition has been harmed by Big Tech snapping up would-be competitors like Instagram or WhatsApp, a better argument must emphasize that competition in prediction products is monopolized by acquiring more sources of data. I should note strongly that I do not endorse in the slightest the idea that a market in prediction products is even legitimate, nor do I wish to imply that it does not infringe heavily on human rights. However, using the language of surveillance capitalism will help regulators take the first step in the argument against Big Tech, and will lead to even stronger critiques: that these prediction products, built on enormous and rich streams of behavioral data, infringe on autonomy because they arguably “know” you so well they can manipulate you. The anti-competitive argument follows easily once we recognize that the competition lies in competing data extraction and predictions, not competing user interfaces or product features.
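The same toy setup from earlier can illustrate why acquisitions matter for this kind of competition. In the hypothetical sketch below (again entirely synthetic, with invented “photo” and “messaging” signals standing in for two platforms’ data streams), a model trained on the combined data outperforms a model trained on either source alone. That advantage in prediction, not user-interface synergy, is what an acquisition buys.

```python
# Toy sketch: predictions built on two combined data sources beat predictions
# built on either source alone. Synthetic data only; not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users = 10_000

# One platform observes "photo" behavior, another "messaging" behavior;
# the outcome to be predicted depends on both.
photo_signals = rng.normal(size=(n_users, 5))
messaging_signals = rng.normal(size=(n_users, 5))
y = (photo_signals.sum(axis=1) + messaging_signals.sum(axis=1)
     + rng.normal(scale=2.0, size=n_users)) > 0

for label, X in [
    ("photo data only", photo_signals),
    ("messaging data only", messaging_signals),
    ("combined data", np.hstack([photo_signals, messaging_signals])),
]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=1
    )
    model = LogisticRegression().fit(X_train, y_train)
    print(f"{label:<20} -> {accuracy_score(y_test, model.predict(X_test)):.1%}")
```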
An understanding of surveillance capitalism, and the extraction and prediction imperatives, also counters the typical narratives woven by companies like Facebook and Google. In the same New York Times article, Facebook executives are quoted saying things like,
“These transactions were intended to provide better products for the people who use them, and they unquestionably did,” Jennifer Newstead, Facebook’s general counsel…
… Mr. Zuckerberg said Facebook was fighting a far larger ecosystem of competitors that went beyond social networking, including “Google, Twitter, Snapchat, iMessage, TikTok, YouTube and more consumer apps, to many others in advertising.” That is because Facebook and its other apps are used for communication and entertainment, such as streaming video and gaming.
These narratives make it seem as if Big Tech companies are motivated by the old-school “behavioral reinvestment cycle” described above. They make people think their apps are communication, entertainment, or gaming tools. But that is only what they are on the surface: they are actually tools for making behavioral prediction products for advertisers. The line that these companies “make better products for users” is used over and over again. It is a diversionary tactic and should be recognized as such. Regulators need to be crystal clear in their counter-narratives and call out these diversions. Regulators’ moves are often the only exposure the broader public gets to these issues, so regulators must do better at exposing Big Tech’s charades to the general population.
Finally, regulators must also understand that arguments for privacy are not just about Big Tech companies knowing where you live or who your friends are. The true invasion of privacy is that, through prediction, they know how you feel, where you may be going, even what you may think about soon. Our thoughts and feelings are no longer private, and they are what is being fed to advertisers to make you more likely to view or click their ads. In the same way that democracy is often tied to freedom of speech, we need to deeply understand the implications of consolidated systems of power holding this knowledge, so that we can move to protect freedom of behavior and freedom of thought. These ideas deserve a longer treatment, but it should suffice to say that they must be the crux of explaining why Big Tech is so dangerous.
Urgency Is Needed, with Caution
These ideas only scratch the surface of how the understanding of Big Tech companies needs to shift radically in order to motivate regulatory action and rhetoric that cut at the core of the actual problems. Without such an understanding, regulators seem doomed to frustration, to losing the public’s trust through failed action, and to easy counter-arguments from Big Tech.
Regulation as an ideology has decayed in the U.S. since the neoliberal turn under Reagan, and is now a partisan issue. Failed regulatory action will only stymie momentum toward the understanding that a capitalist system can only function if it is regulated. We thus need to speak with urgency and spread understanding of surveillance capitalism so that everyone grasps what is at stake if Big Tech is left unchecked. We must also be cautious, though, and note that the problem hinges on a fairly massive ideological shift; it would likely take something along the lines of a social movement to meet the challenge, and that would take time.
In the meantime, those who understand how surveillance capitalism operates must raise their voices and share these ideas as widely as possible. The power and reach of Big Tech’s behavioral extraction and manipulation will only increase with time.