Too Big To Like: Why Do We Trust Big Tech?

Big Tech companies like Amazon, Apple, Facebook, Google, and Microsoft exhibit a new stage of corporate power. These organizations are so large that our views on them are inconsequential; they have effectively become too big to like.

Aarshin Karande
The Startup


Originally published by The Republic on August 20, 2019.

‘I don’t know why. They “trust me.” Dumb fucks.’ You may be surprised to learn that this message was written by Mark Zuckerberg during his Harvard years, speaking of the classmates who were Facebook’s first users. Zuckerberg would go on to become the world’s youngest self-made billionaire. Surprise aside, the question implicit in his remark remains compelling and critical today: Why do we trust him? And do we still?

Big Tech companies like Amazon, Apple, Facebook, Google, and Microsoft exhibit a new stage of corporate power. These organizations are so large they traverse borders, so wealthy they exert ineffable influence over the global polity, and so vital they have constituted (and appropriated) an entirely new kind of ‘social.’ Big Tech is eager to capture, retain, and shape people and their attention. Today’s casual citizen-consumers, in search of dank memes and lit content, find themselves in the midst of cyberwarfare, data colonialism, and surveillance-based behavioural grooming.

Big Data supremacy, prompted by neoliberal globalization, has spurred these vast, fundamental, and incongruous changes to the Internet. Today, the fundamental structures of the Internet, Internet security, and critical infrastructure are increasingly under the control of these companies, their interests, and their vulnerabilities.

With Big Tech towering over Internet users in all aspects and forms of their digital lives, we must ask: what does ‘choice’ really mean on the Internet nowadays? Are meaningful choices over our digital lives possible outside the terms and conditions of Big Tech companies? Are these companies beyond the reach of society and its corrective mechanisms?

Or are we in a situation where they are too big to ‘like’: where whether we have chosen them or not has become inconsequential? Have we truly become sitting ducks, or as Mark Zuckerberg said, ‘dumb fucks’?

Fabricating ‘Populism’ as ‘Democracy’

As the Internet became increasingly domesticated alongside home computing in the 1980s and 1990s, those pioneering this shift recognized its political and cultural potential. Technologists embraced computing as a libertarian opportunity to reclaim freedom and deny the power of the rule of law as wielded by the rulers of law. Since then, mass technology has deviated greatly from this emancipatory vision.

At Davos in 1996, Internet activist John Perry Barlow issued ‘A Declaration of the Independence of Cyberspace.’ In it, he said, ‘Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind… On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.’ Barlow’s declaration articulated a hope common among computing activists of the time: that the Internet should allow people to be free and to express themselves without fear of oppression, censorship, or incrimination.

A similar sentiment was expressed, though with a more fundamentalist fervour, by hacker Lloyd Blankenship in what has come to be known as ‘The Hacker Manifesto.’ In it, Blankenship describes hacking as ‘a refuge from the day-to-day incompetencies’ of a world where people like him are ‘dominated by sadists, or ignored by the apathetic’ and the Internet is ‘run by profiteering gluttons.’ Speaking to society, he decries how ‘you build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it’s for our own good, yet [hackers are] the criminals.’ He concludes the provocative essay by asserting that a hacker’s only crime ‘is that of curiosity.’

Barlow and Blankenship vividly illustrate a vision of the Internet as a sanctuary for dissidents of plutocratic and kleptocratic greed, free to pursue their ideal ends by their ideal means. Why couldn’t the Internet be an alternative reality for those exhausted by the casual oppressions of common society?

These sentiments are revealing because they imply a view of technology as politically and culturally agnostic. They articulate the potential for this agnosticism to be harnessed for particular aims, whether libertarian, humanist, or anarchist. The Internet could be unconditionally inclusive of and for all people, so long as you could code a place for yourself.

The humanitarian pathos of the Internet has been tested in the past several years. Under the guise of populism, illiberal and xenophobic movements have sown civil impasse, nativist sympathies, and violent transgressions across the world. Researchers like Mojca Pajnik and Birgit Sauer have noticed how online dialectics comparing populism and democracy have grown in recent years.

In their 2018 book, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, Yochai Benkler, Robert Faris, and Hal Roberts present research indicating how countless actors used or manipulated algorithms on social media platforms to spread disinformation (i.e. ‘fake news’) and sway public opinion. Most interestingly, the authors found that the existing polarization of the American news media was exploited to spread disinformation more easily and effectively.

The inclusive Internet is at an impasse with an exclusive one. Internet users have yet to reckon with what this mediated gridlock means for their ability to hold platforms accountable, and with the extent to which Big Tech platforms implicate the broader structure of cyberspace and its security.

The pro-innovation mantra espoused by the West has eroded the agnosticism of the Internet. What, then, would an illiberal Internet mean for liberal democracies? Or, as Carole Cadwalladr of The Guardian considered in her TED Talk: is it actually possible to have a free and fair election ever again when technology disrupts a country’s electoral laws?

Colonization via Personalization

Social media platforms have proliferated under the pretence of community and personalization. Leveraging algorithmic processing and analysis, these platforms have grown into mass communication systems so deeply embedded in the everyday lives of users that exposure to disinformation may be like The Invisible Gorilla: present but unnoticed.

Importantly, the currency of Big Tech is Big Data. Its trade has irrevocably changed the physics of the global economy and challenged conventions surrounding power and wealth. Diane Coyle notes how the failure of fields like economics to adapt to big data jeopardizes their relevance. Conventional wisdom (which is rarely conventional or wise) once held that the Internet demonstrated freedom at its best; now it is reluctant to acknowledge how such freedoms have been exploited for monopolistic ends.

Today’s big data practices reorder society through social data and data relations, turning everyday life into a data stream. Nick Couldry and Ulises Mejias describe this dynamic as ‘data colonialism.’ ‘Colonialism’ is a curious and layered term to invoke here: if social media platforms practice ‘data colonialism,’ then what is being colonized? Privacy, certainly, as norms surrounding self-disclosure, self-censorship, and data handling have shifted over the years. But what else is data colonizing?

Paying Attention

Couldry and Jannis Kallinikos ask us to consider what about social media is really ‘social.’ For many, social media have become an irrevocable part of everyday life (as we have made them to be); what was taken to be a part of our lives has quickly become life itself. We take social media to be integral to, and apparently representative of, our everyday lives. But do social media reasonably and accurately represent the meanings and contexts of the content posted on them? Are platforms even interested in reflecting the real, or is ‘real enough’ good enough?

Though many researchers remain wary of big data practices, one can also find voices in critical media studies that covet the richness of big data. Ruth Leys notes how many researchers embrace big data without raising fundamental questions about the methodological and philosophical complexities of assuming representativity in big data sets.

Technologists will decry alarmists who claim that human behaviour is being manipulated for the profit of Big Tech. But we must recognize that big data presents a wholly new way for corporations to take user-generated content and observed behaviour and repurpose them so that advertisers can better predict, shape, and guide users’ attention. Advertisers pay Big Tech handsomely to gauge how attention can be moved at will.

As users shape their platform experiences to better reflect their personal beliefs, choices, and ideas, algorithms and advertisers become more closely acquainted with why users do what they do and what they might do next. But why predict such behaviour at all (and pay money for such information) if the intention is not ultimately strategic, exploitative interference in our lives?

Concerned about these data practices and their growing uses, Karen Yeung studied how platform algorithms are designed to direct users’ attention towards desirable behavioural outcomes: the ‘hypernudge.’ By delineating the precise nudging tactics social media platforms deploy through algorithms, Yeung shows how such algorithms are, in effect, inherently manipulative.

Where will these innovative detours take our everyday lives? For Couldry and Mejias, these practices characterize a ‘new stage of capitalism whose outlines we only glimpse: the capitalization of life without limit.’

Using feedback loops, platforms like Facebook curate experiences that shield users from content that is interfering, unpredictable, or unresponsive. Simply put, the algorithms governing social media sites thrive on black-boxing users: keeping them unaware of (and thus unable to scrutinize) platform practices. Exposing the inner workings of algorithms might ruin the verisimilitude of these platforms. Concerned about such murky terms and conditions, Ian Bogost scrutinizes the values underlying sites like Facebook and questions what responsibility really means for such companies.

Platforms for Monopolies

What is ‘democracy’? Is democracy a physical thing, or is it a theory: an image that we tend to and strive for? By knowing what we mean when invoking ‘democracy,’ we can better identify and confront that which it is not (and should not be). The aforementioned network monopolies, attention manipulation, and political propaganda compel us to ask: what is democratic about the Internet?

As Western republics have wrestled with an increasingly diverse and competitive Internet over the past decade, participation in the digital revolution has demanded development in ways that are not explicitly liberal or democratic. ‘Democracy’ has been out-prioritized by ‘capitalism.’ The beneficiaries of these shifting priorities have been Big Tech companies. Consequently, Big Tech has become a key dimension of global inequality.

The transnational character of social platform companies instigates unchecked influence over various countries and peoples. Zeynep Tufekci has detailed how activists in countries like Turkey struggle to navigate Facebook’s privacy and content policies as they challenge government actions that curtail civil rights and freedoms. Tarleton Gillespie addresses how even the term ‘platform’ suggests a populist appeal, despite practices that not only diverge from but altogether disregard the idea of ‘giving platform to’ users.

The globalizing ambitions of Big Tech companies have been excused as a by-product of an increasingly competitive, globalized economy. But recognizing global dominance as an explicit aim of eager tech monopolists, what Martin Moore and Damian Tambini articulate as ‘digital dominance,’ would suggest a graver situation for global democracy. In the UK, researchers have emphasized the need for coordinated policy intervention to effectively confront the power of Big Tech on issues like tech mergers, online harms, childhood development, and data rights.

The extent to which these companies have monopolized Internet-based services is historically unprecedented. There is very little competition involved in these businesses and, Amanda Lotz argues, critical and regulatory attention should be placed on markets and behaviour. This scrutiny is, to say the least, challenging to pursue in a post-9/11 surveillance settlement where the CIA uses Amazon Web Services (AWS) to supply its cloud-computing needs.

How should Facebook be regulated, then: as a newspaper, a bank, or a tech business? In which countries, and to what extent? These questions pose a challenge to policymakers still reeling from the regulatory failures of the 2008 financial crisis.

Considering the global nature of Big Tech companies, their numerous location-based policies, and how various features are tailored to specific countries and their governments, we must ask: to what extent do companies like Facebook, Twitter, Amazon, Google, and Apple have a fiduciary responsibility to values like fairness and freedom? Do these values relate to, or shape, their business models in any measurable, observable way?

Falsifying Belief as Behaviour

Recently, The Economist decried how ‘citizens are so consumed by pleasure-seeking that they beggar the economy; so hostile to authority that they ignore the advice of experts; and so committed to liberty that they lose any common purpose.’ A rich question is being raised here: how do we account for the role of users in the midst of what appears to be Big Tech supremacy?

More research is needed to appraise how Big Tech companies actually envision and anatomize users and their agencies. The history of media studies details a gradual paradigm shift in how people using media are regarded: from ‘mass media audiences,’ a systemic view of groups exploiting media for uses and gratifications, to ‘users,’ an individuated view of audiences whose particular engagements with media shape who they are.

This change in focus reflects a shift in media psychology towards individuals organizing and making sense of their lives through media, as opposed to people merely gratifying themselves through them. For early media scholars, audiences were rational, expressing explicit aims and gains. Now, audiences are affective, demonstrating idiosyncrasies and implicit conflicts. The scholarly view of audiences has diverged over time as the complexity of media users has come to be appreciated.

Last July and August at the University of Oxford, I presented research advocating the need to integrate these complementary perspectives. An essentialist, compartmentalized view of audiences, as either rational or irrational, denies media research a nuanced view of the many ways media are engaged within an individual’s life.

By bridging the gap between the affective and cognitive models, we open ourselves to audiences’ ‘consciousness’: the broader totality of beliefs and behaviours surrounding media use. An inadequate account of user psychology risks an inadequate account of media power. An integrated view is critical for appraising the extent of Big Tech’s power and that of their users.

The conflation of belief (narratives, lived experiences, personal values) with behaviour (cognition, neuroscience, rationality) oversimplifies our accounts of how people actually engage with technology, e.g. the notion that specific behaviours accurately and totally reflect people’s inner lives and held beliefs. Many Big Tech companies have adopted this rationalist-reductionist view of audience psychology, as indicated in ideas surrounding user interface and user experience (UI/UX) research.

Such companies aim to produce a generalized platform experience that appeals to everyone yet is algorithmically adept enough to capture the specific metrics of a user’s life-world valuable to advertisers. Attention has come to stand in for meaningfulness, and clicks for endorsement. Complex phenomena like deliberation, resistance, and conflict lie beyond the considerations of this frame.

This simplified ‘anatomy’ of technology users has been exploited by social media companies to drive business value. If users are more complex, nuanced, and unpredictable than Big Tech presumes, what would that mean for practices aiming to ‘flatten’ individuality? Shoshana Zuboff writes about how this totalizing view inherently dehumanizes users and enables the repurposing of their behaviour into metadata for auctioning off. The curious, sophisticated relationship between belief and behaviour, irrationality and rationality, needs to be taken more seriously and investigated thoroughly.

The monopolistic aims of Big Tech are predicated on this reductive account of users. Understanding people to be simple allows one to view more of them at once. The ‘bigness’ of platforms thrives on the ‘smallness’ of individual users.

As Diane Coyle said, ‘A digital platform is either large or dead.’ Writing about the economics of big data, she notes how ‘the standard economic framework of individual choices made independently of one another, with no externalities, and monetary exchange for the transfer of private property, offers no help in answering these questions’ about technological change and power. If we cannot see users fully, we will never know how they are really changing with technology.

Surveillance as Incarceration

If today’s globalization is predicated on ‘surveillance capitalism’ (businesses whose market value is defined by collecting big data on users’ behaviour and selling this metadata to advertisers seeking to draw their attention), then what does it mean to be ‘sorted’ (i.e. limited to how an algorithm identifies you and organizes your data) in the pursuit of ‘accessing’ the world through digital platforms? We must rearticulate today’s globalism under the sociotechnical terms and conditions it is predicated on to recognize how it affords and limits specific human interests.

Anand Giridharadas writes about how the altruistic ethos of technology leaders misleads the public about how today’s neoliberal globalization-via-technology concentrates wealth and power in uncharted new ways. Policies reinforcing the status of Big Tech and their relationships with lawmakers also contribute to this widening inequality.

Artificial intelligence (AI), a moonshot of the global technology race, will compound these problems and challenge regulators. Unlike with nuclear proliferation, no systematic efforts are being made to curtail the unintended consequences of AI. Moreover, Big Tech platforms increasingly rely on AI and automation to perform, grow, and outcompete.

AI machines rely on encoded values to analyse and learn: a difficult fact for AI ethicists. Geo-technological rivalry over the values underlying AI has increasingly complicated the pursuit of a global ethical framework for AI development (e.g. how the US and China contest fundamental ideas about human rights, civil ideals, and government power as expressed in technology). As much as Internet-based technologies have enabled the traversing of borders, they have also augmented the significance of geopolitical tensions.

The scale of inequality prompted by these projects (the AI race, the surveillance capitalist agenda, and geopolitical supremacy) leaves Internet users that much more at the mercy of Big Tech practices and less able to exercise corrective checks on these players. Exercising such checks on Big Tech is essential if people are to refuse the possibility of being technologically predestined, or engineered, into specific fixed systems and realities.

Too Big to ‘Like’

As the 2008 financial crisis crippled the global economy, academics and journalists preoccupied themselves with how such a massive disaster arrived so unpredictably and with so few realistic safeguards. Later reports and research indicated how financial institutions pursued irresponsible practices aimed at leveraging risky assets to generate greater wealth. Such decisions were made under the assumption that these institutions, like Lehman Brothers, were so big, so essential, and so irreducible to civilization that they would persist beyond any disaster. As Andrew Ross Sorkin echoed in the term coined by Stewart McKinney, they were ‘too big to fail.’

Are we making the same mistake with Big Tech? Are these companies so inseparable from daily life that they have become ‘too big to like,’ here to stay regardless of whether or not we have chosen them, or have any choice beyond them?

We are being confronted by a reality where platforms for everybody could mean platforms for nobody. The monolithic Internet commonly spoken of has been subsumed by private property masquerading as public spaces, monopolists masquerading as altruists, and elites masquerading as the everyman.

The ‘social’ fabric of today’s globalism is in the service of companies beyond which we have no meaningful alternatives to choose. What does a globally connected human civilization at the mercy of these companies entail? What does it mean to choose when so much of our new sociotechnical global lives could remain beyond our choosing? When even reality itself has become but the repurposing of aggregate content, generated by algorithmic values and capitalist interests, meant to provoke our attention and attitudes?

Mark Zuckerberg once described Facebook’s motto as, ‘Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.’ In 2014, the company changed the motto to ‘Move fast with stable infrastructure,’ because ‘by building a stable infrastructure, we allow ourselves to always make sure that we’re moving forward, even if we move a little bit slower up front.’ Mark Zuckerberg and Facebook have made it clear that they are here to stay, as are the problems that come with them. But, if Mark Zuckerberg doesn’t plan to account for what has been broken in the process of ‘moving fast,’ who will?

Aarshin Karande is an Indian-American scholar of technology, beliefs, and policy, who studied at the London School of Economics, Oxford, and the University of Washington Bothell.
